The \mEdhoc{} \mSpec{} \cite{our-analysis-selander-lake-edhoc-00} claims
that \mEdhoc{} satisfies many security properties, but these are imprecisely
expressed and motivated.
%
In particular, there is no coherent adversary model.
%
It is therefore not clear in which context properties should be verified.
%
We resolve this by clearly specifying an adversary model, in which we can verify
properties.
%
\subsection{Adversary Model}\label{sec:threat-model}
We verify \mEdhoc{} in the symbolic Dolev-Yao model with idealized
cryptographic primitives, e.g., encrypted messages can only be
decrypted with the correct key, and no hash collisions exist.
%
The adversary controls the
communication channel, and can interact with an unbounded number of sessions
of the protocol, dropping, injecting and modifying messages at will.
%
In addition to the basic Dolev-Yao model, we also consider two more adversary
capabilities, namely long-term key reveal and ephemeral key reveal.
%
Long-term key reveal models an adversary compromising a party $A$'s
long-term private key \mPriv{A} at time $t$, and we denote this event by
$\mRevLTK^t(A)$.
%
The event $\mRevEph^t(A, k)$ represents that the adversary learns
the ephemeral private key \mPrivE{A} used by party $A$ at time $t$ in a session
establishing key material $k$.
%
These two capabilities model the possibility to store and operate on
long-term keys in a secure module, whereas ephemeral keys
may be stored in a less secure part of a device.
%
This is more granular and realistic than assuming that an adversary has equal
opportunity to access both types of keys.
%
We now define and formalize the security properties we are interested in, and
then describe how we encode them in the \mTamarin{} tool.
%
The adversary model becomes part of the security properties themselves.
%
\subsection{Formalization of Properties}
\label{sec:desired-properties}
We use the \mTamarin{} verification
tool~\cite{DBLP:conf/cav/MeierSCB13} to encode the model and verify properties.
%
This tool uses a fragment of temporal first order logic to reason about
events and knowledge of the parties and of the adversary.
%
For conciseness we use a slightly different syntax than
that used by \mTamarin{}, but which has a direct mapping to \mTamarin{}'s logic.
%
Event types are predicates over global states of system execution.
%
Let $E$ be an event type and let $t$ be a timestamp associated with a point in a
trace.
%
Then $E^{t}(p_i)_{i\in\mathbb{N}}$ denotes an event of type $E$ associated with
a sequence of parameters $(p_i)_{i\in\mathbb{N}}$ at time $t$ in that trace.
%
In general, more than one event may have the same timestamp and hence
timestamps form a quasi order, which we denote by $t_1 \lessdot t_2$ when $t_1$
is before $t_2$ in a trace.
%
We define $\doteq$ analogously.
%
However, two events of the same type cannot have the same timestamp, so
$t_1 \doteq t_2$ implies $E^{t_1} = E^{t_2}$.
%
Two events having the same timestamp does not imply that there is a fork in the
trace, only that the two events happen simultaneously and that their
timestamps compare as equal.
%
This notation corresponds to \mTamarin{}'s use of action facts
$E(p_i)_{i\in\mathbb{N}}@t$.
%
The event $\mK^t(p)$ denotes that the adversary knows a parameter $p$ at
time $t$.
%
Parameters are terms in a term algebra of protocol specific operations and
generic operations, e.g., tuples $\langle\cdot\rangle$.
%
Intuitively, $\mK^t(p)$ evaluates to true when $p$ is in
the closure of the
parameters the adversary observed from interacting with parties using the
protocol, under the Dolev-Yao message deduction operations and
the advanced adversary capabilities up until time $t$.
%
For a more precise definition of knowledge management, we refer to~\cite{DBLP:conf/cav/MeierSCB13}.
%
An example of a formula is
\[
\forall t, k, k'\mLogicDot \mK^{t}(\langle k, k'\rangle)\ \rightarrow\
\mK^{t}(k) \land \mK^{t}(k'),
\]
expressing that if there is a time $t$ when the adversary knows the tuple
$\langle k, k'\rangle$, then the adversary knows both
$k$ and $k'$ at the same point in time.
%
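To illustrate the mapping to \mTamarin{}'s concrete syntax, this formula could
be stated as a lemma roughly as follows (a sketch; \mT{K} is \mTamarin{}'s
adversary-knowledge action, and the lemma name is ours):
\begin{small}
\begin{verbatim}
lemma tupleKnowledge:
  "All k k2 #t. K(<k, k2>)@t ==>
     K(k)@t & K(k2)@t"
\end{verbatim}
\end{small}
%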
An initiator $I$ considers the
protocol run started when it sends a message \mMsgone{} (event type \mIStart)
and the run completed after sending a message \mMsgthree{} (event type
\mIComplete).
%
Similarly, a responder $R$ considers the run started upon receiving
a \mMsgone{} (event type \mRStart), and completed upon receiving a \mMsgthree{}
(event type \mRComplete).
%
%----------------------------------------------------------------------- PFS
\subsubsection{Perfect Forward Secrecy (PFS)}
\label{sec:secrecy}
Informally, PFS captures the idea that session key material remains secret
even if a long-term key leaks in the future.
%
We define PFS for session key material \mSessKey{} as \mPredPfs{} in
Figure~\ref{fig:props}.
%
\begin{figure*}
\begin{align*}
\mPredPfs \triangleq\ & \forall I, R, \mSessKey, t_2, t_3\mLogicDot
\mK^{t_3}(\mSessKey)\ \land\
(\mIComplete^{t_2}(I, R, \mSessKey)\, \lor\, \mRComplete^{t_2}(I, R, \mSessKey))
\rightarrow\\
&(\exists t_1\mLogicDot \mRevLTK^{t_1}(I) \land t_1 \lessdot t_2)
\ \lor\ (\exists t_1\mLogicDot \mRevLTK^{t_1}(R) \land t_1 \lessdot t_2)
\ \lor\ (\exists t_1\mLogicDot \mRevEph^{t_1}(R, \mSessKey))
\ \lor\ (\exists t_1\mLogicDot \mRevEph^{t_1}(I, \mSessKey))
\end{align*}
\begin{align*}
\mPredInjI \triangleq\ &
\forall I, R, \mSessKey, S, t_2\mLogicDot \mIComplete^{t_2}(I, R, \mSessKey, S)
\rightarrow\\
&(\exists t_1\mLogicDot \mRStart^{t_1}(R, \mSessKey, S) \land t_1 \lessdot t_2)
\land (\forall I' R' t_1' \mLogicDot \mIComplete^{t_1'}(I' , R', \mSessKey, S)
\rightarrow t_1' \doteq t_1)
\ \ \lor\ \ (\exists t_1\mLogicDot \mRevLTK^{t_1}(R) \land t_1 \lessdot t_2)
\end{align*}
\begin{align*}
\mPredInjR \triangleq\ &
\forall I, R, \mSessKey, S, t_2\mLogicDot \mRComplete^{t_2}(I, R, \mSessKey, S)
\rightarrow\\
&(\exists t_1\mLogicDot \mIStart^{t_1}(I, R, \mSessKey, S) \land t_1 \lessdot t_2)
\land (\forall I' R' t_1' \mLogicDot \mRComplete^{t_1'}(I' , R', \mSessKey, S)
\rightarrow t_1' \doteq t_1)
\ \ \lor\ \ (\exists t_1\mLogicDot \mRevLTK^{t_1}(I) \land t_1 \lessdot t_2).
\end{align*}
\begin{align*}
\mPredImpI \triangleq\ &
\forall I, R, \mSessKey, S, t_1\mLogicDot \mIComplete^{t_1}(I, R, \mSessKey, S)
\rightarrow\\
&(\forall I', R', S', t_2\mLogicDot \mRComplete^{t_2}(I', R', \mSessKey, S') \rightarrow
(I=I' \land R=R' \land S=S'))\\
&\land (\forall I', R', S', t_1'\mLogicDot
\mIComplete^{t_1'}(I', R', \mSessKey, S') \rightarrow t_1' \doteq t_1)\\
&\ \ \ \lor(\exists t_0\mLogicDot \mRevLTK^{t_0}(R) \land t_0 \lessdot t_1)
\lor(\exists t_0\mLogicDot \mRevEph^{t_0}(R, \mSessKey))
\lor(\exists t_0\mLogicDot \mRevEph^{t_0}(I, \mSessKey)).
\end{align*}
%
\caption{Formalization of security properties and adversary model.}
\label{fig:props}
\end{figure*}
The first parameter $I$ of the \mIComplete{} event represents the
initiator's identity,
and the second, $R$, represents that $I$ believes $R$ to be playing
the responder role.
%
The third parameter, \mSessKey{}, is the established session key material.
%
The parameters of the \mRComplete{} event are defined analogously.
%
Specifically, the first parameter of \mRComplete{} represents the identity of
whom $R$ believes is playing the initiator role.
%
The essence of the definition is that an adversary only knows \mSessKey{} if they
compromised one of the parties' long-term keys before that party completed the
run, or if the adversary compromised any of the ephemeral keys at any time
after a party starts its protocol run.
%
One way the definition slightly differs from the corresponding \mTamarin{} lemma
is that \mTamarin{} does not allow a disjunction on the left-hand side of an
implication in a universally quantified formula.
%
In the lemma, therefore, instead of the disjunction
$\mIComplete^{t_2}(I, R, \mSessKey)\, \lor\, \mRComplete^{t_2}(I, R, \mSessKey)$,
we use a single action parametrized by $I$, $R$, and \mSessKey{} to signify that
\emph{either} party has completed their role.
%
%-------------------------------------------------------------------- InjAgree
\subsubsection{Authentication}
\label{sec:authenticationDef}
We prove two different flavors of authentication, the first being classical
\emph{injective agreement} following Lowe~\cite{DBLP:conf/csfw/Lowe97a}, and
the second being an implicit agreement property.
%
Informally, injective agreement guarantees to an initiator $I$ that whenever
$I$ completes a run ostensibly with a responder $R$,
then $R$ has been engaged in the protocol as a responder,
and this run of $I$ corresponds to a unique run of $R$.
%
In addition, the property guarantees to $I$ that the two parties agree on a set
$S$ of parameters associated with the run, including, in particular, the
session key material \mSessKey{}.
%
However, we will treat \mSessKey{} separately for clarity.
%
On completion, $I$ knows that $R$ has access to the session key material.
%
The corresponding property for $R$ is analogous.
%
Traditionally, the event types used to describe injective agreement are called
\emph{Running} and \emph{Commit}, but to harmonize the presentations of
authentication and PFS in this section, we refer to these event types as
\mIStart{} and \mIComplete{} respectively for the initiator, and
\mRStart{} and \mRComplete{} for the responder.
%
For the initiator role we define injective agreement by
\mPredInjI{} in Figure~\ref{fig:props}.
%
The property captures that for an initiator $I$, either the injective agreement
property as described above holds, or the long-term key of the believed
responder $R$ has been compromised before $I$ completed its role.
%
Had the adversary compromised $R$'s long-term key, they could have generated a
message of their own choosing (different from what $R$ agreed on), and signed this or
computed a $\mathit{MAC}_R$ based on \mPubE{I}, \mPriv{R} and their own chosen
ephemeral key pair $\langle\mPrivE{R},\ \mPubE{R}\rangle$.
%
This places no restrictions on the ephemeral
key reveals, or on the reveal of the initiator's long-term key.
%
For the responder we define the property \mPredInjR{} as in
Figure~\ref{fig:props}.
%
%------------------------------------------------------------- Implicit auth
Unlike PFS, not all \mEdhoc{} methods enjoy the injective agreement property.
%
Hence, we show for all methods a form of \emph{implicit agreement} on all the
parameters mentioned above.
%
We take inspiration from the computational model definitions of implicit
authentication, proposed by Guilhem~et~al.~\cite{DBLP:conf/csfw/GuilhemFW20}, to
modify classical injective agreement into an implicit property.
%
A small but important difference between our definition and theirs, is that
they focus on
authenticating a key and related identities, whereas we extend the more general
concept of agreeing on a set of parameters, starting from the idea of injective
agreement~\cite{DBLP:conf/csfw/Lowe97a}.
%
We use the term \emph{implicit} in this context to denote that a party $A$
assumes that any other party $B$ who knows the session key material \mSessKey{} must
be the intended party, and that $B$ (if honest) will also agree on a set
$S$ of parameters computed by the protocol, one of which is \mSessKey{}.
%
When implicit agreement holds for both roles, upon completion, $A$ is guaranteed
that $A$ has been or is engaged in exactly one protocol run with $B$ in the
opposite role, and that $B$ has been or will be able to agree on $S$.
%
The main difference to injective agreement is that $A$ concludes that if
$A$ sends the last message and this reaches $B$, then $A$ and $B$ have agreed
on $I$, $R$ and $S$.
%
While almost full explicit key authentication, as defined by
Guilhem~et~al.~\cite{DBLP:conf/csfw/GuilhemFW20}, is a similar property, our
definition does not require key confirmation, so our definition is closer to
their definition of implicit authentication.
%
In the \mTamarin{} model we split the property into one lemma for
$I$ (\mPredImpI{}) and one for $R$ (\mPredImpR{}) to save memory during
verification.
%
We show only the definition for $I$ in Figure~\ref{fig:props}, because it is
symmetric to the one for $R$.
%
For implicit agreement to hold for the initiator $I$, the ephemeral keys
can never be revealed.
%
Intuitively, the reason for this is that the implicit agreement relies on the
fact that whoever knows the session key material is the intended responder.
%
An adversary with access to the ephemeral keys and the public keys of
both parties can compute the session key material produced by all methods.
%
However, the responder $R$'s long-term key can be revealed after $I$ completes
its run, because the adversary is still unable to compute $P_e$.
%
The initiator's long-term key can also be revealed at any time without affecting
$I$'s guarantee for the same reason.
%
%------------------------------------------------------- Agreed parameters
\subsubsection{Agreed Parameters}
\label{sec:agreedParams}
The initiator $I$ gets injective and implicit agreement guarantees on the
following partial set $S_P$ of parameters:
\begin{itemize}
\item the roles played by itself and its peer,
\item responder identity,
\item session key material (which varies depending on \mEdhoc{} method),
\item context identifiers \mCi{} and \mCr{}, and
\item cipher suites \mSuites{}.
\end{itemize}
%
Because \mEdhoc{} aims to provide identity protection for $I$, there is no
injective agreement guarantee for $I$ that $R$ agrees on the initiator's
identity.
%
For the same reason, there is no such guarantee for $I$ with respect to
the $P_I$ part of the session key material when $I$ uses the \mStat{}
authentication method.
%
There is, however, an implicit agreement guarantee for $I$ that $R$ agrees on
$I$'s identity and the full session key material.
%
Since $R$ completes after $I$, $R$ can get injective agreement guarantees on
more parameters, namely also the initiator's identity and the full session key
material for all methods.
%
The full set of agreed parameters $S_F$ is $S_P \cup \{I, P_I\}$
when $P_I$ is
part of the session key material, and $S_P \cup \{I\}$ otherwise.
%
%------------------------------------------------------- Implied properties
\subsubsection{Inferred Properties}
From the above, other properties can be inferred to hold in our adversary model.
%
Protocols where a party does not get confirmation that their peer knows the
session key material may be susceptible to
\emph{Key-Compromise Impersonation (KCI)}
attacks~\cite{DBLP:conf/ima/Blake-WilsonJM97}.
%
Attacks in this class allow an adversary in possession of a party $A$'s secret
long-term key to coerce $A$ to complete a
protocol run believing it authenticated a certain peer $B$, but where $B$ did
not engage with $A$ at all in a run.
%
Because both our notions of agreement above ensure agreement on identities,
roles and session key material, all methods passing verification of those are
also resistant to KCI attacks.
%
If a party $A$ can be coerced into believing it completed a run with $B$, but
where the session key material is actually shared with $C$ instead, the
protocol is vulnerable to an \emph{Unknown Key-Share (UKS)}
attack~\cite{DBLP:conf/ima/Blake-WilsonJM97}.
%
For the same reason as for KCI, any method for which our agreement
properties hold is also resistant to UKS attacks.
%
From the injective agreement properties it follows that each party is assured
of the identity of its peer upon completion.
%
Therefore, the agreement properties also capture \emph{entity authentication}.
%
%---------------------------------------------------------------------------
\subsection{\mTamarin{}}
\label{sec:tamarin}
We chose \mTamarin{} to model and verify \mEdhoc{} in the symbolic model.
%
\mTamarin{} is an interactive verification tool in which models are specified
as multi-set rewrite rules that define a transition relation.
%
The elements of the multi-sets are called facts and represent the global system
state.
%
Rules are equipped with event annotations called actions.
%
Sequences of actions make up the execution trace, over which
logic formulas are defined and verified.
%
Multi-set rewrite rules with actions are written\\ $ l \ifarrow[e] r $,
where $l$ and $r$ are multi-sets of facts, and $e$ is a multi-set of actions.
%
Facts and actions are $n$-ary predicates over a term algebra, which defines a
set of function symbols, variables and names.
%
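As a toy illustration of this notation (not part of our model), the following
rule consumes a fresh term and produces a persistent fact while recording an
action:
\begin{small}
\begin{verbatim}
rule GenerateKey:
  [ Fr(~k) ] --[ KeyGenerated(~k) ]-> [ !Key(~k) ]
\end{verbatim}
\end{small}
%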
\mTamarin{} checks equality of these terms under an equational theory $E$.
%
For example, one can write $ dec(enc(x,y),y) =_E x $
to denote that symmetric decryption reverses the encryption operation under $E$.
%
The equational theory $E$ is fixed per model, and hence we omit the subscript.
%
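In \mTamarin{}'s input language, such a theory could be declared roughly as
follows (a sketch; the symbol names are ours):
\begin{small}
\begin{verbatim}
functions: enc/2, dec/2
equations: dec(enc(x, y), y) = x
\end{verbatim}
\end{small}
%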
\mTamarin{} supports let-bindings and tuples as syntactic sugar to simplify
model definitions.
%
It also provides built-in rules for Dolev-Yao adversaries and for
managing their knowledge.
%
We implement events using actions, and parameters associated with events using
terms of the algebra.
%
\subsubsection{Protocol Rules and Equations}
\mTamarin{} allows users to define new function symbols and equational theories.
From these definitions, \mTamarin{} derives adversary deduction rules,
which are added to the set of considered rules during verification.
For example, in our model we have a symbol to denote authenticated encryption,
for which \mTamarin{} produces a rule of the following form:
%
\begin{small}
\begin{verbatim}
[!KU(k), !KU(m), !KU(ad), !KU(ai)] --[]->
[!KU(aeadEncrypt(k, m, ad, ai))]
\end{verbatim}
\end{small}
%
to denote that if the adversary knows a key \mT{k}, a message \mT{m}, the
authenticated data \mT{ad}, and an algorithm \mT{ai}, then they can construct
the encryption, and thus get to know the message
\mT{aeadEncrypt(k, m, ad, ai)}.
%
%-------------------------------------------------------------------------- sub
\subsection{\mTamarin{} Encoding of \mEdhoc{}}
\label{sec:modeling}
%
We model all four methods of \mEdhoc{}, namely
\mSigSig, \mSigStat, \mStatSig{} and \mStatStat.
%
Because the methods share a lot of common structure, we derive
their \mTamarin{} models from a single specification written with the aid of the
M4 macro language.
%
To keep the presentation brief, we only present the \mStatSig{} method, as it
illustrates the use of two different asymmetric authentication methods
simultaneously.
%
The full \mTamarin{} code for all models can be found at~\cite{edhocTamarinRepo}.
%
Variable names used in the code excerpts here are sometimes shortened compared
to the model itself to fit the paper format.
%
\subsubsection{Primitive Operations}
Our model uses the built-in theories of exclusive-or and DH operations, as
in~\cite{DBLP:conf/csfw/DreierHRS18,DBLP:conf/csfw/SchmidtMCB12}.
%
Hashing is modeled via the built-in hashing function symbol, augmented
with a public constant as additional input to model different
hash functions.
%
The HKDF interface is represented by \mT{expa} for the
expansion operation and \mT{extr} for the extraction operation.
%
Signatures use \mTamarin's built-in theory for \mT{sign} and \mT{verify}
operations.
%
For \mAead{} operations on key \mT{k}, message \mbox{\mT{m}}, additional data \mT{ad}
and algorithm identifier \mT{ai}, we use \mT{aeadEncrypt(m, k, ad, ai)}
for encryption.
%
Decryption with verification of the integrity is defined via the equation
\begin{small}\begin{verbatim}
aeadDecrypt(aeadEncrypt(m, k, ad, ai),
k, ad, ai) = m.
\end{verbatim}\end{small}
%
The integrity protection of AEAD covers \mT{ad}, and this equation hence requires
an adversary to know \mT{ad} even if they only wish to decrypt the data.
%
To enable the adversary to decrypt without needing to verify the integrity
we add the equation
\begin{small}\begin{verbatim}
decrypt(aeadEncrypt(m, k, ad, ai), k, ai) = m.
\end{verbatim}\end{small}
%
The latter equation is not used by honest parties.
%
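For concreteness, these symbols and equations could be declared in
\mTamarin{}'s syntax roughly as follows (a sketch; \mT{h} takes the
distinguishing constant as an extra argument, and the exact declarations in
the model may differ in detail):
\begin{small}
\begin{verbatim}
functions: h/2, expa/2, extr/2, aeadEncrypt/4,
           aeadDecrypt/4, decrypt/3
equations: aeadDecrypt(aeadEncrypt(m, k, ad, ai),
                       k, ad, ai) = m,
           decrypt(aeadEncrypt(m, k, ad, ai),
                   k, ai) = m
\end{verbatim}
\end{small}
%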
\subsubsection{Protocol Environment and Adversary Model}
We model the binding between a party's identity and their long-term
key pairs using the following rules.
%
\begin{small}
\begin{verbatim}
rule registerLTK_SIG:
[Fr(~ltk)] --[UniqLTK($A, ~ltk)]->
[!LTK_SIG($A, ~ltk),
!PK_SIG($A, pk(~ltk)),
Out(<$A, pk(~ltk)>)]
rule registerLTK_STAT:
[Fr(~ltk)] --[UniqLTK($A, ~ltk)]->
[!LTK_STAT($A, ~ltk),
!PK_STAT($A, 'g'^~ltk),
Out(<$A, 'g'^~ltk>)]
\end{verbatim}
\end{small}
%
The two rules register long-term
key pairs for the \mSig{}- and \mStat{}-based methods respectively.
%
The fact \verb|Fr(~ltk)| creates a fresh term \mT{ltk}, representing a long-term
secret key, not known to the adversary.
%
The fact \verb|Out(<$A, pk(~ltk)>)| sends the identity of the party
owning the long-term key and the corresponding public key to the adversary.
%
The event \mT{UniqLTK} together with a corresponding restriction models the fact
that each party is associated with exactly one long-term key.
%
Consequently, an adversary cannot register additional long-term keys for an
identity.
%
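Such a restriction can be written roughly as follows (a sketch; the model's
exact formulation may differ):
\begin{small}
\begin{verbatim}
restriction uniqueLTK:
  "All A k k2 #i #j.
     UniqLTK(A, k)@i & UniqLTK(A, k2)@j ==> k = k2"
\end{verbatim}
\end{small}
%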
In line with the \mEdhoc{} \mSpec{}, this models an external mechanism
ensuring that long-term keys are bound to correct identities, e.g.,
a certificate authority.
%
We rely on \mTamarin{}'s built-in message deduction rules for a Dolev-Yao adversary.
%
To model an adversary compromising long-term keys, i.e., events of type
\mRevLTK{}, and revealing ephemeral keys, i.e., events of type
\mRevEph{}, we use standard reveal rules.
%
The timing of reveals as modelled by these events is important.
%
The long-term keys can be revealed on registration, before protocol execution.
%
The ephemeral key of a party can be revealed when the party completes,
i.e., at events of type \mIComplete{} and \mRComplete.~\footnote{A stronger, and perhaps more realistic, model would reveal ephemeral keys upon
creation at the start of the run, but we failed to get \mTamarin{} to
terminate on this.}
%
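As an illustration, a long-term key reveal rule for the \mSig{}-based methods
can be sketched as follows (the fact and action names follow our presentation,
and the model itself may differ in detail):
\begin{small}
\begin{verbatim}
rule revealLTK_SIG:
  [!LTK_SIG($A, ~ltk)] --[LTKRev($A)]->
  [Out(~ltk)]
\end{verbatim}
\end{small}
%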
\subsubsection{Protocol Roles}
We model each method of the protocol with four rules: \mT{I1}, \mT{R2}, \mT{I3}
and \mT{R4} (with the method suffixed to the rule name).
%
Each of these represent one step of the protocol as run by the initiator $I$
or the responder $R$.
%
The rules correspond to the event types \mIStart, \mRStart, \mIComplete, and
\mRComplete, respectively.
%
Facts prefixed with \mT{StI} carry state information between \mT{I1} and \mT{I3}.
%
A term unique to the current thread, \mT{tid}, links two rules to a given state fact.
%
Similarly, facts prefixed with \mT{StR} carry state information between the
responder role's rules.
%
Line 28 in the \mT{R2\_STAT\_SIG} rule shown below illustrates one such use of state
facts.
%
We do not model the error message that $R$ can send in response to message
\mMsgone, and hence our model does not
capture the possibility for $R$ to reject $I$'s offer.
%
We model the XOR encryption of \mT{CIPHERTEXT\_2} with the key \mT{K\_2e} using
\mTamarin{}'s built-in theory for XOR, and allow each term of the encrypted
element to be attacked individually.
%
That is, we first expand \mT{K\_2e} to as many key-stream terms as there are
terms in the plaintext tuple by applying the \mHkdfExpand{} function to unique
inputs per term.
%
We then XOR each term in the plaintext with its own key-stream term.
%
This models the \mSpec{} more closely than XORing \mT{K\_2e} as a
single term onto the plaintext tuple.
%
The XOR encryption can be seen in lines 19--22 in the listing of
\mT{R2\_STAT\_SIG} below.
%
\begin{small}
\begin{verbatim}
1 rule R2_STAT_SIG:
2 let
3 agreed = <CS0, CI, ~CR>
4 gx = 'g'^xx
5 data_2 = <'g'^~yy, CI, ~CR>
6 m1 = <'STAT', 'SIG', CS0, CI, gx>
7 TH_2 = h(<$H0, m1, data_2>)
8 prk_2e = extr('e', gx^~yy)
9 prk_3e2m = prk_2e
10 K_2m = expa(<$cAEAD0, TH_2, 'K_2m'>,
11 prk_3e2m)
12 protected2 = $V // ID_CRED_V
13 CRED_V = pkV
14 extAad2 = <TH_2, CRED_V>
15 assocData2 = <protected2, extAad2>
16 MAC_2 = aead('e', K_2m, assocData2,
$cAEAD0)
17 authV = sign(<assocData2, MAC_2>, ~ltk)
18 plainText2 = <$V, authV>
19 K_2e = expa(<$cAEAD0, TH_2,
'K_2e'>, prk_2e)
20 K_2e_1 = expa(<$cAEAD0, TH_2,
'K_2e', '1'>, prk_2e)
21 K_2e_2 = expa(<$cAEAD0, TH_2,
'K_2e', '2'>, prk_2e)
22 CIPHERTEXT_2 = <$V XOR K_2e_1,
authV XOR K_2e_2>
23 m2 = <data_2, CIPHERTEXT_2>
24 exp_sk = <gx^~yy>
25 in
26 [!LTK_SIG($V, ~ltk), !PK_SIG($V, pkV),
In(m1), Fr(~CR), Fr(~yy), Fr(~tid)]
27 --[ExpRunningR(~tid, $V, exp_sk, agreed),
R2(~tid, $V, m1, m2)]->
28 [StR2_STAT_SIG($V, ~ltk, ~yy, prk_3e2m,
TH_2, CIPHERTEXT_2, gx^~yy,
~tid, m1, m2, agreed),
29 Out(m2)]
\end{verbatim}
\end{small}
%
To implement events and
to bind them to parameters, we use actions.
%
For example, the action \verb|ExpRunningR(~tid, $V, exp_sk, agreed)| in line 27
above implements binding of an event of type \mRStart{} to the parameters and session key
material.
%
As explained in Section~\ref{sec:agreedParams}, it is not possible to show
injective agreement on session key material when it includes
$P_I$ (not visible in the rule \mT{R2\_STAT\_SIG}).
%
Therefore, we use certain actions to implement events that include $P_I$ in the
session key material and other actions that do not.
%
Session key material which includes (resp. does not include) $P_I$ is referred
to as \mT{imp\_sk} (resp. \mT{exp\_sk}) in the
\mTamarin{} model.
%
In the case of \mSigSig{} and \mSigStat, therefore, \mT{imp\_sk} is the same as
\mT{exp\_sk}.
%
%-------------------------------------------------------------------------- sub
\subsection{\mTamarin{} Encoding of Properties}
\label{sec:propertyFormalization}
The properties and adversary model we defined in
Section~\ref{sec:desired-properties} translate directly into \mTamarin's logic,
using the straightforward mapping of events to the actions emitted from the model.
%
As an example, we show the lemma for verifying the property \mPredPfs.
%
\begin{small}
\begin{verbatim}
1 lemma secrecyPFS:
2 all-traces
3 "All u v sk #t3 #t2.
4 (K(sk)@t3 & CompletedRun(u, v, sk)@t2) ==>
5 ( (Ex #t1. LTKRev(u)@t1 & #t1 < #t2)
6 | (Ex #t1. LTKRev(v)@t1 & #t1 < #t2)
7 | (Ex #t1. EphKeyRev(sk)@t1))"
\end{verbatim}
\end{small}
%
As mentioned earlier, the action \mT{CompletedRun(u, v, sk)} in line 4 is
emitted by both the rules \mT{I3} and \mT{R4}, and corresponds
to the disjunction of events $\mIComplete^{t_2} \lor \mRComplete^{t_2}$ in the
definition of \mPredPfs{} in Section~\ref{sec:secrecy}.
%
Similarly, \mT{EphKeyRev(sk)} in line 7 models that the ephemeral
key is revealed for either $I$ or $R$, or both.
%
|
Parameters p1 p2 t1 t2 : Prop.
Definition aff1 := t1 /\ t2.
Definition aff2 := t1.
Definition k := ((p1 /\ aff1) \/ (t1 /\ ~aff1)) /\ ((p2 /\ ~aff2) \/ (t2 /\ aff2)).
Definition h1 := ~(p1 /\ t1) /\ ~(p2 /\ t2).
Definition h2 := (p1 \/ t1) /\ (p2 \/ t2).
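(* Case analysis on the disjunctions in the hypotheses; h1 refutes the contradictory branches. *)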
Lemma epreuve_8 : h1 /\ h2 /\ k -> t1 /\ p2.
Proof.
unfold k, h1, h2.
unfold aff1, aff2.
intros.
destruct H.
destruct H.
destruct H0.
destruct H0.
destruct H2.
destruct H2.
destruct H2.
destruct H5.
elimtype False.
apply H.
split; assumption.
destruct H2.
destruct H4.
destruct H4.
auto.
destruct H4.
elimtype False.
apply H5.
split; assumption.
Qed.
|
State Before: α : Type u
inst✝² : Group α
inst✝¹ : LE α
inst✝ : CovariantClass α α (swap fun x x_1 => x * x_1) fun x x_1 => x ≤ x_1
a✝ b c d a : α
⊢ symm (mulRight a) = mulRight a⁻¹ State After: case h.h
α : Type u
inst✝² : Group α
inst✝¹ : LE α
inst✝ : CovariantClass α α (swap fun x x_1 => x * x_1) fun x x_1 => x ≤ x_1
a✝ b c d a x : α
⊢ ↑(symm (mulRight a)) x = ↑(mulRight a⁻¹) x Tactic: ext x State Before: case h.h
α : Type u
inst✝² : Group α
inst✝¹ : LE α
inst✝ : CovariantClass α α (swap fun x x_1 => x * x_1) fun x x_1 => x ≤ x_1
a✝ b c d a x : α
⊢ ↑(symm (mulRight a)) x = ↑(mulRight a⁻¹) x State After: no goals Tactic: rfl
|
A prime number is greater than 1.
|
/*
Copyright (C) 2003-2014 by David White <[email protected]>
This software is provided 'as-is', without any express or implied
warranty. In no event will the authors be held liable for any damages
arising from the use of this software.
Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it
freely, subject to the following restrictions:
1. The origin of this software must not be misrepresented; you must not
claim that you wrote the original software. If you use this software
in a product, an acknowledgement in the product documentation would be
appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be
misrepresented as being the original software.
3. This notice may not be removed or altered from any source
distribution.
*/
#include <sstream>
#include <string>
#include <cstdint>
#include <numeric>
#include <stdio.h>
#include <boost/asio.hpp>
#include <boost/regex.hpp>
#include <deque>
#include <functional>
#include "asserts.hpp"
#include "controls.hpp"
#include "formatter.hpp"
#include "level.hpp"
#include "multiplayer.hpp"
#include "preferences.hpp"
#include "profile_timer.hpp"
#include "random.hpp"
#include "regex_utils.hpp"
#include "unit_test.hpp"
using boost::asio::ip::tcp;
using boost::asio::ip::udp;
namespace multiplayer
{
namespace
{
std::shared_ptr<boost::asio::io_service> asio_service;
std::shared_ptr<tcp::socket> tcp_socket;
std::shared_ptr<udp::socket> udp_socket;
std::shared_ptr<udp::endpoint> udp_endpoint;
std::vector<std::shared_ptr<udp::endpoint> > udp_endpoint_peers;
int32_t id;
int player_slot;
bool udp_packet_waiting()
{
if(!udp_socket) {
return false;
}
boost::asio::socket_base::bytes_readable command(true);
udp_socket->io_control(command);
return command.get() != 0;
}
bool tcp_packet_waiting()
{
if(!tcp_socket) {
return false;
}
boost::asio::socket_base::bytes_readable command(true);
tcp_socket->io_control(command);
return command.get() != 0;
}
}
int slot()
{
return player_slot;
}
Manager::Manager(bool activate)
{
if(activate) {
asio_service.reset(new boost::asio::io_service);
}
}
Manager::~Manager() {
udp_endpoint.reset();
tcp_socket.reset();
udp_socket.reset();
asio_service.reset();
player_slot = 0;
}
void setup_networked_game(const std::string& server)
{
boost::asio::io_service& io_service = *asio_service;
tcp::resolver resolver(io_service);
tcp::resolver::query query(server, "17002");
tcp::resolver::iterator endpoint_iterator = resolver.resolve(query);
tcp::resolver::iterator end;
tcp_socket.reset(new tcp::socket(io_service));
tcp::socket& socket = *tcp_socket;
boost::system::error_code error = boost::asio::error::host_not_found;
while(error && endpoint_iterator != end) {
socket.close();
socket.connect(*endpoint_iterator++, error);
}
if(error) {
LOG_INFO("NETWORK ERROR: Can't connect to host: " << server);
throw multiplayer::Error();
}
boost::array<char, 4> initial_response;
size_t len = socket.read_some(boost::asio::buffer(initial_response), error);
if(error) {
LOG_INFO("ERROR READING INITIAL RESPONSE");
throw multiplayer::Error();
}
if(len != 4) {
LOG_INFO("INITIAL RESPONSE HAS THE WRONG SIZE: " << (int)len);
throw multiplayer::Error();
}
memcpy(&id, &initial_response[0], 4);
LOG_INFO("ID: " << id);
udp::resolver udp_resolver(io_service);
udp::resolver::query udp_query(udp::v4(), server, "17001");
udp_endpoint.reset(new udp::endpoint);
*udp_endpoint = *udp_resolver.resolve(udp_query);
udp::endpoint& receiver_endpoint = *udp_endpoint;
udp_socket.reset(new udp::socket(io_service));
udp_socket->open(udp::v4());
boost::array<char, 4> udp_msg;
memcpy(&udp_msg[0], &id, 4);
// udp_socket->send_to(boost::asio::buffer(udp_msg), receiver_endpoint);
LOG_INFO("SENT UDP PACKET");
udp::endpoint sender_endpoint;
// len = udp_socket->receive_from(boost::asio::buffer(udp_msg), sender_endpoint);
LOG_INFO("GOT UDP PACKET: " << (int)len);
std::string msg = "greetings!";
socket.write_some(boost::asio::buffer(msg), error);
if(error) {
LOG_INFO("NETWORK ERROR: Could not send data");
throw multiplayer::Error();
}
}
namespace
{
void send_confirm_packet(int nplayer, std::vector<char>& msg, bool has_confirm) {
msg.resize(6);
msg[0] = has_confirm ? 'a' : 'A';
memcpy(&msg[1], &id, 4);
msg.back() = player_slot;
if(nplayer == player_slot || nplayer < 0 || static_cast<unsigned>(nplayer) >= udp_endpoint_peers.size()) {
return;
}
LOG_INFO("SENDING CONFIRM TO " << udp_endpoint_peers[nplayer]->port() << ": " << nplayer);
udp_socket->send_to(boost::asio::buffer(msg), *udp_endpoint_peers[nplayer]);
}
}
void sync_start_time(const Level& lvl, std::function<bool()> idle_fn)
{
if(!tcp_socket) {
return;
}
ffl::IntrusivePtr<const Level> lvl_ptr(&lvl);
ffl::IntrusivePtr<Client> client(new Client(lvl.id(), lvl.players().size()));
while(!client->PumpStartLevel()) {
if(idle_fn) {
const bool res = idle_fn();
if(!res) {
LOG_INFO("quitting game...");
throw multiplayer::Error();
}
}
}
}
namespace {
struct QueuedMessages {
std::vector<std::function<void()> > send_fn;
};
PREF_INT(fakelag, 0, "Number of milliseconds of artificial lag to introduce to multiplayer");
}
void send_and_receive()
{
if(!udp_socket || controls::num_players() == 1) {
return;
}
static std::deque<QueuedMessages> message_queue;
//send our ID followed by the send packet.
std::vector<char> send_buf(5);
send_buf[0] = 'C';
memcpy(&send_buf[1], &id, 4);
controls::write_control_packet(send_buf);
if(message_queue.empty() == false) {
QueuedMessages& msg = message_queue.front();
for(const std::function<void()>& fn : msg.send_fn) {
fn();
}
message_queue.pop_front();
}
for(int n = 0; n != udp_endpoint_peers.size(); ++n) {
if(n == player_slot) {
continue;
}
const unsigned lagframes = g_fakelag/20;
if(lagframes == 0) {
udp_socket->send_to(boost::asio::buffer(send_buf), *udp_endpoint_peers[n]);
} else {
while(lagframes >= message_queue.size()) {
message_queue.push_back(QueuedMessages());
}
message_queue[lagframes].send_fn.push_back([=]() {
udp_socket->send_to(boost::asio::buffer(send_buf), *udp_endpoint_peers[n]);
});
}
}
receive();
}
void receive()
{
while(udp_packet_waiting()) {
udp::endpoint sender_endpoint;
boost::array<char, 4096> udp_msg;
size_t len = udp_socket->receive(boost::asio::buffer(udp_msg));
if(len == 0 || udp_msg[0] != 'C') {
continue;
}
if(len < 5) {
LOG_INFO("UDP PACKET TOO SHORT: " << (int)len);
continue;
}
controls::read_control_packet(&udp_msg[5], len - 5);
}
}
Client::Client(const std::string& game_id, int nplayers) : game_id_(game_id), nplayers_(nplayers), completed_(false)
{
//find our host and port number within our NAT and tell the server
//about it, so if two servers from behind the same NAT connect to
//the server, it can tell them how to connect directly to each other.
std::string local_host;
int local_port = 0;
{
//send something on the UDP socket, just so that we get a port.
std::vector<char> send_buf;
send_buf.push_back('.');
udp_socket->send_to(boost::asio::buffer(send_buf), *udp_endpoint);
tcp::endpoint local_endpoint = tcp_socket->local_endpoint();
local_host = local_endpoint.address().to_string();
local_port = udp_socket->local_endpoint().port();
LOG_INFO("LOCAL ENDPOINT: " << local_host << ":" << local_port);
}
std::ostringstream s;
s << "READY/" << game_id_ << "/" << nplayers_ << "/" << local_host << " " << local_port;
boost::system::error_code error = boost::asio::error::host_not_found;
tcp_socket->write_some(boost::asio::buffer(s.str()), error);
if(error) {
LOG_INFO("ERROR WRITING TO SOCKET");
throw multiplayer::Error();
}
}
bool Client::PumpStartLevel()
{
if(!tcp_packet_waiting()) {
std::vector<char> send_buf(5);
send_buf[0] = 'Z';
memcpy(&send_buf[1], &id, 4);
udp_socket->send_to(boost::asio::buffer(send_buf), *udp_endpoint);
return false;
}
boost::system::error_code error = boost::asio::error::host_not_found;
boost::array<char, 1024> response;
size_t len = tcp_socket->read_some(boost::asio::buffer(response), error);
if(error) {
LOG_INFO("ERROR READING FROM SOCKET");
throw multiplayer::Error();
}
std::string str(&response[0], &response[0] + len);
if(std::string(str.begin(), str.begin() + 5) != "START") {
LOG_INFO("UNEXPECTED RESPONSE: '" << str << "'");
throw multiplayer::Error();
}
const char* ptr = str.c_str() + 6;
char* end_ptr = nullptr;
const int nplayers = strtol(ptr, &end_ptr, 10);
ptr = end_ptr;
ASSERT_EQ(*ptr, '\n');
++ptr;
boost::asio::io_service& io_service = *asio_service;
udp::resolver udp_resolver(io_service);
udp_endpoint_peers.clear();
for(int n = 0; n != nplayers; ++n) {
const char* end = strchr(ptr, '\n');
ASSERT_LOG(end != nullptr, "ERROR PARSING RESPONSE: " << str);
std::string line(ptr, end);
ptr = end+1;
if(line == "SLOT") {
player_slot = n;
udp_endpoint_peers.push_back(std::shared_ptr<udp::endpoint>());
LOG_INFO("SLOT: " << player_slot);
continue;
}
static boost::regex pattern("(.*?) (.*?)");
std::string host, port;
match_regex(line, pattern, &host, &port);
LOG_INFO("SLOT " << n << " = " << host << " " << port);
udp_endpoint_peers.push_back(std::shared_ptr<udp::endpoint>(new udp::endpoint));
udp::resolver::query peer_query(udp::v4(), host, port);
*udp_endpoint_peers.back() = *udp_resolver.resolve(peer_query);
if(preferences::relay_through_server()) {
*udp_endpoint_peers.back() = *udp_endpoint;
}
}
std::set<int> confirmed_players;
confirmed_players.insert(player_slot);
LOG_INFO("PLAYER " << player_slot << " CONFIRMING...");
int confirmation_point = 1000;
for(unsigned m = 0; (m != 1000 && confirmed_players.size() < static_cast<unsigned>(nplayers)) || m < static_cast<unsigned>(confirmation_point) + 50; ++m) {
std::vector<char> msg;
for(int n = 0; n != nplayers; ++n) {
send_confirm_packet(n, msg, confirmed_players.count(n) != 0);
}
while(udp_packet_waiting()) {
boost::array<char, 4096> udp_msg;
udp::endpoint endpoint;
size_t len = udp_socket->receive_from(boost::asio::buffer(udp_msg), endpoint);
if(len == 6 && ::toupper(udp_msg[0]) == 'A') {
confirmed_players.insert(udp_msg[5]);
if(udp_msg[5] >= 0 && static_cast<unsigned>(udp_msg[5]) < udp_endpoint_peers.size()) {
if(endpoint.port() != udp_endpoint_peers[udp_msg[5]]->port()) {
LOG_INFO("REASSIGNING PORT " << endpoint.port() << " TO " << udp_endpoint_peers[udp_msg[5]]->port());
}
*udp_endpoint_peers[udp_msg[5]] = endpoint;
}
if(confirmed_players.size() >= static_cast<unsigned>(nplayers) && m < static_cast<unsigned>(confirmation_point)) {
//now all players are confirmed
confirmation_point = m;
}
LOG_INFO("CONFIRMED PLAYER: " << static_cast<int>(udp_msg[5]) << "/ " << player_slot);
}
}
//we haven't had any luck so far, so start port scanning, in case
//it's on another port.
if(m > 100 && (m%100) == 0) {
for(int n = 0; n != nplayers; ++n) {
if(n == player_slot || confirmed_players.count(n)) {
continue;
}
LOG_INFO("PORTSCANNING FOR PORTS...");
for(int port_offset=-5; port_offset != 100; ++port_offset) {
udp::endpoint peer_endpoint;
const int port = udp_endpoint_peers[n]->port() + port_offset;
if(port <= 1024 || port >= 65536) {
continue;
}
std::string port_str = formatter() << port;
udp::resolver::query peer_query(udp::v4(), udp_endpoint_peers[n]->address().to_string(), port_str.c_str());
peer_endpoint = *udp_resolver.resolve(peer_query);
udp_socket->send_to(boost::asio::buffer(msg), peer_endpoint);
}
}
}
profile::delay(10);
}
if(confirmed_players.size() < static_cast<unsigned>(nplayers)) {
LOG_INFO("COULD NOT CONFIRM NETWORK CONNECTION TO ALL PEERS");
throw multiplayer::Error();
}
controls::set_delay(3);
LOG_INFO("HANDSHAKING...");
if(player_slot == 0) {
int ping_id = 0;
std::map<int, int> ping_sent_at;
std::map<int, int> ping_player;
std::map<std::string, int> contents_ping;
std::map<int, int> player_nresponses;
std::map<int, int> player_latency;
int delay = 0;
int last_send = -1;
const int game_start = profile::get_tick_time() + 1000;
boost::array<char, 1024> receive_buf;
while(profile::get_tick_time() < game_start) {
const int ticks = profile::get_tick_time();
const int start_in = game_start - ticks;
if(start_in < 500 && delay == 0) {
//calculate what the delay should be
for(int n = 0; n != nplayers; ++n) {
if(n == player_slot) {
continue;
}
if(player_nresponses[n]) {
const int avg_latency = player_latency[n]/player_nresponses[n];
const int delay_time = avg_latency/(20*2) + 2;
if(delay_time > delay) {
delay = delay_time;
}
}
}
if(delay) {
LOG_INFO("SET DELAY TO " << delay);
controls::set_delay(delay);
}
}
if(last_send == -1 || ticks >= last_send+10) {
last_send = ticks;
for(int n = 0; n != nplayers; ++n) {
if(n == player_slot) {
continue;
}
char buf[256];
int start_advisory = start_in;
const int player_responses = player_nresponses[n];
if(player_responses) {
const int avg_latency = player_latency[n]/player_responses;
start_advisory -= avg_latency/2;
if(start_advisory < 0) {
start_advisory = 0;
}
}
LOG_INFO("SENDING ADVISORY TO START IN " << start_in << " - " << (start_in - start_advisory));
const int buf_len = sprintf(buf, "PXXXX%d %d %d", ping_id, start_advisory, delay);
memcpy(&buf[1], &id, 4);
std::string msg(buf, buf + buf_len);
udp_socket->send_to(boost::asio::buffer(msg), *udp_endpoint_peers[n]);
ping_sent_at[ping_id] = ticks;
contents_ping[std::string(msg.begin()+5, msg.end())] = ping_id;
ping_player[ping_id] = n;
ping_id++;
}
}
while(udp_packet_waiting()) {
size_t len = udp_socket->receive(boost::asio::buffer(receive_buf));
if(len > 5 && receive_buf[0] == 'P') {
std::string msg(&receive_buf[0], &receive_buf[0] + len);
std::string msg_content(msg.begin()+5, msg.end());
ASSERT_LOG(contents_ping.count(msg_content), "UNRECOGNIZED PING: " << msg);
const int ping = contents_ping[msg_content];
const int latency = ticks - ping_sent_at[ping];
const int nplayer = ping_player[ping];
player_nresponses[nplayer]++;
player_latency[nplayer] += latency;
LOG_INFO("RECEIVED PING FROM " << nplayer << " IN " << latency << " AVG LATENCY " << player_latency[nplayer]/player_nresponses[nplayer]);
} else {
if(len == 6 && receive_buf[0] == 'A') {
std::vector<char> msg;
const int player_num = receive_buf[5];
send_confirm_packet(player_num, msg, true);
}
}
}
profile::delay(1);
}
} else {
std::vector<unsigned> start_time;
boost::array<char, 1024> buf;
for(;;) {
while(udp_packet_waiting()) {
size_t len = udp_socket->receive(boost::asio::buffer(buf));
LOG_INFO("GOT MESSAGE: " << buf[0]);
if(len > 5 && buf[0] == 'P') {
memcpy(&buf[1], &id, 4); //write our ID for the return msg.
const std::string s(&buf[0], &buf[0] + len);
std::string::const_iterator begin_start_time = std::find(s.begin() + 5, s.end(), ' ');
ASSERT_LOG(begin_start_time != s.end(), "NO WHITE SPACE FOUND IN PING MESSAGE: " << s);
std::string::const_iterator begin_delay = std::find(begin_start_time + 1, s.end(), ' ');
ASSERT_LOG(begin_delay != s.end(), "NO WHITE SPACE FOUND IN PING MESSAGE: " << s);
const std::string start_in(begin_start_time+1, begin_delay);
const std::string delay_time(begin_delay+1, s.end());
const int start_in_num = atoi(start_in.c_str());
const int delay = atoi(delay_time.c_str());
start_time.push_back(profile::get_tick_time() + start_in_num);
while(start_time.size() > 5) {
start_time.erase(start_time.begin());
}
if(delay) {
LOG_INFO("SET DELAY TO " << delay);
controls::set_delay(delay);
}
udp_socket->send_to(boost::asio::buffer(s), *udp_endpoint_peers[0]);
} else {
if(len == 6 && buf[0] == 'A') {
std::vector<char> msg;
const int player_num = buf[5];
send_confirm_packet(player_num, msg, true);
}
}
}
if(start_time.size() > 0) {
const int start_time_avg = static_cast<int>(std::accumulate(start_time.begin(), start_time.end(), 0)/start_time.size());
if(profile::get_tick_time() >= start_time_avg) {
break;
}
}
profile::delay(1);
}
}
rng::seed_from_int(0);
completed_ = true;
return true;
}
BEGIN_DEFINE_CALLABLE_NOBASE(Client)
DEFINE_FIELD(ready_to_start, "bool")
return variant::from_bool(obj.completed_);
BEGIN_DEFINE_FN(pump, "() ->commands")
ffl::IntrusivePtr<Client> client(const_cast<Client*>(&obj));
return variant(new game_logic::FnCommandCallable("Multiplayer::Client::Pump", [=]() {
client->PumpStartLevel();
}));
END_DEFINE_FN
END_DEFINE_CALLABLE(Client)
}
namespace {
struct Peer {
std::string host, port;
};
}
COMMAND_LINE_UTILITY(hole_punch_test) {
boost::asio::io_service io_service;
udp::resolver udp_resolver(io_service);
std::string server_hostname = "wesnoth.org";
std::string server_port = "17001";
size_t narg = 0;
if(narg < args.size()) {
server_hostname = args[narg];
++narg;
}
if(narg < args.size()) {
server_port = args[narg];
++narg;
}
ASSERT_LOG(narg == args.size(), "wrong number of args");
udp::resolver::query udp_query(udp::v4(), server_hostname.c_str(), server_port.c_str());
udp::endpoint udp_endpoint;
udp_endpoint = *udp_resolver.resolve(udp_query);
udp::socket udp_socket(io_service);
udp_socket.open(udp::v4());
udp_socket.send_to(boost::asio::buffer("hello"), udp_endpoint);
std::vector<Peer> peers;
boost::array<char, 1024> buf;
for(;;) {
udp_socket.receive(boost::asio::buffer(buf));
LOG_INFO("RECEIVED {{{" << &buf[0] << "}}}\n");
char* beg = &buf[0];
char* mid = strchr(beg, ' ');
if(mid) {
*mid = 0;
const char* port = mid+1;
Peer peer;
peer.host = beg;
peer.port = port;
peers.push_back(peer);
}
for(int m = 0; m != 10; ++m) {
for(int n = 0; n != peers.size(); ++n) {
const std::string host = peers[n].host;
const std::string port = peers[n].port;
LOG_INFO("sending to " << host << " " << port);
udp::resolver::query peer_query(udp::v4(), host, port);
udp::endpoint peer_endpoint;
peer_endpoint = *udp_resolver.resolve(peer_query);
udp_socket.send_to(boost::asio::buffer("peer"), peer_endpoint);
}
profile::delay(1000);
}
}
io_service.run();
}
|
State Before: R : Type u_1
inst✝ : Semiring R
p q : R[X]
⊢ natDegree (mirror p) = natDegree p State After: case pos
R : Type u_1
inst✝ : Semiring R
p q : R[X]
hp : p = 0
⊢ natDegree (mirror p) = natDegree p
case neg
R : Type u_1
inst✝ : Semiring R
p q : R[X]
hp : ¬p = 0
⊢ natDegree (mirror p) = natDegree p Tactic: by_cases hp : p = 0 State Before: case neg
R : Type u_1
inst✝ : Semiring R
p q : R[X]
hp : ¬p = 0
⊢ natDegree (mirror p) = natDegree p State After: R : Type u_1
inst✝ : Semiring R
p q : R[X]
hp : ¬p = 0
✝ : Nontrivial R
⊢ natDegree (mirror p) = natDegree p Tactic: nontriviality R State Before: R : Type u_1
inst✝ : Semiring R
p q : R[X]
hp : ¬p = 0
✝ : Nontrivial R
⊢ natDegree (mirror p) = natDegree p State After: R : Type u_1
inst✝ : Semiring R
p q : R[X]
hp : ¬p = 0
✝ : Nontrivial R
⊢ leadingCoeff (reverse p) * leadingCoeff (X ^ natTrailingDegree p) ≠ 0 Tactic: rw [mirror, natDegree_mul', reverse_natDegree, natDegree_X_pow,
tsub_add_cancel_of_le p.natTrailingDegree_le_natDegree] State Before: R : Type u_1
inst✝ : Semiring R
p q : R[X]
hp : ¬p = 0
✝ : Nontrivial R
⊢ leadingCoeff (reverse p) * leadingCoeff (X ^ natTrailingDegree p) ≠ 0 State After: no goals Tactic: rwa [leadingCoeff_X_pow, mul_one, reverse_leadingCoeff, Ne, trailingCoeff_eq_zero] State Before: case pos
R : Type u_1
inst✝ : Semiring R
p q : R[X]
hp : p = 0
⊢ natDegree (mirror p) = natDegree p State After: no goals Tactic: rw [hp, mirror_zero]
|
classdef MimLinkedDatasetChooser < CoreBaseClass
% MimLinkedDatasetChooser. Part of the internal framework of the TD MIM Toolkit.
%
% You should not use this class within your own code. It is intended to
% be used internally within the framework of the TD MIM Toolkit.
%
% MimLinkedDatasetChooser is used to select between linked datasets.
% By default, each dataset acts independently, but you can link datasets
% together (for example, if you wanted to register images between two
% datasets). When datasets are linked, one is the primary dataset, and
% linked results are stored in the primary cache. The primary dataset
% may access results from any of its linked datasets (but not vice
% versa). Linking can be nested.
%
%
% Licence
% -------
% Part of the TD MIM Toolkit. https://github.com/tomdoel
% Author: Tom Doel, Copyright Tom Doel 2014. www.tomdoel.com
% Distributed under the MIT licence. Please see website for details.
%
properties (Access = private)
LinkedRecorderSingleton
DatasetCache
PrimaryDatasetResults % Handle to the MimDatasetResults object for this dataset
LinkedDatasetChooserList % Handles to MimLinkedDatasetChooser objects for all linked datasets, including this one
PrimaryDatasetUid % The uid of this dataset
end
events
% This event is fired when a plugin has been run for this dataset, and
% has generated a new preview thumbnail.
PreviewImageChanged
end
methods
function obj = MimLinkedDatasetChooser(framework_app_def, context_def, image_info, dataset_disk_cache, linked_recorder_singleton, plugin_cache, reporting)
obj.LinkedRecorderSingleton = linked_recorder_singleton;
obj.DatasetCache = dataset_disk_cache;
primary_dataset_results = MimDatasetResults(framework_app_def, context_def, image_info, obj, obj, dataset_disk_cache, plugin_cache, reporting);
obj.PrimaryDatasetUid = primary_dataset_results.GetImageInfo.ImageUid;
obj.PrimaryDatasetResults = primary_dataset_results;
obj.LinkedDatasetChooserList = containers.Map;
obj.LinkedDatasetChooserList(obj.PrimaryDatasetUid) = obj;
end
function AddLinkedDataset(obj, linked_name, linked_dataset_chooser, reporting)
% Links a different dataset to this one, using the specified name.
% The name exists only within the scope of this dataset, and is used
% to identify the linked dataset from which results should be
% obtained.
linked_uid = linked_dataset_chooser.PrimaryDatasetUid;
obj.LinkedDatasetChooserList(linked_uid) = linked_dataset_chooser;
obj.LinkedDatasetChooserList(linked_name) = linked_dataset_chooser;
obj.LinkedRecorderSingleton.AddLink(obj.PrimaryDatasetUid, linked_uid, linked_name, reporting);
end
function dataset_results = GetDataset(obj, reporting, varargin)
% Returns a handle to the DatasetResults object for a particular linked dataset.
% The dataset is identified by its uid in varargin, or an empty
% input will return the primary dataset.
if nargin < 3
dataset_name = [];
else
dataset_name = varargin{1};
end
if isempty(dataset_name)
dataset_name = obj.PrimaryDatasetUid;
end
if ~obj.LinkedDatasetChooserList.isKey(dataset_name)
reporting.Error('MimLinkedDatasetChooser:DatasetNotFound', 'No linked dataset was found with this name. Did you add the dataset with LinkDataset()?');
end
linked_dataset_chooser = obj.LinkedDatasetChooserList(dataset_name);
dataset_results = linked_dataset_chooser.PrimaryDatasetResults;
end
function is_linked_dataset = IsLinkedDataset(obj, linked_name_or_uid, reporting)
% Returns true if another dataset has been linked to this one, using
% the name or uid specified
is_linked_dataset = obj.LinkedDatasetChooserList.isKey(linked_name_or_uid);
end
function ClearMemoryCacheInAllLinkedDatasets(obj)
% Clears the temporary memory cache of this and all linked
% datasets
for linker = obj.LinkedDatasetChooserList.values
if linker{1} == obj
obj.DatasetCache.ClearTemporaryMemoryCache;
else
linker{1}.ClearMemoryCacheInAllLinkedDatasets;
end
end
end
function NotifyPreviewImageChanged(obj, plugin_name)
notify(obj,'PreviewImageChanged', CoreEventData(plugin_name));
end
function primary_uid = GetPrimaryDatasetUid(obj)
primary_uid = obj.PrimaryDatasetUid;
end
end
end
|
-- Copyright (c) 2013 Radek Micek
module Main
import Common
import D3
data Item = MkItem String String
getKey : Item -> String
getKey (MkItem k _) = k
getValue : Item -> String
getValue (MkItem _ v) = v
main : IO ()
main = do
let items : List Item =
[ MkItem "a" "porch"
, MkItem "d" "larder"
, MkItem "b" "utility room"
, MkItem "f" "shed"
]
arr <- mkArray items
d3 ?? select "ul" >=>
selectAll "li" >=>
bindK arr (\d, i => return $ getKey d) >=>
enter >=>
append "li" >=>
text' (\d, i => return $ getValue d)
let items2 : List Item =
[ MkItem "b" "Utility room"
, MkItem "B" "kitchen"
, MkItem "d" "Larder"
]
arr2 <- mkArray items2
li <- d3 ?? selectAll "ul" >=>
bind' (\d, i => mkArray [arr2]) >=>
selectAll "li" >=>
bindK' (\d, i => return d) (\d, i => return $ getKey d)
li ?? style "background-color" "orange"
li ?? enter >=>
append "li" >=>
style "background-color" "green"
li ?? exit >=>
remove
li ?? text' (\d, i => return $ getValue d)
return ()
|
=== Early antiquarian descriptions ===
|
#include <boost/functional/hash_fwd.hpp>
|
# GFF3 File Format
# ================
module GFF3
import Automa
import Automa.RegExp: @re_str
import BufferedStreams
import Bio.Exceptions: missingerror
import URIParser
importall Bio
include("record.jl")
include("reader.jl")
include("writer.jl")
end
|
lemma tendsto_mult_right: "(f \<longlongrightarrow> l) F \<Longrightarrow> ((\<lambda>x. (f x) * c) \<longlongrightarrow> l * c) F" for c :: "'a::topological_semigroup_mult"
|
[STATEMENT]
lemma has_vector_derivative_scaleR[derivative_intros]:
"(f has_field_derivative f') (at x within s) \<Longrightarrow> (g has_vector_derivative g') (at x within s) \<Longrightarrow>
((\<lambda>x. f x *\<^sub>R g x) has_vector_derivative (f x *\<^sub>R g' + f' *\<^sub>R g x)) (at x within s)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>(f has_real_derivative f') (at x within s); (g has_vector_derivative g') (at x within s)\<rbrakk> \<Longrightarrow> ((\<lambda>x. f x *\<^sub>R g x) has_vector_derivative f x *\<^sub>R g' + f' *\<^sub>R g x) (at x within s)
[PROOF STEP]
unfolding has_real_derivative_iff_has_vector_derivative
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>(f has_vector_derivative f') (at x within s); (g has_vector_derivative g') (at x within s)\<rbrakk> \<Longrightarrow> ((\<lambda>x. f x *\<^sub>R g x) has_vector_derivative f x *\<^sub>R g' + f' *\<^sub>R g x) (at x within s)
[PROOF STEP]
by (rule bounded_bilinear.has_vector_derivative[OF bounded_bilinear_scaleR])
|
State Before: α : Type u_1
β : Type ?u.122274
γ : Type ?u.122277
l✝ m : Language α
a b x : List α
l : Language α
⊢ 1 + l * l∗ = l∗ State After: α : Type u_1
β : Type ?u.122274
γ : Type ?u.122277
l✝ m : Language α
a b x : List α
l : Language α
⊢ (l ^ 0 + ⨆ (i : ℕ), l ^ (i + 1)) = ⨆ (i : ℕ), l ^ i Tactic: simp only [kstar_eq_iSup_pow, mul_iSup, ← pow_succ, ← pow_zero l] State Before: α : Type u_1
β : Type ?u.122274
γ : Type ?u.122277
l✝ m : Language α
a b x : List α
l : Language α
⊢ (l ^ 0 + ⨆ (i : ℕ), l ^ (i + 1)) = ⨆ (i : ℕ), l ^ i State After: no goals Tactic: exact sup_iSup_nat_succ _
|
[STATEMENT]
lemma empty_setinterleaving : "[] setinterleaves ((t, u), A) \<Longrightarrow> t = []"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. [] setinterleaves ((t, u), A) \<Longrightarrow> t = []
[PROOF STEP]
by (cases t, cases u, auto, cases u, simp_all split:if_splits)
|
The first scientific description of the plain maskray was authored by Commonwealth Scientific and Industrial Research Organisation (CSIRO) researcher Peter Last in a 1987 issue of Memoirs of the National Museum of Victoria. The specific name annotata comes from the Latin an ("not") and notata ("marked"), and refers to the ray's coloration. The holotype is a male 21.2 cm (8.3 in) across, caught off Western Australia; several paratypes were also designated. Last tentatively placed the species in the genus Dasyatis, noting that it belonged to the "maskray" species group that also included the bluespotted stingray (then Dasyatis kuhlii). In 2008, Last and William White elevated the kuhlii group to the rank of full genus as Neotrygon, on the basis of morphological and molecular phylogenetic evidence.
|
logbook.prorate = function(lgbk) {
# manual pro-rate:
# fix problems with slip weights
new.slip.sums = tapply2( x=lgbk, indices=c("doc_id"), var="slip_weight_lbs", newvars=c("new.slip.wgt"),
func=function(x){ sum(unique(x)) } )
lgbk = merge( x=lgbk, y=new.slip.sums, by="doc_id", sort=F, all.x=T, all.y=F)
i = which(( lgbk$new.slip.wgt - lgbk$slip_weight_lbs) != 0 )
j = which( duplicated( lgbk$log_efrt_std_info_id ))
bad.data = intersect ( i, j )
if (length(bad.data) > 0 ) lgbk = lgbk[-bad.data ,]
lgbk$slip_weight_lbs = lgbk$new.slip.wgt
lgbk$new.slip.wgt = NULL
# compute rescaling factors that ensure sums are still correct after rescaling
# counts
pr.count = tapply2( x=lgbk, indices=c("doc_id"), var="log_efrt_std_info_id", newvars=c("pr.count"),
func=function(x){ length(unique(x)) } )
# sums
pr.sum = tapply2( x=lgbk, indices=c("doc_id"), var="est_weight_log_lbs", newvars=c("pr.sum"),
func=function(x){ sum(x, na.rm=F) } )
# merge and rescale
pr = merge(x=pr.count, y=pr.sum, by="doc_id", sort=F, all=F)
pr$doc_id = as.character(pr$doc_id )
lgbk = merge(x=lgbk, y=pr, by="doc_id", all.x=T, all.y=F, sort=F)
lgbk$pr.fraction = lgbk$est_weight_log_lbs / lgbk$pr.sum
lgbk$pro_rated_slip_wt_lbs = lgbk$pr.fraction * lgbk$slip_weight_lbs
# second pro-rating of data with missing values/nulls, etc
na.logs = which (!is.finite ( lgbk$pr.sum ) )
if (length( na.logs) > 0 ) {
lgbk$pro_rated_slip_wt_lbs[ na.logs ] = lgbk$slip_weight_lbs[na.logs ] / lgbk$pr.count [ na.logs]
}
bad.usage.codes = which ( lgbk$catch_usage_code!=10 )
if (length(bad.usage.codes) > 0 ) lgbk = lgbk[-bad.usage.codes, ]
return( lgbk )
}
|
// This file is part of libigl, a simple c++ geometry processing library.
//
// Copyright (C) 2013 Alec Jacobson <[email protected]>
//
// This Source Code Form is subject to the terms of the Mozilla Public License
// v. 2.0. If a copy of the MPL was not distributed with this file, You can
// obtain one at http://mozilla.org/MPL/2.0/.
#include "readMESH.h"
template <typename Scalar, typename Index>
IGL_INLINE bool igl::readMESH(
const std::string mesh_file_name,
std::vector<std::vector<Scalar > > & V,
std::vector<std::vector<Index > > & T,
std::vector<std::vector<Index > > & F)
{
using namespace std;
FILE * mesh_file = fopen(mesh_file_name.c_str(),"r");
if(NULL==mesh_file)
{
fprintf(stderr,"IOError: %s could not be opened...",mesh_file_name.c_str());
return false;
}
return igl::readMESH(mesh_file,V,T,F);
}
template <typename Scalar, typename Index>
IGL_INLINE bool igl::readMESH(
FILE * mesh_file,
std::vector<std::vector<Scalar > > & V,
std::vector<std::vector<Index > > & T,
std::vector<std::vector<Index > > & F)
{
using namespace std;
#ifndef LINE_MAX
# define LINE_MAX 2048
#endif
char line[LINE_MAX];
bool still_comments;
V.clear();
T.clear();
F.clear();
// eat comments at beginning of file
still_comments= true;
while(still_comments)
{
if(fgets(line,LINE_MAX,mesh_file) == NULL)
{
fprintf(stderr, "Error: couldn't find start of .mesh file");
fclose(mesh_file);
return false;
}
still_comments = (line[0] == '#' || line[0] == '\n');
}
char str[LINE_MAX];
sscanf(line," %s",str);
// check that first word is MeshVersionFormatted
if(0!=strcmp(str,"MeshVersionFormatted"))
{
fprintf(stderr,
"Error: first word should be MeshVersionFormatted not %s\n",str);
fclose(mesh_file);
return false;
}
int one = -1;
if(2 != sscanf(line,"%s %d",str,&one))
{
// 1 appears on next line?
fscanf(mesh_file," %d",&one);
}
if(one != 1)
{
fprintf(stderr,"Error: second word should be 1 not %d\n",one);
fclose(mesh_file);
return false;
}
// eat comments
still_comments= true;
while(still_comments)
{
fgets(line,LINE_MAX,mesh_file);
still_comments = (line[0] == '#' || line[0] == '\n');
}
sscanf(line," %s",str);
// check that third word is Dimension
if(0!=strcmp(str,"Dimension"))
{
fprintf(stderr,"Error: third word should be Dimension not %s\n",str);
fclose(mesh_file);
return false;
}
int three = -1;
if(2 != sscanf(line,"%s %d",str,&three))
{
// 3 appears on next line?
fscanf(mesh_file," %d",&three);
}
if(three != 3)
{
fprintf(stderr,"Error: only Dimension 3 supported not %d\n",three);
fclose(mesh_file);
return false;
}
// eat comments
still_comments= true;
while(still_comments)
{
fgets(line,LINE_MAX,mesh_file);
still_comments = (line[0] == '#' || line[0] == '\n');
}
sscanf(line," %s",str);
// check that fifth word is Vertices
if(0!=strcmp(str,"Vertices"))
{
fprintf(stderr,"Error: fifth word should be Vertices not %s\n",str);
fclose(mesh_file);
return false;
}
//fgets(line,LINE_MAX,mesh_file);
int number_of_vertices;
if(1 != fscanf(mesh_file," %d",&number_of_vertices) || number_of_vertices > 1000000000)
{
fprintf(stderr,"Error: expecting number of vertices less than 10^9...\n");
fclose(mesh_file);
return false;
}
// allocate space for vertices
V.resize(number_of_vertices,vector<Scalar>(3,0));
int extra;
for(int i = 0;i<number_of_vertices;i++)
{
double x,y,z;
if(4 != fscanf(mesh_file," %lg %lg %lg %d",&x,&y,&z,&extra))
{
fprintf(stderr,"Error: expecting vertex position...\n");
fclose(mesh_file);
return false;
}
V[i][0] = x;
V[i][1] = y;
V[i][2] = z;
}
// eat comments
still_comments= true;
while(still_comments)
{
fgets(line,LINE_MAX,mesh_file);
still_comments = (line[0] == '#' || line[0] == '\n');
}
sscanf(line," %s",str);
// check that sixth word is Triangles
if(0!=strcmp(str,"Triangles"))
{
fprintf(stderr,"Error: sixth word should be Triangles not %s\n",str);
fclose(mesh_file);
return false;
}
int number_of_triangles;
if(1 != fscanf(mesh_file," %d",&number_of_triangles))
{
fprintf(stderr,"Error: expecting number of triangles...\n");
fclose(mesh_file);
return false;
}
// allocate space for triangles
F.resize(number_of_triangles,vector<Index>(3));
// triangle indices
int tri[3];
for(int i = 0;i<number_of_triangles;i++)
{
if(4 != fscanf(mesh_file," %d %d %d %d",&tri[0],&tri[1],&tri[2],&extra))
{
printf("Error: expecting triangle indices...\n");
return false;
}
for(int j = 0;j<3;j++)
{
F[i][j] = tri[j]-1;
}
}
// eat comments
still_comments= true;
while(still_comments)
{
fgets(line,LINE_MAX,mesh_file);
still_comments = (line[0] == '#' || line[0] == '\n');
}
sscanf(line," %s",str);
// check that seventh word is Tetrahedra
if(0!=strcmp(str,"Tetrahedra"))
{
fprintf(stderr,"Error: seventh word should be Tetrahedra not %s\n",str);
fclose(mesh_file);
return false;
}
int number_of_tetrahedra;
if(1 != fscanf(mesh_file," %d",&number_of_tetrahedra))
{
fprintf(stderr,"Error: expecting number of tetrahedra...\n");
fclose(mesh_file);
return false;
}
// allocate space for tetrahedra
T.resize(number_of_tetrahedra,vector<Index>(4));
// tet indices
int a,b,c,d;
for(int i = 0;i<number_of_tetrahedra;i++)
{
if(5 != fscanf(mesh_file," %d %d %d %d %d",&a,&b,&c,&d,&extra))
{
fprintf(stderr,"Error: expecting tetrahedra indices...\n");
fclose(mesh_file);
return false;
}
T[i][0] = a-1;
T[i][1] = b-1;
T[i][2] = c-1;
T[i][3] = d-1;
}
fclose(mesh_file);
return true;
}
#include <Eigen/Core>
#include "list_to_matrix.h"
template <typename DerivedV, typename DerivedF, typename DerivedT>
IGL_INLINE bool igl::readMESH(
const std::string mesh_file_name,
Eigen::PlainObjectBase<DerivedV>& V,
Eigen::PlainObjectBase<DerivedT>& T,
Eigen::PlainObjectBase<DerivedF>& F)
{
using namespace std;
FILE * mesh_file = fopen(mesh_file_name.c_str(),"r");
if(NULL==mesh_file)
{
fprintf(stderr,"IOError: %s could not be opened...",mesh_file_name.c_str());
return false;
}
return readMESH(mesh_file,V,T,F);
}
template <typename DerivedV, typename DerivedF, typename DerivedT>
IGL_INLINE bool igl::readMESH(
FILE * mesh_file,
Eigen::PlainObjectBase<DerivedV>& V,
Eigen::PlainObjectBase<DerivedT>& T,
Eigen::PlainObjectBase<DerivedF>& F)
{
using namespace std;
#ifndef LINE_MAX
# define LINE_MAX 2048
#endif
char line[LINE_MAX];
bool still_comments;
// eat comments at beginning of file
still_comments= true;
while(still_comments)
{
if(fgets(line,LINE_MAX,mesh_file) == NULL)
{
fprintf(stderr, "Error: couldn't find start of .mesh file");
fclose(mesh_file);
return false;
}
still_comments = (line[0] == '#' || line[0] == '\n');
}
char str[LINE_MAX];
sscanf(line," %s",str);
// check that first word is MeshVersionFormatted
if(0!=strcmp(str,"MeshVersionFormatted"))
{
fprintf(stderr,
"Error: first word should be MeshVersionFormatted not %s\n",str);
fclose(mesh_file);
return false;
}
int one = -1;
if(2 != sscanf(line,"%s %d",str,&one))
{
// 1 appears on next line?
fscanf(mesh_file," %d",&one);
}
if(one != 1)
{
fprintf(stderr,"Error: second word should be 1 not %d\n",one);
fclose(mesh_file);
return false;
}
// eat comments
still_comments= true;
while(still_comments)
{
fgets(line,LINE_MAX,mesh_file);
still_comments = (line[0] == '#' || line[0] == '\n');
}
sscanf(line," %s",str);
// check that third word is Dimension
if(0!=strcmp(str,"Dimension"))
{
fprintf(stderr,"Error: third word should be Dimension not %s\n",str);
fclose(mesh_file);
return false;
}
int three = -1;
if(2 != sscanf(line,"%s %d",str,&three))
{
// 3 appears on next line?
fscanf(mesh_file," %d",&three);
}
if(three != 3)
{
fprintf(stderr,"Error: only Dimension 3 supported not %d\n",three);
fclose(mesh_file);
return false;
}
// eat comments
still_comments= true;
while(still_comments)
{
fgets(line,LINE_MAX,mesh_file);
still_comments = (line[0] == '#' || line[0] == '\n');
}
sscanf(line," %s",str);
// check that fifth word is Vertices
if(0!=strcmp(str,"Vertices"))
{
fprintf(stderr,"Error: fifth word should be Vertices not %s\n",str);
fclose(mesh_file);
return false;
}
//fgets(line,LINE_MAX,mesh_file);
int number_of_vertices;
if(1 != fscanf(mesh_file," %d",&number_of_vertices) || number_of_vertices > 1000000000)
{
fprintf(stderr,"Error: expecting number of vertices less than 10^9...\n");
fclose(mesh_file);
return false;
}
// allocate space for vertices
V.resize(number_of_vertices,3);
int extra;
for(int i = 0;i<number_of_vertices;i++)
{
double x,y,z;
if(4 != fscanf(mesh_file," %lg %lg %lg %d",&x,&y,&z,&extra))
{
fprintf(stderr,"Error: expecting vertex position...\n");
fclose(mesh_file);
return false;
}
V(i,0) = x;
V(i,1) = y;
V(i,2) = z;
}
// eat comments
still_comments= true;
while(still_comments)
{
fgets(line,LINE_MAX,mesh_file);
still_comments = (line[0] == '#' || line[0] == '\n');
}
sscanf(line," %s",str);
// check that sixth word is Triangles
if(0!=strcmp(str,"Triangles"))
{
fprintf(stderr,"Error: sixth word should be Triangles not %s\n",str);
fclose(mesh_file);
return false;
}
int number_of_triangles;
if(1 != fscanf(mesh_file," %d",&number_of_triangles))
{
fprintf(stderr,"Error: expecting number of triangles...\n");
fclose(mesh_file);
return false;
}
// allocate space for triangles
F.resize(number_of_triangles,3);
// triangle indices
int tri[3];
for(int i = 0;i<number_of_triangles;i++)
{
if(4 != fscanf(mesh_file," %d %d %d %d",&tri[0],&tri[1],&tri[2],&extra))
{
printf("Error: expecting triangle indices...\n");
return false;
}
for(int j = 0;j<3;j++)
{
F(i,j) = tri[j]-1;
}
}
// eat comments
still_comments= true;
while(still_comments)
{
fgets(line,LINE_MAX,mesh_file);
still_comments = (line[0] == '#' || line[0] == '\n');
}
sscanf(line," %s",str);
// check that seventh word is Tetrahedra
if(0!=strcmp(str,"Tetrahedra"))
{
fprintf(stderr,"Error: seventh word should be Tetrahedra not %s\n",str);
fclose(mesh_file);
return false;
}
int number_of_tetrahedra;
if(1 != fscanf(mesh_file," %d",&number_of_tetrahedra))
{
fprintf(stderr,"Error: expecting number of tetrahedra...\n");
fclose(mesh_file);
return false;
}
// allocate space for tetrahedra
T.resize(number_of_tetrahedra,4);
// tet indices
int a,b,c,d;
for(int i = 0;i<number_of_tetrahedra;i++)
{
if(5 != fscanf(mesh_file," %d %d %d %d %d",&a,&b,&c,&d,&extra))
{
fprintf(stderr,"Error: expecting tetrahedra indices...\n");
fclose(mesh_file);
return false;
}
T(i,0) = a-1;
T(i,1) = b-1;
T(i,2) = c-1;
T(i,3) = d-1;
}
fclose(mesh_file);
return true;
}
//{
// std::vector<std::vector<double> > vV,vT,vF;
// bool success = igl::readMESH(mesh_file_name,vV,vT,vF);
// if(!success)
// {
// // readMESH already printed error message to std err
// return false;
// }
// bool V_rect = igl::list_to_matrix(vV,V);
// if(!V_rect)
// {
// // igl::list_to_matrix(vV,V) already printed error message to std err
// return false;
// }
// bool T_rect = igl::list_to_matrix(vT,T);
// if(!T_rect)
// {
// // igl::list_to_matrix(vT,T) already printed error message to std err
// return false;
// }
// bool F_rect = igl::list_to_matrix(vF,F);
// if(!F_rect)
// {
// // igl::list_to_matrix(vF,F) already printed error message to std err
// return false;
// }
// assert(V.cols() == 3);
// assert(T.cols() == 4);
// assert(F.cols() == 3);
// return true;
//}
#ifdef IGL_STATIC_LIBRARY
// Explicit template instantiation
// generated by autoexplicit.sh
template bool igl::readMESH<Eigen::Matrix<float, -1, 3, 0, -1, 3>, Eigen::Matrix<int, -1, 3, 0, -1, 3>, Eigen::Matrix<int, -1, -1, 0, -1, -1> >(FILE*, Eigen::PlainObjectBase<Eigen::Matrix<float, -1, 3, 0, -1, 3> >&, Eigen::PlainObjectBase<Eigen::Matrix<int, -1, -1, 0, -1, -1> >&, Eigen::PlainObjectBase<Eigen::Matrix<int, -1, 3, 0, -1, 3> >&);
// generated by autoexplicit.sh
template bool igl::readMESH<Eigen::Matrix<float, -1, 3, 1, -1, 3>, Eigen::Matrix<int, -1, 3, 1, -1, 3>, Eigen::Matrix<int, -1, -1, 0, -1, -1> >(FILE*, Eigen::PlainObjectBase<Eigen::Matrix<float, -1, 3, 1, -1, 3> >&, Eigen::PlainObjectBase<Eigen::Matrix<int, -1, -1, 0, -1, -1> >&, Eigen::PlainObjectBase<Eigen::Matrix<int, -1, 3, 1, -1, 3> >&);
// generated by autoexplicit.sh
template bool igl::readMESH<Eigen::Matrix<double, -1, -1, 1, -1, -1>, Eigen::Matrix<int, -1, -1, 0, -1, -1>, Eigen::Matrix<int, -1, -1, 0, -1, -1> >(std::basic_string<char, std::char_traits<char>, std::allocator<char> >, Eigen::PlainObjectBase<Eigen::Matrix<double, -1, -1, 1, -1, -1> >&, Eigen::PlainObjectBase<Eigen::Matrix<int, -1, -1, 0, -1, -1> >&, Eigen::PlainObjectBase<Eigen::Matrix<int, -1, -1, 0, -1, -1> >&);
// generated by autoexplicit.sh
template bool igl::readMESH<Eigen::Matrix<float, -1, 3, 1, -1, 3>, Eigen::Matrix<unsigned int, -1, 3, 1, -1, 3>, Eigen::Matrix<int, -1, -1, 0, -1, -1> >(std::basic_string<char, std::char_traits<char>, std::allocator<char> >, Eigen::PlainObjectBase<Eigen::Matrix<float, -1, 3, 1, -1, 3> >&, Eigen::PlainObjectBase<Eigen::Matrix<int, -1, -1, 0, -1, -1> >&, Eigen::PlainObjectBase<Eigen::Matrix<unsigned int, -1, 3, 1, -1, 3> >&);
// generated by autoexplicit.sh
template bool igl::readMESH<Eigen::Matrix<double, -1, 3, 0, -1, 3>, Eigen::Matrix<int, -1, 3, 0, -1, 3>, Eigen::Matrix<int, -1, -1, 0, -1, -1> >(std::basic_string<char, std::char_traits<char>, std::allocator<char> >, Eigen::PlainObjectBase<Eigen::Matrix<double, -1, 3, 0, -1, 3> >&, Eigen::PlainObjectBase<Eigen::Matrix<int, -1, -1, 0, -1, -1> >&, Eigen::PlainObjectBase<Eigen::Matrix<int, -1, 3, 0, -1, 3> >&);
// generated by autoexplicit.sh
template bool igl::readMESH<Eigen::Matrix<double, -1, -1, 0, -1, -1>, Eigen::Matrix<int, -1, -1, 0, -1, -1>, Eigen::Matrix<int, -1, -1, 0, -1, -1> >(std::basic_string<char, std::char_traits<char>, std::allocator<char> >, Eigen::PlainObjectBase<Eigen::Matrix<double, -1, -1, 0, -1, -1> >&, Eigen::PlainObjectBase<Eigen::Matrix<int, -1, -1, 0, -1, -1> >&, Eigen::PlainObjectBase<Eigen::Matrix<int, -1, -1, 0, -1, -1> >&);
template bool igl::readMESH<Eigen::Matrix<double, -1, -1, 0, -1, -1>, Eigen::Matrix<int, -1, -1, 0, -1, -1>, Eigen::Matrix<double, -1, -1, 0, -1, -1> >(std::basic_string<char, std::char_traits<char>, std::allocator<char> >, Eigen::PlainObjectBase<Eigen::Matrix<double, -1, -1, 0, -1, -1> >&, Eigen::PlainObjectBase<Eigen::Matrix<double, -1, -1, 0, -1, -1> >&, Eigen::PlainObjectBase<Eigen::Matrix<int, -1, -1, 0, -1, -1> >&);
template bool igl::readMESH<double, int>(std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<std::vector<double, std::allocator<double> >, std::allocator<std::vector<double, std::allocator<double> > > >&, std::vector<std::vector<int, std::allocator<int> >, std::allocator<std::vector<int, std::allocator<int> > > >&, std::vector<std::vector<int, std::allocator<int> >, std::allocator<std::vector<int, std::allocator<int> > > >&);
template bool igl::readMESH<Eigen::Matrix<double, -1, 3, 1, -1, 3>, Eigen::Matrix<int, -1, 3, 1, -1, 3>, Eigen::Matrix<int, -1, -1, 0, -1, -1> >(std::basic_string<char, std::char_traits<char>, std::allocator<char> >, Eigen::PlainObjectBase<Eigen::Matrix<double, -1, 3, 1, -1, 3> >&, Eigen::PlainObjectBase<Eigen::Matrix<int, -1, -1, 0, -1, -1> >&, Eigen::PlainObjectBase<Eigen::Matrix<int, -1, 3, 1, -1, 3> >&);
#endif
|
import numpy as np
import pytest
from scipy.stats import (bootstrap, BootstrapDegenerateDistributionWarning,
monte_carlo_test)
from numpy.testing import assert_allclose, assert_equal, suppress_warnings
from scipy import stats
from .. import _bootstrap as _bootstrap
from scipy._lib._util import rng_integers
from scipy.optimize import root
def test_bootstrap_iv():
message = "`data` must be a sequence of samples."
with pytest.raises(ValueError, match=message):
bootstrap(1, np.mean)
message = "`data` must contain at least one sample."
with pytest.raises(ValueError, match=message):
bootstrap(tuple(), np.mean)
message = "each sample in `data` must contain two or more observations..."
with pytest.raises(ValueError, match=message):
bootstrap(([1, 2, 3], [1]), np.mean)
message = ("When `paired is True`, all samples must have the same length ")
with pytest.raises(ValueError, match=message):
bootstrap(([1, 2, 3], [1, 2, 3, 4]), np.mean, paired=True)
message = "`vectorized` must be `True` or `False`."
with pytest.raises(ValueError, match=message):
bootstrap(1, np.mean, vectorized='ekki')
message = "`axis` must be an integer."
with pytest.raises(ValueError, match=message):
bootstrap(([1, 2, 3],), np.mean, axis=1.5)
message = "could not convert string to float"
with pytest.raises(ValueError, match=message):
bootstrap(([1, 2, 3],), np.mean, confidence_level='ni')
message = "`n_resamples` must be a positive integer."
with pytest.raises(ValueError, match=message):
bootstrap(([1, 2, 3],), np.mean, n_resamples=-1000)
message = "`n_resamples` must be a positive integer."
with pytest.raises(ValueError, match=message):
bootstrap(([1, 2, 3],), np.mean, n_resamples=1000.5)
message = "`batch` must be a positive integer or None."
with pytest.raises(ValueError, match=message):
bootstrap(([1, 2, 3],), np.mean, batch=-1000)
message = "`batch` must be a positive integer or None."
with pytest.raises(ValueError, match=message):
bootstrap(([1, 2, 3],), np.mean, batch=1000.5)
message = "`method` must be in"
with pytest.raises(ValueError, match=message):
bootstrap(([1, 2, 3],), np.mean, method='ekki')
message = "`method = 'BCa' is only available for one-sample statistics"
def statistic(x, y, axis):
mean1 = np.mean(x, axis)
mean2 = np.mean(y, axis)
return mean1 - mean2
with pytest.raises(ValueError, match=message):
bootstrap(([.1, .2, .3], [.1, .2, .3]), statistic, method='BCa')
message = "'herring' cannot be used to seed a"
with pytest.raises(ValueError, match=message):
bootstrap(([1, 2, 3],), np.mean, random_state='herring')
@pytest.mark.parametrize("method", ['basic', 'percentile', 'BCa'])
@pytest.mark.parametrize("axis", [0, 1, 2])
def test_bootstrap_batch(method, axis):
# for one-sample statistics, batch size shouldn't affect the result
np.random.seed(0)
x = np.random.rand(10, 11, 12)
res1 = bootstrap((x,), np.mean, batch=None, method=method,
random_state=0, axis=axis, n_resamples=100)
res2 = bootstrap((x,), np.mean, batch=10, method=method,
random_state=0, axis=axis, n_resamples=100)
assert_equal(res2.confidence_interval.low, res1.confidence_interval.low)
assert_equal(res2.confidence_interval.high, res1.confidence_interval.high)
assert_equal(res2.standard_error, res1.standard_error)
@pytest.mark.parametrize("method", ['basic', 'percentile', 'BCa'])
def test_bootstrap_paired(method):
# test that `paired` works as expected
np.random.seed(0)
n = 100
x = np.random.rand(n)
y = np.random.rand(n)
def my_statistic(x, y, axis=-1):
return ((x-y)**2).mean(axis=axis)
def my_paired_statistic(i, axis=-1):
a = x[i]
b = y[i]
res = my_statistic(a, b)
return res
i = np.arange(len(x))
res1 = bootstrap((i,), my_paired_statistic, random_state=0)
res2 = bootstrap((x, y), my_statistic, paired=True, random_state=0)
assert_allclose(res1.confidence_interval, res2.confidence_interval)
assert_allclose(res1.standard_error, res2.standard_error)
@pytest.mark.parametrize("method", ['basic', 'percentile', 'BCa'])
@pytest.mark.parametrize("axis", [0, 1, 2])
@pytest.mark.parametrize("paired", [True, False])
def test_bootstrap_vectorized(method, axis, paired):
# test that paired is vectorized as expected: when samples are tiled,
# CI and standard_error of each axis-slice is the same as those of the
# original 1d sample
if not paired and method == 'BCa':
# should re-assess when BCa is extended
pytest.xfail(reason="BCa currently for 1-sample statistics only")
np.random.seed(0)
def my_statistic(x, y, z, axis=-1):
return x.mean(axis=axis) + y.mean(axis=axis) + z.mean(axis=axis)
shape = 10, 11, 12
n_samples = shape[axis]
x = np.random.rand(n_samples)
y = np.random.rand(n_samples)
z = np.random.rand(n_samples)
res1 = bootstrap((x, y, z), my_statistic, paired=paired, method=method,
random_state=0, axis=0, n_resamples=100)
reshape = [1, 1, 1]
reshape[axis] = n_samples
x = np.broadcast_to(x.reshape(reshape), shape)
y = np.broadcast_to(y.reshape(reshape), shape)
z = np.broadcast_to(z.reshape(reshape), shape)
res2 = bootstrap((x, y, z), my_statistic, paired=paired, method=method,
random_state=0, axis=axis, n_resamples=100)
assert_allclose(res2.confidence_interval.low,
res1.confidence_interval.low)
assert_allclose(res2.confidence_interval.high,
res1.confidence_interval.high)
assert_allclose(res2.standard_error, res1.standard_error)
result_shape = list(shape)
result_shape.pop(axis)
assert_equal(res2.confidence_interval.low.shape, result_shape)
assert_equal(res2.confidence_interval.high.shape, result_shape)
assert_equal(res2.standard_error.shape, result_shape)
@pytest.mark.parametrize("method", ['basic', 'percentile', 'BCa'])
def test_bootstrap_against_theory(method):
# based on https://www.statology.org/confidence-intervals-python/
data = stats.norm.rvs(loc=5, scale=2, size=5000, random_state=0)
alpha = 0.95
dist = stats.t(df=len(data)-1, loc=np.mean(data), scale=stats.sem(data))
expected_interval = dist.interval(confidence=alpha)
expected_se = dist.std()
res = bootstrap((data,), np.mean, n_resamples=5000,
confidence_level=alpha, method=method,
random_state=0)
assert_allclose(res.confidence_interval, expected_interval, rtol=5e-4)
assert_allclose(res.standard_error, expected_se, atol=3e-4)
tests_R = {"basic": (23.77, 79.12),
"percentile": (28.86, 84.21),
"BCa": (32.31, 91.43)}
@pytest.mark.parametrize("method, expected", tests_R.items())
def test_bootstrap_against_R(method, expected):
# Compare against R's "boot" library
# library(boot)
# stat <- function (x, a) {
# mean(x[a])
# }
# x <- c(10, 12, 12.5, 12.5, 13.9, 15, 21, 22,
# 23, 34, 50, 81, 89, 121, 134, 213)
# # Use a large value so we get a few significant digits for the CI.
# n = 1000000
# bootresult = boot(x, stat, n)
# result <- boot.ci(bootresult)
# print(result)
x = np.array([10, 12, 12.5, 12.5, 13.9, 15, 21, 22,
23, 34, 50, 81, 89, 121, 134, 213])
res = bootstrap((x,), np.mean, n_resamples=1000000, method=method,
random_state=0)
assert_allclose(res.confidence_interval, expected, rtol=0.005)
tests_against_itself_1samp = {"basic": 1780,
"percentile": 1784,
"BCa": 1784}
@pytest.mark.parametrize("method, expected",
tests_against_itself_1samp.items())
def test_bootstrap_against_itself_1samp(method, expected):
# The expected values in this test were generated using bootstrap
# to check for unintended changes in behavior. The test also makes sure
# that bootstrap works with multi-sample statistics and that the
# `axis` argument works as expected / function is vectorized.
np.random.seed(0)
n = 100 # size of sample
n_resamples = 999 # number of bootstrap resamples used to form each CI
confidence_level = 0.9
# The true mean is 5
dist = stats.norm(loc=5, scale=1)
stat_true = dist.mean()
# Do the same thing 2000 times. (The code is fully vectorized.)
n_replications = 2000
data = dist.rvs(size=(n_replications, n))
res = bootstrap((data,),
statistic=np.mean,
confidence_level=confidence_level,
n_resamples=n_resamples,
batch=50,
method=method,
axis=-1)
ci = res.confidence_interval
# ci contains vectors of lower and upper confidence interval bounds
ci_contains_true = np.sum((ci[0] < stat_true) & (stat_true < ci[1]))
assert ci_contains_true == expected
# ci_contains_true is not inconsistent with confidence_level
pvalue = stats.binomtest(ci_contains_true, n_replications,
confidence_level).pvalue
assert pvalue > 0.1
tests_against_itself_2samp = {"basic": 892,
"percentile": 890}
@pytest.mark.parametrize("method, expected",
tests_against_itself_2samp.items())
def test_bootstrap_against_itself_2samp(method, expected):
# The expected values in this test were generated using bootstrap
# to check for unintended changes in behavior. The test also makes sure
# that bootstrap works with multi-sample statistics and that the
# `axis` argument works as expected / function is vectorized.
np.random.seed(0)
n1 = 100 # size of sample 1
n2 = 120 # size of sample 2
n_resamples = 999 # number of bootstrap resamples used to form each CI
confidence_level = 0.9
# The statistic we're interested in is the difference in means
def my_stat(data1, data2, axis=-1):
mean1 = np.mean(data1, axis=axis)
mean2 = np.mean(data2, axis=axis)
return mean1 - mean2
# The true difference in the means is -0.1
dist1 = stats.norm(loc=0, scale=1)
dist2 = stats.norm(loc=0.1, scale=1)
stat_true = dist1.mean() - dist2.mean()
# Do the same thing 1000 times. (The code is fully vectorized.)
n_replications = 1000
data1 = dist1.rvs(size=(n_replications, n1))
data2 = dist2.rvs(size=(n_replications, n2))
res = bootstrap((data1, data2),
statistic=my_stat,
confidence_level=confidence_level,
n_resamples=n_resamples,
batch=50,
method=method,
axis=-1)
ci = res.confidence_interval
# ci contains vectors of lower and upper confidence interval bounds
ci_contains_true = np.sum((ci[0] < stat_true) & (stat_true < ci[1]))
assert ci_contains_true == expected
# ci_contains_true is not inconsistent with confidence_level
pvalue = stats.binomtest(ci_contains_true, n_replications,
confidence_level).pvalue
assert pvalue > 0.1
@pytest.mark.parametrize("method", ["basic", "percentile"])
@pytest.mark.parametrize("axis", [0, 1])
def test_bootstrap_vectorized_3samp(method, axis):
def statistic(*data, axis=0):
# an arbitrary, vectorized statistic
return sum((sample.mean(axis) for sample in data))
def statistic_1d(*data):
# the same statistic, not vectorized
for sample in data:
assert sample.ndim == 1
return statistic(*data, axis=0)
np.random.seed(0)
x = np.random.rand(4, 5)
y = np.random.rand(4, 5)
z = np.random.rand(4, 5)
res1 = bootstrap((x, y, z), statistic, vectorized=True,
axis=axis, n_resamples=100, method=method, random_state=0)
res2 = bootstrap((x, y, z), statistic_1d, vectorized=False,
axis=axis, n_resamples=100, method=method, random_state=0)
assert_allclose(res1.confidence_interval, res2.confidence_interval)
assert_allclose(res1.standard_error, res2.standard_error)
@pytest.mark.xfail_on_32bit("Failure is not concerning; see gh-14107")
@pytest.mark.parametrize("method", ["basic", "percentile", "BCa"])
@pytest.mark.parametrize("axis", [0, 1])
def test_bootstrap_vectorized_1samp(method, axis):
def statistic(x, axis=0):
# an arbitrary, vectorized statistic
return x.mean(axis=axis)
def statistic_1d(x):
# the same statistic, not vectorized
assert x.ndim == 1
return statistic(x, axis=0)
np.random.seed(0)
x = np.random.rand(4, 5)
res1 = bootstrap((x,), statistic, vectorized=True, axis=axis,
n_resamples=100, batch=None, method=method,
random_state=0)
res2 = bootstrap((x,), statistic_1d, vectorized=False, axis=axis,
n_resamples=100, batch=10, method=method,
random_state=0)
assert_allclose(res1.confidence_interval, res2.confidence_interval)
assert_allclose(res1.standard_error, res2.standard_error)
@pytest.mark.parametrize("method", ["basic", "percentile", "BCa"])
def test_bootstrap_degenerate(method):
data = 35 * [10000.]
if method == "BCa":
with np.errstate(invalid='ignore'):
with pytest.warns(BootstrapDegenerateDistributionWarning):
res = bootstrap([data, ], np.mean, method=method)
assert_equal(res.confidence_interval, (np.nan, np.nan))
else:
res = bootstrap([data, ], np.mean, method=method)
assert_equal(res.confidence_interval, (10000., 10000.))
assert_equal(res.standard_error, 0)
@pytest.mark.parametrize("method", ["basic", "percentile", "BCa"])
def test_bootstrap_gh15678(method):
# Check that gh-15678 is fixed: when statistic function returned a Python
# float, method="BCa" failed when trying to add a dimension to the float
rng = np.random.default_rng(354645618886684)
dist = stats.norm(loc=2, scale=4)
data = dist.rvs(size=100, random_state=rng)
data = (data,)
res = bootstrap(data, stats.skew, method=method, n_resamples=100,
random_state=np.random.default_rng(9563))
# this always worked because np.apply_along_axis returns NumPy data type
ref = bootstrap(data, stats.skew, method=method, n_resamples=100,
random_state=np.random.default_rng(9563), vectorized=False)
assert_allclose(res.confidence_interval, ref.confidence_interval)
assert_allclose(res.standard_error, ref.standard_error)
assert isinstance(res.standard_error, np.float64)
def test_jackknife_resample():
shape = 3, 4, 5, 6
np.random.seed(0)
x = np.random.rand(*shape)
y = next(_bootstrap._jackknife_resample(x))
for i in range(shape[-1]):
# each resample is indexed along second to last axis
# (last axis is the one the statistic will be taken over / consumed)
slc = y[..., i, :]
expected = np.delete(x, i, axis=-1)
assert np.array_equal(slc, expected)
y2 = np.concatenate(list(_bootstrap._jackknife_resample(x, batch=2)),
axis=-2)
assert np.array_equal(y2, y)
@pytest.mark.parametrize("rng_name", ["RandomState", "default_rng"])
def test_bootstrap_resample(rng_name):
rng = getattr(np.random, rng_name, None)
if rng is None:
pytest.skip(f"{rng_name} not available.")
rng1 = rng(0)
rng2 = rng(0)
n_resamples = 10
shape = 3, 4, 5, 6
np.random.seed(0)
x = np.random.rand(*shape)
y = _bootstrap._bootstrap_resample(x, n_resamples, random_state=rng1)
for i in range(n_resamples):
# each resample is indexed along second to last axis
# (last axis is the one the statistic will be taken over / consumed)
slc = y[..., i, :]
js = rng_integers(rng2, 0, shape[-1], shape[-1])
expected = x[..., js]
assert np.array_equal(slc, expected)
@pytest.mark.parametrize("score", [0, 0.5, 1])
@pytest.mark.parametrize("axis", [0, 1, 2])
def test_percentile_of_score(score, axis):
shape = 10, 20, 30
np.random.seed(0)
x = np.random.rand(*shape)
p = _bootstrap._percentile_of_score(x, score, axis=-1)
def vectorized_pos(a, score, axis):
return np.apply_along_axis(stats.percentileofscore, axis, a, score)
p2 = vectorized_pos(x, score, axis=-1)/100
assert_allclose(p, p2, 1e-15)
def test_percentile_along_axis():
# the difference between _percentile_along_axis and np.percentile is that
# np.percentile gets _all_ the qs for each axis slice, whereas
# _percentile_along_axis gets the q corresponding with each axis slice
shape = 10, 20
np.random.seed(0)
x = np.random.rand(*shape)
q = np.random.rand(*shape[:-1]) * 100
y = _bootstrap._percentile_along_axis(x, q)
for i in range(shape[0]):
res = y[i]
expected = np.percentile(x[i], q[i], axis=-1)
assert_allclose(res, expected, 1e-15)
@pytest.mark.parametrize("axis", [0, 1, 2])
def test_vectorize_statistic(axis):
# test that _vectorize_statistic vectorizes a statistic along `axis`
def statistic(*data, axis):
# an arbitrary, vectorized statistic
return sum((sample.mean(axis) for sample in data))
def statistic_1d(*data):
# the same statistic, not vectorized
for sample in data:
assert sample.ndim == 1
return statistic(*data, axis=0)
# vectorize the non-vectorized statistic
statistic2 = _bootstrap._vectorize_statistic(statistic_1d)
np.random.seed(0)
x = np.random.rand(4, 5, 6)
y = np.random.rand(4, 1, 6)
z = np.random.rand(1, 5, 6)
res1 = statistic(x, y, z, axis=axis)
res2 = statistic2(x, y, z, axis=axis)
assert_allclose(res1, res2)
# --- Test Monte Carlo Hypothesis Test --- #
class TestMonteCarloHypothesisTest:
atol = 2.5e-2 # for comparing p-value
def rvs(self, rvs_in, rs):
return lambda *args, **kwds: rvs_in(*args, random_state=rs, **kwds)
def test_input_validation(self):
# test that the appropriate error messages are raised for invalid input
def stat(x):
return stats.skewtest(x).statistic  # never actually called by these checks
message = "`axis` must be an integer."
with pytest.raises(ValueError, match=message):
monte_carlo_test([1, 2, 3], stats.norm.rvs, stat, axis=1.5)
message = "`vectorized` must be `True` or `False`."
with pytest.raises(ValueError, match=message):
monte_carlo_test([1, 2, 3], stats.norm.rvs, stat, vectorized=1.5)
message = "`rvs` must be callable."
with pytest.raises(TypeError, match=message):
monte_carlo_test([1, 2, 3], None, stat)
message = "`statistic` must be callable."
with pytest.raises(TypeError, match=message):
monte_carlo_test([1, 2, 3], stats.norm.rvs, None)
message = "`n_resamples` must be a positive integer."
with pytest.raises(ValueError, match=message):
monte_carlo_test([1, 2, 3], stats.norm.rvs, stat,
n_resamples=-1000)
message = "`n_resamples` must be a positive integer."
with pytest.raises(ValueError, match=message):
monte_carlo_test([1, 2, 3], stats.norm.rvs, stat,
n_resamples=1000.5)
message = "`batch` must be a positive integer or None."
with pytest.raises(ValueError, match=message):
monte_carlo_test([1, 2, 3], stats.norm.rvs, stat, batch=-1000)
message = "`batch` must be a positive integer or None."
with pytest.raises(ValueError, match=message):
monte_carlo_test([1, 2, 3], stats.norm.rvs, stat, batch=1000.5)
message = "`alternative` must be in..."
with pytest.raises(ValueError, match=message):
monte_carlo_test([1, 2, 3], stats.norm.rvs, stat,
alternative='ekki')
def test_batch(self):
# make sure that the `batch` parameter is respected by checking the
# maximum batch size provided in calls to `statistic`
rng = np.random.default_rng(23492340193)
x = rng.random(10)
def statistic(x, axis):
batch_size = 1 if x.ndim == 1 else len(x)
statistic.batch_size = max(batch_size, statistic.batch_size)
statistic.counter += 1
return stats.skewtest(x, axis=axis).statistic
statistic.counter = 0
statistic.batch_size = 0
kwds = {'sample': x, 'statistic': statistic,
'n_resamples': 1000, 'vectorized': True}
kwds['rvs'] = self.rvs(stats.norm.rvs, np.random.default_rng(32842398))
res1 = monte_carlo_test(batch=1, **kwds)
assert_equal(statistic.counter, 1001)
assert_equal(statistic.batch_size, 1)
kwds['rvs'] = self.rvs(stats.norm.rvs, np.random.default_rng(32842398))
statistic.counter = 0
res2 = monte_carlo_test(batch=50, **kwds)
assert_equal(statistic.counter, 21)
assert_equal(statistic.batch_size, 50)
kwds['rvs'] = self.rvs(stats.norm.rvs, np.random.default_rng(32842398))
statistic.counter = 0
res3 = monte_carlo_test(**kwds)
assert_equal(statistic.counter, 2)
assert_equal(statistic.batch_size, 1000)
assert_equal(res1.pvalue, res3.pvalue)
assert_equal(res2.pvalue, res3.pvalue)
@pytest.mark.parametrize('axis', range(-3, 3))
def test_axis(self, axis):
# test that Nd-array samples are handled correctly for valid values
# of the `axis` parameter
rng = np.random.default_rng(2389234)
norm_rvs = self.rvs(stats.norm.rvs, rng)
size = [2, 3, 4]
size[axis] = 100
x = norm_rvs(size=size)
expected = stats.skewtest(x, axis=axis)
def statistic(x, axis):
return stats.skewtest(x, axis=axis).statistic
res = monte_carlo_test(x, norm_rvs, statistic, vectorized=True,
n_resamples=20000, axis=axis)
assert_allclose(res.statistic, expected.statistic)
assert_allclose(res.pvalue, expected.pvalue, atol=self.atol)
@pytest.mark.parametrize('alternative', ("less", "greater"))
@pytest.mark.parametrize('a', np.linspace(-0.5, 0.5, 5)) # skewness
def test_against_ks_1samp(self, alternative, a):
# test that monte_carlo_test can reproduce pvalue of ks_1samp
rng = np.random.default_rng(65723433)
x = stats.skewnorm.rvs(a=a, size=30, random_state=rng)
expected = stats.ks_1samp(x, stats.norm.cdf, alternative=alternative)
def statistic1d(x):
return stats.ks_1samp(x, stats.norm.cdf, mode='asymp',
alternative=alternative).statistic
norm_rvs = self.rvs(stats.norm.rvs, rng)
res = monte_carlo_test(x, norm_rvs, statistic1d,
n_resamples=1000, vectorized=False,
alternative=alternative)
assert_allclose(res.statistic, expected.statistic)
if alternative == 'greater':
assert_allclose(res.pvalue, expected.pvalue, atol=self.atol)
elif alternative == 'less':
assert_allclose(1-res.pvalue, expected.pvalue, atol=self.atol)
@pytest.mark.parametrize('hypotest', (stats.skewtest, stats.kurtosistest))
@pytest.mark.parametrize('alternative', ("less", "greater", "two-sided"))
@pytest.mark.parametrize('a', np.linspace(-2, 2, 5)) # skewness
def test_against_normality_tests(self, hypotest, alternative, a):
# test that monte_carlo_test can reproduce pvalue of normality tests
rng = np.random.default_rng(85723405)
x = stats.skewnorm.rvs(a=a, size=150, random_state=rng)
expected = hypotest(x, alternative=alternative)
def statistic(x, axis):
return hypotest(x, axis=axis).statistic
norm_rvs = self.rvs(stats.norm.rvs, rng)
res = monte_carlo_test(x, norm_rvs, statistic, vectorized=True,
alternative=alternative)
assert_allclose(res.statistic, expected.statistic)
assert_allclose(res.pvalue, expected.pvalue, atol=self.atol)
@pytest.mark.parametrize('a', np.arange(-2, 3)) # skewness parameter
def test_against_normaltest(self, a):
# test that monte_carlo_test can reproduce pvalue of normaltest
rng = np.random.default_rng(12340513)
x = stats.skewnorm.rvs(a=a, size=150, random_state=rng)
expected = stats.normaltest(x)
def statistic(x, axis):
return stats.normaltest(x, axis=axis).statistic
norm_rvs = self.rvs(stats.norm.rvs, rng)
res = monte_carlo_test(x, norm_rvs, statistic, vectorized=True,
alternative='greater')
assert_allclose(res.statistic, expected.statistic)
assert_allclose(res.pvalue, expected.pvalue, atol=self.atol)
@pytest.mark.parametrize('a', np.linspace(-0.5, 0.5, 5)) # skewness
def test_against_cramervonmises(self, a):
# test that monte_carlo_test can reproduce pvalue of cramervonmises
rng = np.random.default_rng(234874135)
x = stats.skewnorm.rvs(a=a, size=30, random_state=rng)
expected = stats.cramervonmises(x, stats.norm.cdf)
def statistic1d(x):
return stats.cramervonmises(x, stats.norm.cdf).statistic
norm_rvs = self.rvs(stats.norm.rvs, rng)
res = monte_carlo_test(x, norm_rvs, statistic1d,
n_resamples=1000, vectorized=False,
alternative='greater')
assert_allclose(res.statistic, expected.statistic)
assert_allclose(res.pvalue, expected.pvalue, atol=self.atol)
@pytest.mark.parametrize('dist_name', ('norm', 'logistic'))
@pytest.mark.parametrize('i', range(5))
def test_against_anderson(self, dist_name, i):
# test that monte_carlo_test can reproduce results of `anderson`. Note:
# `anderson` does not provide a p-value; it provides a list of
# significance levels and the associated critical value of the test
# statistic. `i` used to index this list.
# find the skewness for which the sample statistic matches one of the
# critical values provided by `stats.anderson`
def fun(a):
rng = np.random.default_rng(394295467)
x = stats.tukeylambda.rvs(a, size=100, random_state=rng)
expected = stats.anderson(x, dist_name)
return expected.statistic - expected.critical_values[i]
with suppress_warnings() as sup:
sup.filter(RuntimeWarning)
sol = root(fun, x0=0)
assert(sol.success)
# get the significance level (p-value) associated with that critical
# value
a = sol.x[0]
rng = np.random.default_rng(394295467)
x = stats.tukeylambda.rvs(a, size=100, random_state=rng)
expected = stats.anderson(x, dist_name)
expected_stat = expected.statistic
expected_p = expected.significance_level[i]/100
# perform equivalent Monte Carlo test and compare results
def statistic1d(x):
return stats.anderson(x, dist_name).statistic
dist_rvs = self.rvs(getattr(stats, dist_name).rvs, rng)
with suppress_warnings() as sup:
sup.filter(RuntimeWarning)
res = monte_carlo_test(x, dist_rvs,
statistic1d, n_resamples=1000,
vectorized=False, alternative='greater')
assert_allclose(res.statistic, expected_stat)
assert_allclose(res.pvalue, expected_p, atol=2*self.atol)
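For orientation, a minimal standalone sketch of the `bootstrap` call these tests exercise; the argument names are the documented scipy.stats ones, and the values are purely illustrative:
import numpy as np
from scipy.stats import bootstrap

rng = np.random.default_rng(0)
sample = rng.normal(loc=5, scale=2, size=100)

# `data` must be a sequence of samples, so a one-sample statistic gets a 1-tuple
res = bootstrap((sample,), np.mean, confidence_level=0.9,
                n_resamples=999, method='percentile', random_state=rng)
print(res.confidence_interval.low, res.confidence_interval.high)
print(res.standard_error)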
|
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
⊢ IsSeparating (Set.op 𝒢) ↔ IsCoseparating 𝒢
[PROOFSTEP]
refine' ⟨fun h𝒢 X Y f g hfg => _, fun h𝒢 X Y f g hfg => _⟩
[GOAL]
case refine'_1
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsSeparating (Set.op 𝒢)
X Y : C
f g : X ⟶ Y
hfg : ∀ (G : C), G ∈ 𝒢 → ∀ (h : Y ⟶ G), f ≫ h = g ≫ h
⊢ f = g
[PROOFSTEP]
refine' Quiver.Hom.op_inj (h𝒢 _ _ fun G hG h => Quiver.Hom.unop_inj _)
[GOAL]
case refine'_1
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsSeparating (Set.op 𝒢)
X Y : C
f g : X ⟶ Y
hfg : ∀ (G : C), G ∈ 𝒢 → ∀ (h : Y ⟶ G), f ≫ h = g ≫ h
G : Cᵒᵖ
hG : G ∈ Set.op 𝒢
h : G ⟶ op Y
⊢ (h ≫ f.op).unop = (h ≫ g.op).unop
[PROOFSTEP]
simpa only [unop_comp, Quiver.Hom.unop_op] using hfg _ (Set.mem_op.1 hG) _
[GOAL]
case refine'_2
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsCoseparating 𝒢
X Y : Cᵒᵖ
f g : X ⟶ Y
hfg : ∀ (G : Cᵒᵖ), G ∈ Set.op 𝒢 → ∀ (h : G ⟶ X), h ≫ f = h ≫ g
⊢ f = g
[PROOFSTEP]
refine' Quiver.Hom.unop_inj (h𝒢 _ _ fun G hG h => Quiver.Hom.op_inj _)
[GOAL]
case refine'_2
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsCoseparating 𝒢
X Y : Cᵒᵖ
f g : X ⟶ Y
hfg : ∀ (G : Cᵒᵖ), G ∈ Set.op 𝒢 → ∀ (h : G ⟶ X), h ≫ f = h ≫ g
G : C
hG : G ∈ 𝒢
h : X.unop ⟶ G
⊢ (f.unop ≫ h).op = (g.unop ≫ h).op
[PROOFSTEP]
simpa only [op_comp, Quiver.Hom.op_unop] using hfg _ (Set.op_mem_op.2 hG) _
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
⊢ IsCoseparating (Set.op 𝒢) ↔ IsSeparating 𝒢
[PROOFSTEP]
refine' ⟨fun h𝒢 X Y f g hfg => _, fun h𝒢 X Y f g hfg => _⟩
[GOAL]
case refine'_1
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsCoseparating (Set.op 𝒢)
X Y : C
f g : X ⟶ Y
hfg : ∀ (G : C), G ∈ 𝒢 → ∀ (h : G ⟶ X), h ≫ f = h ≫ g
⊢ f = g
[PROOFSTEP]
refine' Quiver.Hom.op_inj (h𝒢 _ _ fun G hG h => Quiver.Hom.unop_inj _)
[GOAL]
case refine'_1
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsCoseparating (Set.op 𝒢)
X Y : C
f g : X ⟶ Y
hfg : ∀ (G : C), G ∈ 𝒢 → ∀ (h : G ⟶ X), h ≫ f = h ≫ g
G : Cᵒᵖ
hG : G ∈ Set.op 𝒢
h : op X ⟶ G
⊢ (f.op ≫ h).unop = (g.op ≫ h).unop
[PROOFSTEP]
simpa only [unop_comp, Quiver.Hom.unop_op] using hfg _ (Set.mem_op.1 hG) _
[GOAL]
case refine'_2
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsSeparating 𝒢
X Y : Cᵒᵖ
f g : X ⟶ Y
hfg : ∀ (G : Cᵒᵖ), G ∈ Set.op 𝒢 → ∀ (h : Y ⟶ G), f ≫ h = g ≫ h
⊢ f = g
[PROOFSTEP]
refine' Quiver.Hom.unop_inj (h𝒢 _ _ fun G hG h => Quiver.Hom.op_inj _)
[GOAL]
case refine'_2
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsSeparating 𝒢
X Y : Cᵒᵖ
f g : X ⟶ Y
hfg : ∀ (G : Cᵒᵖ), G ∈ Set.op 𝒢 → ∀ (h : Y ⟶ G), f ≫ h = g ≫ h
G : C
hG : G ∈ 𝒢
h : G ⟶ Y.unop
⊢ (h ≫ f.unop).op = (h ≫ g.unop).op
[PROOFSTEP]
simpa only [op_comp, Quiver.Hom.op_unop] using hfg _ (Set.op_mem_op.2 hG) _
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set Cᵒᵖ
⊢ IsCoseparating (Set.unop 𝒢) ↔ IsSeparating 𝒢
[PROOFSTEP]
rw [← isSeparating_op_iff, Set.unop_op]
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set Cᵒᵖ
⊢ IsSeparating (Set.unop 𝒢) ↔ IsCoseparating 𝒢
[PROOFSTEP]
rw [← isCoseparating_op_iff, Set.unop_op]
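Taken together, the four goals above record the op/unop duality between separating and coseparating families; a statement-only Lean sketch (proofs elided with sorry, contexts as in the goals):
example (𝒢 : Set C) : IsSeparating (Set.op 𝒢) ↔ IsCoseparating 𝒢 := sorry
example (𝒢 : Set C) : IsCoseparating (Set.op 𝒢) ↔ IsSeparating 𝒢 := sorry
example (𝒢 : Set Cᵒᵖ) : IsCoseparating (Set.unop 𝒢) ↔ IsSeparating 𝒢 := sorry
example (𝒢 : Set Cᵒᵖ) : IsSeparating (Set.unop 𝒢) ↔ IsCoseparating 𝒢 := sorry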
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
⊢ IsDetecting (Set.op 𝒢) ↔ IsCodetecting 𝒢
[PROOFSTEP]
refine' ⟨fun h𝒢 X Y f hf => _, fun h𝒢 X Y f hf => _⟩
[GOAL]
case refine'_1
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsDetecting (Set.op 𝒢)
X Y : C
f : X ⟶ Y
hf : ∀ (G : C), G ∈ 𝒢 → ∀ (h : X ⟶ G), ∃! h', f ≫ h' = h
⊢ IsIso f
[PROOFSTEP]
refine' (isIso_op_iff _).1 (h𝒢 _ fun G hG h => _)
[GOAL]
case refine'_1
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsDetecting (Set.op 𝒢)
X Y : C
f : X ⟶ Y
hf : ∀ (G : C), G ∈ 𝒢 → ∀ (h : X ⟶ G), ∃! h', f ≫ h' = h
G : Cᵒᵖ
hG : G ∈ Set.op 𝒢
h : G ⟶ op X
⊢ ∃! h', h' ≫ f.op = h
[PROOFSTEP]
obtain ⟨t, ht, ht'⟩ := hf (unop G) (Set.mem_op.1 hG) h.unop
[GOAL]
case refine'_1.intro.intro
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsDetecting (Set.op 𝒢)
X Y : C
f : X ⟶ Y
hf : ∀ (G : C), G ∈ 𝒢 → ∀ (h : X ⟶ G), ∃! h', f ≫ h' = h
G : Cᵒᵖ
hG : G ∈ Set.op 𝒢
h : G ⟶ op X
t : Y ⟶ G.unop
ht : f ≫ t = h.unop
ht' : ∀ (y : Y ⟶ G.unop), (fun h' => f ≫ h' = h.unop) y → y = t
⊢ ∃! h', h' ≫ f.op = h
[PROOFSTEP]
exact ⟨t.op, Quiver.Hom.unop_inj ht, fun y hy => Quiver.Hom.unop_inj (ht' _ (Quiver.Hom.op_inj hy))⟩
[GOAL]
case refine'_2
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsCodetecting 𝒢
X Y : Cᵒᵖ
f : X ⟶ Y
hf : ∀ (G : Cᵒᵖ), G ∈ Set.op 𝒢 → ∀ (h : G ⟶ Y), ∃! h', h' ≫ f = h
⊢ IsIso f
[PROOFSTEP]
refine' (isIso_unop_iff _).1 (h𝒢 _ fun G hG h => _)
[GOAL]
case refine'_2
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsCodetecting 𝒢
X Y : Cᵒᵖ
f : X ⟶ Y
hf : ∀ (G : Cᵒᵖ), G ∈ Set.op 𝒢 → ∀ (h : G ⟶ Y), ∃! h', h' ≫ f = h
G : C
hG : G ∈ 𝒢
h : Y.unop ⟶ G
⊢ ∃! h', f.unop ≫ h' = h
[PROOFSTEP]
obtain ⟨t, ht, ht'⟩ := hf (op G) (Set.op_mem_op.2 hG) h.op
[GOAL]
case refine'_2.intro.intro
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsCodetecting 𝒢
X Y : Cᵒᵖ
f : X ⟶ Y
hf : ∀ (G : Cᵒᵖ), G ∈ Set.op 𝒢 → ∀ (h : G ⟶ Y), ∃! h', h' ≫ f = h
G : C
hG : G ∈ 𝒢
h : Y.unop ⟶ G
t : op G ⟶ X
ht : t ≫ f = h.op
ht' : ∀ (y : op G ⟶ X), (fun h' => h' ≫ f = h.op) y → y = t
⊢ ∃! h', f.unop ≫ h' = h
[PROOFSTEP]
refine' ⟨t.unop, Quiver.Hom.op_inj ht, fun y hy => Quiver.Hom.op_inj (ht' _ _)⟩
[GOAL]
case refine'_2.intro.intro
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsCodetecting 𝒢
X Y : Cᵒᵖ
f : X ⟶ Y
hf : ∀ (G : Cᵒᵖ), G ∈ Set.op 𝒢 → ∀ (h : G ⟶ Y), ∃! h', h' ≫ f = h
G : C
hG : G ∈ 𝒢
h : Y.unop ⟶ G
t : op G ⟶ X
ht : t ≫ f = h.op
ht' : ∀ (y : op G ⟶ X), (fun h' => h' ≫ f = h.op) y → y = t
y : X.unop ⟶ G
hy : (fun h' => f.unop ≫ h' = h) y
⊢ (fun h' => h' ≫ f = h.op) y.op
[PROOFSTEP]
exact Quiver.Hom.unop_inj (by simpa only using hy)
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsCodetecting 𝒢
X Y : Cᵒᵖ
f : X ⟶ Y
hf : ∀ (G : Cᵒᵖ), G ∈ Set.op 𝒢 → ∀ (h : G ⟶ Y), ∃! h', h' ≫ f = h
G : C
hG : G ∈ 𝒢
h : Y.unop ⟶ G
t : op G ⟶ X
ht : t ≫ f = h.op
ht' : ∀ (y : op G ⟶ X), (fun h' => h' ≫ f = h.op) y → y = t
y : X.unop ⟶ G
hy : (fun h' => f.unop ≫ h' = h) y
⊢ (y.op ≫ f).unop = h.op.unop
[PROOFSTEP]
simpa only using hy
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
⊢ IsCodetecting (Set.op 𝒢) ↔ IsDetecting 𝒢
[PROOFSTEP]
refine' ⟨fun h𝒢 X Y f hf => _, fun h𝒢 X Y f hf => _⟩
[GOAL]
case refine'_1
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsCodetecting (Set.op 𝒢)
X Y : C
f : X ⟶ Y
hf : ∀ (G : C), G ∈ 𝒢 → ∀ (h : G ⟶ Y), ∃! h', h' ≫ f = h
⊢ IsIso f
[PROOFSTEP]
refine' (isIso_op_iff _).1 (h𝒢 _ fun G hG h => _)
[GOAL]
case refine'_1
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsCodetecting (Set.op 𝒢)
X Y : C
f : X ⟶ Y
hf : ∀ (G : C), G ∈ 𝒢 → ∀ (h : G ⟶ Y), ∃! h', h' ≫ f = h
G : Cᵒᵖ
hG : G ∈ Set.op 𝒢
h : op Y ⟶ G
⊢ ∃! h', f.op ≫ h' = h
[PROOFSTEP]
obtain ⟨t, ht, ht'⟩ := hf (unop G) (Set.mem_op.1 hG) h.unop
[GOAL]
case refine'_1.intro.intro
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsCodetecting (Set.op 𝒢)
X Y : C
f : X ⟶ Y
hf : ∀ (G : C), G ∈ 𝒢 → ∀ (h : G ⟶ Y), ∃! h', h' ≫ f = h
G : Cᵒᵖ
hG : G ∈ Set.op 𝒢
h : op Y ⟶ G
t : G.unop ⟶ X
ht : t ≫ f = h.unop
ht' : ∀ (y : G.unop ⟶ X), (fun h' => h' ≫ f = h.unop) y → y = t
⊢ ∃! h', f.op ≫ h' = h
[PROOFSTEP]
exact ⟨t.op, Quiver.Hom.unop_inj ht, fun y hy => Quiver.Hom.unop_inj (ht' _ (Quiver.Hom.op_inj hy))⟩
[GOAL]
case refine'_2
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsDetecting 𝒢
X Y : Cᵒᵖ
f : X ⟶ Y
hf : ∀ (G : Cᵒᵖ), G ∈ Set.op 𝒢 → ∀ (h : X ⟶ G), ∃! h', f ≫ h' = h
⊢ IsIso f
[PROOFSTEP]
refine' (isIso_unop_iff _).1 (h𝒢 _ fun G hG h => _)
[GOAL]
case refine'_2
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsDetecting 𝒢
X Y : Cᵒᵖ
f : X ⟶ Y
hf : ∀ (G : Cᵒᵖ), G ∈ Set.op 𝒢 → ∀ (h : X ⟶ G), ∃! h', f ≫ h' = h
G : C
hG : G ∈ 𝒢
h : G ⟶ X.unop
⊢ ∃! h', h' ≫ f.unop = h
[PROOFSTEP]
obtain ⟨t, ht, ht'⟩ := hf (op G) (Set.op_mem_op.2 hG) h.op
[GOAL]
case refine'_2.intro.intro
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsDetecting 𝒢
X Y : Cᵒᵖ
f : X ⟶ Y
hf : ∀ (G : Cᵒᵖ), G ∈ Set.op 𝒢 → ∀ (h : X ⟶ G), ∃! h', f ≫ h' = h
G : C
hG : G ∈ 𝒢
h : G ⟶ X.unop
t : Y ⟶ op G
ht : f ≫ t = h.op
ht' : ∀ (y : Y ⟶ op G), (fun h' => f ≫ h' = h.op) y → y = t
⊢ ∃! h', h' ≫ f.unop = h
[PROOFSTEP]
refine' ⟨t.unop, Quiver.Hom.op_inj ht, fun y hy => Quiver.Hom.op_inj (ht' _ _)⟩
[GOAL]
case refine'_2.intro.intro
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsDetecting 𝒢
X Y : Cᵒᵖ
f : X ⟶ Y
hf : ∀ (G : Cᵒᵖ), G ∈ Set.op 𝒢 → ∀ (h : X ⟶ G), ∃! h', f ≫ h' = h
G : C
hG : G ∈ 𝒢
h : G ⟶ X.unop
t : Y ⟶ op G
ht : f ≫ t = h.op
ht' : ∀ (y : Y ⟶ op G), (fun h' => f ≫ h' = h.op) y → y = t
y : G ⟶ Y.unop
hy : (fun h' => h' ≫ f.unop = h) y
⊢ (fun h' => f ≫ h' = h.op) y.op
[PROOFSTEP]
exact Quiver.Hom.unop_inj (by simpa only using hy)
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsDetecting 𝒢
X Y : Cᵒᵖ
f : X ⟶ Y
hf : ∀ (G : Cᵒᵖ), G ∈ Set.op 𝒢 → ∀ (h : X ⟶ G), ∃! h', f ≫ h' = h
G : C
hG : G ∈ 𝒢
h : G ⟶ X.unop
t : Y ⟶ op G
ht : f ≫ t = h.op
ht' : ∀ (y : Y ⟶ op G), (fun h' => f ≫ h' = h.op) y → y = t
y : G ⟶ Y.unop
hy : (fun h' => h' ≫ f.unop = h) y
⊢ (f ≫ y.op).unop = h.op.unop
[PROOFSTEP]
simpa only using hy
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set Cᵒᵖ
⊢ IsDetecting (Set.unop 𝒢) ↔ IsCodetecting 𝒢
[PROOFSTEP]
rw [← isCodetecting_op_iff, Set.unop_op]
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set Cᵒᵖ
⊢ IsCodetecting (Set.unop 𝒢) ↔ IsDetecting 𝒢
[PROOFSTEP]
rw [← isDetecting_op_iff, Set.unop_op]
[GOAL]
C : Type u₁
inst✝² : Category.{v₁, u₁} C
D : Type u₂
inst✝¹ : Category.{v₂, u₂} D
inst✝ : HasCoequalizers C
𝒢 : Set C
⊢ IsCodetecting 𝒢 → IsCoseparating 𝒢
[PROOFSTEP]
simpa only [← isSeparating_op_iff, ← isDetecting_op_iff] using IsDetecting.isSeparating
[GOAL]
C : Type u₁
inst✝² : Category.{v₁, u₁} C
D : Type u₂
inst✝¹ : Category.{v₂, u₂} D
inst✝ : Balanced C
𝒢 : Set C
h𝒢 : IsSeparating 𝒢
⊢ IsDetecting 𝒢
[PROOFSTEP]
intro X Y f hf
[GOAL]
C : Type u₁
inst✝² : Category.{v₁, u₁} C
D : Type u₂
inst✝¹ : Category.{v₂, u₂} D
inst✝ : Balanced C
𝒢 : Set C
h𝒢 : IsSeparating 𝒢
X Y : C
f : X ⟶ Y
hf : ∀ (G : C), G ∈ 𝒢 → ∀ (h : G ⟶ Y), ∃! h', h' ≫ f = h
⊢ IsIso f
[PROOFSTEP]
refine' (isIso_iff_mono_and_epi _).2 ⟨⟨fun g h hgh => h𝒢 _ _ fun G hG i => _⟩, ⟨fun g h hgh => _⟩⟩
[GOAL]
case refine'_1
C : Type u₁
inst✝² : Category.{v₁, u₁} C
D : Type u₂
inst✝¹ : Category.{v₂, u₂} D
inst✝ : Balanced C
𝒢 : Set C
h𝒢 : IsSeparating 𝒢
X Y : C
f : X ⟶ Y
hf : ∀ (G : C), G ∈ 𝒢 → ∀ (h : G ⟶ Y), ∃! h', h' ≫ f = h
Z✝ : C
g h : Z✝ ⟶ X
hgh : g ≫ f = h ≫ f
G : C
hG : G ∈ 𝒢
i : G ⟶ Z✝
⊢ i ≫ g = i ≫ h
[PROOFSTEP]
obtain ⟨t, -, ht⟩ := hf G hG (i ≫ g ≫ f)
[GOAL]
case refine'_1.intro.intro
C : Type u₁
inst✝² : Category.{v₁, u₁} C
D : Type u₂
inst✝¹ : Category.{v₂, u₂} D
inst✝ : Balanced C
𝒢 : Set C
h𝒢 : IsSeparating 𝒢
X Y : C
f : X ⟶ Y
hf : ∀ (G : C), G ∈ 𝒢 → ∀ (h : G ⟶ Y), ∃! h', h' ≫ f = h
Z✝ : C
g h : Z✝ ⟶ X
hgh : g ≫ f = h ≫ f
G : C
hG : G ∈ 𝒢
i : G ⟶ Z✝
t : G ⟶ X
ht : ∀ (y : G ⟶ X), (fun h' => h' ≫ f = i ≫ g ≫ f) y → y = t
⊢ i ≫ g = i ≫ h
[PROOFSTEP]
rw [ht (i ≫ g) (Category.assoc _ _ _), ht (i ≫ h) (hgh.symm ▸ Category.assoc _ _ _)]
[GOAL]
case refine'_2
C : Type u₁
inst✝² : Category.{v₁, u₁} C
D : Type u₂
inst✝¹ : Category.{v₂, u₂} D
inst✝ : Balanced C
𝒢 : Set C
h𝒢 : IsSeparating 𝒢
X Y : C
f : X ⟶ Y
hf : ∀ (G : C), G ∈ 𝒢 → ∀ (h : G ⟶ Y), ∃! h', h' ≫ f = h
Z✝ : C
g h : Y ⟶ Z✝
hgh : f ≫ g = f ≫ h
⊢ g = h
[PROOFSTEP]
refine' h𝒢 _ _ fun G hG i => _
[GOAL]
case refine'_2
C : Type u₁
inst✝² : Category.{v₁, u₁} C
D : Type u₂
inst✝¹ : Category.{v₂, u₂} D
inst✝ : Balanced C
𝒢 : Set C
h𝒢 : IsSeparating 𝒢
X Y : C
f : X ⟶ Y
hf : ∀ (G : C), G ∈ 𝒢 → ∀ (h : G ⟶ Y), ∃! h', h' ≫ f = h
Z✝ : C
g h : Y ⟶ Z✝
hgh : f ≫ g = f ≫ h
G : C
hG : G ∈ 𝒢
i : G ⟶ Y
⊢ i ≫ g = i ≫ h
[PROOFSTEP]
obtain ⟨t, rfl, -⟩ := hf G hG i
[GOAL]
case refine'_2.intro.intro
C : Type u₁
inst✝² : Category.{v₁, u₁} C
D : Type u₂
inst✝¹ : Category.{v₂, u₂} D
inst✝ : Balanced C
𝒢 : Set C
h𝒢 : IsSeparating 𝒢
X Y : C
f : X ⟶ Y
hf : ∀ (G : C), G ∈ 𝒢 → ∀ (h : G ⟶ Y), ∃! h', h' ≫ f = h
Z✝ : C
g h : Y ⟶ Z✝
hgh : f ≫ g = f ≫ h
G : C
hG : G ∈ 𝒢
t : G ⟶ X
⊢ (t ≫ f) ≫ g = (t ≫ f) ≫ h
[PROOFSTEP]
rw [Category.assoc, hgh, Category.assoc]
[GOAL]
C : Type u₁
inst✝² : Category.{v₁, u₁} C
D : Type u₂
inst✝¹ : Category.{v₂, u₂} D
inst✝ : Balanced C
𝒢 : Set C
⊢ IsCoseparating 𝒢 → IsCodetecting 𝒢
[PROOFSTEP]
simpa only [← isDetecting_op_iff, ← isSeparating_op_iff] using IsSeparating.isDetecting
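The last three proofs above establish the standard implications between these notions; a statement-only sketch, with the typeclass assumptions taken from the goal contexts (proofs elided with sorry):
example [HasCoequalizers C] (𝒢 : Set C) : IsCodetecting 𝒢 → IsCoseparating 𝒢 := sorry
example [Balanced C] (𝒢 : Set C) : IsSeparating 𝒢 → IsDetecting 𝒢 := sorry
example [Balanced C] (𝒢 : Set C) : IsCoseparating 𝒢 → IsCodetecting 𝒢 := sorry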
[GOAL]
C : Type u₁
inst✝² : Category.{v₁, u₁} C
D : Type u₂
inst✝¹ : Category.{v₂, u₂} D
𝒢 : Set C
inst✝ : ∀ (A : C), HasCoproduct fun f => ↑f.fst
⊢ IsSeparating 𝒢 ↔ ∀ (A : C), Epi (Sigma.desc Sigma.snd)
[PROOFSTEP]
refine' ⟨fun h A => ⟨fun u v huv => h _ _ fun G hG f => _⟩, fun h X Y f g hh => _⟩
[GOAL]
case refine'_1
C : Type u₁
inst✝² : Category.{v₁, u₁} C
D : Type u₂
inst✝¹ : Category.{v₂, u₂} D
𝒢 : Set C
inst✝ : ∀ (A : C), HasCoproduct fun f => ↑f.fst
h : IsSeparating 𝒢
A Z✝ : C
u v : A ⟶ Z✝
huv : Sigma.desc Sigma.snd ≫ u = Sigma.desc Sigma.snd ≫ v
G : C
hG : G ∈ 𝒢
f : G ⟶ A
⊢ f ≫ u = f ≫ v
[PROOFSTEP]
simpa using Sigma.ι (fun f : Σ G : 𝒢, (G : C) ⟶ A => (f.1 : C)) ⟨⟨G, hG⟩, f⟩ ≫= huv
[GOAL]
case refine'_2
C : Type u₁
inst✝² : Category.{v₁, u₁} C
D : Type u₂
inst✝¹ : Category.{v₂, u₂} D
𝒢 : Set C
inst✝ : ∀ (A : C), HasCoproduct fun f => ↑f.fst
h : ∀ (A : C), Epi (Sigma.desc Sigma.snd)
X Y : C
f g : X ⟶ Y
hh : ∀ (G : C), G ∈ 𝒢 → ∀ (h : G ⟶ X), h ≫ f = h ≫ g
⊢ f = g
[PROOFSTEP]
haveI := h X
[GOAL]
case refine'_2
C : Type u₁
inst✝² : Category.{v₁, u₁} C
D : Type u₂
inst✝¹ : Category.{v₂, u₂} D
𝒢 : Set C
inst✝ : ∀ (A : C), HasCoproduct fun f => ↑f.fst
h : ∀ (A : C), Epi (Sigma.desc Sigma.snd)
X Y : C
f g : X ⟶ Y
hh : ∀ (G : C), G ∈ 𝒢 → ∀ (h : G ⟶ X), h ≫ f = h ≫ g
this : Epi (Sigma.desc Sigma.snd)
⊢ f = g
[PROOFSTEP]
refine' (cancel_epi (Sigma.desc (@Sigma.snd 𝒢 fun G => (G : C) ⟶ X))).1 (colimit.hom_ext fun j => _)
[GOAL]
case refine'_2
C : Type u₁
inst✝² : Category.{v₁, u₁} C
D : Type u₂
inst✝¹ : Category.{v₂, u₂} D
𝒢 : Set C
inst✝ : ∀ (A : C), HasCoproduct fun f => ↑f.fst
h : ∀ (A : C), Epi (Sigma.desc Sigma.snd)
X Y : C
f g : X ⟶ Y
hh : ∀ (G : C), G ∈ 𝒢 → ∀ (h : G ⟶ X), h ≫ f = h ≫ g
this : Epi (Sigma.desc Sigma.snd)
j : Discrete ((G : ↑𝒢) × (↑G ⟶ X))
⊢ colimit.ι (Discrete.functor fun b => ↑b.fst) j ≫ Sigma.desc Sigma.snd ≫ f =
colimit.ι (Discrete.functor fun b => ↑b.fst) j ≫ Sigma.desc Sigma.snd ≫ g
[PROOFSTEP]
simpa using hh j.as.1.1 j.as.1.2 j.as.2
[GOAL]
C : Type u₁
inst✝² : Category.{v₁, u₁} C
D : Type u₂
inst✝¹ : Category.{v₂, u₂} D
𝒢 : Set C
inst✝ : ∀ (A : C), HasProduct fun f => ↑f.fst
⊢ IsCoseparating 𝒢 ↔ ∀ (A : C), Mono (Pi.lift Sigma.snd)
[PROOFSTEP]
refine' ⟨fun h A => ⟨fun u v huv => h _ _ fun G hG f => _⟩, fun h X Y f g hh => _⟩
[GOAL]
case refine'_1
C : Type u₁
inst✝² : Category.{v₁, u₁} C
D : Type u₂
inst✝¹ : Category.{v₂, u₂} D
𝒢 : Set C
inst✝ : ∀ (A : C), HasProduct fun f => ↑f.fst
h : IsCoseparating 𝒢
A Z✝ : C
u v : Z✝ ⟶ A
huv : u ≫ Pi.lift Sigma.snd = v ≫ Pi.lift Sigma.snd
G : C
hG : G ∈ 𝒢
f : A ⟶ G
⊢ u ≫ f = v ≫ f
[PROOFSTEP]
simpa using huv =≫ Pi.π (fun f : Σ G : 𝒢, A ⟶ (G : C) => (f.1 : C)) ⟨⟨G, hG⟩, f⟩
[GOAL]
case refine'_2
C : Type u₁
inst✝² : Category.{v₁, u₁} C
D : Type u₂
inst✝¹ : Category.{v₂, u₂} D
𝒢 : Set C
inst✝ : ∀ (A : C), HasProduct fun f => ↑f.fst
h : ∀ (A : C), Mono (Pi.lift Sigma.snd)
X Y : C
f g : X ⟶ Y
hh : ∀ (G : C), G ∈ 𝒢 → ∀ (h : Y ⟶ G), f ≫ h = g ≫ h
⊢ f = g
[PROOFSTEP]
haveI := h Y
[GOAL]
case refine'_2
C : Type u₁
inst✝² : Category.{v₁, u₁} C
D : Type u₂
inst✝¹ : Category.{v₂, u₂} D
𝒢 : Set C
inst✝ : ∀ (A : C), HasProduct fun f => ↑f.fst
h : ∀ (A : C), Mono (Pi.lift Sigma.snd)
X Y : C
f g : X ⟶ Y
hh : ∀ (G : C), G ∈ 𝒢 → ∀ (h : Y ⟶ G), f ≫ h = g ≫ h
this : Mono (Pi.lift Sigma.snd)
⊢ f = g
[PROOFSTEP]
refine' (cancel_mono (Pi.lift (@Sigma.snd 𝒢 fun G => Y ⟶ (G : C)))).1 (limit.hom_ext fun j => _)
[GOAL]
case refine'_2
C : Type u₁
inst✝² : Category.{v₁, u₁} C
D : Type u₂
inst✝¹ : Category.{v₂, u₂} D
𝒢 : Set C
inst✝ : ∀ (A : C), HasProduct fun f => ↑f.fst
h : ∀ (A : C), Mono (Pi.lift Sigma.snd)
X Y : C
f g : X ⟶ Y
hh : ∀ (G : C), G ∈ 𝒢 → ∀ (h : Y ⟶ G), f ≫ h = g ≫ h
this : Mono (Pi.lift Sigma.snd)
j : Discrete ((G : ↑𝒢) × (Y ⟶ ↑G))
⊢ (f ≫ Pi.lift Sigma.snd) ≫ limit.π (Discrete.functor fun b => ↑b.fst) j =
(g ≫ Pi.lift Sigma.snd) ≫ limit.π (Discrete.functor fun b => ↑b.fst) j
[PROOFSTEP]
simpa using hh j.as.1.1 j.as.1.2 j.as.2
[GOAL]
C : Type u₁
inst✝⁴ : Category.{v₁, u₁} C
D : Type u₂
inst✝³ : Category.{v₂, u₂} D
inst✝² : WellPowered C
inst✝¹ : HasLimits C
𝒢 : Set C
inst✝ : Small.{v₁, u₁} ↑𝒢
h𝒢 : IsCoseparating 𝒢
⊢ HasInitial C
[PROOFSTEP]
haveI : HasProductsOfShape 𝒢 C := hasProductsOfShape_of_small C 𝒢
[GOAL]
C : Type u₁
inst✝⁴ : Category.{v₁, u₁} C
D : Type u₂
inst✝³ : Category.{v₂, u₂} D
inst✝² : WellPowered C
inst✝¹ : HasLimits C
𝒢 : Set C
inst✝ : Small.{v₁, u₁} ↑𝒢
h𝒢 : IsCoseparating 𝒢
this : HasProductsOfShape (↑𝒢) C
⊢ HasInitial C
[PROOFSTEP]
haveI := fun A => hasProductsOfShape_of_small.{v₁} C (Σ G : 𝒢, A ⟶ (G : C))
[GOAL]
C : Type u₁
inst✝⁴ : Category.{v₁, u₁} C
D : Type u₂
inst✝³ : Category.{v₂, u₂} D
inst✝² : WellPowered C
inst✝¹ : HasLimits C
𝒢 : Set C
inst✝ : Small.{v₁, u₁} ↑𝒢
h𝒢 : IsCoseparating 𝒢
this✝ : HasProductsOfShape (↑𝒢) C
this : ∀ (A : C), HasProductsOfShape ((G : ↑𝒢) × (A ⟶ ↑G)) C
⊢ HasInitial C
[PROOFSTEP]
letI := completeLatticeOfCompleteSemilatticeInf (Subobject (piObj (Subtype.val : 𝒢 → C)))
[GOAL]
C : Type u₁
inst✝⁴ : Category.{v₁, u₁} C
D : Type u₂
inst✝³ : Category.{v₂, u₂} D
inst✝² : WellPowered C
inst✝¹ : HasLimits C
𝒢 : Set C
inst✝ : Small.{v₁, u₁} ↑𝒢
h𝒢 : IsCoseparating 𝒢
this✝¹ : HasProductsOfShape (↑𝒢) C
this✝ : ∀ (A : C), HasProductsOfShape ((G : ↑𝒢) × (A ⟶ ↑G)) C
this : CompleteLattice (Subobject (∏ Subtype.val)) :=
completeLatticeOfCompleteSemilatticeInf (Subobject (∏ Subtype.val))
⊢ HasInitial C
[PROOFSTEP]
suffices ∀ A : C, Unique (((⊥ : Subobject (piObj (Subtype.val : 𝒢 → C))) : C) ⟶ A) by
exact hasInitial_of_unique ((⊥ : Subobject (piObj (Subtype.val : 𝒢 → C))) : C)
[GOAL]
C : Type u₁
inst✝⁴ : Category.{v₁, u₁} C
D : Type u₂
inst✝³ : Category.{v₂, u₂} D
inst✝² : WellPowered C
inst✝¹ : HasLimits C
𝒢 : Set C
inst✝ : Small.{v₁, u₁} ↑𝒢
h𝒢 : IsCoseparating 𝒢
this✝² : HasProductsOfShape (↑𝒢) C
this✝¹ : ∀ (A : C), HasProductsOfShape ((G : ↑𝒢) × (A ⟶ ↑G)) C
this✝ : CompleteLattice (Subobject (∏ Subtype.val)) :=
completeLatticeOfCompleteSemilatticeInf (Subobject (∏ Subtype.val))
this : (A : C) → Unique (Subobject.underlying.obj ⊥ ⟶ A)
⊢ HasInitial C
[PROOFSTEP]
exact hasInitial_of_unique ((⊥ : Subobject (piObj (Subtype.val : 𝒢 → C))) : C)
[GOAL]
C : Type u₁
inst✝⁴ : Category.{v₁, u₁} C
D : Type u₂
inst✝³ : Category.{v₂, u₂} D
inst✝² : WellPowered C
inst✝¹ : HasLimits C
𝒢 : Set C
inst✝ : Small.{v₁, u₁} ↑𝒢
h𝒢 : IsCoseparating 𝒢
this✝¹ : HasProductsOfShape (↑𝒢) C
this✝ : ∀ (A : C), HasProductsOfShape ((G : ↑𝒢) × (A ⟶ ↑G)) C
this : CompleteLattice (Subobject (∏ Subtype.val)) :=
completeLatticeOfCompleteSemilatticeInf (Subobject (∏ Subtype.val))
⊢ (A : C) → Unique (Subobject.underlying.obj ⊥ ⟶ A)
[PROOFSTEP]
refine' fun A => ⟨⟨_⟩, fun f => _⟩
[GOAL]
case refine'_1
C : Type u₁
inst✝⁴ : Category.{v₁, u₁} C
D : Type u₂
inst✝³ : Category.{v₂, u₂} D
inst✝² : WellPowered C
inst✝¹ : HasLimits C
𝒢 : Set C
inst✝ : Small.{v₁, u₁} ↑𝒢
h𝒢 : IsCoseparating 𝒢
this✝¹ : HasProductsOfShape (↑𝒢) C
this✝ : ∀ (A : C), HasProductsOfShape ((G : ↑𝒢) × (A ⟶ ↑G)) C
this : CompleteLattice (Subobject (∏ Subtype.val)) :=
completeLatticeOfCompleteSemilatticeInf (Subobject (∏ Subtype.val))
A : C
⊢ Subobject.underlying.obj ⊥ ⟶ A
[PROOFSTEP]
let s := Pi.lift fun f : Σ G : 𝒢, A ⟶ (G : C) => id (Pi.π (Subtype.val : 𝒢 → C)) f.1
[GOAL]
case refine'_1
C : Type u₁
inst✝⁴ : Category.{v₁, u₁} C
D : Type u₂
inst✝³ : Category.{v₂, u₂} D
inst✝² : WellPowered C
inst✝¹ : HasLimits C
𝒢 : Set C
inst✝ : Small.{v₁, u₁} ↑𝒢
h𝒢 : IsCoseparating 𝒢
this✝¹ : HasProductsOfShape (↑𝒢) C
this✝ : ∀ (A : C), HasProductsOfShape ((G : ↑𝒢) × (A ⟶ ↑G)) C
this : CompleteLattice (Subobject (∏ Subtype.val)) :=
completeLatticeOfCompleteSemilatticeInf (Subobject (∏ Subtype.val))
A : C
s : ∏ Subtype.val ⟶ ∏ fun f => ↑f.fst := Pi.lift fun f => id (Pi.π Subtype.val) f.fst
⊢ Subobject.underlying.obj ⊥ ⟶ A
[PROOFSTEP]
let t := Pi.lift (@Sigma.snd 𝒢 fun G => A ⟶ (G : C))
[GOAL]
case refine'_1
C : Type u₁
inst✝⁴ : Category.{v₁, u₁} C
D : Type u₂
inst✝³ : Category.{v₂, u₂} D
inst✝² : WellPowered C
inst✝¹ : HasLimits C
𝒢 : Set C
inst✝ : Small.{v₁, u₁} ↑𝒢
h𝒢 : IsCoseparating 𝒢
this✝¹ : HasProductsOfShape (↑𝒢) C
this✝ : ∀ (A : C), HasProductsOfShape ((G : ↑𝒢) × (A ⟶ ↑G)) C
this : CompleteLattice (Subobject (∏ Subtype.val)) :=
completeLatticeOfCompleteSemilatticeInf (Subobject (∏ Subtype.val))
A : C
s : ∏ Subtype.val ⟶ ∏ fun f => ↑f.fst := Pi.lift fun f => id (Pi.π Subtype.val) f.fst
t : A ⟶ ∏ fun b => ↑b.fst := Pi.lift Sigma.snd
⊢ Subobject.underlying.obj ⊥ ⟶ A
[PROOFSTEP]
haveI : Mono t := (isCoseparating_iff_mono 𝒢).1 h𝒢 A
[GOAL]
case refine'_1
C : Type u₁
inst✝⁴ : Category.{v₁, u₁} C
D : Type u₂
inst✝³ : Category.{v₂, u₂} D
inst✝² : WellPowered C
inst✝¹ : HasLimits C
𝒢 : Set C
inst✝ : Small.{v₁, u₁} ↑𝒢
h𝒢 : IsCoseparating 𝒢
this✝² : HasProductsOfShape (↑𝒢) C
this✝¹ : ∀ (A : C), HasProductsOfShape ((G : ↑𝒢) × (A ⟶ ↑G)) C
this✝ : CompleteLattice (Subobject (∏ Subtype.val)) :=
completeLatticeOfCompleteSemilatticeInf (Subobject (∏ Subtype.val))
A : C
s : ∏ Subtype.val ⟶ ∏ fun f => ↑f.fst := Pi.lift fun f => id (Pi.π Subtype.val) f.fst
t : A ⟶ ∏ fun b => ↑b.fst := Pi.lift Sigma.snd
this : Mono t
⊢ Subobject.underlying.obj ⊥ ⟶ A
[PROOFSTEP]
exact Subobject.ofLEMk _ (pullback.fst : pullback s t ⟶ _) bot_le ≫ pullback.snd
[GOAL]
case refine'_2
C : Type u₁
inst✝⁴ : Category.{v₁, u₁} C
D : Type u₂
inst✝³ : Category.{v₂, u₂} D
inst✝² : WellPowered C
inst✝¹ : HasLimits C
𝒢 : Set C
inst✝ : Small.{v₁, u₁} ↑𝒢
h𝒢 : IsCoseparating 𝒢
this✝¹ : HasProductsOfShape (↑𝒢) C
this✝ : ∀ (A : C), HasProductsOfShape ((G : ↑𝒢) × (A ⟶ ↑G)) C
this : CompleteLattice (Subobject (∏ Subtype.val)) :=
completeLatticeOfCompleteSemilatticeInf (Subobject (∏ Subtype.val))
A : C
f : Subobject.underlying.obj ⊥ ⟶ A
⊢ f = default
[PROOFSTEP]
suffices ∀ (g : Subobject.underlying.obj ⊥ ⟶ A), f = g by apply this
[GOAL]
C : Type u₁
inst✝⁴ : Category.{v₁, u₁} C
D : Type u₂
inst✝³ : Category.{v₂, u₂} D
inst✝² : WellPowered C
inst✝¹ : HasLimits C
𝒢 : Set C
inst✝ : Small.{v₁, u₁} ↑𝒢
h𝒢 : IsCoseparating 𝒢
this✝² : HasProductsOfShape (↑𝒢) C
this✝¹ : ∀ (A : C), HasProductsOfShape ((G : ↑𝒢) × (A ⟶ ↑G)) C
this✝ : CompleteLattice (Subobject (∏ Subtype.val)) :=
completeLatticeOfCompleteSemilatticeInf (Subobject (∏ Subtype.val))
A : C
f : Subobject.underlying.obj ⊥ ⟶ A
this : ∀ (g : Subobject.underlying.obj ⊥ ⟶ A), f = g
⊢ f = default
[PROOFSTEP]
apply this
[GOAL]
case refine'_2
C : Type u₁
inst✝⁴ : Category.{v₁, u₁} C
D : Type u₂
inst✝³ : Category.{v₂, u₂} D
inst✝² : WellPowered C
inst✝¹ : HasLimits C
𝒢 : Set C
inst✝ : Small.{v₁, u₁} ↑𝒢
h𝒢 : IsCoseparating 𝒢
this✝¹ : HasProductsOfShape (↑𝒢) C
this✝ : ∀ (A : C), HasProductsOfShape ((G : ↑𝒢) × (A ⟶ ↑G)) C
this : CompleteLattice (Subobject (∏ Subtype.val)) :=
completeLatticeOfCompleteSemilatticeInf (Subobject (∏ Subtype.val))
A : C
f : Subobject.underlying.obj ⊥ ⟶ A
⊢ ∀ (g : Subobject.underlying.obj ⊥ ⟶ A), f = g
[PROOFSTEP]
intro g
[GOAL]
case refine'_2
C : Type u₁
inst✝⁴ : Category.{v₁, u₁} C
D : Type u₂
inst✝³ : Category.{v₂, u₂} D
inst✝² : WellPowered C
inst✝¹ : HasLimits C
𝒢 : Set C
inst✝ : Small.{v₁, u₁} ↑𝒢
h𝒢 : IsCoseparating 𝒢
this✝¹ : HasProductsOfShape (↑𝒢) C
this✝ : ∀ (A : C), HasProductsOfShape ((G : ↑𝒢) × (A ⟶ ↑G)) C
this : CompleteLattice (Subobject (∏ Subtype.val)) :=
completeLatticeOfCompleteSemilatticeInf (Subobject (∏ Subtype.val))
A : C
f : Subobject.underlying.obj ⊥ ⟶ A
g : Subobject.underlying.obj ⊥ ⟶ A
⊢ f = g
[PROOFSTEP]
suffices IsSplitEpi (equalizer.ι f g) by exact eq_of_epi_equalizer
[GOAL]
C : Type u₁
inst✝⁴ : Category.{v₁, u₁} C
D : Type u₂
inst✝³ : Category.{v₂, u₂} D
inst✝² : WellPowered C
inst✝¹ : HasLimits C
𝒢 : Set C
inst✝ : Small.{v₁, u₁} ↑𝒢
h𝒢 : IsCoseparating 𝒢
this✝² : HasProductsOfShape (↑𝒢) C
this✝¹ : ∀ (A : C), HasProductsOfShape ((G : ↑𝒢) × (A ⟶ ↑G)) C
this✝ : CompleteLattice (Subobject (∏ Subtype.val)) :=
completeLatticeOfCompleteSemilatticeInf (Subobject (∏ Subtype.val))
A : C
f : Subobject.underlying.obj ⊥ ⟶ A
g : Subobject.underlying.obj ⊥ ⟶ A
this : IsSplitEpi (equalizer.ι f g)
⊢ f = g
[PROOFSTEP]
exact eq_of_epi_equalizer
[GOAL]
case refine'_2
C : Type u₁
inst✝⁴ : Category.{v₁, u₁} C
D : Type u₂
inst✝³ : Category.{v₂, u₂} D
inst✝² : WellPowered C
inst✝¹ : HasLimits C
𝒢 : Set C
inst✝ : Small.{v₁, u₁} ↑𝒢
h𝒢 : IsCoseparating 𝒢
this✝¹ : HasProductsOfShape (↑𝒢) C
this✝ : ∀ (A : C), HasProductsOfShape ((G : ↑𝒢) × (A ⟶ ↑G)) C
this : CompleteLattice (Subobject (∏ Subtype.val)) :=
completeLatticeOfCompleteSemilatticeInf (Subobject (∏ Subtype.val))
A : C
f : Subobject.underlying.obj ⊥ ⟶ A
g : Subobject.underlying.obj ⊥ ⟶ A
⊢ IsSplitEpi (equalizer.ι f g)
[PROOFSTEP]
exact
IsSplitEpi.mk'
⟨Subobject.ofLEMk _ (equalizer.ι f g ≫ Subobject.arrow _) bot_le,
by
ext
simp⟩
[GOAL]
C : Type u₁
inst✝⁴ : Category.{v₁, u₁} C
D : Type u₂
inst✝³ : Category.{v₂, u₂} D
inst✝² : WellPowered C
inst✝¹ : HasLimits C
𝒢 : Set C
inst✝ : Small.{v₁, u₁} ↑𝒢
h𝒢 : IsCoseparating 𝒢
this✝¹ : HasProductsOfShape (↑𝒢) C
this✝ : ∀ (A : C), HasProductsOfShape ((G : ↑𝒢) × (A ⟶ ↑G)) C
this : CompleteLattice (Subobject (∏ Subtype.val)) :=
completeLatticeOfCompleteSemilatticeInf (Subobject (∏ Subtype.val))
A : C
f : Subobject.underlying.obj ⊥ ⟶ A
g : Subobject.underlying.obj ⊥ ⟶ A
⊢ Subobject.ofLEMk ⊥ (equalizer.ι f g ≫ Subobject.arrow ⊥)
(_ : ⊥ ≤ Subobject.mk (equalizer.ι f g ≫ Subobject.arrow ⊥)) ≫
equalizer.ι f g =
𝟙 (Subobject.underlying.obj ⊥)
[PROOFSTEP]
ext
[GOAL]
case h.h
C : Type u₁
inst✝⁴ : Category.{v₁, u₁} C
D : Type u₂
inst✝³ : Category.{v₂, u₂} D
inst✝² : WellPowered C
inst✝¹ : HasLimits C
𝒢 : Set C
inst✝ : Small.{v₁, u₁} ↑𝒢
h𝒢 : IsCoseparating 𝒢
this✝¹ : HasProductsOfShape (↑𝒢) C
this✝ : ∀ (A : C), HasProductsOfShape ((G : ↑𝒢) × (A ⟶ ↑G)) C
this : CompleteLattice (Subobject (∏ Subtype.val)) :=
completeLatticeOfCompleteSemilatticeInf (Subobject (∏ Subtype.val))
A : C
f : Subobject.underlying.obj ⊥ ⟶ A
g : Subobject.underlying.obj ⊥ ⟶ A
b✝ : { x // x ∈ 𝒢 }
⊢ ((Subobject.ofLEMk ⊥ (equalizer.ι f g ≫ Subobject.arrow ⊥)
(_ : ⊥ ≤ Subobject.mk (equalizer.ι f g ≫ Subobject.arrow ⊥)) ≫
equalizer.ι f g) ≫
Subobject.arrow ⊥) ≫
Pi.π Subtype.val b✝ =
(𝟙 (Subobject.underlying.obj ⊥) ≫ Subobject.arrow ⊥) ≫ Pi.π Subtype.val b✝
[PROOFSTEP]
simp
[GOAL]
C : Type u₁
inst✝⁴ : Category.{v₁, u₁} C
D : Type u₂
inst✝³ : Category.{v₂, u₂} D
inst✝² : WellPowered Cᵒᵖ
inst✝¹ : HasColimits C
𝒢 : Set C
inst✝ : Small.{v₁, u₁} ↑𝒢
h𝒢 : IsSeparating 𝒢
⊢ HasTerminal C
[PROOFSTEP]
haveI : Small.{v₁} 𝒢.op := small_of_injective (Set.opEquiv_self 𝒢).injective
[GOAL]
C : Type u₁
inst✝⁴ : Category.{v₁, u₁} C
D : Type u₂
inst✝³ : Category.{v₂, u₂} D
inst✝² : WellPowered Cᵒᵖ
inst✝¹ : HasColimits C
𝒢 : Set C
inst✝ : Small.{v₁, u₁} ↑𝒢
h𝒢 : IsSeparating 𝒢
this : Small.{v₁, u₁} ↑(Set.op 𝒢)
⊢ HasTerminal C
[PROOFSTEP]
haveI : HasInitial Cᵒᵖ := hasInitial_of_isCoseparating ((isCoseparating_op_iff _).2 h𝒢)
[GOAL]
C : Type u₁
inst✝⁴ : Category.{v₁, u₁} C
D : Type u₂
inst✝³ : Category.{v₂, u₂} D
inst✝² : WellPowered Cᵒᵖ
inst✝¹ : HasColimits C
𝒢 : Set C
inst✝ : Small.{v₁, u₁} ↑𝒢
h𝒢 : IsSeparating 𝒢
this✝ : Small.{v₁, u₁} ↑(Set.op 𝒢)
this : HasInitial Cᵒᵖ
⊢ HasTerminal C
[PROOFSTEP]
exact hasTerminal_of_hasInitial_op
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsDetecting 𝒢
X : C
P Q : Subobject X
h₁ : P ≤ Q
h₂ : ∀ (G : C), G ∈ 𝒢 → ∀ {f : G ⟶ X}, Factors Q f → Factors P f
⊢ P = Q
[PROOFSTEP]
suffices IsIso (ofLE _ _ h₁) by exact le_antisymm h₁ (le_of_comm (inv (ofLE _ _ h₁)) (by simp))
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsDetecting 𝒢
X : C
P Q : Subobject X
h₁ : P ≤ Q
h₂ : ∀ (G : C), G ∈ 𝒢 → ∀ {f : G ⟶ X}, Factors Q f → Factors P f
this : IsIso (ofLE P Q h₁)
⊢ P = Q
[PROOFSTEP]
exact le_antisymm h₁ (le_of_comm (inv (ofLE _ _ h₁)) (by simp))
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsDetecting 𝒢
X : C
P Q : Subobject X
h₁ : P ≤ Q
h₂ : ∀ (G : C), G ∈ 𝒢 → ∀ {f : G ⟶ X}, Factors Q f → Factors P f
this : IsIso (ofLE P Q h₁)
⊢ inv (ofLE P Q h₁) ≫ arrow P = arrow Q
[PROOFSTEP]
simp
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsDetecting 𝒢
X : C
P Q : Subobject X
h₁ : P ≤ Q
h₂ : ∀ (G : C), G ∈ 𝒢 → ∀ {f : G ⟶ X}, Factors Q f → Factors P f
⊢ IsIso (ofLE P Q h₁)
[PROOFSTEP]
refine' h𝒢 _ fun G hG f => _
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsDetecting 𝒢
X : C
P Q : Subobject X
h₁ : P ≤ Q
h₂ : ∀ (G : C), G ∈ 𝒢 → ∀ {f : G ⟶ X}, Factors Q f → Factors P f
G : C
hG : G ∈ 𝒢
f : G ⟶ underlying.obj Q
⊢ ∃! h', h' ≫ ofLE P Q h₁ = f
[PROOFSTEP]
have : P.Factors (f ≫ Q.arrow) := h₂ _ hG ((factors_iff _ _).2 ⟨_, rfl⟩)
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsDetecting 𝒢
X : C
P Q : Subobject X
h₁ : P ≤ Q
h₂ : ∀ (G : C), G ∈ 𝒢 → ∀ {f : G ⟶ X}, Factors Q f → Factors P f
G : C
hG : G ∈ 𝒢
f : G ⟶ underlying.obj Q
this : Factors P (f ≫ arrow Q)
⊢ ∃! h', h' ≫ ofLE P Q h₁ = f
[PROOFSTEP]
refine' ⟨factorThru _ _ this, _, fun g (hg : g ≫ _ = f) => _⟩
[GOAL]
case refine'_1
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsDetecting 𝒢
X : C
P Q : Subobject X
h₁ : P ≤ Q
h₂ : ∀ (G : C), G ∈ 𝒢 → ∀ {f : G ⟶ X}, Factors Q f → Factors P f
G : C
hG : G ∈ 𝒢
f : G ⟶ underlying.obj Q
this : Factors P (f ≫ arrow Q)
⊢ (fun h' => h' ≫ ofLE P Q h₁ = f) (factorThru P (f ≫ arrow Q) this)
[PROOFSTEP]
simp only [← cancel_mono Q.arrow, Category.assoc, ofLE_arrow, factorThru_arrow]
[GOAL]
case refine'_2
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
𝒢 : Set C
h𝒢 : IsDetecting 𝒢
X : C
P Q : Subobject X
h₁ : P ≤ Q
h₂ : ∀ (G : C), G ∈ 𝒢 → ∀ {f : G ⟶ X}, Factors Q f → Factors P f
G : C
hG : G ∈ 𝒢
f : G ⟶ underlying.obj Q
this : Factors P (f ≫ arrow Q)
g : G ⟶ underlying.obj P
hg : g ≫ ofLE P Q h₁ = f
⊢ g = factorThru P (f ≫ arrow Q) this
[PROOFSTEP]
simp only [← cancel_mono (Subobject.ofLE _ _ h₁), ← cancel_mono Q.arrow, hg, Category.assoc, ofLE_arrow,
factorThru_arrow]
[GOAL]
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasPullbacks C
𝒢 : Set C
inst✝ : Small.{v₁, u₁} ↑𝒢
h𝒢 : IsDetecting 𝒢
X : C
P Q : Subobject X
h : (fun P => {f | Subobject.Factors P f.snd}) P = (fun P => {f | Subobject.Factors P f.snd}) Q
⊢ ∀ (G : C), G ∈ 𝒢 → ∀ {f : G ⟶ X}, Subobject.Factors P f ↔ Subobject.Factors Q f
[PROOFSTEP]
simpa [Set.ext_iff] using h
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
S : D
T : C ⥤ D
𝒢 : Set C
h𝒢 : IsCoseparating 𝒢
⊢ IsCoseparating ((proj S T).toPrefunctor.obj ⁻¹' 𝒢)
[PROOFSTEP]
refine' fun X Y f g hfg => ext _ _ (h𝒢 _ _ fun G hG h => _)
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
S : D
T : C ⥤ D
𝒢 : Set C
h𝒢 : IsCoseparating 𝒢
X Y : StructuredArrow S T
f g : X ⟶ Y
hfg : ∀ (G : StructuredArrow S T), G ∈ (proj S T).toPrefunctor.obj ⁻¹' 𝒢 → ∀ (h : Y ⟶ G), f ≫ h = g ≫ h
G : C
hG : G ∈ 𝒢
h : Y.right ⟶ G
⊢ f.right ≫ h = g.right ≫ h
[PROOFSTEP]
exact congr_arg CommaMorphism.right (hfg (mk (Y.hom ≫ T.map h)) hG (homMk h rfl))
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
S : C ⥤ D
T : D
𝒢 : Set C
h𝒢 : IsSeparating 𝒢
⊢ IsSeparating ((proj S T).toPrefunctor.obj ⁻¹' 𝒢)
[PROOFSTEP]
refine' fun X Y f g hfg => ext _ _ (h𝒢 _ _ fun G hG h => _)
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
S : C ⥤ D
T : D
𝒢 : Set C
h𝒢 : IsSeparating 𝒢
X Y : CostructuredArrow S T
f g : X ⟶ Y
hfg : ∀ (G : CostructuredArrow S T), G ∈ (proj S T).toPrefunctor.obj ⁻¹' 𝒢 → ∀ (h : G ⟶ X), h ≫ f = h ≫ g
G : C
hG : G ∈ 𝒢
h : G ⟶ X.left
⊢ h ≫ f.left = h ≫ g.left
[PROOFSTEP]
exact congr_arg CommaMorphism.left (hfg (mk (S.map h ≫ X.hom)) hG (homMk h rfl))
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
G : C
⊢ IsSeparator (op G) ↔ IsCoseparator G
[PROOFSTEP]
rw [IsSeparator, IsCoseparator, ← isSeparating_op_iff, Set.singleton_op]
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
G : C
⊢ IsCoseparator (op G) ↔ IsSeparator G
[PROOFSTEP]
rw [IsSeparator, IsCoseparator, ← isCoseparating_op_iff, Set.singleton_op]
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
G : Cᵒᵖ
⊢ IsCoseparator G.unop ↔ IsSeparator G
[PROOFSTEP]
rw [IsSeparator, IsCoseparator, ← isCoseparating_unop_iff, Set.singleton_unop]
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
G : Cᵒᵖ
⊢ IsSeparator G.unop ↔ IsCoseparator G
[PROOFSTEP]
rw [IsSeparator, IsCoseparator, ← isSeparating_unop_iff, Set.singleton_unop]
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
G : C
⊢ IsDetector (op G) ↔ IsCodetector G
[PROOFSTEP]
rw [IsDetector, IsCodetector, ← isDetecting_op_iff, Set.singleton_op]
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
G : C
⊢ IsCodetector (op G) ↔ IsDetector G
[PROOFSTEP]
rw [IsDetector, IsCodetector, ← isCodetecting_op_iff, Set.singleton_op]
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
G : Cᵒᵖ
⊢ IsCodetector G.unop ↔ IsDetector G
[PROOFSTEP]
rw [IsDetector, IsCodetector, ← isCodetecting_unop_iff, Set.singleton_unop]
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
G : Cᵒᵖ
⊢ IsDetector G.unop ↔ IsCodetector G
[PROOFSTEP]
rw [IsDetector, IsCodetector, ← isDetecting_unop_iff, Set.singleton_unop]
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
G : C
hG : IsSeparator G
X Y : C
f g : X ⟶ Y
hfg : ∀ (h : G ⟶ X), h ≫ f = h ≫ g
H : C
hH : H ∈ {G}
h : H ⟶ X
⊢ h ≫ f = h ≫ g
[PROOFSTEP]
obtain rfl := Set.mem_singleton_iff.1 hH
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
X Y : C
f g : X ⟶ Y
H : C
h : H ⟶ X
hG : IsSeparator H
hfg : ∀ (h : H ⟶ X), h ≫ f = h ≫ g
hH : H ∈ {H}
⊢ h ≫ f = h ≫ g
[PROOFSTEP]
exact hfg h
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
G : C
hG : IsCoseparator G
X Y : C
f g : X ⟶ Y
hfg : ∀ (h : Y ⟶ G), f ≫ h = g ≫ h
H : C
hH : H ∈ {G}
h : Y ⟶ H
⊢ f ≫ h = g ≫ h
[PROOFSTEP]
obtain rfl := Set.mem_singleton_iff.1 hH
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
X Y : C
f g : X ⟶ Y
H : C
h : Y ⟶ H
hG : IsCoseparator H
hfg : ∀ (h : Y ⟶ H), f ≫ h = g ≫ h
hH : H ∈ {H}
⊢ f ≫ h = g ≫ h
[PROOFSTEP]
exact hfg h
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
G : C
hG : IsDetector G
X Y : C
f : X ⟶ Y
hf : ∀ (h : G ⟶ Y), ∃! h', h' ≫ f = h
H : C
hH : H ∈ {G}
h : H ⟶ Y
⊢ ∃! h', h' ≫ f = h
[PROOFSTEP]
obtain rfl := Set.mem_singleton_iff.1 hH
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
X Y : C
f : X ⟶ Y
H : C
h : H ⟶ Y
hG : IsDetector H
hf : ∀ (h : H ⟶ Y), ∃! h', h' ≫ f = h
hH : H ∈ {H}
⊢ ∃! h', h' ≫ f = h
[PROOFSTEP]
exact hf h
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
G : C
hG : IsCodetector G
X Y : C
f : X ⟶ Y
hf : ∀ (h : X ⟶ G), ∃! h', f ≫ h' = h
H : C
hH : H ∈ {G}
h : X ⟶ H
⊢ ∃! h', f ≫ h' = h
[PROOFSTEP]
obtain rfl := Set.mem_singleton_iff.1 hH
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
X Y : C
f : X ⟶ Y
H : C
h : X ⟶ H
hG : IsCodetector H
hf : ∀ (h : X ⟶ H), ∃! h', f ≫ h' = h
hH : H ∈ {H}
⊢ ∃! h', f ≫ h' = h
[PROOFSTEP]
exact hf h
[GOAL]
C : Type u₁
inst✝² : Category.{v₁, u₁} C
D : Type u₂
inst✝¹ : Category.{v₂, u₂} D
G : C
inst✝ : ∀ (A : C), HasCoproduct fun x => G
⊢ IsSeparator G ↔ ∀ (A : C), Epi (Sigma.desc fun f => f)
[PROOFSTEP]
rw [isSeparator_def]
[GOAL]
C : Type u₁
inst✝² : Category.{v₁, u₁} C
D : Type u₂
inst✝¹ : Category.{v₂, u₂} D
G : C
inst✝ : ∀ (A : C), HasCoproduct fun x => G
⊢ (∀ ⦃X Y : C⦄ (f g : X ⟶ Y), (∀ (h : G ⟶ X), h ≫ f = h ≫ g) → f = g) ↔ ∀ (A : C), Epi (Sigma.desc fun f => f)
[PROOFSTEP]
refine' ⟨fun h A => ⟨fun u v huv => h _ _ fun i => _⟩, fun h X Y f g hh => _⟩
[GOAL]
case refine'_1
C : Type u₁
inst✝² : Category.{v₁, u₁} C
D : Type u₂
inst✝¹ : Category.{v₂, u₂} D
G : C
inst✝ : ∀ (A : C), HasCoproduct fun x => G
h : ∀ ⦃X Y : C⦄ (f g : X ⟶ Y), (∀ (h : G ⟶ X), h ≫ f = h ≫ g) → f = g
A Z✝ : C
u v : A ⟶ Z✝
huv : (Sigma.desc fun f => f) ≫ u = (Sigma.desc fun f => f) ≫ v
i : G ⟶ A
⊢ i ≫ u = i ≫ v
[PROOFSTEP]
simpa using Sigma.ι _ i ≫= huv
[GOAL]
case refine'_2
C : Type u₁
inst✝² : Category.{v₁, u₁} C
D : Type u₂
inst✝¹ : Category.{v₂, u₂} D
G : C
inst✝ : ∀ (A : C), HasCoproduct fun x => G
h : ∀ (A : C), Epi (Sigma.desc fun f => f)
X Y : C
f g : X ⟶ Y
hh : ∀ (h : G ⟶ X), h ≫ f = h ≫ g
⊢ f = g
[PROOFSTEP]
haveI := h X
[GOAL]
case refine'_2
C : Type u₁
inst✝² : Category.{v₁, u₁} C
D : Type u₂
inst✝¹ : Category.{v₂, u₂} D
G : C
inst✝ : ∀ (A : C), HasCoproduct fun x => G
h : ∀ (A : C), Epi (Sigma.desc fun f => f)
X Y : C
f g : X ⟶ Y
hh : ∀ (h : G ⟶ X), h ≫ f = h ≫ g
this : Epi (Sigma.desc fun f => f)
⊢ f = g
[PROOFSTEP]
refine' (cancel_epi (Sigma.desc fun f : G ⟶ X => f)).1 (colimit.hom_ext fun j => _)
[GOAL]
case refine'_2
C : Type u₁
inst✝² : Category.{v₁, u₁} C
D : Type u₂
inst✝¹ : Category.{v₂, u₂} D
G : C
inst✝ : ∀ (A : C), HasCoproduct fun x => G
h : ∀ (A : C), Epi (Sigma.desc fun f => f)
X Y : C
f g : X ⟶ Y
hh : ∀ (h : G ⟶ X), h ≫ f = h ≫ g
this : Epi (Sigma.desc fun f => f)
j : Discrete (G ⟶ X)
⊢ colimit.ι (Discrete.functor fun f => G) j ≫ (Sigma.desc fun f => f) ≫ f =
colimit.ι (Discrete.functor fun f => G) j ≫ (Sigma.desc fun f => f) ≫ g
[PROOFSTEP]
simpa using hh j.as
[GOAL]
C : Type u₁
inst✝² : Category.{v₁, u₁} C
D : Type u₂
inst✝¹ : Category.{v₂, u₂} D
G : C
inst✝ : ∀ (A : C), HasProduct fun x => G
⊢ IsCoseparator G ↔ ∀ (A : C), Mono (Pi.lift fun f => f)
[PROOFSTEP]
rw [isCoseparator_def]
[GOAL]
C : Type u₁
inst✝² : Category.{v₁, u₁} C
D : Type u₂
inst✝¹ : Category.{v₂, u₂} D
G : C
inst✝ : ∀ (A : C), HasProduct fun x => G
⊢ (∀ ⦃X Y : C⦄ (f g : X ⟶ Y), (∀ (h : Y ⟶ G), f ≫ h = g ≫ h) → f = g) ↔ ∀ (A : C), Mono (Pi.lift fun f => f)
[PROOFSTEP]
refine' ⟨fun h A => ⟨fun u v huv => h _ _ fun i => _⟩, fun h X Y f g hh => _⟩
[GOAL]
case refine'_1
C : Type u₁
inst✝² : Category.{v₁, u₁} C
D : Type u₂
inst✝¹ : Category.{v₂, u₂} D
G : C
inst✝ : ∀ (A : C), HasProduct fun x => G
h : ∀ ⦃X Y : C⦄ (f g : X ⟶ Y), (∀ (h : Y ⟶ G), f ≫ h = g ≫ h) → f = g
A Z✝ : C
u v : Z✝ ⟶ A
huv : (u ≫ Pi.lift fun f => f) = v ≫ Pi.lift fun f => f
i : A ⟶ G
⊢ u ≫ i = v ≫ i
[PROOFSTEP]
simpa using huv =≫ Pi.π _ i
[GOAL]
case refine'_2
C : Type u₁
inst✝² : Category.{v₁, u₁} C
D : Type u₂
inst✝¹ : Category.{v₂, u₂} D
G : C
inst✝ : ∀ (A : C), HasProduct fun x => G
h : ∀ (A : C), Mono (Pi.lift fun f => f)
X Y : C
f g : X ⟶ Y
hh : ∀ (h : Y ⟶ G), f ≫ h = g ≫ h
⊢ f = g
[PROOFSTEP]
haveI := h Y
[GOAL]
case refine'_2
C : Type u₁
inst✝² : Category.{v₁, u₁} C
D : Type u₂
inst✝¹ : Category.{v₂, u₂} D
G : C
inst✝ : ∀ (A : C), HasProduct fun x => G
h : ∀ (A : C), Mono (Pi.lift fun f => f)
X Y : C
f g : X ⟶ Y
hh : ∀ (h : Y ⟶ G), f ≫ h = g ≫ h
this : Mono (Pi.lift fun f => f)
⊢ f = g
[PROOFSTEP]
refine' (cancel_mono (Pi.lift fun f : Y ⟶ G => f)).1 (limit.hom_ext fun j => _)
[GOAL]
case refine'_2
C : Type u₁
inst✝² : Category.{v₁, u₁} C
D : Type u₂
inst✝¹ : Category.{v₂, u₂} D
G : C
inst✝ : ∀ (A : C), HasProduct fun x => G
h : ∀ (A : C), Mono (Pi.lift fun f => f)
X Y : C
f g : X ⟶ Y
hh : ∀ (h : Y ⟶ G), f ≫ h = g ≫ h
this : Mono (Pi.lift fun f => f)
j : Discrete (Y ⟶ G)
⊢ (f ≫ Pi.lift fun f => f) ≫ limit.π (Discrete.functor fun f => G) j =
(g ≫ Pi.lift fun f => f) ≫ limit.π (Discrete.functor fun f => G) j
[PROOFSTEP]
simpa using hh j.as
[GOAL]
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
G H : C
inst✝ : HasBinaryCoproduct G H
⊢ IsSeparator (G ⨿ H) ↔ IsSeparating {G, H}
[PROOFSTEP]
refine' ⟨fun h X Y u v huv => _, fun h => (isSeparator_def _).2 fun X Y u v huv => h _ _ fun Z hZ g => _⟩
[GOAL]
case refine'_1
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
G H : C
inst✝ : HasBinaryCoproduct G H
h : IsSeparator (G ⨿ H)
X Y : C
u v : X ⟶ Y
huv : ∀ (G_1 : C), G_1 ∈ {G, H} → ∀ (h : G_1 ⟶ X), h ≫ u = h ≫ v
⊢ u = v
[PROOFSTEP]
refine' h.def _ _ fun g => coprod.hom_ext _ _
[GOAL]
case refine'_1.refine'_1
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
G H : C
inst✝ : HasBinaryCoproduct G H
h : IsSeparator (G ⨿ H)
X Y : C
u v : X ⟶ Y
huv : ∀ (G_1 : C), G_1 ∈ {G, H} → ∀ (h : G_1 ⟶ X), h ≫ u = h ≫ v
g : G ⨿ H ⟶ X
⊢ coprod.inl ≫ g ≫ u = coprod.inl ≫ g ≫ v
[PROOFSTEP]
simpa using huv G (by simp) (coprod.inl ≫ g)
[GOAL]
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
G H : C
inst✝ : HasBinaryCoproduct G H
h : IsSeparator (G ⨿ H)
X Y : C
u v : X ⟶ Y
huv : ∀ (G_1 : C), G_1 ∈ {G, H} → ∀ (h : G_1 ⟶ X), h ≫ u = h ≫ v
g : G ⨿ H ⟶ X
⊢ G ∈ {G, H}
[PROOFSTEP]
simp
[GOAL]
case refine'_1.refine'_2
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
G H : C
inst✝ : HasBinaryCoproduct G H
h : IsSeparator (G ⨿ H)
X Y : C
u v : X ⟶ Y
huv : ∀ (G_1 : C), G_1 ∈ {G, H} → ∀ (h : G_1 ⟶ X), h ≫ u = h ≫ v
g : G ⨿ H ⟶ X
⊢ coprod.inr ≫ g ≫ u = coprod.inr ≫ g ≫ v
[PROOFSTEP]
simpa using huv H (by simp) (coprod.inr ≫ g)
[GOAL]
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
G H : C
inst✝ : HasBinaryCoproduct G H
h : IsSeparator (G ⨿ H)
X Y : C
u v : X ⟶ Y
huv : ∀ (G_1 : C), G_1 ∈ {G, H} → ∀ (h : G_1 ⟶ X), h ≫ u = h ≫ v
g : G ⨿ H ⟶ X
⊢ H ∈ {G, H}
[PROOFSTEP]
simp
[GOAL]
case refine'_2
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
G H : C
inst✝ : HasBinaryCoproduct G H
h : IsSeparating {G, H}
X Y : C
u v : X ⟶ Y
huv : ∀ (h : G ⨿ H ⟶ X), h ≫ u = h ≫ v
Z : C
hZ : Z ∈ {G, H}
g : Z ⟶ X
⊢ g ≫ u = g ≫ v
[PROOFSTEP]
simp only [Set.mem_insert_iff, Set.mem_singleton_iff] at hZ
[GOAL]
case refine'_2
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
G H : C
inst✝ : HasBinaryCoproduct G H
h : IsSeparating {G, H}
X Y : C
u v : X ⟶ Y
huv : ∀ (h : G ⨿ H ⟶ X), h ≫ u = h ≫ v
Z : C
g : Z ⟶ X
hZ : Z = G ∨ Z = H
⊢ g ≫ u = g ≫ v
[PROOFSTEP]
rcases hZ with (rfl | rfl)
[GOAL]
case refine'_2.inl
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
H X Y : C
u v : X ⟶ Y
Z : C
g : Z ⟶ X
inst✝ : HasBinaryCoproduct Z H
h : IsSeparating {Z, H}
huv : ∀ (h : Z ⨿ H ⟶ X), h ≫ u = h ≫ v
⊢ g ≫ u = g ≫ v
[PROOFSTEP]
simpa using coprod.inl ≫= huv (coprod.desc g 0)
[GOAL]
case refine'_2.inr
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
G X Y : C
u v : X ⟶ Y
Z : C
g : Z ⟶ X
inst✝ : HasBinaryCoproduct G Z
h : IsSeparating {G, Z}
huv : ∀ (h : G ⨿ Z ⟶ X), h ≫ u = h ≫ v
⊢ g ≫ u = g ≫ v
[PROOFSTEP]
simpa using coprod.inr ≫= huv (coprod.desc 0 g)
[GOAL]
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
G H : C
inst✝ : HasBinaryCoproduct G H
hG : IsSeparator G
⊢ {G} ⊆ {G, H}
[PROOFSTEP]
simp
[GOAL]
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
G H : C
inst✝ : HasBinaryCoproduct G H
hH : IsSeparator H
⊢ {H} ⊆ {G, H}
[PROOFSTEP]
simp
[GOAL]
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
β : Type w
f : β → C
inst✝ : HasCoproduct f
⊢ IsSeparator (∐ f) ↔ IsSeparating (Set.range f)
[PROOFSTEP]
refine' ⟨fun h X Y u v huv => _, fun h => (isSeparator_def _).2 fun X Y u v huv => h _ _ fun Z hZ g => _⟩
[GOAL]
case refine'_1
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
β : Type w
f : β → C
inst✝ : HasCoproduct f
h : IsSeparator (∐ f)
X Y : C
u v : X ⟶ Y
huv : ∀ (G : C), G ∈ Set.range f → ∀ (h : G ⟶ X), h ≫ u = h ≫ v
⊢ u = v
[PROOFSTEP]
refine' h.def _ _ fun g => colimit.hom_ext fun b => _
[GOAL]
case refine'_1
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
β : Type w
f : β → C
inst✝ : HasCoproduct f
h : IsSeparator (∐ f)
X Y : C
u v : X ⟶ Y
huv : ∀ (G : C), G ∈ Set.range f → ∀ (h : G ⟶ X), h ≫ u = h ≫ v
g : ∐ f ⟶ X
b : Discrete β
⊢ colimit.ι (Discrete.functor f) b ≫ g ≫ u = colimit.ι (Discrete.functor f) b ≫ g ≫ v
[PROOFSTEP]
simpa using huv (f b.as) (by simp) (colimit.ι (Discrete.functor f) _ ≫ g)
[GOAL]
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
β : Type w
f : β → C
inst✝ : HasCoproduct f
h : IsSeparator (∐ f)
X Y : C
u v : X ⟶ Y
huv : ∀ (G : C), G ∈ Set.range f → ∀ (h : G ⟶ X), h ≫ u = h ≫ v
g : ∐ f ⟶ X
b : Discrete β
⊢ f b.as ∈ Set.range f
[PROOFSTEP]
simp
[GOAL]
case refine'_2
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
β : Type w
f : β → C
inst✝ : HasCoproduct f
h : IsSeparating (Set.range f)
X Y : C
u v : X ⟶ Y
huv : ∀ (h : ∐ f ⟶ X), h ≫ u = h ≫ v
Z : C
hZ : Z ∈ Set.range f
g : Z ⟶ X
⊢ g ≫ u = g ≫ v
[PROOFSTEP]
obtain ⟨b, rfl⟩ := Set.mem_range.1 hZ
[GOAL]
case refine'_2.intro
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
β : Type w
f : β → C
inst✝ : HasCoproduct f
h : IsSeparating (Set.range f)
X Y : C
u v : X ⟶ Y
huv : ∀ (h : ∐ f ⟶ X), h ≫ u = h ≫ v
b : β
hZ : f b ∈ Set.range f
g : f b ⟶ X
⊢ g ≫ u = g ≫ v
[PROOFSTEP]
classical simpa using Sigma.ι f b ≫= huv (Sigma.desc (Pi.single b g))
[GOAL]
case refine'_2.intro
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
β : Type w
f : β → C
inst✝ : HasCoproduct f
h : IsSeparating (Set.range f)
X Y : C
u v : X ⟶ Y
huv : ∀ (h : ∐ f ⟶ X), h ≫ u = h ≫ v
b : β
hZ : f b ∈ Set.range f
g : f b ⟶ X
⊢ g ≫ u = g ≫ v
[PROOFSTEP]
simpa using Sigma.ι f b ≫= huv (Sigma.desc (Pi.single b g))
[GOAL]
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
β : Type w
f : β → C
inst✝ : HasCoproduct f
b : β
hb : IsSeparator (f b)
⊢ {f b} ⊆ Set.range f
[PROOFSTEP]
simp
[GOAL]
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
G H : C
inst✝ : HasBinaryProduct G H
⊢ IsCoseparator (G ⨯ H) ↔ IsCoseparating {G, H}
[PROOFSTEP]
refine' ⟨fun h X Y u v huv => _, fun h => (isCoseparator_def _).2 fun X Y u v huv => h _ _ fun Z hZ g => _⟩
[GOAL]
case refine'_1
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
G H : C
inst✝ : HasBinaryProduct G H
h : IsCoseparator (G ⨯ H)
X Y : C
u v : X ⟶ Y
huv : ∀ (G_1 : C), G_1 ∈ {G, H} → ∀ (h : Y ⟶ G_1), u ≫ h = v ≫ h
⊢ u = v
[PROOFSTEP]
refine' h.def _ _ fun g => prod.hom_ext _ _
[GOAL]
case refine'_1.refine'_1
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
G H : C
inst✝ : HasBinaryProduct G H
h : IsCoseparator (G ⨯ H)
X Y : C
u v : X ⟶ Y
huv : ∀ (G_1 : C), G_1 ∈ {G, H} → ∀ (h : Y ⟶ G_1), u ≫ h = v ≫ h
g : Y ⟶ G ⨯ H
⊢ (u ≫ g) ≫ prod.fst = (v ≫ g) ≫ prod.fst
[PROOFSTEP]
simpa using huv G (by simp) (g ≫ Limits.prod.fst)
[GOAL]
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
G H : C
inst✝ : HasBinaryProduct G H
h : IsCoseparator (G ⨯ H)
X Y : C
u v : X ⟶ Y
huv : ∀ (G_1 : C), G_1 ∈ {G, H} → ∀ (h : Y ⟶ G_1), u ≫ h = v ≫ h
g : Y ⟶ G ⨯ H
⊢ G ∈ {G, H}
[PROOFSTEP]
simp
[GOAL]
case refine'_1.refine'_2
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
G H : C
inst✝ : HasBinaryProduct G H
h : IsCoseparator (G ⨯ H)
X Y : C
u v : X ⟶ Y
huv : ∀ (G_1 : C), G_1 ∈ {G, H} → ∀ (h : Y ⟶ G_1), u ≫ h = v ≫ h
g : Y ⟶ G ⨯ H
⊢ (u ≫ g) ≫ prod.snd = (v ≫ g) ≫ prod.snd
[PROOFSTEP]
simpa using huv H (by simp) (g ≫ Limits.prod.snd)
[GOAL]
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
G H : C
inst✝ : HasBinaryProduct G H
h : IsCoseparator (G ⨯ H)
X Y : C
u v : X ⟶ Y
huv : ∀ (G_1 : C), G_1 ∈ {G, H} → ∀ (h : Y ⟶ G_1), u ≫ h = v ≫ h
g : Y ⟶ G ⨯ H
⊢ H ∈ {G, H}
[PROOFSTEP]
simp
[GOAL]
case refine'_2
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
G H : C
inst✝ : HasBinaryProduct G H
h : IsCoseparating {G, H}
X Y : C
u v : X ⟶ Y
huv : ∀ (h : Y ⟶ G ⨯ H), u ≫ h = v ≫ h
Z : C
hZ : Z ∈ {G, H}
g : Y ⟶ Z
⊢ u ≫ g = v ≫ g
[PROOFSTEP]
simp only [Set.mem_insert_iff, Set.mem_singleton_iff] at hZ
[GOAL]
case refine'_2
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
G H : C
inst✝ : HasBinaryProduct G H
h : IsCoseparating {G, H}
X Y : C
u v : X ⟶ Y
huv : ∀ (h : Y ⟶ G ⨯ H), u ≫ h = v ≫ h
Z : C
g : Y ⟶ Z
hZ : Z = G ∨ Z = H
⊢ u ≫ g = v ≫ g
[PROOFSTEP]
rcases hZ with (rfl | rfl)
[GOAL]
case refine'_2.inl
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
H X Y : C
u v : X ⟶ Y
Z : C
g : Y ⟶ Z
inst✝ : HasBinaryProduct Z H
h : IsCoseparating {Z, H}
huv : ∀ (h : Y ⟶ Z ⨯ H), u ≫ h = v ≫ h
⊢ u ≫ g = v ≫ g
[PROOFSTEP]
simpa using huv (prod.lift g 0) =≫ Limits.prod.fst
[GOAL]
case refine'_2.inr
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
G X Y : C
u v : X ⟶ Y
Z : C
g : Y ⟶ Z
inst✝ : HasBinaryProduct G Z
h : IsCoseparating {G, Z}
huv : ∀ (h : Y ⟶ G ⨯ Z), u ≫ h = v ≫ h
⊢ u ≫ g = v ≫ g
[PROOFSTEP]
simpa using huv (prod.lift 0 g) =≫ Limits.prod.snd
[GOAL]
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
G H : C
inst✝ : HasBinaryProduct G H
hG : IsCoseparator G
⊢ {G} ⊆ {G, H}
[PROOFSTEP]
simp
[GOAL]
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
G H : C
inst✝ : HasBinaryProduct G H
hH : IsCoseparator H
⊢ {H} ⊆ {G, H}
[PROOFSTEP]
simp
[GOAL]
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
β : Type w
f : β → C
inst✝ : HasProduct f
⊢ IsCoseparator (∏ f) ↔ IsCoseparating (Set.range f)
[PROOFSTEP]
refine' ⟨fun h X Y u v huv => _, fun h => (isCoseparator_def _).2 fun X Y u v huv => h _ _ fun Z hZ g => _⟩
[GOAL]
case refine'_1
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
β : Type w
f : β → C
inst✝ : HasProduct f
h : IsCoseparator (∏ f)
X Y : C
u v : X ⟶ Y
huv : ∀ (G : C), G ∈ Set.range f → ∀ (h : Y ⟶ G), u ≫ h = v ≫ h
⊢ u = v
[PROOFSTEP]
refine' h.def _ _ fun g => limit.hom_ext fun b => _
[GOAL]
case refine'_1
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
β : Type w
f : β → C
inst✝ : HasProduct f
h : IsCoseparator (∏ f)
X Y : C
u v : X ⟶ Y
huv : ∀ (G : C), G ∈ Set.range f → ∀ (h : Y ⟶ G), u ≫ h = v ≫ h
g : Y ⟶ ∏ f
b : Discrete β
⊢ (u ≫ g) ≫ limit.π (Discrete.functor f) b = (v ≫ g) ≫ limit.π (Discrete.functor f) b
[PROOFSTEP]
simpa using huv (f b.as) (by simp) (g ≫ limit.π (Discrete.functor f) _)
[GOAL]
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
β : Type w
f : β → C
inst✝ : HasProduct f
h : IsCoseparator (∏ f)
X Y : C
u v : X ⟶ Y
huv : ∀ (G : C), G ∈ Set.range f → ∀ (h : Y ⟶ G), u ≫ h = v ≫ h
g : Y ⟶ ∏ f
b : Discrete β
⊢ f b.as ∈ Set.range f
[PROOFSTEP]
simp
[GOAL]
case refine'_2
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
β : Type w
f : β → C
inst✝ : HasProduct f
h : IsCoseparating (Set.range f)
X Y : C
u v : X ⟶ Y
huv : ∀ (h : Y ⟶ ∏ f), u ≫ h = v ≫ h
Z : C
hZ : Z ∈ Set.range f
g : Y ⟶ Z
⊢ u ≫ g = v ≫ g
[PROOFSTEP]
obtain ⟨b, rfl⟩ := Set.mem_range.1 hZ
[GOAL]
case refine'_2.intro
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
β : Type w
f : β → C
inst✝ : HasProduct f
h : IsCoseparating (Set.range f)
X Y : C
u v : X ⟶ Y
huv : ∀ (h : Y ⟶ ∏ f), u ≫ h = v ≫ h
b : β
hZ : f b ∈ Set.range f
g : Y ⟶ f b
⊢ u ≫ g = v ≫ g
[PROOFSTEP]
classical simpa using huv (Pi.lift (Pi.single b g)) =≫ Pi.π f b
[GOAL]
case refine'_2.intro
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
β : Type w
f : β → C
inst✝ : HasProduct f
h : IsCoseparating (Set.range f)
X Y : C
u v : X ⟶ Y
huv : ∀ (h : Y ⟶ ∏ f), u ≫ h = v ≫ h
b : β
hZ : f b ∈ Set.range f
g : Y ⟶ f b
⊢ u ≫ g = v ≫ g
[PROOFSTEP]
simpa using huv (Pi.lift (Pi.single b g)) =≫ Pi.π f b
[GOAL]
C : Type u₁
inst✝³ : Category.{v₁, u₁} C
D : Type u₂
inst✝² : Category.{v₂, u₂} D
inst✝¹ : HasZeroMorphisms C
β : Type w
f : β → C
inst✝ : HasProduct f
b : β
hb : IsCoseparator (f b)
⊢ {f b} ⊆ Set.range f
[PROOFSTEP]
simp
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
G : C
⊢ IsDetector G ↔ ReflectsIsomorphisms (coyoneda.obj (op G))
[PROOFSTEP]
refine' ⟨fun hG => ⟨fun f hf => hG.def _ fun h => _⟩, fun h => (isDetector_def _).2 fun X Y f hf => _⟩
[GOAL]
case refine'_1
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
G : C
hG : IsDetector G
A✝ B✝ : C
f : A✝ ⟶ B✝
hf : IsIso ((coyoneda.obj (op G)).map f)
h : G ⟶ B✝
⊢ ∃! h', h' ≫ f = h
[PROOFSTEP]
rw [isIso_iff_bijective, Function.bijective_iff_existsUnique] at hf
[GOAL]
case refine'_1
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
G : C
hG : IsDetector G
A✝ B✝ : C
f : A✝ ⟶ B✝
hf : ∀ (b : (coyoneda.obj (op G)).obj B✝), ∃! a, (coyoneda.obj (op G)).map f a = b
h : G ⟶ B✝
⊢ ∃! h', h' ≫ f = h
[PROOFSTEP]
exact hf h
[GOAL]
case refine'_2
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
G : C
h : ReflectsIsomorphisms (coyoneda.obj (op G))
X Y : C
f : X ⟶ Y
hf : ∀ (h : G ⟶ Y), ∃! h', h' ≫ f = h
⊢ IsIso f
[PROOFSTEP]
suffices IsIso ((coyoneda.obj (op G)).map f) by exact @isIso_of_reflects_iso _ _ _ _ _ _ _ (coyoneda.obj (op G)) _ h
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
G : C
h : ReflectsIsomorphisms (coyoneda.obj (op G))
X Y : C
f : X ⟶ Y
hf : ∀ (h : G ⟶ Y), ∃! h', h' ≫ f = h
this : IsIso ((coyoneda.obj (op G)).map f)
⊢ IsIso f
[PROOFSTEP]
exact @isIso_of_reflects_iso _ _ _ _ _ _ _ (coyoneda.obj (op G)) _ h
[GOAL]
case refine'_2
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
G : C
h : ReflectsIsomorphisms (coyoneda.obj (op G))
X Y : C
f : X ⟶ Y
hf : ∀ (h : G ⟶ Y), ∃! h', h' ≫ f = h
⊢ IsIso ((coyoneda.obj (op G)).map f)
[PROOFSTEP]
rwa [isIso_iff_bijective, Function.bijective_iff_existsUnique]
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
G : C
⊢ IsCodetector G ↔ ReflectsIsomorphisms (yoneda.obj G)
[PROOFSTEP]
refine' ⟨fun hG => ⟨fun f hf => _⟩, fun h => (isCodetector_def _).2 fun X Y f hf => _⟩
[GOAL]
case refine'_1
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
G : C
hG : IsCodetector G
A✝ B✝ : Cᵒᵖ
f : A✝ ⟶ B✝
hf : IsIso ((yoneda.obj G).map f)
⊢ IsIso f
[PROOFSTEP]
refine' (isIso_unop_iff _).1 (hG.def _ _)
[GOAL]
case refine'_1
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
G : C
hG : IsCodetector G
A✝ B✝ : Cᵒᵖ
f : A✝ ⟶ B✝
hf : IsIso ((yoneda.obj G).map f)
⊢ ∀ (h : B✝.unop ⟶ G), ∃! h', f.unop ≫ h' = h
[PROOFSTEP]
rwa [isIso_iff_bijective, Function.bijective_iff_existsUnique] at hf
[GOAL]
case refine'_2
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
G : C
h : ReflectsIsomorphisms (yoneda.obj G)
X Y : C
f : X ⟶ Y
hf : ∀ (h : X ⟶ G), ∃! h', f ≫ h' = h
⊢ IsIso f
[PROOFSTEP]
rw [← isIso_op_iff]
[GOAL]
case refine'_2
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
G : C
h : ReflectsIsomorphisms (yoneda.obj G)
X Y : C
f : X ⟶ Y
hf : ∀ (h : X ⟶ G), ∃! h', f ≫ h' = h
⊢ IsIso f.op
[PROOFSTEP]
suffices IsIso ((yoneda.obj G).map f.op) by exact @isIso_of_reflects_iso _ _ _ _ _ _ _ (yoneda.obj G) _ h
[GOAL]
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
G : C
h : ReflectsIsomorphisms (yoneda.obj G)
X Y : C
f : X ⟶ Y
hf : ∀ (h : X ⟶ G), ∃! h', f ≫ h' = h
this : IsIso ((yoneda.obj G).map f.op)
⊢ IsIso f.op
[PROOFSTEP]
exact @isIso_of_reflects_iso _ _ _ _ _ _ _ (yoneda.obj G) _ h
[GOAL]
case refine'_2
C : Type u₁
inst✝¹ : Category.{v₁, u₁} C
D : Type u₂
inst✝ : Category.{v₂, u₂} D
G : C
h : ReflectsIsomorphisms (yoneda.obj G)
X Y : C
f : X ⟶ Y
hf : ∀ (h : X ⟶ G), ∃! h', f ≫ h' = h
⊢ IsIso ((yoneda.obj G).map f.op)
[PROOFSTEP]
rwa [isIso_iff_bijective, Function.bijective_iff_existsUnique]
|
# git c1ed2e+ abb2cfe e53b6d4
# with awsgpu cpu gpu cpu af kn kn+gc1 kn+gc2 kn+gc3
#1 mul 0.67 0.95 0.56 0.94 0.56 0.56 0.56 0.56 0.56
#2 bias 0.71 1.05 0.59 1.05 0.56 0.59 0.59 0.59 0.59
#3 max 0.75 1.34 0.62 1.34 0.56 0.63 0.62 0.62 0.62
#4 mul 0.81 1.43 0.75 1.44 0.74 0.75 0.75 0.75 0.75
#5 bias 0.85 1.48 0.78 1.48 0.75 0.79 0.78 0.78 0.78
#6 sub 0.89 1.49 0.82 1.49 0.81 0.82 0.81 0.81 0.82
#7 sq 0.92 1.62 0.85 1.62 0.93 0.85 0.84 0.84 0.85
#8 sum 1.21 1.63 1.06±01 1.62 1.22 1.19 1.07 1.08 1.07
#9 forw 1.51 1.96±04 1.18±02 2.47 2.60 2.25 1.67 1.46 1.68
#A grad 2.89 4.40±12 2.10±22 5.52 6.53 5.86 3.52 3.62 3.30
#
# (*) timeall(weights(), weights(64), data(), 10)
# (*) af results with gc_enable=false and sync()
# (*) kn uses `similar`, +gc1 runs tmpfree every epoch, +gc2 runs tmpfree every iteration (minibatch), +gc3 uses KnetArray.
# AF: The forw records arrays preventing their reuse?
# AF: They are merging consecutive ops in one kernel, which breaks down with forw?
using AutoGrad, GZip, Compat
using AutoGrad: forward_pass
fun = []
push!(fun,(w,x,y)->w[1]*x)
push!(fun,(w,x,y)->w[1]*x.+w[2])
push!(fun,(w,x,y)->max(0,w[1]*x.+w[2]))
push!(fun,(w,x,y)->w[3]*max(0,w[1]*x.+w[2]))
push!(fun,(w,x,y)->w[3]*max(0,w[1]*x.+w[2]).+w[4])
push!(fun,(w,x,y)->((w[3]*max(0,w[1]*x.+w[2]).+w[4])-y))
push!(fun,(w,x,y)->(((w[3]*max(0,w[1]*x.+w[2]).+w[4])-y).^2))
fun1 = (w,x,y)->sum(((w[3]*max(0,w[1]*x.+w[2]).+w[4])-y).^2)
push!(fun, fun1)
push!(fun,(w,x,y)->forward_pass(fun1,(w,x,y),(),1))
push!(fun,grad(fun1))
function timeall(w=w2,d=d0,t=10)
for i=1:length(fun)
printfun(fun[i])
for j=1:3
sleep(2)
@time loop(fun[i],w,d,t)
end
end
end
function loop(f,w,d,t)
for i in 1:t
for (x,y) in d
f(w,x,y)
end
end
end
function weights(h...; seed=nothing)
seed==nothing || srand(seed)
w = Array{Float32}[]
x = 28*28
for y in [h..., 10]
push!(w, convert(Array{Float32}, 0.1*randn(y,x)))
push!(w, zeros(Float32,y))
x = y
end
return w
end
function data()
info("Loading data...")
xshape(a)=reshape(a./255f0,784,div(length(a),784))
yshape(a)=(a[a.==0]=10; full(sparse(convert(Vector{Int},a),1:length(a),1f0)))
xtrn = xshape(gzload("train-images-idx3-ubyte.gz")[17:end])
ytrn = yshape(gzload("train-labels-idx1-ubyte.gz")[9:end])
#xtst = xshape(gzload("t10k-images-idx3-ubyte.gz")[17:end])
#ytst = yshape(gzload("t10k-labels-idx1-ubyte.gz")[9:end])
batch(xtrn,ytrn,100)
end
function gzload(file; path=joinpath(AutoGrad.datapath,file), url="http://yann.lecun.com/exdb/mnist/$file")
isfile(path) || download(url, path)
f = gzopen(path)
a = @compat read(f)
close(f)
return(a)
end
function batch(x, y, batchsize)
data = Any[]
nx = size(x,2)
for i=1:batchsize:nx
j=min(i+batchsize-1,nx)
push!(data, (x[:,i:j], y[:,i:j]))
end
return data
end
function printfun(x)
if isdefined(x,:code)
println(Base.uncompressed_ast(x.code).args[3].args[2].args[1])
else
println(x)
end
end
if !isdefined(:d0)
d0 = data()
w1 = weights(seed=1)
w2 = weights(64;seed=1)
end
:ok
# julia> timeall()
# (Main.getindex)(w,1) * x
# 0.956369 seconds (30.00 k allocations: 147.400 MB, 0.97% gc time)
# 0.947161 seconds (30.00 k allocations: 147.400 MB, 0.78% gc time)
# 0.947129 seconds (30.00 k allocations: 147.400 MB, 0.66% gc time)
# (Main.getindex)(w,1) * x .+ (Main.getindex)(w,2)
# 1.055720 seconds (144.00 k allocations: 297.913 MB, 1.41% gc time)
# 1.054730 seconds (144.00 k allocations: 297.913 MB, 1.41% gc time)
# 1.054276 seconds (144.00 k allocations: 297.913 MB, 1.31% gc time)
# (Main.max)(0,(Main.getindex)(w,1) * x .+ (Main.getindex)(w,2))
# 1.356736 seconds (168.03 k allocations: 445.315 MB, 1.63% gc time)
# 1.353720 seconds (168.00 k allocations: 445.313 MB, 1.55% gc time)
# 1.353312 seconds (168.00 k allocations: 445.313 MB, 1.56% gc time)
# (Main.getindex)(w,3) * (Main.max)(0,(Main.getindex)(w,1) * x .+ (Main.getindex)(w,2))
# 1.443865 seconds (180.04 k allocations: 468.844 MB, 1.62% gc time)
# 1.440977 seconds (180.00 k allocations: 468.842 MB, 1.56% gc time)
# 1.441619 seconds (180.00 k allocations: 468.842 MB, 1.63% gc time)
# (Main.getindex)(w,3) * (Main.max)(0,(Main.getindex)(w,1) * x .+ (Main.getindex)(w,2)) .+ (Main.getindex)(w,4)
# 1.486593 seconds (288.04 k allocations: 495.852 MB, 1.63% gc time)
# 1.487364 seconds (288.00 k allocations: 495.850 MB, 1.69% gc time)
# 1.485600 seconds (288.00 k allocations: 495.850 MB, 1.69% gc time)
# ((Main.getindex)(w,3) * (Main.max)(0,(Main.getindex)(w,1) * x .+ (Main.getindex)(w,2)) .+ (Main.getindex)(w,4)) - y
# 1.498100 seconds (300.04 k allocations: 519.382 MB, 1.70% gc time)
# 1.498842 seconds (300.00 k allocations: 519.379 MB, 1.78% gc time)
# 1.497004 seconds (300.00 k allocations: 519.379 MB, 1.78% gc time)
# (((Main.getindex)(w,3) * (Main.max)(0,(Main.getindex)(w,1) * x .+ (Main.getindex)(w,2)) .+ (Main.getindex)(w,4)) - y) .^ 2
# 1.634301 seconds (318.04 k allocations: 543.277 MB, 1.73% gc time)
# 1.631012 seconds (318.00 k allocations: 543.274 MB, 1.66% gc time)
# 1.632553 seconds (318.00 k allocations: 543.274 MB, 1.96% gc time)
# (Main.sum)((((Main.getindex)(w,3) * (Main.max)(0,(Main.getindex)(w,1) * x .+ (Main.getindex)(w,2)) .+ (Main.getindex)(w,4)) - y) .^ 2)
# 1.636876 seconds (324.00 k allocations: 543.365 MB, 1.74% gc time)
# 1.635672 seconds (324.00 k allocations: 543.365 MB, 1.74% gc time)
# 1.636505 seconds (324.00 k allocations: 543.365 MB, 1.74% gc time)
# (Main.forward_pass)(Main.fun1,(top(tuple))(w,x,y),(top(tuple))(),1)
# 2.109984 seconds (1.45 M allocations: 601.044 MB, 1.89% gc time)
# 2.110840 seconds (1.45 M allocations: 601.044 MB, 1.91% gc time)
# 2.109753 seconds (1.45 M allocations: 601.044 MB, 1.83% gc time)
# gradfun
# 4.794647 seconds (3.39 M allocations: 2.200 GB, 2.58% gc time)
# 4.788467 seconds (3.39 M allocations: 2.200 GB, 2.54% gc time)
# 4.790467 seconds (3.39 M allocations: 2.200 GB, 2.57% gc time)
# julia> include(Pkg.dir("Knet/test/profile_kn.jl"))
# before d0kn,w2kn (4934356992,:cuda_ptrs,0)
# after d0kn,w2kn (4720431104,(4000,600,0),(200704,1,0),(2560,1,0),(40,1,0),(256,1,0),(313600,600,0),:cuda_ptrs,0)
# 1(Main.getindex)(w,1) * x
# 0.615343 seconds (278.52 k allocations: 10.871 MB)
# 0.559348 seconds (222.00 k allocations: 8.331 MB)
# 0.559312 seconds (222.00 k allocations: 8.331 MB)
# 2(Main.getindex)(w,1) * x .+ (Main.getindex)(w,2)
# 1.278285 seconds (374.67 k allocations: 14.424 MB)
# 0.593602 seconds (330.00 k allocations: 12.268 MB)
# 0.592977 seconds (312.00 k allocations: 11.627 MB)
# 3(Main.max)(0,(Main.getindex)(w,1) * x .+ (Main.getindex)(w,2))
# 0.632009 seconds (371.78 k allocations: 13.691 MB)
# 0.623921 seconds (348.00 k allocations: 12.817 MB)
# 0.624452 seconds (366.00 k allocations: 13.458 MB)
# 4(Main.getindex)(w,3) * (Main.max)(0,(Main.getindex)(w,1) * x .+ (Main.getindex)(w,2))
# 0.755348 seconds (528.04 k allocations: 20.327 MB)
# 0.752531 seconds (510.00 k allocations: 19.684 MB)
# 0.752733 seconds (528.00 k allocations: 20.325 MB)
# 5(Main.getindex)(w,3) * (Main.max)(0,(Main.getindex)(w,1) * x .+ (Main.getindex)(w,2)) .+ (Main.getindex)(w,4)
# 0.787278 seconds (618.04 k allocations: 23.623 MB)
# 0.784750 seconds (618.00 k allocations: 23.621 MB)
# 0.785054 seconds (618.00 k allocations: 23.621 MB)
# 6((Main.getindex)(w,3) * (Main.max)(0,(Main.getindex)(w,1) * x .+ (Main.getindex)(w,2)) .+ (Main.getindex)(w,4)) - y
# 0.852492 seconds (705.88 k allocations: 27.000 MB)
# 0.815621 seconds (654.00 k allocations: 24.811 MB)
# 0.815684 seconds (654.00 k allocations: 24.811 MB)
# 7(((Main.getindex)(w,3) * (Main.max)(0,(Main.getindex)(w,1) * x .+ (Main.getindex)(w,2)) .+ (Main.getindex)(w,4)) - y) .^ 2
# 0.853973 seconds (713.80 k allocations: 26.875 MB)
# 0.845969 seconds (690.00 k allocations: 26.001 MB)
# 0.846093 seconds (690.00 k allocations: 26.001 MB)
# 8(Main.sum)((((Main.getindex)(w,3) * (Main.max)(0,(Main.getindex)(w,1) * x .+ (Main.getindex)(w,2)) .+ (Main.getindex)(w,4)) - y) .^ 2)
# 1.065069 seconds (697.55 k allocations: 26.163 MB)
# 1.059029 seconds (696.00 k allocations: 26.093 MB)
# 1.058951 seconds (696.00 k allocations: 26.093 MB)
# 9(Main.forward_pass)(Main.fun1,(top(tuple))(w,x,y),(top(tuple))(),1)
# 1.606405 seconds (2.22 M allocations: 101.994 MB)
# 1.270358 seconds (1.90 M allocations: 87.799 MB)
# 1.270771 seconds (1.90 M allocations: 87.799 MB)
# 10gradfun
# 4.155291 seconds (4.51 M allocations: 188.382 MB)
# 2.494650 seconds (4.09 M allocations: 171.112 MB)
# 2.528842 seconds (4.09 M allocations: 171.112 MB)
|
# Glosten-Milgrom Model for security market
## Deriving the model
```python
%matplotlib inline
#import the needed packages
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.figure import Figure
import sympy as sm
import pandas as pd
#from local file import plotterwindow
from plotter import PlotterWindow
#use pretty print
sm.init_printing(use_unicode=True)
```
we create the equation symbols needed
```python
mu_t = sm.symbols("mu_t")
mu_t1 = sm.symbols("mu_t-1")
b_t = sm.symbols("b_t")
a_t = sm.symbols("a_t")
p_t = sm.symbols("P_t")
v = sm.symbols("v")
v_h = sm.symbols("v^H")
v_l = sm.symbols("v^L")
d_t = sm.symbols("d_t")
sd_t = sm.symbols("s(d_t)")
s_a = sm.symbols("s_t^a")
s_b = sm.symbols("s_t^b")
theta_t = sm.symbols("theta_t")
theta_t1 = sm.symbols("theta_t-1")
pi = sm.symbols("pi")
beta_b = sm.symbols("beta_B")
beta_s = sm.symbols("beta_S")
spib = sm.symbols("Pi_buy")
spis = sm.symbols("Pi_sell")
theta_p=sm.symbols("theta^+")
theta_m=sm.symbols("theta^-")
```
There is a single market with a single dealer. The dealer buys and sells a single security at the ask price $a_t$ and bid price $b_t$. In every time period $t$ he takes a single order from a single trader. The order is denoted $d_t$ and takes the value 1 for a buy order, -1 for a sell order and 0 for no order.
On the market there are two types of traders: informed traders and liquidity traders. Informed traders know the true value of the security and seek to maximize their trading profit, so they only buy if the value is higher than the ask price and only sell if the value is lower than the bid price.
Liquidity traders do not have information about the security, or rather, they do not care. These traders buy or sell the security independently of its value. Among liquidity traders you find traders who either seek to diversify their portfolio, and thus buy, or who need to liquify some assets. A liquidity trader buys with probability $\beta_B$ and sells with probability $\beta_S$.
A share $\pi$ of the traders are informed traders.
On the market information is unequal. The informed traders have private information about the value of the security, while the dealer only has access to public information about the market, such as the ratio of informed traders. The dealer does, however, receive information with each trade order: an informed trader would never sell when $v>a_t$, so a surplus of buy orders indicates $v=v^H$.
Before we start on our model equations we make two assumptions:<br> i) the dealer is risk neutral and operates in a competitive market; <br> ii) there are no trading costs on orders in the market.
To simplify our model we let the value of the security $v$ be binary, taking the value $v^H$ or $v^L$, the superscripts denoting high and low value.
The dealer holds the belief $\theta_t$, his estimate of the probability $P(v=v^H)$. From this belief he forms his expected value of the security $v$, denoted $\mu_t$:
```python
sm.Eq(mu_t,theta_t*v_h+(1-theta_t)*v_l)
```
The dealer sets his ask and bid price based on his expectation of $v$ conditional on the information $\Omega_{t-1}$ and on whether he receives a buy or a sell order.
$$a_t=E(v\mid \Omega_{t-1} , d_t=1)$$
$$b_t=E(v\mid \Omega_{t-1} , d_t=-1)$$
Further, since the dealer is in a competitive market, he sets the ask and bid price so that his expected profit is zero. Since the dealer does not know the true value of the security, he loses on every trade made with an informed trader; the deficit is made up by trading with liquidity traders. Based on his expectation of the security and his knowledge of the market, he derives the following profit functions
```python
sm.Eq(theta_t1*pi*(a_t-v_h)+beta_b*(1-pi)*(a_t-mu_t1), sm.symbols("Pi^buy_t"))
```
```python
sm.Eq((1-theta_t1)*pi*(v_l-b_t)+beta_s*(1-pi)*(mu_t1-b_t),sm.symbols("Pi^sell_t"))
```
Setting each expected profit to zero and solving, he derives his optimal ask and bid price.
```python
sm.Eq(mu_t1+(pi*theta_t1*(1-theta_t1))/(pi*theta_t1+(1-pi)*beta_b)*(v_h-v_l), a_t)
```
```python
sm.Eq(mu_t1-(pi*theta_t1*(1-theta_t1))/(pi*(1-theta_t1)+(1-pi)*beta_s)*(v_h-v_l),b_t)
```
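These closed forms follow from the zero-profit conditions. As a quick check (a minimal sketch reusing the symbols defined above, not part of the model code), we can let sympy solve $\Pi^{buy}_t=0$ for $a_t$ and confirm that the markup matches the expression above:
```python
#solve the zero-profit condition for the ask price as a sanity check
profit_buy = theta_t1*pi*(a_t-v_h)+beta_b*(1-pi)*(a_t-mu_t1)
ask_solution = sm.solve(sm.Eq(profit_buy,0), a_t)[0].subs(mu_t1, theta_t1*v_h+(1-theta_t1)*v_l)
markup = sm.simplify(ask_solution-(theta_t1*v_h+(1-theta_t1)*v_l))
#the difference to the markup stated above simplifies to zero
sm.simplify(markup-(pi*theta_t1*(1-theta_t1))/(pi*theta_t1+(1-pi)*beta_b)*(v_h-v_l))
```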
The ask and bid price equal the dealer's expected value of the security plus a markup and minus a discount, respectively. We denote the markup $s_t^a$ and the discount $s_t^b$.
```python
sm.Eq((pi*theta_t1*(1-theta_t1))/(pi*theta_t1+(1-pi)*beta_b)*(v_h-v_l), s_a)
sm.Eq(mu_t1+s_a, a_t)
```
```python
sm.Eq((pi*theta_t1*(1-theta_t1))/(pi*(1-theta_t1)+(1-pi)*beta_s)*(v_h-v_l), s_b)
sm.Eq(mu_t1-s_b, b_t)
```
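To get a feel for the magnitudes, here is a small numerical illustration (the parameter values are our own illustrative choices): with $\pi=0.2$, $\theta_{t-1}=0.5$, $\beta_B=\beta_S=0.5$ and $(v^L,v^H)=(0,10)$, both spreads equal 1, so the quotes bracket $\mu_{t-1}=5$ symmetrically.
```python
#numerical illustration of the quotes for illustrative parameter values
vals = {pi:0.2, theta_t1:0.5, beta_b:0.5, beta_s:0.5, v_h:10, v_l:0}
mu_num = (theta_t1*v_h+(1-theta_t1)*v_l).subs(vals)
sa_num = ((pi*theta_t1*(1-theta_t1))/(pi*theta_t1+(1-pi)*beta_b)*(v_h-v_l)).subs(vals)
sb_num = ((pi*theta_t1*(1-theta_t1))/(pi*(1-theta_t1)+(1-pi)*beta_s)*(v_h-v_l)).subs(vals)
print(mu_num+sa_num, mu_num-sb_num) #ask = 6, bid = 4
```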
Depending on the trade order the dealer receives in period $t$, his belief about the value of the security in period $t+1$ is updated.
$$\theta_t^{+}\equiv P(v=v^H|\Omega_{t-1},d_t=1)$$
$$\theta_t^{-}\equiv P(v=v^H|\Omega_{t-1},d_t=-1)$$
```python
sm.Eq((pi+(1-pi)*beta_b)/(pi*theta_t1+(1-pi)*beta_b)*theta_t1,theta_p)
```
```python
sm.Eq(((1-pi)*beta_s)/(pi*(1-theta_t1)+(1-pi)*beta_s)*theta_t1,theta_m)
```
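These updates are simply Bayes' rule applied to the order flow. For the buy case (a sketch of the derivation, using that an informed trader buys whenever $v=v^H$):
$$\theta_t^{+}=\frac{P(d_t=1\mid v=v^H)\,\theta_{t-1}}{P(d_t=1)}=\frac{\left(\pi+(1-\pi)\beta_B\right)\theta_{t-1}}{\pi\theta_{t-1}+(1-\pi)\beta_B}$$
Analogously, only liquidity traders sell when $v=v^H$, which gives
$$\theta_t^{-}=\frac{(1-\pi)\beta_S\,\theta_{t-1}}{\pi(1-\theta_{t-1})+(1-\pi)\beta_S}$$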
With his updated beliefs about the security, the dealer updates his expectation of the value of the security for the next period. If the dealer received a buy order in period $t$, his expectation $\mu$ can be shown to satisfy the following relation
$$\mu_t^{+}-\mu_{t-1}=s_t^a$$
The dealer's expectation after any order is therefore
$$\mu_t=\mu_{t-1}+s(d_t)d_t$$
with $s(d_t)\equiv \begin{cases} s_t^a & d_t=1 \\ s_t^b & d_t=-1 \end{cases}$
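This relation between the belief revision and the markup can be checked symbolically (a small sketch reusing the symbols defined above):
```python
#verify that the expectation revision after a buy equals the markup s_t^a
theta_plus = (pi+(1-pi)*beta_b)/(pi*theta_t1+(1-pi)*beta_b)*theta_t1
mu_plus = theta_plus*v_h+(1-theta_plus)*v_l
mu_prev = theta_t1*v_h+(1-theta_t1)*v_l
s_a_expr = (pi*theta_t1*(1-theta_t1))/(pi*theta_t1+(1-pi)*beta_b)*(v_h-v_l)
#this difference simplifies to zero
sm.simplify(mu_plus-mu_prev-s_a_expr)
```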
## Simulating the model
Solving for the model equilibrium is unfortunately not straightforward. The model has local equilibria in every period, which maximize the profit of informed traders and ensure zero profit for the dealer. In order to find the model equilibrium, we use the algorithm below.
**How to simulate the Glosten-Milgrom model**
1. choose start values and parameters. Set threshold value $\epsilon$
2. calculate value expectation, then ask/bid price.
3. determine trader type, then order type
4. update dealer beliefs
5. repeat steps 2-4 until the spread $a_t - b_t$ falls below $\epsilon$
The flow chart below shows the detailed steps of the simulation; we followed it to create the function simulation().
In the code below we define the function simulation, which simulates our model in accordance with the flow chart above.
```python
def simulation(distribution=(0,1), decision="v_h", ratio=0.2, uninformed=0.5, startvalue=0.5, iterations = 500, seed=5000, epsilon=10**-5, shockperiod = None, shock={}):
#define constants
v_l, v_h = distribution
pi = ratio
beta_b = uninformed
beta_s = 1-beta_b
#determine realized value of v
v = decision
#allocate space
values={}
ratiovalues = []
iteration = []
thetavalues = np.empty(iterations)
muvalues = np.empty(iterations)
askvalues = np.empty(iterations)
bidvalues = np.empty(iterations)
gapvalues = np.empty(iterations)
pivalues = np.empty(iterations)
decisionvalues = np.empty(iterations)
#simulation settings
thetavalues[0]=startvalue
theta_t1 = startvalue
N = iterations
np.random.seed(seed)
break_index = 0
for i in range(N):
if i==shockperiod:
if shock != {}:
if "Public" not in shock:
if shock["Private"]==1:
v="v_h"
if shock["Private"]==0:
v="v_l"
elif "Private" not in shock:
v_l, v_h = shock["Public"]
v = decision
else:
v_l, v_h = shock["Public"]
if shock["Private"]==1:
v="v_h"
if shock["Private"]==0:
v="v_l"
if v=="v_h":
v=v_h
elif v=="v_l":
v=v_l
mu_t1 = theta_t1*v_h+(1-theta_t1)*v_l
muvalues[i] = mu_t1
#calculate markup/discount
s_a = (pi*theta_t1*(1-theta_t1))/(pi*theta_t1+(1-pi)*beta_b)*(v_h-v_l)
s_b = (pi*theta_t1*(1-theta_t1))/(pi*(1-theta_t1)+(1-pi)*beta_s)*(v_h-v_l)
#calculate ask/bid price
askvalues[i] = a_t = mu_t1 + s_a
bidvalues[i] = b_t = mu_t1 - s_b
#calculate gap
gapvalues[i] = gap_t = a_t - b_t
#realize pi
trader = np.random.binomial(1,pi)
pivalues[i] = trader
        #if trader is informed: buy if the value exceeds the ask, sell if it is below the bid, otherwise no order
        if trader == 1:
            if v == v_h and v_h > a_t:
                d_t = 1
            elif v == v_l and v_l < b_t:
                d_t = -1
            else:
                d_t = 0
#if trader is uninformed
if trader == 0:
buysell = np.random.binomial(1,beta_b)
if buysell == 1:
d_t = 1
else:
d_t = -1
decisionvalues[i] = d_t
#update theta
if d_t == 1:
theta_t = ((1+pi)*beta_b)/(pi*theta_t1+(1-pi)*beta_b)*theta_t1
theta_t1 = theta_t
elif d_t == -1:
theta_t = ((1-pi)*beta_b)/(pi*(1-theta_t1)+(1-pi)*beta_b)*theta_t1
theta_t1 = theta_t
if i<iterations-1:
thetavalues[i+1] = theta_t
ratiovalues.append(str(ratio))
iteration.append(int(i))
#off by one error
break_index=i+1
if gap_t<epsilon or i == N-1:
values.update({"Theta": theta_t,"Bid": b_t, "Ask": (a_t), "Mu": mu_t1, "Equilibrium period": break_index-1})
break
dataframe = pd.DataFrame()
dataframe["Iteration"] = iteration
dataframe["ratio"] = ratiovalues
dataframe["theta"] = thetavalues[0:break_index]
dataframe["mu"] = muvalues[0:break_index]
dataframe["ask"] = askvalues[0:break_index]
dataframe["bid"] = bidvalues[0:break_index]
dataframe["spread"] = gapvalues[0:break_index]
dataframe["trader"] = pivalues[0:break_index]
dataframe["order"] = decisionvalues[0:break_index]
return dataframe, values
```
The simulation function relies on several random draws, so a single run of the function will not produce credible results. To get credible values for the equilibrium we create the function numericalsolution, which runs the simulation function N times with random seeds. The output is the mean over the N equilibria.
```python
def numericalsolution(N=1000, ratio=0.15):
    data = pd.DataFrame()
    for i in range(N):
        seed = int(np.random.uniform(0, 10000))
        dataframe, values = simulation(distribution=(0,10), ratio=ratio, startvalue=0.5, iterations=1001, seed=seed, epsilon=5**-5)
        for key in values:
            data.loc[i, key] = values[key]
    meanvalues = {}
    for key in list(data.columns):
        meanvalues[key] = data[key].mean()
    return meanvalues
```
```python
numericalsolution()
```
{'Theta': 0.9999522089136563,
'Bid': 9.99937042512405,
'Ask': 9.999656044732731,
'Mu': 9.999534654405569,
'Equilibrium period': 216.922}
With a ratio of 0.15 we reach equilibrium after roughly 217 iterations on average. As expected, $\theta_t$ converges towards 1, while the ask price, bid price and expected value converge towards the true value of the security.
In order to examine the influence of the ratio of informed traders, we construct a for loop that runs numericalsolution for $\pi \in \{0.01, 0.02, \ldots, 0.99\}$.
The code below is quite taxing to run, so we have included a line that prevents you from running it accidentally. Our repository contains a data file with the output of this code, which we use to make the following graph. To actually run the loop, comment out the `import nothing.py` line.
```python
#To prevent you from accidentally running the loop, the import below fails on purpose.
import nothing.py
#make lists
ratiolist = []
periodlist = []
#loop the function numericalsolution
for i in range(99):
    num = (i+1)/100
    values = numericalsolution(ratio=num)
    ratiolist.append(num)
    periodlist.append(values["Equilibrium period"])
```
```python
#if you ran the above code, uncomment the three lines below to graph the data
#plt.plot(ratiolist,periodlist)
#plt.xlabel("ratio")
#plt.ylabel("Period")
#alternative:
#read data file
doomloopdata = pd.read_csv("./doomloopdata.csv")
doomloopdata.columns = ["periodlist","ratiolist"]
#plot the data
doomloopdata.plot("ratiolist","periodlist")
plt.xlabel("ratio")
plt.ylabel("Period")
```
The speed of convergence towards equilibrium is largely determined by the ratio of informed traders. In our model the dealer learns about the true value of the security from the trade flow, and with more informed traders in the market the trade flow carries more information.
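A quick way to see this in numbers is to compare the average equilibrium period for a low and a high ratio directly; the replication count below is kept deliberately small to limit runtime, so the figures are rougher than those from the full loop above.
```python
# Rough comparison of convergence speed (assumed smaller N for speed).
slow = numericalsolution(N=100, ratio=0.05)
fast = numericalsolution(N=100, ratio=0.60)
print(slow["Equilibrium period"], fast["Equilibrium period"])  # expect slow >> fast
```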
## Aspects of the model
#### Seed
In our simulation function we rely on random draws to simulate the model. This short chapter demonstrates the randomness of a single run of the simulation function. We call the function with different seeds but all other settings equal.
```python
#calling simulation function
thetarun1 = simulation(ratio=0.15, iterations=201,seed=4795)
thetarun2 = simulation(ratio=0.15, iterations=201,seed=6279)
thetarun3 = simulation(ratio=0.15, iterations=201,seed=6130)
thetarun4 = simulation(ratio=0.15, iterations=201,seed=9352)
thetarun5 = simulation(ratio=0.15, iterations=201,seed=5059)
#saving return data
thetadata1, thetavalues1 = thetarun1
thetadata2, thetavalues2 = thetarun2
thetadata3, thetavalues3 = thetarun3
thetadata4, thetavalues4 = thetarun4
thetadata5, thetavalues5 = thetarun5
fig = plt.figure(dpi=100)
ax2 = fig.add_subplot(1,1,1)
ax2.plot(thetadata1["theta"], label="1")
ax2.plot(thetadata2["theta"], label="2")
ax2.plot(thetadata3["theta"], label="3")
ax2.plot(thetadata4["theta"], label="4")
ax2.plot(thetadata5["theta"], label="")
ax2.grid(True)
ax2.legend()
ax2.set_title("Theta")
```
As we can see in the graph above, all the simulations converge towards equilibrium. The speed of convergence, however, varies considerably across seeds.
#### Ratio of informed traders
We call the simulation for different values of $\pi$ and plot the bid price, ask price and expected value $\mu_t$ in our custom interactive graphwindow class. In the graph window you are able to move around in the plot and examine the development of the model for different $\pi$ values.
```python
#call simulation function 5 times with different ratios
pirun1 = simulation(ratio=0.10, iterations=1001, seed=404)
pirun2 = simulation(ratio=0.15, iterations=1001, seed=404)
pirun3 = simulation(ratio=0.25, iterations=1001, seed=404)
pirun4 = simulation(ratio=0.5, iterations=201, seed=404)
pirun5 = simulation(ratio=0.9, iterations=201, seed=404)
#save data
pirundata1, pirunvalues1 = pirun1
pirundata2, pirunvalues2 = pirun2
pirundata3, pirunvalues3 = pirun3
pirundata4, pirunvalues4 = pirun4
pirundata5, pirunvalues5 = pirun5
#merge data
pirunmerged = pd.concat([pirundata1, pirundata2, pirundata3, pirundata4, pirundata5])
#call custom graphwindow class with graphtype=piplot
pirungraphwindow = PlotterWindow(data = pirunmerged, slicename = "ratio", xvariable = "Iteration", yvariablelist = ["mu","ask","bid"], graphtype ="piplot")
pirungraphwindow.start()
```
The code below opens another instance of our graphwindow. The plots are candlestick plots, which map the difference between ask and bid prices.
```python
#call custom graphwindow class with graphtype=candlestick
pirungraphwindow2 = PlotterWindow(data = pirunmerged, slicename = "ratio", graphtype = "candlestick")
pirungraphwindow2.start()
```
As seen in both the piplot and candlestick graphwindows, the initial spread between ask and bid prices depends heavily on the ratio $\pi$. Higher values of $\pi$ lead to a higher initial spread: the dealer has to adjust his prices for the larger share of orders from informed traders to ensure zero profit.
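This relationship can also be read directly off the pricing formulas: in period 0 with $\theta_0 = 0.5$ the spread is $s_0^a + s_0^b$, which is increasing in $\pi$. A minimal check, assuming the same parameter values as our simulations:
```python
# Initial spread s_a + s_b at theta = 0.5 for a range of pi values.
theta, beta_b, beta_s, v_l, v_h = 0.5, 0.5, 0.5, 0, 10
for pi in (0.10, 0.25, 0.50, 0.90):
    s_a = (pi*theta*(1-theta))/(pi*theta+(1-pi)*beta_b)*(v_h-v_l)
    s_b = (pi*theta*(1-theta))/(pi*(1-theta)+(1-pi)*beta_s)*(v_h-v_l)
    print(pi, s_a + s_b)   # the spread widens as pi grows
```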
## Shocks to the model
Simply simulating the model only takes us so far. In this chapter we introduce several types of shocks to the market and graph the results. We would have preferred to run this chapter through a modified numericalsolution function to get better results, but settled for single runs of the simulation function because of the lower computational cost. The results are therefore less credible than they could be.
We simulate our model with three different types of shocks.
1. The first shock is a shock to private information. We change the realisation of the variable v, whose true value only the informed traders know. A real-life example of this type of shock would be a trader receiving insider information from a firm.
2. The second shock is a public information shock. We change the upper and lower values for v. This information is available to everyone on the market. A real-life example of this type of shock would be the release of a firm's annual report.
3. We shock the model with both private and public information.
All of the shocks occur in the 90th period. To avoid naming conflicts in our graphwindow we set slightly different ratios for the different calls; the effect on the results is negligible.
```python
#call simulation function with different shock types
shockrun1 = simulation(ratio=0.15001, iterations=1001, seed=404, shockperiod = 90, shock={"Private":0})
shockrun2 = simulation(ratio=0.15002, iterations=1001, seed=404, shockperiod = 90, shock={"Public":(0.3, 0.8)})
shockrun3 = simulation(ratio=0.15003, iterations=1001, seed=404, shockperiod = 90, shock={"Private":0, "Public":(0.3,0.8)})
#save results
shockdata1, shockvalues1 = shockrun1
shockdata2, shockvalues2 = shockrun2
shockdata3, shockvalues3 = shockrun3
#merge results
shockmerged = pd.concat([shockdata1,shockdata2,shockdata3])
#call custom graphwindow class with graphtype = piplot
shockgraphwindow = PlotterWindow(data = shockmerged, slicename = "ratio", xvariable = "Iteration", yvariablelist = ["mu","ask","bid"], graphtype="piplot")
shockgraphwindow.start()
```
All graphs converge towards the same equilibrium before the shock.
1. After the shock the model slowly converges towards the new equilibrium, in which $a_t, b_t, \mu_t \rightarrow v^L$. The dealer gradually receives information about the new equilibrium from the informed traders' behavior and adjusts his ask and bid prices to account for the new informed-trader strategy.
2. The change in prices happens instantly when the new information is made available (see the slice after this list). This market behavior suggests our model is semi-strong-form efficient, since the market instantly adapts to new public information.
3. With both shock types we see the instant response to the public information. The reaction to the private information shock is comparatively slow, as in the first graph.
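One way to verify the instant adjustment numerically is to slice the stored price series around the shock period. The snippet below assumes the dataframes from the calls above are still in scope; the window bounds are arbitrary.
```python
# Prices just before and after the public-information shock in period 90.
window = shockdata2[(shockdata2["Iteration"] >= 87) & (shockdata2["Iteration"] <= 93)]
print(window[["Iteration", "mu", "ask", "bid"]])   # prices should jump at iteration 90
```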
# Conclusion
In this paper we found the Glosten-Milgrom model capable of adapting to public and private information. The model simulated the behavior of several types of traders and a market maker (the dealer), and showed how they adapt to the information available. The model further demonstrates this by the equilibrium prices converging to the true value of the security. <br>
The ratio of informed traders $\pi$ on the market has a large impact on the speed of convergence and on the bid-ask spread.<br>
From the shocks we introduced to the model we conclude that the model is semi-strong-form efficient.
|
#include "job.hpp"
#include "genr.hpp"
#include "proc.hpp"
#include "transd.hpp"
#include "gpt_config.hpp"
#include <json/json.h>
// definitions for accessing the model factory
#include <efscape/impl/ModelHomeI.hpp>
#include <efscape/impl/ModelHomeSingleton.hpp>
#include <efscape/impl/ModelType.ipp> // for specifying model metadata
#include <efscape/utils/type.hpp>
// #include <adevs/adevs_cereal.hpp>
#include <cereal/types/base_class.hpp>
#include <cereal/types/memory.hpp>
#include <cereal/types/vector.hpp>
#include <cereal/access.hpp>
#include <sstream>
namespace cereal
{
template <class Archive>
struct specialize<Archive, gpt::genr, cereal::specialization::member_serialize> {};
template <class Archive>
struct specialize<Archive, gpt::proc, cereal::specialization::member_serialize> {};
template <class Archive>
struct specialize<Archive, gpt::transd, cereal::specialization::member_serialize> {};
}
#include <cereal/archives/json.hpp>
#include <cereal/archives/xml.hpp>
CEREAL_REGISTER_TYPE(gpt::genr);
CEREAL_REGISTER_TYPE(gpt::proc);
CEREAL_REGISTER_TYPE(gpt::transd);
CEREAL_REGISTER_POLYMORPHIC_RELATION(efscape::impl::ATOMIC,
gpt::genr);
CEREAL_REGISTER_POLYMORPHIC_RELATION(efscape::impl::ATOMIC,
gpt::proc);
CEREAL_REGISTER_POLYMORPHIC_RELATION(efscape::impl::ATOMIC,
gpt::transd);
#include <boost/function.hpp>
namespace efscape {
namespace impl {
template Json::Value exportDataTypeToJSON<::gpt::job>(gpt::job value);
}
}
namespace gpt {
// library name: gpt
char const gcp_libname[] = "libgpt";
// register genr model
const bool lb_genr_registered =
efscape::impl::Singleton<efscape::impl::ModelHomeI>::Instance().
getModelFactory().
registerType<genr>(genr::getModelType().typeName(),
genr::getModelType().toJSON());
// register proc model
const bool lb_proc_registered =
efscape::impl::Singleton<efscape::impl::ModelHomeI>::Instance().
getModelFactory().
registerType<proc>(proc::getModelType().typeName(),
proc::getModelType().toJSON());
// registerType<proc>(efscape::utils::type<proc>());
// register transd model
const bool lb_transd_registered =
efscape::impl::Singleton<efscape::impl::ModelHomeI>::Instance().
getModelFactory().
registerType<transd>(transd::getModelType().typeName(),
transd::getModelType().toJSON());
//
// export "gpt coupled model"
//
/**
* Creates a gpt coupled model.
*
* @param aC_args arguments embedded in JSON
* @returns handle to a gpt model
*/
efscape::impl::DEVS* createGptModel(Json::Value aC_args) {
// (default) parameters
double g = 1;
double p = 2;
double t = 10;
if (aC_args.isObject()) {
Json::Value lC_attribute = aC_args["period"];
if (lC_attribute.isDouble()) {
g = lC_attribute.asDouble();
} else {
LOG4CXX_ERROR(efscape::impl::ModelHomeI::getLogger(),
"Unable to parse attribute <period>");
}
lC_attribute = aC_args["processing_time"];
if (lC_attribute.isDouble()) {
p = lC_attribute.asDouble();
} else {
LOG4CXX_ERROR(efscape::impl::ModelHomeI::getLogger(),
"Unable to parse attribute <processing_time>");
}
lC_attribute = aC_args["observ_time"];
if (lC_attribute.isDouble()) {
t = lC_attribute.asDouble();
} else {
LOG4CXX_ERROR(efscape::impl::ModelHomeI::getLogger(),
"Unable to parse attribute <observ_time>");
}
}
/// Create and connect the atomic components using a digraph model.
efscape::impl::DIGRAPH* lCp_digraph = new efscape::impl::DIGRAPH();
gpt::genr* lCp_gnr = new gpt::genr(g);
gpt::transd* lCp_trnsd = new gpt::transd(t);
gpt::proc* lCp_prc = new gpt::proc(p);
/// Add the components to the digraph
lCp_digraph->add(lCp_gnr);
lCp_digraph->add(lCp_trnsd);
lCp_digraph->add(lCp_prc);
/// Establish component coupling
lCp_digraph->couple(lCp_gnr, lCp_gnr->out, lCp_trnsd, lCp_trnsd->ariv);
lCp_digraph->couple(lCp_gnr, lCp_gnr->out, lCp_prc, lCp_prc->in);
lCp_digraph->couple(lCp_prc, lCp_prc->out, lCp_trnsd, lCp_trnsd->solved);
lCp_digraph->couple(lCp_trnsd, lCp_trnsd->out, lCp_gnr, lCp_gnr->stop);
return lCp_digraph;
} // createGptModel(...)
// Metadata for a GPT coupled model
class GptModelType : public efscape::impl::ModelType
{
public:
GptModelType() :
efscape::impl::ModelType("gpt::GPT",
"This is a implementation of the classic GPT coupled model that couples a generator model (gpt::genr) that generates jobs at a fixed rate to a processor model (gpt::proc) that serves one job at a time at a fixed rate. Both models are coupled to a transducer (gpt::transd) model that computes various statistics about the performance of the queuing system",
gcp_libname,
1)
{
//========================================================================
// output ports:
// * "log": Json::Value
//========================================================================
addOutputPort(transd::log, efscape::utils::type<Json::Value>());
//========================================================================
// properties
//========================================================================
Json::Value lC_properties;
lC_properties["genr_period"] = 1.0;
lC_properties["processing_time"] = 2.0;
lC_properties["observ_time"] = 10.0;
setProperties(lC_properties);
}
};
// register gpt coupled model
const bool lb_gpt_registered =
efscape::impl::Singleton<efscape::impl::ModelHomeI>::Instance().
getModelFactory().
registerTypeWithArgs(GptModelType().typeName(),
createGptModel,
GptModelType().toJSON());
}
|
[STATEMENT]
lemma size_new_push [simp]: "invar small \<Longrightarrow> size_new (push x small) = Suc (size_new small)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. invar small \<Longrightarrow> size_new (Small.push x small) = Suc (size_new small)
[PROOF STEP]
by(induction x small rule: push.induct) (auto split: current.splits)
|
------------------------------------------------------------------------
-- The Agda standard library
--
-- The Cowriter type and some operations
------------------------------------------------------------------------
{-# OPTIONS --without-K --safe --sized-types #-}
-- Disabled to prevent warnings from BoundedVec
{-# OPTIONS --warn=noUserWarning #-}
module Codata.Cowriter where
open import Size
open import Level as L using (Level)
open import Codata.Thunk using (Thunk; force)
open import Codata.Conat
open import Codata.Delay using (Delay; later; now)
open import Codata.Stream as Stream using (Stream; _∷_)
open import Data.Unit
open import Data.List.Base using (List; []; _∷_)
open import Data.List.NonEmpty using (List⁺; _∷_)
open import Data.Nat.Base as Nat using (ℕ; zero; suc)
open import Data.Product as Prod using (_×_; _,_)
open import Data.Sum.Base as Sum using (_⊎_; inj₁; inj₂)
open import Data.Vec.Base using (Vec; []; _∷_)
open import Data.Vec.Bounded as Vec≤ using (Vec≤; _,_)
open import Function
private
variable
a b w x : Level
A : Set a
B : Set b
W : Set w
X : Set x
------------------------------------------------------------------------
-- Definition
data Cowriter (W : Set w) (A : Set a) (i : Size) : Set (a L.⊔ w) where
[_] : A → Cowriter W A i
_∷_ : W → Thunk (Cowriter W A) i → Cowriter W A i
------------------------------------------------------------------------
-- Relationship to Delay.
fromDelay : ∀ {i} → Delay A i → Cowriter ⊤ A i
fromDelay (now a) = [ a ]
fromDelay (later da) = _ ∷ λ where .force → fromDelay (da .force)
toDelay : ∀ {i} → Cowriter W A i → Delay A i
toDelay [ a ] = now a
toDelay (_ ∷ ca) = later λ where .force → toDelay (ca .force)
------------------------------------------------------------------------
-- Basic functions.
fromStream : ∀ {i} → Stream W i → Cowriter W A i
fromStream (w ∷ ws) = w ∷ λ where .force → fromStream (ws .force)
repeat : W → Cowriter W A ∞
repeat = fromStream ∘′ Stream.repeat
length : ∀ {i} → Cowriter W A i → Conat i
length [ _ ] = zero
length (w ∷ cw) = suc λ where .force → length (cw .force)
splitAt : ∀ (n : ℕ) → Cowriter W A ∞ → (Vec W n × Cowriter W A ∞) ⊎ (Vec≤ W n × A)
splitAt zero cw = inj₁ ([] , cw)
splitAt (suc n) [ a ] = inj₂ (Vec≤.[] , a)
splitAt (suc n) (w ∷ cw) = Sum.map (Prod.map₁ (w ∷_)) (Prod.map₁ (w Vec≤.∷_))
$ splitAt n (cw .force)
take : ∀ (n : ℕ) → Cowriter W A ∞ → Vec W n ⊎ (Vec≤ W n × A)
take n = Sum.map₁ Prod.proj₁ ∘′ splitAt n
infixr 5 _++_ _⁺++_
_++_ : ∀ {i} → List W → Cowriter W A i → Cowriter W A i
[] ++ ca = ca
(w ∷ ws) ++ ca = w ∷ λ where .force → ws ++ ca
_⁺++_ : ∀ {i} → List⁺ W → Thunk (Cowriter W A) i → Cowriter W A i
(w ∷ ws) ⁺++ ca = w ∷ λ where .force → ws ++ ca .force
concat : ∀ {i} → Cowriter (List⁺ W) A i → Cowriter W A i
concat [ a ] = [ a ]
concat (w ∷ ca) = w ⁺++ λ where .force → concat (ca .force)
------------------------------------------------------------------------
-- Functor, Applicative and Monad
map : ∀ {i} → (W → X) → (A → B) → Cowriter W A i → Cowriter X B i
map f g [ a ] = [ g a ]
map f g (w ∷ cw) = f w ∷ λ where .force → map f g (cw .force)
map₁ : ∀ {i} → (W → X) → Cowriter W A i → Cowriter X A i
map₁ f = map f id
map₂ : ∀ {i} → (A → X) → Cowriter W A i → Cowriter W X i
map₂ = map id
ap : ∀ {i} → Cowriter W (A → X) i → Cowriter W A i → Cowriter W X i
ap [ f ] ca = map₂ f ca
ap (w ∷ cf) ca = w ∷ λ where .force → ap (cf .force) ca
_>>=_ : ∀ {i} → Cowriter W A i → (A → Cowriter W X i) → Cowriter W X i
[ a ] >>= f = f a
(w ∷ ca) >>= f = w ∷ λ where .force → ca .force >>= f
------------------------------------------------------------------------
-- Construction.
unfold : ∀ {i} → (X → (W × X) ⊎ A) → X → Cowriter W A i
unfold next seed with next seed
... | inj₁ (w , seed') = w ∷ λ where .force → unfold next seed'
... | inj₂ a = [ a ]
------------------------------------------------------------------------
-- DEPRECATED NAMES
------------------------------------------------------------------------
-- Please use the new names as continuing support for the old names is
-- not guaranteed.
-- Version 1.3
open import Data.BoundedVec as BVec using (BoundedVec)
splitAt′ : ∀ (n : ℕ) → Cowriter W A ∞ → (Vec W n × Cowriter W A ∞) ⊎ (BoundedVec W n × A)
splitAt′ zero cw = inj₁ ([] , cw)
splitAt′ (suc n) [ a ] = inj₂ (BVec.[] , a)
splitAt′ (suc n) (w ∷ cw) = Sum.map (Prod.map₁ (w ∷_)) (Prod.map₁ (w BVec.∷_))
$ splitAt′ n (cw .force)
{-# WARNING_ON_USAGE splitAt′
"Warning: splitAt′ (and Data.BoundedVec) was deprecated in v1.3.
Please use splitAt (and Data.Vec.Bounded) instead."
#-}
take′ : ∀ (n : ℕ) → Cowriter W A ∞ → Vec W n ⊎ (BoundedVec W n × A)
take′ n = Sum.map₁ Prod.proj₁ ∘′ splitAt′ n
{-# WARNING_ON_USAGE take′
"Warning: take′ (and Data.BoundedVec) was deprecated in v1.3.
Please use take (and Data.Vec.Bounded) instead."
#-}
|
State Before: R : Type u
K : Type u'
M : Type v
V : Type v'
M₂ : Type w
V₂ : Type w'
M₃ : Type y
V₃ : Type y'
M₄ : Type z
ι : Type x
ι' : Type x'
inst✝¹⁰ : Semiring R
inst✝⁹ : AddCommMonoid M₂
inst✝⁸ : Module R M₂
inst✝⁷ : AddCommMonoid M₃
inst✝⁶ : Module R M₃
φ : ι → Type i
inst✝⁵ : (i : ι) → AddCommMonoid (φ i)
inst✝⁴ : (i : ι) → Module R (φ i)
inst✝³ : Finite ι
inst✝² : DecidableEq ι
inst✝¹ : AddCommMonoid M
inst✝ : Module R M
f g : ((i : ι) → φ i) →ₗ[R] M
h : ∀ (i : ι), comp f (single i) = comp g (single i)
⊢ f = g
State After: R : Type u
K : Type u'
M : Type v
V : Type v'
M₂ : Type w
V₂ : Type w'
M₃ : Type y
V₃ : Type y'
M₄ : Type z
ι : Type x
ι' : Type x'
inst✝¹⁰ : Semiring R
inst✝⁹ : AddCommMonoid M₂
inst✝⁸ : Module R M₂
inst✝⁷ : AddCommMonoid M₃
inst✝⁶ : Module R M₃
φ : ι → Type i
inst✝⁵ : (i : ι) → AddCommMonoid (φ i)
inst✝⁴ : (i : ι) → Module R (φ i)
inst✝³ : Finite ι
inst✝² : DecidableEq ι
inst✝¹ : AddCommMonoid M
inst✝ : Module R M
f g : ((i : ι) → φ i) →ₗ[R] M
h : ∀ (i : ι), comp f (single i) = comp g (single i)
i : ι
x : φ i
⊢ ↑f (Pi.single i x) = ↑g (Pi.single i x)
Tactic: refine' pi_ext fun i x => _
State Before: R : Type u
K : Type u'
M : Type v
V : Type v'
M₂ : Type w
V₂ : Type w'
M₃ : Type y
V₃ : Type y'
M₄ : Type z
ι : Type x
ι' : Type x'
inst✝¹⁰ : Semiring R
inst✝⁹ : AddCommMonoid M₂
inst✝⁸ : Module R M₂
inst✝⁷ : AddCommMonoid M₃
inst✝⁶ : Module R M₃
φ : ι → Type i
inst✝⁵ : (i : ι) → AddCommMonoid (φ i)
inst✝⁴ : (i : ι) → Module R (φ i)
inst✝³ : Finite ι
inst✝² : DecidableEq ι
inst✝¹ : AddCommMonoid M
inst✝ : Module R M
f g : ((i : ι) → φ i) →ₗ[R] M
h : ∀ (i : ι), comp f (single i) = comp g (single i)
i : ι
x : φ i
⊢ ↑f (Pi.single i x) = ↑g (Pi.single i x)
State After: no goals
Tactic: convert LinearMap.congr_fun (h i) x
|
{-# OPTIONS --safe --experimental-lossy-unification #-}
module Cubical.Algebra.CommAlgebra.Localisation where
open import Cubical.Foundations.Prelude
open import Cubical.Foundations.Equiv
open import Cubical.Foundations.HLevels
open import Cubical.Foundations.Function
open import Cubical.Foundations.Isomorphism
open import Cubical.Foundations.Equiv.HalfAdjoint
open import Cubical.Foundations.SIP
open import Cubical.Foundations.Powerset
open import Cubical.Data.Sigma
open import Cubical.Reflection.StrictEquiv
open import Cubical.Structures.Axioms
open import Cubical.Algebra.Semigroup
open import Cubical.Algebra.Monoid
open import Cubical.Algebra.CommRing.Base
open import Cubical.Algebra.CommRing.Properties
open import Cubical.Algebra.CommRing.Localisation.Base
open import Cubical.Algebra.CommRing.Localisation.UniversalProperty
open import Cubical.Algebra.Ring
open import Cubical.Algebra.Algebra
open import Cubical.Algebra.CommAlgebra.Base
open import Cubical.Algebra.CommAlgebra.Properties
open import Cubical.HITs.SetQuotients as SQ
open import Cubical.HITs.PropositionalTruncation as PT
private
variable
ℓ ℓ′ : Level
module AlgLoc (R' : CommRing ℓ)
(S' : ℙ (fst R')) (SMultClosedSubset : isMultClosedSubset R' S') where
open isMultClosedSubset
private R = fst R'
open CommAlgebraStr
open IsAlgebraHom
open CommRingStr (snd R') renaming (_+_ to _+r_ ; _·_ to _·r_ ; ·Rid to ·rRid)
open RingTheory (CommRing→Ring R')
open CommRingTheory R'
open Loc R' S' SMultClosedSubset
open S⁻¹RUniversalProp R' S' SMultClosedSubset
open CommAlgChar R'
S⁻¹RAsCommAlg : CommAlgebra R' ℓ
S⁻¹RAsCommAlg = toCommAlg (S⁻¹RAsCommRing , /1AsCommRingHom)
hasLocAlgUniversalProp : (A : CommAlgebra R' ℓ)
→ (∀ s → s ∈ S' → _⋆_ (snd A) s (1a (snd A)) ∈ (CommAlgebra→CommRing A) ˣ)
→ Type (ℓ-suc ℓ)
hasLocAlgUniversalProp A _ = (B : CommAlgebra R' ℓ)
→ (∀ s → s ∈ S' → _⋆_ (snd B) s (1a (snd B)) ∈ (CommAlgebra→CommRing B) ˣ)
→ isContr (CommAlgebraHom A B)
S⋆1⊆S⁻¹Rˣ : ∀ s → s ∈ S' → _⋆_ (snd S⁻¹RAsCommAlg) s (1a (snd S⁻¹RAsCommAlg)) ∈ S⁻¹Rˣ
S⋆1⊆S⁻¹Rˣ s s∈S' = subst-∈ S⁻¹Rˣ
(cong [_] (≡-× (sym (·rRid s)) (Σ≡Prop (λ x → S' x .snd) (sym (·rRid _)))))
(S/1⊆S⁻¹Rˣ s s∈S')
S⁻¹RHasAlgUniversalProp : hasLocAlgUniversalProp S⁻¹RAsCommAlg S⋆1⊆S⁻¹Rˣ
S⁻¹RHasAlgUniversalProp B' S⋆1⊆Bˣ = χᴬ , χᴬuniqueness
where
B = fromCommAlg B' .fst
φ = fromCommAlg B' .snd
open CommRingStr (snd B) renaming (_·_ to _·b_ ; 1r to 1b ; ·Lid to ·bLid)
χ : CommRingHom S⁻¹RAsCommRing B
χ = S⁻¹RHasUniversalProp B φ S⋆1⊆Bˣ .fst .fst
χcomp : ∀ r → fst χ (r /1) ≡ fst φ r
χcomp = funExt⁻ (S⁻¹RHasUniversalProp B φ S⋆1⊆Bˣ .fst .snd)
χᴬ : CommAlgebraHom S⁻¹RAsCommAlg B'
fst χᴬ = fst χ
pres0 (snd χᴬ) = IsRingHom.pres0 (snd χ)
pres1 (snd χᴬ) = IsRingHom.pres1 (snd χ)
pres+ (snd χᴬ) = IsRingHom.pres+ (snd χ)
pres· (snd χᴬ) = IsRingHom.pres· (snd χ)
pres- (snd χᴬ) = IsRingHom.pres- (snd χ)
pres⋆ (snd χᴬ) r x = path
where
path : fst χ ((r /1) ·ₗ x) ≡ _⋆_ (snd B') r (fst χ x)
path = fst χ ((r /1) ·ₗ x) ≡⟨ IsRingHom.pres· (snd χ) _ _ ⟩
fst χ (r /1) ·b fst χ x ≡⟨ cong (_·b fst χ x) (χcomp r) ⟩
fst φ r ·b fst χ x ≡⟨ refl ⟩
_⋆_ (snd B') r 1b ·b fst χ x ≡⟨ ⋆-lassoc (snd B') _ _ _ ⟩
_⋆_ (snd B') r (1b ·b fst χ x) ≡⟨ cong (_⋆_ (snd B') r) (·bLid _) ⟩
_⋆_ (snd B') r (fst χ x) ∎
χᴬuniqueness : (ψ : CommAlgebraHom S⁻¹RAsCommAlg B') → χᴬ ≡ ψ
χᴬuniqueness ψ = Σ≡Prop (λ _ → isPropIsAlgebraHom _ _ _ _)
(cong (fst ∘ fst) (χuniqueness (ψ' , funExt ψ'r/1≡φr)))
where
χuniqueness = S⁻¹RHasUniversalProp B φ S⋆1⊆Bˣ .snd
ψ' : CommRingHom S⁻¹RAsCommRing B
fst ψ' = fst ψ
IsRingHom.pres0 (snd ψ') = pres0 (snd ψ)
IsRingHom.pres1 (snd ψ') = pres1 (snd ψ)
IsRingHom.pres+ (snd ψ') = pres+ (snd ψ)
IsRingHom.pres· (snd ψ') = pres· (snd ψ)
IsRingHom.pres- (snd ψ') = pres- (snd ψ)
ψ'r/1≡φr : ∀ r → fst ψ (r /1) ≡ fst φ r
ψ'r/1≡φr r =
fst ψ (r /1) ≡⟨ cong (fst ψ) (sym (·ₗ-rid _)) ⟩
fst ψ (_⋆_ (snd S⁻¹RAsCommAlg) r (1a (snd S⁻¹RAsCommAlg))) ≡⟨ pres⋆ (snd ψ) _ _ ⟩
_⋆_ (snd B') r (fst ψ (1a (snd S⁻¹RAsCommAlg))) ≡⟨ cong (_⋆_ (snd B') r) (pres1 (snd ψ)) ⟩
_⋆_ (snd B') r 1b ∎
-- an immediate corollary:
isContrHomS⁻¹RS⁻¹R : isContr (CommAlgebraHom S⁻¹RAsCommAlg S⁻¹RAsCommAlg)
isContrHomS⁻¹RS⁻¹R = S⁻¹RHasAlgUniversalProp S⁻¹RAsCommAlg S⋆1⊆S⁻¹Rˣ
module AlgLocTwoSubsets (R' : CommRing ℓ)
(S₁ : ℙ (fst R')) (S₁MultClosedSubset : isMultClosedSubset R' S₁)
(S₂ : ℙ (fst R')) (S₂MultClosedSubset : isMultClosedSubset R' S₂) where
open isMultClosedSubset
open CommRingStr (snd R') hiding (is-set)
open RingTheory (CommRing→Ring R')
open Loc R' S₁ S₁MultClosedSubset renaming (S⁻¹R to S₁⁻¹R ;
S⁻¹RAsCommRing to S₁⁻¹RAsCommRing)
open Loc R' S₂ S₂MultClosedSubset renaming (S⁻¹R to S₂⁻¹R ;
S⁻¹RAsCommRing to S₂⁻¹RAsCommRing)
open AlgLoc R' S₁ S₁MultClosedSubset renaming ( S⁻¹RAsCommAlg to S₁⁻¹RAsCommAlg
; S⋆1⊆S⁻¹Rˣ to S₁⋆1⊆S₁⁻¹Rˣ
; S⁻¹RHasAlgUniversalProp to S₁⁻¹RHasAlgUniversalProp
; isContrHomS⁻¹RS⁻¹R to isContrHomS₁⁻¹RS₁⁻¹R)
open AlgLoc R' S₂ S₂MultClosedSubset renaming ( S⁻¹RAsCommAlg to S₂⁻¹RAsCommAlg
; S⋆1⊆S⁻¹Rˣ to S₂⋆1⊆S₂⁻¹Rˣ
; S⁻¹RHasAlgUniversalProp to S₂⁻¹RHasAlgUniversalProp
; isContrHomS⁻¹RS⁻¹R to isContrHomS₂⁻¹RS₂⁻¹R)
open IsAlgebraHom
open CommAlgebraStr ⦃...⦄
private
R = fst R'
S₁⁻¹Rˣ = S₁⁻¹RAsCommRing ˣ
S₂⁻¹Rˣ = S₂⁻¹RAsCommRing ˣ
instance
_ = snd S₁⁻¹RAsCommAlg
_ = snd S₂⁻¹RAsCommAlg
isContrS₁⁻¹R≡S₂⁻¹R : (∀ s₁ → s₁ ∈ S₁ → s₁ ⋆ 1a ∈ S₂⁻¹Rˣ)
→ (∀ s₂ → s₂ ∈ S₂ → s₂ ⋆ 1a ∈ S₁⁻¹Rˣ)
→ isContr (S₁⁻¹RAsCommAlg ≡ S₂⁻¹RAsCommAlg)
isContrS₁⁻¹R≡S₂⁻¹R S₁⊆S₂⁻¹Rˣ S₂⊆S₁⁻¹Rˣ = isOfHLevelRetractFromIso 0
(equivToIso (invEquiv (CommAlgebraPath _ _ _)))
isContrS₁⁻¹R≅S₂⁻¹R
where
χ₁ : CommAlgebraHom S₁⁻¹RAsCommAlg S₂⁻¹RAsCommAlg
χ₁ = S₁⁻¹RHasAlgUniversalProp S₂⁻¹RAsCommAlg S₁⊆S₂⁻¹Rˣ .fst
χ₂ : CommAlgebraHom S₂⁻¹RAsCommAlg S₁⁻¹RAsCommAlg
χ₂ = S₂⁻¹RHasAlgUniversalProp S₁⁻¹RAsCommAlg S₂⊆S₁⁻¹Rˣ .fst
χ₁∘χ₂≡id : χ₁ ∘a χ₂ ≡ idAlgHom
χ₁∘χ₂≡id = isContr→isProp isContrHomS₂⁻¹RS₂⁻¹R _ _
χ₂∘χ₁≡id : χ₂ ∘a χ₁ ≡ idAlgHom
χ₂∘χ₁≡id = isContr→isProp isContrHomS₁⁻¹RS₁⁻¹R _ _
IsoS₁⁻¹RS₂⁻¹R : Iso S₁⁻¹R S₂⁻¹R
Iso.fun IsoS₁⁻¹RS₂⁻¹R = fst χ₁
Iso.inv IsoS₁⁻¹RS₂⁻¹R = fst χ₂
Iso.rightInv IsoS₁⁻¹RS₂⁻¹R = funExt⁻ (cong fst χ₁∘χ₂≡id)
Iso.leftInv IsoS₁⁻¹RS₂⁻¹R = funExt⁻ (cong fst χ₂∘χ₁≡id)
isContrS₁⁻¹R≅S₂⁻¹R : isContr (CommAlgebraEquiv S₁⁻¹RAsCommAlg S₂⁻¹RAsCommAlg)
isContrS₁⁻¹R≅S₂⁻¹R = center , uniqueness
where
center : CommAlgebraEquiv S₁⁻¹RAsCommAlg S₂⁻¹RAsCommAlg
fst center = isoToEquiv IsoS₁⁻¹RS₂⁻¹R
pres0 (snd center) = pres0 (snd χ₁)
pres1 (snd center) = pres1 (snd χ₁)
pres+ (snd center) = pres+ (snd χ₁)
pres· (snd center) = pres· (snd χ₁)
pres- (snd center) = pres- (snd χ₁)
pres⋆ (snd center) = pres⋆ (snd χ₁)
uniqueness : (φ : CommAlgebraEquiv S₁⁻¹RAsCommAlg S₂⁻¹RAsCommAlg) → center ≡ φ
uniqueness φ = Σ≡Prop (λ _ → isPropIsAlgebraHom _ _ _ _)
(equivEq (cong fst
(S₁⁻¹RHasAlgUniversalProp S₂⁻¹RAsCommAlg S₁⊆S₂⁻¹Rˣ .snd
(AlgebraEquiv→AlgebraHom φ))))
isPropS₁⁻¹R≡S₂⁻¹R : isProp (S₁⁻¹RAsCommAlg ≡ S₂⁻¹RAsCommAlg)
isPropS₁⁻¹R≡S₂⁻¹R S₁⁻¹R≡S₂⁻¹R =
isContr→isProp (isContrS₁⁻¹R≡S₂⁻¹R S₁⊆S₂⁻¹Rˣ S₂⊆S₁⁻¹Rˣ) S₁⁻¹R≡S₂⁻¹R
where
S₁⊆S₂⁻¹Rˣ : ∀ s₁ → s₁ ∈ S₁ → s₁ ⋆ 1a ∈ S₂⁻¹Rˣ
S₁⊆S₂⁻¹Rˣ s₁ s₁∈S₁ =
transport (λ i → _⋆_ ⦃ S₁⁻¹R≡S₂⁻¹R i .snd ⦄ s₁ (1a ⦃ S₁⁻¹R≡S₂⁻¹R i .snd ⦄)
∈ (CommAlgebra→CommRing (S₁⁻¹R≡S₂⁻¹R i)) ˣ) (S₁⋆1⊆S₁⁻¹Rˣ s₁ s₁∈S₁)
S₂⊆S₁⁻¹Rˣ : ∀ s₂ → s₂ ∈ S₂ → s₂ ⋆ 1a ∈ S₁⁻¹Rˣ
S₂⊆S₁⁻¹Rˣ s₂ s₂∈S₂ =
transport (λ i → _⋆_ ⦃ (sym S₁⁻¹R≡S₂⁻¹R) i .snd ⦄ s₂ (1a ⦃ (sym S₁⁻¹R≡S₂⁻¹R) i .snd ⦄)
∈ (CommAlgebra→CommRing ((sym S₁⁻¹R≡S₂⁻¹R) i)) ˣ) (S₂⋆1⊆S₂⁻¹Rˣ s₂ s₂∈S₂)
|
theory "Find"
imports
Common "~~/src/HOL/Library/Multiset" (* for size_change *)
begin
section "find, find_state, fs_step"
(* find *)
record ('bs,'k,'r,'v) find_state_l =
fsl_k :: "'k key"
fsl_r :: "'r page_ref"
(* fnd0_s :: "('bs,'r) store" *)
record ('bs,'k,'r,'v) find_state_r =
fsr_r :: "'r page_ref"
fsr_v :: "'v value_t option"
(* fnd1_s :: "('bs,'r) store" *)
datatype ('bs,'k,'r,'v) find_state = Fs_l "('bs,'k,'r,'v) find_state_l" | Fs_r "('bs,'k,'r,'v) find_state_r"
definition find_state_to_page_ref :: "('bs,'k,'r,'v) find_state => 'r page_ref" where
"find_state_to_page_ref fs0 = (case fs0 of
Fs_l fsl => (fsl|>fsl_r)
| Fs_r fsr => (fsr |> fsr_r))"
definition fs_step :: "('bs,'k,'r,'v) ctxt_k2r_t
=> (('bs,'r) store * ('bs,'k,'r,'v) find_state)
=> (('bs,'k,'r,'v) find_state) option" where
"fs_step ctxt1 s0fs0 == (
let (s0,fs0) = s0fs0 in
case fs0 of
Fs_l fsl => (
let r0 = (fsl|>fsl_r) in
let k0 = (fsl|>fsl_k) in
case (page_ref_to_frame (ctxt_p2f_t.truncate ctxt1) s0 r0) of
None => (Error |> rresult_to_option) (* invalid page access *)
| Some frm => (
case frm of
Frm_I nf => (
let r' = apply_key_to_ref (ctxt1|>key_to_ref2) nf k0 in
Some(Fs_l (fsl (| fsl_r := r' |))))
| Frm_L lf => (
let k2v = key_to_v in
let v = k2v lf k0 in
Some(Fs_r (| fsr_r = r0, fsr_v = v |)))))
| Fs_r fsr => (Error |> rresult_to_option))" (* attempt to step Fs_r *)
section "fs_step as a function"
text "iterate the fs_step function n times"
(* FIXME in the following we may want to use a standard isabelle while construction *)
definition fs_step_as_fun :: "('bs,'k,'r,'v) ctxt_k2r_t
=> tree_height
=> (('bs,'r) store * ('bs,'k,'r,'v) find_state)
=> (('bs,'k,'r,'v) find_state)" where
"fs_step_as_fun ctxt1 n0 s0fs0 == (
let (s0,fs0) = s0fs0 in
let f0 = % x. (s0,x) |> (fs_step ctxt1) |> dest_Some in
(Nat.funpow n0 f0) fs0)"
section "wellformedness predicates"
(* FIXME obviously the following need to be filled in properly *)
definition wf_ctxt:: "('bs,'k,'r,'v) ctxt_k2r_t => bool" where
"wf_ctxt ctxt == True"
definition wf_ctxt1:: "('bs,'k,'r,'v) ctxt_k2r_t => bool" where
"wf_ctxt1 ctxt1 ==
! s0 r0 ctxt nf k0 r' v0 n0 m1 m1'.
(ctxt_p2f_t.truncate ctxt1 = ctxt)
(* if r0 is not a leaf *)
& (page_ref_to_frame ctxt s0 r0 = Some (Frm_I nf))
(* and the map exists at r0 *)
& (page_ref_to_map ctxt s0 r0 n0 = Some m1)
(* and we descend to r' *)
& (apply_key_to_ref (ctxt1|>key_to_ref2) nf k0 = r')
(* and the map exists at r' *)
& (page_ref_to_map ctxt s0 r' (n0 - Suc 0) = Some m1')
(* then the maps agree, at least for the key *)
& (m1 k0 = v0)
--> (m1' k0 = v0)
"
(*
definition wf_store_page_ref_to_map_none :: "('bs,'k,'r,'v) ctxt_k2r_t => ('bs,'r) store => tree_height => 'r page_ref => bool" where
"wf_store_page_ref_to_map_none c1 s0 n0 r0 == (
let c0 = ctxt_p2f_t.truncate c1 in
((page_ref_to_map c0 s0 r0 n0 = None) --> False)
)"
*)
definition wf_store_page_ref_to_map :: "('bs,'k,'r,'v) ctxt_k2r_t => ('bs,'r) store => tree_height => 'r page_ref => bool" where
"wf_store_page_ref_to_map c1 s0 n0 r0 == (
let c0 = ctxt_p2f_t.truncate c1 in
(! m1 r' nf k0 . ((
(* if we have a map at r0 *)
(page_ref_to_map c0 s0 r0 n0 = Some m1)
& (page_ref_to_frame c0 s0 r0 = Some (Frm_I nf))
(* and we follow the key to r' *)
& (apply_key_to_ref (c1|>key_to_ref2) nf k0 = r')
(* then we still have a map *)
& (page_ref_to_map c0 s0 r' (n0 - Suc 0) = None)
) --> False
)) (* FIXME does this follow from page_ref_to_map ~= None? FIXME isn't this a basic property of the defn of page_ref_to_map/page_ref_to_tree? *)
& (page_ref_to_map c0 s0 r0 n0 ~= None)
)"
definition wf_store:: "('bs,'k,'r,'v) ctxt_k2r_t => ('bs,'r) store => tree_height => 'r page_ref => bool" where
"wf_store c1 s0 n0 r0 == (
(* wf_store_page_ref_to_map_none c1 s0 n0 r0 *)
wf_store_page_ref_to_map c1 s0 n0 r0
& True)"
(*
*)
section "correctness of fs_step"
definition fs_step_invariant :: "('bs,'k,'r,'v) ctxt_p2f_t
=> (('bs,'r) store * ('bs,'k,'r,'v) find_state)
=> tree_height
=> 'v value_t option
=> bool" where
"fs_step_invariant ctxt s0fs0 n0 v0 == (
let (s0,fs0) = s0fs0 in
case fs0 of
Fs_l fsl => (
let k0 = (fsl|>fsl_k) in
let r0 = (fsl|>fsl_r) in
let v' = page_ref_key_to_v ctxt s0 r0 n0 k0 in
v' = v0)
| Fs_r fsr => (
let v' = (fsr|>fsr_v) in
v' = v0))"
(* FIXME in the following we want to eliminate n0 and v0 explicit arguments, and phrase as a
simple invariant; v0 can be a parameter of the invariant; how do we get n0? just say I v0 == ! n0. wf_store s0 n0... --> *)
lemma fs_step_is_invariant: "
! (ctxt1::('bs,'k,'r,'v) ctxt_k2r_t) ctxt s0 fs0 n0 v0.
((ctxt_p2f_t.truncate ctxt1) = ctxt)
--> wf_ctxt1 ctxt1 & wf_store ctxt1 s0 n0 (fs0|>find_state_to_page_ref)
--> (
fs_step_invariant ctxt (s0,fs0) n0 v0 --> (
let x = fs_step ctxt1 (s0,fs0) in
case x of
None => True (* if we are at a Fs_r, no further facts are available *)
| Some (fs') => (
(* n0 could be 0? but then fs' is Fs_r? *)
fs_step_invariant ctxt (s0,fs') (n0 - 1) v0)))"
apply(rule)+
apply(elim conjE)
apply(subgoal_tac "? x. fs_step ctxt1 (s0, fs0) = x")
prefer 2
apply(force)
apply(erule exE)
apply(simp add: Let_def)
apply(case_tac x)
(* none *)
apply(force)
(* x = Some a *)
apply(simp)
apply(rename_tac "fs'")
apply(simp add: fs_step_def)
apply(case_tac fs0)
prefer 2
apply(force)
(* fs0 = Fs_l find_state_l_ex *)
apply(simp)
apply(rename_tac "fsl")
apply(subgoal_tac "? r0. (fsl|>fsl_r) = r0")
prefer 2 apply(force)
apply(subgoal_tac "? k0. (fsl|>fsl_k) = k0 ")
prefer 2 apply(force)
apply(erule exE)+
apply(case_tac " (page_ref_to_frame (ctxt_p2f_t.truncate ctxt1) s0 r0)")
apply(force)
(* (page_ref_to_frame (ctxt_p2f_t.truncate ctxt1) s0 r0) = Some r' *)
apply(rename_tac frm')
apply(simp)
apply(case_tac frm')
(**********)
(* frm' = Frm_I node_frame_ext *)
apply(rename_tac nf) (* nf = node_frame *)
apply(simp)
apply(thin_tac "fs0 = ?x")
apply(thin_tac "frm' = ?x")
apply(thin_tac "x=?x")
apply(subgoal_tac "? r'. apply_key_to_ref (ctxt1 |> key_to_ref2) nf k0 = r'") prefer 2 apply(force)
apply(erule exE)
apply(subgoal_tac "? fsl'. (fsl\<lparr>fsl_r := r'\<rparr>) = fsl'") prefer 2 apply(force)
apply(erule exE)
apply(simp)
(* note how this goal is concise and readable *)
apply(simp add: fs_step_invariant_def)
apply(drule_tac t="fs'" in sym)
apply(simp)
apply(thin_tac "fs' = ?x")
apply(drule_tac t="fsl'" in sym)
apply(simp)
apply(subgoal_tac "fsl\<lparr>fsl_r := r'\<rparr> = (| fsl_k = k0, fsl_r = r' |)") prefer 2 apply(metis (full_types) find_state_l.surjective find_state_l.update_convs(2) old.unit.exhaust rev_apply_def)
apply(thin_tac "fsl' = fsl\<lparr>fsl_r := r'\<rparr> ")
apply(simp)
apply(simp add: rev_apply_def)
apply(simp add: page_ref_key_to_v_def)
(* page_ref_to_map could be none or some *)
apply(subgoal_tac "? m0. (page_ref_to_map (ctxt_p2f_t.truncate ctxt1) s0 r0 n0) = m0")
prefer 2 apply(force)
apply(erule exE)
apply(simp)
apply(case_tac m0)
(* m0 = None *)
apply(simp)
(* this case ruled out by wellformedness - page_ref_to_map cannot be None *)
apply (metis find_state.simps(5) find_state_to_page_ref_def rev_apply_def wf_store_def wf_store_page_ref_to_map_def)
(* m0 = Some a *)
apply(rename_tac m1)
apply(simp)
apply(thin_tac "m0 = ?x")
apply(subgoal_tac "? m0'. (page_ref_to_map (ctxt_p2f_t.truncate ctxt1) s0 r' (n0 - Suc 0)) = m0'")
prefer 2 apply(force)
apply(erule exE)
apply(simp)
apply(case_tac "m0'")
(* none - ruled out because m0 is_Some --> m0' is_Some ; FIXME sledgehammer should get this *)
apply(simp)
apply(simp add: wf_store_def wf_store_page_ref_to_map_def Let_def)
apply(elim conjE)
apply(erule exE)
apply(simp add: page_ref_to_map_def page_ref_to_kvs_def rev_apply_def find_state_to_page_ref_def apply_key_to_ref_def)
apply(elim exE conjE)
apply(simp)
apply (metis option.distinct(1))
(* m0' = Some a *)
apply(rename_tac "m1'")
apply(simp)
apply(thin_tac "m0'=?x")
(* m1 k0 = v0 --> m1' k0 = v0 ; this holds by wellformedness of key_to_ref, and page_ref_to_map Suc *)
apply(simp add: wf_ctxt1_def apply_key_to_ref_def rev_apply_def)
apply(force)
(* frm' = Frm_L leaf_frame_ext - easy case? *)
apply(rename_tac lf)
apply(simp)
apply(thin_tac "fs0 = ?x")
apply(thin_tac "frm' = ?x")
apply(thin_tac "x=?x")
(* we have got to a leaf, and at frm' we return Fs_r *)
apply(subgoal_tac "? fsr'. \<lparr>fsr_r = r0, fsr_v = key_to_v lf k0\<rparr> = fsr'")
prefer 2 apply(force)
apply(erule exE)
apply(simp)
apply(subgoal_tac "? v'. key_to_v lf k0 = v'")
prefer 2 apply(force)
apply(erule exE)
apply(simp add: fs_step_invariant_def)
apply(drule_tac t="fs'" in sym)
apply(simp)
apply(thin_tac "fs' = ?x")
apply(drule_tac t="fsr'" in sym)
apply(simp)
apply(thin_tac "fsr' = ?x")
apply(simp)
apply(simp (no_asm) add: rev_apply_def)
apply(simp add: page_ref_key_to_v_def)
(* page_ref_to_map could be none or some *)
apply(subgoal_tac "? m0. (page_ref_to_map (ctxt_p2f_t.truncate ctxt1) s0 r0 n0) = m0")
prefer 2 apply(force)
apply(erule exE)
apply(simp)
apply(case_tac m0)
(* m0 = None *)
apply(simp)
(* the map at r0 is none ; but we have a leaf frame; contradiction; FIXME the following should be simplified *)
(*
apply(simp add: page_ref_to_frame_def)
apply(case_tac "ref_to_page s0 r0") apply(force)
apply(simp)
apply(rename_tac p0)
*)
(* ref_to_page s0 r0 = Some p0 *)
apply(simp add: page_ref_to_map_def)
apply(case_tac "page_ref_to_kvs (ctxt_p2f_t.truncate ctxt1) s0 r0 n0")
(* page_ref_to_kvs = none *)
apply(simp)
(* apply(simp add: ref_to_page_def) *)
apply(simp add: page_ref_to_kvs_def)
apply(simp add: rev_apply_def)
apply(case_tac "page_ref_to_tree (ctxt_p2f_t.truncate ctxt1) s0 r0 n0")
(* page_ref_to_tree (ctxt_p2f_t.truncate ctxt1) s0 r0 n0 = none *)
(* page_ref_to_tree defined by primrec *)
apply(case_tac n0)
apply(force)
(*
apply(simp)
apply(simp add: page_ref_to_frame_def)
apply(force simp add: ref_to_page_def rev_apply_def)
*)
(* n0 = suc nat *)
apply(rename_tac n0')
apply(simp)
apply(simp add: page_ref_to_frame_def)
apply(simp add: ref_to_page_def rev_apply_def)
apply(case_tac "dest_store s0 r0") apply(force) apply(simp)
apply(rename_tac p0)
(* this case should be impossible because n0 was not 0, but we got a leaf ; by wf of store; sledgehammer should get this *)
apply(simp add: wf_store_def wf_store_page_ref_to_map_def)
apply(erule conjE)
apply(erule exE)
apply(rename_tac m2)
apply(simp add: page_ref_to_map_def page_ref_to_kvs_def Let_def)
apply(simp add: find_state_to_page_ref_def page_ref_to_frame_def)
apply(simp add: ref_to_page_def)
apply(simp add: rev_apply_def)
apply(force simp add: rresult_to_option_def) (* sledgehammer should get this *)
(* end page_ref_to_tree (ctxt_p2f_t.truncate ctxt1) s0 r0 n0 = none *)
(* page_ref_to_tree = Some a *)
apply(rename_tac t0)
apply(case_tac n0)
apply(force)
(* n0 = suc nat *)
apply(rename_tac n0')
apply(force)
(* page_ref_to_kvs = Some a *)
apply(rename_tac kvs)
apply(simp)
(* but m0 = none, so can't have kvs = some *)
apply(force simp add:rev_apply_def)
(* m0 = some a *)
apply(rename_tac m1)
apply(simp)
apply(thin_tac "m0 = ?x")
apply(simp)
apply(simp add: page_ref_to_map_def)
apply(simp add: rev_apply_def)
apply(elim exE conjE)
apply(simp add: page_ref_to_kvs_def rev_apply_def)
apply(case_tac n0)
(* 0 *)
apply(simp)
apply(simp add: rev_apply_def)
apply(simp add: kvs_to_map_def)
apply(drule_tac t=z in sym)
apply(simp)
apply(thin_tac "z=?x")
apply(subgoal_tac "fsl = (| fsl_k = k0, fsl_r = r0 |)") prefer 2 apply(force)
apply(simp)
apply(simp add:page_ref_to_frame_def)
apply(case_tac "ref_to_page s0 r0") apply(force)
apply(rename_tac p0)
apply(simp)
apply(simp add: ref_to_page_def)
apply(subgoal_tac "? kvs. lf = (| lf_kvs=kvs |)") prefer 2 apply (metis leaf_frame.cases)
apply(elim exE)
apply(simp)
apply(thin_tac "lf=?x")
apply(thin_tac "fsl=?x")
apply(thin_tac "n0=0")
apply(clarsimp)
apply(force simp add: key_to_v_def rev_apply_def tree_to_kvs_def)
(* suc - should be a contradiction with leaf frame *)
apply(rename_tac n0')
apply(force)
done
end
|
[STATEMENT]
lemma Uin8_code [code]: "Rep_uint8 (Uint8 i) = word_of_int (int_of_integer_symbolic i)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. Rep_uint8 (Uint8 i) = word_of_int (int_of_integer_symbolic i)
[PROOF STEP]
unfolding Uint8_def int_of_integer_symbolic_def
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. Rep_uint8 (map_fun int_of_integer Abs_uint8 word_of_int i) = word_of_integer i
[PROOF STEP]
by(simp add: Abs_uint8_inverse)
|
function installUnfold(flatSubdir,query)
%
% function installUnfold([flatSubdir],[query])
%
% Rebuild coords after a new unfold.
%
% flatSubdir: used to specify the flat subdirectory in case there is
% more than one unfold available. Default: prompt user to choose from a menu.
%
% Note: could automatically rebuild Gray/corAnals by xforming from inplane, but we
% would have to do this for each dataType.
%
% 12/8/00 Wandell and Huk
% djh, 2/14/2001
% updated to use hiddenFlat instead of opening a window
if ~exist('flatSubdir','var')
    flatSubdir = getFlatSubdir;
end
if ~exist('query','var')
    query = 1;
end
if query
    resp = questdlg(['Delete ',flatSubdir,'/anat, coords, corAnal, ' ...
        'and parameter map files and close flat window(s)?']);
else
    resp = 'Yes';
end
%delete coords and rebuild
if strcmp(resp,'Yes')
    closeAllFlatWindows(flatSubdir);
    cleanFlat(flatSubdir);
    % Open hidden gray structure because we need the gray coords
    hiddenGray = initHiddenGray;
    % Load flat file, compute & save coords
    hiddenFlat = initHiddenFlat(flatSubdir);
    % Compute and save flat anat file
    hiddenFlat = loadAnat(hiddenFlat);
    disp('Rebuilt Flat coords and anat files. Deleted old corAnals.');
else
    disp('Coords and corAnals not deleted.');
end
return
|
There are two families of ticks that make up the majority of the ticks in the Northeast: hard and soft ticks. The major difference is the reproduction cycle. A hard tick female has one blood meal, can produce 10,000 eggs, and then dies. A soft tick female will feed and then lay 20-50 eggs, feed again, lay another batch, and repeat. In the Northeast the most common hard ticks are the Brown Dog Tick, the American Dog Tick and the Deer Tick.
Ticks are very good at hitchhiking. They can sense your vibration, shadow and CO2 as you approach. Ticks wait in areas ideal for a blood meal in a specific position called questing: they hold hooks on their legs up and out, ready to hitch onto a passing meal.
The three main ticks in the Northeast all have 8 legs as adults, require 2 years to mature from egg to adult, and differ mostly in size. You can easily identify a tick embedded in your skin as it becomes engorged with blood. They all have 4 life stages: egg, larva, nymph and adult. Once the egg hatches, the larva needs to seek a blood meal, most often from mice or other small animals. The ticks in the Northeast will molt after the first blood meal and winter as nymphs under leaves or other natural debris, protected from winter's cold weather. In the spring the nymphs emerge and look for another blood meal. With that meal complete they molt into adults, look for another blood meal, and start the reproduction process.
Ticks can transmit various tick-borne human diseases, including Rocky Mountain spotted fever and Lyme disease. Another less common threat is tick paralysis, a condition that develops during feeding and can result in death, though the symptoms disappear rapidly once the tick is removed. When you work outside in a brushy or leafy area you run the risk of a tick hitchhiking on your clothes. This is why you should always, if possible, wear long sleeves and pants, preferably light in color. Tape the pant cuffs tight to your legs to stop ticks from climbing up onto your skin. When you complete your outdoor activities you should remove all clothing and shower. When drying off, perform a tick check on all parts of your body. At this stage you are looking for just the tick; over the next 24 hours stay alert for engorged females.
The first step in control is inspection and education. Our pest management professional will inspect your yard, identify the problem areas and areas where brush or shrubbery should be cut back, and recommend other non-pesticide measures that naturally keep ticks away from your home. Then we treat the “hot” areas, which are most likely to harbor a population of ticks. Treatments and inspections are repeated: 2 additional applications for traditional treatments, and every 3 weeks for organic treatments.
|
(* Title: Uint8.thy
Author: Andreas Lochbihler, ETH Zurich
*)
chapter {* Unsigned words of 8 bits *}
theory Uint8 imports
Word_Misc
Bits_Integer
begin
text {*
Restriction for OCaml code generation:
OCaml does not provide an int8 type, so no special code generation
for this type is set up. If the theory @{text "Code_Target_Bits_Int"}
is imported, the type @{text uint8} is emulated via @{typ "8 word"}.
*}
declare prod.Quotient[transfer_rule]
section {* Type definition and primitive operations *}
typedef uint8 = "UNIV :: 8 word set" ..
setup_lifting type_definition_uint8
text {* Use an abstract type for code generation to disable pattern matching on @{term Abs_uint8}. *}
declare Rep_uint8_inverse[code abstype]
declare Quotient_uint8[transfer_rule]
instantiation uint8 :: "{neg_numeral, Divides.div, comm_monoid_mult, comm_ring}" begin
lift_definition zero_uint8 :: uint8 is "0" .
lift_definition one_uint8 :: uint8 is "1" .
lift_definition plus_uint8 :: "uint8 \<Rightarrow> uint8 \<Rightarrow> uint8" is "op +" .
lift_definition minus_uint8 :: "uint8 \<Rightarrow> uint8 \<Rightarrow> uint8" is "op -" .
lift_definition uminus_uint8 :: "uint8 \<Rightarrow> uint8" is uminus .
lift_definition times_uint8 :: "uint8 \<Rightarrow> uint8 \<Rightarrow> uint8" is "op *" .
lift_definition divide_uint8 :: "uint8 \<Rightarrow> uint8 \<Rightarrow> uint8" is "op div" .
lift_definition mod_uint8 :: "uint8 \<Rightarrow> uint8 \<Rightarrow> uint8" is "op mod" .
instance by standard (transfer, simp add: algebra_simps)+
end
instantiation uint8 :: linorder begin
lift_definition less_uint8 :: "uint8 \<Rightarrow> uint8 \<Rightarrow> bool" is "op <" .
lift_definition less_eq_uint8 :: "uint8 \<Rightarrow> uint8 \<Rightarrow> bool" is "op \<le>" .
instance by standard (transfer, simp add: less_le_not_le linear)+
end
lemmas [code] = less_uint8.rep_eq less_eq_uint8.rep_eq
instantiation uint8 :: bitss begin
lift_definition bitNOT_uint8 :: "uint8 \<Rightarrow> uint8" is bitNOT .
lift_definition bitAND_uint8 :: "uint8 \<Rightarrow> uint8 \<Rightarrow> uint8" is bitAND .
lift_definition bitOR_uint8 :: "uint8 \<Rightarrow> uint8 \<Rightarrow> uint8" is bitOR .
lift_definition bitXOR_uint8 :: "uint8 \<Rightarrow> uint8 \<Rightarrow> uint8" is bitXOR .
lift_definition test_bit_uint8 :: "uint8 \<Rightarrow> nat \<Rightarrow> bool" is test_bit .
lift_definition set_bit_uint8 :: "uint8 \<Rightarrow> nat \<Rightarrow> bool \<Rightarrow> uint8" is set_bit .
lift_definition set_bits_uint8 :: "(nat \<Rightarrow> bool) \<Rightarrow> uint8" is "set_bits" .
lift_definition lsb_uint8 :: "uint8 \<Rightarrow> bool" is lsb .
lift_definition shiftl_uint8 :: "uint8 \<Rightarrow> nat \<Rightarrow> uint8" is shiftl .
lift_definition shiftr_uint8 :: "uint8 \<Rightarrow> nat \<Rightarrow> uint8" is shiftr .
lift_definition msb_uint8 :: "uint8 \<Rightarrow> bool" is msb .
instance ..
end
lemmas [code] = test_bit_uint8.rep_eq lsb_uint8.rep_eq msb_uint8.rep_eq
instantiation uint8 :: equal begin
lift_definition equal_uint8 :: "uint8 \<Rightarrow> uint8 \<Rightarrow> bool" is "equal_class.equal" .
instance by standard (transfer, simp add: equal_eq)
end
lemmas [code] = equal_uint8.rep_eq
instantiation uint8 :: size begin
lift_definition size_uint8 :: "uint8 \<Rightarrow> nat" is "size" .
instance ..
end
lemmas [code] = size_uint8.rep_eq
lift_definition sshiftr_uint8 :: "uint8 \<Rightarrow> nat \<Rightarrow> uint8" (infixl ">>>" 55) is sshiftr .
lift_definition uint8_of_int :: "int \<Rightarrow> uint8" is "word_of_int" .
definition uint8_of_nat :: "nat \<Rightarrow> uint8"
where "uint8_of_nat = uint8_of_int \<circ> int"
lift_definition int_of_uint8 :: "uint8 \<Rightarrow> int" is "uint" .
lift_definition nat_of_uint8 :: "uint8 \<Rightarrow> nat" is "unat" .
definition integer_of_uint8 :: "uint8 \<Rightarrow> integer"
where "integer_of_uint8 = integer_of_int o int_of_uint8"
text {* Use pretty numerals from integer for pretty printing *}
context includes integer.lifting begin
lift_definition Uint8 :: "integer \<Rightarrow> uint8" is "word_of_int" .
lemma Rep_uint8_numeral [simp]: "Rep_uint8 (numeral n) = numeral n"
by(induction n)(simp_all add: one_uint8_def Abs_uint8_inverse numeral.simps plus_uint8_def)
lemma numeral_uint8_transfer [transfer_rule]:
"(rel_fun op = cr_uint8) numeral numeral"
by(auto simp add: cr_uint8_def)
lemma numeral_uint8 [code_unfold]: "numeral n = Uint8 (numeral n)"
by transfer simp
lemma Rep_uint8_neg_numeral [simp]: "Rep_uint8 (- numeral n) = - numeral n"
by(simp only: uminus_uint8_def)(simp add: Abs_uint8_inverse)
lemma neg_numeral_uint8 [code_unfold]: "- numeral n = Uint8 (- numeral n)"
by transfer(simp add: cr_uint8_def)
end
lemma Abs_uint8_numeral [code_post]: "Abs_uint8 (numeral n) = numeral n"
by(induction n)(simp_all add: one_uint8_def numeral.simps plus_uint8_def Abs_uint8_inverse)
lemma Abs_uint8_0 [code_post]: "Abs_uint8 0 = 0"
by(simp add: zero_uint8_def)
lemma Abs_uint8_1 [code_post]: "Abs_uint8 1 = 1"
by(simp add: one_uint8_def)
section {* Code setup *}
code_printing code_module Uint8 \<rightharpoonup> (SML)
{*(* Test that words can handle numbers between 0 and 3 *)
val _ = if 3 <= Word.wordSize then () else raise (Fail ("wordSize less than 3"));
structure Uint8 : sig
val set_bit : Word8.word -> IntInf.int -> bool -> Word8.word
val shiftl : Word8.word -> IntInf.int -> Word8.word
val shiftr : Word8.word -> IntInf.int -> Word8.word
val shiftr_signed : Word8.word -> IntInf.int -> Word8.word
val test_bit : Word8.word -> IntInf.int -> bool
end = struct
fun set_bit x n b =
let val mask = Word8.<< (0wx1, Word.fromLargeInt (IntInf.toLarge n))
in if b then Word8.orb (x, mask)
else Word8.andb (x, Word8.notb mask)
end
fun shiftl x n =
Word8.<< (x, Word.fromLargeInt (IntInf.toLarge n))
fun shiftr x n =
Word8.>> (x, Word.fromLargeInt (IntInf.toLarge n))
fun shiftr_signed x n =
Word8.~>> (x, Word.fromLargeInt (IntInf.toLarge n))
fun test_bit x n =
Word8.andb (x, Word8.<< (0wx1, Word.fromLargeInt (IntInf.toLarge n))) <> Word8.fromInt 0
end; (* struct Uint8 *)*}
code_reserved SML Uint8
code_printing code_module Uint8 \<rightharpoonup> (Haskell)
{*import qualified Data.Word;
import qualified Data.Int;
type Int8 = Data.Int.Int8;
type Word8 = Data.Word.Word8;*}
code_reserved Haskell Uint8
text {*
Scala provides only signed 8bit numbers, so we use these and
implement sign-sensitive operations like comparisons manually.
*}
code_printing code_module Uint8 \<rightharpoonup> (Scala)
{*object Uint8 {
def less(x: Byte, y: Byte) : Boolean =
if (x < 0) y < 0 && x < y
else y < 0 || x < y
def less_eq(x: Byte, y: Byte) : Boolean =
if (x < 0) y < 0 && x <= y
else y < 0 || x <= y
def set_bit(x: Byte, n: BigInt, b: Boolean) : Byte =
if (b)
(x | (1 << n.intValue)).toByte
else
(x & (1 << n.intValue).unary_~).toByte
def shiftl(x: Byte, n: BigInt) : Byte = (x << n.intValue).toByte
def shiftr(x: Byte, n: BigInt) : Byte = ((x & 255) >>> n.intValue).toByte
def shiftr_signed(x: Byte, n: BigInt) : Byte = (x >> n.intValue).toByte
def test_bit(x: Byte, n: BigInt) : Boolean =
(x & (1 << n.intValue)) != 0
} /* object Uint8 */*}
code_reserved Scala Uint8
text {*
Avoid @{term Abs_uint8} in generated code, use @{term Rep_uint8'} instead.
The symbolic implementations for code\_simp use @{term Rep_uint8}.
The new destructor @{term Rep_uint8'} is executable.
As the simplifier is given the [code abstract] equations literally,
we cannot implement @{term Rep_uint8} directly, because that makes code\_simp loop.
If code generation raises Match, some equation probably contains @{term Rep_uint8}
([code abstract] equations for @{typ uint8} may use @{term Rep_uint8} because
these instances will be folded away.)
To convert @{typ "8 word"} values into @{typ uint8}, use @{term "Abs_uint8'"}.
*}
definition Rep_uint8' where [simp]: "Rep_uint8' = Rep_uint8"
lemma Rep_uint8'_transfer [transfer_rule]:
"rel_fun cr_uint8 op = (\<lambda>x. x) Rep_uint8'"
unfolding Rep_uint8'_def by(rule uint8.rep_transfer)
lemma Rep_uint8'_code [code]: "Rep_uint8' x = (BITS n. x !! n)"
by transfer simp
lift_definition Abs_uint8' :: "8 word \<Rightarrow> uint8" is "\<lambda>x :: 8 word. x" .
lemma Abs_uint8'_code [code]: "Abs_uint8' x = Uint8 (integer_of_int (uint x))"
including integer.lifting by transfer simp
lemma [code, code del]: "term_of_class.term_of = (term_of_class.term_of :: uint8 \<Rightarrow> _)" ..
lemma term_of_uint8_code [code]:
defines "TR \<equiv> typerep.Typerep" and "bit0 \<equiv> STR ''Numeral_Type.bit0''" shows
"term_of_class.term_of x =
Code_Evaluation.App (Code_Evaluation.Const (STR ''Uint8.Abs_uint8'') (TR (STR ''fun'') [TR (STR ''Word.word'') [TR bit0 [TR bit0 [TR bit0 [TR (STR ''Numeral_Type.num1'') []]]]], TR (STR ''Uint8.uint8'') []]))
(term_of_class.term_of (Rep_uint8' x))"
by(simp add: term_of_anything)
lemma Uin8_code [code abstract]: "Rep_uint8 (Uint8 i) = word_of_int (int_of_integer_symbolic i)"
unfolding Uint8_def int_of_integer_symbolic_def by(simp add: Abs_uint8_inverse)
code_printing type_constructor uint8 \<rightharpoonup>
(SML) "Word8.word" and
(Haskell) "Uint8.Word8" and
(Scala) "Byte"
| constant Uint8 \<rightharpoonup>
(SML) "Word8.fromLargeInt (IntInf.toLarge _)" and
(Haskell) "(Prelude.fromInteger _ :: Uint8.Word8)" and
(Haskell_Quickcheck) "(Prelude.fromInteger (Prelude.toInteger _) :: Uint8.Word8)" and
(Scala) "_.byteValue"
| constant "0 :: uint8" \<rightharpoonup>
(SML) "(Word8.fromInt 0)" and
(Haskell) "(0 :: Uint8.Word8)" and
(Scala) "0.toByte"
| constant "1 :: uint8" \<rightharpoonup>
(SML) "(Word8.fromInt 1)" and
(Haskell) "(1 :: Uint8.Word8)" and
(Scala) "1.toByte"
| constant "plus :: uint8 \<Rightarrow> _ \<Rightarrow> _" \<rightharpoonup>
(SML) "Word8.+ ((_), (_))" and
(Haskell) infixl 6 "+" and
(Scala) "(_ +/ _).toByte"
| constant "uminus :: uint8 \<Rightarrow> _" \<rightharpoonup>
(SML) "Word8.~" and
(Haskell) "negate" and
(Scala) "(- _).toByte"
| constant "minus :: uint8 \<Rightarrow> _" \<rightharpoonup>
(SML) "Word8.- ((_), (_))" and
(Haskell) infixl 6 "-" and
(Scala) "(_ -/ _).toByte"
| constant "times :: uint8 \<Rightarrow> _ \<Rightarrow> _" \<rightharpoonup>
(SML) "Word8.* ((_), (_))" and
(Haskell) infixl 7 "*" and
(Scala) "(_ */ _).toByte"
| constant "HOL.equal :: uint8 \<Rightarrow> _ \<Rightarrow> bool" \<rightharpoonup>
(SML) "!((_ : Word8.word) = _)" and
(Haskell) infix 4 "==" and
(Scala) infixl 5 "=="
| class_instance uint8 :: equal \<rightharpoonup> (Haskell) -
| constant "less_eq :: uint8 \<Rightarrow> _ \<Rightarrow> bool" \<rightharpoonup>
(SML) "Word8.<= ((_), (_))" and
(Haskell) infix 4 "<=" and
(Scala) "Uint8.less'_eq"
| constant "less :: uint8 \<Rightarrow> _ \<Rightarrow> bool" \<rightharpoonup>
(SML) "Word8.< ((_), (_))" and
(Haskell) infix 4 "<" and
(Scala) "Uint8.less"
| constant "bitNOT :: uint8 \<Rightarrow> _" \<rightharpoonup>
(SML) "Word8.notb" and
(Haskell) "Data'_Bits.complement" and
(Scala) "_.unary'_~.toByte"
| constant "bitAND :: uint8 \<Rightarrow> _" \<rightharpoonup>
(SML) "Word8.andb ((_),/ (_))" and
(Haskell) infixl 7 "Data_Bits..&." and
(Scala) "(_ & _).toByte"
| constant "bitOR :: uint8 \<Rightarrow> _" \<rightharpoonup>
(SML) "Word8.orb ((_),/ (_))" and
(Haskell) infixl 5 "Data_Bits..|." and
(Scala) "(_ | _).toByte"
| constant "bitXOR :: uint8 \<Rightarrow> _" \<rightharpoonup>
(SML) "Word8.xorb ((_),/ (_))" and
(Haskell) "Data'_Bits.xor" and
(Scala) "(_ ^ _).toByte"
definition uint8_divmod :: "uint8 \<Rightarrow> uint8 \<Rightarrow> uint8 \<times> uint8" where
"uint8_divmod x y =
(if y = 0 then (undefined (op div :: uint8 \<Rightarrow> _) x (0 :: uint8), undefined (op mod :: uint8 \<Rightarrow> _) x (0 :: uint8))
else (x div y, x mod y))"
definition uint8_div :: "uint8 \<Rightarrow> uint8 \<Rightarrow> uint8"
where "uint8_div x y = fst (uint8_divmod x y)"
definition uint8_mod :: "uint8 \<Rightarrow> uint8 \<Rightarrow> uint8"
where "uint8_mod x y = snd (uint8_divmod x y)"
lemma div_uint8_code [code]: "x div y = (if y = 0 then 0 else uint8_div x y)"
including undefined_transfer unfolding uint8_divmod_def uint8_div_def
by transfer (simp add: word_div_def)
lemma mod_uint8_code [code]: "x mod y = (if y = 0 then x else uint8_mod x y)"
including undefined_transfer unfolding uint8_mod_def uint8_divmod_def
by transfer (simp add: word_mod_def)
definition uint8_sdiv :: "uint8 \<Rightarrow> uint8 \<Rightarrow> uint8"
where
"uint8_sdiv x y =
(if y = 0 then undefined (op div :: uint8 \<Rightarrow> _) x (0 :: uint8)
else Abs_uint8 (Rep_uint8 x sdiv Rep_uint8 y))"
definition div0_uint8 :: "uint8 \<Rightarrow> uint8"
where [code del]: "div0_uint8 x = undefined (op div :: uint8 \<Rightarrow> _) x (0 :: uint8)"
declare [[code abort: div0_uint8]]
definition mod0_uint8 :: "uint8 \<Rightarrow> uint8"
where [code del]: "mod0_uint8 x = undefined (op mod :: uint8 \<Rightarrow> _) x (0 :: uint8)"
declare [[code abort: mod0_uint8]]
lemma uint8_divmod_code [code]:
"uint8_divmod x y =
(if 0x80 \<le> y then if x < y then (0, x) else (1, x - y)
else if y = 0 then (div0_uint8 x, mod0_uint8 x)
else let q = (uint8_sdiv (x >> 1) y) << 1;
r = x - q * y
in if r \<ge> y then (q + 1, r - y) else (q, r))"
including undefined_transfer unfolding uint8_divmod_def uint8_sdiv_def div0_uint8_def mod0_uint8_def
by transfer(simp add: divmod_via_sdivmod)
lemma uint8_sdiv_code [code abstract]:
"Rep_uint8 (uint8_sdiv x y) =
(if y = 0 then Rep_uint8 (undefined (op div :: uint8 \<Rightarrow> _) x (0 :: uint8))
else Rep_uint8 x sdiv Rep_uint8 y)"
unfolding uint8_sdiv_def by(simp add: Abs_uint8_inverse)
text {*
Note that we only need a translation for signed division, but not for the remainder
because @{thm uint8_divmod_code} computes both with division only.
*}
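To see concretely why a single signed division suffices, the algorithm behind uint8_divmod_code can be sketched and exhaustively checked in Python (an illustration on our part, not generated code):

```python
def uint8_divmod(x, y):
    # Unsigned 8-bit divmod via one *signed* division, mirroring
    # uint8_divmod_code; x and y are ints in range(256), y != 0.
    if y >= 0x80:                 # divisor has its top bit set:
        return (0, x) if x < y else (1, x - y)  # quotient is 0 or 1
    # x >> 1 and y both fit in a signed byte, so signed division of
    # them agrees with unsigned division; one correction step remains.
    q = ((x >> 1) // y) << 1
    r = x - q * y                 # r lies in [0, 2*y)
    return (q + 1, r - y) if r >= y else (q, r)

assert all(uint8_divmod(x, y) == (x // y, x % y)
           for x in range(256) for y in range(1, 256))
```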
code_printing
constant uint8_div \<rightharpoonup>
(SML) "Word8.div ((_), (_))" and
(Haskell) "Prelude.div"
| constant uint8_mod \<rightharpoonup>
(SML) "Word8.mod ((_), (_))" and
(Haskell) "Prelude.mod"
| constant uint8_divmod \<rightharpoonup>
(Haskell) "divmod"
| constant uint8_sdiv \<rightharpoonup>
(Scala) "(_ '/ _).toByte"
definition uint8_test_bit :: "uint8 \<Rightarrow> integer \<Rightarrow> bool"
where [code del]:
"uint8_test_bit x n =
(if n < 0 \<or> 7 < n then undefined (test_bit :: uint8 \<Rightarrow> _) x n
else x !! (nat_of_integer n))"
lemma test_bit_uint8_code [code]:
"test_bit x n \<longleftrightarrow> n < 8 \<and> uint8_test_bit x (integer_of_nat n)"
including undefined_transfer integer.lifting unfolding uint8_test_bit_def
by transfer(auto cong: conj_cong dest: test_bit_size simp add: word_size)
lemma uint8_test_bit_code [code]:
"uint8_test_bit w n =
(if n < 0 \<or> 7 < n then undefined (test_bit :: uint8 \<Rightarrow> _) w n else Rep_uint8 w !! nat_of_integer n)"
unfolding uint8_test_bit_def by(simp add: test_bit_uint8.rep_eq)
code_printing constant uint8_test_bit \<rightharpoonup>
(SML) "Uint8.test'_bit" and
(Haskell) "Data'_Bits.testBitBounded" and
(Scala) "Uint8.test'_bit"
definition uint8_set_bit :: "uint8 \<Rightarrow> integer \<Rightarrow> bool \<Rightarrow> uint8"
where [code del]:
"uint8_set_bit x n b =
(if n < 0 \<or> 7 < n then undefined (set_bit :: uint8 \<Rightarrow> _) x n b
else set_bit x (nat_of_integer n) b)"
lemma set_bit_uint8_code [code]:
"set_bit x n b = (if n < 8 then uint8_set_bit x (integer_of_nat n) b else x)"
including undefined_transfer integer.lifting unfolding uint8_set_bit_def
by(transfer)(auto cong: conj_cong simp add: not_less set_bit_beyond word_size)
lemma uint8_set_bit_code [code abstract]:
"Rep_uint8 (uint8_set_bit w n b) =
(if n < 0 \<or> 7 < n then Rep_uint8 (undefined (set_bit :: uint8 \<Rightarrow> _) w n b)
else set_bit (Rep_uint8 w) (nat_of_integer n) b)"
including undefined_transfer unfolding uint8_set_bit_def by transfer simp
code_printing constant uint8_set_bit \<rightharpoonup>
(SML) "Uint8.set'_bit" and
(Haskell) "Data'_Bits.setBitBounded" and
(Scala) "Uint8.set'_bit"
lift_definition uint8_set_bits :: "(nat \<Rightarrow> bool) \<Rightarrow> uint8 \<Rightarrow> nat \<Rightarrow> uint8" is set_bits_aux .
lemma uint8_set_bits_code [code]:
"uint8_set_bits f w n =
(if n = 0 then w
else let n' = n - 1 in uint8_set_bits f ((w << 1) OR (if f n' then 1 else 0)) n')"
by(transfer fixing: n)(cases n, simp_all)
lemma set_bits_uint8 [code]:
"(BITS n. f n) = uint8_set_bits f 0 8"
by transfer(simp add: set_bits_conv_set_bits_aux)
lemma lsb_code [code]: fixes x :: uint8 shows "lsb x = x !! 0"
by transfer(simp add: word_lsb_def word_test_bit_def)
definition uint8_shiftl :: "uint8 \<Rightarrow> integer \<Rightarrow> uint8"
where [code del]:
"uint8_shiftl x n = (if n < 0 \<or> 8 \<le> n then undefined (shiftl :: uint8 \<Rightarrow> _) x n else x << (nat_of_integer n))"
lemma shiftl_uint8_code [code]: "x << n = (if n < 8 then uint8_shiftl x (integer_of_nat n) else 0)"
including undefined_transfer integer.lifting unfolding uint8_shiftl_def
by transfer(simp add: not_less shiftl_zero_size word_size)
lemma uint8_shiftl_code [code abstract]:
"Rep_uint8 (uint8_shiftl w n) =
(if n < 0 \<or> 8 \<le> n then Rep_uint8 (undefined (shiftl :: uint8 \<Rightarrow> _) w n)
else Rep_uint8 w << nat_of_integer n)"
including undefined_transfer unfolding uint8_shiftl_def by transfer simp
code_printing constant uint8_shiftl \<rightharpoonup>
(SML) "Uint8.shiftl" and
(Haskell) "Data'_Bits.shiftlBounded" and
(Scala) "Uint8.shiftl"
definition uint8_shiftr :: "uint8 \<Rightarrow> integer \<Rightarrow> uint8"
where [code del]:
"uint8_shiftr x n = (if n < 0 \<or> 8 \<le> n then undefined (shiftr :: uint8 \<Rightarrow> _) x n else x >> (nat_of_integer n))"
lemma shiftr_uint8_code [code]: "x >> n = (if n < 8 then uint8_shiftr x (integer_of_nat n) else 0)"
including undefined_transfer integer.lifting unfolding uint8_shiftr_def
by transfer(simp add: not_less shiftr_zero_size word_size)
lemma uint8_shiftr_code [code abstract]:
"Rep_uint8 (uint8_shiftr w n) =
(if n < 0 \<or> 8 \<le> n then Rep_uint8 (undefined (shiftr :: uint8 \<Rightarrow> _) w n)
else Rep_uint8 w >> nat_of_integer n)"
including undefined_transfer unfolding uint8_shiftr_def by transfer simp
code_printing constant uint8_shiftr \<rightharpoonup>
(SML) "Uint8.shiftr" and
(Haskell) "Data'_Bits.shiftrBounded" and
(Scala) "Uint8.shiftr"
definition uint8_sshiftr :: "uint8 \<Rightarrow> integer \<Rightarrow> uint8"
where [code del]:
"uint8_sshiftr x n =
(if n < 0 \<or> 8 \<le> n then undefined sshiftr_uint8 x n else sshiftr_uint8 x (nat_of_integer n))"
lemma sshiftr_beyond: fixes x :: "'a :: len word" shows
"size x \<le> n \<Longrightarrow> x >>> n = (if x !! (size x - 1) then -1 else 0)"
by(rule word_eqI)(simp add: nth_sshiftr word_size)
lemma sshiftr_uint8_code [code]:
"x >>> n =
(if n < 8 then uint8_sshiftr x (integer_of_nat n) else if x !! 7 then -1 else 0)"
including undefined_transfer integer.lifting unfolding uint8_sshiftr_def
by transfer (simp add: not_less sshiftr_beyond word_size)
lemma uint8_sshiftr_code [code abstract]:
"Rep_uint8 (uint8_sshiftr w n) =
(if n < 0 \<or> 8 \<le> n then Rep_uint8 (undefined sshiftr_uint8 w n)
else Rep_uint8 w >>> nat_of_integer n)"
including undefined_transfer unfolding uint8_sshiftr_def by transfer simp
code_printing constant uint8_sshiftr \<rightharpoonup>
(SML) "Uint8.shiftr'_signed" and
(Haskell)
"(Prelude.fromInteger (Prelude.toInteger (Data'_Bits.shiftrBounded (Prelude.fromInteger (Prelude.toInteger _) :: Uint8.Int8) _)) :: Uint8.Word8)" and
(Scala) "Uint8.shiftr'_signed"
lemma uint8_msb_test_bit: "msb x \<longleftrightarrow> (x :: uint8) !! 7"
by transfer(simp add: msb_nth)
lemma msb_uint8_code [code]: "msb x \<longleftrightarrow> uint8_test_bit x 7"
by(simp add: uint8_test_bit_def uint8_msb_test_bit)
lemma uint8_of_int_code [code]: "uint8_of_int i = Uint8 (integer_of_int i)"
including integer.lifting by transfer simp
lemma int_of_uint8_code [code]:
"int_of_uint8 x = int_of_integer (integer_of_uint8 x)"
by(simp add: integer_of_uint8_def)
lemma nat_of_uint8_code [code]:
"nat_of_uint8 x = nat_of_integer (integer_of_uint8 x)"
unfolding integer_of_uint8_def including integer.lifting by transfer (simp add: unat_def)
definition integer_of_uint8_signed :: "uint8 \<Rightarrow> integer"
where
"integer_of_uint8_signed n = (if n !! 7 then undefined integer_of_uint8 n else integer_of_uint8 n)"
lemma integer_of_uint8_signed_code [code]:
"integer_of_uint8_signed n =
(if n !! 7 then undefined integer_of_uint8 n else integer_of_int (uint (Rep_uint8' n)))"
unfolding integer_of_uint8_signed_def integer_of_uint8_def
including undefined_transfer by transfer simp
code_printing
constant "integer_of_uint8" \<rightharpoonup>
(SML) "IntInf.fromLarge (Word8.toLargeInt _)" and
(Haskell) "Prelude.toInteger"
| constant "integer_of_uint8_signed" \<rightharpoonup>
(Scala) "BigInt"
section {* Quickcheck setup *}
definition uint8_of_natural :: "natural \<Rightarrow> uint8"
where "uint8_of_natural x \<equiv> Uint8 (integer_of_natural x)"
instantiation uint8 :: "{random, exhaustive, full_exhaustive}" begin
definition "random_uint8 \<equiv> qc_random_cnv uint8_of_natural"
definition "exhaustive_uint8 \<equiv> qc_exhaustive_cnv uint8_of_natural"
definition "full_exhaustive_uint8 \<equiv> qc_full_exhaustive_cnv uint8_of_natural"
instance ..
end
instantiation uint8 :: narrowing begin
interpretation quickcheck_narrowing_samples
"\<lambda>i. let x = Uint8 i in (x, 0xFF - x)" "0"
"Typerep.Typerep (STR ''Uint8.uint8'') []" .
definition "narrowing_uint8 d = qc_narrowing_drawn_from (narrowing_samples d) d"
declare [[code drop: "partial_term_of :: uint8 itself \<Rightarrow> _"]]
lemmas partial_term_of_uint8 [code] = partial_term_of_code
instance ..
end
no_notation sshiftr_uint8 (infixl ">>>" 55)
end
|
Sign up today & get $10 of FREE credit.
How much storage are you actually using?
Most people use less than half of their allotted storage amount.
Ask not how much cloud storage your provider is giving you—ask how much storage you're actually using.
Depending on how much you store, your $10 may last you months or even years. After that, just top up your account.
You can make a one-time payment to top up your account at any time. We accept all major credit cards, as well as Bitcoin.
Tiered pricing systems are, by definition, unfair to a large percentage of their customers.
Imagine if a gas station decided to charge every customer the same fixed price to fill their tank. What price would the gas station pick? Not the cost of the smallest car, because then they'd lose money. And not the cost of the most expensive car either, because then every customer would be upset. So the gas station would need to pick a price somewhere in the middle.
Which means that approximately 50% of its customers are going to end up paying more, and 50% are going to pay less. This might sound great if you’re driving a truck with a larger gas tank. But it also means the smaller customers are getting ripped off, in order to subsidize the larger customers. And we think that’s unfair.
With us you only pay for what you use. If you only use half a gigabyte, then we only charge you for half a gigabyte. We're talking pennies, or even fractions of a penny.
PS - We don't round up your cost. We have the ability to charge you fractions of a penny.
When you create your account, we'll give you $10 credit. At the end of every month, we calculate your bill based on how much storage you used. This amount is deducted from your available balance. If your balance is still above zero, then you're good to go. If it drops below zero, then we'll pester you for money.
Payments are simple. You can make a one-time payment at any time to top up your account. Or you can set up automatic payments, and we'll automatically top up your account for you when your balance drops below zero.
Files stored in the cloud are automatically transitioned to Infrequent Access after not being modified/accessed for 30 days. If a transitioned file is then not modified/accessed for an additional 60 days, it's automatically transitioned to Cold Storage. So you pay less for files you store longer. For most users, this represents over 90% of their storage.
Storage is billed using a unit called "gigabyte months", which means "1 gigabyte stored for an entire month". This is similar in concept to a "kilowatt hour" on your electric bill. And the idea is to charge you the minimum amount possible. If you only store a document for 10 days, then we only charge you for those 10 days.
We don't round up your cost. We have the ability to charge you fractions of a penny. We can do this because we're just deducting from your balance in a computer system. After all, those fractions of a penny are yours, and we think you deserve them.
Alice stores a total of 1.5 GB (in cold storage) for the entire month. She pays 8.1¢ for the first gig, and 3.55¢ (7.1 / 2) for the second half gig. Her total storage cost for the month is $0.1165.
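For illustration, the billing rule implied by this example can be written out in a few lines of Python (the split between a first-gigabyte rate and a lower marginal rate is our reading of the example, not an official price sheet):

```python
def cold_storage_cost_dollars(gb_months):
    # Rates from the Alice example: 8.1 cents for the first GB-month,
    # 7.1 cents per additional GB-month, pro-rated with no rounding up.
    first = min(gb_months, 1.0) * 8.1
    rest = max(gb_months - 1.0, 0.0) * 7.1
    return (first + rest) / 100.0   # cents -> dollars

print(cold_storage_cost_dollars(1.5))  # Alice: 0.1165, i.e. $0.1165
```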
To get started, just download our app, and click "create new account" to get your $10 free credit.
|
== Divisions in the College of Cardinals and the candidates to the papacy ==
|
[STATEMENT]
lemma rel_set_pos_distr_iff [simp]: "rel_set_pos_distr_cond A A' = True"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. rel_set_pos_distr_cond A A' = True
[PROOF STEP]
unfolding rel_set_pos_distr_cond_def
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (rel_set A OO rel_set A' \<le> rel_set (A OO A')) = True
[PROOF STEP]
by(simp add: rel_set_OO)
|
[STATEMENT]
lemma honest_verifier_ZK:
shows "Schnorr_\<Sigma>.HVZK"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. Schnorr_\<Sigma>.HVZK
[PROOF STEP]
unfolding Schnorr_\<Sigma>.HVZK_def
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<forall>e\<in>challenge_space. (\<forall>(h, w)\<in>R_DL. Schnorr_\<Sigma>.R h w e = Schnorr_\<Sigma>.S h e) \<and> (\<forall>h\<in>valid_pub. \<forall>(a, z)\<in>set_spmf (S2 h e). check h a e z)
[PROOF STEP]
by(auto simp add: hv_zk R_DL_def S2_def check_def valid_pub_def challenge_space_def cyclic_group_assoc)
|
import system.io
open io nat
def hello_world : io unit :=
put_str_ln "hello world"
def print_squares : ℕ → io unit
| 0 := return ()
| (succ n) := print_squares n >>
put_str_ln (
to_string n ++ "^2 = " ++
to_string (n * n))
#eval hello_world
#eval print_squares 20
#print axioms hello_world
-- BEGIN
theorem and_commutative (p q : Prop) : p ∧ q → q ∧ p :=
assume hpq : p ∧ q,
have hp : p, from and.left hpq,
have hq : q, from and.right hpq,
show q ∧ p, from and.intro hq hp
-- END
constant m : ℕ
constant n : ℕ
constants b1 b2 : bool
/- whoo -/
#check m
#check n
#check n + 0
#check m * (n + 0)
#check b1
#check b1 && b2
#check b1 || b2
#check tt
constant f : ℕ → ℕ
constant p : ℕ × ℕ
#check p.2
#check (f, p).1
#check f p.1
#check ℕ
#check Type
#check (×)
universe u
constant α : Type u
#check α
#check Type u
#check λ x, x + 5
#check λ (α β : Type) (b : β) (x : α), x
#check (λ (α β : Type) (b : β) (x : α), x) ℕ
#reduce (λ x, x) f
#reduce (λ x, α) f
#reduce b1 || tt
#reduce tt || b1
#reduce n * 0
#reduce n * 1
#eval 12345 * 54321
def foo : (ℕ → ℕ) → ℕ := λ f, f 0
#check foo
#print foo
def double (x : ℕ) : ℕ := x + x
def doubl' := λ x : ℕ, x + x
#check double = doubl'
#reduce double = doubl'
def curry (α β γ : Type) (f : α × β → γ) : α → β → γ
:= λ a b, f (a, b)
#check let y := 2 + 2 in y * y
section useful
variables (α β γ : Type)
variables (g : β → γ) (f: α → β) (h : α → α)
variable x : α
def compose := g (f x)
def do_twice := h (h x)
def do_thrice := h (h (h x))
end useful
#print compose
#print do_twice
#print do_thrice
namespace foo
def a : ℕ := 5
#print a
end foo
#print foo.a
open foo
#print a
namespace hidden
universe μ
constant list: Type μ → Type μ
constant cons : Π α : Type μ, α → list α → list α
end hidden
open list
#check list
#check @cons
#check @nil
#check @head
variable α : Type
variable β : α → Type
variable a : α
variable b : β a
#check sigma.mk a b
namespace list
constant cons' : Π {α : Type u}, α → list α → list α
#check cons'
end list
|
import Mathlib.Data.Nat.Basic
universe u
namespace project
structure HeapNodeAux (α : Type u) (h : Type u) where
val : α
rank : Nat
children : h
-- A `Heap` is a forest of binomial trees.
inductive Heap (α : Type u) : Type u where
| heap (ns : List (HeapNodeAux α (Heap α))) : Heap α
deriving Inhabited
open Heap
-- A `BinTree` is a binomial tree. In this formalization ranks start at 1
-- (cf. `singleton`): if a `BinTree` has rank `k`, its children have ranks
-- between `1` and `k - 1`. They are ordered by rank. Additionally, the value
-- of each child must be greater than or equal to the value of its parent node.
abbrev BinTree α := HeapNodeAux α (Heap α)
def Heap.nodes : Heap α → List (BinTree α)
| heap ns => ns
@[simp]
theorem Heap.nodes_def : nodes (heap xs) = xs := rfl
variable {α : Type u}
def hRank : List (BinTree α) → Nat
| [] => 0
| h::_ => h.rank
def isEmpty : Heap α → Bool
| heap [] => true
| _ => false
def empty : Heap α :=
heap []
def singleton (a : α) : Heap α :=
heap [{ val := a, rank := 1, children := heap [] }]
-- Combine two binomial trees of rank `r`, creating a binomial tree of rank
-- `r + 1`.
@[specialize] def combine (le : α → α → Bool) (n₁ n₂ : BinTree α) : BinTree α :=
if le n₂.val n₁.val then
{ n₂ with rank := n₂.rank + 1, children := heap $ n₂.children.nodes ++ [n₁] }
else
{ n₁ with rank := n₁.rank + 1, children := heap $ n₁.children.nodes ++ [n₂] }
-- Merge two forests of binomial trees. The forests are assumed to be ordered
-- by rank and `mergeNodes` maintains this invariant.
@[specialize] def mergeNodes (le : α → α → Bool) : List (BinTree α) → List (BinTree α) → List (BinTree α)
| [], h => h
| h, [] => h
| f@(h₁ :: t₁), s@(h₂ :: t₂) =>
if h₁.rank < h₂.rank then h₁ :: mergeNodes le t₁ s
else if h₂.rank < h₁.rank then h₂ :: mergeNodes le t₂ f
else
let merged := combine le h₁ h₂
let r := merged.rank
if r != hRank t₁ then
if r != hRank t₂ then merged :: mergeNodes le t₁ t₂ else mergeNodes le (merged :: t₁) t₂
else
if r != hRank t₂ then mergeNodes le t₁ (merged :: t₂) else merged :: mergeNodes le t₁ t₂
termination_by _ h₁ h₂ => h₁.length + h₂.length
decreasing_by simp_wf; simp_arith [*]
@[specialize] def merge (le : α → α → Bool) : Heap α → Heap α → Heap α
| heap h₁, heap h₂ => heap (mergeNodes le h₁ h₂)
@[specialize] def head? (le : α → α → Bool) : Heap α → Option α
| heap [] => none
| heap (h::hs) => some $
hs.foldl (init := h.val) fun r n => if le r n.val then r else n.val
@[inline] def head [Inhabited α] (le : α → α → Bool) (h : Heap α) : α :=
head? le h |>.getD default
@[specialize] def findMin (le : α → α → Bool) : List (BinTree α) → Nat → BinTree α × Nat → BinTree α × Nat
| [], _, r => r
| h::hs, idx, (h', idx') => if le h'.val h.val then findMin le hs (idx+1) (h', idx') else findMin le hs (idx+1) (h, idx)
-- It is important that we check `le h'.val h.val` here, not the other way
-- around. This ensures that head? and findMin find the same element even
-- when we have `le h'.val h.val` and `le h.val h'.val` (i.e. le is not
-- irreflexive).
def deleteMin (le : α → α → Bool) : Heap α → Option (α × Heap α)
| heap [] => none
| heap [h] => some (h.val, h.children)
| heap hhs@(h::hs) =>
let (min, minIdx) := findMin le hs 1 (h, 0)
let rest := hhs.eraseIdx minIdx
let tail := merge le (heap rest) min.children
some (min.val, tail)
@[inline] def tail? (le : α → α → Bool) (h : Heap α) : Option (Heap α) :=
deleteMin le h |>.map (·.snd)
@[inline] def tail (le : α → α → Bool) (h : Heap α) : Heap α :=
tail? le h |>.getD empty
partial def toList (le : α → α → Bool) (h : Heap α) : List α :=
match deleteMin le h with
| none => []
| some (hd, tl) => hd :: toList le tl
partial def toArray (le : α → α → Bool) (h : Heap α) : Array α :=
go #[] h
where
go (acc : Array α) (h : Heap α) : Array α :=
match deleteMin le h with
| none => acc
| some (hd, tl) => go (acc.push hd) tl
partial def toListUnordered : Heap α → List α
| heap ns => ns.bind fun n => n.val :: toListUnordered n.children
partial def toArrayUnordered (h : Heap α) : Array α :=
go #[] h
where
go (acc : Array α) : Heap α → Array α
| heap ns => Id.run do
let mut acc := acc
for n in ns do
acc := acc.push n.val
acc := go acc n.children
return acc
mutual
inductive IsBinTree : BinTree α → Prop where
| mk: IsRankedTree 1 a.rank a.children.nodes → IsBinTree a
/-IsRankedTree n m ts :<=> ts contains the children of the parent node of a binomial tree. IsRankedTree
ensures that the ranks of the children are n, n+1, n+2, ..., m-1 in that order, and that if n = m then ts is empty-/
-- fix me: IsBinForest
inductive IsRankedTree : Nat → Nat → List (BinTree α) → Prop where
| nil : IsRankedTree n n []
| cons : t.rank = n → IsRankedTree (n + 1) m ts → IsBinTree t → IsRankedTree n m (t::ts)
end
/-IsHeapForest' rank [t₁,...,tₙ] :<=> each tree in the list t₁ up to tₙ has a smaller rank than the tree
that follows it, i.e. tᵢ.rank < tᵢ₊₁.rank, and the first tree's rank exceeds the `rank` argument. Also, each tree in the list must be a binomial tree, so IsBinTree t holds for each tree t-/
inductive IsHeapForest' : Nat → List (BinTree α) → Prop where
| nil : IsHeapForest' rank []
| cons : rank < t.rank → IsBinTree t → IsHeapForest' t.rank ts → IsHeapForest' rank (t::ts)
abbrev IsHeapForest : List (BinTree α) → Prop := IsHeapForest' 0
/-IsHeap h calls IsHeapForest with the list of trees extracted from h-/
def IsHeap (h : Heap α): Prop :=
IsHeapForest h.nodes
mutual
inductive IsSearchTree (le : α → α → Bool) : BinTree α → Prop where
| mk : IsMinTree le a.val a.children.nodes → IsSearchTree le a
/-IsMinTree le val ns :<=> assures that val (the value of the parent node) is less than or equal to the values
of the nodes in ns (the children). Maintains the minimum-heap property-/
inductive IsMinTree (le : α → α → Bool) : α → List (BinTree α) → Prop where
| nil : IsMinTree le val []
| cons : le val n.val → IsMinTree le val ns → IsSearchTree le n → IsMinTree le val (n::ns)
end
/-IsMinHeap le (heap [t₁,...,tₙ]) :<=> IsMinHeap holds if IsSearchTree le t holds for each tree t in the list t₁ up to tₙ-/
inductive IsMinHeap (le : α → α → Bool) : Heap α → Prop where
| nil : IsMinHeap le (heap [])
| cons : IsSearchTree le n → IsMinHeap le (heap ns) → IsMinHeap le (heap (n::ns))
theorem IsHeap_empty : IsHeap (@empty α) := by
constructor
theorem IsMinHeap_empty : IsMinHeap le (@empty α) := by
constructor
theorem singleton_IsHeap : IsHeap (singleton a) := by
constructor
. dsimp
decide
. constructor
dsimp
constructor
. constructor
theorem singleton_IsMinHeap : IsMinHeap le (singleton a) := by
constructor
. constructor
dsimp
constructor
. constructor
theorem IsRankedTree_append (rt : IsRankedTree n m xs) (ha: IsBinTree a) (hrank: a.rank = m) :
IsRankedTree n (m + 1) (xs ++ [a]) := by
induction xs generalizing n
case nil =>
simp
cases rt
constructor
. assumption
. constructor
. assumption
case cons _ _ ih =>
cases rt
constructor
. assumption
. apply ih
assumption
. assumption
theorem combine_trees_IsBinTree (le : α → α → Bool) (a b : BinTree α) :
IsBinTree a → IsBinTree b → a.rank = b.rank → IsBinTree (combine le a b) := by
intros ha hb hab
constructor
unfold combine
split
. dsimp
cases hb
apply IsRankedTree_append
repeat assumption
. dsimp
cases ha
apply IsRankedTree_append
repeat assumption
simp_all
theorem IsMinTree_append (h : IsMinTree le m xs) (ha : IsSearchTree le a) (hba: le m a.val = true) :
IsMinTree le m (xs ++ [a]) := by
induction xs with
| nil =>
simp
constructor <;> assumption
| cons _ _ ih =>
cases h
constructor
. assumption
. simp
apply ih
assumption
. assumption
variable {le : α → α → Bool} (not_le_le : ∀ x y, ¬ le x y → le y x)
theorem combine_trees_IsSearchTree (a b : BinTree α) :
IsSearchTree le a → IsSearchTree le b → IsSearchTree le (combine le a b) := by
intros ha hb
constructor
unfold combine
split
. dsimp
cases hb
apply IsMinTree_append
repeat assumption
. dsimp
cases ha
apply IsMinTree_append
repeat assumption
apply not_le_le
assumption
|
The highway's name changes from 13400 South to Center Street through Lewiston. Passing the Lewiston Cemetery, SR-61 crosses over the Cub River and a second single track belonging to UP, and then a third UP single track just before the highway's eastern terminus at US-91 north of Richmond. All of the rail lines that SR-61 crosses originally belonged to the Oregon Short Line Railway. Aside from the segment through Lewiston, the highway is surrounded by farmland for its entire journey across northern Utah.
|
\documentclass{article}
\usepackage{PRIMEarxiv}
\usepackage[utf8]{inputenc} % allow utf-8 input
\usepackage[T1]{fontenc} % use 8-bit T1 fonts
\usepackage{hyperref} % hyperlinks
\usepackage{url} % simple URL typesetting
\usepackage{booktabs} % professional-quality tables
\usepackage{amsfonts} % blackboard math symbols
\usepackage{nicefrac} % compact symbols for 1/2, etc.
\usepackage{microtype} % microtypography
\usepackage{fancyhdr} % header
\usepackage{graphicx} % graphics
\usepackage{xcolor}
\usepackage{mathtools}
\usepackage[square,numbers]{natbib}
\graphicspath{{media/}} % organize your images and other figures under media/ folder
%Header
\pagestyle{fancy}
\thispagestyle{empty}
\rhead{ \textit{ The GatedTabTransformer. }}
% Update your Headers here
\fancyhead[LO]{Radostin Cholakov and Todor Kolev}
\newcommand{\projectSource}{\url{https://github.com/radi-cho/GatedTabTransformer}}
%% Title
\title{The GatedTabTransformer. An enhanced\\
deep learning architecture for tabular modeling.}
\author{
Radostin Cholakov \\
High School of Mathematics\\
Plovdiv, Bulgaria\\
\texttt{[email protected]} \\
%% examples of more authors
\And
Todor Kolev \\
Obecto \\
Sofia, Bulgaria \\
\texttt{[email protected]} \\
}
\begin{document}
\maketitle
\begin{abstract}
There is an increasing interest in the application of deep learning architectures to tabular data. One of the state-of-the-art solutions is TabTransformer which incorporates an attention mechanism to better track relationships between categorical features and then makes use of a standard MLP to output its final logits. In this paper we propose multiple modifications to the original TabTransformer performing better on binary classification tasks for three separate datasets with more than 1\% AUROC gains. Inspired by gated MLP, linear projections are implemented in the MLP block and multiple activation functions are tested. We also evaluate the importance of specific hyper parameters during training.
\end{abstract}
\keywords{deep learning \and Transformer \and tabular data}
\section{Introduction}
Some of the most common machine learning pipelines with real-world applications involve manipulation of tabular data. The current state-of-the-art approaches for tabular modeling are tree-based ensemble methods such as the gradient boosted decision trees (GBDTs) \cite{chen2016xgboost}. However, there is also an increasing interest in the application of deep learning techniques in the field due to the possibility for bypassing manual embedding creation and feature engineering \cite{fiedler2021simple}. Multiple neural network solutions such as DNF-Net \cite{abutbul2020dnf}, TabNet \cite{arik1908tabnet} or MLP+ \cite{fiedler2021simple} have been introduced, all of which demonstrate performance comparable to GBDTs.
On the other hand, as we've described in previous studies \cite{cholakov2021transformers}, attention-based architectures, originally introduced to tackle NLP tasks, such as the Transformer \cite{vaswani2017attention} are constantly being adapted to solve a wider range of problems. One proposal is the TabTransformer \cite{Huang2020TabTransformerTD} which focuses on using \textit{Multi-Head Self Attention} blocks to model relationships between the categorical features in tabular data, transforming them into robust contextual embeddings. The transformed categorical features are concatenated with continuous values and then fed through a standard multilayer perceptron \cite{haykin1994neural} (section \ref{sec:tabtransformer}). This way the TabTransformer significantly outperforms pure MLPs and recent deep networks (e.g. TabNet \cite{arik1908tabnet}) for tabular data. We believe that it is possible to further enhance its architecture by replacing the final MLP block with a gated multi-layer perceptron (gMLP) \cite{Liu2021PayAT} - a simple MLP-based network with spatial gating projections, which aims to be on par with Transformers in terms of performance on sequential data (section \ref{sec:gmlp}).
In this paper we will present an enhanced version of the TabTransformer with an incorporated gMLP block and the intuition behind it. Multiple other architecture design decisions based on hyper parameter optimization experiments will also be described.
\subsection{The TabTransformer}
\label{sec:tabtransformer}
The TabTransformer model, introduced in December 2020 by researchers at Amazon, manages to outperform the other state-of-the-art deep learning methods for tabular data by at least 1.0\% on mean AUROC. It consists of a column embedding layer, a stack of $N$ Transformer layers, and a multilayer perceptron (figure \ref{fig:model}). The inputted tabular features are split in two parts for the categorical and continuous values. For each categorical feature the so called \textit{column embedding} is performed (see \cite{Huang2020TabTransformerTD}). It generates parametric embeddings which are inputted to a stack of Transformer layers. Each Transformer layer \cite{vaswani2017attention} consists of a multi-head self-attention layer followed by a position-wise feed-forward layer.
\begin{equation}
Attention(Q, K, V) = softmax(\frac{QK^T}{\sqrt{d_k}})V
\end{equation}
After the processing of categorical values $x_{cat}$, they are concatenated along with the continuous values $x_{cont}$ to form a final feature vector $x$ which is inputted to a standard multilayer perceptron.
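For reference, the attention operation in the equation above can be sketched in a few lines of PyTorch (an illustrative snippet of ours, not code from the TabTransformer release):
\begin{verbatim}
import torch
import torch.nn.functional as F

def attention(Q, K, V):
    # softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5
    return F.softmax(scores, dim=-1) @ V
\end{verbatim}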
\subsection{The gMLP}
\label{sec:gmlp}
The gMLP model \cite{Liu2021PayAT} introduces some simple yet really effective modifications to the standard multilayer perceptron. It consists of a stack of multiple identically structured blocks (figure \ref{fig:gmlp-overview}) defined as follows:
\begin{equation}
Z = \sigma(XU), \; Z' = s(Z), \; Y = Z'V
\end{equation}
\begin{equation}
s(Z) = Z * f_{W,b}(Z)
\end{equation}
\begin{equation}
f_{W,b}(Z) = WZ + b
\end{equation}
where $\sigma$ is an activation function (e.g. ReLU), $U$ and $V$ are linear projections along the channel dimension and $s(\cdot)$ is the so called \textit{spatial gating unit} which captures spatial cross-token interactions. $f_{W,b}$ is a simple linear projection and $*$ represents element-wise multiplication. Usually the weights $W$ are initialized as near-zero values and the biases $b$ as ones at the beginning of training.
This structure does not require positional embeddings because relevant information will be captured in the gating units. From the original gMLP paper we can denote that the presented block layout is inspired by inverted bottlenecks which define $s(.)$ as a spatial convolution \cite{sandler2018mobilenetv2}.
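To make the block concrete, here is a PyTorch sketch of the equations above (variable names, the ReLU choice for $\sigma$ and the initialization shown are our reading of the description, not the released gMLP code):
\begin{verbatim}
import torch
import torch.nn as nn

class gMLPBlock(nn.Module):
    # Z = sigma(X U);  Z' = Z * (W Z + b);  Y = Z' V
    def __init__(self, seq_len, d_model, d_ffn):
        super().__init__()
        self.U = nn.Linear(d_model, d_ffn)          # channel projection
        self.V = nn.Linear(d_ffn, d_model)          # channel projection back
        self.spatial = nn.Linear(seq_len, seq_len)  # f_{W,b} across tokens
        nn.init.normal_(self.spatial.weight, std=1e-6)  # near-zero W
        nn.init.ones_(self.spatial.bias)                # b = 1

    def forward(self, x):  # x: (batch, seq_len, d_model)
        z = torch.relu(self.U(x))
        gate = self.spatial(z.transpose(1, 2)).transpose(1, 2)  # W Z + b
        return self.V(z * gate)
\end{verbatim}
With near-zero $W$ and unit $b$, the gate is initially close to the identity, so early in training each block behaves like a plain MLP.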
\begin{figure}[hpt]
\begin{center}
\includegraphics[width=9cm]{gmlp-overview.pdf}
\caption{Overview of the gMLP architecture. Splitting $Z$ into two independent parts during spatial gating, one for the multiplicative bypass and one for gating, is also proposed for further optimization.}
\end{center}
\label{fig:gmlp-overview}
\end{figure}
gMLP has been proposed by Google in May 2021 as an alternative to Transformers for NLP and vision tasks, having up to 66\% fewer trainable parameters. In our study we will replace the pure MLP block in the TabTransformer with gMLP and test how well it can model tabular data and whether it is able to extract any additional information.
\section{Related work}
In recent years a lot of experiments have been conducted to test the applicability of standard MLPs for tabular modeling \cite{de2015deep}. Multiple Transformer-based methods have also been used to fit tabular features \cite{li2020interpretable}, \cite{sun2019deepenfm}. For example AutoInt \cite{li2020interpretable} proposes a multi-head self-attentive neural network to explicitly model the feature interactions in a low-dimensional space.
An extensive performance comparison of the most popular tabular models along with their advantages and disadvantages can be found in a paper from August 2021 \cite{fiedler2021simple}. It also features enhancements to AutoInt and MLPs such as the use of element-wise linear transformations (gating) followed by LeakyReLU activation. The intuition behind the gates implemented there is similar to the gMLP block in our architecture.
Another recently published paper \cite{kadra2021well} from June 2021, initially named "Regularization is all you Need", suggests that implementing various MLP regularization strategies in combination with some of the already described techniques has a potential for further performance boosts in deep networks for tabular problems.
\section{Model}
\label{sec:model}
\begin{figure}[hpt]
\begin{center}
\includegraphics[width=10cm]{GatedTabTransformer-architecture.pdf}
\end{center}
\caption{Architecture of the proposed GatedTabTransformer. $N$ Transformer blocks can be stacked on one another. The same is true for $L$ gMLP layers.}
\label{fig:model}
\end{figure}
As described in section \ref{sec:tabtransformer}, column embeddings are generated from categorical data features. If continuous values are present in the dataset they are passed through a normalization layer.
The categorical embeddings are then processed by a \textit{Transformer block}. In our case a \textit{Transformer block} represents the encoder part of a typical Transformer \cite{vaswani2017attention}. It has two sub-layers - a multi-head self-attention mechanism, and a simple, position-wise fully connected feed-forward network.
As a final layer we have placed a slight modification of the original gMLP, called \textit{gMLP\_Classification} in our code. It is adapted to output classification logits and works best for optimization of cross entropy or binary cross entropy loss\footnote{https://pytorch.org/docs/stable/generated/torch.nn.BCELoss.html}.
By stacking multiple Transformer blocks and gMLP structures we were able to outperform some state-of-the-art deep models. More details on how well this solution performs can be found in section \ref{sec:results}.
Model implementation\footnote{Model implementation, experiments setup and dataset characteristics are publicly available on \projectSource.} is done with the PyTorch \cite{NEURIPS2019_9015} machine learning framework in a CUDA environment \cite{cuda}.
\section{Experiments}
To test the proposed architecture multiple experiments were conducted. Their main objective was to compare the standard TabTransformer's performance to the proposed model.
\subsection{Data}
We made use of three main datasets for experimentation - namely \textit{blastchar}, \textit{1995\_income} from Kaggle\footnote{\url{https://kaggle.com/}}, and \textit{bank\_marketing} from UCI repository\footnote{\url{http://archive.ics.uci.edu/ml/index.php}}. Their size varies between 7K and 45K samples with 14 to 19 features. In all datasets the label values are binary, thus binary classification should be performed. These three sets are all also mentioned in the original TabTransformer paper \cite{Huang2020TabTransformerTD}. Detailed data characteristics can be found in Appendix \ref{appendix:data}.
Dataset splitting follows the 65/15/20\% pattern for train/validation/test. The validation split is used to pick the best performing model state during training and the test split is only used to determine the final scores.
During our research the \textit{pandas} \cite{mckinney-proc-scipy-2010} and \textit{seaborn} \cite{Waskom2021} packages were used for data analysis, manipulation and visualisation.
\subsection{Hyper parameter optimization}
To generate our results (section \ref{sec:results}) we set up a simplified pipeline without the optional embedding pre-training or self-supervised learning steps described in \cite{Huang2020TabTransformerTD}. As a first step we recreated the implementation of the original TabTransformer and fitted it to the mentioned datasets with manually selected hyper parameters. Secondly, a tuning environment was created to find the best performing set of hyper parameters (initial learning rate, hidden layers count, hidden dimensions, dropouts, etc.) for each dataset.
Then we upgraded the TabTransformer as described in section \ref{sec:model} and repeated the HPO process, finding the best set of parameters for the new model. More details on what parameter values were tested can be found in Appendix \ref{appendix:hpo}. To avoid overfitting and to speed up training, early stopping was implemented.
To train and tune our models efficiently we used the Ray Tune \cite{liaw2018tune} Python library.
\subsubsection{Learning rate and optimization objectives}
A key parameter when training an ML model is the learning rate\footnote{\url{https://en.wikipedia.org/wiki/Learning_rate}}. In our experiments a learning rate scheduler\footnote{\url{https://bit.ly/3lt15km}} was implemented to decay the initial rate $\alpha$ by $\gamma$ every step size $n$ and help achieve quicker convergence. Values for $\alpha$, $\gamma$ and $n$ together with epoch count patience $p$ for early stopping were tuned with a grid search strategy. The process was executed separately for the gated and baseline TabTransformers.
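In PyTorch this decay schedule can be expressed directly with \texttt{StepLR} (a sketch using values from the grids in Appendix \ref{appendix:hpo}, not our exact training loop):
\begin{verbatim}
import torch

model = torch.nn.Linear(10, 1)                      # stand-in model
opt = torch.optim.SGD(model.parameters(), lr=0.05)  # initial rate alpha
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=10, gamma=0.1)

for epoch in range(30):
    # ... one training epoch runs here ...
    opt.step()
    sched.step()  # lr: 0.05 -> 0.005 after epoch 10 -> 0.0005 after 20
\end{verbatim}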
\subsubsection{Hidden layers and dimensions}
Other important hyper parameters are the count and dimensions of the hidden layers in either the MLP or gMLP and the number of heads for the \textit{Multi-Head Attention}. All of these were also tuned with Ray. Our findings suggest that as the baseline model's number of MLP neurons increases, its performance peaks at a certain value and slowly decreases from that point onward, whereas the gMLP continues to increase its performance in small steps for much longer. For example, if the two models perform equivalently well with a hidden dimension of 32, increasing it to 128 is more likely to improve the performance of the gMLP TabTransformer than that of the baseline.
\subsubsection{Neuron Activation}
Yet another aspect of optimization is to choose an activation function for the multilayer perceptron neurons \cite{xu2015empirical}. During tuning we tested ReLU, GELU, SELU and LeakyReLU\footnote{\url{https://paperswithcode.com/method/leaky-relu}} with multiple options for its negative slope (Appendix \ref{appendix:hpo}).
\begin{equation}
LeakyReLU(x)=
\begin{dcases}
x,& \text{if } x\geq 0\\
negative\_slope * x, & \text{otherwise}
\end{dcases}
\end{equation}
For both models, LeakyReLU with slopes from 0.01 up to 0.05 and the standard ReLU achieved the highest results. For simplicity and consistency we proceeded with standard ReLU activation. More about the effect of LeakyReLU can be found in \cite{fiedler2021simple}.
\section{Results}
\label{sec:results}
\subsection{Performance evaluation metrics}
For consistency with previously conducted studies we utilized the area under the receiver operating characteristic curve (AUROC)\footnote{\url{https://en.wikipedia.org/wiki/Receiver_operating_characteristic}} as a performance evaluation technique \cite{bradley1997use}. The reported results are generated by training and evaluating a model configuration with the same parameters but with different seeds or randomized order of data samples multiple times (usually 5, 25 or 50), computing the mean AUROC and then comparing it to the mean scores of the other model configurations. For the TabNet model comparison, expected gains were manually calculated based on the results reported in \cite{Huang2020TabTransformerTD} and \cite{fiedler2021simple}.
To estimate and visualise the performance of our models we used the \textit{scikit-learn} \cite{scikit-learn} and \textit{matplotlib} \cite{Hunter:2007} packages for Python.
\subsection{Performance comparisons}
As a result of our proposals the GatedTabTransformer shows between 0.5\% and 1.1\% performance increase in terms of mean AUROC (figure \ref{fig:auc-results}) compared to the baseline TabTransformer and 1\% to 2\% increase compared to MLPs (table \ref{tab:results-gain}).
\begin{table}[htp]
\caption{
Performance gain in mean percent AUROC compared to baseline models.
}
\label{tab:results-gain}
\centering
\begin{tabular}{lrrr}
\toprule
& Gain over & Gain over & Gain over \\
Dataset & MLP & TabTransformer & TabNet \\
\midrule
bank\_marketing & 1.3 & 1.0 & 3.1 \\
1995\_income & 0.9 & 0.7 & 2.5 \\
blastchar & 0.4 & 0.5 & 1.6 \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[hpt]
\begin{center}
\includegraphics[width=12.5cm]{roc-comparison.png}
\end{center}
\caption{AUROC gain charts for the 3 datasets - comparison between baseline TabTransformer (\textcolor[rgb]{0.12,0.53,0.7}{blue}) and the proposed GatedTabTransformer (\textcolor[rgb]{1,0.4,0}{orange}).}
\label{fig:auc-results}
\end{figure}
\section{Future work}
A fruitful direction for further research would be to investigate the effects of combining our current solutions with some of the recent breakthroughs in regularization, feature engineering, etc.
As a long term direction we will be working on a custom AutoML\footnote{\url{https://en.wikipedia.org/wiki/Automated_machine_learning}} pipeline focused on tabular data modeling. It will require the implementation of neural architecture search\footnote{\url{https://en.wikipedia.org/wiki/Neural_architecture_search}} strategies such as reinforcement learning or evolutionary algorithms \cite{elsken2019neural} to test and discover new model designs. As a reference, a paper titled "The Evolved Transformer" \cite{so2019evolved} by Google from 2019 should be mentioned. It describes how NAS can be used with a Transformer \cite{vaswani2017attention} as an initial seed to discover more sophisticated architectures. Analogously, TabNet, TabTransformer and other tabular models could be used as seeds in a potential NAS process. Additional embedding techniques and manipulated representations of the data (e.g. TaBERT \cite{yin2020tabert}, TabFormer \cite{padhi2021tabular}) can be incorporated as pre-processing steps.
\section{Conclusion}
In the presented study we have explored modifications to the original TabTransformer \cite{Huang2020TabTransformerTD} architecture which beneficially impact binary classification tasks on three separate datasets, yielding more than 1\% area under receiver operating characteristic curve gains. Linear projections inspired by gated multilayer perceptrons \cite{Liu2021PayAT} have been proposed for the TabTransformer's final MLP block, where its final logits are generated. We have also conducted multiple hyper parameter optimization iterations during training to test the impact of different activation functions, learning rates, hidden dimensions and layer structures. These findings have significant importance when working with tabular predictions; we have open-sourced our model and applied it in practical use cases to further showcase its value.
\section{Acknowledgements}
This paper is intended to be a part of the 22nd Student Conference of the High School Student Institute of Mathematics and Informatics - BAS.
We would like to thank Maria Vasileva and Petar Iliev for their helpful feedback.
\nocite{*}
\bibliographystyle{unsrtnat}
\bibliography{references}
\pagebreak
\appendix
\small
\lhead{Appendix}
\section{HPO parameters}
\label{appendix:hpo}
\begin{itemize}
\item Learning rates: 0.05, 0.01, 0.005, 0.001, 0.0005
\item Step sizes for learning rate scheduler: 5, 10, 15 epochs
\item Learning rate scheduler slope (gamma): 0.1, 0.2, 0.5
\item Dropout: 0.0, 0.1, 0.2, 0.5
\item TabTransformer number of heads: 4, 8, 12, 16
\item MLP/gMLP number of hidden layers (depth): 2, 4, 6, 8
\item MLP/gMLP dimensions: 8, 16, 32, 64, 128, 256
\end{itemize}
\section{Data description}
\label{appendix:data}
\begin{table}[htp]
\caption{
Dataset sizes and other details.
}
\label{tab:data-details}
\centering
\begin{tabular}{lrrrrrrr}
\toprule
& & Total & Categorical & Continuous & Positive \\
Dataset & Datapoints & Features & Features & Features & Class \% \\
\midrule
bank\_marketing & $45,211$ & 16 & 11 & 5 & 11.7 \\
1995\_income & $32,561$ & 14 & 9 & 5 & 24.1 \\
blastchar & $7,043$ & 19 & 17 & 2 & 26.5 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[htp]
\caption{Dataset sources. From \cite{Huang2020TabTransformerTD}.}
\label{tab:dataset-urls}
\centering
\scalebox{0.95}{
\begin{tabular}{ll}
\toprule
Dataset Name & URL \\
\midrule
1995\_income & \url{https://www.kaggle.com/lodetomasi1995/income-classification} \\
bank\_marketing & \url{https://archive.ics.uci.edu/ml/datasets/bank+marketing} \\
blastchar & \url{https://www.kaggle.com/blastchar/telco-customer-churn} \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{figure}[hpt]
\begin{center}
\includegraphics[width=10.5cm]{income-correlation.png}
\end{center}
\caption{Correlation matrix for the \textit{1995\_income} dataset.}
\label{fig:correlation-income}
\end{figure}
\begin{figure}[hpt]
\begin{center}
\includegraphics[width=11.5cm]{bank-correlation.png}
\end{center}
\caption{Correlation matrix for the \textit{bank\_marketing} dataset.}
\label{fig:correlation-bank}
\end{figure}
\begin{figure}[hpt]
\begin{center}
\includegraphics[width=12.5cm]{blastchar-correlation.png}
\end{center}
\caption{Correlation matrix for the \textit{blastchar} dataset.}
\label{fig:correlation-blastchar}
\end{figure}
\end{document}
|
Require Import ExtLib.Structures.Functor.
Require Import ExtLib.Structures.Applicative.
Require Import ExtLib.Structures.Monad.
Require Import Program.Tactics.
From Coq Require Import
Program
Setoid
Morphisms
Relations.
Require Import ITree.Basics.Basics.
Require Import HeterogeneousRelations EnTreeDefinition.
Require Import Eq.Eqit.
From Paco Require Import paco.
(** Typeclasses designed to implement a subevent relation, relied on by the trigger function *)
Class ReSum (E1 E2 : Type) : Type := resum : E1 -> E2.
Class ReSumRet E1 E2 `{EncodingType E1} `{EncodingType E2} `{ReSum E1 E2} : Type :=
resum_ret : forall (e : E1), encodes (resum e) -> encodes e.
#[global] Instance ReSum_inl E1 E2 : ReSum E1 (E1 + E2) := inl.
#[global] Instance ReSum_inr E1 E2 : ReSum E2 (E1 + E2) := inr.
#[global] Instance ReSumRet_inl E1 E2 `{EncodingType E1} `{EncodingType E2} : ReSumRet E1 (E1 + E2) :=
fun _ e => e.
#[global] Instance ReSumRet_inr E1 E2 `{EncodingType E1} `{EncodingType E2} : ReSumRet E2 (E1 + E2) :=
fun _ e => e.
(** Injects an event into an EnTree, relying on the subevent typeclasses ReSum and ReSumRet *)
Definition trigger {E1 E2 : Type} `{ReSumRet E1 E2}
(e : E1) : entree E2 (encodes e) :=
Vis (resum e) (fun x : encodes (resum e) => Ret (resum_ret e x)).
CoFixpoint resumEntree' {E1 E2 : Type} `{ReSumRet E1 E2}
{A} (ot : entree' E1 A) : entree E2 A :=
match ot with
| RetF r => Ret r
| TauF t => Tau (resumEntree' (observe t))
| VisF e k => Vis (resum e) (fun x => resumEntree' (observe (k (resum_ret e x))))
end.
(* Use resum and resum_ret to map the events in an entree to another type *)
Definition resumEntree {E1 E2 : Type} `{ReSumRet E1 E2}
{A} (t : entree E1 A) : entree E2 A :=
resumEntree' (observe t).
Lemma resumEntree_Ret {E1 E2 : Type} `{ReSumRet E1 E2}
{R} (r : R) :
resumEntree (Ret r) ≅ Ret r.
Proof. pstep. constructor. auto. Qed.
Lemma resumEntree_Tau {E1 E2 : Type} `{ReSumRet E1 E2}
{R} (t : entree E1 R) :
resumEntree (Tau t) ≅ Tau (resumEntree t).
Proof.
pstep. red. cbn. constructor. left. apply Reflexive_eqit. auto.
Qed.
Lemma resumEntree_Vis {E1 E2 : Type} `{ReSumRet E1 E2}
{R} (e : E1) (k : encodes e -> entree E1 R) :
resumEntree (Vis e k) ≅ Vis (resum e) (fun x => resumEntree (k (resum_ret e x))).
Proof.
pstep. red. cbn. constructor. left. apply Reflexive_eqit. auto.
Qed.
Lemma resumEntree_proper E1 E2 R1 R2 b1 b2 (RR : R1 -> R2 -> Prop) `{ReSumRet E1 E2} :
forall (t1 : entree E1 R1) (t2 : entree E1 R2),
eqit RR b1 b2 t1 t2 -> eqit RR b1 b2 (resumEntree t1) (resumEntree t2).
Proof.
ginit. gcofix CIH.
intros. punfold H4. red in H4.
unfold resumEntree. hinduction H4 before r; intros.
- setoid_rewrite resumEntree_Ret. gstep. constructor. auto.
- pclearbot. setoid_rewrite resumEntree_Tau. gstep. constructor.
gfinal. eauto.
- pclearbot. setoid_rewrite resumEntree_Vis. gstep. constructor.
gfinal. intros. left. eapply CIH. apply REL.
- setoid_rewrite resumEntree_Tau. inversion CHECK. subst.
rewrite tau_euttge. eauto.
- setoid_rewrite resumEntree_Tau. inversion CHECK. subst.
rewrite tau_euttge. eauto.
Qed.
#[global] Instance resumEntree_proper_inst E1 E2 R `{ReSumRet E1 E2} :
Proper (@eq_itree E1 _ R R eq ==> @eq_itree E2 _ R R eq) resumEntree.
Proof.
repeat intro. apply resumEntree_proper. auto.
Qed.
Lemma resumEntree_bind (E1 E2 : Type) `{ReSumRet E1 E2}
(R S : Type) (t : entree E1 R) (k : R -> entree E1 S) :
resumEntree (EnTree.bind t k) ≅
EnTree.bind (resumEntree t) (fun x => resumEntree (k x)).
Proof.
revert t. ginit. gcofix CIH.
intros t. unfold EnTree.bind at 1, EnTree.subst at 1.
unfold resumEntree at 2. destruct (observe t).
- setoid_rewrite resumEntree_Ret. setoid_rewrite bind_ret_l.
apply Reflexive_eqit_gen. auto.
- setoid_rewrite resumEntree_Tau. setoid_rewrite bind_tau.
gstep. constructor. gfinal. eauto.
- setoid_rewrite resumEntree_Vis. setoid_rewrite bind_vis.
gstep. constructor. gfinal. eauto.
Qed.
|
# 1. Electrical capacitance and energy of the mitochondrial inner membrane
A capacitor is an electrical circuit element that stores electrical energy in an electric field (Wikipedia REF). The mitochondrial inner membrane acts as an electrical capacitor, storing energy in an electrostatic potential difference between the milieux on the two sides of the membrane.
Electrical capacitance is measured in units of charge per unit voltage, or in standard units, coulombs per volt (C/V), equivalently farads (1 F = 1 C/V).
## 1.1. What is the capacitance of mitochondrial inner membrane? Make a reasonable assumption about the size of a mitochondrion and assume that the inner membrane has 5 or 10 times more area than the outer membrane.
A typical value for the specific capacitance of a biological membrane is $\sim$1 $\mu$F cm$^{-2}$. A spherical mitochondrion of radius 1 $\mu$m would have a surface area of $4\pi r^2$, or approximately 12.5 $\mu$m$^2$. If the inner membrane has 10 times more area than the outer, then the area of the inner membrane is 125 $\mu$m$^2$. So, we can approximate the capacitance of this mitochondrion's inner membrane as
$$
c_{mito} = \left( 1 \frac{\mu{\rm F}} {{\rm cm}^2} \right)
\left( 125 \, \mu{\rm m}^2 \right)
\left( \frac{ 1 \, {\rm cm}^2} {10^8 \mu{\rm m}^2} \right) =
1.25 \times 10^{-6} \mu{\rm F} .
$$
## 1.2. Express inner mitochondrial membrane capacitance in units of: (a.) farads per liter of mitochondria; (b.) moles per millivolt per unit liter of mitochondria.
Note that for a spherical mitochondrion of radius 1 $\mu$m, we have a volume of
\begin{equation}
\Bigg(\frac{4}{3}\pi (1\mu\rm{m})^3\Bigg) \Bigg(\frac{1 \ \rm{cm}}{10^4 \ \mu\rm{m}}\Bigg)^3 \Bigg(\frac{1 \ \rm{mL}}{1 \ \rm{cm}^3} \Bigg) \Bigg(\frac{1 \ \rm{L}}{10^3 \ \rm{mL}}\Bigg) = \frac{4}{3}\pi \times 10^{-15} \ \rm{L}.
\end{equation}
Dividing the capacitance by the volume gives
\begin{equation}
\Bigg(\frac{1.25 \times 10^{-6} \ \mu\rm{F}}{\frac{4}{3}\pi \times 10^{-15} \ \rm{L}} \Bigg) \Bigg( \frac{1 \ \rm{F}}{10^6 \ \mu\rm{F}} \Bigg) = 2.98 \times 10^2 \frac{\rm{F}}{\rm{L}} \approx 300 \frac{\rm{F}}{\rm{L}} = 300 \frac{\rm{C}}{\rm{V}\cdot \rm{L}} = 0.3 \frac{\rm{C}}{\rm{mV} \cdot \rm{L}}.
\end{equation}
Converting units gives
\begin{equation}
300 \frac{\rm{F}}{\rm{L}} \Bigg(\frac{1 \ \frac{\rm{C}}{\rm{V}}}{1 \ \rm{F}} \Bigg) \Bigg( \frac{1 \ \rm{V}}{10^3 \ \rm{mV}} \Bigg) \Bigg( \frac{1 \ \rm{mol}}{96485 \ \rm{C}} \Bigg) = 3.11 \times 10^{-6} \frac{\rm{mol}}{\rm{mV} \cdot \rm{L}}
\end{equation}
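As a numerical sanity check of these conversions (a short sketch; the 1 $\mu$F cm$^{-2}$ specific capacitance, 1 $\mu$m radius, and 10$\times$ inner-to-outer membrane area ratio are the assumptions from 1.1):

```python
import math

c_per_area = 1e-6                 # F/cm^2, i.e. ~1 uF/cm^2
r_cm = 1e-4                       # 1 um radius expressed in cm
area_cm2 = 10 * 4 * math.pi * r_cm**2        # inner membrane ~10x sphere
volume_L = (4/3) * math.pi * r_cm**3 * 1e-3  # cm^3 -> L

C = c_per_area * area_cm2                    # farads per mitochondrion
print(C / volume_L)                # ~300 F per liter of mitochondria
print(C / volume_L / 1e3 / 96485)  # ~3.1e-6 mol/(mV*L), dividing by F
```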
## 1.3. If the electrostatic potential across the inner mitochondrial membrane is 180 mV, how much electrical energy per unit volume is stored in mitochondria? How much electrical energy is stored in mitochondria per unit volume of myocardium? Express your answer in joules per liter.
The potential $U$ stored on a capacitor is given by
\begin{equation}
U = \frac{1}{2} c \Delta \Psi^2
\end{equation}
where $c$ is the capacitance and $\Delta\Psi$ is the voltage gradient across the membrane. Then the energy per unit volume of mitochondria is
\begin{equation}
U = \frac{1}{2} \Bigg(0.3 \ \frac{\rm{C}}{\rm{mV} \cdot \rm{L}} \Bigg) (180 \ \rm{mV})^2 = 4.86 \times 10^3 \ \frac{\rm{C} \cdot \rm{mV}}{\rm{L}} = 4.86 \ \frac{\rm{C} \cdot \rm{V}}{\rm{L}} = 4.86 \frac{\rm{J}}{\rm{L}} \approx 5 \ \frac{\rm{J}}{\rm{L}}.
\end{equation}
Since approximately 1/3 of the volume of myocardium is mitochondria, the energy stored in the myocardium is 5/3 J L$^{-1}$.
## 1.4. Approximately how much electrical energy is stored in mitochondria in the whole human heart? How does that compare to the amount of energy supplied by an AA battery? How does it compare to the amount of mechanical work the heart does per unit time?
An average human heart weighs approximately 250 g. Assuming a tissue density of approximately 1 g mL$^{-1}$, that corresponds to
\begin{equation}
250 \ \rm{g} \Bigg( \frac{1 \ \rm{L}}{10^3 \ \rm{g}} \Bigg) = 0.25\ \rm{L}.
\end{equation}
Then,
\begin{equation}
\Bigg(\frac{5}{3} \ \frac{\rm{J}}{\rm{L}} \Bigg) \Bigg(\frac{1}{4} \ \rm{L} \Bigg) = \frac{5}{12} \ \rm{J}
\end{equation}
of energy is stored in the inner membrane of the mitochondria.
A typical AA battery stores about 0.0039 kWh = 14 kJ, roughly 30,000 times the 5/12 J stored across the mitochondrial inner membranes of the heart. The LV typically does about 1 W of mechanical work at baseline (1 W = 1 J s$^{-1}$), so the stored electrical energy corresponds to less than half a second of baseline cardiac work.
# 2. Converting electrical potential to the ATP hydrolysis chemical potential
The electrostatic energy potential across the mitochondrial inner membrane is used to drive the synthesis of ATP in the final step of oxidative ATP synthesis. The mammalian mitochondrial F1F0-ATPase synthesizes ATP in the mitochondrial matrix from ADP and inorganic phosphate, coupled to the translocation of protons (H+ ions) from outside to inside of the matrix. The chemical reaction stoichiometry can be expressed:
\begin{equation}\label{eq:ATPase1}
{\rm MgADP}^{1-} + {\rm HPO}_4^{2-} + ({\rm H}^+)_{\rm inside} \leftrightharpoons
{\rm MgATP}^{2-} + {\rm H}_2{\rm O},
\end{equation}
where the term $({\rm H}^+)_{\rm inside}$ indicates that a hydrogen ion from inside the matrix is covalently incorporated into the synthesized ATP. The species MgADP$^{1-}$ and MgATP$^{2-}$ are the magnesium-bound species of ADP and ATP. This chemical reaction is coupled to the transport of $n_A = 8/3$ protons across the inner membrane:
\begin{equation}\label{eq:ATPase2}
n_A \, ({\rm H}^+)_{\rm outside} \leftrightharpoons n_A \, ({\rm H}^+)_{\rm inside}.
\end{equation}
## 2.1. Given a free magnesium concentration [Mg$^{2+}$] and hydrogen ion activity [H$^+$] = 10$^{\rm -pH}$, how can you compute the concentrations of MgADP$^{1-}$, MgATP$^{2-}$, and HPO$_4^{2-}$ in terms of the total concentrations of the reactants [$\Sigma$ADP], [$\Sigma$ATP], and [$\Sigma$Pi]? (You will need to account for binding of biochemical species to [Mg$^{2+}$] and [H$^+$].)
To determine the total measurable ATP, we must consider each cation species and their dissociation. The dissociation constants of ATP from each cation species are
\begin{align}
\rm{MgATP}^{2-} \rightleftharpoons \rm{Mg}^{2+} + \rm{ATP}^{4-} \quad &\Rightarrow \quad K_{\rm{MgATP}} = \frac{[\rm{Mg}^{2+}] [\rm{ATP}^{4-}]}{[\rm{MgATP}^{2-}]} \\
\rm{HATP}^{3-} \rightleftharpoons \rm{H}^{+} + \rm{ATP}^{4-} \quad &\Rightarrow \quad K_{\rm{HATP}} = \frac{[\rm{H}^{+}] [\rm{ATP}^{4-}]}{[\rm{HATP}^{3-}]} \\
\rm{KATP}^{3-} \rightleftharpoons \rm{K}^{+} + \rm{ATP}^{4-} \quad &\Rightarrow \quad K_{\rm{KATP}} = \frac{[\rm{K}^{+}] [\rm{ATP}^{4-}]}{[\rm{KATP}^{3-}]}.
\end{align}
Note: we disregard the doubly bound cation species in these calculations. The total measurable ATP is
\begin{align}
[\Sigma \rm{ATP}] &= [\rm{ATP}^{4-}] + [\rm{MgATP}^{2-}] + [\rm{HATP}^{3-}] + [\rm{KATP}^{3-}] \\
&= [\rm{ATP}^{4-}] + \frac{[\rm{Mg}^{2+}] [\rm{ATP}^{4-}]}{K_{\rm{MgATP}}} + \frac{[\rm{H}^{+}][\rm{ATP}^{4-}]}{K_{\rm{HATP}}} + \frac{[\rm{K}^{+}] [\rm{ATP}^{4-}]}{K_{\rm{KATP}}} \\
&= [\rm{ATP}^{4-}] \Bigg( 1 + \frac{[\rm{Mg}^{2+}]}{K_{\rm{MgATP}}} + \frac{[\rm{H}^{+}]}{K_{\rm{HATP}}} + \frac{[\rm{K}^{+}]}{K_{\rm{KATP}}} \Bigg) \\
&= [\rm{ATP}^{4-}] P_{\rm{ATP}}
\end{align}
for binding polynomial
\begin{equation}
P_{\rm{ATP}} = 1 + \frac{[\rm{Mg}^{2+}]}{K_{\rm{MgATP}}} + \frac{[\rm{H}^{+}]}{K_{\rm{HATP}}} + \frac{[\rm{K}^{+}]}{K_{\rm{KATP}}}
\end{equation}
that depends solely on the concentrations of the cation species and dissociation constants. Similarly, binding polynomials can be found for the other species
\begin{align}
P_{\rm{ADP}} &= 1 + \frac{[\rm{Mg}^{2+}]}{K_{\rm{MgADP}}} + \frac{[\rm{H}^{+}]}{K_{\rm{HADP}}} + \frac{[\rm{K}^{+}]}{K_{\rm{KADP}}} \quad \rm{and} \\
P_{\rm{Pi}} &= 1 + \frac{[\rm{Mg}^{2+}]}{K_{\rm{MgPi}}} + \frac{[\rm{H}^{+}]}{K_{\rm{HPi}}} + \frac{[\rm{K}^{+}]}{K_{\rm{KPi}}}.
\end{align}
The concentrations for $[\rm{MgATP}^{2-}]$ and $[\rm{MgADP}^{-}]$ are
\begin{align}
[\rm{MgATP}^{2-}] &= \frac{[\rm{Mg}^{2+}] [\rm{ATP}^{4-}]}{K_{\rm{MgATP}}} = \frac{[\rm{Mg}^{2+}]}{K_{\rm{MgATP}}} \frac{[\Sigma \rm{ATP}]}{P_{\rm{ATP}}} \quad \rm{and} \\
[\rm{MgADP}^{-}] &= \frac{[\rm{Mg}^{2+}] [\rm{ADP}^{3-}]}{K_{\rm{MgADP}}} = \frac{[\rm{Mg}^{2+}]}{K_{\rm{MgADP}}} \frac{[\Sigma \rm{ADP}]}{P_{\rm{ADP}}}.
\end{align}
For inorganic phosphate, note that
\begin{equation}
[\Sigma \rm{Pi}] = [\rm{HPO}_4^{2-}] P_{\rm{Pi}} \quad \Rightarrow \quad [\rm{HPO}_4^{2-}] = \frac{[\Sigma \rm{Pi}]}{P_{\rm{Pi}}}
\end{equation}
The following dissociation constants are used:
| Reactant (L) | $K_{H-L}$ | $K_{K-L}$ | $K_{Mg-L}$ |
| ------------ | ------------------------ | ----------------------- | ------------------------- |
| ATP | 2.757$\times 10^{-7}$ M | 9.809$\times 10^{-2}$ M | 8.430$\times 10^{-5}$ M |
| ADP | 4.106$\times 10^{-7}$ M | 1.319$\times 10^{-1}$ M | 7.149$\times 10^{-4}$ M |
| Pi           | 2.308$\times 10^{-7}$ M  | 3.803$\times 10^{-1}$ M | 2.815$\times 10^{-2}$ M   |
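The calculation above can be written compactly in code. The following is a minimal sketch using the dissociation constants from the table; the cation concentrations ([Mg$^{2+}$] = 1 mM, [K$^+$] = 100 mM, pH 7.2) are illustrative assumptions:

```python
# Species concentrations from totals via binding polynomials (2.1).
def binding_poly(Mg, H, K, K_Mg, K_H, K_K):
    return 1 + Mg / K_Mg + H / K_H + K / K_K

Mg, K, H = 1e-3, 100e-3, 10 ** (-7.2)   # M; illustrative cation levels
K_MgATP, K_HATP, K_KATP = 8.430e-5, 2.757e-7, 9.809e-2
K_MgADP, K_HADP, K_KADP = 7.149e-4, 4.106e-7, 1.319e-1
K_MgPi, K_HPi, K_KPi = 2.815e-2, 2.308e-7, 3.803e-1

P_ATP = binding_poly(Mg, H, K, K_MgATP, K_HATP, K_KATP)
P_ADP = binding_poly(Mg, H, K, K_MgADP, K_HADP, K_KADP)
P_Pi = binding_poly(Mg, H, K, K_MgPi, K_HPi, K_KPi)

SigATP, SigADP, SigPi = 0.5e-3, 9.5e-3, 1e-3   # M, totals
MgATP = (Mg / K_MgATP) * SigATP / P_ATP        # [MgATP^2-]
MgADP = (Mg / K_MgADP) * SigADP / P_ADP        # [MgADP^-]
HPO4 = SigPi / P_Pi                            # [HPO4^2-]
print(MgATP, MgADP, HPO4)
```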
## 2.2. Derive an expression for the Gibbs free energy change associated with reaction (2.1) in terms of the reference $\Delta G_r^0$, the concentrations of biochemical reactants, and the cation concentrations [Mg$^{2+}$] and [H$^+$]. What is the free energy of ATP hydrolysis in the mitochondrial matrix? Assume that pH = 7.2 and [$\Sigma$ADP] = 9.5 mM, [$\Sigma$ATP] = 0.5 mM, and [$\Sigma$Pi] = 1 mM.
To determine the Gibbs energy of the ATP synthesis reaction, consider the chemical reference reaction for ATP synthesis
\begin{equation}
\rm{ADP}^{3-} + \rm{HPO}_4^{2-} + \rm{H}^{+} \rightleftharpoons \rm{ATP}^{4-} + \rm{H}_2\rm{O}.
\end{equation}
Then, the equilibrium constant is
\begin{equation}
K_{eq} = \Bigg( \frac{ [\rm{ATP}^{4-}] }{ [\rm{ADP}^{3-}] [\rm{HPO}_4^{2-}] [\rm{H}^{+}]} \Bigg)_{eq}.
\end{equation}
However, it is difficult to measure the concentrations of these individual species directly. Instead, we reformulate $K_{eq}$ in terms of the total measurable concentrations from the previous problem, that is,
\begin{align}
K_{eq} &= \Bigg( \frac{ [\rm{ATP}^{4-}] }{ [\rm{ADP}^{3-}] [\rm{HPO}_4^{2-}] [\rm{H}^{+}]} \Bigg)_{eq} \\
&= \Bigg( \frac{ \frac{[\Sigma \rm{ATP}]}{P_{\rm{ATP}}} }{ \frac{[\Sigma \rm{ADP}]}{P_{\rm{ADP}}} \frac{ [\Sigma \rm{Pi}]}{P_{\rm{Pi}}} [\rm{H}^{+}]} \Bigg)_{eq} \\
&= \Bigg( \frac{ [\Sigma \rm{ATP}] }{ [\Sigma \rm{ADP}] [\Sigma \rm{Pi}] } \Bigg)_{eq} \frac{ P_{\rm{ADP}} P_{\rm{Pi}} }{ P_{\rm{ATP}} } \frac{ 1 }{ [\rm{H}^{+}] } \\
&= K_{eq}' \frac{ P_{\rm{ADP}} P_{\rm{Pi}} }{ P_{\rm{ATP}} } \frac{ 1 }{ [\rm{H}^{+}] },
\end{align}
where $K_{eq}'$ is the *apparent* equilibrium constant at a given pH and set of cation concentrations. Solving for $K_{eq}'$, we obtain
\begin{equation}
K_{eq}' = K_{eq} [\rm{H}^{+}] \frac{ P_{\rm{ATP}} }{ P_{\rm{ADP}} P_{\rm{Pi}} }
\end{equation}
From literature, $K_{eq} \approx 6$ at $37^\circ$ C. To calculate the Gibbs energy $\Delta G$ under physiological conditions, we have
\begin{equation}
\Delta G_{\rm{ATP}} = \Delta G'^{0} + R T \ln Q'_r
\end{equation}
for gas constant $R = 8.314$ J K$^{-1}$ mol$^{-1}$, temperature $T = 310$ K, and apparent standard Gibbs energy $\Delta G'^0 = - R T \ln K_{eq}' \approx 3.6 \times 10^4$ J mol$^{-1}$ (positive, since $K_{eq}' < 1$). The *apparent* reaction quotient $Q'_r$ when the system is not at equilibrium is
\begin{equation}
Q'_r = \frac{ [\Sigma \rm{ATP}] }{ [\Sigma \rm{ADP}] [\Sigma \rm{Pi}] } = \frac{ 0.5 \times 10^{-3} }{ (9.5 \times 10^{-3} )(10^{-3})} \approx 53.
\end{equation}
A calculation analogous to the Python code below, with the $K_{\mathrm{MgATP}}/K_{\mathrm{MgADP}}$ factor omitted, gives $\Delta G = 46.17$ kJ mol$^{-1}$. Since the Gibbs energy is positive, ATP synthesis is not spontaneous at these concentrations; it must be driven by the proton gradient.
**OR**
To determine the Gibbs energy of the ATP synthesis reaction, consider the chemical reference reaction in terms of the magnesium-bound species
\begin{equation}
\rm{MgADP}^{-} + \rm{HPO}_4^{2-} + \rm{H}^{+} \rightleftharpoons \rm{MgATP}^{2-} + \rm{H}_2\rm{O}.
\end{equation}
Then, the equilibrium constant is
\begin{equation}
K_{eq} = \Bigg( \frac{ [\rm{MgATP}^{2-}] }{ [\rm{MgADP}^{-}] [\rm{HPO}_4^{2-}] [\rm{H}^{+}]} \Bigg)_{eq}.
\end{equation}
However, it is difficult to measure the concentrations of these individual species directly. Instead, we reformulate $K_{eq}$ in terms of the total measurable concentrations from the previous problem, that is,
\begin{align}
K_{eq} &= \Bigg( \frac{ [\rm{MgATP}^{2-}] }{ [\rm{MgADP}^{-}] [\rm{HPO}_4^{2-}] [\rm{H}^{+}]} \Bigg)_{eq} \\
&= \Bigg( \frac{ \frac{[\rm{Mg}^{2+}]}{K_{\mathrm{MgATP}}} \frac{[\Sigma \rm{ATP}]}{P_{\rm{ATP}}} }{ \frac{[\rm{Mg}^{2+}]}{K_{\mathrm{MgADP}}} \frac{[\Sigma \rm{ADP}]}{P_{\rm{ADP}}} \frac{ [\Sigma \rm{Pi}]}{P_{\rm{Pi}}} [\rm{H}^{+}]} \Bigg)_{eq} \\
&= \Bigg( \frac{ [\Sigma \rm{ATP}] }{ [\Sigma \rm{ADP}] [\Sigma \rm{Pi}] } \Bigg)_{eq} \frac{K_{\mathrm{MgADP}}}{K_{\mathrm{MgATP}}} \frac{ P_{\rm{ADP}} P_{\rm{Pi}} }{ P_{\rm{ATP}} } \frac{ 1 }{ [\rm{H}^{+}] } \\
&= K_{eq}' \frac{K_{\mathrm{MgADP}}}{K_{\mathrm{MgATP}}}\frac{ P_{\rm{ADP}} P_{\rm{Pi}} }{ P_{\rm{ATP}} } \frac{ 1 }{ [\rm{H}^{+}] },
\end{align}
where $K_{eq}'$ is the *apparent* equilibrium constant at a given pH and set of cation concentrations. Solving for $K_{eq}'$, we obtain
\begin{equation}
K_{eq}' = K_{eq} [\rm{H}^{+}] \frac{ P_{\rm{ATP}} }{ P_{\rm{ADP}} P_{\rm{Pi}} }\frac{K_{\mathrm{MgATP}}}{K_{\mathrm{MgADP}}}.
\end{equation}
From literature, $K_{eq} \approx 6$ at $37^\circ$ C. To calculate the Gibbs energy $\Delta G$ under physiological conditions, we have
\begin{equation}
\Delta G_{\rm{ATP}} = \Delta G'^{0} + R T \ln Q'_r
\end{equation}
for gas constant $R = 8.314$ J K$^{-1}$ mol$^{-1}$, temperature $T = 310$ K, and apparent standard Gibbs energy $\Delta G'^0 = - R T \ln K_{eq}' = 41069$ J mol$^{-1}$ (positive, since $K_{eq}' < 1$). The *apparent* reaction quotient $Q'_r$ when the system is not at equilibrium is
\begin{equation}
Q'_r = \frac{ [\Sigma \rm{ATP}] }{ [\Sigma \rm{ADP}] [\Sigma \rm{Pi}] } = \frac{ 0.5 \times 10^{-3} }{ (9.5 \times 10^{-3} )(10^{-3})} \approx 53.
\end{equation}
The Python code below determines that $\Delta G = 51.289$ kJ mol$^{-1}$. Since the Gibbs energy is positive, the synthesis reaction is not spontaneous on its own.
```python
# %%
import numpy

# Constants
R = 8.314        # J / K / mol
T = 273.15 + 37  # K

# Dissociation constants (M)
K_HATP = 2.757e-7
K_HADP = 4.106e-7
K_HPi = 2.308e-7
K_KATP = 9.809e-2
K_KADP = 1.319e-1
K_KPi = 3.803e-1
K_MgATP = 8.43e-5
K_MgADP = 7.149e-4
K_MgPi = 2.815e-2

# Equilibrium constant
K_eq = 6

# Concentrations (M)
Mg = 1e-3
H = 10**(-7.2)
SigATP = 0.5e-3
SigADP = 9.5e-3
SigPi = 1e-3
K = 100e-3

# Binding polynomials
P_ATP = 1 + Mg/K_MgATP + H/K_HATP + K/K_KATP
P_ADP = 1 + Mg/K_MgADP + H/K_HADP + K/K_KADP
P_Pi = 1 + Mg/K_MgPi + H/K_HPi + K/K_KPi

# Apparent equilibrium constant and Gibbs energies
K_apparent = K_eq * H * P_ATP / (P_ADP * P_Pi) * (K_MgATP / K_MgADP)
DeltaG0_apparent = -R * T * numpy.log(K_apparent)
print('DeltaG0_apparent:', DeltaG0_apparent)
Q_r = SigATP / (SigADP * SigPi)
DeltaG = DeltaG0_apparent + R * T * numpy.log(Q_r)
print("DeltaG:", DeltaG, "J/mol")
```
Output:
`DeltaG0_apparent: 41069.3311227448`
`DeltaG: 51289.087406669285 J/mol`
## 2.3. What is the free energy change of Equation (2.2) at $\Delta\Psi$ = 180 mV? How does the free energy change of Equation (2.1) compare to that of Equation (2.2)? How efficient is the transduction of electrical to chemical free energy in this step in ATP synthesis? (What is the ratio of energy stored in ATP to the total energy consumed?)
The Gibbs energy of translocating protons across the membrane $\Delta G_{\rm{H}^{+}}$ at $\Delta \Psi = 180$ mV is
\begin{equation}
\Delta G_{\rm{H}^{+}} = -F \Delta\Psi = - \Bigg(96485 \frac{\rm{C}}{\rm{mol}} \Bigg) (0.18 \ \rm{V}) \Bigg( \frac{1 \ \frac{\rm{J}}{\rm{C}}}{1 \ \rm{V}} \Bigg) \Bigg( \frac{1 \ \rm{kJ}}{10^3 \ \rm{J}} \Bigg) = -17.36 \ \frac{\rm{kJ}}{\rm{mol}}
\end{equation}
Hence, the total Gibbs energy $\Delta G$ is
\begin{equation}
\Delta G = \Delta G_{\rm{ATP}} + n_{\rm{H}^{+}}\Delta G_{\rm{H}^{+}} = 46.17 + \frac{8}{3} (-17.36) = -0.12 \ \rm{kJ} \ \rm{mol}^{-1}.
\end{equation}
The efficiency of ATP synthesis is the ratio $46.17/46.29 \times 100\% = 99.7\%$.
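As a quick check of this arithmetic, here is a sketch using the values quoted above (small rounding differences from the text are expected):

```python
# Check of 2.3: proton translocation energy, total Gibbs energy, efficiency.
F, nA = 96485.0, 8.0 / 3.0
dG_H = -F * 0.18 / 1e3          # kJ/mol per translocated proton, ~ -17.37
dG_total = 46.17 + nA * dG_H    # kJ/mol, ~ -0.1 (slightly exergonic)
efficiency = 46.17 / (-nA * dG_H)
print(dG_H, dG_total, efficiency)  # ~ -17.37, ~ -0.14, ~ 0.997
```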
## 2.4. Given the concentrations assumed in 2.2, what is the minimum value of $\Delta\Psi$ at which ATP can be synthesized in the mitochondrial matrix?
At equilibrium, $\Delta G = 0$, that is,
\begin{equation}
46171 \ \rm{J} \ \rm{mol}^{-1} = \Delta G_{\rm{ATP}} = - n_{\rm{H}^{+}} \Delta G_{\rm{H}^{+}} = -n_{\rm{H}^{+}} (- F \Delta \Psi) = \Bigg( \frac{8}{3}\Bigg) \Bigg(96485 \frac{\rm{C}}{\rm{mol}} \Bigg) \Delta \Psi_{eq}.
\end{equation}
Solving for $\Delta \Psi_{eq}$ gives
\begin{equation}
\Delta \Psi_{eq} = 179 \ \rm{mV}.
\end{equation}
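The same number can be checked numerically (values from 2.2):

```python
# Check of 2.4: minimum membrane potential for net ATP synthesis.
F, nA = 96485.0, 8.0 / 3.0
dPsi_eq = 46171.0 / (nA * F)  # V
print(dPsi_eq)                # ~0.179 V = 179 mV
```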
## 2.5. Assume that reaction (2.1) proceeds by simple mass-action kinetics, with a constant reverse rate $k_r$. How does the forward rate constant necessarily depend on $\Delta\Psi$ for the reaction kinetics to be properly thermodynamically balanced?
For a simplified model, ATP synthase has the following conformations:
1. Open ($\mathbf{O}$): Percentage of open active sites on ATP synthase.
2. Bound ($\mathbf{B}$): Percentage of active sites bound to ADP and Pi that then undergoes a conformational change to make ATP.
Then the reaction scheme is
\begin{equation}
\mathbf{O} \underset{k_{BO}}{\overset{k_{OB}}{\rightleftharpoons}} \mathbf{B},
\end{equation}
where the forward step $k_{OB}$ binds ADP and Pi and the reverse step $k_{BO}$ releases ATP.
Assuming mass-action kinetics, we have the following system:
\begin{align}
\frac{\rm{d}\mathbf{O}}{\rm{dt}} &= -k_{OB} [\mathrm{ADP}^{3-}] [\mathrm{Pi}^{2-}] [\mathrm{H}^{+}] \mathbf{O} + k_{BO} [\mathrm{ATP}^{4-}] \mathbf{B} \\
\frac{\rm{d}\mathbf{B}}{\rm{dt}} &= k_{OB} [\mathrm{ADP}^{3-}] [\mathrm{Pi}^{2-}] [\mathrm{H}^{+}]\mathbf{O} - k_{BO} [\mathrm{ATP}^{4-}] \mathbf{B}
\end{align}
At steady-state,
\begin{align}
k_{OB} [\mathrm{ADP}^{3-}] [\mathrm{Pi}^{2-}] [\mathrm{H}^{+}] \bar{\mathbf{O}} &= k_{BO} [\mathrm{ATP}^{4-}] \bar{\mathbf{B}} \\
\Rightarrow \quad \bar{\mathbf{O}} &= \frac{ [\mathrm{ATP}^{4-}] }{ [\mathrm{ADP}^{3-}] [\mathrm{Pi}^{2-}] [\mathrm{H}^{+}] } \frac{ k_{BO} }{ k_{OB} } \bar{\mathbf{B}} \\
&= K_{eq} \gamma \bar{\mathbf{B}},
\end{align}
where $\gamma = k_{BO} / k_{OB}$.
Note that the individual rate constants $k_{BO}$ and $k_{OB}$ are not separately identifiable; only their ratio $\gamma = k_{BO} / k_{OB}$ is. Taking $k_{OB} = 1$ and $k_{BO} = \gamma$, we can rewrite the system in terms of the single identifiable parameter $\gamma$, that is,
\begin{align}
\frac{\rm{d}\mathbf{O}}{\rm{dt}} &= - [\mathrm{ADP}^{3-}] [\mathrm{Pi}^{2-}] [\mathrm{H}^{+}]\mathbf{O} + \gamma [\mathrm{ATP}^{4-}] \mathbf{B} \\
\frac{\rm{d}\mathbf{B}}{\rm{dt}} &= [\mathrm{ADP}^{3-}] [\mathrm{Pi}^{2-}] [\mathrm{H}^{+}]\mathbf{O} - \gamma [\mathrm{ATP}^{4-}] \mathbf{B}
\end{align}
with steady-state condition $\bar{\mathbf{O}} = Q_r \gamma \bar{\mathbf{B}}$.
For the kinetics to be thermodynamically balanced, the net flux must vanish exactly where the coupled reaction (synthesis plus proton translocation) is at equilibrium, i.e., where $[\mathrm{ATP}^{4-}]/([\mathrm{ADP}^{3-}][\mathrm{Pi}^{2-}][\mathrm{H}^{+}]) = K_{eq} \, e^{n_A F \Delta\Psi/(RT)}$. Setting the net mass-action flux to zero at this ratio forces the forward rate constant to carry the $\Delta\Psi$ dependence:
\begin{equation}
k_{OB} = k_{BO} \, K_{eq} \, e^{n_A F \Delta\Psi/(R T)}.
\end{equation}
## 2.6. Write a simple program that simulates the kinetics of [$\Sigma$ADP], [$\Sigma$ATP] , and [$\Sigma$Pi] in the matrix given a fixed membrane potential, pH, and magnesium concentration, and given arbitrary initial conditions. How do the predicted steady-state concentrations depend on membrane potential, pH, and magnesium concentration?
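No solution was worked in the text for this part; the following is a minimal sketch, assuming the mass-action scheme of 2.5 with the forward rate constant tied to $\Delta\Psi$ as derived above (in terms of totals, $k_f = k_r K_{eq}' e^{n_A F \Delta\Psi/(RT)}$). The reverse rate constant $k_r$, the integration window, and the initial concentrations are illustrative assumptions; $K_{eq}'$ is taken from the 2.2 calculation (pH 7.2, 1 mM Mg$^{2+}$).

```python
# Sketch for 2.6: relax [ATP], [ADP], [Pi] to steady state at fixed dPsi, pH, Mg.
import numpy as np
from scipy.integrate import solve_ivp

R, T, F, nA = 8.314, 310.0, 96485.0, 8.0 / 3.0
K_apparent = 1.03e-6   # apparent equilibrium constant from 2.2 (pH 7.2, 1 mM Mg)
dPsi = 0.18            # membrane potential (V); vary this to probe its effect
k_r = 1.0              # arbitrary reverse rate constant (1/s)
k_f = k_r * K_apparent * np.exp(nA * F * dPsi / (R * T))  # thermodynamic balance (2.5)

def rhs(t, y):
    atp, adp, pi = y
    J = k_f * adp * pi - k_r * atp   # net ATP synthesis flux
    return [J, -J, -J]

sol = solve_ivp(rhs, [0.0, 50.0], [0.5e-3, 9.5e-3, 1.0e-3], method="LSODA")
atp, adp, pi = sol.y[:, -1]
# At steady state, atp/(adp*pi) -> K_apparent * exp(nA*F*dPsi/(R*T)):
print(atp, adp, pi, atp / (adp * pi))
```

The steady-state ratio $[\Sigma\rm{ATP}]/([\Sigma\rm{ADP}][\Sigma\rm{Pi}])$ equals $K_{eq}' e^{n_A F \Delta\Psi/(RT)}$, so it increases exponentially with $\Delta\Psi$ and shifts with pH and [Mg$^{2+}$] through $K_{eq}'$ and the binding polynomials.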
|
module main where
import parse
open import lib
open import huffman-types
import huffman
module parsem = parse huffman.gratr2-nt ptr
open parsem
open parsem.pnoderiv huffman.rrs huffman.huffman-rtn
open import run ptr
open noderiv {- from run.agda -}
{- imports for Huffman trees and also
Braun trees specialized to store Huffman trees
with lower frequency ones nearer the root -}
open import huffman-tree
import braun-tree as bt
open bt huffman-tree ht-compare
--open import braun-tree huffman-tree ht-compare
pqueue : ℕ → Set
pqueue = braun-tree
pq-empty : pqueue 0
pq-empty = bt-empty
pq-insert : ∀ {n : ℕ} → huffman-tree → pqueue n → pqueue (suc n)
pq-insert = bt-insert
pq-remove-min : ∀ {p : ℕ} → pqueue (suc p) → huffman-tree × pqueue p
pq-remove-min = bt-remove-min
data output-type : Set where
encode-output : string → string → string → string → output-type
decode-output : string → output-type
error-output : string → output-type -- error message if there is an error
inc-frequency : word → trie ℕ → trie ℕ
inc-frequency w t with trie-lookup t w
inc-frequency w t | nothing = trie-insert t w 1
inc-frequency w t | just c = trie-insert t w (suc c)
compute-frequencies : words → trie ℕ → trie ℕ
compute-frequencies (WordsStart w) t = inc-frequency w t
compute-frequencies (WordsNext w ww) t = compute-frequencies ww (inc-frequency w t)
inc-frequency-nonempty : ∀(w : word)(t : trie ℕ) → trie-nonempty (inc-frequency w t) ≡ tt
inc-frequency-nonempty w t with trie-lookup t w
inc-frequency-nonempty w t | nothing = trie-insert-nonempty t w 1
inc-frequency-nonempty w t | just c = trie-insert-nonempty t w (suc c)
compute-frequencies-nonempty : ∀(ws : words)(t : trie ℕ) → trie-nonempty (compute-frequencies ws t) ≡ tt
compute-frequencies-nonempty (WordsNext w ww) t = compute-frequencies-nonempty ww (inc-frequency w t)
compute-frequencies-nonempty (WordsStart w) t = inc-frequency-nonempty w t
build-huffman-pqueue : (l : 𝕃 (string × ℕ)) → pqueue (length l)
build-huffman-pqueue [] = pq-empty
build-huffman-pqueue ((s , f) :: l) = pq-insert (ht-leaf s f) (build-huffman-pqueue l)
-- where we call this function, we have enough evidence to prove the Braun tree is nonempty
process-huffman-pqueue : ∀{n} → n =ℕ 0 ≡ ff → pqueue n → huffman-tree
process-huffman-pqueue{0} () b
process-huffman-pqueue{suc n} _ t with pq-remove-min t
process-huffman-pqueue{suc 0} _ t | h , _ = h
process-huffman-pqueue{suc (suc n)} _ _ | h , t with pq-remove-min t
process-huffman-pqueue{suc (suc n)} _ _ | h , _ | h' , t =
process-huffman-pqueue{suc n} refl (pq-insert (ht-node ((ht-frequency h) + (ht-frequency h')) h h') t)
build-mappingh : huffman-tree → trie string → 𝕃 char → trie string
build-mappingh (ht-leaf s _) m l = trie-insert m s (𝕃char-to-string (reverse l))
build-mappingh (ht-node _ h1 h2) m l =
build-mappingh h2 (build-mappingh h1 m ('0' :: l)) ('1' :: l)
build-mapping : huffman-tree → trie string
build-mapping h = build-mappingh h empty-trie []
encode-word : trie string → word → string
encode-word t w with trie-lookup t w
encode-word t w | nothing = "error"
encode-word t w | just s = s
encode-words : trie string → words → string
encode-words t (WordsNext w ww) = encode-word t w ^ encode-words t ww
encode-words t (WordsStart w) = encode-word t w
data code-tree : Set where
ct-empty : code-tree
ct-leaf : string → code-tree
ct-node : code-tree → code-tree → code-tree
flip-digit : digit → digit
flip-digit Zero = One
flip-digit One = Zero
sub-ct : digit → code-tree → code-tree
sub-ct _ ct-empty = ct-empty
sub-ct _ (ct-leaf _) = ct-empty
sub-ct Zero (ct-node t1 t2) = t1
sub-ct One (ct-node t1 t2) = t2
ct-node-digit : digit → code-tree → code-tree → code-tree
ct-node-digit Zero t1 t2 = ct-node t1 t2
ct-node-digit One t1 t2 = ct-node t2 t1
ct-insert : code-tree → code → code-tree
ct-insert t (Code s (BvlitStart d)) =
-- child d of the new tree is the new leaf, and the other child is the other subtree of t
ct-node-digit d (ct-leaf s) (sub-ct (flip-digit d) t)
ct-insert t (Code s (BvlitCons d v)) =
-- child d of the new tree is obtained recursively and the other child is the other subtree of t
ct-node-digit d (ct-insert (sub-ct d t) (Code s v)) (sub-ct (flip-digit d) t)
make-code-tree : code-tree → codes → code-tree
make-code-tree t (CodesNext c cs) = make-code-tree (ct-insert t c) cs
make-code-tree t (CodesStart c) = ct-insert t c
decode-stringh : code-tree → code-tree → bvlit → string
decode-stringh orig n (BvlitCons d v) with sub-ct d n
decode-stringh orig n (BvlitCons d v) | ct-leaf s = s ^ " " ^ (decode-stringh orig orig v)
decode-stringh orig n (BvlitCons d v) | ct-empty = "error\n"
decode-stringh orig n (BvlitCons d v) | n' = decode-stringh orig n' v
decode-stringh orig n (BvlitStart d) with sub-ct d n
decode-stringh orig n (BvlitStart d) | ct-leaf s = s ^ "\n"
decode-stringh orig n (BvlitStart d) | _ = "error\n"
decode-string : code-tree → bvlit → string
decode-string t v = decode-stringh t t v
process-cmd : cmd → output-type
process-cmd (Encode ww) = step (compute-frequencies ww empty-trie) (compute-frequencies-nonempty ww empty-trie)
where step : (t : trie ℕ) → trie-nonempty t ≡ tt → output-type
step t nonempty-t =
let s1 = trie-to-string " -> " ℕ-to-string t in
let m = trie-mappings t in
let wt = build-huffman-pqueue m in
let h = process-huffman-pqueue (is-empty-ff-length (trie-mappings t) (trie-mappings-nonempty t nonempty-t)) wt in
let s2 = ht-to-string h in
let mp = build-mapping h in
let s3 = trie-to-string " <- " (λ s → s) mp in
let s4 = "! " ^ s3 ^ (encode-words mp ww) in
encode-output s1 s2 s3 s4
process-cmd (Decode cs v) =
let ct = make-code-tree ct-empty cs in
let s = decode-string ct v in
decode-output s
process-start : start → output-type
process-start (File c) = process-cmd c
process : Run → output-type
process (ParseTree (parsed-start p) :: []) = process-start p
process r = error-output ("Parsing failure (run with -" ^ "-showParsed).\n")
putStrRunIf : 𝔹 → Run → IO ⊤
putStrRunIf tt r = putStr (Run-to-string r) >> putStr "\n"
putStrRunIf ff r = return triv
doOutput : output-type → string → IO ⊤
doOutput (error-output s) basename = putStr ("Error: " ^ s ^ "\n")
doOutput (encode-output s1 s2 s3 s4) basename =
writeFile (basename ^ "-frequencies.txt") s1 >>
writeFile (basename ^ ".gv") s2 >>
writeFile (basename ^ "-mapping.txt") s3 >>
writeFile (basename ^ ".huff") s4
doOutput (decode-output s) basename = writeFile (basename ^ "-decoded.txt") s
processArgs : (showRun : 𝔹) → (showParsed : 𝔹) → 𝕃 string → IO ⊤
processArgs showRun showParsed (input-filename :: []) = (readFiniteFile input-filename) >>= processText
where processText : string → IO ⊤
processText x with runRtn (string-to-𝕃char x)
processText x | s with s
processText x | s | inj₁ cs = putStr "Characters left before failure : " >> putStr (𝕃char-to-string cs) >> putStr "\nCannot proceed to parsing.\n"
processText x | s | inj₂ r with putStrRunIf showRun r | rewriteRun r
processText x | s | inj₂ r | sr | r' with putStrRunIf showParsed r'
processText x | s | inj₂ r | sr | r' | sr' = sr >> sr' >> doOutput (process r') (base-filename input-filename)
processArgs showRun showParsed ("--showRun" :: xs) = processArgs tt showParsed xs
processArgs showRun showParsed ("--showParsed" :: xs) = processArgs showRun tt xs
processArgs showRun showParsed (x :: xs) = putStr ("Unknown option " ^ x ^ "\n")
processArgs showRun showParsed [] = putStr "Please run with the name of a file to process.\n"
main : IO ⊤
main = getArgs >>= processArgs ff ff
|
/******************************************************************************\
* Author: Matthew Beauregard Smith *
* Affiliation: The University of Texas at Austin *
* Department: Oden Institute and Institute for Cellular and Molecular Biology *
* PI: Edward Marcotte *
* Project: Protein Fluorosequencing *
\******************************************************************************/
// Boost unit test framework (recommended to be the first include):
#include <boost/test/unit_test.hpp>
// File under test:
#include "dye-seq.h"
namespace whatprot {
BOOST_AUTO_TEST_SUITE(common_suite)
BOOST_AUTO_TEST_SUITE(dye_seq_suite)
BOOST_AUTO_TEST_CASE(constructor_test) {
int num_channels = 2;
string s = "0";
DyeSeq ds(num_channels, s);
BOOST_TEST(ds.length == 1);
BOOST_TEST(ds.num_channels == 2);
BOOST_TEST(ds.seq != (void*)NULL);
BOOST_TEST(ds.seq[0] == 0);
}
BOOST_AUTO_TEST_CASE(constructor_empty_string_test) {
int num_channels = 2;
string s = "";
DyeSeq ds(num_channels, s);
BOOST_TEST(ds.length == 0);
BOOST_TEST(ds.num_channels == 2);
}
BOOST_AUTO_TEST_CASE(constructor_trailing_dots_test) {
int num_channels = 2;
string s = "0..";
DyeSeq ds(num_channels, s);
BOOST_TEST(ds.length == 1);
BOOST_TEST(ds.num_channels == 2);
BOOST_TEST(ds.seq != (void*)NULL);
BOOST_TEST(ds.seq[0] == 0);
}
BOOST_AUTO_TEST_CASE(copy_constructor_test) {
int num_channels = 2;
string s = "0";
DyeSeq ds1(num_channels, s);
DyeSeq ds2(ds1);
BOOST_TEST(ds1.length == 1);
BOOST_TEST(ds1.num_channels == 2);
BOOST_TEST(ds1.seq != (void*)NULL);
BOOST_TEST(ds1.seq[0] == 0);
BOOST_TEST(ds2.length == 1);
BOOST_TEST(ds2.num_channels == 2);
BOOST_TEST(ds2.seq != (void*)NULL);
BOOST_TEST(ds2.seq[0] == 0);
}
BOOST_AUTO_TEST_CASE(bracket_op_const_test) {
int num_channels = 2;
string s = "0";
DyeSeq ds(num_channels, s);
const DyeSeq& cds = ds;
BOOST_TEST(cds[0] == 0);
}
BOOST_AUTO_TEST_CASE(bracket_op_const_past_end_test) {
int num_channels = 2;
string s = "0";
DyeSeq ds(num_channels, s);
const DyeSeq& cds = ds;
BOOST_TEST(cds[1] == -1);
}
BOOST_AUTO_TEST_CASE(more_difficult_dye_seq_string_test) {
int num_channels = 3;
string s = "..0.1..2..";
DyeSeq ds(num_channels, s);
const DyeSeq& cds = ds;
BOOST_TEST(cds[0] == -1);
BOOST_TEST(cds[1] == -1);
BOOST_TEST(cds[2] == 0);
BOOST_TEST(cds[3] == -1);
BOOST_TEST(cds[4] == 1);
BOOST_TEST(cds[5] == -1);
BOOST_TEST(cds[6] == -1);
BOOST_TEST(cds[7] == 2);
BOOST_TEST(cds[8] == -1);
BOOST_TEST(cds[9] == -1);
BOOST_TEST(cds[10] == -1);
}
BOOST_AUTO_TEST_CASE(bracket_op_test) {
int num_channels = 2;
string s = "0";
DyeSeq ds(num_channels, s);
BOOST_TEST(ds[0] == 0);
ds[0] = 1;
BOOST_TEST(ds[0] == 1);
}
BOOST_AUTO_TEST_SUITE_END() // dye_seq_suite
BOOST_AUTO_TEST_SUITE_END() // common_suite
} // namespace whatprot
|
import Data.Vect
import Data.Fin
matrix : Nat -> Nat -> Type -> Type
matrix k j x = Vect k (Vect j x)
addV : Num a => Vect n a -> Vect n a -> Vect n a
addV [] [] = []
addV (x :: xs) (y :: ys) = (x + y) :: (addV xs ys)
dot : Num a => Vect n a -> Vect n a -> a
dot [] [] = 0
dot (x :: xs) (y :: ys) = (x * y) + (dot xs ys)
add : Num a => matrix n m a -> matrix n m a -> matrix n m a
add [] [] = []
add (x :: xs) (y :: ys) = (addV x y) :: (add xs ys)
row : Fin n -> matrix n m a -> Vect m a
row = index
col : Fin m -> matrix n m a -> Vect n a
col x y = map (index x) y
promote : Fin n -> Fin (n + m)
promote FZ = FZ
promote (FS x) = FS (promote x)
inject : Fin n -> Fin (S n)
inject FZ = FZ
inject (FS x) = FS (inject x)
listFins : (n : Nat) -> Vect n (Fin n)
listFins Z = []
listFins (S k) = FZ :: (map FS (listFins k))
squareMatrix : Nat -> Type -> Type
squareMatrix n = matrix n n
-- Note: `thing` sums the dot products of matching rows and columns; this is a
-- simple placeholder quantity, not the true determinant for n > 1.
det : Num a => squareMatrix n a -> a
det {n} x = sum $ map thing (listFins n)
  where thing ix = dot (row ix x) (col ix x)
|
[STATEMENT]
lemma FD4: "Der_1 \<D> \<Longrightarrow> Der_4e \<D> \<Longrightarrow> Fr_4(\<F>\<^sub>D \<D>)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>Der_1 \<D>; Der_4e \<D>\<rbrakk> \<Longrightarrow> Fr_4 (\<F>\<^sub>D \<D>)
[PROOF STEP]
by (metis CD1b CD2 CD4 Cl_8_def Cl_der_def Fr_4_def Fr_der_def2 PC1 PC8 meet_def)
|
[GOAL]
R : Type u
inst✝³ : Ring R
Q : TypeMax
inst✝² : AddCommGroup Q
inst✝¹ : Module R Q
inst✝ : Injective R Q
X✝ Y✝ : ModuleCat R
g : X✝ ⟶ ModuleCat.mk Q
f : X✝ ⟶ Y✝
mn : CategoryTheory.Mono f
⊢ ∃ h, CategoryTheory.CategoryStruct.comp f h = g
[PROOFSTEP]
rcases Module.Injective.out _ _ f ((ModuleCat.mono_iff_injective f).mp mn) g with ⟨h, eq1⟩
[GOAL]
case intro
R : Type u
inst✝³ : Ring R
Q : TypeMax
inst✝² : AddCommGroup Q
inst✝¹ : Module R Q
inst✝ : Injective R Q
X✝ Y✝ : ModuleCat R
g : X✝ ⟶ ModuleCat.mk Q
f : X✝ ⟶ Y✝
mn : CategoryTheory.Mono f
h : ↑Y✝ →ₗ[R] ↑(ModuleCat.mk Q)
eq1 : ∀ (x : ↑X✝), ↑h (↑f x) = ↑g x
⊢ ∃ h, CategoryTheory.CategoryStruct.comp f h = g
[PROOFSTEP]
exact ⟨h, LinearMap.ext eq1⟩
[GOAL]
R : Type u
inst✝³ : Ring R
Q : TypeMax
inst✝² : AddCommGroup Q
inst✝¹ : Module R Q
inst✝ : CategoryTheory.Injective (ModuleCat.mk Q)
X Y : TypeMax
ins1 : AddCommGroup X
ins2 : AddCommGroup Y
ins3 : Module R X
ins4 : Module R Y
f : X →ₗ[R] Y
hf : Function.Injective ↑f
g : X →ₗ[R] Q
⊢ ∃ h, ∀ (x : X), ↑h (↑f x) = ↑g x
[PROOFSTEP]
skip
[GOAL]
R : Type u
inst✝³ : Ring R
Q : TypeMax
inst✝² : AddCommGroup Q
inst✝¹ : Module R Q
inst✝ : CategoryTheory.Injective (ModuleCat.mk Q)
X Y : TypeMax
ins1 : AddCommGroup X
ins2 : AddCommGroup Y
ins3 : Module R X
ins4 : Module R Y
f : X →ₗ[R] Y
hf : Function.Injective ↑f
g : X →ₗ[R] Q
⊢ ∃ h, ∀ (x : X), ↑h (↑f x) = ↑g x
[PROOFSTEP]
rcases @CategoryTheory.Injective.factors (ModuleCat R) _ ⟨Q⟩ _ ⟨X⟩ ⟨Y⟩ g f ((ModuleCat.mono_iff_injective _).mpr hf) with
⟨h, rfl⟩
[GOAL]
case intro
R : Type u
inst✝³ : Ring R
Q : TypeMax
inst✝² : AddCommGroup Q
inst✝¹ : Module R Q
inst✝ : CategoryTheory.Injective (ModuleCat.mk Q)
X Y : TypeMax
ins1 : AddCommGroup X
ins2 : AddCommGroup Y
ins3 : Module R X
ins4 : Module R Y
f : X →ₗ[R] Y
hf : Function.Injective ↑f
h : ModuleCat.mk Y ⟶ ModuleCat.mk Q
⊢ ∃ h_1, ∀ (x : X), ↑h_1 (↑f x) = ↑(CategoryTheory.CategoryStruct.comp f h) x
[PROOFSTEP]
exact ⟨h, fun x => rfl⟩
[GOAL]
R : Type u
inst✝⁶ : Ring R
Q : TypeMax
inst✝⁵ : AddCommGroup Q
inst✝⁴ : Module R Q
M N : Type (max u v)
inst✝³ : AddCommGroup M
inst✝² : AddCommGroup N
inst✝¹ : Module R M
inst✝ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
a b : ExtensionOf i f
domain_eq : a.domain = b.domain
to_fun_eq : ∀ ⦃x : { x // x ∈ a.domain }⦄ ⦃y : { x // x ∈ b.domain }⦄, ↑x = ↑y → ↑a.toLinearPMap x = ↑b.toLinearPMap y
⊢ a = b
[PROOFSTEP]
rcases a with ⟨a, a_le, e1⟩
[GOAL]
case mk
R : Type u
inst✝⁶ : Ring R
Q : TypeMax
inst✝⁵ : AddCommGroup Q
inst✝⁴ : Module R Q
M N : Type (max u v)
inst✝³ : AddCommGroup M
inst✝² : AddCommGroup N
inst✝¹ : Module R M
inst✝ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
b : ExtensionOf i f
a : N →ₗ.[R] Q
a_le : LinearMap.range i ≤ a.domain
e1 : ∀ (m : M), ↑f m = ↑a { val := ↑i m, property := (_ : ↑i m ∈ a.domain) }
domain_eq : { toLinearPMap := a, le := a_le, is_extension := e1 }.toLinearPMap.domain = b.domain
to_fun_eq :
∀ ⦃x : { x // x ∈ { toLinearPMap := a, le := a_le, is_extension := e1 }.toLinearPMap.domain }⦄
⦃y : { x // x ∈ b.domain }⦄,
↑x = ↑y → ↑{ toLinearPMap := a, le := a_le, is_extension := e1 }.toLinearPMap x = ↑b.toLinearPMap y
⊢ { toLinearPMap := a, le := a_le, is_extension := e1 } = b
[PROOFSTEP]
rcases b with ⟨b, b_le, e2⟩
[GOAL]
case mk.mk
R : Type u
inst✝⁶ : Ring R
Q : TypeMax
inst✝⁵ : AddCommGroup Q
inst✝⁴ : Module R Q
M N : Type (max u v)
inst✝³ : AddCommGroup M
inst✝² : AddCommGroup N
inst✝¹ : Module R M
inst✝ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
a : N →ₗ.[R] Q
a_le : LinearMap.range i ≤ a.domain
e1 : ∀ (m : M), ↑f m = ↑a { val := ↑i m, property := (_ : ↑i m ∈ a.domain) }
b : N →ₗ.[R] Q
b_le : LinearMap.range i ≤ b.domain
e2 : ∀ (m : M), ↑f m = ↑b { val := ↑i m, property := (_ : ↑i m ∈ b.domain) }
domain_eq :
{ toLinearPMap := a, le := a_le, is_extension := e1 }.toLinearPMap.domain =
{ toLinearPMap := b, le := b_le, is_extension := e2 }.toLinearPMap.domain
to_fun_eq :
∀ ⦃x : { x // x ∈ { toLinearPMap := a, le := a_le, is_extension := e1 }.toLinearPMap.domain }⦄
⦃y : { x // x ∈ { toLinearPMap := b, le := b_le, is_extension := e2 }.toLinearPMap.domain }⦄,
↑x = ↑y →
↑{ toLinearPMap := a, le := a_le, is_extension := e1 }.toLinearPMap x =
↑{ toLinearPMap := b, le := b_le, is_extension := e2 }.toLinearPMap y
⊢ { toLinearPMap := a, le := a_le, is_extension := e1 } = { toLinearPMap := b, le := b_le, is_extension := e2 }
[PROOFSTEP]
congr
[GOAL]
case mk.mk.e_toLinearPMap
R : Type u
inst✝⁶ : Ring R
Q : TypeMax
inst✝⁵ : AddCommGroup Q
inst✝⁴ : Module R Q
M N : Type (max u v)
inst✝³ : AddCommGroup M
inst✝² : AddCommGroup N
inst✝¹ : Module R M
inst✝ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
a : N →ₗ.[R] Q
a_le : LinearMap.range i ≤ a.domain
e1 : ∀ (m : M), ↑f m = ↑a { val := ↑i m, property := (_ : ↑i m ∈ a.domain) }
b : N →ₗ.[R] Q
b_le : LinearMap.range i ≤ b.domain
e2 : ∀ (m : M), ↑f m = ↑b { val := ↑i m, property := (_ : ↑i m ∈ b.domain) }
domain_eq :
{ toLinearPMap := a, le := a_le, is_extension := e1 }.toLinearPMap.domain =
{ toLinearPMap := b, le := b_le, is_extension := e2 }.toLinearPMap.domain
to_fun_eq :
∀ ⦃x : { x // x ∈ { toLinearPMap := a, le := a_le, is_extension := e1 }.toLinearPMap.domain }⦄
⦃y : { x // x ∈ { toLinearPMap := b, le := b_le, is_extension := e2 }.toLinearPMap.domain }⦄,
↑x = ↑y →
↑{ toLinearPMap := a, le := a_le, is_extension := e1 }.toLinearPMap x =
↑{ toLinearPMap := b, le := b_le, is_extension := e2 }.toLinearPMap y
⊢ a = b
[PROOFSTEP]
exact LinearPMap.ext domain_eq to_fun_eq
[GOAL]
R : Type u
inst✝⁶ : Ring R
Q : TypeMax
inst✝⁵ : AddCommGroup Q
inst✝⁴ : Module R Q
M N : Type (max u v)
inst✝³ : AddCommGroup M
inst✝² : AddCommGroup N
inst✝¹ : Module R M
inst✝ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
a b : ExtensionOf i f
r : a = b
x y : { x // x ∈ a.domain }
h : ↑x = ↑y
⊢ x = y
[PROOFSTEP]
exact_mod_cast h
[GOAL]
R : Type u
inst✝⁶ : Ring R
Q : TypeMax
inst✝⁵ : AddCommGroup Q
inst✝⁴ : Module R Q
M N : Type (max u v)
inst✝³ : AddCommGroup M
inst✝² : AddCommGroup N
inst✝¹ : Module R M
inst✝ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
X1 X2 : ExtensionOf i f
src✝ : N →ₗ.[R] Q := X1.toLinearPMap ⊓ X2.toLinearPMap
x : N
hx : x ∈ LinearMap.range i
⊢ x ∈ LinearPMap.eqLocus X1.toLinearPMap X2.toLinearPMap
[PROOFSTEP]
rcases hx with ⟨x, rfl⟩
[GOAL]
case intro
R : Type u
inst✝⁶ : Ring R
Q : TypeMax
inst✝⁵ : AddCommGroup Q
inst✝⁴ : Module R Q
M N : Type (max u v)
inst✝³ : AddCommGroup M
inst✝² : AddCommGroup N
inst✝¹ : Module R M
inst✝ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
X1 X2 : ExtensionOf i f
src✝ : N →ₗ.[R] Q := X1.toLinearPMap ⊓ X2.toLinearPMap
x : M
⊢ ↑i x ∈ LinearPMap.eqLocus X1.toLinearPMap X2.toLinearPMap
[PROOFSTEP]
refine' ⟨X1.le (Set.mem_range_self _), X2.le (Set.mem_range_self _), _⟩
[GOAL]
case intro
R : Type u
inst✝⁶ : Ring R
Q : TypeMax
inst✝⁵ : AddCommGroup Q
inst✝⁴ : Module R Q
M N : Type (max u v)
inst✝³ : AddCommGroup M
inst✝² : AddCommGroup N
inst✝¹ : Module R M
inst✝ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
X1 X2 : ExtensionOf i f
src✝ : N →ₗ.[R] Q := X1.toLinearPMap ⊓ X2.toLinearPMap
x : M
⊢ ↑X1.toLinearPMap { val := ↑i x, property := (_ : ↑i x ∈ X1.domain) } =
↑X2.toLinearPMap { val := ↑i x, property := (_ : ↑i x ∈ X2.domain) }
[PROOFSTEP]
rw [← X1.is_extension x, ← X2.is_extension x]
[GOAL]
R : Type u
inst✝⁶ : Ring R
Q : TypeMax
inst✝⁵ : AddCommGroup Q
inst✝⁴ : Module R Q
M N : Type (max u v)
inst✝³ : AddCommGroup M
inst✝² : AddCommGroup N
inst✝¹ : Module R M
inst✝ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
X Y : ExtensionOf i f
h : X.toLinearPMap = Y.toLinearPMap
⊢ X.domain = Y.domain
[PROOFSTEP]
rw [h]
[GOAL]
R : Type u
inst✝⁶ : Ring R
Q : TypeMax
inst✝⁵ : AddCommGroup Q
inst✝⁴ : Module R Q
M N : Type (max u v)
inst✝³ : AddCommGroup M
inst✝² : AddCommGroup N
inst✝¹ : Module R M
inst✝ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
X Y : ExtensionOf i f
h : X.toLinearPMap = Y.toLinearPMap
x : { x // x ∈ X.domain }
y : { x // x ∈ Y.domain }
h' : ↑x = ↑y
⊢ ↑X.toLinearPMap x = ↑Y.toLinearPMap y
[PROOFSTEP]
have :
{x y : N} →
(h'' : x = y) →
(hx : x ∈ X.toLinearPMap.domain) →
(hy : y ∈ Y.toLinearPMap.domain) → X.toLinearPMap ⟨x, hx⟩ = Y.toLinearPMap ⟨y, hy⟩ :=
by
rw [h]
intro _ _ h _ _
congr
[GOAL]
R : Type u
inst✝⁶ : Ring R
Q : TypeMax
inst✝⁵ : AddCommGroup Q
inst✝⁴ : Module R Q
M N : Type (max u v)
inst✝³ : AddCommGroup M
inst✝² : AddCommGroup N
inst✝¹ : Module R M
inst✝ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
X Y : ExtensionOf i f
h : X.toLinearPMap = Y.toLinearPMap
x : { x // x ∈ X.domain }
y : { x // x ∈ Y.domain }
h' : ↑x = ↑y
⊢ ∀ {x y : N},
x = y →
∀ (hx : x ∈ X.domain) (hy : y ∈ Y.domain),
↑X.toLinearPMap { val := x, property := hx } = ↑Y.toLinearPMap { val := y, property := hy }
[PROOFSTEP]
rw [h]
[GOAL]
R : Type u
inst✝⁶ : Ring R
Q : TypeMax
inst✝⁵ : AddCommGroup Q
inst✝⁴ : Module R Q
M N : Type (max u v)
inst✝³ : AddCommGroup M
inst✝² : AddCommGroup N
inst✝¹ : Module R M
inst✝ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
X Y : ExtensionOf i f
h : X.toLinearPMap = Y.toLinearPMap
x : { x // x ∈ X.domain }
y : { x // x ∈ Y.domain }
h' : ↑x = ↑y
⊢ ∀ {x y : N},
x = y →
∀ (hx : x ∈ Y.domain) (hy : y ∈ Y.domain),
↑Y.toLinearPMap { val := x, property := hx } = ↑Y.toLinearPMap { val := y, property := hy }
[PROOFSTEP]
intro _ _ h _ _
[GOAL]
R : Type u
inst✝⁶ : Ring R
Q : TypeMax
inst✝⁵ : AddCommGroup Q
inst✝⁴ : Module R Q
M N : Type (max u v)
inst✝³ : AddCommGroup M
inst✝² : AddCommGroup N
inst✝¹ : Module R M
inst✝ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
X Y : ExtensionOf i f
h✝ : X.toLinearPMap = Y.toLinearPMap
x : { x // x ∈ X.domain }
y : { x // x ∈ Y.domain }
h' : ↑x = ↑y
x✝ y✝ : N
h : x✝ = y✝
hx✝ : x✝ ∈ Y.domain
hy✝ : y✝ ∈ Y.domain
⊢ ↑Y.toLinearPMap { val := x✝, property := hx✝ } = ↑Y.toLinearPMap { val := y✝, property := hy✝ }
[PROOFSTEP]
congr
[GOAL]
R : Type u
inst✝⁶ : Ring R
Q : TypeMax
inst✝⁵ : AddCommGroup Q
inst✝⁴ : Module R Q
M N : Type (max u v)
inst✝³ : AddCommGroup M
inst✝² : AddCommGroup N
inst✝¹ : Module R M
inst✝ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
X Y : ExtensionOf i f
h : X.toLinearPMap = Y.toLinearPMap
x : { x // x ∈ X.domain }
y : { x // x ∈ Y.domain }
h' : ↑x = ↑y
this :
∀ {x y : N},
x = y →
∀ (hx : x ∈ X.domain) (hy : y ∈ Y.domain),
↑X.toLinearPMap { val := x, property := hx } = ↑Y.toLinearPMap { val := y, property := hy }
⊢ ↑X.toLinearPMap x = ↑Y.toLinearPMap y
[PROOFSTEP]
apply this h' _ _
[GOAL]
R : Type u
inst✝⁶ : Ring R
Q : TypeMax
inst✝⁵ : AddCommGroup Q
inst✝⁴ : Module R Q
M N : Type (max u v)
inst✝³ : AddCommGroup M
inst✝² : AddCommGroup N
inst✝¹ : Module R M
inst✝ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
X Y : ExtensionOf i f
x : { x // x ∈ (X ⊓ Y).domain }
y : { x // x ∈ (X.toLinearPMap ⊓ Y.toLinearPMap).domain }
h : ↑x = ↑y
⊢ ↑(X ⊓ Y).toLinearPMap x = ↑(X.toLinearPMap ⊓ Y.toLinearPMap) y
[PROOFSTEP]
congr
[GOAL]
case e_a
R : Type u
inst✝⁶ : Ring R
Q : TypeMax
inst✝⁵ : AddCommGroup Q
inst✝⁴ : Module R Q
M N : Type (max u v)
inst✝³ : AddCommGroup M
inst✝² : AddCommGroup N
inst✝¹ : Module R M
inst✝ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
X Y : ExtensionOf i f
x : { x // x ∈ (X ⊓ Y).domain }
y : { x // x ∈ (X.toLinearPMap ⊓ Y.toLinearPMap).domain }
h : ↑x = ↑y
⊢ x = y
[PROOFSTEP]
exact_mod_cast h
[GOAL]
R : Type u
inst✝⁶ : Ring R
Q : TypeMax
inst✝⁵ : AddCommGroup Q
inst✝⁴ : Module R Q
M N : Type (max u v)
inst✝³ : AddCommGroup M
inst✝² : AddCommGroup N
inst✝¹ : Module R M
inst✝ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
c : Set (ExtensionOf i f)
hchain : IsChain (fun x x_1 => x ≤ x_1) c
⊢ IsChain (fun x x_1 => x ≤ x_1) ((fun x => x.toLinearPMap) '' c)
[PROOFSTEP]
rintro _ ⟨a, a_mem, rfl⟩ _ ⟨b, b_mem, rfl⟩ neq
[GOAL]
case intro.intro.intro.intro
R : Type u
inst✝⁶ : Ring R
Q : TypeMax
inst✝⁵ : AddCommGroup Q
inst✝⁴ : Module R Q
M N : Type (max u v)
inst✝³ : AddCommGroup M
inst✝² : AddCommGroup N
inst✝¹ : Module R M
inst✝ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
c : Set (ExtensionOf i f)
hchain : IsChain (fun x x_1 => x ≤ x_1) c
a : ExtensionOf i f
a_mem : a ∈ c
b : ExtensionOf i f
b_mem : b ∈ c
neq : (fun x => x.toLinearPMap) a ≠ (fun x => x.toLinearPMap) b
⊢ (fun x x_1 => x ≤ x_1) ((fun x => x.toLinearPMap) a) ((fun x => x.toLinearPMap) b) ∨
(fun x x_1 => x ≤ x_1) ((fun x => x.toLinearPMap) b) ((fun x => x.toLinearPMap) a)
[PROOFSTEP]
exact hchain a_mem b_mem (ne_of_apply_ne _ neq)
[GOAL]
R : Type u
inst✝⁶ : Ring R
Q : TypeMax
inst✝⁵ : AddCommGroup Q
inst✝⁴ : Module R Q
M N : Type (max u v)
inst✝³ : AddCommGroup M
inst✝² : AddCommGroup N
inst✝¹ : Module R M
inst✝ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
c : Set (ExtensionOf i f)
hchain : IsChain (fun x x_1 => x ≤ x_1) c
hnonempty : Set.Nonempty c
src✝ : N →ₗ.[R] Q :=
LinearPMap.sSup ((fun x => x.toLinearPMap) '' c)
(_ : DirectedOn (fun x x_1 => x ≤ x_1) ((fun x => x.toLinearPMap) '' c))
m : M
⊢ ↑f m =
↑{ domain := src✝.domain, toFun := src✝.toFun }
{ val := ↑i m, property := (_ : ↑i m ∈ { domain := src✝.domain, toFun := src✝.toFun }.domain) }
[PROOFSTEP]
refine' Eq.trans (hnonempty.some.is_extension m) _
[GOAL]
case refine'_1
R : Type u
inst✝⁶ : Ring R
Q : TypeMax
inst✝⁵ : AddCommGroup Q
inst✝⁴ : Module R Q
M N : Type (max u v)
inst✝³ : AddCommGroup M
inst✝² : AddCommGroup N
inst✝¹ : Module R M
inst✝ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
⊢ ∀ {c : Set (ExtensionOf i f)} (hchain : IsChain (fun x x_1 => x ≤ x_1) c),
Set.Nonempty c →
let src :=
LinearPMap.sSup ((fun x => x.toLinearPMap) '' c)
(_ : DirectedOn (fun x x_1 => x ≤ x_1) ((fun x => x.toLinearPMap) '' c));
DirectedOn (fun x x_1 => x ≤ x_1) (toLinearPMap '' c)
[PROOFSTEP]
intros c hchain _
[GOAL]
case refine'_1
R : Type u
inst✝⁶ : Ring R
Q : TypeMax
inst✝⁵ : AddCommGroup Q
inst✝⁴ : Module R Q
M N : Type (max u v)
inst✝³ : AddCommGroup M
inst✝² : AddCommGroup N
inst✝¹ : Module R M
inst✝ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
c : Set (ExtensionOf i f)
hchain : IsChain (fun x x_1 => x ≤ x_1) c
hnonempty✝ : Set.Nonempty c
⊢ let src :=
LinearPMap.sSup ((fun x => x.toLinearPMap) '' c)
(_ : DirectedOn (fun x x_1 => x ≤ x_1) ((fun x => x.toLinearPMap) '' c));
DirectedOn (fun x x_1 => x ≤ x_1) (toLinearPMap '' c)
[PROOFSTEP]
exact (IsChain.directedOn <| chain_linearPMap_of_chain_extensionOf hchain)
[GOAL]
case refine'_2
R : Type u
inst✝⁶ : Ring R
Q : TypeMax
inst✝⁵ : AddCommGroup Q
inst✝⁴ : Module R Q
M N : Type (max u v)
inst✝³ : AddCommGroup M
inst✝² : AddCommGroup N
inst✝¹ : Module R M
inst✝ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
c : Set (ExtensionOf i f)
hchain : IsChain (fun x x_1 => x ≤ x_1) c
hnonempty : Set.Nonempty c
src✝ : N →ₗ.[R] Q :=
LinearPMap.sSup ((fun x => x.toLinearPMap) '' c)
(_ : DirectedOn (fun x x_1 => x ≤ x_1) ((fun x => x.toLinearPMap) '' c))
m : M
⊢ ↑(Set.Nonempty.some hnonempty).toLinearPMap
{ val := ↑i m, property := (_ : ↑i m ∈ (Set.Nonempty.some hnonempty).toLinearPMap.domain) } =
↑{ domain := src✝.domain, toFun := src✝.toFun }
{ val := ↑i m, property := (_ : ↑i m ∈ { domain := src✝.domain, toFun := src✝.toFun }.domain) }
[PROOFSTEP]
symm
[GOAL]
case refine'_2
R : Type u
inst✝⁶ : Ring R
Q : TypeMax
inst✝⁵ : AddCommGroup Q
inst✝⁴ : Module R Q
M N : Type (max u v)
inst✝³ : AddCommGroup M
inst✝² : AddCommGroup N
inst✝¹ : Module R M
inst✝ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
c : Set (ExtensionOf i f)
hchain : IsChain (fun x x_1 => x ≤ x_1) c
hnonempty : Set.Nonempty c
src✝ : N →ₗ.[R] Q :=
LinearPMap.sSup ((fun x => x.toLinearPMap) '' c)
(_ : DirectedOn (fun x x_1 => x ≤ x_1) ((fun x => x.toLinearPMap) '' c))
m : M
⊢ ↑{ domain := src✝.domain, toFun := src✝.toFun }
{ val := ↑i m, property := (_ : ↑i m ∈ { domain := src✝.domain, toFun := src✝.toFun }.domain) } =
↑(Set.Nonempty.some hnonempty).toLinearPMap
{ val := ↑i m, property := (_ : ↑i m ∈ (Set.Nonempty.some hnonempty).toLinearPMap.domain) }
[PROOFSTEP]
generalize_proofs _ h1
[GOAL]
case refine'_2
R : Type u
inst✝⁶ : Ring R
Q : TypeMax
inst✝⁵ : AddCommGroup Q
inst✝⁴ : Module R Q
M N : Type (max u v)
inst✝³ : AddCommGroup M
inst✝² : AddCommGroup N
inst✝¹ : Module R M
inst✝ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
c : Set (ExtensionOf i f)
hchain : IsChain (fun x x_1 => x ≤ x_1) c
hnonempty : Set.Nonempty c
src✝ : N →ₗ.[R] Q :=
LinearPMap.sSup ((fun x => x.toLinearPMap) '' c)
(_ : DirectedOn (fun x x_1 => x ≤ x_1) ((fun x => x.toLinearPMap) '' c))
m : M
h✝ : ↑i m ∈ { domain := src✝.domain, toFun := src✝.toFun }.domain
h1 : ↑i m ∈ (Set.Nonempty.some hnonempty).toLinearPMap.domain
⊢ ↑{ domain := src✝.domain, toFun := src✝.toFun } { val := ↑i m, property := h✝ } =
↑(Set.Nonempty.some hnonempty).toLinearPMap { val := ↑i m, property := h1 }
[PROOFSTEP]
exact
LinearPMap.sSup_apply (IsChain.directedOn <| chain_linearPMap_of_chain_extensionOf hchain)
((Set.mem_image _ _ _).mpr ⟨hnonempty.some, hnonempty.choose_spec, rfl⟩) ⟨i m, h1⟩
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
x y : { x // x ∈ LinearMap.range i }
⊢ (fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) (x + y) =
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) x +
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) y
[PROOFSTEP]
have eq1 : _ + _ = (x + y).1 := congr_arg₂ (· + ·) x.2.choose_spec y.2.choose_spec
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
x y : { x // x ∈ LinearMap.range i }
eq1 : ↑i (Exists.choose (_ : ↑x ∈ LinearMap.range i)) + ↑i (Exists.choose (_ : ↑y ∈ LinearMap.range i)) = ↑(x + y)
⊢ (fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) (x + y) =
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) x +
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) y
[PROOFSTEP]
rw [← map_add, ← (x + y).2.choose_spec] at eq1
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
x y : { x // x ∈ LinearMap.range i }
eq1 :
↑i (Exists.choose (_ : ↑x ∈ LinearMap.range i) + Exists.choose (_ : ↑y ∈ LinearMap.range i)) =
↑i (Exists.choose (_ : ↑(x + y) ∈ LinearMap.range i))
⊢ (fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) (x + y) =
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) x +
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) y
[PROOFSTEP]
dsimp
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
x y : { x // x ∈ LinearMap.range i }
eq1 :
↑i (Exists.choose (_ : ↑x ∈ LinearMap.range i) + Exists.choose (_ : ↑y ∈ LinearMap.range i)) =
↑i (Exists.choose (_ : ↑(x + y) ∈ LinearMap.range i))
⊢ ↑f (Exists.choose (_ : ↑(x + y) ∈ LinearMap.range i)) =
↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i)) + ↑f (Exists.choose (_ : ↑y ∈ LinearMap.range i))
[PROOFSTEP]
rw [← Fact.out (p := Function.Injective i) eq1, map_add]
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
r : R
x : { x // x ∈ LinearMap.range i }
⊢ AddHom.toFun
{ toFun := fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i)),
map_add' :=
(_ :
∀ (x y : { x // x ∈ LinearMap.range i }),
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) (x + y) =
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) x +
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) y) }
(r • x) =
↑(RingHom.id R) r •
AddHom.toFun
{ toFun := fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i)),
map_add' :=
(_ :
∀ (x y : { x // x ∈ LinearMap.range i }),
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) (x + y) =
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) x +
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) y) }
x
[PROOFSTEP]
have eq1 : r • _ = (r • x).1 := congr_arg ((· • ·) r) x.2.choose_spec
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
r : R
x : { x // x ∈ LinearMap.range i }
eq1 : r • ↑i (Exists.choose (_ : ↑x ∈ LinearMap.range i)) = ↑(r • x)
⊢ AddHom.toFun
{ toFun := fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i)),
map_add' :=
(_ :
∀ (x y : { x // x ∈ LinearMap.range i }),
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) (x + y) =
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) x +
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) y) }
(r • x) =
↑(RingHom.id R) r •
AddHom.toFun
{ toFun := fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i)),
map_add' :=
(_ :
∀ (x y : { x // x ∈ LinearMap.range i }),
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) (x + y) =
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) x +
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) y) }
x
[PROOFSTEP]
rw [← LinearMap.map_smul, ← (r • x).2.choose_spec] at eq1
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
r : R
x : { x // x ∈ LinearMap.range i }
eq1 : ↑i (r • Exists.choose (_ : ↑x ∈ LinearMap.range i)) = ↑i (Exists.choose (_ : ↑(r • x) ∈ LinearMap.range i))
⊢ AddHom.toFun
{ toFun := fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i)),
map_add' :=
(_ :
∀ (x y : { x // x ∈ LinearMap.range i }),
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) (x + y) =
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) x +
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) y) }
(r • x) =
↑(RingHom.id R) r •
AddHom.toFun
{ toFun := fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i)),
map_add' :=
(_ :
∀ (x y : { x // x ∈ LinearMap.range i }),
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) (x + y) =
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) x +
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) y) }
x
[PROOFSTEP]
dsimp
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
r : R
x : { x // x ∈ LinearMap.range i }
eq1 : ↑i (r • Exists.choose (_ : ↑x ∈ LinearMap.range i)) = ↑i (Exists.choose (_ : ↑(r • x) ∈ LinearMap.range i))
⊢ ↑f (Exists.choose (_ : ↑(r • x) ∈ LinearMap.range i)) = r • ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))
[PROOFSTEP]
rw [← Fact.out (p := Function.Injective i) eq1, LinearMap.map_smul]
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
m : M
⊢ ↑f m =
↑{ domain := LinearMap.range i,
toFun :=
{
toAddHom :=
{ toFun := fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i)),
map_add' :=
(_ :
∀ (x y : { x // x ∈ LinearMap.range i }),
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) (x + y) =
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) x +
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) y) },
map_smul' :=
(_ :
∀ (r : R) (x : { x // x ∈ LinearMap.range i }),
AddHom.toFun
{ toFun := fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i)),
map_add' :=
(_ :
∀ (x y : { x // x ∈ LinearMap.range i }),
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) (x + y) =
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) x +
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) y) }
(r • x) =
↑(RingHom.id R) r •
AddHom.toFun
{ toFun := fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i)),
map_add' :=
(_ :
∀ (x y : { x // x ∈ LinearMap.range i }),
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) (x + y) =
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) x +
(fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i))) y) }
x) } }
{ val := ↑i m, property := (_ : ↑i m ∈ LinearMap.range i) }
[PROOFSTEP]
simp only [LinearPMap.mk_apply, LinearMap.coe_mk]
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
m : M
⊢ ↑f m =
↑{ toFun := fun x => ↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i)),
map_add' :=
(_ :
∀ (x y : { x // x ∈ LinearMap.range i }),
↑f (Exists.choose (_ : ↑(x + y) ∈ LinearMap.range i)) =
↑f (Exists.choose (_ : ↑x ∈ LinearMap.range i)) + ↑f (Exists.choose (_ : ↑y ∈ LinearMap.range i))) }
{ val := ↑i m, property := (_ : ↑i m ∈ LinearMap.range i) }
[PROOFSTEP]
dsimp
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
m : M
⊢ ↑f m = ↑f (Exists.choose (_ : ↑i m ∈ LinearMap.range i))
[PROOFSTEP]
apply congrArg
[GOAL]
case h
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
m : M
⊢ m = Exists.choose (_ : ↑i m ∈ LinearMap.range i)
[PROOFSTEP]
exact Fact.out (p := Function.Injective i) (⟨i m, ⟨_, rfl⟩⟩ : LinearMap.range i).2.choose_spec.symm
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
y : N
x : { x // x ∈ supExtensionOfMaxSingleton i f y }
⊢ ∃ a b, ↑x = ↑a + b • y
[PROOFSTEP]
have mem1 : x.1 ∈ (_ : Set _) := x.2
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
y : N
x : { x // x ∈ supExtensionOfMaxSingleton i f y }
mem1 : ↑x ∈ ↑(supExtensionOfMaxSingleton i f y)
⊢ ∃ a b, ↑x = ↑a + b • y
[PROOFSTEP]
rw [Submodule.coe_sup] at mem1
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
y : N
x : { x // x ∈ supExtensionOfMaxSingleton i f y }
mem1 : ↑x ∈ ↑(extensionOfMax i f).toLinearPMap.domain + ↑(Submodule.span R {y})
⊢ ∃ a b, ↑x = ↑a + b • y
[PROOFSTEP]
rcases mem1 with ⟨a, b, a_mem, b_mem : b ∈ (Submodule.span R _ : Submodule R N), eq1⟩
[GOAL]
case intro.intro.intro.intro
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
y : N
x : { x // x ∈ supExtensionOfMaxSingleton i f y }
a b : N
a_mem : a ∈ ↑(extensionOfMax i f).toLinearPMap.domain
b_mem : b ∈ Submodule.span R {y}
eq1 : (fun x x_1 => x + x_1) a b = ↑x
⊢ ∃ a b, ↑x = ↑a + b • y
[PROOFSTEP]
rw [Submodule.mem_span_singleton] at b_mem
[GOAL]
case intro.intro.intro.intro
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
y : N
x : { x // x ∈ supExtensionOfMaxSingleton i f y }
a b : N
a_mem : a ∈ ↑(extensionOfMax i f).toLinearPMap.domain
b_mem : ∃ a, a • y = b
eq1 : (fun x x_1 => x + x_1) a b = ↑x
⊢ ∃ a b, ↑x = ↑a + b • y
[PROOFSTEP]
rcases b_mem with ⟨z, eq2⟩
[GOAL]
case intro.intro.intro.intro.intro
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
y : N
x : { x // x ∈ supExtensionOfMaxSingleton i f y }
a b : N
a_mem : a ∈ ↑(extensionOfMax i f).toLinearPMap.domain
eq1 : (fun x x_1 => x + x_1) a b = ↑x
z : R
eq2 : z • y = b
⊢ ∃ a b, ↑x = ↑a + b • y
[PROOFSTEP]
exact ⟨⟨a, a_mem⟩, z, by rw [← eq1, ← eq2]⟩
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
y : N
x : { x // x ∈ supExtensionOfMaxSingleton i f y }
a b : N
a_mem : a ∈ ↑(extensionOfMax i f).toLinearPMap.domain
eq1 : (fun x x_1 => x + x_1) a b = ↑x
z : R
eq2 : z • y = b
⊢ ↑x = ↑{ val := a, property := a_mem } + z • y
[PROOFSTEP]
rw [← eq1, ← eq2]
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
y : N
z1 z2 : { x // x ∈ ideal i f y }
⊢ (fun z => ↑(extensionOfMax i f).toLinearPMap { val := ↑z • y, property := (_ : ↑z ∈ ideal i f y) }) (z1 + z2) =
(fun z => ↑(extensionOfMax i f).toLinearPMap { val := ↑z • y, property := (_ : ↑z ∈ ideal i f y) }) z1 +
(fun z => ↑(extensionOfMax i f).toLinearPMap { val := ↑z • y, property := (_ : ↑z ∈ ideal i f y) }) z2
[PROOFSTEP]
simp_rw [← (extensionOfMax i f).toLinearPMap.map_add]
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
y : N
z1 z2 : { x // x ∈ ideal i f y }
⊢ ↑(extensionOfMax i f).toLinearPMap { val := ↑(z1 + z2) • y, property := (_ : ↑(z1 + z2) ∈ ideal i f y) } =
↑(extensionOfMax i f).toLinearPMap
({ val := ↑z1 • y, property := (_ : ↑z1 ∈ ideal i f y) } +
{ val := ↑z2 • y, property := (_ : ↑z2 ∈ ideal i f y) })
[PROOFSTEP]
congr
[GOAL]
case e_a.e_val
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
y : N
z1 z2 : { x // x ∈ ideal i f y }
⊢ ↑(z1 + z2) • y =
↑{ val := ↑z1 • y, property := (_ : ↑z1 ∈ ideal i f y) } + ↑{ val := ↑z2 • y, property := (_ : ↑z2 ∈ ideal i f y) }
[PROOFSTEP]
apply add_smul
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
y : N
z1 : R
z2 : { x // x ∈ ideal i f y }
⊢ AddHom.toFun
{ toFun := fun z => ↑(extensionOfMax i f).toLinearPMap { val := ↑z • y, property := (_ : ↑z ∈ ideal i f y) },
map_add' :=
(_ :
∀ (z1 z2 : { x // x ∈ ideal i f y }),
(fun z => ↑(extensionOfMax i f).toLinearPMap { val := ↑z • y, property := (_ : ↑z ∈ ideal i f y) })
(z1 + z2) =
(fun z => ↑(extensionOfMax i f).toLinearPMap { val := ↑z • y, property := (_ : ↑z ∈ ideal i f y) }) z1 +
(fun z => ↑(extensionOfMax i f).toLinearPMap { val := ↑z • y, property := (_ : ↑z ∈ ideal i f y) })
z2) }
(z1 • z2) =
↑(RingHom.id R) z1 •
AddHom.toFun
{ toFun := fun z => ↑(extensionOfMax i f).toLinearPMap { val := ↑z • y, property := (_ : ↑z ∈ ideal i f y) },
map_add' :=
(_ :
∀ (z1 z2 : { x // x ∈ ideal i f y }),
(fun z => ↑(extensionOfMax i f).toLinearPMap { val := ↑z • y, property := (_ : ↑z ∈ ideal i f y) })
(z1 + z2) =
(fun z => ↑(extensionOfMax i f).toLinearPMap { val := ↑z • y, property := (_ : ↑z ∈ ideal i f y) })
z1 +
(fun z => ↑(extensionOfMax i f).toLinearPMap { val := ↑z • y, property := (_ : ↑z ∈ ideal i f y) })
z2) }
z2
[PROOFSTEP]
simp_rw [← (extensionOfMax i f).toLinearPMap.map_smul]
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
y : N
z1 : R
z2 : { x // x ∈ ideal i f y }
⊢ ↑(extensionOfMax i f).toLinearPMap { val := ↑(z1 • z2) • y, property := (_ : ↑(z1 • z2) ∈ ideal i f y) } =
↑(extensionOfMax i f).toLinearPMap (↑(RingHom.id R) z1 • { val := ↑z2 • y, property := (_ : ↑z2 ∈ ideal i f y) })
[PROOFSTEP]
congr 2
[GOAL]
case e_a.e_val
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
y : N
z1 : R
z2 : { x // x ∈ ideal i f y }
⊢ ↑(z1 • z2) • y = ↑(RingHom.id R) z1 • ↑{ val := ↑z2 • y, property := (_ : ↑z2 ∈ ideal i f y) }
[PROOFSTEP]
apply mul_smul
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
r : R
eq1 : r • y = 0
⊢ ↑(extendIdealTo i f h y) r = 0
[PROOFSTEP]
have : r ∈ ideal i f y := by
change (r • y) ∈ (extensionOfMax i f).toLinearPMap.domain
rw [eq1]
apply Submodule.zero_mem _
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
r : R
eq1 : r • y = 0
⊢ r ∈ ideal i f y
[PROOFSTEP]
change (r • y) ∈ (extensionOfMax i f).toLinearPMap.domain
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
r : R
eq1 : r • y = 0
⊢ r • y ∈ (extensionOfMax i f).toLinearPMap.domain
[PROOFSTEP]
rw [eq1]
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
r : R
eq1 : r • y = 0
⊢ 0 ∈ (extensionOfMax i f).toLinearPMap.domain
[PROOFSTEP]
apply Submodule.zero_mem _
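-- Here `ideal i f y` consists of the scalars r with r • y in the domain of
-- the maximal extension, which is what the `change` step above exposes; for
-- r • y = 0 membership is immediate, since 0 lies in every submodule.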
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
r : R
eq1 : r • y = 0
this : r ∈ ideal i f y
⊢ ↑(extendIdealTo i f h y) r = 0
[PROOFSTEP]
rw [ExtensionOfMaxAdjoin.extendIdealTo_is_extension i f h y r this]
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
r : R
eq1 : r • y = 0
this : r ∈ ideal i f y
⊢ ↑(idealTo i f y) { val := r, property := this } = 0
[PROOFSTEP]
dsimp [ExtensionOfMaxAdjoin.idealTo]
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
r : R
eq1 : r • y = 0
this : r ∈ ideal i f y
⊢ ↑(extensionOfMax i f).toLinearPMap { val := r • y, property := (_ : ↑{ val := r, property := this } ∈ ideal i f y) } =
0
[PROOFSTEP]
simp only [LinearMap.coe_mk, eq1, Subtype.coe_mk, ← ZeroMemClass.zero_def, (extensionOfMax i f).toLinearPMap.map_zero]
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
r r' : R
eq1 : r • y = r' • y
⊢ ↑(extendIdealTo i f h y) r = ↑(extendIdealTo i f h y) r'
[PROOFSTEP]
rw [← sub_eq_zero, ← map_sub]
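-- Well-definedness reduces to a kernel condition: by linearity,
-- `extendIdealTo` agrees on r and r' iff it vanishes on r - r', and the
-- hypothesis r • y = r' • y is equivalent, via `sub_smul`, to
-- (r - r') • y = 0, the situation handled by `extendIdealTo_wd'` below.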
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
r r' : R
eq1 : r • y = r' • y
⊢ ↑(extendIdealTo i f h y) (r - r') = 0
[PROOFSTEP]
convert ExtensionOfMaxAdjoin.extendIdealTo_wd' i f h (r - r') _
[GOAL]
case convert_2
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
r r' : R
eq1 : r • y = r' • y
⊢ (r - r') • y = 0
[PROOFSTEP]
rw [sub_smul, sub_eq_zero, eq1]
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
r : R
hr : r • y ∈ (extensionOfMax i f).toLinearPMap.domain
⊢ ↑(extendIdealTo i f h y) r = ↑(extensionOfMax i f).toLinearPMap { val := r • y, property := hr }
[PROOFSTEP]
simp only [ExtensionOfMaxAdjoin.extendIdealTo_is_extension i f h _ _ hr, ExtensionOfMaxAdjoin.idealTo, LinearMap.coe_mk,
Subtype.coe_mk, AddHom.coe_mk]
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
x : { x // x ∈ supExtensionOfMaxSingleton i f y }
a : { x // x ∈ (extensionOfMax i f).toLinearPMap.domain }
r : R
eq1 : ↑x = ↑a + r • y
⊢ extensionToFun i f h x = ↑(extensionOfMax i f).toLinearPMap a + ↑(extendIdealTo i f h y) r
[PROOFSTEP]
cases' a with a ha
[GOAL]
case mk
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
x : { x // x ∈ supExtensionOfMaxSingleton i f y }
r : R
a : N
ha : a ∈ (extensionOfMax i f).toLinearPMap.domain
eq1 : ↑x = ↑{ val := a, property := ha } + r • y
⊢ extensionToFun i f h x = ↑(extensionOfMax i f).toLinearPMap { val := a, property := ha } + ↑(extendIdealTo i f h y) r
[PROOFSTEP]
have eq2 : (ExtensionOfMaxAdjoin.fst i x - a : N) = (r - ExtensionOfMaxAdjoin.snd i x) • y :=
by
change x = a + r • y at eq1
rwa [ExtensionOfMaxAdjoin.eqn, ← sub_eq_zero, ← sub_sub_sub_eq, sub_eq_zero, ← sub_smul] at eq1
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
x : { x // x ∈ supExtensionOfMaxSingleton i f y }
r : R
a : N
ha : a ∈ (extensionOfMax i f).toLinearPMap.domain
eq1 : ↑x = ↑{ val := a, property := ha } + r • y
⊢ ↑(fst i x) - a = (r - snd i x) • y
[PROOFSTEP]
change x = a + r • y at eq1
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
x : { x // x ∈ supExtensionOfMaxSingleton i f y }
r : R
a : N
ha : a ∈ (extensionOfMax i f).toLinearPMap.domain
eq1 : ↑x = a + r • y
⊢ ↑(fst i x) - a = (r - snd i x) • y
[PROOFSTEP]
rwa [ExtensionOfMaxAdjoin.eqn, ← sub_eq_zero, ← sub_sub_sub_eq, sub_eq_zero, ← sub_smul] at eq1
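-- Comparing the canonical decomposition ↑x = ↑(fst i x) + snd i x • y
-- (`ExtensionOfMaxAdjoin.eqn`) with the given ↑x = a + r • y and cancelling
-- gives ↑(fst i x) - a = (r - snd i x) • y, which is the statement eq2.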
[GOAL]
case mk
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
x : { x // x ∈ supExtensionOfMaxSingleton i f y }
r : R
a : N
ha : a ∈ (extensionOfMax i f).toLinearPMap.domain
eq1 : ↑x = ↑{ val := a, property := ha } + r • y
eq2 : ↑(fst i x) - a = (r - snd i x) • y
⊢ extensionToFun i f h x = ↑(extensionOfMax i f).toLinearPMap { val := a, property := ha } + ↑(extendIdealTo i f h y) r
[PROOFSTEP]
have eq3 :=
ExtensionOfMaxAdjoin.extendIdealTo_eq i f h (r - ExtensionOfMaxAdjoin.snd i x)
(by rw [← eq2]; exact Submodule.sub_mem _ (ExtensionOfMaxAdjoin.fst i x).2 ha)
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
x : { x // x ∈ supExtensionOfMaxSingleton i f y }
r : R
a : N
ha : a ∈ (extensionOfMax i f).toLinearPMap.domain
eq1 : ↑x = ↑{ val := a, property := ha } + r • y
eq2 : ↑(fst i x) - a = (r - snd i x) • y
⊢ (r - snd i x) • ?m.219882 ∈ (extensionOfMax i f).toLinearPMap.domain
[PROOFSTEP]
rw [← eq2]
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
x : { x // x ∈ supExtensionOfMaxSingleton i f y }
r : R
a : N
ha : a ∈ (extensionOfMax i f).toLinearPMap.domain
eq1 : ↑x = ↑{ val := a, property := ha } + r • y
eq2 : ↑(fst i x) - a = (r - snd i x) • y
⊢ ↑(fst i x) - a ∈ (extensionOfMax i f).toLinearPMap.domain
[PROOFSTEP]
exact Submodule.sub_mem _ (ExtensionOfMaxAdjoin.fst i x).2 ha
[GOAL]
case mk
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
x : { x // x ∈ supExtensionOfMaxSingleton i f y }
r : R
a : N
ha : a ∈ (extensionOfMax i f).toLinearPMap.domain
eq1 : ↑x = ↑{ val := a, property := ha } + r • y
eq2 : ↑(fst i x) - a = (r - snd i x) • y
eq3 :
↑(extendIdealTo i f h y) (r - snd i x) =
↑(extensionOfMax i f).toLinearPMap
{ val := (r - snd i x) • y, property := (_ : (r - snd i x) • y ∈ (extensionOfMax i f).toLinearPMap.domain) }
⊢ extensionToFun i f h x = ↑(extensionOfMax i f).toLinearPMap { val := a, property := ha } + ↑(extendIdealTo i f h y) r
[PROOFSTEP]
simp only [map_sub, sub_smul, sub_eq_iff_eq_add] at eq3
[GOAL]
case mk
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
x : { x // x ∈ supExtensionOfMaxSingleton i f y }
r : R
a : N
ha : a ∈ (extensionOfMax i f).toLinearPMap.domain
eq1 : ↑x = ↑{ val := a, property := ha } + r • y
eq2 : ↑(fst i x) - a = (r - snd i x) • y
eq3 :
↑(extendIdealTo i f h y) r =
↑(extensionOfMax i f).toLinearPMap
{ val := r • y - snd i x • y,
property := (_ : (fun x => x ∈ (extensionOfMax i f).toLinearPMap.domain) (r • y - snd i x • y)) } +
↑(extendIdealTo i f h y) (snd i x)
⊢ extensionToFun i f h x = ↑(extensionOfMax i f).toLinearPMap { val := a, property := ha } + ↑(extendIdealTo i f h y) r
[PROOFSTEP]
unfold ExtensionOfMaxAdjoin.extensionToFun
[GOAL]
case mk
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
x : { x // x ∈ supExtensionOfMaxSingleton i f y }
r : R
a : N
ha : a ∈ (extensionOfMax i f).toLinearPMap.domain
eq1 : ↑x = ↑{ val := a, property := ha } + r • y
eq2 : ↑(fst i x) - a = (r - snd i x) • y
eq3 :
↑(extendIdealTo i f h y) r =
↑(extensionOfMax i f).toLinearPMap
{ val := r • y - snd i x • y,
property := (_ : (fun x => x ∈ (extensionOfMax i f).toLinearPMap.domain) (r • y - snd i x • y)) } +
↑(extendIdealTo i f h y) (snd i x)
⊢ ↑(extensionOfMax i f).toLinearPMap (fst i x) + ↑(extendIdealTo i f h y) (snd i x) =
↑(extensionOfMax i f).toLinearPMap { val := a, property := ha } + ↑(extendIdealTo i f h y) r
[PROOFSTEP]
rw [eq3, ← add_assoc, ← (extensionOfMax i f).toLinearPMap.map_add, AddMemClass.mk_add_mk]
[GOAL]
case mk
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
x : { x // x ∈ supExtensionOfMaxSingleton i f y }
r : R
a : N
ha : a ∈ (extensionOfMax i f).toLinearPMap.domain
eq1 : ↑x = ↑{ val := a, property := ha } + r • y
eq2 : ↑(fst i x) - a = (r - snd i x) • y
eq3 :
↑(extendIdealTo i f h y) r =
↑(extensionOfMax i f).toLinearPMap
{ val := r • y - snd i x • y,
property := (_ : (fun x => x ∈ (extensionOfMax i f).toLinearPMap.domain) (r • y - snd i x • y)) } +
↑(extendIdealTo i f h y) (snd i x)
⊢ ↑(extensionOfMax i f).toLinearPMap (fst i x) + ↑(extendIdealTo i f h y) (snd i x) =
↑(extensionOfMax i f).toLinearPMap
{ val := a + (r • y - snd i x • y),
property := (_ : a + (r • y - snd i x • y) ∈ (extensionOfMax i f).toLinearPMap.domain) } +
↑(extendIdealTo i f h y) (snd i x)
[PROOFSTEP]
congr
[GOAL]
case mk.e_a.e_a
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
x : { x // x ∈ supExtensionOfMaxSingleton i f y }
r : R
a : N
ha : a ∈ (extensionOfMax i f).toLinearPMap.domain
eq1 : ↑x = ↑{ val := a, property := ha } + r • y
eq2 : ↑(fst i x) - a = (r - snd i x) • y
eq3 :
↑(extendIdealTo i f h y) r =
↑(extensionOfMax i f).toLinearPMap
{ val := r • y - snd i x • y,
property := (_ : (fun x => x ∈ (extensionOfMax i f).toLinearPMap.domain) (r • y - snd i x • y)) } +
↑(extendIdealTo i f h y) (snd i x)
⊢ fst i x =
{ val := a + (r • y - snd i x • y),
property := (_ : a + (r • y - snd i x • y) ∈ (extensionOfMax i f).toLinearPMap.domain) }
[PROOFSTEP]
ext
[GOAL]
case mk.e_a.e_a.a
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
x : { x // x ∈ supExtensionOfMaxSingleton i f y }
r : R
a : N
ha : a ∈ (extensionOfMax i f).toLinearPMap.domain
eq1 : ↑x = ↑{ val := a, property := ha } + r • y
eq2 : ↑(fst i x) - a = (r - snd i x) • y
eq3 :
↑(extendIdealTo i f h y) r =
↑(extensionOfMax i f).toLinearPMap
{ val := r • y - snd i x • y,
property := (_ : (fun x => x ∈ (extensionOfMax i f).toLinearPMap.domain) (r • y - snd i x • y)) } +
↑(extendIdealTo i f h y) (snd i x)
⊢ ↑(fst i x) =
↑{ val := a + (r • y - snd i x • y),
property := (_ : a + (r • y - snd i x • y) ∈ (extensionOfMax i f).toLinearPMap.domain) }
[PROOFSTEP]
dsimp
[GOAL]
case mk.e_a.e_a.a
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
x : { x // x ∈ supExtensionOfMaxSingleton i f y }
r : R
a : N
ha : a ∈ (extensionOfMax i f).toLinearPMap.domain
eq1 : ↑x = ↑{ val := a, property := ha } + r • y
eq2 : ↑(fst i x) - a = (r - snd i x) • y
eq3 :
↑(extendIdealTo i f h y) r =
↑(extensionOfMax i f).toLinearPMap
{ val := r • y - snd i x • y,
property := (_ : (fun x => x ∈ (extensionOfMax i f).toLinearPMap.domain) (r • y - snd i x • y)) } +
↑(extendIdealTo i f h y) (snd i x)
⊢ ↑(fst i x) = a + (r • y - snd i x • y)
[PROOFSTEP]
rw [Subtype.coe_mk, add_sub, ← eq1]
[GOAL]
case mk.e_a.e_a.a
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
x : { x // x ∈ supExtensionOfMaxSingleton i f y }
r : R
a : N
ha : a ∈ (extensionOfMax i f).toLinearPMap.domain
eq1 : ↑x = ↑{ val := a, property := ha } + r • y
eq2 : ↑(fst i x) - a = (r - snd i x) • y
eq3 :
↑(extendIdealTo i f h y) r =
↑(extensionOfMax i f).toLinearPMap
{ val := r • y - snd i x • y,
property := (_ : (fun x => x ∈ (extensionOfMax i f).toLinearPMap.domain) (r • y - snd i x • y)) } +
↑(extendIdealTo i f h y) (snd i x)
⊢ ↑(fst i x) = ↑x - snd i x • y
[PROOFSTEP]
exact eq_sub_of_add_eq (ExtensionOfMaxAdjoin.eqn i x).symm
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
a b : { x // x ∈ supExtensionOfMaxSingleton i f y }
⊢ ExtensionOfMaxAdjoin.extensionToFun i f h (a + b) =
ExtensionOfMaxAdjoin.extensionToFun i f h a + ExtensionOfMaxAdjoin.extensionToFun i f h b
[PROOFSTEP]
have eq1 :
↑a + ↑b =
↑(ExtensionOfMaxAdjoin.fst i a + ExtensionOfMaxAdjoin.fst i b) +
(ExtensionOfMaxAdjoin.snd i a + ExtensionOfMaxAdjoin.snd i b) • y :=
by
rw [ExtensionOfMaxAdjoin.eqn, ExtensionOfMaxAdjoin.eqn, add_smul, Submodule.coe_add]
ac_rfl
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
a b : { x // x ∈ supExtensionOfMaxSingleton i f y }
⊢ ↑a + ↑b =
↑(ExtensionOfMaxAdjoin.fst i a + ExtensionOfMaxAdjoin.fst i b) +
(ExtensionOfMaxAdjoin.snd i a + ExtensionOfMaxAdjoin.snd i b) • y
[PROOFSTEP]
rw [ExtensionOfMaxAdjoin.eqn, ExtensionOfMaxAdjoin.eqn, add_smul, Submodule.coe_add]
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
a b : { x // x ∈ supExtensionOfMaxSingleton i f y }
⊢ ↑(ExtensionOfMaxAdjoin.fst i a) + ExtensionOfMaxAdjoin.snd i a • y +
(↑(ExtensionOfMaxAdjoin.fst i b) + ExtensionOfMaxAdjoin.snd i b • y) =
↑(ExtensionOfMaxAdjoin.fst i a) + ↑(ExtensionOfMaxAdjoin.fst i b) +
(ExtensionOfMaxAdjoin.snd i a • y + ExtensionOfMaxAdjoin.snd i b • y)
[PROOFSTEP]
ac_rfl
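-- Additivity of the decomposition: applying `eqn` to a and b and regrouping
-- with `add_smul` and commutativity (`ac_rfl`) writes ↑a + ↑b in the canonical
-- shape ↑(fst i a + fst i b) + (snd i a + snd i b) • y, so the
-- well-definedness lemma computes `extensionToFun` on a + b.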
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
a b : { x // x ∈ supExtensionOfMaxSingleton i f y }
eq1 :
↑a + ↑b =
↑(ExtensionOfMaxAdjoin.fst i a + ExtensionOfMaxAdjoin.fst i b) +
(ExtensionOfMaxAdjoin.snd i a + ExtensionOfMaxAdjoin.snd i b) • y
⊢ ExtensionOfMaxAdjoin.extensionToFun i f h (a + b) =
ExtensionOfMaxAdjoin.extensionToFun i f h a + ExtensionOfMaxAdjoin.extensionToFun i f h b
[PROOFSTEP]
rw [ExtensionOfMaxAdjoin.extensionToFun_wd (y := y) i f h (a + b) _ _ eq1, LinearPMap.map_add, map_add]
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
a b : { x // x ∈ supExtensionOfMaxSingleton i f y }
eq1 :
↑a + ↑b =
↑(ExtensionOfMaxAdjoin.fst i a + ExtensionOfMaxAdjoin.fst i b) +
(ExtensionOfMaxAdjoin.snd i a + ExtensionOfMaxAdjoin.snd i b) • y
⊢ ↑(extensionOfMax i f).toLinearPMap (ExtensionOfMaxAdjoin.fst i a) +
↑(extensionOfMax i f).toLinearPMap (ExtensionOfMaxAdjoin.fst i b) +
(↑(ExtensionOfMaxAdjoin.extendIdealTo i f h y) (ExtensionOfMaxAdjoin.snd i a) +
↑(ExtensionOfMaxAdjoin.extendIdealTo i f h y) (ExtensionOfMaxAdjoin.snd i b)) =
ExtensionOfMaxAdjoin.extensionToFun i f h a + ExtensionOfMaxAdjoin.extensionToFun i f h b
[PROOFSTEP]
unfold ExtensionOfMaxAdjoin.extensionToFun
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
a b : { x // x ∈ supExtensionOfMaxSingleton i f y }
eq1 :
↑a + ↑b =
↑(ExtensionOfMaxAdjoin.fst i a + ExtensionOfMaxAdjoin.fst i b) +
(ExtensionOfMaxAdjoin.snd i a + ExtensionOfMaxAdjoin.snd i b) • y
⊢ ↑(extensionOfMax i f).toLinearPMap (ExtensionOfMaxAdjoin.fst i a) +
↑(extensionOfMax i f).toLinearPMap (ExtensionOfMaxAdjoin.fst i b) +
(↑(ExtensionOfMaxAdjoin.extendIdealTo i f h y) (ExtensionOfMaxAdjoin.snd i a) +
↑(ExtensionOfMaxAdjoin.extendIdealTo i f h y) (ExtensionOfMaxAdjoin.snd i b)) =
↑(extensionOfMax i f).toLinearPMap (ExtensionOfMaxAdjoin.fst i a) +
↑(ExtensionOfMaxAdjoin.extendIdealTo i f h y) (ExtensionOfMaxAdjoin.snd i a) +
(↑(extensionOfMax i f).toLinearPMap (ExtensionOfMaxAdjoin.fst i b) +
↑(ExtensionOfMaxAdjoin.extendIdealTo i f h y) (ExtensionOfMaxAdjoin.snd i b))
[PROOFSTEP]
abel
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
a b : { x // x ∈ supExtensionOfMaxSingleton i f y }
eq1 :
↑a + ↑b =
↑(ExtensionOfMaxAdjoin.fst i a + ExtensionOfMaxAdjoin.fst i b) +
(ExtensionOfMaxAdjoin.snd i a + ExtensionOfMaxAdjoin.snd i b) • y
⊢ ↑(extensionOfMax i f).toLinearPMap (ExtensionOfMaxAdjoin.fst i a) +
↑(extensionOfMax i f).toLinearPMap (ExtensionOfMaxAdjoin.fst i b) +
(↑(ExtensionOfMaxAdjoin.extendIdealTo i f h y) (ExtensionOfMaxAdjoin.snd i a) +
↑(ExtensionOfMaxAdjoin.extendIdealTo i f h y) (ExtensionOfMaxAdjoin.snd i b)) =
↑(extensionOfMax i f).toLinearPMap (ExtensionOfMaxAdjoin.fst i a) +
↑(ExtensionOfMaxAdjoin.extendIdealTo i f h y) (ExtensionOfMaxAdjoin.snd i a) +
(↑(extensionOfMax i f).toLinearPMap (ExtensionOfMaxAdjoin.fst i b) +
↑(ExtensionOfMaxAdjoin.extendIdealTo i f h y) (ExtensionOfMaxAdjoin.snd i b))
[PROOFSTEP]
abel
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
r : R
a : { x // x ∈ supExtensionOfMaxSingleton i f y }
⊢ AddHom.toFun
{ toFun := ExtensionOfMaxAdjoin.extensionToFun i f h,
map_add' :=
(_ :
∀ (a b : { x // x ∈ supExtensionOfMaxSingleton i f y }),
ExtensionOfMaxAdjoin.extensionToFun i f h (a + b) =
ExtensionOfMaxAdjoin.extensionToFun i f h a + ExtensionOfMaxAdjoin.extensionToFun i f h b) }
(r • a) =
↑(RingHom.id R) r •
AddHom.toFun
{ toFun := ExtensionOfMaxAdjoin.extensionToFun i f h,
map_add' :=
(_ :
∀ (a b : { x // x ∈ supExtensionOfMaxSingleton i f y }),
ExtensionOfMaxAdjoin.extensionToFun i f h (a + b) =
ExtensionOfMaxAdjoin.extensionToFun i f h a + ExtensionOfMaxAdjoin.extensionToFun i f h b) }
a
[PROOFSTEP]
dsimp
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
r : R
a : { x // x ∈ supExtensionOfMaxSingleton i f y }
⊢ ExtensionOfMaxAdjoin.extensionToFun i f h (r • a) = r • ExtensionOfMaxAdjoin.extensionToFun i f h a
[PROOFSTEP]
have eq1 : r • (a : N) = ↑(r • ExtensionOfMaxAdjoin.fst i a) + (r • ExtensionOfMaxAdjoin.snd i a) • y :=
by
rw [ExtensionOfMaxAdjoin.eqn, smul_add, smul_eq_mul, mul_smul]
rfl
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
r : R
a : { x // x ∈ supExtensionOfMaxSingleton i f y }
⊢ r • ↑a = ↑(r • ExtensionOfMaxAdjoin.fst i a) + (r • ExtensionOfMaxAdjoin.snd i a) • y
[PROOFSTEP]
rw [ExtensionOfMaxAdjoin.eqn, smul_add, smul_eq_mul, mul_smul]
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
r : R
a : { x // x ∈ supExtensionOfMaxSingleton i f y }
⊢ r • ↑(ExtensionOfMaxAdjoin.fst i a) + r • ExtensionOfMaxAdjoin.snd i a • y =
↑(r • ExtensionOfMaxAdjoin.fst i a) + r • ExtensionOfMaxAdjoin.snd i a • y
[PROOFSTEP]
rfl
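-- Scaling the decomposition: r • ↑a = ↑(r • fst i a) + (r • snd i a) • y,
-- where `smul_eq_mul` and `mul_smul` regroup (r * snd i a) • y as
-- r • (snd i a • y), and the final `rfl` absorbs the subtype coercion.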
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
r : R
a : { x // x ∈ supExtensionOfMaxSingleton i f y }
eq1 : r • ↑a = ↑(r • ExtensionOfMaxAdjoin.fst i a) + (r • ExtensionOfMaxAdjoin.snd i a) • y
⊢ ExtensionOfMaxAdjoin.extensionToFun i f h (r • a) = r • ExtensionOfMaxAdjoin.extensionToFun i f h a
[PROOFSTEP]
rw [ExtensionOfMaxAdjoin.extensionToFun_wd i f h (r • a) _ _ eq1, LinearMap.map_smul, LinearPMap.map_smul, ← smul_add]
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
r : R
a : { x // x ∈ supExtensionOfMaxSingleton i f y }
eq1 : r • ↑a = ↑(r • ExtensionOfMaxAdjoin.fst i a) + (r • ExtensionOfMaxAdjoin.snd i a) • y
⊢ r •
(↑(extensionOfMax i f).toLinearPMap (ExtensionOfMaxAdjoin.fst i a) +
↑(ExtensionOfMaxAdjoin.extendIdealTo i f h y) (ExtensionOfMaxAdjoin.snd i a)) =
r • ExtensionOfMaxAdjoin.extensionToFun i f h a
[PROOFSTEP]
congr
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
m : M
⊢ ↑f m =
↑{ domain := supExtensionOfMaxSingleton i f y,
toFun :=
{
toAddHom :=
{ toFun := ExtensionOfMaxAdjoin.extensionToFun i f h,
map_add' :=
(_ :
∀ (a b : { x // x ∈ supExtensionOfMaxSingleton i f y }),
ExtensionOfMaxAdjoin.extensionToFun i f h (a + b) =
ExtensionOfMaxAdjoin.extensionToFun i f h a + ExtensionOfMaxAdjoin.extensionToFun i f h b) },
map_smul' :=
(_ :
∀ (r : R) (a : { x // x ∈ supExtensionOfMaxSingleton i f y }),
AddHom.toFun
{ toFun := ExtensionOfMaxAdjoin.extensionToFun i f h,
map_add' :=
(_ :
∀ (a b : { x // x ∈ supExtensionOfMaxSingleton i f y }),
ExtensionOfMaxAdjoin.extensionToFun i f h (a + b) =
ExtensionOfMaxAdjoin.extensionToFun i f h a +
ExtensionOfMaxAdjoin.extensionToFun i f h b) }
(r • a) =
↑(RingHom.id R) r •
AddHom.toFun
{ toFun := ExtensionOfMaxAdjoin.extensionToFun i f h,
map_add' :=
(_ :
∀ (a b : { x // x ∈ supExtensionOfMaxSingleton i f y }),
ExtensionOfMaxAdjoin.extensionToFun i f h (a + b) =
ExtensionOfMaxAdjoin.extensionToFun i f h a +
ExtensionOfMaxAdjoin.extensionToFun i f h b) }
a) } }
{ val := ↑i m,
property :=
(_ :
↑i m ∈
{ domain := supExtensionOfMaxSingleton i f y,
toFun :=
{
toAddHom :=
{ toFun := ExtensionOfMaxAdjoin.extensionToFun i f h,
map_add' :=
(_ :
∀ (a b : { x // x ∈ supExtensionOfMaxSingleton i f y }),
ExtensionOfMaxAdjoin.extensionToFun i f h (a + b) =
ExtensionOfMaxAdjoin.extensionToFun i f h a +
ExtensionOfMaxAdjoin.extensionToFun i f h b) },
map_smul' :=
(_ :
∀ (r : R) (a : { x // x ∈ supExtensionOfMaxSingleton i f y }),
AddHom.toFun
{ toFun := ExtensionOfMaxAdjoin.extensionToFun i f h,
map_add' :=
(_ :
∀ (a b : { x // x ∈ supExtensionOfMaxSingleton i f y }),
ExtensionOfMaxAdjoin.extensionToFun i f h (a + b) =
ExtensionOfMaxAdjoin.extensionToFun i f h a +
ExtensionOfMaxAdjoin.extensionToFun i f h b) }
(r • a) =
↑(RingHom.id R) r •
AddHom.toFun
{ toFun := ExtensionOfMaxAdjoin.extensionToFun i f h,
map_add' :=
(_ :
∀ (a b : { x // x ∈ supExtensionOfMaxSingleton i f y }),
ExtensionOfMaxAdjoin.extensionToFun i f h (a + b) =
ExtensionOfMaxAdjoin.extensionToFun i f h a +
ExtensionOfMaxAdjoin.extensionToFun i f h b) }
a) } }.domain) }
[PROOFSTEP]
dsimp
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
m : M
⊢ ↑f m =
ExtensionOfMaxAdjoin.extensionToFun i f h { val := ↑i m, property := (_ : ↑i m ∈ supExtensionOfMaxSingleton i f y) }
[PROOFSTEP]
rw [(extensionOfMax i f).is_extension, ExtensionOfMaxAdjoin.extensionToFun_wd i f h _ ⟨i m, _⟩ 0 _, map_zero, add_zero]
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
m : M
⊢ ↑{ val := ↑i m, property := (_ : ↑i m ∈ supExtensionOfMaxSingleton i f y) } =
↑{ val := ↑i m, property := (_ : ↑i m ∈ (extensionOfMax i f).toLinearPMap.domain) } + 0 • y
[PROOFSTEP]
simp
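-- The new extension still restricts to f along i: the element i m decomposes
-- trivially as i m + 0 • y, so `extensionToFun` agrees at i m with the
-- maximal extension, which in turn equals f m by `is_extension`.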
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
x : { x // x ∈ (extensionOfMax i f).toLinearPMap.domain }
x' : { x // x ∈ (extensionOfMaxAdjoin i f h y).toLinearPMap.domain }
EQ : ↑x = ↑x'
⊢ ↑(extensionOfMax i f).toLinearPMap x = ↑(extensionOfMaxAdjoin i f h y).toLinearPMap x'
[PROOFSTEP]
symm
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
x : { x // x ∈ (extensionOfMax i f).toLinearPMap.domain }
x' : { x // x ∈ (extensionOfMaxAdjoin i f h y).toLinearPMap.domain }
EQ : ↑x = ↑x'
⊢ ↑(extensionOfMaxAdjoin i f h y).toLinearPMap x' = ↑(extensionOfMax i f).toLinearPMap x
[PROOFSTEP]
change ExtensionOfMaxAdjoin.extensionToFun i f h _ = _
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
x : { x // x ∈ (extensionOfMax i f).toLinearPMap.domain }
x' : { x // x ∈ (extensionOfMaxAdjoin i f h y).toLinearPMap.domain }
EQ : ↑x = ↑x'
⊢ ExtensionOfMaxAdjoin.extensionToFun i f h x' = ↑(extensionOfMax i f).toLinearPMap x
[PROOFSTEP]
rw [ExtensionOfMaxAdjoin.extensionToFun_wd i f h x' x 0 (by simp [EQ]), map_zero, add_zero]
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
x : { x // x ∈ (extensionOfMax i f).toLinearPMap.domain }
x' : { x // x ∈ (extensionOfMaxAdjoin i f h y).toLinearPMap.domain }
EQ : ↑x = ↑x'
⊢ ↑x' = ↑x + 0 • y
[PROOFSTEP]
simp [EQ]
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
⊢ (extensionOfMax i f).toLinearPMap.domain = ⊤
[PROOFSTEP]
refine' Submodule.eq_top_iff'.mpr fun y => _
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
⊢ y ∈ (extensionOfMax i f).toLinearPMap.domain
[PROOFSTEP]
dsimp
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
⊢ y ∈ (extensionOfMax i f).toLinearPMap.domain
[PROOFSTEP]
rw [← extensionOfMax_is_max i f _ (extensionOfMax_le i f h), extensionOfMaxAdjoin, Submodule.mem_sup]
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
⊢ ∃ y_1, y_1 ∈ (extensionOfMax i f).toLinearPMap.domain ∧ ∃ z, z ∈ Submodule.span R {?y} ∧ y_1 + z = y
case y
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
⊢ N
case y
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i : M →ₗ[R] N
f : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i)
h : Baer R Q
y : N
⊢ N
[PROOFSTEP]
exact ⟨0, Submodule.zero_mem _, y, Submodule.mem_span_singleton_self _, zero_add _⟩
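-- Summary of this step: by `extensionOfMax_is_max`, the maximal extension's
-- domain coincides with that of the adjoined extension, i.e. the sup of the
-- old domain with span R {y}; the witness y = 0 + y then puts every y of N in
-- the domain, so the domain is ⊤ and the extension is defined on all of N.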
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i✝ : M →ₗ[R] N
f✝ : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i✝)
h : Baer R Q
X Y : TypeMax
ins1 : AddCommGroup X
ins2 : AddCommGroup Y
ins3 : Module R X
ins4 : Module R Y
i : X →ₗ[R] Y
hi : Function.Injective ↑i
f : X →ₗ[R] Q
this : Fact (Function.Injective ↑i)
x y : Y
⊢ (fun y =>
↑(extensionOfMax i f).toLinearPMap { val := y, property := (_ : y ∈ (extensionOfMax i f).toLinearPMap.domain) })
(x + y) =
(fun y =>
↑(extensionOfMax i f).toLinearPMap
{ val := y, property := (_ : y ∈ (extensionOfMax i f).toLinearPMap.domain) })
x +
(fun y =>
↑(extensionOfMax i f).toLinearPMap
{ val := y, property := (_ : y ∈ (extensionOfMax i f).toLinearPMap.domain) })
y
[PROOFSTEP]
rw [← LinearPMap.map_add]
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i✝ : M →ₗ[R] N
f✝ : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i✝)
h : Baer R Q
X Y : TypeMax
ins1 : AddCommGroup X
ins2 : AddCommGroup Y
ins3 : Module R X
ins4 : Module R Y
i : X →ₗ[R] Y
hi : Function.Injective ↑i
f : X →ₗ[R] Q
this : Fact (Function.Injective ↑i)
x y : Y
⊢ (fun y =>
↑(extensionOfMax i f).toLinearPMap { val := y, property := (_ : y ∈ (extensionOfMax i f).toLinearPMap.domain) })
(x + y) =
↑(extensionOfMax i f).toLinearPMap
({ val := x, property := (_ : x ∈ (extensionOfMax i f).toLinearPMap.domain) } +
{ val := y, property := (_ : y ∈ (extensionOfMax i f).toLinearPMap.domain) })
[PROOFSTEP]
congr
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i✝ : M →ₗ[R] N
f✝ : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i✝)
h : Baer R Q
X Y : TypeMax
ins1 : AddCommGroup X
ins2 : AddCommGroup Y
ins3 : Module R X
ins4 : Module R Y
i : X →ₗ[R] Y
hi : Function.Injective ↑i
f : X →ₗ[R] Q
this : Fact (Function.Injective ↑i)
r : R
x : Y
⊢ AddHom.toFun
{
toFun := fun y =>
↑(extensionOfMax i f).toLinearPMap
{ val := y, property := (_ : y ∈ (extensionOfMax i f).toLinearPMap.domain) },
map_add' :=
(_ :
∀ (x y : Y),
(fun y =>
↑(extensionOfMax i f).toLinearPMap
{ val := y, property := (_ : y ∈ (extensionOfMax i f).toLinearPMap.domain) })
(x + y) =
(fun y =>
↑(extensionOfMax i f).toLinearPMap
{ val := y, property := (_ : y ∈ (extensionOfMax i f).toLinearPMap.domain) })
x +
(fun y =>
↑(extensionOfMax i f).toLinearPMap
{ val := y, property := (_ : y ∈ (extensionOfMax i f).toLinearPMap.domain) })
y) }
(r • x) =
↑(RingHom.id R) r •
AddHom.toFun
{
toFun := fun y =>
↑(extensionOfMax i f).toLinearPMap
{ val := y, property := (_ : y ∈ (extensionOfMax i f).toLinearPMap.domain) },
map_add' :=
(_ :
∀ (x y : Y),
(fun y =>
↑(extensionOfMax i f).toLinearPMap
{ val := y, property := (_ : y ∈ (extensionOfMax i f).toLinearPMap.domain) })
(x + y) =
(fun y =>
↑(extensionOfMax i f).toLinearPMap
{ val := y, property := (_ : y ∈ (extensionOfMax i f).toLinearPMap.domain) })
x +
(fun y =>
↑(extensionOfMax i f).toLinearPMap
{ val := y, property := (_ : y ∈ (extensionOfMax i f).toLinearPMap.domain) })
y) }
x
[PROOFSTEP]
rw [← LinearPMap.map_smul]
-- Porting note: used to be congr
[GOAL]
R : Type u
inst✝⁷ : Ring R
Q : TypeMax
inst✝⁶ : AddCommGroup Q
inst✝⁵ : Module R Q
M N : Type (max u v)
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
i✝ : M →ₗ[R] N
f✝ : M →ₗ[R] Q
inst✝ : Fact (Function.Injective ↑i✝)
h : Baer R Q
X Y : TypeMax
ins1 : AddCommGroup X
ins2 : AddCommGroup Y
ins3 : Module R X
ins4 : Module R Y
i : X →ₗ[R] Y
hi : Function.Injective ↑i
f : X →ₗ[R] Q
this : Fact (Function.Injective ↑i)
r : R
x : Y
⊢ AddHom.toFun
{
toFun := fun y =>
↑(extensionOfMax i f).toLinearPMap
{ val := y, property := (_ : y ∈ (extensionOfMax i f).toLinearPMap.domain) },
map_add' :=
(_ :
∀ (x y : Y),
(fun y =>
↑(extensionOfMax i f).toLinearPMap
{ val := y, property := (_ : y ∈ (extensionOfMax i f).toLinearPMap.domain) })
(x + y) =
(fun y =>
↑(extensionOfMax i f).toLinearPMap
{ val := y, property := (_ : y ∈ (extensionOfMax i f).toLinearPMap.domain) })
x +
(fun y =>
↑(extensionOfMax i f).toLinearPMap
{ val := y, property := (_ : y ∈ (extensionOfMax i f).toLinearPMap.domain) })
y) }
(r • x) =
↑(extensionOfMax i f).toLinearPMap
(↑(RingHom.id R) r • { val := x, property := (_ : x ∈ (extensionOfMax i f).toLinearPMap.domain) })
[PROOFSTEP]
dsimp
|
[STATEMENT]
lemma cf_bcomp_Hom_ObjMap_vsv: "vsv (Hom\<^sub>O\<^sub>.\<^sub>C\<^bsub>\<alpha>\<^esub>\<CC>(\<FF>-,\<GG>-)\<lparr>ObjMap\<rparr>)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. vsv (Hom\<^sub>O\<^sub>.\<^sub>C\<^bsub>\<alpha>\<^esub>\<CC>(\<FF>-,\<GG>-)\<lparr>ObjMap\<rparr>)
[PROOF STEP]
unfolding cf_bcomp_Hom_def
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. vsv (cf_cn_cov_bcomp Hom\<^sub>O\<^sub>.\<^sub>C\<^bsub>\<alpha>\<^esub>\<CC>(-,-) \<FF> \<GG>\<lparr>ObjMap\<rparr>)
[PROOF STEP]
by (rule cf_cn_cov_bcomp_ObjMap_vsv)
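(* Single-valuedness is inherited directly: unfolding cf_bcomp_Hom_def
   exposes the object map of the covariant bicomposition
   cf_cn_cov_bcomp Hom(-,-) F G, for which vsv is already established by
   cf_cn_cov_bcomp_ObjMap_vsv. *)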
|
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s t : Finset α
m n a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
⊢ ∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
[PROOFSTEP]
obtain rfl | m_pos := m.eq_zero_or_pos
[GOAL]
case inl
α : Type u_1
inst✝ : DecidableEq α
s t : Finset α
n a b : ℕ
P : Finpartition s
hs : a * 0 + b * (0 + 1) = card s
⊢ ∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = 0 ∨ card x = 0 + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ 0) ∧
card (filter (fun i => card i = 0 + 1) Q.parts) = b
[PROOFSTEP]
refine' ⟨⊥, by simp, _, by simpa using hs.symm⟩
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s t : Finset α
n a b : ℕ
P : Finpartition s
hs : a * 0 + b * (0 + 1) = card s
⊢ ∀ (x : Finset α), x ∈ ⊥.parts → card x = 0 ∨ card x = 0 + 1
[PROOFSTEP]
simp
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s t : Finset α
n a b : ℕ
P : Finpartition s
hs : a * 0 + b * (0 + 1) = card s
⊢ card (filter (fun i => card i = 0 + 1) ⊥.parts) = b
[PROOFSTEP]
simpa using hs.symm
[GOAL]
case inl
α : Type u_1
inst✝ : DecidableEq α
s t : Finset α
n a b : ℕ
P : Finpartition s
hs : a * 0 + b * (0 + 1) = card s
⊢ ∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) ⊥.parts) id) ≤ 0
[PROOFSTEP]
simp only [le_zero_iff, card_eq_zero, mem_biUnion, exists_prop, mem_filter, id.def, and_assoc,
sdiff_eq_empty_iff_subset, subset_iff]
[GOAL]
case inl
α : Type u_1
inst✝ : DecidableEq α
s t : Finset α
n a b : ℕ
P : Finpartition s
hs : a * 0 + b * (0 + 1) = card s
⊢ ∀ (x : Finset α), x ∈ P.parts → ∀ ⦃x_1 : α⦄, x_1 ∈ x → ∃ a, a ∈ ⊥.parts ∧ (∀ ⦃x_2 : α⦄, x_2 ∈ a → x_2 ∈ x) ∧ x_1 ∈ a
[PROOFSTEP]
exact fun x hx a ha =>
⟨{ a }, mem_map_of_mem _ (P.le hx ha), singleton_subset_iff.2 ha, mem_singleton_self _⟩
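-- In the base case m = 0 the bottom partition works: its parts are the
-- singletons of s, each of card 1 = 0 + 1; hs forces b = card s, the number
-- of such parts; and the sdiff condition holds because every element of a
-- part of P is covered by its own singleton, as the witness above shows.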
-- Prove the case `m > 0` by strong induction on `s`
[GOAL]
case inr
α : Type u_1
inst✝ : DecidableEq α
s t : Finset α
m n a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
m_pos : m > 0
⊢ ∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
[PROOFSTEP]
induction' s using Finset.strongInduction with s ih generalizing a b
[GOAL]
case inr.H
α : Type u_1
inst✝ : DecidableEq α
s✝ t : Finset α
m n a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
⊢ ∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
[PROOFSTEP]
by_cases hab : a = 0 ∧ b = 0
[GOAL]
case pos
α : Type u_1
inst✝ : DecidableEq α
s✝ t : Finset α
m n a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : a = 0 ∧ b = 0
⊢ ∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
[PROOFSTEP]
simp only [hab.1, hab.2, add_zero, zero_mul, eq_comm, card_eq_zero, Finset.bot_eq_empty] at hs
[GOAL]
case pos
α : Type u_1
inst✝ : DecidableEq α
s✝ t : Finset α
m n a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hab : a = 0 ∧ b = 0
hs : s = ∅
⊢ ∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
[PROOFSTEP]
subst hs
[GOAL]
case pos
α : Type u_1
inst✝ : DecidableEq α
s t : Finset α
m n a✝ b✝ : ℕ
P✝ : Finpartition s
hs : a✝ * m + b✝ * (m + 1) = card s
m_pos : m > 0
a b : ℕ
hab : a = 0 ∧ b = 0
ih :
∀ (t : Finset α),
t ⊂ ∅ →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
P : Finpartition ∅
⊢ ∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
[PROOFSTEP]
have : P = Finpartition.empty _ := Unique.eq_default (α := Finpartition ⊥) P
[GOAL]
case pos
α : Type u_1
inst✝ : DecidableEq α
s t : Finset α
m n a✝ b✝ : ℕ
P✝ : Finpartition s
hs : a✝ * m + b✝ * (m + 1) = card s
m_pos : m > 0
a b : ℕ
hab : a = 0 ∧ b = 0
ih :
∀ (t : Finset α),
t ⊂ ∅ →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
P : Finpartition ∅
this : P = Finpartition.empty (Finset α)
⊢ ∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
[PROOFSTEP]
exact ⟨Finpartition.empty _, by simp, by simp [this], by simp [hab.2]⟩
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s t : Finset α
m n a✝ b✝ : ℕ
P✝ : Finpartition s
hs : a✝ * m + b✝ * (m + 1) = card s
m_pos : m > 0
a b : ℕ
hab : a = 0 ∧ b = 0
ih :
∀ (t : Finset α),
t ⊂ ∅ →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
P : Finpartition ∅
this : P = Finpartition.empty (Finset α)
⊢ ∀ (x : Finset α), x ∈ (Finpartition.empty (Finset α)).parts → card x = m ∨ card x = m + 1
[PROOFSTEP]
simp
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s t : Finset α
m n a✝ b✝ : ℕ
P✝ : Finpartition s
hs : a✝ * m + b✝ * (m + 1) = card s
m_pos : m > 0
a b : ℕ
hab : a = 0 ∧ b = 0
ih :
∀ (t : Finset α),
t ⊂ ∅ →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
P : Finpartition ∅
this : P = Finpartition.empty (Finset α)
⊢ ∀ (x : Finset α),
x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) (Finpartition.empty (Finset α)).parts) id) ≤ m
[PROOFSTEP]
simp [this]
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s t : Finset α
m n a✝ b✝ : ℕ
P✝ : Finpartition s
hs : a✝ * m + b✝ * (m + 1) = card s
m_pos : m > 0
a b : ℕ
hab : a = 0 ∧ b = 0
ih :
∀ (t : Finset α),
t ⊂ ∅ →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
P : Finpartition ∅
this : P = Finpartition.empty (Finset α)
⊢ card (filter (fun i => card i = m + 1) (Finpartition.empty (Finset α)).parts) = b
[PROOFSTEP]
simp [hab.2]
[GOAL]
case neg
α : Type u_1
inst✝ : DecidableEq α
s✝ t : Finset α
m n a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : ¬(a = 0 ∧ b = 0)
⊢ ∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
[PROOFSTEP]
simp_rw [not_and_or, ← Ne.def, ← pos_iff_ne_zero] at hab
[GOAL]
case neg
α : Type u_1
inst✝ : DecidableEq α
s✝ t : Finset α
m n a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
⊢ ∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
[PROOFSTEP]
set n := if 0 < a then m else m + 1 with hn
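-- Choice of chunk size: split off a part of size n = m while parts of size m
-- are still required (0 < a), and otherwise one of size m + 1; the `obtain`
-- below records the bookkeeping facts needed to apply the induction
-- hypothesis to the remaining card s - n elements.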
[GOAL]
case neg
α : Type u_1
inst✝ : DecidableEq α
s✝ t : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
⊢ ∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
[PROOFSTEP]
obtain ⟨hn₀, hn₁, hn₂, hn₃⟩ :
0 < n ∧
n ≤ m + 1 ∧ n ≤ a * m + b * (m + 1) ∧ ite (0 < a) (a - 1) a * m + ite (0 < a) b (b - 1) * (m + 1) = s.card - n :=
by
rw [hn, ← hs]
split_ifs with h <;> rw [tsub_mul, one_mul]
· refine' ⟨m_pos, le_succ _, le_add_right (le_mul_of_pos_left ‹0 < a›), _⟩
rw [tsub_add_eq_add_tsub (le_mul_of_pos_left h)]
· refine' ⟨succ_pos', le_rfl, le_add_left (le_mul_of_pos_left <| hab.resolve_left ‹¬0 < a›), _⟩
rw [← add_tsub_assoc_of_le (le_mul_of_pos_left <| hab.resolve_left ‹¬0 < a›)]
/- We will call the inductive hypothesis on a partition of `s \ t` for a carefully chosen `t ⊆ s`.
To decide which, however, we must distinguish the case where all parts of `P` have size `m` (in
which case we take `t` to be an arbitrary subset of `s` of size `n`) from the case where at
least one part `u` of `P` has size `m + 1` (in which case we take `t` to be an arbitrary subset
of `u` of size `n`). The rest of each branch is just tedious calculations to satisfy the
induction hypothesis. -/
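-- Illustrative instance of the bookkeeping (with assumed values m = 2, a = 3,
-- b = 1): card s = 3 * 2 + 1 * 3 = 9; since 0 < a we take n = m = 2, and the
-- residual count is (3 - 1) * 2 + 1 * 3 = 7 = 9 - 2, exactly the shape that
-- hn₃ records for the smaller set s \ t.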
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s✝ t : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
⊢ 0 < n ∧
n ≤ m + 1 ∧
n ≤ a * m + b * (m + 1) ∧ (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
[PROOFSTEP]
rw [hn, ← hs]
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s✝ t : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
⊢ (0 < if 0 < a then m else m + 1) ∧
(if 0 < a then m else m + 1) ≤ m + 1 ∧
(if 0 < a then m else m + 1) ≤ a * m + b * (m + 1) ∧
(if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) =
a * m + b * (m + 1) - if 0 < a then m else m + 1
[PROOFSTEP]
split_ifs with h
[GOAL]
case pos
α : Type u_1
inst✝ : DecidableEq α
s✝ t : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
h : 0 < a
⊢ 0 < m ∧ m ≤ m + 1 ∧ m ≤ a * m + b * (m + 1) ∧ (a - 1) * m + b * (m + 1) = a * m + b * (m + 1) - m
[PROOFSTEP]
rw [tsub_mul, one_mul]
[GOAL]
case neg
α : Type u_1
inst✝ : DecidableEq α
s✝ t : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
h : ¬0 < a
⊢ 0 < m + 1 ∧ m + 1 ≤ m + 1 ∧ m + 1 ≤ a * m + b * (m + 1) ∧ a * m + (b - 1) * (m + 1) = a * m + b * (m + 1) - (m + 1)
[PROOFSTEP]
rw [tsub_mul, one_mul]
[GOAL]
case pos
α : Type u_1
inst✝ : DecidableEq α
s✝ t : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
h : 0 < a
⊢ 0 < m ∧ m ≤ m + 1 ∧ m ≤ a * m + b * (m + 1) ∧ a * m - m + b * (m + 1) = a * m + b * (m + 1) - m
[PROOFSTEP]
refine' ⟨m_pos, le_succ _, le_add_right (le_mul_of_pos_left ‹0 < a›), _⟩
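-- The first three conjuncts are immediate: `0 < m`, `m ≤ m + 1`, and
-- `m ≤ a * m ≤ a * m + b * (m + 1)` via `le_mul_of_pos_left ‹0 < a›`;
-- only the subtraction identity remains.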
[GOAL]
case pos
α : Type u_1
inst✝ : DecidableEq α
s✝ t : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
h : 0 < a
⊢ a * m - m + b * (m + 1) = a * m + b * (m + 1) - m
[PROOFSTEP]
rw [tsub_add_eq_add_tsub (le_mul_of_pos_left h)]
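-- `tsub_add_eq_add_tsub (h : b ≤ a) : a - b + c = a + c - b`, applicable because `m ≤ a * m`;
-- the rewrite closes the `pos` case by reflexivity.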
[GOAL]
case neg
α : Type u_1
inst✝ : DecidableEq α
s✝ t : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
h : ¬0 < a
⊢ 0 < m + 1 ∧
m + 1 ≤ m + 1 ∧ m + 1 ≤ a * m + b * (m + 1) ∧ a * m + (b * (m + 1) - (m + 1)) = a * m + b * (m + 1) - (m + 1)
[PROOFSTEP]
refine' ⟨succ_pos', le_rfl, le_add_left (le_mul_of_pos_left <| hab.resolve_left ‹¬0 < a›), _⟩
[GOAL]
case neg
α : Type u_1
inst✝ : DecidableEq α
s✝ t : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
h : ¬0 < a
⊢ a * m + (b * (m + 1) - (m + 1)) = a * m + b * (m + 1) - (m + 1)
[PROOFSTEP]
rw [← add_tsub_assoc_of_le (le_mul_of_pos_left <| hab.resolve_left ‹¬0 < a›)]
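-- `add_tsub_assoc_of_le (h : c ≤ b) : a + b - c = a + (b - c)`, used right to left with
-- `m + 1 ≤ b * (m + 1)` (from `0 < b`), finishing the `neg` case.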
[GOAL]
case neg.intro.intro.intro
α : Type u_1
inst✝ : DecidableEq α
s✝ t : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
⊢ ∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
[PROOFSTEP]
by_cases h : ∀ u ∈ P.parts, card u < m + 1
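-- The case split described in the strategy comment: either every part of `P` has size
-- at most `m`, or some part `u` has size at least `m + 1`.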
[GOAL]
case pos
α : Type u_1
inst✝ : DecidableEq α
s✝ t : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
h : ∀ (u : Finset α), u ∈ P.parts → card u < m + 1
⊢ ∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
[PROOFSTEP]
obtain ⟨t, hts, htn⟩ := exists_smaller_set s n (hn₂.trans_eq hs)
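-- `exists_smaller_set s n : n ≤ card s → ∃ t ⊆ s, card t = n` supplies an arbitrary
-- `n`-element subset `t` of `s`.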
[GOAL]
case pos.intro.intro
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
h : ∀ (u : Finset α), u ∈ P.parts → card u < m + 1
t : Finset α
hts : t ⊆ s
htn : card t = n
⊢ ∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
[PROOFSTEP]
have ht : t.Nonempty := by rwa [← card_pos, htn]
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
h : ∀ (u : Finset α), u ∈ P.parts → card u < m + 1
t : Finset α
hts : t ⊆ s
htn : card t = n
⊢ Finset.Nonempty t
[PROOFSTEP]
rwa [← card_pos, htn]
[GOAL]
case pos.intro.intro
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
h : ∀ (u : Finset α), u ∈ P.parts → card u < m + 1
t : Finset α
hts : t ⊆ s
htn : card t = n
ht : Finset.Nonempty t
⊢ ∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
[PROOFSTEP]
have hcard : ite (0 < a) (a - 1) a * m + ite (0 < a) b (b - 1) * (m + 1) = (s \ t).card := by
rw [card_sdiff ‹t ⊆ s›, htn, hn₃]
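-- `card_sdiff : t ⊆ s → card (s \ t) = card s - card t`, so `hn₃` gives exactly the
-- cardinality equation the induction hypothesis needs for `s \ t`.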
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
h : ∀ (u : Finset α), u ∈ P.parts → card u < m + 1
t : Finset α
hts : t ⊆ s
htn : card t = n
ht : Finset.Nonempty t
⊢ (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
[PROOFSTEP]
rw [card_sdiff ‹t ⊆ s›, htn, hn₃]
[GOAL]
case pos.intro.intro
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
h : ∀ (u : Finset α), u ∈ P.parts → card u < m + 1
t : Finset α
hts : t ⊆ s
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
⊢ ∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
[PROOFSTEP]
obtain ⟨R, hR₁, _, hR₃⟩ :=
@ih (s \ t) (sdiff_ssubset hts ‹t.Nonempty›) (if 0 < a then a - 1 else a) (if 0 < a then b else b - 1) (P.avoid t)
hcard
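-- Apply `ih` to the partition `P.avoid t` of the strictly smaller set `s \ t`, with the
-- coefficient of the chosen part size decremented; `R` is the resulting partition into
-- parts of size `m` or `m + 1`.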
[GOAL]
case pos.intro.intro.intro.intro.intro
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
h : ∀ (u : Finset α), u ∈ P.parts → card u < m + 1
t : Finset α
hts : t ⊆ s
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
left✝ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
⊢ ∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
[PROOFSTEP]
refine' ⟨R.extend ht.ne_empty sdiff_disjoint (sdiff_sup_cancel hts), _, _, _⟩
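-- The witness is `R.extend`, which adjoins `t` as a new part;
-- `sdiff_sup_cancel : t ⊆ s → s \ t ⊔ t = s` shows the parts still cover `s`.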
[GOAL]
case pos.intro.intro.intro.intro.intro.refine'_1
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
h : ∀ (u : Finset α), u ∈ P.parts → card u < m + 1
t : Finset α
hts : t ⊆ s
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
left✝ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
⊢ ∀ (x : Finset α),
x ∈ (extend R (_ : t ≠ ∅) (_ : Disjoint (s \ t) t) (_ : s \ t ⊔ t = s)).parts → card x = m ∨ card x = m + 1
[PROOFSTEP]
simp only [extend_parts, mem_insert, forall_eq_or_imp, and_iff_left hR₁, htn, hn]
[GOAL]
case pos.intro.intro.intro.intro.intro.refine'_1
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
h : ∀ (u : Finset α), u ∈ P.parts → card u < m + 1
t : Finset α
hts : t ⊆ s
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
left✝ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
⊢ (if 0 < a then m else m + 1) = m ∨ (if 0 < a then m else m + 1) = m + 1
[PROOFSTEP]
exact ite_eq_or_eq _ _ _
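-- `ite_eq_or_eq`: an `if` term equals one of its two branches, so `card t` is `m` or `m + 1`.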
[GOAL]
case pos.intro.intro.intro.intro.intro.refine'_2
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
h : ∀ (u : Finset α), u ∈ P.parts → card u < m + 1
t : Finset α
hts : t ⊆ s
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
left✝ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
⊢ ∀ (x : Finset α),
x ∈ P.parts →
card
(x \
Finset.biUnion
(filter (fun y => y ⊆ x) (extend R (_ : t ≠ ∅) (_ : Disjoint (s \ t) t) (_ : s \ t ⊔ t = s)).parts) id) ≤
m
[PROOFSTEP]
exact fun x hx => (card_le_of_subset <| sdiff_subset _ _).trans (lt_succ_iff.1 <| h _ hx)
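-- In this branch every part `x` of `P` satisfies `card x ≤ m` (from `h` via `lt_succ_iff`),
-- and the set difference is a subset of `x`, so the bound follows.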
[GOAL]
case pos.intro.intro.intro.intro.intro.refine'_3
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
h : ∀ (u : Finset α), u ∈ P.parts → card u < m + 1
t : Finset α
hts : t ⊆ s
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
left✝ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
⊢ card (filter (fun i => card i = m + 1) (extend R (_ : t ≠ ∅) (_ : Disjoint (s \ t) t) (_ : s \ t ⊔ t = s)).parts) = b
[PROOFSTEP]
simp_rw [extend_parts, filter_insert, htn, m.succ_ne_self.symm.ite_eq_right_iff]
[GOAL]
case pos.intro.intro.intro.intro.intro.refine'_3
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
h : ∀ (u : Finset α), u ∈ P.parts → card u < m + 1
t : Finset α
hts : t ⊆ s
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
left✝ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
⊢ card
(if ¬0 < a then insert t (filter (fun i => card i = m + 1) R.parts)
else filter (fun i => card i = m + 1) R.parts) =
b
[PROOFSTEP]
split_ifs with ha
[GOAL]
case pos
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
h : ∀ (u : Finset α), u ∈ P.parts → card u < m + 1
t : Finset α
hts : t ⊆ s
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
left✝ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
ha : 0 < a
⊢ card (filter (fun i => card i = m + 1) R.parts) = b
[PROOFSTEP]
rw [hR₃, if_pos ha]
[GOAL]
case neg
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
h : ∀ (u : Finset α), u ∈ P.parts → card u < m + 1
t : Finset α
hts : t ⊆ s
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
left✝ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
ha : ¬0 < a
⊢ card (insert t (filter (fun i => card i = m + 1) R.parts)) = b
[PROOFSTEP]
rw [card_insert_of_not_mem, hR₃, if_neg ha, tsub_add_cancel_of_le]
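-- Adjoining the new `(m + 1)`-sized part `t` bumps the count of large parts by one:
-- `card_insert_of_not_mem` contributes the `+ 1`, and `tsub_add_cancel_of_le : 1 ≤ b → b - 1 + 1 = b`
-- restores `b`. The two side conditions are discharged next.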
[GOAL]
case neg
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
h : ∀ (u : Finset α), u ∈ P.parts → card u < m + 1
t : Finset α
hts : t ⊆ s
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
left✝ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
ha : ¬0 < a
⊢ 1 ≤ b
[PROOFSTEP]
exact hab.resolve_left ha
[GOAL]
case neg
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
h : ∀ (u : Finset α), u ∈ P.parts → card u < m + 1
t : Finset α
hts : t ⊆ s
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
left✝ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
ha : ¬0 < a
⊢ ¬t ∈ filter (fun i => card i = m + 1) R.parts
[PROOFSTEP]
intro H
[GOAL]
case neg
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
h : ∀ (u : Finset α), u ∈ P.parts → card u < m + 1
t : Finset α
hts : t ⊆ s
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
left✝ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
ha : ¬0 < a
H : t ∈ filter (fun i => card i = m + 1) R.parts
⊢ False
[PROOFSTEP]
exact ht.ne_empty (le_sdiff_iff.1 <| R.le <| filter_subset _ _ H)
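-- If `t` were among `R`'s parts then `t ≤ s \ t` (as `R` partitions `s \ t`), and
-- `le_sdiff_iff : a ≤ b \ a ↔ a = ⊥` would force `t = ∅`, contradicting `ht`.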
[GOAL]
case neg
α : Type u_1
inst✝ : DecidableEq α
s✝ t : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
h : ¬∀ (u : Finset α), u ∈ P.parts → card u < m + 1
⊢ ∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
[PROOFSTEP]
push_neg at h
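-- `push_neg` rewrites `¬ ∀ u ∈ P.parts, card u < m + 1` to `∃ u ∈ P.parts, m + 1 ≤ card u`.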
[GOAL]
case neg
α : Type u_1
inst✝ : DecidableEq α
s✝ t : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
h : ∃ u, u ∈ P.parts ∧ m + 1 ≤ card u
⊢ ∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
[PROOFSTEP]
obtain ⟨u, hu₁, hu₂⟩ := h
[GOAL]
case neg.intro.intro
α : Type u_1
inst✝ : DecidableEq α
s✝ t : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
⊢ ∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
[PROOFSTEP]
obtain ⟨t, htu, htn⟩ := exists_smaller_set _ _ (hn₁.trans hu₂)
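-- This time the `n`-element subset `t` is chosen inside the large part `u`,
-- possible because `n ≤ m + 1 ≤ card u`.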
[GOAL]
case neg.intro.intro.intro.intro
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
t : Finset α
htu : t ⊆ u
htn : card t = n
⊢ ∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
[PROOFSTEP]
have ht : t.Nonempty := by rwa [← card_pos, htn]
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
t : Finset α
htu : t ⊆ u
htn : card t = n
⊢ Finset.Nonempty t
[PROOFSTEP]
rwa [← card_pos, htn]
[GOAL]
case neg.intro.intro.intro.intro
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
t : Finset α
htu : t ⊆ u
htn : card t = n
ht : Finset.Nonempty t
⊢ ∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
[PROOFSTEP]
have hcard : ite (0 < a) (a - 1) a * m + ite (0 < a) b (b - 1) * (m + 1) = (s \ t).card := by
rw [card_sdiff (htu.trans <| P.le hu₁), htn, hn₃]
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
t : Finset α
htu : t ⊆ u
htn : card t = n
ht : Finset.Nonempty t
⊢ (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
[PROOFSTEP]
rw [card_sdiff (htu.trans <| P.le hu₁), htn, hn₃]
[GOAL]
case neg.intro.intro.intro.intro
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
t : Finset α
htu : t ⊆ u
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
⊢ ∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
[PROOFSTEP]
obtain ⟨R, hR₁, hR₂, hR₃⟩ :=
@ih (s \ t) (sdiff_ssubset (htu.trans <| P.le hu₁) ht) (if 0 < a then a - 1 else a) (if 0 < a then b else b - 1)
(P.avoid t) hcard
[GOAL]
case neg.intro.intro.intro.intro.intro.intro.intro
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
t : Finset α
htu : t ⊆ u
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
hR₂ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
⊢ ∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
[PROOFSTEP]
refine' ⟨R.extend ht.ne_empty sdiff_disjoint (sdiff_sup_cancel <| htu.trans <| P.le hu₁), _, _, _⟩
[GOAL]
case neg.intro.intro.intro.intro.intro.intro.intro.refine'_1
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
t : Finset α
htu : t ⊆ u
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
hR₂ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
⊢ ∀ (x : Finset α),
x ∈ (extend R (_ : t ≠ ∅) (_ : Disjoint (s \ t) t) (_ : s \ t ⊔ t = s)).parts → card x = m ∨ card x = m + 1
[PROOFSTEP]
simp only [mem_insert, forall_eq_or_imp, extend_parts, and_iff_left hR₁, htn, hn]
[GOAL]
case neg.intro.intro.intro.intro.intro.intro.intro.refine'_1
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
t : Finset α
htu : t ⊆ u
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
hR₂ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
⊢ (if 0 < a then m else m + 1) = m ∨ (if 0 < a then m else m + 1) = m + 1
[PROOFSTEP]
exact ite_eq_or_eq _ _ _
[GOAL]
case neg.intro.intro.intro.intro.intro.intro.intro.refine'_2
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
t : Finset α
htu : t ⊆ u
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
hR₂ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
⊢ ∀ (x : Finset α),
x ∈ P.parts →
card
(x \
Finset.biUnion
(filter (fun y => y ⊆ x) (extend R (_ : t ≠ ∅) (_ : Disjoint (s \ t) t) (_ : s \ t ⊔ t = s)).parts) id) ≤
m
[PROOFSTEP]
conv in _ ∈ _ => rw [← insert_erase hu₁]
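-- Rewrite `x ∈ P.parts` as `x ∈ insert u (P.parts.erase u)` using `insert_erase hu₁`,
-- so the part `u` containing `t` can be treated separately from the remaining parts.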
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
t : Finset α
htu : t ⊆ u
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
hR₂ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
x : Finset α
| x ∈ P.parts
[PROOFSTEP]
rw [← insert_erase hu₁]
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
t : Finset α
htu : t ⊆ u
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
hR₂ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
x : Finset α
| x ∈ P.parts
[PROOFSTEP]
rw [← insert_erase hu₁]
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
t : Finset α
htu : t ⊆ u
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
hR₂ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
x : Finset α
| x ∈ P.parts
[PROOFSTEP]
rw [← insert_erase hu₁]
[GOAL]
case neg.intro.intro.intro.intro.intro.intro.intro.refine'_2
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
t : Finset α
htu : t ⊆ u
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
hR₂ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
⊢ ∀ (x : Finset α),
x ∈ insert u (erase P.parts u) →
card
(x \
Finset.biUnion
(filter (fun y => y ⊆ x) (extend R (_ : t ≠ ∅) (_ : Disjoint (s \ t) t) (_ : s \ t ⊔ t = s)).parts) id) ≤
m
[PROOFSTEP]
simp only [and_imp, mem_insert, forall_eq_or_imp, Ne.def, extend_parts]
[GOAL]
case neg.intro.intro.intro.intro.intro.intro.intro.refine'_2
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
t : Finset α
htu : t ⊆ u
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
hR₂ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
⊢ card (u \ Finset.biUnion (filter (fun y => y ⊆ u) (insert t R.parts)) id) ≤ m ∧
∀ (a : Finset α),
a ∈ erase P.parts u → card (a \ Finset.biUnion (filter (fun y => y ⊆ a) (insert t R.parts)) id) ≤ m
[PROOFSTEP]
refine' ⟨_, fun x hx => (card_le_of_subset _).trans <| hR₂ x _⟩
[GOAL]
case neg.intro.intro.intro.intro.intro.intro.intro.refine'_2.refine'_1
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
t : Finset α
htu : t ⊆ u
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
hR₂ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
⊢ card (u \ Finset.biUnion (filter (fun y => y ⊆ u) (insert t R.parts)) id) ≤ m
[PROOFSTEP]
simp only [filter_insert, if_pos htu, biUnion_insert, mem_erase, id.def]
[GOAL]
case neg.intro.intro.intro.intro.intro.intro.intro.refine'_2.refine'_1
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
t : Finset α
htu : t ⊆ u
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
hR₂ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
⊢ card (u \ (t ∪ Finset.biUnion (filter (fun y => y ⊆ u) R.parts) id)) ≤ m
[PROOFSTEP]
obtain rfl | hut := eq_or_ne u t
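-- Split on whether `u` coincides with `t` (then `u \ (t ∪ …)` is empty) or properly contains it.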
[GOAL]
case neg.intro.intro.intro.intro.intro.intro.intro.refine'_2.refine'_1.inl
α : Type u_1
inst✝ : DecidableEq α
s✝ t : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
htu : u ⊆ u
htn : card u = n
ht : Finset.Nonempty u
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ u)
R : Finpartition (s \ u)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
hR₂ : ∀ (x : Finset α), x ∈ (avoid P u).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
⊢ card (u \ (u ∪ Finset.biUnion (filter (fun y => y ⊆ u) R.parts) id)) ≤ m
[PROOFSTEP]
rw [sdiff_eq_empty_iff_subset.2 (subset_union_left _ _)]
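-- When `u = t`, `sdiff_eq_empty_iff_subset` (with `u ⊆ u ∪ …`) collapses the difference
-- to `∅`, whose cardinality is trivially at most `m`.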
[GOAL]
case neg.intro.intro.intro.intro.intro.intro.intro.refine'_2.refine'_1.inl
α : Type u_1
inst✝ : DecidableEq α
s✝ t : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
htu : u ⊆ u
htn : card u = n
ht : Finset.Nonempty u
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ u)
R : Finpartition (s \ u)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
hR₂ : ∀ (x : Finset α), x ∈ (avoid P u).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
⊢ card ∅ ≤ m
[PROOFSTEP]
exact bot_le
[GOAL]
case neg.intro.intro.intro.intro.intro.intro.intro.refine'_2.refine'_1.inr
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
t : Finset α
htu : t ⊆ u
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
hR₂ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
hut : u ≠ t
⊢ card (u \ (t ∪ Finset.biUnion (filter (fun y => y ⊆ u) R.parts) id)) ≤ m
[PROOFSTEP]
refine'
(card_le_of_subset fun i => _).trans
(hR₂ (u \ t) <| P.mem_avoid.2 ⟨u, hu₁, fun i => hut <| i.antisymm htu, rfl⟩)
-- Porting note: `not_and` required because `∃ x ∈ s, p x` is defined differently
[GOAL]
case neg.intro.intro.intro.intro.intro.intro.intro.refine'_2.refine'_1.inr
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
t : Finset α
htu : t ⊆ u
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
hR₂ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
hut : u ≠ t
i : α
⊢ i ∈ u \ (t ∪ Finset.biUnion (filter (fun y => y ⊆ u) R.parts) id) →
i ∈ (u \ t) \ Finset.biUnion (filter (fun y => y ⊆ u \ t) R.parts) id
[PROOFSTEP]
simp only [not_exists, not_and, mem_biUnion, and_imp, mem_union, mem_filter, mem_sdiff, id.def, not_or]
[GOAL]
case neg.intro.intro.intro.intro.intro.intro.intro.refine'_2.refine'_1.inr
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
t : Finset α
htu : t ⊆ u
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
hR₂ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
hut : u ≠ t
i : α
⊢ i ∈ u →
¬i ∈ t →
(∀ (x : Finset α), x ∈ R.parts → x ⊆ u → ¬i ∈ x) →
(i ∈ u ∧ ¬i ∈ t) ∧ ∀ (x : Finset α), x ∈ R.parts → x ⊆ u \ t → ¬i ∈ x
[PROOFSTEP]
exact fun hi₁ hi₂ hi₃ => ⟨⟨hi₁, hi₂⟩, fun x hx hx' => hi₃ _ hx <| hx'.trans <| sdiff_subset _ _⟩
[GOAL]
case neg.intro.intro.intro.intro.intro.intro.intro.refine'_2.refine'_2
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
t : Finset α
htu : t ⊆ u
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
hR₂ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
x : Finset α
hx : x ∈ erase P.parts u
⊢ x \ Finset.biUnion (filter (fun y => y ⊆ x) (insert t R.parts)) id ⊆
x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id
[PROOFSTEP]
apply sdiff_subset_sdiff Subset.rfl (biUnion_subset_biUnion_of_subset_left _ _)
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
t : Finset α
htu : t ⊆ u
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
hR₂ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
x : Finset α
hx : x ∈ erase P.parts u
⊢ filter (fun y => y ⊆ x) R.parts ⊆ filter (fun y => y ⊆ x) (insert t R.parts)
[PROOFSTEP]
exact filter_subset_filter _ (subset_insert _ _)
[GOAL]
case neg.intro.intro.intro.intro.intro.intro.intro.refine'_2.refine'_3
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
t : Finset α
htu : t ⊆ u
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
hR₂ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
x : Finset α
hx : x ∈ erase P.parts u
⊢ x ∈ (avoid P t).parts
[PROOFSTEP]
simp only [avoid, ofErase, mem_erase, mem_image, bot_eq_empty]
[GOAL]
case neg.intro.intro.intro.intro.intro.intro.intro.refine'_2.refine'_3
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
t : Finset α
htu : t ⊆ u
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
hR₂ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
x : Finset α
hx : x ∈ erase P.parts u
⊢ x ≠ ∅ ∧ ∃ a, a ∈ P.parts ∧ a \ t = x
[PROOFSTEP]
exact
⟨(nonempty_of_mem_parts _ <| mem_of_mem_erase hx).ne_empty, _, mem_of_mem_erase hx,
(disjoint_of_subset_right htu <| P.disjoint (mem_of_mem_erase hx) hu₁ <| ne_of_mem_erase hx).sdiff_eq_left⟩
[GOAL]
case neg.intro.intro.intro.intro.intro.intro.intro.refine'_3
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
t : Finset α
htu : t ⊆ u
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
hR₂ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
⊢ card (filter (fun i => card i = m + 1) (extend R (_ : t ≠ ∅) (_ : Disjoint (s \ t) t) (_ : s \ t ⊔ t = s)).parts) = b
[PROOFSTEP]
simp only [extend_parts, filter_insert, htn, hn, m.succ_ne_self.symm.ite_eq_right_iff]
[GOAL]
case neg.intro.intro.intro.intro.intro.intro.intro.refine'_3
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
t : Finset α
htu : t ⊆ u
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
hR₂ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
⊢ card
(if ¬0 < a then insert t (filter (fun i => card i = m + 1) R.parts)
else filter (fun i => card i = m + 1) R.parts) =
b
[PROOFSTEP]
split_ifs with h
[GOAL]
case pos
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
t : Finset α
htu : t ⊆ u
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
hR₂ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
h : 0 < a
⊢ card (filter (fun i => card i = m + 1) R.parts) = b
[PROOFSTEP]
rw [hR₃, if_pos h]
[GOAL]
case neg
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
t : Finset α
htu : t ⊆ u
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
hR₂ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
h : ¬0 < a
⊢ card (insert t (filter (fun i => card i = m + 1) R.parts)) = b
[PROOFSTEP]
rw [card_insert_of_not_mem, hR₃, if_neg h, Nat.sub_add_cancel (hab.resolve_left h)]
[GOAL]
case neg
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
t : Finset α
htu : t ⊆ u
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
hR₂ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
h : ¬0 < a
⊢ ¬t ∈ filter (fun i => card i = m + 1) R.parts
[PROOFSTEP]
intro H
[GOAL]
case neg
α : Type u_1
inst✝ : DecidableEq α
s✝ t✝ : Finset α
m n✝ a✝ b✝ : ℕ
P✝ : Finpartition s✝
hs✝ : a✝ * m + b✝ * (m + 1) = card s✝
m_pos : m > 0
s : Finset α
ih :
∀ (t : Finset α),
t ⊂ s →
∀ {a b : ℕ} {P : Finpartition t},
a * m + b * (m + 1) = card t →
∃ Q,
(∀ (x : Finset α), x ∈ Q.parts → card x = m ∨ card x = m + 1) ∧
(∀ (x : Finset α), x ∈ P.parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) Q.parts) id) ≤ m) ∧
card (filter (fun i => card i = m + 1) Q.parts) = b
a b : ℕ
P : Finpartition s
hs : a * m + b * (m + 1) = card s
hab : 0 < a ∨ 0 < b
n : ℕ := if 0 < a then m else m + 1
hn : n = if 0 < a then m else m + 1
hn₀ : 0 < n
hn₁ : n ≤ m + 1
hn₂ : n ≤ a * m + b * (m + 1)
hn₃ : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card s - n
u : Finset α
hu₁ : u ∈ P.parts
hu₂ : m + 1 ≤ card u
t : Finset α
htu : t ⊆ u
htn : card t = n
ht : Finset.Nonempty t
hcard : (if 0 < a then a - 1 else a) * m + (if 0 < a then b else b - 1) * (m + 1) = card (s \ t)
R : Finpartition (s \ t)
hR₁ : ∀ (x : Finset α), x ∈ R.parts → card x = m ∨ card x = m + 1
hR₂ : ∀ (x : Finset α), x ∈ (avoid P t).parts → card (x \ Finset.biUnion (filter (fun y => y ⊆ x) R.parts) id) ≤ m
hR₃ : card (filter (fun i => card i = m + 1) R.parts) = if 0 < a then b else b - 1
h : ¬0 < a
H : t ∈ filter (fun i => card i = m + 1) R.parts
⊢ False
[PROOFSTEP]
exact ht.ne_empty (le_sdiff_iff.1 <| R.le <| filter_subset _ _ H)
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s t : Finset α
m n a b : ℕ
P : Finpartition s
h : a * m + b * (m + 1) = card s
hm : m ≠ 0
⊢ card (filter (fun u => card u = m) (equitabilise h).parts) = a
[PROOFSTEP]
refine' (mul_eq_mul_right_iff.1 <| (add_left_inj (b * (m + 1))).1 _).resolve_right hm
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s t : Finset α
m n a b : ℕ
P : Finpartition s
h : a * m + b * (m + 1) = card s
hm : m ≠ 0
⊢ card (filter (fun u => card u = m) (equitabilise h).parts) * m + b * (m + 1) = a * m + b * (m + 1)
[PROOFSTEP]
rw [h, ← (P.equitabilise h).sum_card_parts]
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s t : Finset α
m n a b : ℕ
P : Finpartition s
h : a * m + b * (m + 1) = card s
hm : m ≠ 0
⊢ card (filter (fun u => card u = m) (equitabilise h).parts) * m + b * (m + 1) =
Finset.sum (equitabilise h).parts fun i => card i
[PROOFSTEP]
have hunion :
(P.equitabilise h).parts =
((P.equitabilise h).parts.filter fun u => u.card = m) ∪ (P.equitabilise h).parts.filter fun u => u.card = m + 1 :=
by
rw [← filter_or, filter_true_of_mem]
exact fun x => card_eq_of_mem_parts_equitabilise
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s t : Finset α
m n a b : ℕ
P : Finpartition s
h : a * m + b * (m + 1) = card s
hm : m ≠ 0
⊢ (equitabilise h).parts =
filter (fun u => card u = m) (equitabilise h).parts ∪ filter (fun u => card u = m + 1) (equitabilise h).parts
[PROOFSTEP]
rw [← filter_or, filter_true_of_mem]
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s t : Finset α
m n a b : ℕ
P : Finpartition s
h : a * m + b * (m + 1) = card s
hm : m ≠ 0
⊢ ∀ (x : Finset α), x ∈ (equitabilise h).parts → card x = m ∨ card x = m + 1
[PROOFSTEP]
exact fun x => card_eq_of_mem_parts_equitabilise
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s t : Finset α
m n a b : ℕ
P : Finpartition s
h : a * m + b * (m + 1) = card s
hm : m ≠ 0
hunion :
(equitabilise h).parts =
filter (fun u => card u = m) (equitabilise h).parts ∪ filter (fun u => card u = m + 1) (equitabilise h).parts
⊢ card (filter (fun u => card u = m) (equitabilise h).parts) * m + b * (m + 1) =
Finset.sum (equitabilise h).parts fun i => card i
[PROOFSTEP]
nth_rw 2 [hunion]
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s t : Finset α
m n a b : ℕ
P : Finpartition s
h : a * m + b * (m + 1) = card s
hm : m ≠ 0
hunion :
(equitabilise h).parts =
filter (fun u => card u = m) (equitabilise h).parts ∪ filter (fun u => card u = m + 1) (equitabilise h).parts
⊢ card (filter (fun u => card u = m) (equitabilise h).parts) * m + b * (m + 1) =
Finset.sum
(filter (fun u => card u = m) (equitabilise h).parts ∪ filter (fun u => card u = m + 1) (equitabilise h).parts)
fun i => card i
[PROOFSTEP]
rw [sum_union, sum_const_nat fun x hx => (mem_filter.1 hx).2, sum_const_nat fun x hx => (mem_filter.1 hx).2,
P.card_filter_equitabilise_big]
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s t : Finset α
m n a b : ℕ
P : Finpartition s
h : a * m + b * (m + 1) = card s
hm : m ≠ 0
hunion :
(equitabilise h).parts =
filter (fun u => card u = m) (equitabilise h).parts ∪ filter (fun u => card u = m + 1) (equitabilise h).parts
⊢ Disjoint (filter (fun u => card u = m) (equitabilise h).parts)
(filter (fun u => card u = m + 1) (equitabilise h).parts)
[PROOFSTEP]
refine' disjoint_filter_filter' _ _ _
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s t : Finset α
m n a b : ℕ
P : Finpartition s
h : a * m + b * (m + 1) = card s
hm : m ≠ 0
hunion :
(equitabilise h).parts =
filter (fun u => card u = m) (equitabilise h).parts ∪ filter (fun u => card u = m + 1) (equitabilise h).parts
⊢ Disjoint (fun u => card u = m) fun u => card u = m + 1
[PROOFSTEP]
intro x ha hb i h
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s t : Finset α
m n a b : ℕ
P : Finpartition s
h✝ : a * m + b * (m + 1) = card s
hm : m ≠ 0
hunion :
(equitabilise h✝).parts =
filter (fun u => card u = m) (equitabilise h✝).parts ∪ filter (fun u => card u = m + 1) (equitabilise h✝).parts
x : Finset α → Prop
ha : x ≤ fun u => card u = m
hb : x ≤ fun u => card u = m + 1
i : Finset α
h : x i
⊢ ⊥ i
[PROOFSTEP]
apply succ_ne_self m _
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s t : Finset α
m n a b : ℕ
P : Finpartition s
h✝ : a * m + b * (m + 1) = card s
hm : m ≠ 0
hunion :
(equitabilise h✝).parts =
filter (fun u => card u = m) (equitabilise h✝).parts ∪ filter (fun u => card u = m + 1) (equitabilise h✝).parts
x : Finset α → Prop
ha : x ≤ fun u => card u = m
hb : x ≤ fun u => card u = m + 1
i : Finset α
h : x i
⊢ succ m = m
[PROOFSTEP]
exact (hb i h).symm.trans (ha i h)
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s t : Finset α
m n a b : ℕ
P : Finpartition s
h : a * m + b * (m + 1) = card s
hm : m ≠ 0
⊢ card (equitabilise h).parts = a + b
[PROOFSTEP]
rw [← filter_true_of_mem fun x => card_eq_of_mem_parts_equitabilise, filter_or, card_union_eq,
P.card_filter_equitabilise_small _ hm, P.card_filter_equitabilise_big]
-- Porting note: was `infer_instance`
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s t : Finset α
m n a b : ℕ
P : Finpartition s
h : a * m + b * (m + 1) = card s
hm : m ≠ 0
⊢ Disjoint (filter (fun x => card x = m) (equitabilise h).parts)
(filter (fun x => card x = m + 1) (equitabilise h).parts)
[PROOFSTEP]
exact disjoint_filter.2 fun x _ h₀ h₁ => Nat.succ_ne_self m <| h₁.symm.trans h₀
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s t : Finset α
m n a b : ℕ
P : Finpartition s
h : a * m + b * (m + 1) = card s
hn : n ≠ 0
hs : n ≤ card s
⊢ ∃ P, IsEquipartition P ∧ card P.parts = n
[PROOFSTEP]
rw [← pos_iff_ne_zero] at hn
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s t : Finset α
m n a b : ℕ
P : Finpartition s
h : a * m + b * (m + 1) = card s
hn : 0 < n
hs : n ≤ card s
⊢ ∃ P, IsEquipartition P ∧ card P.parts = n
[PROOFSTEP]
have : (n - s.card % n) * (s.card / n) + s.card % n * (s.card / n + 1) = s.card := by
rw [tsub_mul, mul_add, ← add_assoc, tsub_add_cancel_of_le (Nat.mul_le_mul_right _ (mod_lt _ hn).le), mul_one,
add_comm, mod_add_div]
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s t : Finset α
m n a b : ℕ
P : Finpartition s
h : a * m + b * (m + 1) = card s
hn : 0 < n
hs : n ≤ card s
⊢ (n - card s % n) * (card s / n) + card s % n * (card s / n + 1) = card s
[PROOFSTEP]
rw [tsub_mul, mul_add, ← add_assoc, tsub_add_cancel_of_le (Nat.mul_le_mul_right _ (mod_lt _ hn).le), mul_one, add_comm,
mod_add_div]
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s t : Finset α
m n a b : ℕ
P : Finpartition s
h : a * m + b * (m + 1) = card s
hn : 0 < n
hs : n ≤ card s
this : (n - card s % n) * (card s / n) + card s % n * (card s / n + 1) = card s
⊢ ∃ P, IsEquipartition P ∧ card P.parts = n
[PROOFSTEP]
refine' ⟨(indiscrete (card_pos.1 <| hn.trans_le hs).ne_empty).equitabilise this, equitabilise_isEquipartition, _⟩
[GOAL]
α : Type u_1
inst✝ : DecidableEq α
s t : Finset α
m n a b : ℕ
P : Finpartition s
h : a * m + b * (m + 1) = card s
hn : 0 < n
hs : n ≤ card s
this : (n - card s % n) * (card s / n) + card s % n * (card s / n + 1) = card s
⊢ card (equitabilise this).parts = n
[PROOFSTEP]
rw [card_parts_equitabilise _ _ (Nat.div_pos hs hn).ne', tsub_add_cancel_of_le (mod_lt _ hn).le]
|
/-
Copyright (c) 2017 Mario Carneiro. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Mario Carneiro
-/
import data.fintype.card
import data.finset.option
/-!
# fintype instances for option
> THIS FILE IS SYNCHRONIZED WITH MATHLIB4.
> Any changes to this file require a corresponding PR to mathlib4.
-/
open function
open_locale nat
universes u v
variables {α β γ : Type*}
open finset function
instance {α : Type*} [fintype α] : fintype (option α) :=
⟨univ.insert_none, λ a, by simp⟩
lemma univ_option (α : Type*) [fintype α] : (univ : finset (option α)) = insert_none univ := rfl
@[simp] theorem fintype.card_option {α : Type*} [fintype α] :
fintype.card (option α) = fintype.card α + 1 :=
(finset.card_cons _).trans $ congr_arg2 _ (card_map _) rfl
/-- If `option α` is a `fintype` then so is `α` -/
def fintype_of_option {α : Type*} [fintype (option α)] : fintype α :=
⟨finset.erase_none (fintype.elems (option α)), λ x, mem_erase_none.mpr (fintype.complete (some x))⟩
/-- A type is a `fintype` if its successor (using `option`) is a `fintype`. -/
def fintype_of_option_equiv [fintype α] (f : α ≃ option β) : fintype β :=
by { haveI := fintype.of_equiv _ f, exact fintype_of_option }
namespace fintype
/-- A recursor principle for finite types, analogous to `nat.rec`. It effectively says
that every `fintype` is either `empty` or `option α`, up to an `equiv`. -/
def trunc_rec_empty_option {P : Type u → Sort v}
(of_equiv : ∀ {α β}, α ≃ β → P α → P β)
(h_empty : P pempty)
(h_option : ∀ {α} [fintype α] [decidable_eq α], P α → P (option α))
(α : Type u) [fintype α] [decidable_eq α] : trunc (P α) :=
begin
suffices : ∀ n : ℕ, trunc (P (ulift $ fin n)),
{ apply trunc.bind (this (fintype.card α)),
intro h,
apply trunc.map _ (fintype.trunc_equiv_fin α),
intro e,
exact of_equiv (equiv.ulift.trans e.symm) h },
intro n,
induction n with n ih,
{ have : card pempty = card (ulift (fin 0)),
{ simp only [card_fin, card_pempty, card_ulift] },
apply trunc.bind (trunc_equiv_of_card_eq this),
intro e,
apply trunc.mk,
refine of_equiv e h_empty, },
{ have : card (option (ulift (fin n))) = card (ulift (fin n.succ)),
{ simp only [card_fin, card_option, card_ulift] },
apply trunc.bind (trunc_equiv_of_card_eq this),
intro e,
apply trunc.map _ ih,
intro ih,
refine of_equiv e (h_option ih), },
end
/-- An induction principle for finite types, analogous to `nat.rec`. It effectively says
that every `fintype` is either `empty` or `option α`, up to an `equiv`. -/
@[elab_as_eliminator]
lemma induction_empty_option {P : Π (α : Type u) [fintype α], Prop}
(of_equiv : ∀ α β [fintype β] (e : α ≃ β), @P α (@fintype.of_equiv α β ‹_› e.symm) → @P β ‹_›)
(h_empty : P pempty)
(h_option : ∀ α [fintype α], by exactI P α → P (option α))
(α : Type u) [fintype α] : P α :=
begin
obtain ⟨p⟩ := @trunc_rec_empty_option (λ α, ∀ h, @P α h)
(λ α β e hα hβ, @of_equiv α β hβ e (hα _)) (λ _i, by convert h_empty)
_ α _ (classical.dec_eq α),
{ exact p _ },
{ rintro α hα - Pα hα', resetI, convert h_option α (Pα _) }
end
end fintype
/-- An induction principle for finite types, analogous to `nat.rec`. It effectively says
that every `fintype` is either `empty` or `option α`, up to an `equiv`. -/
lemma finite.induction_empty_option {P : Type u → Prop}
(of_equiv : ∀ {α β}, α ≃ β → P α → P β)
(h_empty : P pempty)
(h_option : ∀ {α} [fintype α], P α → P (option α))
(α : Type u) [finite α] : P α :=
begin
casesI nonempty_fintype α,
refine fintype.induction_empty_option _ _ _ α,
exacts [λ α β _, of_equiv, h_empty, @h_option]
end
|
Trying to emulate Legend of Mana style.
Took about 3 hours (2 hours too many).
At first I thought it was some kind of dwarf, but it still looks good.
lol, the cool looking ones always scare me in games. Reminds me of Gilgamesh.
Could have jumped right out of a game!
The 3 hours were spent very usefully, I like it. The shading is awesome, dude!
Agree with B.O.B. This would make a killer boss. Nice color choice, and subtle backlighting. I also like the way you presented your palette.
Well, you certainly work much faster than I do. You've really captured that beautiful RPG style and created a nifty character to boot.
|
module Budget.Budgeter(
budgeter
)where
import Numeric.LinearAlgebra
import Budget.Types
import Budget.Parse
import Budget.Printer
import Text.Parsec
import Finance.FutureValue
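-- Overview (as read from the code below): the budgeter allocates a
-- monthly income across budget items by maximizing a quadratic utility
-- of each item's spend within its [minPrice, maxPrice] range, subject
-- to a total-cost constraint. Needed items are first paid at minPrice
-- (removeNeeded); the remainder is found by solving the linear KKT
-- system assembled by aMatrix / bVector.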
getNeededExpense :: BudgetItem -> Expense
getNeededExpense bs = case needed bs of
True -> Expense (name bs) (minPrice bs)
False -> Expense (name bs) 0.0
reduceBs :: [BudgetItem] -> [Expense] -> [BudgetItem]
reduceBs bs xs = map removeMin $ zip bs xs
removeMin :: (BudgetItem, Expense) -> BudgetItem
removeMin ((BudgetItem s i l h n), Expense _ m) = BudgetItem s i (l-m) (h-m) n
aDiag :: BudgetItem -> Double
aDiag (BudgetItem _ i low high _)
| low == high = 100000
| otherwise = (-2.0) * i * k**2
where k = 1.0 / (high-low)
aDiags :: [BudgetItem] -> [Double]
aDiags = map aDiag
type CurrentIndex = Integer
type Rank = Integer
aList :: CurrentIndex -> Rank -> [Double] -> [Double]
aList c r (x:xs)
|c `mod` (r+1) == 0 = x : aList (c+1) r xs
aList c r xs
| c == r^2-1 = [0]
| ((c+1) `mod` r == 0) && (c /= r^2-1) = -1 : aList (c+1) r xs
| (c `mod` (r+1) /= 0 && c >= r^2-r) = 1 : aList (c+1) r xs
| otherwise = 0 : aList (c+1) r xs
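-- Search downward in steps of 10 for the largest monthly spend cpm such
-- that the fund projected over y years (saving ipm - cpm per month via
-- futureValue) can sustain a full year of costs at the given APR.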
availableCosts :: Double -> Double -> Double -> Double -> Double -> Double
availableCosts cpm y v ipm apr
| cpm <= 0 = 0
| retirement > sustainableRetirement = cpm
| otherwise = availableCosts (cpm-10) y v ipm apr
where retirement = futureValue numberMonths v (ipm-cpm) ratePerMonth
sustainableRetirement = costsPerYear / apr
numberMonths = floor (12 * y)
ratePerMonth = apr/12
costsPerYear = cpm * 12
aMatrix :: [BudgetItem] -> Matrix Double
aMatrix bs = let as = aDiags bs
r = (length as) + 1
in (r><r) $ aList 0 (toInteger r) as
bElem :: BudgetItem -> Double
bElem (BudgetItem _ i low high _)
| low == high = 100000
| otherwise = (-2.0) * i * k**2 * high
where k = 1.0 / (high-low)
bElems :: [BudgetItem] -> [Double]
bElems = map bElem
type TotalCost = Double
bVector :: [BudgetItem] -> TotalCost -> Vector Double
bVector bs tc = let vecElems = bElems bs ++ [tc]
in vector vecElems
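-- Pay every needed item its minPrice up front, then shrink the income
-- and each item's [low, high] range by what was already paid; returns
-- the reduced budget together with those mandatory expenses.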
removeNeeded :: BudgetInfo -> (BudgetInfo, [Expense])
removeNeeded (BudgetInfo i s y a bs) = (BudgetInfo (i - totalNeeded) s y a newBs, needs)
where needs = map getNeededExpense bs
newBs = reduceBs bs needs
totalNeeded = sum [x | (Expense _ x) <- needs]
type ItemNames = [String]
calculateBudget :: BudgetInfo -> [Expense]
calculateBudget (BudgetInfo i s y a bs)
| i > totalMaxCosts = expenses
| otherwise = calculateBudget' [] (BudgetInfo i s y a bs)
where maxCosts = map maxPrice bs
names = map name bs
expenses = [Expense n m | (n, m) <- zip names maxCosts]
totalMaxCosts = sum maxCosts
inefficientNames :: [BudgetItem] -> Costs -> ItemNames
inefficientNames bs costs = [(name b)| (b,c) <- (zip bs costs), c < 0]
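-- Solve the square system (aMatrix bs) x = bVector bs i with hmatrix's
-- <\> ; the last component appears to be the Lagrange multiplier of the
-- budget constraint and is dropped by `init`, leaving one cost per item.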
optimizedCosts :: Double -> [BudgetItem] -> [Double]
optimizedCosts i bs = init $ toList $ (aMatrix bs) <\> (bVector bs (realToFrac i))
calculateBudget' :: ItemNames -> BudgetInfo -> [Expense]
calculateBudget' unusedItems (BudgetInfo i s y a bs)
| validCosts = expenses
| otherwise = calculateBudget' newUnusedNames cleanedBI
where costs = optimizedCosts i bs
validCosts = all (>= 0) costs
allNames = (map name bs) ++ unusedItems
newUnusedNames = unusedItems ++ inefficientNames bs costs
cleanedBI = BudgetInfo i s y a [b | (b,c) <- (zip bs costs), c > 0]
appendedCosts = costs ++ (replicate ((length allNames) - (length costs)) 0)
expenses = [Expense n m | (n, m) <- zip allNames appendedCosts]
storeSavings :: BudgetInfo -> BudgetInfo
storeSavings (BudgetInfo ipm v ytr apr bs) = BudgetInfo cpm v ytr apr bs
  where cpm = availableCosts ipm ytr v ipm apr
unsafe :: Either ParseError BudgetInfo -> BudgetInfo
unsafe (Right x) = x
unsafe (Left e) = error (show e) -- make the partial helper fail loudly
yearlySavings :: BudgetInfo -> [Double] -> Double
yearlySavings (BudgetInfo ipm _ _ _ _) cs = ipm - sum cs
combineExpenses :: [Expense] -> [Expense] -> [Expense]
combineExpenses e1s e2s = [Expense n1 (cost1 + cost2) | (Expense n1 cost1) <- e1s, (Expense n2 cost2) <- e2s,
n1 == n2]
getBudget :: BudgetInfo -> String
getBudget bi = let saved_bi = storeSavings bi
(available_bi, mandatory_costs) = removeNeeded saved_bi
distro = calculateBudget available_bi
finalExpenses = combineExpenses mandatory_costs distro
in fullPrint bi finalExpenses
budgeter :: String -> String
budgeter s = let bi = parse getBudgetInfo "Fetching budget info" s
in case fmap getBudget bi of
Left parsecError -> "Encountered Input Parsing error:\n" ++ show parsecError
Right finalString -> finalString
|
function [H_o_j, A_j, H_x_j] = calcHoj(p_f_G, msckfState, camStateIndices)
%CALCHOJ Calculates H_o_j according to Mourikis 2007
% Inputs: p_f_G: feature location in the Global frame
% msckfState: the current window of states
% camStateIndex: i, with camState being the ith camera pose in the window
% Outputs: H_o_j, A_j, H_x_j
N = length(msckfState.camStates);
M = length(camStateIndices);
H_f_j = zeros(2*M, 3);
H_x_j = zeros(2*M, 12 + 6*N);
c_i = 1;
for camStateIndex = camStateIndices
camState = msckfState.camStates{camStateIndex};
C_CG = quatToRotMat(camState.q_CG);
%The feature position in the camera frame
p_f_C = C_CG*(p_f_G - camState.p_C_G);
X = p_f_C(1);
Y = p_f_C(2);
Z = p_f_C(3);
J_i = (1/Z)*[1 0 -X/Z; 0 1 -Y/Z];
H_f_j((2*c_i - 1):2*c_i, :) = J_i*C_CG;
H_x_j((2*c_i - 1):2*c_i,12+6*(camStateIndex-1) + 1:12+6*(camStateIndex-1) + 3) = J_i*crossMat(p_f_C);
H_x_j((2*c_i - 1):2*c_i,(12+6*(camStateIndex-1) + 4):(12+6*(camStateIndex-1) + 6)) = -J_i*C_CG;
c_i = c_i + 1;
end
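% A_j spans the left null space of the feature Jacobian H_f_j, so
% projecting with A_j' removes the dependence on the feature position
% error, leaving a residual model in the state error alone.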
A_j = null(H_f_j');
H_o_j = A_j'*H_x_j;
end
|
! ----------------------------------------------------------------------
subroutine bn_surface
! ----------------------------------------------------------------------
c 11.08.99
c     purpose:
c        evaluate the boundary surface from its Fourier coefficients:
c        positions (x,y,z), tangent vectors, the metric elements guu,
c        guv, gvv, the area element sqf, and the dj* quantities formed
c        from bu, bv, replicated over all np field periods.
c ----------------------------------------------------------------------
use meshes
use bnvariables
implicit none
c ----------------------------------------------------------------------
integer :: i, m, n, ku, kv, k, np2
real(rprec), dimension(:), allocatable :: r, ru, rv
real(rprec) :: snx, sny, snz, coh, sih, co, si, cofp, sifp,
1 cofm, sifm, cm, cn
c ----------------------------------------------------------------------
allocate (r(nuv), ru(nuv), rv(nuv), stat=i)
do i = 1 , nuv
r(i) = 0._dp
z(i) = 0._dp
ru(i) = 0._dp
rv(i) = 0._dp
zu(i) = 0._dp
zv(i) = 0._dp
enddo
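c     fourier synthesis of r, z and their u,v derivatives from the
c     boundary coefficients cr, crs, cz, czc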
do m = 0,mb
do n = -nb,nb
cm = m*pi2
cn = n*pi2
do i = 1 , nuv
cofp = conu(i,m)*conv(i,n)-sinu(i,m)*sinv(i,n) !cos(u+v)
cofm = conu(i,m)*conv(i,n)+sinu(i,m)*sinv(i,n) !cos(u-v)
sifp = sinu(i,m)*conv(i,n)+conu(i,m)*sinv(i,n) !sin(u+v)
sifm = sinu(i,m)*conv(i,n)-conu(i,m)*sinv(i,n) !sin(u-v)
r(i) = r(i) + cr(m,n)*cofp + crs(m,n)*sifp
ru(i) = ru(i) - cm *( cr(m,n)*sifp - crs(m,n)*cofp)
rv(i) = rv(i) - cn *( cr(m,n)*sifp - crs(m,n)*cofp)
z(i) = z(i) + cz(m,n)*sifp + czc(m,n)*cofp
zu(i) = zu(i) + cm *( cz(m,n)*cofp - czc(m,n)*sifp)
zv(i) = zv(i) + cn *( cz(m,n)*cofp - czc(m,n)*sifp)
enddo
enddo
enddo
c----------------------------------------------------------
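c     map (r,z) and derivatives to cartesian x, y, z on one field period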
do kv = 1, nv
coh = cos(alvp*(kv-1))
sih = sin(alvp*(kv-1))
do ku = 1, nu
i = nu*(kv-1)+ku
x(i) = coh * r(i)
y(i) = sih * r(i)
xu(i) = coh * ru(i)
yu(i) = sih * ru(i)
xv(i) = coh * rv(i) - alp*y(i)
yv(i) = sih * rv(i) + alp*x(i)
enddo
enddo
do i = 1 , nuv
snx = yu(i)*zv(i)-zu(i)*yv(i)
sny = zu(i)*xv(i)-xu(i)*zv(i)
snz = xu(i)*yv(i)-yu(i)*xv(i)
sqf(i) = sqrt(snx*snx+sny*sny+snz*snz)
guu(i) = xu(i)*xu(i)+yu(i)*yu(i)+zu(i)*zu(i)
guv(i) = xu(i)*xv(i)+yu(i)*yv(i)+zu(i)*zv(i)
gvv(i) = xv(i)*xv(i)+yv(i)*yv(i)+zv(i)*zv(i)
dju(i) = bu(i)*guv(i)- bv(i)*guu(i)
djv(i) = bu(i)*gvv(i)- bv(i)*guv(i)
enddo
do i=1,nuv
djx(i) = bu(i)*xv(i)-bv(i)*xu(i)
djy(i) = bu(i)*yv(i)-bv(i)*yu(i)
djz(i) = bu(i)*zv(i)-bv(i)*zu(i)
enddo
do k=1,np-1
co =cos(alp*k)
si =sin(alp*k)
do i=1,nuv
x(i+k*nuv) = x(i)*co - y(i)*si
y(i+k*nuv) = y(i)*co + x(i)*si
z(i+k*nuv) = z(i)
djx(i+k*nuv) = djx(i)*co-djy(i)*si
djy(i+k*nuv) = djy(i)*co+djx(i)*si
djz(i+k*nuv) = djz(i)
enddo
enddo
np2 = np*np
do i=1,nuv
guv(i) = np*guv(i)
gvv(i) = np2*gvv(i)
enddo
deallocate (r, ru, rv, stat=i)
end subroutine bn_surface
|
(* Title: HOL/Auth/Guard/Guard_Yahalom.thy
Author: Frederic Blanqui, University of Cambridge Computer Laboratory
Copyright 2002 University of Cambridge
*)
section\<open>Yahalom Protocol\<close>
theory Guard_Yahalom imports "../Shared" Guard_Shared begin
subsection\<open>messages used in the protocol\<close>
abbreviation (input)
ya1 :: "agent => agent => nat => event" where
"ya1 A B NA == Says A B \<lbrace>Agent A, Nonce NA\<rbrace>"
abbreviation (input)
ya1' :: "agent => agent => agent => nat => event" where
"ya1' A' A B NA == Says A' B \<lbrace>Agent A, Nonce NA\<rbrace>"
abbreviation (input)
ya2 :: "agent => agent => nat => nat => event" where
"ya2 A B NA NB == Says B Server \<lbrace>Agent B, Ciph B \<lbrace>Agent A, Nonce NA, Nonce NB\<rbrace>\<rbrace>"
abbreviation (input)
ya2' :: "agent => agent => agent => nat => nat => event" where
"ya2' B' A B NA NB == Says B' Server \<lbrace>Agent B, Ciph B \<lbrace>Agent A, Nonce NA, Nonce NB\<rbrace>\<rbrace>"
abbreviation (input)
ya3 :: "agent => agent => nat => nat => key => event" where
"ya3 A B NA NB K ==
Says Server A \<lbrace>Ciph A \<lbrace>Agent B, Key K, Nonce NA, Nonce NB\<rbrace>,
Ciph B \<lbrace>Agent A, Key K\<rbrace>\<rbrace>"
abbreviation (input)
ya3':: "agent => msg => agent => agent => nat => nat => key => event" where
"ya3' S Y A B NA NB K ==
Says S A \<lbrace>Ciph A \<lbrace>Agent B, Key K, Nonce NA, Nonce NB\<rbrace>, Y\<rbrace>"
abbreviation (input)
ya4 :: "agent => agent => nat => nat => msg => event" where
"ya4 A B K NB Y == Says A B \<lbrace>Y, Crypt K (Nonce NB)\<rbrace>"
abbreviation (input)
ya4' :: "agent => agent => nat => nat => msg => event" where
"ya4' A' B K NB Y == Says A' B \<lbrace>Y, Crypt K (Nonce NB)\<rbrace>"
subsection\<open>definition of the protocol\<close>
inductive_set ya :: "event list set"
where
Nil: "[] \<in> ya"
| Fake: "\<lbrakk>evs \<in> ya; X \<in> synth (analz (spies evs))\<rbrakk> \<Longrightarrow> Says Spy B X # evs \<in> ya"
| YA1: "\<lbrakk>evs1 \<in> ya; Nonce NA \<notin> used evs1\<rbrakk> \<Longrightarrow> ya1 A B NA # evs1 \<in> ya"
| YA2: "\<lbrakk>evs2 \<in> ya; ya1' A' A B NA \<in> set evs2; Nonce NB \<notin> used evs2\<rbrakk>
\<Longrightarrow> ya2 A B NA NB # evs2 \<in> ya"
| YA3: "\<lbrakk>evs3 \<in> ya; ya2' B' A B NA NB \<in> set evs3; Key K \<notin> used evs3\<rbrakk>
\<Longrightarrow> ya3 A B NA NB K # evs3 \<in> ya"
| YA4: "\<lbrakk>evs4 \<in> ya; ya1 A B NA \<in> set evs4; ya3' S Y A B NA NB K \<in> set evs4\<rbrakk>
\<Longrightarrow> ya4 A B K NB Y # evs4 \<in> ya"
subsection\<open>declarations for tactics\<close>
declare knows_Spy_partsEs [elim]
declare Fake_parts_insert [THEN subsetD, dest]
declare initState.simps [simp del]
subsection\<open>general properties of ya\<close>
lemma ya_has_no_Gets: "evs \<in> ya \<Longrightarrow> \<forall>A X. Gets A X \<notin> set evs"
by (erule ya.induct, auto)
lemma ya_is_Gets_correct [iff]: "Gets_correct ya"
by (auto simp: Gets_correct_def dest: ya_has_no_Gets)
lemma ya_is_one_step [iff]: "one_step ya"
unfolding one_step_def by (clarify, ind_cases "ev#evs \<in> ya" for ev evs, auto)
lemma ya_has_only_Says' [rule_format]: "evs \<in> ya \<Longrightarrow>
ev \<in> set evs \<longrightarrow> (\<exists>A B X. ev=Says A B X)"
by (erule ya.induct, auto)
lemma ya_has_only_Says [iff]: "has_only_Says ya"
by (auto simp: has_only_Says_def dest: ya_has_only_Says')
lemma ya_is_regular [iff]: "regular ya"
apply (simp only: regular_def, clarify)
apply (erule ya.induct, simp_all add: initState.simps knows.simps)
by (auto dest: parts_sub)
subsection\<open>guardedness of KAB\<close>
lemma Guard_KAB [rule_format]: "\<lbrakk>evs \<in> ya; A \<notin> bad; B \<notin> bad\<rbrakk> \<Longrightarrow>
ya3 A B NA NB K \<in> set evs \<longrightarrow> GuardK K {shrK A,shrK B} (spies evs)"
apply (erule ya.induct)
(* Nil *)
apply simp_all
(* Fake *)
apply (clarify, erule in_synth_GuardK, erule GuardK_analz, simp)
(* YA1 *)
(* YA2 *)
apply safe
apply (blast dest: Says_imp_spies)
(* YA3 *)
apply blast
apply (drule_tac A=Server in Key_neq, simp+, rule No_Key, simp)
apply (drule_tac A=Server in Key_neq, simp+, rule No_Key, simp)
(* YA4 *)
apply (blast dest: Says_imp_spies in_GuardK_kparts)
by blast
subsection\<open>session keys are not symmetric keys\<close>
lemma KAB_isnt_shrK [rule_format]: "evs \<in> ya \<Longrightarrow>
ya3 A B NA NB K \<in> set evs \<longrightarrow> K \<notin> range shrK"
by (erule ya.induct, auto)
lemma ya3_shrK: "evs \<in> ya \<Longrightarrow> ya3 A B NA NB (shrK C) \<notin> set evs"
by (blast dest: KAB_isnt_shrK)
subsection\<open>ya2' implies ya1'\<close>
lemma ya2'_parts_imp_ya1'_parts [rule_format]:
"\<lbrakk>evs \<in> ya; B \<notin> bad\<rbrakk> \<Longrightarrow>
Ciph B \<lbrace>Agent A, Nonce NA, Nonce NB\<rbrace> \<in> parts (spies evs) \<longrightarrow>
\<lbrace>Agent A, Nonce NA\<rbrace> \<in> spies evs"
by (erule ya.induct, auto dest: Says_imp_spies intro: parts_parts)
lemma ya2'_imp_ya1'_parts: "\<lbrakk>ya2' B' A B NA NB \<in> set evs; evs \<in> ya; B \<notin> bad\<rbrakk>
\<Longrightarrow> \<lbrace>Agent A, Nonce NA\<rbrace> \<in> spies evs"
by (blast dest: Says_imp_spies ya2'_parts_imp_ya1'_parts)
subsection\<open>uniqueness of NB\<close>
lemma NB_is_uniq_in_ya2'_parts [rule_format]: "\<lbrakk>evs \<in> ya; B \<notin> bad; B' \<notin> bad\<rbrakk> \<Longrightarrow>
Ciph B \<lbrace>Agent A, Nonce NA, Nonce NB\<rbrace> \<in> parts (spies evs) \<longrightarrow>
Ciph B' \<lbrace>Agent A', Nonce NA', Nonce NB\<rbrace> \<in> parts (spies evs) \<longrightarrow>
A=A' \<and> B=B' \<and> NA=NA'"
apply (erule ya.induct, simp_all, clarify)
apply (drule Crypt_synth_insert, simp+)
apply (drule Crypt_synth_insert, simp+, safe)
apply (drule not_used_parts_false, simp+)+
by (drule Says_not_parts, simp+)+
lemma NB_is_uniq_in_ya2': "\<lbrakk>ya2' C A B NA NB \<in> set evs;
ya2' C' A' B' NA' NB \<in> set evs; evs \<in> ya; B \<notin> bad; B' \<notin> bad\<rbrakk>
\<Longrightarrow> A=A' \<and> B=B' \<and> NA=NA'"
by (drule NB_is_uniq_in_ya2'_parts, auto dest: Says_imp_spies)
subsection\<open>ya3' implies ya2'\<close>
lemma ya3'_parts_imp_ya2'_parts [rule_format]: "\<lbrakk>evs \<in> ya; A \<notin> bad\<rbrakk> \<Longrightarrow>
Ciph A \<lbrace>Agent B, Key K, Nonce NA, Nonce NB\<rbrace> \<in> parts (spies evs)
\<longrightarrow> Ciph B \<lbrace>Agent A, Nonce NA, Nonce NB\<rbrace> \<in> parts (spies evs)"
apply (erule ya.induct, simp_all)
apply (clarify, drule Crypt_synth_insert, simp+)
apply (blast intro: parts_sub, blast)
by (auto dest: Says_imp_spies parts_parts)
lemma ya3'_parts_imp_ya2' [rule_format]: "\<lbrakk>evs \<in> ya; A \<notin> bad\<rbrakk> \<Longrightarrow>
Ciph A \<lbrace>Agent B, Key K, Nonce NA, Nonce NB\<rbrace> \<in> parts (spies evs)
\<longrightarrow> (\<exists>B'. ya2' B' A B NA NB \<in> set evs)"
apply (erule ya.induct, simp_all, safe)
apply (drule Crypt_synth_insert, simp+)
apply (drule Crypt_synth_insert, simp+, blast)
apply blast
apply blast
by (auto dest: Says_imp_spies2 parts_parts)
lemma ya3'_imp_ya2': "\<lbrakk>ya3' S Y A B NA NB K \<in> set evs; evs \<in> ya; A \<notin> bad\<rbrakk>
\<Longrightarrow> (\<exists>B'. ya2' B' A B NA NB \<in> set evs)"
by (drule ya3'_parts_imp_ya2', auto dest: Says_imp_spies)
subsection\<open>ya3' implies ya3\<close>
lemma ya3'_parts_imp_ya3 [rule_format]: "\<lbrakk>evs \<in> ya; A \<notin> bad\<rbrakk> \<Longrightarrow>
Ciph A \<lbrace>Agent B, Key K, Nonce NA, Nonce NB\<rbrace> \<in> parts(spies evs)
\<longrightarrow> ya3 A B NA NB K \<in> set evs"
apply (erule ya.induct, simp_all, safe)
apply (drule Crypt_synth_insert, simp+)
by (blast dest: Says_imp_spies2 parts_parts)
lemma ya3'_imp_ya3: "\<lbrakk>ya3' S Y A B NA NB K \<in> set evs; evs \<in> ya; A \<notin> bad\<rbrakk>
\<Longrightarrow> ya3 A B NA NB K \<in> set evs"
by (blast dest: Says_imp_spies ya3'_parts_imp_ya3)
subsection\<open>guardedness of NB\<close>
definition ya_keys :: "agent \<Rightarrow> agent \<Rightarrow> nat \<Rightarrow> nat \<Rightarrow> event list \<Rightarrow> key set" where
"ya_keys A B NA NB evs \<equiv> {shrK A,shrK B} \<union> {K. ya3 A B NA NB K \<in> set evs}"
lemma Guard_NB [rule_format]: "\<lbrakk>evs \<in> ya; A \<notin> bad; B \<notin> bad\<rbrakk> \<Longrightarrow>
ya2 A B NA NB \<in> set evs \<longrightarrow> Guard NB (ya_keys A B NA NB evs) (spies evs)"
apply (erule ya.induct)
(* Nil *)
apply (simp_all add: ya_keys_def)
(* Fake *)
apply safe
apply (erule in_synth_Guard, erule Guard_analz, simp, clarify)
apply (frule_tac B=B in Guard_KAB, simp+)
apply (drule_tac p=ya in GuardK_Key_analz, simp+)
apply (blast dest: KAB_isnt_shrK, simp)
(* YA1 *)
apply (drule_tac n=NB in Nonce_neq, simp+, rule No_Nonce, simp)
(* YA2 *)
apply blast
apply (drule Says_imp_spies)
apply (drule_tac n=NB in Nonce_neq, simp+)
apply (drule_tac n'=NAa in in_Guard_kparts_neq, simp+)
apply (rule No_Nonce, simp)
(* YA3 *)
apply (rule Guard_extand, simp, blast)
apply (case_tac "NAa=NB", clarify)
apply (frule Says_imp_spies)
apply (frule in_Guard_kparts_Crypt, simp+)
apply (frule_tac A=A and B=B and NA=NA and NB=NB and C=Ba in ya3_shrK, simp)
apply (drule ya2'_imp_ya1'_parts, simp, blast, blast)
apply (case_tac "NBa=NB", clarify)
apply (frule Says_imp_spies)
apply (frule in_Guard_kparts_Crypt, simp+)
apply (frule_tac A=A and B=B and NA=NA and NB=NB and C=Ba in ya3_shrK, simp)
apply (drule NB_is_uniq_in_ya2', simp+, blast, simp+)
apply (simp add: No_Nonce, blast)
(* YA4 *)
apply (blast dest: Says_imp_spies)
apply (case_tac "NBa=NB", clarify)
apply (frule_tac A=S in Says_imp_spies)
apply (frule in_Guard_kparts_Crypt, simp+)
apply (blast dest: Says_imp_spies)
apply (case_tac "NBa=NB", clarify)
apply (frule_tac A=S in Says_imp_spies)
apply (frule in_Guard_kparts_Crypt, simp+, blast, simp+)
apply (frule_tac A=A and B=B and NA=NA and NB=NB and C=Aa in ya3_shrK, simp)
apply (frule ya3'_imp_ya2', simp+, blast, clarify)
apply (frule_tac A=B' in Says_imp_spies)
apply (rotate_tac -1, frule in_Guard_kparts_Crypt, simp+)
apply (frule_tac A=A and B=B and NA=NA and NB=NB and C=Ba in ya3_shrK, simp)
apply (drule NB_is_uniq_in_ya2', simp+, blast, clarify)
apply (drule ya3'_imp_ya3, simp+)
apply (simp add: Guard_Nonce)
apply (simp add: No_Nonce)
done
end
|
Formal statement is: lemma convergent_mult_const_right_iff: fixes c :: "'a::{field,topological_semigroup_mult}" assumes "c \<noteq> 0" shows "convergent (\<lambda>n. f n * c) \<longleftrightarrow> convergent f"
Informal statement is: If $c \neq 0$, then the sequence $f_n c$ converges if and only if the sequence $f_n$ converges.
|
State Before: R : Type u
a✝ b : R
m n✝ : ℕ
inst✝ : Semiring R
p q : R[X]
n : ℕ
a : R
H : a ≠ 0
⊢ support (↑(monomial n) a) = {n} State After: R : Type u
a✝ b : R
m n✝ : ℕ
inst✝ : Semiring R
p q : R[X]
n : ℕ
a : R
H : a ≠ 0
⊢ (match { toFinsupp := Finsupp.single n a } with
| { toFinsupp := p } => p.support) =
{n} Tactic: rw [← ofFinsupp_single, support] State Before: R : Type u
a✝ b : R
m n✝ : ℕ
inst✝ : Semiring R
p q : R[X]
n : ℕ
a : R
H : a ≠ 0
⊢ (match { toFinsupp := Finsupp.single n a } with
| { toFinsupp := p } => p.support) =
{n} State After: no goals Tactic: exact Finsupp.support_single_ne_zero _ H
|
#include "pandora.h"
#include "navi/logic/navi_bus.h"
#include "win_utls/filepropty.h"
#include <boost/filesystem.hpp>
#include "context/ipc_context.h"
#include "base/sys_env.h"
#include "base/shared_data.h"
#include <fstream>
#include "remote/remote_impl.h"
#include "json/json.h"
#include "baselib/encrypt_str.h"
#include "baselib/base64.h"
using namespace std;
using namespace boost::filesystem;
namespace bfs=boost::filesystem;
namespace yyou
{
navi_env* navi_env::ins()
{
static navi_env s_ins ;
return &s_ins;
}
void navi_env::init_start(const yyou::game_account& account, xlogger_holder* logger )
{
_logger = logger;
_DBG_LOG(_logger) << "init logs over";
init_main_conf();
yyou::remote::cache_sgt_admin::init(remote_cache_path());
init_user_profile();
init_game_account(account);
//init dir
init_dir();
}
yyou::user_profile::sptr navi_env::get_user_profile()
{
return _user_profile;
}
yyou::pandora_conf::sptr navi_env::get_main_conf()
{
return _pandora_conf;
}
std::wstring navi_env::user_game_conf_dir()
{
std::wstringstream path;
path << _pandora_conf->user_conf_dir(_user_profile->id) << _game_account->game<<L"\\" ;
std::wstring tmp_path =std_path(path.str());
return tmp_path;
}
std::wstring navi_env::user_game_data_dir()
{
std::wstringstream path;
path << _pandora_conf->user_data_dir(_user_profile->id) <<_game_account->game<<L"\\" ;
std::wstring tmp_path =std_path(path.str());
return tmp_path;
}
std::wstring navi_env::account_data_dir_path()
{
std::wstring account_key = _game_account->isp+L"_"+_game_account->username ;
std::wstringstream path;
path << user_game_data_dir() << account_key <<L"\\";
std::wstring tmp_path =std_path(path.str());
return tmp_path;
}
std::wstring navi_env::account_conf_dir_path()
{
std::wstring account_key = _game_account->isp+L"_"+_game_account->username;
std::wstringstream path;
path << user_game_conf_dir() << account_key <<L"\\";
std::wstring tmp_path =std_path(path.str());
return tmp_path;
}
std::wstring navi_env::account_conf_file_path( const wchar_t * file_name )
{
std::wstringstream path;
path << account_conf_dir_path() <<file_name ;
ensure_dir_exist(path.str());
return path.str().c_str();
}
std::wstring navi_env::account_data_file_path( const wchar_t * file_name )
{
std::wstringstream path;
path << account_data_dir_path() <<file_name ;
ensure_dir_exist(path.str());
return path.str().c_str();
}
std::wstring navi_env::user_game_conf_file( const wchar_t * file_name )
{
std::wstringstream path;
path << user_game_conf_dir() <<file_name ;
ensure_dir_exist(path.str());
return path.str().c_str();
}
std::wstring navi_env::user_game_data_file(const wchar_t * file_name )
{
std::wstringstream path;
path << user_game_data_dir() <<file_name ;
ensure_dir_exist(path.str());
return path.str().c_str();
}
std::wstring navi_env::user_conf_dir()
{
std::wstringstream path;
path << _pandora_conf->user_conf_dir(_user_profile->id) ;
std::wstring tmp_path =std_path(path.str());
return tmp_path;
}
std::wstring navi_env::user_conf_file( const wchar_t * file_name )
{
std::wstringstream path;
path << user_conf_dir() <<file_name ;
ensure_dir_exist(path.str());
return path.str().c_str();
}
navi_env::navi_env()
{
_pandora_conf.reset(new yyou::pandora_conf);
_user_profile.reset(new yyou::user_profile);
}
std::wstring navi_env::ensure_dir_exist( const std::wstring& path_str )
{
wstring ensure_path = path_str;
if(!bfs::is_directory(path_str))
{
ensure_path = bfs::wpath(path_str).parent_path().string();
}
if(!bfs::exists(ensure_path))
{
bfs::create_directories(ensure_path);
}
return path_str;
}
void navi_env::init_game_account( yyou::game_account account )
{
_game_account.reset(new yyou::game_account());
_game_account->isp = account.isp;
_game_account->game = account.game;
_game_account->service = account.service;
_game_account->username = account.username ;
_game_account->nickname = account.nickname ;
_game_account->password = account.password;
_game_account->defaulturl = account.defaulturl;
_game_account->args = account.args;
}
std::wstring navi_env::http_cache_path()
{
/*
std::wstringstream path;
path << _pandora_conf->cache_dir() << L"\\requests\\"<<_game_account->game<<L"\\" ;
std::wstring tmp_path =std_path(path.str());
return tmp_path;
*/
TCHAR filePath[MAX_PATH];
::SHGetSpecialFolderPath(NULL, filePath, CSIDL_COMMON_APPDATA, FALSE);
std::wstring wstr(filePath);
std::wstring tmp_path = un_std_path(wstr+L"\\yunyou\\cache\\game_cache\\"+_game_account->game+L"\\");
return tmp_path;
}
std::wstring navi_env::cookie_path()
{
std::wstringstream path;
path << _pandora_conf->cache_dir() << L"\\cookies\\"<<_game_account->username<<L"\\" ;
std::wstring tmp_path =std_path(path.str());
return tmp_path;
}
std::wstring navi_env::ie_cache_path()
{
/*
std::wstringstream path;
path << _pandora_conf->cache_dir() << L"\\ie\\"<<_game_account->username<<L"\\" ;
std::wstring tmp_path =std_path(path.str());
*/
TCHAR filePath[MAX_PATH];
::SHGetSpecialFolderPath(NULL, filePath, CSIDL_COMMON_APPDATA, FALSE);
std::wstring wstr(filePath);
std::wstring tmp_path =std_path(wstr+L"\\yunyou\\_temp\\ie\\"+_game_account->username+L"\\");
return tmp_path;
}
std::wstring navi_env::flash_cookie_path()
{
TCHAR filePath[MAX_PATH];
::SHGetSpecialFolderPath(NULL, filePath, CSIDL_COMMON_APPDATA, FALSE);
std::wstring wstr(filePath);
std::wstring tmp_path = un_std_path(wstr+L"\\yunyou\\_temp\\flash_cookies\\"+_game_account->username+L"\\");
return tmp_path;
}
std::wstring navi_env::knowledge_path()
{
return std_path(sys_env::ins()->get_res_data_path()+L"\\knowledge\\");
}
std::wstring navi_env::local_data_path()
{
return std_path(sys_env::ins()->get_res_data_path());
}
std::wstring navi_env::conf_dir()
{
std::wstringstream path;
path << _pandora_conf->conf_dir() ;
std::wstring tmp_path =std_path(path.str());
return tmp_path;
}
void navi_env::clear_cache()
{
if(boost::filesystem::exists(http_cache_path()))
boost::filesystem::remove_all(http_cache_path()) ;
if(boost::filesystem::exists(ie_cache_path()))
boost::filesystem::remove_all(ie_cache_path()) ;
if(boost::filesystem::exists(flash_cookie_path()))
boost::filesystem::remove_all(flash_cookie_path()) ;
}
void navi_env::clear_cookie()
{
if(boost::filesystem::exists(cookie_path()))
boost::filesystem::remove_all(cookie_path()) ;
}
void navi_env::init_user_profile()
{
_DBG_LOG(LOG_INS_MAIN) << " == init_user_profile BEGIN ==";
std::wstring user_profile_str = shared_data_svc::ins()->get(L"user");
if(user_profile_str.size()>0)
yyou::js_load_conf(*_user_profile,yyou::wstr2str(user_profile_str));
_DBG_LOG(LOG_INS_MAIN) <<"user_profile id="<< _user_profile->id;
_DBG_LOG(LOG_INS_MAIN) << " == init_user_profile End ==";
}
void navi_env::init_main_conf()
{
_pandora_conf = sys_env::ins()->get_main_conf();
}
std::wstring navi_env::remote_cache_path()
{
std::wstringstream path;
path << _pandora_conf->cache_dir() << L"\\remote\\" ;
std::wstring tmp_path =std_path(path.str());
return tmp_path;
}
void navi_env::set_user_profile( user_profile* conf )
{
_user_profile->id = conf->id;
_user_profile->username = conf->username;
_DBG_LOG(LOG_INS_MAIN) <<"set_user_profile "<< _user_profile->id;
}
std::wstring navi_env::account_data_log_dir_path()
{
std::wstring account_key = _game_account->isp+L"_"+_game_account->username ;
std::wstringstream path;
path << account_data_dir_path() << L"log\\";
std::wstring tmp_path =std_path(path.str());
return tmp_path;
}
std::wstring navi_env::account_data_db_dir_path()
{
std::wstring account_key = _game_account->isp+L"_"+_game_account->username ;
std::wstringstream path;
path << account_data_dir_path() << L"db\\";
std::wstring tmp_path =std_path(path.str());
return tmp_path;
}
std::wstring navi_env::account_data_db_file_path( const wchar_t * file_name )
{
std::wstringstream path;
path << account_data_db_dir_path() <<file_name ;
ensure_dir_exist(path.str());
return path.str().c_str();
}
void navi_env::init_dir()
{
boost::filesystem::create_directories(user_game_conf_dir());
boost::filesystem::create_directories(user_game_data_dir());
boost::filesystem::create_directories(account_data_dir_path());
boost::filesystem::create_directories(account_conf_dir_path());
boost::filesystem::create_directories(user_conf_dir());
boost::filesystem::create_directories(http_cache_path());
boost::filesystem::create_directories(ie_cache_path());
boost::filesystem::create_directories(flash_cookie_path());
boost::filesystem::create_directories(cookie_path());
boost::filesystem::create_directories(remote_cache_path());
boost::filesystem::create_directories(account_data_log_dir_path());
boost::filesystem::create_directories(account_data_db_dir_path());
}
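// Load a per-app config for 'app': prefer the encrypted .db file
// (enc::decrypt, then base64 decode), fall back to the plain .conf file,
// then parse the JSON payload into 'conf'.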
void navi_env::get_qqapp_conf( std::wstring app,qqapp_conf* conf )
{
std::wstring conf_dir = sys_env::ins()->get_res_data_path()+L"\\app_confs\\";
std::wstring dat_file = conf_dir+app+L".conf";
std::wstring db_file = conf_dir+app+L".db";
std::string content;
if(boost::filesystem::exists(db_file))
{
content=read_file(db_file);
content =cbase64::deBase64(enc::decrypt(content));
}
else
{
content=read_file(dat_file);
}
if(content.size()>0)
{
Json::Reader reader;
Json::Value json_value;
if(reader.parse(content, json_value))
{
conf->show_scroll = json_value["show_scroll"].asBool();
conf->can_rightkey = json_value["can_rightkey"].asBool();
}
}
}
}
<center>
## [mlcourse.ai](mlcourse.ai) – Open Machine Learning Course
### <center> Author: Ilya Larchenko, ODS Slack ilya_l
## <center> Individual data analysis project
## 1. Data description
__I will analyse the California Housing Data (1990). It can be downloaded from Kaggle: [https://www.kaggle.com/harrywang/housing](https://www.kaggle.com/harrywang/housing)__
We will predict the median house value in a block.
To start, you need to download the file housing.csv.zip. Let's load the data and look at it.
```python
import pandas as pd
import numpy as np
import os
%matplotlib inline
import warnings # `do not disturb` mode
warnings.filterwarnings('ignore')
```
```python
# change this if needed
PATH_TO_DATA = 'data'
```
```python
full_df = pd.read_csv(os.path.join(PATH_TO_DATA, 'housing.csv.zip'), compression ='zip')
print(full_df.shape)
full_df.head()
```
The data consists of 20640 rows and 10 features:
1. longitude: A measure of how far west a house is; a higher value is farther west
2. latitude: A measure of how far north a house is; a higher value is farther north
3. housingMedianAge: Median age of a house within a block; a lower number is a newer building
4. totalRooms: Total number of rooms within a block
5. totalBedrooms: Total number of bedrooms within a block
6. population: Total number of people residing within a block
7. households: Total number of households, a group of people residing within a home unit, for a block
8. medianIncome: Median income for households within a block of houses (measured in tens of thousands of US Dollars)
9. medianHouseValue: Median house value for households within a block (measured in US Dollars)
10. oceanProximity: Location of the house w.r.t ocean/sea
*median_house_value* is our target feature; we will use the other features to predict it.
The task is to predict the median cost of houses in a particular block, based on the block's location and basic sociodemographic data.
Let's divide the dataset into train (75%) and test (25%) parts.
```python
%%time
from sklearn.model_selection import train_test_split
train_df, test_df = train_test_split(full_df,shuffle = True, test_size = 0.25, random_state=17)
train_df=train_df.copy()
test_df=test_df.copy()
print(train_df.shape)
print(test_df.shape)
```
All further analysis will be done on the train set, while feature generation and processing will be applied to both sets simultaneously.
## 2-3. Primary data analysis / Primary visual data analysis
```python
train_df.describe()
```
```python
train_df.info()
```
We can see that most columns have no NaN values (except total_bedrooms), most features are floats, and only one feature is categorical - ocean_proximity.
```python
train_df[pd.isnull(train_df).any(axis=1)].head(10)
```
There is no obvious reason for some total_bedrooms values to be NaN. The number of NaNs is about 1% of the dataset. We could just drop these rows or fill them with mean/median values, but let's wait a while and deal with the blanks in a smarter manner after the initial data analysis.
Let's create a list of the numerical feature names (it will be useful later).
```python
numerical_features=list(train_df.columns)
numerical_features.remove('ocean_proximity')
numerical_features.remove('median_house_value')
print(numerical_features)
```
Let's look at the target feature distribution:
```python
train_df['median_house_value'].hist()
```
We can see visually that the distribution is skewed and not normal. It also seems that the values are clipped somewhere near 500 000. We can check this numerically.
```python
max_target=train_df['median_house_value'].max()
print("The largest median value:",max_target)
print("The # of values, equal to the largest:", sum(train_df['median_house_value']==max_target))
print("The % of values, equal to the largest:", sum(train_df['median_house_value']==max_target)/train_df.shape[0])
```
Almost 5% of all values are exactly 500 001, which supports our clipping theory. Let's check for clipping of small values:
```python
min_target=train_df['median_house_value'].min()
print("The smallest median value:",min_target)
print("The # of values, equal to the smallest:", sum(train_df['median_house_value']==min_target))
print("The % of values, equal to the smallest:", sum(train_df['median_house_value']==min_target)/train_df.shape[0])
```
This time it looks much better: 14 999 is a slightly artificial value, but such values are common for prices, and there are only 4 of them. So the small values are probably not clipped.
Let's conduct some normality tests:
```python
from statsmodels.graphics.gofplots import qqplot
from matplotlib import pyplot
qqplot(train_df['median_house_value'], line='s')
pyplot.show()
```
```python
from scipy.stats import normaltest
stat, p = normaltest(train_df['median_house_value'])
print('Statistics=%.3f, p=%.3f' % (stat, p))
alpha = 0.05
if p < alpha: # null hypothesis: x comes from a normal distribution
print("The null hypothesis can be rejected")
else:
print("The null hypothesis cannot be rejected")
```
The QQ-plot and the D'Agostino-Pearson normality test show that the distribution is far from normal. We can try to use log(1+x) to make it more normal:
```python
target_log=np.log1p(train_df['median_house_value'])
qqplot(target_log, line='s')
pyplot.show()
```
```python
stat, p = normaltest(target_log)
print('Statistics=%.3f, p=%.3f' % (stat, p))
alpha = 0.05
if p < alpha: # null hypothesis: x comes from a normal distribution
print("The null hypothesis can be rejected")
else:
print("The null hypothesis cannot be rejected")
```
This graph looks much better; the only non-normal parts are the clipped high prices and the very low prices. Unfortunately, we cannot reconstruct the clipped data, and statistically the distribution is still not normal - the p-value is 0, so the null hypothesis of normality can be rejected.
Anyway, predicting target_log instead of the target may be a good choice for us, but we should still check this during the model validation phase.
```python
train_df['median_house_value_log']=np.log1p(train_df['median_house_value'])
test_df['median_house_value_log']=np.log1p(test_df['median_house_value'])
```
Now let's analyze the numerical features. First of all, we need to look at their distributions.
```python
train_df[numerical_features].hist(bins=50, figsize=(10, 10))
```
Some features are significantly skewed, and our "log trick" should be helpful:
```python
skewed_features=['households','median_income','population', 'total_bedrooms', 'total_rooms']
log_numerical_features=[]
for f in skewed_features:
train_df[f + '_log']=np.log1p(train_df[f])
test_df[f + '_log']=np.log1p(test_df[f])
log_numerical_features.append(f + '_log')
```
```python
train_df[log_numerical_features].hist(bins=50, figsize=(10, 10))
```
Our new features look much better (during the modeling phase we can use the original ones, the new ones, or both).
housing_median_age looks clipped as well. Let's look at its highest value more closely.
```python
max_house_age=train_df['housing_median_age'].max()
print("The largest value:",max_house_age)
print("The # of values, equal to the largest:", sum(train_df['housing_median_age']==max_house_age))
print("The % of values, equal to the largest:", sum(train_df['housing_median_age']==max_house_age)/train_df.shape[0])
```
It is very likely that the data is clipped (there is also a small chance that in 1938 there was a great construction project in California, but that seems less likely). We can't recreate the original values, but it can be useful to create a new binary feature indicating that the house age was clipped.
```python
train_df['age_clipped']=train_df['housing_median_age']==max_house_age
test_df['age_clipped']=test_df['housing_median_age']==max_house_age
```
Now we will analyse the correlation between the features and the target variable.
```python
import matplotlib.pyplot as plt
import seaborn as sns
corr_y = pd.DataFrame(train_df).corr()
plt.rcParams['figure.figsize'] = (20, 16) # figure size
sns.heatmap(corr_y,
xticklabels=corr_y.columns.values,
yticklabels=corr_y.columns.values, annot=True)
```
We can see some (maybe obvious) patterns here:
- House values are significantly correlated with median income
- The number of households is not 100% correlated with population; we can try to add average_size_of_household as a feature
- Longitude and latitude should be analyzed separately (just a correlation with the target variable is not very useful)
- There is a set of highly correlated features: numbers of rooms, bedrooms, population and households. It can be useful to reduce the dimensionality of this subset, especially if we use linear models
- total_bedrooms is one of these highly correlated features, which means we can fill its NaN values with high precision using the simplest linear regression
Let's try to fill NaNs with simple linear regression:
```python
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
lin = LinearRegression()
# we will train our model based on all numerical non-target features with not NaN total_bedrooms
appropriate_columns = train_df.drop(['median_house_value','median_house_value_log',
'ocean_proximity', 'total_bedrooms_log'],axis=1)
train_data=appropriate_columns[~pd.isnull(train_df).any(axis=1)]
# model will be validated on 25% of train dataset
# theoretically we can use even our test_df dataset (as we don't use target) for this task, but we will not
temp_train, temp_valid = train_test_split(train_data,shuffle = True, test_size = 0.25, random_state=17)
lin.fit(temp_train.drop(['total_bedrooms'],axis=1), temp_train['total_bedrooms'])
np.sqrt(mean_squared_error(lin.predict(temp_valid.drop(['total_bedrooms'],axis=1)),
temp_valid['total_bedrooms']))
```
RMSE on the validation set is 64.5. Let's compare this with the best constant prediction - what if we fill the NaNs with the mean value:
```python
np.sqrt(mean_squared_error(np.ones(len(temp_valid['total_bedrooms']))*temp_train['total_bedrooms'].mean(),
temp_valid['total_bedrooms']))
```
Obviously our linear regression approach is much better. Let's train the model on the whole train dataset and apply it to the rows with blanks. But beforehand we will "remember" the rows with NaNs, because there is a chance that this carries useful information.
```python
lin.fit(train_data.drop(['total_bedrooms'],axis=1), train_data['total_bedrooms'])
train_df['total_bedrooms_is_nan']=pd.isnull(train_df).any(axis=1).astype(int)
test_df['total_bedrooms_is_nan']=pd.isnull(test_df).any(axis=1).astype(int)
train_df['total_bedrooms'].loc[pd.isnull(train_df).any(axis=1)]=\
lin.predict(train_df.drop(['median_house_value','median_house_value_log','total_bedrooms','total_bedrooms_log',
'ocean_proximity','total_bedrooms_is_nan'],axis=1)[pd.isnull(train_df).any(axis=1)])
test_df['total_bedrooms'].loc[pd.isnull(test_df).any(axis=1)]=\
lin.predict(test_df.drop(['median_house_value','median_house_value_log','total_bedrooms','total_bedrooms_log',
'ocean_proximity','total_bedrooms_is_nan'],axis=1)[pd.isnull(test_df).any(axis=1)])
#linear regression can lead to negative predictions, let's change it
test_df['total_bedrooms']=test_df['total_bedrooms'].apply(lambda x: max(x,0))
train_df['total_bedrooms']=train_df['total_bedrooms'].apply(lambda x: max(x,0))
```
Let's update 'total_bedrooms_log' and check that there are no NaNs left:
```python
train_df['total_bedrooms_log']=np.log1p(train_df['total_bedrooms'])
test_df['total_bedrooms_log']=np.log1p(test_df['total_bedrooms'])
```
```python
print(train_df.info())
print(test_df.info())
```
Having filled in the blanks, let's have a closer look at the dependences between some numerical features.
```python
sns.set()
sns.pairplot(train_df[log_numerical_features+['median_house_value_log']])
```
It seems there are no new insights about the numerical features (only confirmation of the old ones).
Let's try to do the same thing for a geographically local subset of our data.
```python
sns.set()
local_coord=[-122, 41] # the point near which we want to look at our variables
euc_dist_th = 2 # distance threshold
euclid_distance=train_df[['latitude','longitude']].apply(lambda x:
np.sqrt((x['longitude']-local_coord[0])**2+
(x['latitude']-local_coord[1])**2), axis=1)
# indicate whether the point is within the threshold or not
indicator=pd.Series(euclid_distance<=euc_dist_th, name='indicator')
print("Data points within threshold:", sum(indicator))
# a small map to visualize the region for analysis
sns.lmplot('longitude', 'latitude', data=pd.concat([train_df,indicator], axis=1), hue='indicator', markers ='.', fit_reg=False, height=5)
# pairplot
sns.pairplot(train_df[log_numerical_features+['median_house_value_log']][indicator])
```
We can see that within any local territory (you can play with local_coord and euc_dist_th) the linear dependences between variables become stronger, especially median_income_log / median_house_value_log. So the coordinates are a very important factor for our task (we will analyze them later).
Now let's move on to the categorical feature "ocean_proximity". It is not 100% clear what its values mean, so let's first of all plot it on the map.
```python
sns.lmplot('longitude', 'latitude', data=train_df,markers ='.', hue='ocean_proximity', fit_reg=False, height=5)
plt.show()
```
Now we better understand the meaning of the different classes. Let's look at the data.
```python
value_count=train_df['ocean_proximity'].value_counts()
value_count
```
```python
plt.figure(figsize=(12,5))
sns.barplot(value_count.index, value_count.values)
plt.title('Ocean Proximity')
plt.ylabel('Number of Occurrences')
plt.xlabel('Ocean Proximity')
plt.figure(figsize=(12,5))
plt.title('House Value depending on Ocean Proximity')
sns.boxplot(x="ocean_proximity", y="median_house_value_log", data=train_df)
```
We can see that INLAND houses have significantly lower prices. The distributions for the other classes differ, but not by much. There is no clear trend in house price vs. proximity, so we will not try to invent a complex encoding approach. Let's just do OHE for this feature.
```python
ocean_proximity_dummies = pd.get_dummies(pd.concat([train_df['ocean_proximity'],test_df['ocean_proximity']]),
drop_first=True)
```
```python
dummies_names=list(ocean_proximity_dummies.columns)
```
```python
train_df=pd.concat([train_df,ocean_proximity_dummies[:train_df.shape[0]]], axis=1 )
test_df=pd.concat([test_df,ocean_proximity_dummies[train_df.shape[0]:]], axis=1 )
train_df=train_df.drop(['ocean_proximity'], axis=1)
test_df=test_df.drop(['ocean_proximity'], axis=1)
```
```python
train_df.head()
```
And finally we will explore the coordinate features.
```python
train_df[['longitude','latitude']].describe()
```
Let's plot the house_values (target) on map:
```python
from matplotlib.colors import LinearSegmentedColormap
plt.figure(figsize=(10,10))
cmap = LinearSegmentedColormap.from_list(name='name', colors=['green','yellow','red'])
f, ax = plt.subplots()
points = ax.scatter(train_df['longitude'], train_df['latitude'], c=train_df['median_house_value_log'],
s=10, cmap=cmap)
f.colorbar(points)
```
It seems that the average value of the geographically nearest houses could be a very good feature.
We can also see that the most expensive houses are located near San Francisco (37.7749° N, 122.4194° W) and Los Angeles (34.0522° N, 118.2437° W). Based on this, we can use the distances to these cities as additional features.
We can also see that the most expensive houses lie approximately on a straight line and become cheaper as we move to the north-east. This means that a linear combination of the coordinates themselves can be a useful feature as well.
```python
sf_coord=[-122.4194, 37.7749]
la_coord=[-118.2437, 34.0522]
train_df['distance_to_SF']=np.sqrt((train_df['longitude']-sf_coord[0])**2+(train_df['latitude']-sf_coord[1])**2)
test_df['distance_to_SF']=np.sqrt((test_df['longitude']-sf_coord[0])**2+(test_df['latitude']-sf_coord[1])**2)
train_df['distance_to_LA']=np.sqrt((train_df['longitude']-la_coord[0])**2+(train_df['latitude']-la_coord[1])**2)
test_df['distance_to_LA']=np.sqrt((test_df['longitude']-la_coord[0])**2+(test_df['latitude']-la_coord[1])**2)
```
## 4. Insights and found dependencies
Let's quickly sum up the useful things we have found so far:
- We have analyzed the features and found some approximately lognormally distributed ones among them. We have created the corresponding log features
- We have analyzed the distribution of the target feature and concluded that it may be useful to predict its log (to be checked)
- We have dealt with clipped and missing data
- We have created features corresponding to simple Euclidean distances to LA and SF
- We have also found several highly correlated variables and may work with them later
- We have already generated several new variables and will create more of them after the initial modeling phase
All explanations of these steps were already given above.
## 5. Metrics selection
This is a regression problem. Our target metric will be RMSE - it is one of the most popular regression metrics, and it has the same unit of measurement as the target value, which makes it easy to explain to other people.
\begin{align}
RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\big(d_i - f_i\big)^2}
\end{align}
where $d_i$ are the true values and $f_i$ the predictions.
Since there is a monotonic dependence between RMSE and MSE, we can optimize MSE in our model and compute RMSE only at the end. MSE is easy to optimize; it is the default loss function for most regression models.
The main drawback of MSE and RMSE is the high penalty for big prediction errors - a model can overfit to outliers. But in our case the outlying target values have already been clipped, so this is not a big problem.
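Since the square root is monotonically increasing, the model that minimizes MSE also minimizes RMSE. A minimal illustration with made-up numbers (not part of the modeling pipeline):
```python
# RMSE is just the square root of MSE, so optimizing one optimizes the other
import numpy as np
from sklearn.metrics import mean_squared_error

y_true = np.array([100000, 200000, 300000])
y_pred = np.array([110000, 190000, 320000])

mse = mean_squared_error(y_true, y_pred)
print(mse, np.sqrt(mse))
```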
## 6. Model selection
We will try to solve our problem with 3 different regression models:
- Linear regression
- Random forest
- Gradient boosting
Linear regression is fast, simple and can provide quite a good baseline result for our task.
Tree-based models can provide better results when there are complex nonlinear dependences between variables and the number of variables is small; they are also more robust to multicollinearity (and we have highly correlated variables). Moreover, in our problem the target values are clipped, and tree predictions cannot fall outside the clipping interval, which is good for this task.
The results of these models will be compared in parts 11-12 of the project. Tree-based models are expected to work better on this particular problem, but we will start with the simpler model.
We will start with standard linear regression, go through all of the modeling steps, and then do some simplified computations for the 2 other models (without an in-depth explanation of every step).
The final model selection will be done based on the results.
## 7. Data preprocessing
We have already done most of the preprocessing steps:
- OHE for the categorical features
- Filled NaNs
- Computed logs of skewed data
- Divided data into train and hold-out sets
Now let's scale all the numerical features (this is useful for linear models) and prepare the cross-validation splits, and then we are ready to proceed to modeling.
```python
from sklearn.preprocessing import StandardScaler
features_to_scale=numerical_features+log_numerical_features+['distance_to_SF','distance_to_LA']
scaler = StandardScaler()
X_train_scaled=pd.DataFrame(scaler.fit_transform(train_df[features_to_scale]),
columns=features_to_scale, index=train_df.index)
X_test_scaled=pd.DataFrame(scaler.transform(test_df[features_to_scale]),
columns=features_to_scale, index=test_df.index)
```
## 8. Cross-validation and adjustment of model hyperparameters
Let's prepare the cross-validation samples.
Since there is not a lot of data, we can easily divide it into 10 folds taken from the shuffled train data.
Within every split we will train our model on 90% of the train data and compute the CV metric on the other 10%.
We fix the random state for reproducibility.
```python
from sklearn.model_selection import KFold, cross_val_score
kf = KFold(n_splits=10, random_state=17, shuffle=True)
```
### Linear regression
For the initial baseline we will take a Ridge model with only the initial numerical and OHE features.
```python
from sklearn.linear_model import Ridge
model=Ridge(alpha=1)
X=train_df[numerical_features+dummies_names]
y=train_df['median_house_value']
cv_scores = cross_val_score(model, X, y, cv=kf, scoring='neg_mean_squared_error', n_jobs=-1)
print(np.sqrt(-cv_scores.mean()))
```
We are doing cross-validation with 10 folds, computing 'neg_mean_squared_error' (negated, because sklearn maximizes scoring functions). Our final metric is RMSE = np.sqrt(-neg_MSE).
So our baseline is RMSE = \$68 702. We will try to improve this result using everything we have discovered during the data analysis phase.
We will do the following steps:
- Use scaled features
- Add log features
- Add NaN and age clip indicating features
- Add city-distance features
- Generate several new features
- Try to predict log(target) instead of target
- Tune some hyperparameters of the model
Once again, most of the hyperparameter adjustment will be done later, after we add some new features. The cross-validation and parameter tuning process actually runs through parts 8-11.
```python
# using scaled data
X=pd.concat([train_df[dummies_names], X_train_scaled[numerical_features]], axis=1, ignore_index = True)
cv_scores = cross_val_score(model, X, y, cv=kf, scoring='neg_mean_squared_error', n_jobs=-1)
print(np.sqrt(-cv_scores.mean()))
```
```python
# adding the NaN indicator feature
X=pd.concat([train_df[dummies_names+['total_bedrooms_is_nan']],
X_train_scaled[numerical_features]], axis=1, ignore_index = True)
cv_scores = cross_val_score(model, X, y, cv=kf, scoring='neg_mean_squared_error', n_jobs=-1)
print(np.sqrt(-cv_scores.mean()))
```
```python
# adding the house age clipping indicator feature
X=pd.concat([train_df[dummies_names+['age_clipped']],
X_train_scaled[numerical_features]], axis=1, ignore_index = True)
cv_scores = cross_val_score(model, X, y, cv=kf, scoring='neg_mean_squared_error', n_jobs=-1)
print(np.sqrt(-cv_scores.mean()))
```
```python
# adding log features
X=pd.concat([train_df[dummies_names+['age_clipped']], X_train_scaled[numerical_features+log_numerical_features]],
axis=1, ignore_index = True)
cv_scores = cross_val_score(model, X, y, cv=kf, scoring='neg_mean_squared_error', n_jobs=-1)
print(np.sqrt(-cv_scores.mean()))
```
```python
# adding city distance features
X=pd.concat([train_df[dummies_names+['age_clipped']], X_train_scaled],
axis=1, ignore_index = True)
cv_scores = cross_val_score(model, X, y, cv=kf, scoring='neg_mean_squared_error', n_jobs=-1)
print(np.sqrt(-cv_scores.mean()))
```
So far the best result uses the numerical features + their logs + age_clipped + dummy variables + distances to the largest cities.
Let's try to generate new features.
## 9. Creation of new features and description of this process
Previously we have already created some new features and explained the rationale behind them. Now let's generate additional ones.
The city distance features work, but maybe there are also some non-linear dependencies between them and the target variable.
```python
sns.set()
sns.pairplot(train_df[['distance_to_SF','distance_to_LA','median_house_value_log']])
```
Visually it is not obvious, so let's create a couple of new variables and check:
```python
new_features_train_df=pd.DataFrame(index=train_df.index)
new_features_test_df=pd.DataFrame(index=test_df.index)
new_features_train_df['1/distance_to_SF']=1/(train_df['distance_to_SF']+0.001)
new_features_train_df['1/distance_to_LA']=1/(train_df['distance_to_LA']+0.001)
new_features_train_df['log_distance_to_SF']=np.log1p(train_df['distance_to_SF'])
new_features_train_df['log_distance_to_LA']=np.log1p(train_df['distance_to_LA'])
new_features_test_df['1/distance_to_SF']=1/(test_df['distance_to_SF']+0.001)
new_features_test_df['1/distance_to_LA']=1/(test_df['distance_to_LA']+0.001)
new_features_test_df['log_distance_to_SF']=np.log1p(test_df['distance_to_SF'])
new_features_test_df['log_distance_to_LA']=np.log1p(test_df['distance_to_LA'])
```
We can also generate some features correlated with prosperity:
- rooms/person - how many rooms there are per person. The higher it is, the richer the people living there and the more expensive the houses they buy
- rooms/household - a similar one, but counting the number of rooms per family (assuming household ~ family) rather than per person
- two similar features, but counting only bedrooms
```python
new_features_train_df['rooms/person']=train_df['total_rooms']/train_df['population']
new_features_train_df['rooms/household']=train_df['total_rooms']/train_df['households']
new_features_test_df['rooms/person']=test_df['total_rooms']/test_df['population']
new_features_test_df['rooms/household']=test_df['total_rooms']/test_df['households']
new_features_train_df['bedrooms/person']=train_df['total_bedrooms']/train_df['population']
new_features_train_df['bedrooms/household']=train_df['total_bedrooms']/train_df['households']
new_features_test_df['bedrooms/person']=test_df['total_bedrooms']/test_df['population']
new_features_test_df['bedrooms/household']=test_df['total_bedrooms']/test_df['households']
```
- the luxury of a house can be characterized by the ratio of bedrooms to rooms
```python
new_features_train_df['bedroom/rooms']=train_df['total_bedrooms']/train_df['total_rooms']
new_features_test_df['bedroom/rooms']=test_df['total_bedrooms']/test_df['total_rooms']
```
- the average number of persons in one household can be a signal of either poverty or wealth; in any case it can be a useful feature
```python
new_features_train_df['average_size_of_household']=train_df['population']/train_df['households']
new_features_test_df['average_size_of_household']=test_df['population']/test_df['households']
```
And finally let's scale all these features:
```python
new_features_train_df=pd.DataFrame(scaler.fit_transform(new_features_train_df),
columns=new_features_train_df.columns, index=new_features_train_df.index)
new_features_test_df=pd.DataFrame(scaler.transform(new_features_test_df),
columns=new_features_test_df.columns, index=new_features_test_df.index)
```
```python
new_features_train_df.head()
```
```python
new_features_test_df.head()
```
We will add the new features one by one and keep only those that improve our best score.
```python
# computing current best score
X=pd.concat([train_df[dummies_names+['age_clipped']], X_train_scaled],
axis=1, ignore_index = True)
cv_scores = cross_val_score(model, X, y, cv=kf, scoring='neg_mean_squared_error', n_jobs=-1)
best_score = np.sqrt(-cv_scores.mean())
print("Best score: ", best_score)
# list of the new good features
new_features_list=[]
for feature in new_features_train_df.columns:
new_features_list.append(feature)
X=pd.concat([train_df[dummies_names+['age_clipped']], X_train_scaled,
new_features_train_df[new_features_list]
],
axis=1, ignore_index = True)
cv_scores = cross_val_score(model, X, y, cv=kf, scoring='neg_mean_squared_error', n_jobs=-1)
score = np.sqrt(-cv_scores.mean())
if score >= best_score:
new_features_list.remove(feature)
print(feature, ' is not a good feature')
else:
print(feature, ' is a good feature')
print('New best score: ', score)
best_score=score
```
We have got 5 new good features. Let's update our X variable
```python
X=pd.concat([train_df[dummies_names+['age_clipped']], X_train_scaled,
new_features_train_df[new_features_list]
],
axis=1).reset_index(drop=True)
y=train_df['median_house_value'].reset_index(drop=True)
```
To deal with the log of the target we need to write our own cross-validation loop or our own predicting model. We will try the first option.
```python
from sklearn.metrics import mean_squared_error
def cross_val_score_with_log(model=model, X=X,y=y,kf=kf, use_log=False):
X_temp=np.array(X)
# if use_log parameter is true we will predict log(y+1)
if use_log:
y_temp=np.log1p(y)
else:
y_temp=np.array(y)
cv_scores=[]
for train_index, test_index in kf.split(X_temp,y_temp):
prediction = model.fit(X_temp[train_index], y_temp[train_index]).predict(X_temp[test_index])
# if use_log parameter is true we should come back to the initial target
if use_log:
prediction=np.expm1(prediction)
cv_scores.append(-mean_squared_error(y[test_index],prediction))
return np.sqrt(-np.mean(cv_scores))
```
```python
cross_val_score_with_log(X=X,y=y,kf=kf, use_log=False)
```
We have got exactly the same result as with the cross_val_score function, which means everything works OK. Now let's try setting use_log to True:
```python
cross_val_score_with_log(X=X,y=y,kf=kf, use_log=True)
```
Unfortunately, it has not helped. So we will stick to the previous version.
And now we will tune the only meaningful hyperparameter of the Ridge regression - alpha.
## 10. Plotting training and validation curves
Let's plot Validation Curve
```python
from sklearn.model_selection import validation_curve
Cs=np.logspace(-5, 4, 10)
train_scores, valid_scores = validation_curve(model, X, y, "alpha",
Cs, cv=kf, scoring='neg_mean_squared_error')
plt.plot(Cs, np.sqrt(-train_scores.mean(axis=1)), 'ro-')
plt.fill_between(x=Cs, y1=np.sqrt(-train_scores.max(axis=1)),
y2=np.sqrt(-train_scores.min(axis=1)), alpha=0.1, color = "red")
plt.plot(Cs, np.sqrt(-valid_scores.mean(axis=1)), 'bo-')
plt.fill_between(x=Cs, y1=np.sqrt(-valid_scores.max(axis=1)),
y2=np.sqrt(-valid_scores.min(axis=1)), alpha=0.1, color = "blue")
plt.xscale('log')
plt.xlabel('alpha')
plt.ylabel('RMSE')
plt.title('Regularization Parameter Tuning')
plt.show()
```
```python
Cs[np.sqrt(-valid_scores.mean(axis=1)).argmin()]
```
We can see that the curves for train and CV are very close to each other, which is a sign of underfitting. The difference between the curves does not change as alpha changes; this means that we should try models more complex than linear regression, or add more new features (e.g., polynomial ones, as sketched below).
Using this curve we can find the optimal value of alpha: it is alpha=1. Actually, our prediction barely changes when alpha goes below 1.
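For illustration, here is a sketch of how polynomial features could be added with sklearn (a hypothetical extension; it is not used in the rest of this project):
```python
# a sketch (not used below): 2nd-degree polynomial features for the linear
# model; this attacks the underfitting at the cost of many extra columns
from sklearn.preprocessing import PolynomialFeatures

poly = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly.fit_transform(X)  # X is the feature matrix defined above
print(X_poly.shape)
```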
Let's use alpha=1 and plot the learning curve
```python
from sklearn.model_selection import learning_curve
model=Ridge(alpha=1.0)
train_sizes, train_scores, valid_scores = learning_curve(model, X, y, train_sizes=list(range(50,10001,100)),
scoring='neg_mean_squared_error', cv=5)
plt.plot(train_sizes, np.sqrt(-train_scores.mean(axis=1)), 'ro-')
plt.fill_between(x=train_sizes, y1=np.sqrt(-train_scores.max(axis=1)),
y2=np.sqrt(-train_scores.min(axis=1)), alpha=0.1, color = "red")
plt.plot(train_sizes, np.sqrt(-valid_scores.mean(axis=1)), 'bo-')
plt.fill_between(x=train_sizes, y1=np.sqrt(-valid_scores.max(axis=1)),
y2=np.sqrt(-valid_scores.min(axis=1)), alpha=0.1, color = "blue")
plt.xlabel('Train size')
plt.ylabel('RMSE')
plt.title('Learning Curve')
plt.show()
```
The learning curves indicate high bias of the model - this means that we will not improve the model by adding more data, but we can try more complex models or add more features to improve the results.
This result is in line with the validation curve results. So let's move on to the more complex models.
### Random forest
Actually, we could just put all our features into the model, but we can easily improve the computational performance of tree-based models by dropping all monotonic derivatives of features, because they do not help at all.
For example, adding log(feature) doesn't help a tree-based model; it just makes training more computationally intensive.
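A quick sanity check of this claim on synthetic data (a sketch, not part of the pipeline): a tree fit on x and a tree fit on log(x) partition the training points identically, because log preserves their order.
```python
# sanity check (synthetic data): a monotonic transform of a feature does not
# change which splits a tree can make, so the fitted predictions coincide
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(17)
x_demo = rng.uniform(1, 100, size=(200, 1))
y_demo = np.sin(x_demo).ravel() + rng.normal(0, 0.1, 200)

tree_raw = DecisionTreeRegressor(max_depth=3, random_state=17).fit(x_demo, y_demo)
tree_log = DecisionTreeRegressor(max_depth=3, random_state=17).fit(np.log(x_demo), y_demo)
print(np.allclose(tree_raw.predict(x_demo), tree_log.predict(np.log(x_demo))))  # expect True
```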
So let's train a random forest regressor on a shortened set of the features:
```python
X.columns
```
```python
features_for_trees=['INLAND', 'ISLAND', 'NEAR BAY', 'NEAR OCEAN', 'age_clipped',
'longitude', 'latitude', 'housing_median_age', 'total_rooms',
'total_bedrooms', 'population', 'households', 'median_income',
'distance_to_SF', 'distance_to_LA','bedroom/rooms']
```
```python
%%time
from sklearn.ensemble import RandomForestRegressor
X_trees=X[features_for_trees]
model_rf=RandomForestRegressor(n_estimators=100, random_state=17)
cv_scores = cross_val_score(model_rf, X_trees, y, cv=kf, scoring='neg_mean_squared_error', n_jobs=-1)
print(np.sqrt(-cv_scores.mean()))
```
We can see a significant improvement compared to the linear model, and a higher n_estimators will probably help. But first, let's try to tune the other hyperparameters:
```python
from sklearn.model_selection import GridSearchCV
param_grid={'n_estimators': [100],
'max_depth': [22, 23, 24, 25],
'max_features': [5,6,7,8]}
gs=GridSearchCV(model_rf, param_grid, scoring='neg_mean_squared_error', fit_params=None, n_jobs=-1, cv=kf, verbose=1)
gs.fit(X_trees,y)
```
```python
print(np.sqrt(-gs.best_score_))
```
```python
gs.best_params_
```
```python
best_depth=gs.best_params_['max_depth']
best_features=gs.best_params_['max_features']
```
```python
%%time
model_rf=RandomForestRegressor(n_estimators=100, max_depth=best_depth, max_features=best_features, random_state=17)
cv_scores = cross_val_score(model_rf, X_trees, y, cv=kf, scoring='neg_mean_squared_error', n_jobs=-1)
print(np.sqrt(-cv_scores.mean()))
```
With relatively small effort we have got a significant improvement in the results. The Random Forest results can be improved further with a higher n_estimators; let's find the n_estimators at which the results stabilize.
```python
model_rf=RandomForestRegressor(n_estimators=200, max_depth=best_depth, max_features=best_features, random_state=17)
Cs=list(range(20,201,20))
train_scores, valid_scores = validation_curve(model_rf, X_trees, y, "n_estimators",
Cs, cv=kf, scoring='neg_mean_squared_error')
plt.plot(Cs, np.sqrt(-train_scores.mean(axis=1)), 'ro-')
plt.fill_between(x=Cs, y1=np.sqrt(-train_scores.max(axis=1)),
y2=np.sqrt(-train_scores.min(axis=1)), alpha=0.1, color = "red")
plt.plot(Cs, np.sqrt(-valid_scores.mean(axis=1)), 'bo-')
plt.fill_between(x=Cs, y1=np.sqrt(-valid_scores.max(axis=1)),
y2=np.sqrt(-valid_scores.min(axis=1)), alpha=0.1, color = "blue")
plt.xlabel('n_estimators')
plt.ylabel('RMSE')
plt.title('Validation Curve: n_estimators')
plt.show()
```
This time we can see that the train results are much better than the CV ones, but that is totally OK for a Random Forest.
Higher values of n_estimators (>100) do not help much. Let's stick to n_estimators=200 - it is high enough but not very computationally intensive.
### Gradient boosting
And finally we will try to use LightGBM to solve our problem.
We will try the model out of the box, and then tune some of its parameters using random search.
```python
# uncomment to install if you have not yet
#!pip install lightgbm
```
```python
%%time
from lightgbm.sklearn import LGBMRegressor
model_gb=LGBMRegressor()
cv_scores = cross_val_score(model_gb, X_trees, y, cv=kf, scoring='neg_mean_squared_error', n_jobs=1)
print(np.sqrt(-cv_scores.mean()))
```
LGBMRegressor has many more hyperparameters than the previous models. As this is an educational problem, we will not spend a lot of time tuning all of them. In this case RandomizedSearchCV can give us a very good result quite fast, much faster than GridSearchCV. We will do the optimization in 2 steps: model complexity optimization and convergence optimization. Let's do it.
```python
# model complexity optimization
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import randint, uniform
param_grid={'max_depth': randint(6,11),
'num_leaves': randint(7,127),
'reg_lambda': np.logspace(-3,0,100),
'random_state': [17]}
gs=RandomizedSearchCV(model_gb, param_grid, n_iter = 50, scoring='neg_mean_squared_error', fit_params=None,
n_jobs=-1, cv=kf, verbose=1, random_state=17)
gs.fit(X_trees,y)
```
```python
np.sqrt(-gs.best_score_)
```
```python
gs.best_params_
```
Let's fix n_estimators=500 - it is big enough but not too computationally intensive yet - and find the best value of the learning_rate:
```python
# model convergency optimization
param_grid={'n_estimators': [500],
'learning_rate': np.logspace(-4, 0, 100),
'max_depth': [10],
'num_leaves': [72],
'reg_lambda': [0.0010722672220103231],
'random_state': [17]}
gs=RandomizedSearchCV(model_gb, param_grid, n_iter = 20, scoring='neg_mean_squared_error', fit_params=None,
n_jobs=-1, cv=kf, verbose=1, random_state=17)
gs.fit(X_trees,y)
```
```python
np.sqrt(-gs.best_score_)
```
```python
gs.best_params_
```
We have got the best params for the gradient boosting and will use them for the final prediction.
## 11. Prediction for test or hold-out samples
Let's sum up the results of our project. We will compute RMSE on cross-validation and on the hold-out set and compare them.
```python
results_df=pd.DataFrame(columns=['model','CV_results', 'holdout_results'])
```
```python
# hold-out features and target
X_ho=pd.concat([test_df[dummies_names+['age_clipped']], X_test_scaled,
new_features_test_df[new_features_list]],axis=1).reset_index(drop=True)
y_ho=test_df['median_house_value'].reset_index(drop=True)
X_trees_ho=X_ho[features_for_trees]
```
```python
%%time
#linear model
model=Ridge(alpha=1.0)
cv_scores = cross_val_score(model, X, y, cv=kf, scoring='neg_mean_squared_error', n_jobs=-1)
score_cv=np.sqrt(-np.mean(cv_scores.mean()))
prediction_ho = model.fit(X, y).predict(X_ho)
score_ho=np.sqrt(mean_squared_error(y_ho,prediction_ho))
results_df.loc[results_df.shape[0]]=['Linear Regression', score_cv, score_ho]
```
```python
%%time
#Random Forest
model_rf=RandomForestRegressor(n_estimators=200, max_depth=23, max_features=5, random_state=17)
cv_scores = cross_val_score(model_rf, X_trees, y, cv=kf, scoring='neg_mean_squared_error', n_jobs=-1)
score_cv=np.sqrt(-np.mean(cv_scores.mean()))
prediction_ho = model_rf.fit(X_trees, y).predict(X_trees_ho)
score_ho=np.sqrt(mean_squared_error(y_ho,prediction_ho))
results_df.loc[results_df.shape[0]]=['Random Forest', score_cv, score_ho]
```
```python
%%time
#Gradient boosting
model_gb=LGBMRegressor(reg_lambda=0.0010722672220103231, max_depth=10,
n_estimators=500, num_leaves=72, random_state=17, learning_rate=0.06734150657750829)
cv_scores = cross_val_score(model_gb, X_trees, y, cv=kf, scoring='neg_mean_squared_error', n_jobs=-1)
score_cv=np.sqrt(-np.mean(cv_scores.mean()))
prediction_ho = model_gb.fit(X_trees, y).predict(X_trees_ho)
score_ho=np.sqrt(mean_squared_error(y_ho,prediction_ho))
results_df.loc[results_df.shape[0]]=['Gradient boosting', score_cv, score_ho]
```
```python
results_df
```
It seems we have done quite a good job. The cross-validation results are in line with the hold-out ones. Our best CV model - gradient boosting - turned out to be the best on the hold-out dataset as well (and it is also faster than random forest).
## 12. Conclusions
To sum up, we have got a solution that can predict the median house value in a block with RMSE of about \$46k using our best model - LightGBM. It is not an extremely precise prediction: \$46k is about 20% of the average median house price, but it seems to be close to what is achievable for these classes of models on this data (it is a popular dataset, but I have not found any solution with significantly better results).
We have used old Californian data from 1990, so the model itself is not directly useful right now. But the same approach can be used to predict modern house prices (if applied to recent market data).
We have done a lot, but the results can surely be improved; at the very least one could try:
- feature engineering: polynomial features, better distances to cities (not just Euclidean ones; an ellipse representation of cities), average target values of the geographically closest neighbours (requires a custom estimator for correct cross-validation; see the sketch below)
- PCA for dimensionality reduction (I have mentioned it but didn't use it)
- other models (at least KNN and SVM can be tried on this data)
- more time and effort could be spent on RF and LightGBM parameter tuning
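As an example of the first idea, a rough sketch of the neighbour-based feature (hypothetical code with a made-up column name neighbours_mean_value; for honest cross-validation the train-set values would have to be computed out-of-fold):
```python
# a sketch (not run above): mean target of the K geographically nearest blocks
from sklearn.neighbors import KNeighborsRegressor

knn = KNeighborsRegressor(n_neighbors=10)
knn.fit(train_df[['longitude', 'latitude']], train_df['median_house_value'])
# for the hold-out set a direct prediction is fine; for the train set itself
# out-of-fold predictions would be required to avoid target leakage
test_df['neighbours_mean_value'] = knn.predict(test_df[['longitude', 'latitude']])
```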
Credits are adapted from the album's liner notes.
Turn a jumbled mess into a well-organized closet with our white soft storage solutions. This durable piece keeps clutter at bay using every inch of available space for endless storage possibilities. This organizer has reinforced shelves for great capacity and easily attaches directly to your closet rod with Velcro-style straps. Perfect for organizing bulky sweaters, pants, shirts, and bags in your dorm room closet.
Turn a jumbled mess into a well-organized closet with our soft storage solutions. This durable piece keeps clutter at bay using every inch of available space for endless storage possibilities. This organizer has reinforced shelves for great capacity and easily attaches directly to your closet rod with Velcro-style straps. Perfect for organizing bulky sweaters, pants, shirts, and bags in your dorm room closet. One item in Honey-Can-Do's mix and match collection of sturdy polyester closet organizers available in several colors, it's a perfect blend of economy and strength. Matching storage drawers (SFT-01241), sold separately, instantly create more space for socks, undergarments, and accessories.
# -*- coding: utf-8 -*-
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
""" nnfp.py
'Neural Audio Fingerprint for High-specific Audio Retrieval based on
Contrastive Learning', https://arxiv.org/abs/2010.11910
USAGE:
Please see test() below.
"""
import numpy as np
import tensorflow as tf
assert tf.__version__ >= "2.0"
class ConvLayer(tf.keras.layers.Layer):
"""
Separable convolution layer
Arguments
---------
hidden_ch: (int)
strides: [(int, int), (int, int)]
norm: 'layer_norm1d' for normalization on Freq axis. (default)
'layer_norm2d' for normalization on FxT space
'batch_norm' or else, batch-normalization
Input
-----
x: (B,F,T,1)
[Conv1x3]>>[ELU]>>[BN]>>[Conv3x1]>>[ELU]>>[BN]
Output
------
x: (B,F,T,C) with {F=F/stride, T=T/stride, C=hidden_ch}
"""
def __init__(self,
hidden_ch=128,
strides=[(1,1),(1,1)],
norm='layer_norm2d'):
super(ConvLayer, self).__init__()
self.conv2d_1x3 = tf.keras.layers.Conv2D(hidden_ch,
kernel_size=(1, 3),
strides=strides[0],
padding='SAME',
dilation_rate=(1, 1),
kernel_initializer='glorot_uniform',
bias_initializer='zeros')
self.conv2d_3x1 = tf.keras.layers.Conv2D(hidden_ch,
kernel_size=(3, 1),
strides=strides[1],
padding='SAME',
dilation_rate=(1, 1),
kernel_initializer='glorot_uniform',
bias_initializer='zeros')
if norm == 'layer_norm1d':
self.BN_1x3 = tf.keras.layers.LayerNormalization(axis=-1)
self.BN_3x1 = tf.keras.layers.LayerNormalization(axis=-1)
elif norm == 'layer_norm2d':
self.BN_1x3 = tf.keras.layers.LayerNormalization(axis=(1, 2, 3))
self.BN_3x1 = tf.keras.layers.LayerNormalization(axis=(1, 2, 3))
else:
self.BN_1x3 = tf.keras.layers.BatchNormalization(axis=-1) # Fix axis: 2020 Apr20
self.BN_3x1 = tf.keras.layers.BatchNormalization(axis=-1)
self.forward = tf.keras.Sequential([self.conv2d_1x3,
tf.keras.layers.ELU(),
self.BN_1x3,
self.conv2d_3x1,
tf.keras.layers.ELU(),
self.BN_3x1
])
def call(self, x):
return self.forward(x)
class DivEncLayer(tf.keras.layers.Layer):
"""
Multi-head projection a.k.a. 'divide and encode' layer:
• The concept of 'divide and encode' was introduced in Lai et al.,
'Simultaneous Feature Learning and Hash Coding with Deep Neural Networks',
2015. https://arxiv.org/abs/1504.03410
• It was also adopted in Gfeller et al., 'Now Playing: Continuous
low-power music recognition', 2017. https://arxiv.org/abs/1711.10958
Arguments
---------
q: (int) number of slices as 'slice_length = input_dim / q'
unit_dim: [(int), (int)]
norm: 'layer_norm1d' or 'layer_norm2d' uses 1D-layer normalization on the feature.
'batch_norm' or else uses batch normalization. Default is 'batch_norm'.
Input
-----
x: (B,1,1,C)
Returns
-------
emb: (B,Q)
"""
def __init__(self, q=128, unit_dim=[32, 1], norm='batch_norm'):
super(DivEncLayer, self).__init__()
self.q = q
self.unit_dim = unit_dim
self.norm = norm
if norm in ['layer_norm1d', 'layer_norm2d']:
self.BN = [tf.keras.layers.LayerNormalization(axis=-1) for i in range(q)]
else:
self.BN = [tf.keras.layers.BatchNormalization(axis=-1) for i in range(q)]
self.split_fc_layers = self._construct_layers()
def build(self, input_shape):
# Prepare output embedding variable for dynamic batch-size
self.slice_length = int(input_shape[-1] / self.q)
def _construct_layers(self):
layers = list()
for i in range(self.q): # q: num_slices
layers.append(tf.keras.Sequential([tf.keras.layers.Dense(self.unit_dim[0], activation='elu'),
#self.BN[i],
tf.keras.layers.Dense(self.unit_dim[1])]))
return layers
@tf.function
def _split_encoding(self, x_slices):
"""
Input: (B,Q,S)
Returns: (B,Q)
"""
out = list()
for i in range(self.q):
out.append(self.split_fc_layers[i](x_slices[:, i, :]))
return tf.concat(out, axis=1)
def call(self, x): # x: (B,1,1,2048)
x = tf.reshape(x, shape=[x.shape[0], self.q, -1]) # (B,Q,S); Q=num_slices; S=slice length; (B,128,8 or 16)
return self._split_encoding(x)
class FingerPrinter(tf.keras.Model):
"""
Fingerprinter: 'Neural Audio Fingerprint for High-specific Audio Retrieval
based on Contrastive Learning', https://arxiv.org/abs/2010.11910
IN >> [Convlayer]x8 >> [DivEncLayer] >> [L2Normalizer] >> OUT
Arguments
---------
input_shape: tuple (int), not including the batch size
front_hidden_ch: (list)
front_strides: (list)
emb_sz: (int) default=128
fc_unit_dim: (list) default=[32,1]
norm: 'layer_norm1d' for normalization on Freq axis.
'layer_norm2d' for normalization on FxT space (default).
'batch_norm' or else, batch-normalization.
use_L2layer: True (default)
• Note: batch-normalization will not work properly with TPUs.
Input
-----
x: (B,F,T,1)
Returns
-------
emb: (B,Q)
"""
def __init__(self,
input_shape=(256,32,1),
front_hidden_ch=[128, 128, 256, 256, 512, 512, 1024, 1024],
front_strides=[[(1,2), (2,1)], [(1,2), (2,1)],
[(1,2), (2,1)], [(1,2), (2,1)],
[(1,1), (2,1)], [(1,2), (2,1)],
[(1,1), (2,1)], [(1,2), (2,1)]],
emb_sz=128, # q
fc_unit_dim=[32,1],
norm='layer_norm2d',
use_L2layer=True):
super(FingerPrinter, self).__init__()
self.front_hidden_ch = front_hidden_ch
self.front_strides = front_strides
self.emb_sz=emb_sz
self.norm = norm
self.use_L2layer = use_L2layer
self.n_clayers = len(front_strides)
self.front_conv = tf.keras.Sequential(name='ConvLayers')
if ((front_hidden_ch[-1] % emb_sz) != 0):
front_hidden_ch[-1] = ((front_hidden_ch[-1]//emb_sz) + 1) * emb_sz
# Front (sep-)conv layers
for i in range(self.n_clayers):
self.front_conv.add(ConvLayer(hidden_ch=front_hidden_ch[i],
strides=front_strides[i], norm=norm))
self.front_conv.add(tf.keras.layers.Flatten()) # (B,F',T',C) >> (B,D)
# Divide & Encoder layer
self.div_enc = DivEncLayer(q=emb_sz, unit_dim=fc_unit_dim, norm=norm)
@tf.function
def call(self, inputs):
x = self.front_conv(inputs) # (B,D) with D = (T/2^4) x last_hidden_ch
x = self.div_enc(x) # (B,Q)
if self.use_L2layer:
return tf.math.l2_normalize(x, axis=1)
else:
return x
def get_fingerprinter(cfg, trainable=False):
"""
Input length : 1s or 2s
Arguments
----------
cfg : (dict)
created from the '.yaml' located in the /config directory
Returns
-------
<tf.keras.Model> FingerPrinter object
"""
input_shape = (256, 32, 1)
emb_sz = cfg['MODEL']['EMB_SZ']
norm = cfg['MODEL']['BN']
fc_unit_dim = [32, 1]
m = FingerPrinter(input_shape=input_shape,
emb_sz=emb_sz,
fc_unit_dim=fc_unit_dim,
norm=norm)
m.trainable = trainable
return m
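# Example usage (a sketch, not from the original file): 'cfg' below is a
# hypothetical dict shaped like the '.yaml' config mentioned above; only the
# 'EMB_SZ' and 'BN' keys are read by get_fingerprinter().
#
# cfg = {'MODEL': {'EMB_SZ': 128, 'BN': 'layer_norm2d'}}
# m = get_fingerprinter(cfg, trainable=False)
# emb = m(tf.random.normal((3, 256, 32, 1)))  # -> (3, 128) embeddings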
def test():
input_1s = tf.constant(np.random.randn(3,256,32,1), dtype=tf.float32) # BxFxTx1
fprinter = FingerPrinter(emb_sz=128, fc_unit_dim=[32, 1], norm='layer_norm2d')
emb_1s = fprinter(input_1s) # BxD
input_2s = tf.constant(np.random.randn(3,256,63,1), dtype=tf.float32) # BxFxTx1
fprinter = FingerPrinter(emb_sz=128, fc_unit_dim=[32, 1], norm='layer_norm2d')
emb_2s = fprinter(input_2s)
#%timeit -n 10 fprinter(_input) # 27.9ms
"""
Total params: 19,224,576
Trainable params: 19,224,576
Non-trainable params: 0
"""
lemma pairwise_orthogonal_independent: assumes "pairwise orthogonal S" and "0 \<notin> S" shows "independent S"
\documentclass{article}
%\usepackage{fullpage}
%\usepackage{nopageno}
\usepackage[margin=1.5in]{geometry}
\usepackage{tikz}
\usetikzlibrary{shapes.geometric, calc}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage[normalem]{ulem}
\usepackage{fancyhdr}
\usepackage{cancel}
\usepackage{enumerate}
%\renewcommand\headheight{12pt}
\pagestyle{fancy}
\lhead{April 30, 2014}
\rhead{Jon Allen}
\allowdisplaybreaks
\newcommand{\abs}[1]{\left\lvert #1 \right\rvert}
\begin{document}
Part 1 (7 points): Due in class Wednesday, April 30.
Chapter 8: \#11, 12 (do three parts), 15, 16, 22(b), 26 (do two parts), 27, 28, 30, (Grad: 29)
(Hint for \#28: Use the 'more formal' definition of conjugate given in the book right before the example on p. 293.)
\begin{enumerate}
\setcounter{enumi}{10}
\item
Compute the Stirling numbers of the second kind $S(8,k),\;(k=0,1,\dots,8)$.
\begin{align*}
S(8,0)&=0&S(8,8)&=1\\
\intertext{Cheating with Figure 8.2 on page 284 to get $S(7,k)$ and applying the recurrence $S(n,k)=kS(n-1,k)+S(n-1,k-1)$}
S(8,1)&=S(7,1)+S(7,0)=1&S(8,2)&=2S(7,2)+S(7,1)=127\\
S(8,3)&=3S(7,3)+S(7,2)=966&S(8,4)&=4S(7,4)+S(7,3)=1701\\
S(8,5)&=5S(7,5)+S(7,4)=1050&S(8,6)&=6S(7,6)+S(7,5)=266\\
S(8,7)&=7S(7,7)+S(7,6)=28
\end{align*}
\item
(do three parts)
Prove that the Stirling numbers of the second kind satisfy the following relations:
\begin{enumerate}
\item
$S(n,1)=1,\;(n\ge1)$
\subsubsection*{proof}
We take as given that $S(n,0)=S(1,0)=0$ and $S(1,1)=1$ (equations 8.16 and 8.17 in the text). Now let's assume that $S(n-1,1)=1$:
\begin{align*}
S(n,1)&=1\cdot S(n-1,1)+S(n-1,0)=1+0=1
\end{align*}
And induction says we win $\Box$
\item
$S(n,2)=2^{n-1}-1,\;(n\ge2)$
\subsubsection*{proof}
We wish to count the ways of putting $n\ge2$ elements into 2 indistinguishable boxes such that no box is empty. We can put our $n$ elements into two distinguishable boxes in $2^n$ ways (2 choices for each element, $n$ times). Now we subtract the cases where the first box is empty and where the second box is empty, and we have $2^n-2$ ways to put the elements into distinguishable boxes. If we have two colors to paint these boxes, we can do so in $2!=2$ ways. So dividing by the ways of distinguishing the boxes we have $\frac{2^n-2}{2}=2^{n-1}-1$, which is the result we want. $\Box$
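(Sanity check: for $n=3$ the partitions into two nonempty blocks are $\{1\}\{2,3\}$, $\{2\}\{1,3\}$, and $\{3\}\{1,2\}$, and indeed $S(3,2)=2^{2}-1=3$.)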
\item
$S(n,n-1)=\binom{n}{2},\;(n\ge1)$
\subsubsection*{proof}
We wish to count the number of ways of putting $n\ge1$ elements into $n-1$ indistinguishable boxes such that no box is empty. From the pigeonhole principle we know that at least one box has more than one element. If we now take one element from every box we have $n-(n-1)=1$ element left. So one box has the one element left plus the one element we removed, for two elements altogether. So we see we must put the $n$ elements into the boxes so that 2 elements share a box and all the others are in boxes by themselves. So if we wish to count the ways to put the elements in boxes, we can simply count the number of ways to choose the two elements that share a box. And of course if we have $n$ elements we can choose two of them in $\binom{n}{2}$ ways. Notice that $\binom{1}{2}=0=S(1,0)$, so this result also works for the special case where this proof makes no sense because we only have one element. $\Box$
% \item
% $S(n,n-2)=\binom{n}{3}+3\binom{n}{4},\;(n\ge2)$
\end{enumerate}
\setcounter{enumi}{14}
\item
The number of partitions of a set of $n$ elements into $k$ distinguishable boxes (some of which may be empty) is $k^n$. By counting in a different way, prove that
\[k^n=\binom{k}{1}1!S(n,1)+\binom{k}{2}2!S(n,2)+\dots+\binom{k}{n}n!S(n,n).\]
(if $k>n$, define $S(n,k)$ to be 0.)
\subsubsection*{proof}
Imagine we have $n$ elements that we want to put into $k$ boxes in such a way that $i\ge1$ boxes have things in them, and the rest are empty. Then we can put these elements into $i$ indistinguishable boxes in $S(n,i)$ ways. Now we distinguish the boxes by ``painting'' them in $i$ ``colors'', which we can do in $i!$ ways. So we can put the elements into $i$ distinguishable nonempty boxes in $i!S(n,i)$ ways. Now we can pick the boxes that have elements in them from the $k$ boxes in $\binom{k}{i}$ ways, for a total of $\binom{k}{i}i!S(n,i)$ ways to fill $i$ of $k$ distinguishable boxes with $n$ objects. To find the total number of ways to distribute the $n$ objects we sum over all possible values of $i$. This must be $k^n$, and so we have our result
\begin{align*}
k^n&=\sum\limits_{i=1}^k{\binom{k}{i}i!S(n,i)}
\end{align*}
$\Box$
\item
Compute the Bell number $B_8$. (Cf. Exercise 11.)
\begin{align*}
B_p&=S(p,0)+S(p,1)+\dots+S(p,p)\\
B_8&=S(8,0)+S(8,1)+\dots+S(8,8)\\
&=0+1+127+966+1701+1050+266+28+1\\
&=4140
\end{align*}
\setcounter{enumi}{21}
\item
\begin{enumerate}
\setcounter{enumii}{1}
\item
Calculate the partition number $p_7$ and construct the diagram of the set $\mathcal{P}_7$, partially ordered by majorization.
\begin{align*}
\begin{matrix}
&7^1\\
&\downarrow\\
&6^11^1\\
&\downarrow\\
&5^12^1\\
\downarrow&&\downarrow\\
5^11^2&&4^13^1\\
&\downarrow\\
&4^12^11^1\\
\downarrow&&\downarrow\\
4^11^3&&3^21^1\\
\downarrow&&\downarrow\\
\downarrow&&3^12^2\\
\to&\to\leftarrow&\leftarrow\\
&\downarrow\\
&\leftarrow\to&\\
\downarrow&&\downarrow\\
3^12^11^2&&2^31^1\\
\downarrow&&\downarrow\\
3^11^4&&\downarrow\\
\to&\to\leftarrow&\leftarrow\\
&\downarrow\\
&2^21^3\\
&\downarrow\\
&2^11^5\\
&\downarrow\\
&1^7
\end{matrix}
\end{align*}
And of course $p_7=15$
\end{enumerate}
\setcounter{enumi}{25}
\item
(do two parts)
Determine the conjugate of each of the following partitions
\begin{enumerate}
\item
$12=5+4+2+1$
We observe that $\{5,4,2,1\}$ has 4 elements. This is the length of our first row. The last element is size 1, so there will be one of those. The difference between the last two elements is 1, so we have one row of 3 next. The difference between 4 and 2 is 2, so we have 2 rows of 2. And 5-4 makes the last row have only one element. The other answers are obtained similarly.
$12=4+3+2+2+1$
\item
$15=6+4+3+1+1$
$15=5+3+3+2+1+1$
\item
$20=6+6+4+4$
$20=4+4+4+4+2+2$
\item
$21=6+5+4+3+2+1$
$21=6+5+4+3+2+1$
\item
$29=8+6+6+4+3+2$
$29=6+6+5+4+3+3+1+1$
\end{enumerate}
\item
For each integer $n>2$, determine a self-conjugate partition of $n$ that has at least two parts
If $n$ is odd, then make one row of length $\frac{n+1}{2}$ and $\frac{n-1}{2}$ rows of length 1. If $n$ is even then make a row of $\frac{n}{2}$, a row of 2, and $\frac{n-4}{2}$ rows of length 1.
\item
Prove that conjugation reverses the order of majorization; that is, if $\lambda$ and $\mu$ are partitions of $n$ and $\lambda$ is majorized by $\mu$, then $\mu^*$ is majorized by $\lambda^*$.
\subsubsection*{proof}
Let us define the notation $\lambda_k, \mu_k, {\lambda_k}^*,{\mu_k}^*$ for the size of the $k$th part of the $\lambda,\mu,\lambda^*,\mu^*$ partitions. By relabeling, assume $\lambda$ majorizes $\mu$; we will show that $\lambda^*$ is majorized by $\mu^*$. Because $\lambda$ majorizes $\mu$ we know two important things.
\begin{align*}
\lambda_1&\ge\mu_1\\
\sum\limits_{i=1}^k{\lambda_i}&\ge\sum\limits_{i=1}^k{\mu_i}
\end{align*}
Where $k$ is the number of partitions of $\lambda$ or $\mu$, whichever has fewer partitions. But of course we know that because these are partitions of $n$ the the sum of the partition sizes of the partition with less partitions must be $n$. So we see that
\begin{align*}
n=\sum\limits_{i=1}^k{\lambda_i}&\ge\sum\limits_{i=1}^k{\mu_i}\\
n=\sum\limits_{i=1}^k{\lambda_i}&=\sum\limits_{i=1}^k{\mu_i}+\sum\limits_{i=1}^l{\mu_i}
\end{align*}
And so $k$ is the number of partitions of $\lambda$ while $k+l$ is the number of partitions of $\mu$ where $k\ge{k+l}$.
Now when we conjugate these partitions we see that because $k$ is the number of rows of $\lambda$, ${\lambda_1}^*=k$. Similarly ${\mu_1}^*=k+l$. Now because $k\le{k+1}$ we know that ${\lambda_1}^*\le{\mu_1}^*$
Now we strip one element from each row of our partitions. This is effectively removing ${\lambda_1}^*$ and ${\mu_1}^*$. Now because the partial sums of $\lambda_i$ are more than the partial sums of $\mu_i$ we know that we have a new set of partial sums that fit the same rule.
We can repeat this until we run out of element, building our conjugates and the partial sum inequality will always hold. Now because the amount we have taken from our original partition and have put into our conjugate partition sums to n, and we start with the opposite inequality for our conjugate partitions (${\lambda_1}^*\le{\mu_1}^*)$ we know that as long as the inequality holds for the partial sum of the original partition, the opposite inequality will hold for the partial sums of the conjugate partitions. $\Box$
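As a small illustration with $n=4$: $\lambda=2+2$ is majorized by $\mu=3+1$ (partial sums $2,4$ versus $3,4$); the conjugates are $\lambda^*=2+2$ and $\mu^*=2+1+1$, and indeed $\mu^*$ is majorized by $\lambda^*$ (partial sums $2,3,4$ versus $2,4,4$).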
% Now if we strip off an element from each of the partitions, then we are left with the following for all $k\ge1$.
% \begin{align*}
% \sum\limits_{i=1}^k{\lambda_i}&\ge\sum\limits_{i=1}^k{\mu_i}\\
% \sum\limits_{i=1}^k{\lambda_i-1}&\ge\sum\limits_{i=1}^k{\mu_i-1}
% \end{align*}
% We have just stripped off the first row of our conjugated partitions. We have created some new groups of subsets (not really technically partitions anymore). Lets call these new groups $\lambda^+,\mu^+$. Notice that $\mu+$ is majorized by $\lambda+$. Now from the logic we just went over we can say that the first row of the conjugate of $\lambda+$ is shorter than the first row of the conjugate of $\mu^+$ and by extension ${\lambda_2}^*\le{\mu_2}^*$.
% If we follow this process out until we run out of rows/columns, we can see that it will always be true that ${\lambda_k}^*\le{\mu_k}^*$ and so $\lambda^*$ is majorized by $\mu^*$
\setcounter{enumi}{29}
\item
Prove that the partition function satisfies
\[p_n>p_{n-1}\;(n\ge2).\]
\subsubsection*{proof}
If we append a part of size 1 to each of the $p_{n-1}$ partitions of $n-1$, we obtain $p_{n-1}$ distinct partitions of $n$. Also note that $n^1$ is a partition of $n$ that does not arise this way, since its smallest part is $n\ge2$. So $p_n\ge {p_{n-1}+1}>p_{n-1}$, completing this simple proof. $\Box$
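For instance, for $n=3$ the two partitions of 2, namely $2$ and $1+1$, become $2+1$ and $1+1+1$, and together with $3$ itself these are the $p_3=3$ partitions of 3.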
\setcounter{enumi}{28}
\item (grad)
\end{enumerate}
\end{document}
|
############################################################
# For macaque paper, 08.19
# Makes proportion charts for sv types and sv bases
# Gregg Thomas
############################################################
this.dir <- dirname(parent.frame(2)$ofile)
setwd(this.dir)
library(ggplot2)
library(reshape2)
library(cowplot)
source("../lib/read_svs.r")
source("../lib/filter_svs.r")
source("../lib/subset_svs.r")
source("../lib/design.r")
cat("----------\n")
############################################################
savefiles = F
color_plots = F
rm_alus = T
# Run options
maxlen = 100000
minlen = F
# CNV length cutoffs
sv_list = readSVs()
sv_list = filterSVs(sv_list, minlen, maxlen)
mq_events = sv_list[[1]]; hu_events = sv_list[[2]];
# Read and filter data
if(rm_alus){
  hu_events = subset(hu_events, Length < 275 | Length > 325)
  mq_events = subset(mq_events, Length < 275 | Length > 325)
}
# Remove SVs in the typical Alu length range (roughly 300 bp)
cat("----------\nSubsetting macaque data...\n")
mqr = subsetSVs(mq_events)
mq_svs = mqr[[4]]
cat("----------\nSubsetting human data...\n")
hur = subsetSVs(hu_events)
hu_svs = hur[[4]]
# Subset data
######################
# SV type proportion plot -- Fig 1B
cat("Geting SV type counts...\n")
mq_dels = length(mq_svs$Type[mq_svs$Type=="<DEL>"])
mq_dups = length(mq_svs$Type[mq_svs$Type=="<DUP>"])
#mq_invs = length(mq_svs$Type[mq_svs$Type=="<INV>"])
mq_total = mq_dels + mq_dups
mq_dels_p = mq_dels / mq_total
mq_dups_p = mq_dups / mq_total
hu_dels = length(hu_svs$Type[hu_svs$Type=="<DEL>"])
hu_dups = length(hu_svs$Type[hu_svs$Type=="<DUP>"])
#hu_invs = length(hu_svs$Type[hu_svs$Type=="<INV>"])
hu_total = hu_dels + hu_dups
hu_dels_p = hu_dels / hu_total
hu_dups_p = hu_dups / hu_total
# Getting alleles for both species less than 100000 bp long and counting SV types. Also removing inversions for humans.
sv_types = data.frame("Species"=c("Macaque","Human"),
"total.svs"=c(mq_total, hu_total),
"num.del"=c(mq_dels, hu_dels),
"num.dup"=c(mq_dups, hu_dups),
"prop.del"=c(mq_dels_p, hu_dels_p),
"prop.dup"=c(mq_dups_p, hu_dups_p))
sv_types_prop = subset(sv_types, select=c("Species","prop.del","prop.dup"))
sv_types_prop_melt = melt(sv_types_prop, id.vars="Species")
# Organizing type counts
cat("Plotting SV type proportions...\n")
fig1b = ggplot(sv_types_prop_melt, aes(x=Species, y=value, fill=variable)) +
geom_bar(stat='identity') +
scale_y_continuous(expand = c(0,0)) +
labs(x="", y="Proportion of CNV types") +
coord_flip() +
bartheme()
if(color_plots){
  fig1b = fig1b + scale_fill_manual(name="", labels=c("Deletions","Duplications"), values=c("#006ddb","#db6d00"))
}else{
  fig1b = fig1b + scale_fill_grey(name="", labels=c("Deletions","Duplications"))
}
print(fig1b)
#if(savefiles){
# outfile = "fig1b.pdf"
# cat(" -> ", outfile, "\n")
# ggsave(fig1b, filename=outfile, width=6, height=4, units="in")
#}
# SV type proportion plot
cat(" -> Chi-squared test for CNV types...\n")
sv_types_counts = subset(sv_types, select=c("Species","num.del","num.dup"))
sv_types_counts_t = t(sv_types_counts[,2:ncol(sv_types_counts)])
colnames(sv_types_counts_t) <- sv_types_counts[,1]
type_chi = chisq.test(sv_types_counts_t)
print(sv_types_counts_t)
print(type_chi)
# SV type chi-squared test.
# SV type proportion plot -- Fig 1B
######################
######################
# SV bases plot -- Fig 1C
cat("Getting SV base counts...\n")
mq_del_bases = sum(mq_svs$Length[mq_svs$Type=="<DEL>"])
mq_dup_bases = sum(mq_svs$Length[mq_svs$Type=="<DUP>"])
#mq_invs = sum(mq_svs$Type[mq_svs$Type=="<INV>"])
mq_total_bases = mq_del_bases + mq_dup_bases
mq_del_bases_p = mq_del_bases / mq_total_bases
mq_dup_bases_p = mq_dup_bases / mq_total_bases
hu_del_bases = sum(hu_svs$Length[hu_svs$Type=="<DEL>"])
hu_dup_bases = sum(hu_svs$Length[hu_svs$Type=="<DUP>"])
#hu_invs = sum(hu_svs$Type[hu_svs$Type=="<INV>"])
hu_total_bases = hu_del_bases + hu_dup_bases
hu_del_bases_p = hu_del_bases / hu_total_bases
hu_dup_bases_p = hu_dup_bases / hu_total_bases
# Counting base types.
sv_bases = data.frame("Species"=c("Macaque","Human"),
"total.svs"=c(mq_total_bases, hu_total_bases),
"num.del"=c(mq_del_bases, hu_del_bases),
"num.dup"=c(mq_dup_bases, hu_dup_bases),
"prop.del"=c(mq_del_bases_p, hu_del_bases_p),
"prop.dup"=c(mq_dup_bases_p, hu_dup_bases_p))
sv_bases_prop = subset(sv_bases, select=c("Species","prop.del","prop.dup"))
sv_bases_prop_melt = melt(sv_bases_prop, id.vars="Species")
# Organizing base counts
cat("Plotting SV base proportions...\n")
fig1c = ggplot(sv_bases_prop_melt, aes(x=Species, y=value, fill=variable)) +
geom_bar(stat='identity') +
scale_y_continuous(expand = c(0,0)) +
labs(x="", y="Proportion of bases affected by CNVs") +
coord_flip() +
bartheme()
if(color_plots){
  fig1c = fig1c + scale_fill_manual(name="", labels=c("Deletions","Duplications"), values=c("#006ddb","#db6d00"))
}else{
  fig1c = fig1c + scale_fill_grey(name="", labels=c("Deletions","Duplications"))
}
print(fig1c)
#if(savefiles){
# outfile = "fig1c.pdf"
# cat(" -> ", outfile, "\n")
# ggsave(fig1c, filename=outfile, width=6, height=4, units="in")
#}
cat(" -> Chi-squared test for CNV bases...\n")
sv_bases_counts = subset(sv_bases, select=c("Species","num.del","num.dup"))
sv_bases_counts_t = t(sv_bases_counts[,2:ncol(sv_bases_counts)])
colnames(sv_bases_counts_t) <- sv_bases_counts[,1]
base_chi = chisq.test(sv_bases_counts_t)
print(sv_bases_counts_t)
print(base_chi)
# SV bases chi-squared test.
# SV bases plot
######################
######################
# Combine plots for figure
cat("Combining proportion plots...\n")
prow = plot_grid(fig1b + theme(legend.position="none"),
fig1c + theme(legend.position="none"),
align = 'vh',
labels = c("B", "C"),
label_size = 24,
hjust = -1,
nrow = 1)
# Combine panels B and C
legend_b = get_legend(fig1b + theme(legend.direction="horizontal", legend.justification="center", legend.box.just="bottom"))
# Extract the legend from one of the plots
p = plot_grid(prow, legend_b, ncol=1, rel_heights=c(1, 0.2))
# Add the legend underneath the row we made earlier with 20% of the height of the row.
print(p)
if(savefiles){
  if(color_plots){
    outfile = "fig1bc.pdf"
  }else{
    outfile = "fig1bc-grey.pdf"
  }
  cat(" -> ", outfile, "\n")
  ggsave(filename=outfile, p, width=10, height=4, units="in")
}
# Save the figure.
######################
|
/***************************************
This resource is required to alert InputSprocket
to enable its services
All Mac Classic-compatible apps using InputSprocket must include this file
Copyright 1995-2014 by Rebecca Ann Heineman [email protected]
It is released under an MIT Open Source license. Please see LICENSE
for license details. Yes, you can use it in a
commercial title without paying anything, just give me a credit.
Please? It's not like I'm asking you for money!
***************************************/
#include <InputSprocket.r>
resource 'isap' (128) {
callsISpInit,
usesInputSprocket
};
|
function p = prime_ge ( n )
%*****************************************************************************80
%
%% PRIME_GE returns the smallest prime greater than or equal to N.
%
% Discussion:
%
% The MATLAB version of this program is made much simpler
% because of the availability of the built-in ISPRIME logical function.
%
% Example:
%
% N PRIME_GE
%
% -10 2
% 1 2
% 2 2
% 3 3
% 4 5
% 5 5
% 6 7
% 7 7
% 8 11
% 9 11
% 10 11
%
% Licensing:
%
% This code is distributed under the GNU LGPL license.
%
% Modified:
%
% 15 March 2003
%
% Author:
%
% John Burkardt
%
% Parameters:
%
% Input, integer N, the number to be bounded.
%
% Output, integer P, the smallest prime number that is greater
% than or equal to N.
%
p = max ( ceil ( n ), 2 );
while ( ~ isprime ( p ) )
p = p + 1;
end
return
end
|
open import Data.Empty
open import Data.Maybe
open import Data.Product
open import Data.Sum
open import AEff
open import EffectAnnotations
open import Types
open import Relation.Binary.PropositionalEquality hiding ([_])
open import Relation.Nullary
open import Relation.Nullary.Negation
module AwaitingComputations where
-- COMPUTATIONS THAT ARE TEMPORARILY STUCK WAITING FOR A PARTICULAR PROMISE
data _⧗_ {Γ : Ctx} {X : VType} (x : ⟨ X ⟩ ∈ Γ) : {C : CType} → Γ ⊢M⦂ C → Set where
await : {C : CType}
{M : Γ ∷ X ⊢M⦂ C} →
-------------------------
x ⧗ (await (` x) until M)
let-in : {X Y : VType}
{o : O}
{i : I}
{M : Γ ⊢M⦂ X ! (o , i)}
{N : Γ ∷ X ⊢M⦂ Y ! (o , i)} →
x ⧗ M →
-----------------------------
x ⧗ (let= M `in N)
interrupt : {X : VType}
{o : O}
{i : I}
{op : Σₛ}
{V : Γ ⊢V⦂ ``(payload op)}
{M : Γ ⊢M⦂ X ! (o , i)} →
x ⧗ M →
-------------------------
x ⧗ (↓ op V M)
coerce : {X : VType}
{o o' : O}
{i i' : I}
{p : o ⊑ₒ o'}
{q : i ⊑ᵢ i'}
{M : Γ ⊢M⦂ X ! (o , i)} →
x ⧗ M →
-------------------------
x ⧗ (coerce p q M)
|
{-# OPTIONS --without-K --rewriting #-}
open import HoTT
module groups.HomSequence where
infix 15 _⊣|ᴳ
infixr 10 _→⟨_⟩ᴳ_
data HomSequence {i} : (G : Group i) (H : Group i) → Type (lsucc i) where
_⊣|ᴳ : (G : Group i) → HomSequence G G
_→⟨_⟩ᴳ_ : (G : Group i) {H K : Group i}
→ (G →ᴳ H) → HomSequence H K
→ HomSequence G K
HomSeq-++ : ∀ {i} {G H K : Group i}
→ HomSequence G H → HomSequence H K → HomSequence G K
HomSeq-++ (_ ⊣|ᴳ) seq = seq
HomSeq-++ (_ →⟨ φ ⟩ᴳ seq₁) seq₂ = _ →⟨ φ ⟩ᴳ HomSeq-++ seq₁ seq₂
HomSeq-snoc : ∀ {i} {G H K : Group i}
→ HomSequence G H → (H →ᴳ K) → HomSequence G K
HomSeq-snoc seq φ = HomSeq-++ seq (_ →⟨ φ ⟩ᴳ _ ⊣|ᴳ)
{- maps between two hom sequences -}
infix 15 _↓|ᴳ
infixr 10 _↓⟨_⟩ᴳ_
data HomSeqMap {i₀ i₁} : {G₀ H₀ : Group i₀} {G₁ H₁ : Group i₁}
→ HomSequence G₀ H₀ → HomSequence G₁ H₁
→ (G₀ →ᴳ G₁) → (H₀ →ᴳ H₁) → Type (lsucc (lmax i₀ i₁)) where
_↓|ᴳ : {G₀ : Group i₀} {G₁ : Group i₁} (ξ : G₀ →ᴳ G₁) → HomSeqMap (G₀ ⊣|ᴳ) (G₁ ⊣|ᴳ) ξ ξ
_↓⟨_⟩ᴳ_ : {G₀ H₀ K₀ : Group i₀} {G₁ H₁ K₁ : Group i₁}
→ {φ : G₀ →ᴳ H₀} {seq₀ : HomSequence H₀ K₀}
→ {ψ : G₁ →ᴳ H₁} {seq₁ : HomSequence H₁ K₁}
→ (ξG : G₀ →ᴳ G₁) {ξH : H₀ →ᴳ H₁} {ξK : K₀ →ᴳ K₁}
→ CommSquareᴳ φ ψ ξG ξH
→ HomSeqMap seq₀ seq₁ ξH ξK
→ HomSeqMap (G₀ →⟨ φ ⟩ᴳ seq₀) (G₁ →⟨ ψ ⟩ᴳ seq₁) ξG ξK
HomSeqMap-snoc : ∀ {i₀ i₁} {G₀ H₀ K₀ : Group i₀} {G₁ H₁ K₁ : Group i₁}
{seq₀ : HomSequence G₀ H₀} {seq₁ : HomSequence G₁ H₁}
{φ₀ : H₀ →ᴳ K₀} {φ₁ : H₁ →ᴳ K₁}
{ξG : G₀ →ᴳ G₁} {ξH : H₀ →ᴳ H₁} {ξK : K₀ →ᴳ K₁}
→ HomSeqMap seq₀ seq₁ ξG ξH
→ CommSquareᴳ φ₀ φ₁ ξH ξK
→ HomSeqMap (HomSeq-snoc seq₀ φ₀) (HomSeq-snoc seq₁ φ₁) ξG ξK
HomSeqMap-snoc (ξG ↓|ᴳ) □ = ξG ↓⟨ □ ⟩ᴳ _ ↓|ᴳ
HomSeqMap-snoc (ξG ↓⟨ □₁ ⟩ᴳ seq) □₂ = ξG ↓⟨ □₁ ⟩ᴳ HomSeqMap-snoc seq □₂
{- equivalences between two hom sequences -}
is-seqᴳ-equiv : ∀ {i₀ i₁} {G₀ H₀ : Group i₀} {G₁ H₁ : Group i₁}
{seq₀ : HomSequence G₀ H₀} {seq₁ : HomSequence G₁ H₁}
{ξG : G₀ →ᴳ G₁} {ξH : H₀ →ᴳ H₁}
→ HomSeqMap seq₀ seq₁ ξG ξH
→ Type (lmax i₀ i₁)
is-seqᴳ-equiv (ξ ↓|ᴳ) = is-equiv (GroupHom.f ξ)
is-seqᴳ-equiv (ξ ↓⟨ _ ⟩ᴳ seq) = is-equiv (GroupHom.f ξ) × is-seqᴳ-equiv seq
is-seqᴳ-equiv-snoc : ∀ {i₀ i₁} {G₀ H₀ K₀ : Group i₀} {G₁ H₁ K₁ : Group i₁}
{seq₀ : HomSequence G₀ H₀} {seq₁ : HomSequence G₁ H₁}
{φ₀ : H₀ →ᴳ K₀} {φ₁ : H₁ →ᴳ K₁}
{ξG : G₀ →ᴳ G₁} {ξH : H₀ →ᴳ H₁} {ξK : K₀ →ᴳ K₁}
{seq-map : HomSeqMap seq₀ seq₁ ξG ξH}
{cs : CommSquareᴳ φ₀ φ₁ ξH ξK}
→ is-seqᴳ-equiv seq-map → is-equiv (GroupHom.f ξK)
→ is-seqᴳ-equiv (HomSeqMap-snoc seq-map cs)
is-seqᴳ-equiv-snoc {seq-map = ξG ↓|ᴳ} ξG-is-equiv ξH-is-equiv = ξG-is-equiv , ξH-is-equiv
is-seqᴳ-equiv-snoc {seq-map = ξG ↓⟨ _ ⟩ᴳ seq} (ξG-is-equiv , seq-is-equiv) ξH-is-equiv =
ξG-is-equiv , is-seqᴳ-equiv-snoc seq-is-equiv ξH-is-equiv
private
is-seqᴳ-equiv-head : ∀ {i₀ i₁} {G₀ H₀ : Group i₀} {G₁ H₁ : Group i₁}
{seq₀ : HomSequence G₀ H₀} {seq₁ : HomSequence G₁ H₁}
{ξG : G₀ →ᴳ G₁} {ξH : H₀ →ᴳ H₁}
{seq-map : HomSeqMap seq₀ seq₁ ξG ξH}
→ is-seqᴳ-equiv seq-map → is-equiv (GroupHom.f ξG)
is-seqᴳ-equiv-head {seq-map = ξ ↓|ᴳ} ise = ise
is-seqᴳ-equiv-head {seq-map = ξ ↓⟨ _ ⟩ᴳ _} ise = fst ise
is-seqᴳ-equiv-last : ∀ {i₀ i₁} {G₀ H₀ : Group i₀} {G₁ H₁ : Group i₁}
{seq₀ : HomSequence G₀ H₀} {seq₁ : HomSequence G₁ H₁}
{ξG : G₀ →ᴳ G₁} {ξH : H₀ →ᴳ H₁}
{seq-map : HomSeqMap seq₀ seq₁ ξG ξH}
→ is-seqᴳ-equiv seq-map → is-equiv (GroupHom.f ξH)
is-seqᴳ-equiv-last {seq-map = ξ ↓|ᴳ} ise = ise
is-seqᴳ-equiv-last {seq-map = _ ↓⟨ _ ⟩ᴳ rest} (_ , rest-ise) = is-seqᴳ-equiv-last rest-ise
module is-seqᴳ-equiv {i₀ i₁} {G₀ H₀ : Group i₀} {G₁ H₁ : Group i₁}
{seq₀ : HomSequence G₀ H₀} {seq₁ : HomSequence G₁ H₁}
{ξG : G₀ →ᴳ G₁} {ξH : H₀ →ᴳ H₁}
{seq-map : HomSeqMap seq₀ seq₁ ξG ξH}
(seq-map-is-equiv : is-seqᴳ-equiv seq-map) where
head = is-seqᴳ-equiv-head seq-map-is-equiv
last = is-seqᴳ-equiv-last seq-map-is-equiv
HomSeqEquiv : ∀ {i₀ i₁} {G₀ H₀ : Group i₀} {G₁ H₁ : Group i₁}
(seq₀ : HomSequence G₀ H₀) (seq₁ : HomSequence G₁ H₁)
(ξG : G₀ →ᴳ G₁) (ξH : H₀ →ᴳ H₁) → Type (lsucc (lmax i₀ i₁))
HomSeqEquiv seq₀ seq₁ ξG ξH = Σ (HomSeqMap seq₀ seq₁ ξG ξH) is-seqᴳ-equiv
HomSeqEquiv-inverse : ∀ {i₀ i₁} {G₀ H₀ : Group i₀} {G₁ H₁ : Group i₁}
{seq₀ : HomSequence G₀ H₀} {seq₁ : HomSequence G₁ H₁}
{ξG : G₀ →ᴳ G₁} {ξH : H₀ →ᴳ H₁}
(equiv : HomSeqEquiv seq₀ seq₁ ξG ξH)
→ HomSeqEquiv seq₁ seq₀
(GroupIso.g-hom (ξG , is-seqᴳ-equiv-head (snd equiv)))
(GroupIso.g-hom (ξH , is-seqᴳ-equiv-last (snd equiv)))
HomSeqEquiv-inverse ((ξ ↓|ᴳ) , ξ-ise) =
(GroupIso.g-hom (ξ , ξ-ise) ↓|ᴳ) , is-equiv-inverse ξ-ise
HomSeqEquiv-inverse ((ξ ↓⟨ □ ⟩ᴳ rest) , (ξ-ise , rest-ise)) =
(GroupIso.g-hom (ξ , ξ-ise)
↓⟨ CommSquareᴳ-inverse-v □ ξ-ise (is-seqᴳ-equiv-head rest-ise) ⟩ᴳ
fst rest-inverse-equiv) ,
is-equiv-inverse ξ-ise , snd rest-inverse-equiv
where
rest-inverse-equiv = HomSeqEquiv-inverse (rest , rest-ise)
{- Doesn't seem useful.
infix 15 _↕|ᴳ
infixr 10 _↕⟨_⟩ᴳ_
_↕|ᴳ : ∀ {i} {G₀ G₁ : Group i} (iso : G₀ ≃ᴳ G₁)
→ HomSeqEquiv (G₀ ⊣|ᴳ) (G₁ ⊣|ᴳ) (fst iso) (fst iso)
iso ↕|ᴳ = (fst iso ↓|ᴳ) , snd iso
_↕⟨_⟩ᴳ_ : ∀ {i} {G₀ G₁ H₀ H₁ K₀ K₁ : Group i}
→ {φ : G₀ →ᴳ H₀} {seq₀ : HomSequence H₀ K₀}
→ {ψ : G₁ →ᴳ H₁} {seq₁ : HomSequence H₁ K₁}
→ (isoG : G₀ ≃ᴳ G₁) {isoH : H₀ ≃ᴳ H₁} {isoK : K₀ ≃ᴳ K₁}
→ HomCommSquare φ ψ (fst isoG) (fst isoH)
→ HomSeqEquiv seq₀ seq₁ (fst isoH) (fst isoK)
→ HomSeqEquiv (G₀ →⟨ φ ⟩ᴳ seq₀) (G₁ →⟨ ψ ⟩ᴳ seq₁) (fst isoG) (fst isoK)
(ξG , hG-is-equiv) ↕⟨ sqr ⟩ᴳ (seq-map , seq-map-is-equiv) =
(ξG ↓⟨ sqr ⟩ᴳ seq-map) , hG-is-equiv , seq-map-is-equiv
-}
private
hom-seq-map-index-type : ∀ {i₀ i₁} {G₀ H₀ : Group i₀} {G₁ H₁ : Group i₁}
{seq₀ : HomSequence G₀ H₀} {seq₁ : HomSequence G₁ H₁}
{ξG : G₀ →ᴳ G₁} {ξH : H₀ →ᴳ H₁}
→ ℕ → HomSeqMap seq₀ seq₁ ξG ξH → Type (lmax i₀ i₁)
hom-seq-map-index-type _ (_ ↓|ᴳ) = Lift ⊤
hom-seq-map-index-type O (_↓⟨_⟩ᴳ_ {φ = φ} {ψ = ψ} ξG {ξH} _ _)
= CommSquareᴳ φ ψ ξG ξH
hom-seq-map-index-type (S n) (_ ↓⟨ _ ⟩ᴳ seq-map)
= hom-seq-map-index-type n seq-map
abstract
hom-seq-map-index : ∀ {i₀ i₁} {G₀ H₀ : Group i₀} {G₁ H₁ : Group i₁}
{seq₀ : HomSequence G₀ H₀} {seq₁ : HomSequence G₁ H₁}
{ξG : G₀ →ᴳ G₁} {ξH : H₀ →ᴳ H₁}
(n : ℕ) (seq-map : HomSeqMap seq₀ seq₁ ξG ξH)
→ hom-seq-map-index-type n seq-map
hom-seq-map-index _ (_ ↓|ᴳ) = lift tt
hom-seq-map-index O (_ ↓⟨ □ ⟩ᴳ _) = □
hom-seq-map-index (S n) (_ ↓⟨ _ ⟩ᴳ seq-map)
= hom-seq-map-index n seq-map
private
hom-seq-equiv-index-type : ∀ {i₀ i₁} {G₀ H₀ : Group i₀} {G₁ H₁ : Group i₁}
{seq₀ : HomSequence G₀ H₀} {seq₁ : HomSequence G₁ H₁}
{ξG : G₀ →ᴳ G₁} {ξH : H₀ →ᴳ H₁}
→ ℕ → HomSeqMap seq₀ seq₁ ξG ξH → Type (lmax i₀ i₁)
hom-seq-equiv-index-type {ξG = ξG} O _ = is-equiv (GroupHom.f ξG)
hom-seq-equiv-index-type (S _) (_ ↓|ᴳ) = Lift ⊤
hom-seq-equiv-index-type (S n) (_ ↓⟨ _ ⟩ᴳ seq-map)
= hom-seq-equiv-index-type n seq-map
abstract
hom-seq-equiv-index : ∀ {i₀ i₁} {G₀ H₀ : Group i₀} {G₁ H₁ : Group i₁}
{seq₀ : HomSequence G₀ H₀} {seq₁ : HomSequence G₁ H₁}
{ξG : G₀ →ᴳ G₁} {ξH : H₀ →ᴳ H₁}
(n : ℕ) (seq-equiv : HomSeqEquiv seq₀ seq₁ ξG ξH)
→ hom-seq-equiv-index-type n (fst seq-equiv)
hom-seq-equiv-index O (seq-map , ise) = is-seqᴳ-equiv-head ise
hom-seq-equiv-index (S _) ((_ ↓|ᴳ) , _) = lift tt
hom-seq-equiv-index (S n) ((_ ↓⟨ _ ⟩ᴳ seq-map) , ise)
= hom-seq-equiv-index n (seq-map , snd ise)
|
/-
Copyright (c) 2022 Andrew Yang. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Andrew Yang
! This file was ported from Lean 3 source module category_theory.limits.shapes.diagonal
! leanprover-community/mathlib commit f6bab67886fb92c3e2f539cc90a83815f69a189d
! Please do not edit these lines, except to modify the commit id
! if you have ported upstream changes.
-/
import Mathlib.CategoryTheory.Limits.Shapes.Pullbacks
import Mathlib.CategoryTheory.Limits.Shapes.KernelPair
import Mathlib.CategoryTheory.Limits.Shapes.CommSq
/-!
# The diagonal object of a morphism.
We provide various API and isomorphisms considering the diagonal object `Δ_{Y/X} := pullback f f`
of a morphism `f : X ⟶ Y`.
-/
open CategoryTheory
noncomputable section
namespace CategoryTheory.Limits
variable {C : Type _} [Category C] {X Y Z : C}
namespace pullback
section Diagonal
variable (f : X ⟶ Y) [HasPullback f f]
/-- The diagonal object of a morphism `f : X ⟶ Y` is `Δ_{X/Y} := pullback f f`. -/
abbrev diagonalObj : C :=
pullback f f
#align category_theory.limits.pullback.diagonal_obj CategoryTheory.Limits.pullback.diagonalObj
/-- The diagonal morphism `X ⟶ Δ_{X/Y}` for a morphism `f : X ⟶ Y`. -/
def diagonal : X ⟶ diagonalObj f :=
pullback.lift (𝟙 _) (𝟙 _) rfl
#align category_theory.limits.pullback.diagonal CategoryTheory.Limits.pullback.diagonal
@[reassoc (attr := simp)]
theorem diagonal_fst : diagonal f ≫ pullback.fst = 𝟙 _ :=
pullback.lift_fst _ _ _
#align category_theory.limits.pullback.diagonal_fst CategoryTheory.Limits.pullback.diagonal_fst
@[reassoc (attr := simp)]
theorem diagonal_snd : diagonal f ≫ pullback.snd = 𝟙 _ :=
pullback.lift_snd _ _ _
#align category_theory.limits.pullback.diagonal_snd CategoryTheory.Limits.pullback.diagonal_snd
instance : IsSplitMono (diagonal f) :=
⟨⟨⟨pullback.fst, diagonal_fst f⟩⟩⟩
instance : IsSplitEpi (pullback.fst : pullback f f ⟶ X) :=
⟨⟨⟨diagonal f, diagonal_fst f⟩⟩⟩
instance : IsSplitEpi (pullback.snd : pullback f f ⟶ X) :=
⟨⟨⟨diagonal f, diagonal_snd f⟩⟩⟩
instance [Mono f] : IsIso (diagonal f) := by
rw [(IsIso.inv_eq_of_inv_hom_id (diagonal_fst f)).symm]
infer_instance
/-- The two projections `Δ_{X/Y} ⟶ X` form a kernel pair for `f : X ⟶ Y`. -/
theorem diagonal_isKernelPair : IsKernelPair f (pullback.fst : diagonalObj f ⟶ _) pullback.snd :=
IsPullback.of_hasPullback f f
#align category_theory.limits.pullback.diagonal_is_kernel_pair CategoryTheory.Limits.pullback.diagonal_isKernelPair
end Diagonal
end pullback
variable [HasPullbacks C]
open pullback
section
variable {U V₁ V₂ : C} (f : X ⟶ Y) (i : U ⟶ Y)
variable (i₁ : V₁ ⟶ pullback f i) (i₂ : V₂ ⟶ pullback f i)
@[reassoc (attr := simp)]
theorem pullback_diagonal_map_snd_fst_fst :
(pullback.snd :
pullback (diagonal f)
(map (i₁ ≫ snd) (i₂ ≫ snd) f f (i₁ ≫ fst) (i₂ ≫ fst) i (by simp [condition])
(by simp [condition])) ⟶
_) ≫
fst ≫ i₁ ≫ fst =
pullback.fst := by
conv_rhs => rw [← Category.comp_id pullback.fst]
rw [← diagonal_fst f, pullback.condition_assoc, pullback.lift_fst]
#align category_theory.limits.pullback_diagonal_map_snd_fst_fst CategoryTheory.Limits.pullback_diagonal_map_snd_fst_fst
@[reassoc (attr := simp)]
theorem pullback_diagonal_map_snd_snd_fst :
(pullback.snd :
pullback (diagonal f)
(map (i₁ ≫ snd) (i₂ ≫ snd) f f (i₁ ≫ fst) (i₂ ≫ fst) i (by simp [condition])
(by simp [condition])) ⟶
_) ≫
snd ≫ i₂ ≫ fst =
pullback.fst := by
conv_rhs => rw [← Category.comp_id pullback.fst]
rw [← diagonal_snd f, pullback.condition_assoc, pullback.lift_snd]
#align category_theory.limits.pullback_diagonal_map_snd_snd_fst CategoryTheory.Limits.pullback_diagonal_map_snd_snd_fst
variable [HasPullback i₁ i₂]
set_option maxHeartbeats 400000 in
/-- This iso witnesses the fact that
given `f : X ⟶ Y`, `i : U ⟶ Y`, and `i₁ : V₁ ⟶ X ×[Y] U`, `i₂ : V₂ ⟶ X ×[Y] U`, the diagram
V₁ ×[X ×[Y] U] V₂ ⟶ V₁ ×[U] V₂
| |
| |
↓ ↓
X ⟶ X ×[Y] X
is a pullback square.
Also see `pullback_fst_map_snd_isPullback`.
-/
def pullbackDiagonalMapIso :
pullback (diagonal f)
(map (i₁ ≫ snd) (i₂ ≫ snd) f f (i₁ ≫ fst) (i₂ ≫ fst) i
(by simp only [Category.assoc, condition])
(by simp only [Category.assoc, condition])) ≅
pullback i₁ i₂ where
hom :=
pullback.lift (pullback.snd ≫ pullback.fst) (pullback.snd ≫ pullback.snd) (by
ext
. simp [Category.assoc, pullback_diagonal_map_snd_fst_fst, pullback_diagonal_map_snd_snd_fst]
. simp [Category.assoc, pullback.condition, pullback.condition_assoc])
inv :=
pullback.lift (pullback.fst ≫ i₁ ≫ pullback.fst)
(pullback.map _ _ _ _ (𝟙 _) (𝟙 _) pullback.snd (Category.id_comp _).symm
(Category.id_comp _).symm) (by
ext
. simp only [Category.assoc, diagonal_fst, Category.comp_id, limit.lift_π,
PullbackCone.mk_pt, PullbackCone.mk_π_app, limit.lift_π_assoc, cospan_left]
. simp only [condition_assoc, Category.assoc, diagonal_snd, Category.comp_id,
limit.lift_π, PullbackCone.mk_pt, PullbackCone.mk_π_app,
limit.lift_π_assoc, cospan_right])
#align category_theory.limits.pullback_diagonal_map_iso CategoryTheory.Limits.pullbackDiagonalMapIso
@[reassoc (attr := simp)]
theorem pullbackDiagonalMapIso_hom_fst :
(pullbackDiagonalMapIso f i i₁ i₂).hom ≫ pullback.fst = pullback.snd ≫ pullback.fst := by
delta pullbackDiagonalMapIso
simp
#align category_theory.limits.pullback_diagonal_map_iso_hom_fst CategoryTheory.Limits.pullbackDiagonalMapIso_hom_fst
@[reassoc (attr := simp)]
theorem pullbackDiagonalMapIso_hom_snd :
(pullbackDiagonalMapIso f i i₁ i₂).hom ≫ pullback.snd = pullback.snd ≫ pullback.snd := by
delta pullbackDiagonalMapIso
simp
#align category_theory.limits.pullback_diagonal_map_iso_hom_snd CategoryTheory.Limits.pullbackDiagonalMapIso_hom_snd
@[reassoc (attr := simp)]
theorem pullbackDiagonalMapIso_inv_fst :
(pullbackDiagonalMapIso f i i₁ i₂).inv ≫ pullback.fst = pullback.fst ≫ i₁ ≫ pullback.fst := by
delta pullbackDiagonalMapIso
simp
#align category_theory.limits.pullback_diagonal_map_iso_inv_fst CategoryTheory.Limits.pullbackDiagonalMapIso_inv_fst
@[reassoc (attr := simp)]
theorem pullbackDiagonalMapIso_inv_snd_fst :
(pullbackDiagonalMapIso f i i₁ i₂).inv ≫ pullback.snd ≫ pullback.fst = pullback.fst := by
delta pullbackDiagonalMapIso
simp
#align category_theory.limits.pullback_diagonal_map_iso_inv_snd_fst CategoryTheory.Limits.pullbackDiagonalMapIso_inv_snd_fst
@[reassoc (attr := simp)]
theorem pullbackDiagonalMapIso_inv_snd_snd :
(pullbackDiagonalMapIso f i i₁ i₂).inv ≫ pullback.snd ≫ pullback.snd = pullback.snd := by
delta pullbackDiagonalMapIso
simp
#align category_theory.limits.pullback_diagonal_map_iso_inv_snd_snd CategoryTheory.Limits.pullbackDiagonalMapIso_inv_snd_snd
theorem pullback_fst_map_snd_isPullback :
IsPullback (fst ≫ i₁ ≫ fst)
(map i₁ i₂ (i₁ ≫ snd) (i₂ ≫ snd) _ _ _ (Category.id_comp _).symm (Category.id_comp _).symm)
(diagonal f)
(map (i₁ ≫ snd) (i₂ ≫ snd) f f (i₁ ≫ fst) (i₂ ≫ fst) i (by simp [condition])
(by simp [condition])) :=
IsPullback.of_iso_pullback ⟨by ext <;> simp [condition_assoc]⟩
(pullbackDiagonalMapIso f i i₁ i₂).symm (pullbackDiagonalMapIso_inv_fst f i i₁ i₂)
(by aesop_cat)
#align category_theory.limits.pullback_fst_map_snd_is_pullback CategoryTheory.Limits.pullback_fst_map_snd_isPullback
end
section
variable {S T : C} (f : X ⟶ T) (g : Y ⟶ T) (i : T ⟶ S)
variable [HasPullback i i] [HasPullback f g] [HasPullback (f ≫ i) (g ≫ i)]
variable
[HasPullback (diagonal i)
(pullback.map (f ≫ i) (g ≫ i) i i f g (𝟙 _) (Category.comp_id _) (Category.comp_id _))]
/-- This iso witnesses the fact that
given `f : X ⟶ T`, `g : Y ⟶ T`, and `i : T ⟶ S`, the diagram
X ×ₜ Y ⟶ X ×ₛ Y
| |
| |
↓ ↓
T ⟶ T ×ₛ T
is a pullback square.
Also see `pullback_map_diagonal_isPullback`.
-/
def pullbackDiagonalMapIdIso :
pullback (diagonal i)
(pullback.map (f ≫ i) (g ≫ i) i i f g (𝟙 _) (Category.comp_id _) (Category.comp_id _)) ≅
pullback f g := by
refine' _ ≪≫
pullbackDiagonalMapIso i (𝟙 _) (f ≫ inv pullback.fst) (g ≫ inv pullback.fst) ≪≫ _
. refine' @asIso _ _ _ _ (pullback.map _ _ _ _ (𝟙 T) ((pullback.congrHom _ _).hom) (𝟙 _) _ _) ?_
. rw [← Category.comp_id pullback.snd, ← condition, Category.assoc, IsIso.inv_hom_id_assoc]
. rw [← Category.comp_id pullback.snd, ← condition, Category.assoc, IsIso.inv_hom_id_assoc]
· rw [Category.comp_id, Category.id_comp]
. ext <;> simp
. infer_instance
. refine' @asIso _ _ _ _ (pullback.map _ _ _ _ (𝟙 _) (𝟙 _) pullback.fst _ _) ?_
. rw [Category.assoc, IsIso.inv_hom_id, Category.comp_id, Category.id_comp]
. rw [Category.assoc, IsIso.inv_hom_id, Category.comp_id, Category.id_comp]
. infer_instance
#align category_theory.limits.pullback_diagonal_map_id_iso CategoryTheory.Limits.pullbackDiagonalMapIdIso
@[reassoc (attr := simp)]
theorem pullbackDiagonalMapIdIso_hom_fst :
(pullbackDiagonalMapIdIso f g i).hom ≫ pullback.fst = pullback.snd ≫ pullback.fst := by
delta pullbackDiagonalMapIdIso
simp
#align category_theory.limits.pullback_diagonal_map_id_iso_hom_fst CategoryTheory.Limits.pullbackDiagonalMapIdIso_hom_fst
@[reassoc (attr := simp)]
theorem pullbackDiagonalMapIdIso_hom_snd :
(pullbackDiagonalMapIdIso f g i).hom ≫ pullback.snd = pullback.snd ≫ pullback.snd := by
delta pullbackDiagonalMapIdIso
simp
#align category_theory.limits.pullback_diagonal_map_id_iso_hom_snd CategoryTheory.Limits.pullbackDiagonalMapIdIso_hom_snd
@[reassoc (attr := simp)]
theorem pullbackDiagonalMapIdIso_inv_fst :
(pullbackDiagonalMapIdIso f g i).inv ≫ pullback.fst = pullback.fst ≫ f := by
rw [Iso.inv_comp_eq, ← Category.comp_id pullback.fst, ← diagonal_fst i, pullback.condition_assoc]
simp
#align category_theory.limits.pullback_diagonal_map_id_iso_inv_fst CategoryTheory.Limits.pullbackDiagonalMapIdIso_inv_fst
@[reassoc (attr := simp)]
theorem pullbackDiagonalMapIdIso_inv_snd_fst :
(pullbackDiagonalMapIdIso f g i).inv ≫ pullback.snd ≫ pullback.fst = pullback.fst := by
rw [Iso.inv_comp_eq]
simp
#align category_theory.limits.pullback_diagonal_map_id_iso_inv_snd_fst CategoryTheory.Limits.pullbackDiagonalMapIdIso_inv_snd_fst
@[reassoc (attr := simp)]
theorem pullbackDiagonalMapIdIso_inv_snd_snd :
(pullbackDiagonalMapIdIso f g i).inv ≫ pullback.snd ≫ pullback.snd = pullback.snd := by
rw [Iso.inv_comp_eq]
simp
#align category_theory.limits.pullback_diagonal_map_id_iso_inv_snd_snd CategoryTheory.Limits.pullbackDiagonalMapIdIso_inv_snd_snd
theorem pullback.diagonal_comp (f : X ⟶ Y) (g : Y ⟶ Z) [HasPullback f f] [HasPullback g g]
[HasPullback (f ≫ g) (f ≫ g)] :
diagonal (f ≫ g) = diagonal f ≫ (pullbackDiagonalMapIdIso f f g).inv ≫ pullback.snd := by
apply pullback.hom_ext <;> simp
#align category_theory.limits.pullback.diagonal_comp CategoryTheory.Limits.pullback.diagonal_comp
theorem pullback_map_diagonal_isPullback :
IsPullback (pullback.fst ≫ f)
(pullback.map f g (f ≫ i) (g ≫ i) _ _ i (Category.id_comp _).symm (Category.id_comp _).symm)
(diagonal i)
(pullback.map (f ≫ i) (g ≫ i) i i f g (𝟙 _) (Category.comp_id _) (Category.comp_id _)) := by
apply IsPullback.of_iso_pullback _ (pullbackDiagonalMapIdIso f g i).symm
· simp
· apply pullback.hom_ext <;> simp
· constructor
apply pullback.hom_ext <;> simp [condition]
#align category_theory.limits.pullback_map_diagonal_is_pullback CategoryTheory.Limits.pullback_map_diagonal_isPullback
/-- The diagonal object of `X ×[Z] Y ⟶ X` is isomorphic to `Δ_{Y/Z} ×[Z] X`. -/
def diagonalObjPullbackFstIso {X Y Z : C} (f : X ⟶ Z) (g : Y ⟶ Z) :
diagonalObj (pullback.fst : pullback f g ⟶ X) ≅
pullback (pullback.snd ≫ g : diagonalObj g ⟶ Z) f :=
pullbackRightPullbackFstIso _ _ _ ≪≫
pullback.congrHom pullback.condition rfl ≪≫
pullbackAssoc _ _ _ _ ≪≫ pullbackSymmetry _ _ ≪≫ pullback.congrHom pullback.condition rfl
#align category_theory.limits.diagonal_obj_pullback_fst_iso CategoryTheory.Limits.diagonalObjPullbackFstIso
@[reassoc (attr := simp)]
theorem diagonalObjPullbackFstIso_hom_fst_fst {X Y Z : C} (f : X ⟶ Z) (g : Y ⟶ Z) :
(diagonalObjPullbackFstIso f g).hom ≫ pullback.fst ≫ pullback.fst =
pullback.fst ≫ pullback.snd := by
delta diagonalObjPullbackFstIso
simp
#align category_theory.limits.diagonal_obj_pullback_fst_iso_hom_fst_fst CategoryTheory.Limits.diagonalObjPullbackFstIso_hom_fst_fst
@[reassoc (attr := simp)]
theorem diagonalObjPullbackFstIso_hom_fst_snd {X Y Z : C} (f : X ⟶ Z) (g : Y ⟶ Z) :
    (diagonalObjPullbackFstIso f g).hom ≫ pullback.fst ≫ pullback.snd =
      pullback.snd ≫ pullback.snd := by
  delta diagonalObjPullbackFstIso
  simp
#align category_theory.limits.diagonal_obj_pullback_fst_iso_hom_fst_snd CategoryTheory.Limits.diagonalObjPullbackFstIso_hom_fst_snd
@[reassoc (attr := simp)]
theorem diagonalObjPullbackFstIso_hom_snd {X Y Z : C} (f : X ⟶ Z) (g : Y ⟶ Z) :
(diagonalObjPullbackFstIso f g).hom ≫ pullback.snd = pullback.fst ≫ pullback.fst := by
delta diagonalObjPullbackFstIso
simp
#align category_theory.limits.diagonal_obj_pullback_fst_iso_hom_snd CategoryTheory.Limits.diagonalObjPullbackFstIso_hom_snd
@[reassoc (attr := simp)]
theorem diagonalObjPullbackFstIso_inv_fst_fst {X Y Z : C} (f : X ⟶ Z) (g : Y ⟶ Z) :
(diagonalObjPullbackFstIso f g).inv ≫ pullback.fst ≫ pullback.fst = pullback.snd := by
delta diagonalObjPullbackFstIso
simp
#align category_theory.limits.diagonal_obj_pullback_fst_iso_inv_fst_fst CategoryTheory.Limits.diagonalObjPullbackFstIso_inv_fst_fst
@[reassoc (attr := simp)]
theorem diagonalObjPullbackFstIso_inv_fst_snd {X Y Z : C} (f : X ⟶ Z) (g : Y ⟶ Z) :
(diagonalObjPullbackFstIso f g).inv ≫ pullback.fst ≫ pullback.snd =
pullback.fst ≫ pullback.fst := by
delta diagonalObjPullbackFstIso
simp
#align category_theory.limits.diagonal_obj_pullback_fst_iso_inv_fst_snd CategoryTheory.Limits.diagonalObjPullbackFstIso_inv_fst_snd
@[reassoc (attr := simp)]
theorem diagonalObjPullbackFstIso_inv_snd_fst {X Y Z : C} (f : X ⟶ Z) (g : Y ⟶ Z) :
(diagonalObjPullbackFstIso f g).inv ≫ pullback.snd ≫ pullback.fst = pullback.snd := by
delta diagonalObjPullbackFstIso
simp
#align category_theory.limits.diagonal_obj_pullback_fst_iso_inv_snd_fst CategoryTheory.Limits.diagonalObjPullbackFstIso_inv_snd_fst
@[reassoc (attr := simp)]
theorem diagonalObjPullbackFstIso_inv_snd_snd {X Y Z : C} (f : X ⟶ Z) (g : Y ⟶ Z) :
(diagonalObjPullbackFstIso f g).inv ≫ pullback.snd ≫ pullback.snd =
pullback.fst ≫ pullback.snd := by
delta diagonalObjPullbackFstIso
simp
#align category_theory.limits.diagonal_obj_pullback_fst_iso_inv_snd_snd CategoryTheory.Limits.diagonalObjPullbackFstIso_inv_snd_snd
theorem diagonal_pullback_fst {X Y Z : C} (f : X ⟶ Z) (g : Y ⟶ Z) :
diagonal (pullback.fst : pullback f g ⟶ _) =
(pullbackSymmetry _ _).hom ≫
((baseChange f).map
(Over.homMk (diagonal g) (by simp) : Over.mk g ⟶ Over.mk (pullback.snd ≫ g))).left ≫
(diagonalObjPullbackFstIso f g).inv := by
apply pullback.hom_ext <;> apply pullback.hom_ext <;> dsimp <;> simp
#align category_theory.limits.diagonal_pullback_fst CategoryTheory.Limits.diagonal_pullback_fst
end
/-- Given the following diagram with `S ⟶ S'` a monomorphism,
X ⟶ X'
↘ ↘
S ⟶ S'
↗ ↗
Y ⟶ Y'
This iso witnesses the fact that
X ×[S] Y ⟶ (X' ×[S'] Y') ×[Y'] Y
| |
| |
↓ ↓
(X' ×[S'] Y') ×[X'] X ⟶ X' ×[S'] Y'
is a pullback square. The diagonal map of this square is `pullback.map`.
Also see `pullback_lift_map_is_pullback`.
-/
@[simps]
def pullbackFstFstIso {X Y S X' Y' S' : C} (f : X ⟶ S) (g : Y ⟶ S) (f' : X' ⟶ S') (g' : Y' ⟶ S')
(i₁ : X ⟶ X') (i₂ : Y ⟶ Y') (i₃ : S ⟶ S') (e₁ : f ≫ i₃ = i₁ ≫ f') (e₂ : g ≫ i₃ = i₂ ≫ g')
[Mono i₃] :
pullback (pullback.fst : pullback (pullback.fst : pullback f' g' ⟶ _) i₁ ⟶ _)
(pullback.fst : pullback (pullback.snd : pullback f' g' ⟶ _) i₂ ⟶ _) ≅
pullback f g
where
hom :=
pullback.lift (pullback.fst ≫ pullback.snd) (pullback.snd ≫ pullback.snd)
(by
rw [← cancel_mono i₃, Category.assoc, Category.assoc, Category.assoc, Category.assoc, e₁,
e₂, ← pullback.condition_assoc, pullback.condition_assoc, pullback.condition,
pullback.condition_assoc])
inv :=
pullback.lift
(pullback.lift (pullback.map _ _ _ _ _ _ _ e₁ e₂) pullback.fst (pullback.lift_fst _ _ _))
(pullback.lift (pullback.map _ _ _ _ _ _ _ e₁ e₂) pullback.snd (pullback.lift_snd _ _ _))
(by rw [pullback.lift_fst, pullback.lift_fst])
hom_inv_id := by
apply pullback.hom_ext
. apply pullback.hom_ext
. apply pullback.hom_ext
. simp only [Category.assoc, lift_fst, lift_fst_assoc, Category.id_comp]
rw [condition]
. simp [Category.assoc, lift_snd]
rw [condition_assoc, condition]
. simp only [Category.assoc, lift_fst_assoc, lift_snd, lift_fst, Category.id_comp]
. apply pullback.hom_ext
. apply pullback.hom_ext
. simp only [Category.assoc, lift_snd_assoc, lift_fst_assoc, lift_fst, Category.id_comp]
rw [← condition_assoc, condition]
. simp only [Category.assoc, lift_snd, lift_fst_assoc, lift_snd_assoc, Category.id_comp]
rw [condition]
. simp only [Category.assoc, lift_snd_assoc, lift_snd, Category.id_comp]
inv_hom_id := by
apply pullback.hom_ext
. simp only [Category.assoc, lift_fst, lift_fst_assoc, lift_snd, Category.id_comp]
. simp only [Category.assoc, lift_snd, lift_snd_assoc, Category.id_comp]
#align category_theory.limits.pullback_fst_fst_iso CategoryTheory.Limits.pullbackFstFstIso
theorem pullback_map_eq_pullbackFstFstIso_inv {X Y S X' Y' S' : C} (f : X ⟶ S) (g : Y ⟶ S)
(f' : X' ⟶ S') (g' : Y' ⟶ S') (i₁ : X ⟶ X') (i₂ : Y ⟶ Y') (i₃ : S ⟶ S') (e₁ : f ≫ i₃ = i₁ ≫ f')
(e₂ : g ≫ i₃ = i₂ ≫ g') [Mono i₃] :
pullback.map f g f' g' i₁ i₂ i₃ e₁ e₂ =
(pullbackFstFstIso f g f' g' i₁ i₂ i₃ e₁ e₂).inv ≫ pullback.snd ≫ pullback.fst := by
simp only [pullbackFstFstIso_inv, lift_snd_assoc, lift_fst]
#align category_theory.limits.pullback_map_eq_pullback_fst_fst_iso_inv CategoryTheory.Limits.pullback_map_eq_pullbackFstFstIso_inv
theorem pullback_lift_map_isPullback {X Y S X' Y' S' : C} (f : X ⟶ S) (g : Y ⟶ S) (f' : X' ⟶ S')
(g' : Y' ⟶ S') (i₁ : X ⟶ X') (i₂ : Y ⟶ Y') (i₃ : S ⟶ S') (e₁ : f ≫ i₃ = i₁ ≫ f')
(e₂ : g ≫ i₃ = i₂ ≫ g') [Mono i₃] :
IsPullback (pullback.lift (pullback.map f g f' g' i₁ i₂ i₃ e₁ e₂) fst (lift_fst _ _ _))
(pullback.lift (pullback.map f g f' g' i₁ i₂ i₃ e₁ e₂) snd (lift_snd _ _ _)) pullback.fst
pullback.fst :=
IsPullback.of_iso_pullback ⟨by rw [lift_fst, lift_fst]⟩
(pullbackFstFstIso f g f' g' i₁ i₂ i₃ e₁ e₂).symm (by simp) (by simp)
#align category_theory.limits.pullback_lift_map_is_pullback CategoryTheory.Limits.pullback_lift_map_isPullback
end CategoryTheory.Limits
|
function imdb = cnn_imagenet_setup_data(varargin)
% CNN_IMAGENET_SETUP_DATA Initialize ImageNet ILSVRC CLS-LOC challenge data
% This function creates an IMDB structure pointing to a local copy
% of the ILSVRC CLS-LOC data.
%
% The ILSVRC data ships in several TAR archives that can be
% obtained from the ILSVRC challenge website. You will need:
%
% ILSVRC2012_img_train.tar
% ILSVRC2012_img_val.tar
% ILSVRC2012_img_test.tar
% ILSVRC2012_devkit.tar
%
% Note that images in the CLS-LOC challenge are the same for the 2012, 2013,
% and 2014 editions of ILSVRC, but that the development kit
% is different. However, all devkit versions should work.
%
% In order to use the ILSVRC data with these scripts, please
% unpack it as follows. Create a root folder <DATA>, by default
%
% data/imagenet12
%
% (note that this can be a symlink). Use the 'dataDir' option to
% specify a different path.
%
% Within this folder, create the following hierarchy:
%
% <DATA>/images/train/ : content of ILSVRC2012_img_train.tar
% <DATA>/images/val/ : content of ILSVRC2012_img_val.tar
% <DATA>/images/test/ : content of ILSVRC2012_img_test.tar
% <DATA>/ILSVRC2012_devkit : content of ILSVRC2012_devkit.tar
%
% In order to speedup training and testing, it may be a good idea
% to preprocess the images to have a fixed size (e.g. 256 pixels
% high) and/or to store the images in RAM disk (provided that
% sufficient RAM is available). Reading images off disk with a
% sufficient speed is crucial for fast training.
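%
% A hypothetical invocation (the option value below is just an
% illustration, matching the default):
%
%   imdb = cnn_imagenet_setup_data('dataDir', fullfile('data','imagenet12')) ;
%
% returns the IMDB structure assembled by the code below.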
opts.dataDir = fullfile('data','imagenet12') ;
opts.lite = false ;
opts = vl_argparse(opts, varargin) ;
% -------------------------------------------------------------------------
% Load categories metadata
% -------------------------------------------------------------------------
d = dir(fullfile(opts.dataDir, '*devkit*')) ;
if numel(d) == 0
error('Make sure that ILSVRC data is correctly installed in %s', ...
opts.dataDir) ;
end
devkitPath = fullfile(opts.dataDir, d(1).name) ;
% find metadata
mt = dir(fullfile(devkitPath, 'data', 'meta_clsloc.mat')) ;
if numel(mt) == 0
mt = dir(fullfile(devkitPath, 'data', 'meta.mat')) ;
end
metaPath = fullfile(devkitPath, 'data', mt(1).name) ;
% find validation images labels
tmp = dir(fullfile(devkitPath, 'data', '*_validation_ground_truth*')) ;
valLabelsPath = fullfile(devkitPath, 'data', tmp(1).name) ;
% find validation images blacklist
tmp = dir(fullfile(devkitPath, 'data', '*_validation_blacklist*')) ;
if numel(tmp) > 0
valBlacklistPath = fullfile(devkitPath, 'data', tmp(1).name) ;
else
valBlacklistPath = [] ;
warning('Could not find validation images blacklist file');
end
fprintf('using devkit %s\n', devkitPath) ;
fprintf('using metadata %s\n', metaPath) ;
fprintf('using validation labels %s\n', valLabelsPath) ;
fprintf('using validation blacklist %s\n', valBlacklistPath) ;
meta = load(metaPath) ;
cats = {meta.synsets(1:1000).WNID} ;
descrs = {meta.synsets(1:1000).words} ;
imdb.classes.name = cats ;
imdb.classes.description = descrs ;
imdb.imageDir = fullfile(opts.dataDir, 'images') ;
% -------------------------------------------------------------------------
% Training images
% -------------------------------------------------------------------------
fprintf('searching training images ...\n') ;
names = {} ;
labels = {} ;
for d = dir(fullfile(opts.dataDir, 'images', 'train', 'n*'))'
[~,lab] = ismember(d.name, cats) ;
ims = dir(fullfile(opts.dataDir, 'images', 'train', d.name, '*.JPEG')) ;
names{end+1} = strcat([d.name, filesep], {ims.name}) ;
labels{end+1} = ones(1, numel(ims)) * lab ;
fprintf('.') ;
if mod(numel(names), 50) == 0, fprintf('\n') ; end
%fprintf('found %s with %d images\n', d.name, numel(ims)) ;
end
names = horzcat(names{:}) ;
labels = horzcat(labels{:}) ;
if numel(names) ~= 1281167
warning('Found %d training images instead of 1,281,167. Dropping training set.', numel(names)) ;
names = {} ;
labels =[] ;
end
names = strcat(['train' filesep], names) ;
imdb.images.id = 1:numel(names) ;
imdb.images.name = names ;
imdb.images.set = ones(1, numel(names)) ;
imdb.images.label = labels ;
% -------------------------------------------------------------------------
% Validation images
% -------------------------------------------------------------------------
ims = dir(fullfile(opts.dataDir, 'images', 'val', '*.JPEG')) ;
names = sort({ims.name}) ;
labels = textread(valLabelsPath, '%d') ;
if numel(ims) ~= 50e3
warning('Found %d instead of 50,000 validation images. Dropping validation set.', numel(ims))
names = {} ;
labels =[] ;
else
if ~isempty(valBlacklistPath)
black = textread(valBlacklistPath, '%d') ;
fprintf('blacklisting %d validation images\n', numel(black)) ;
keep = setdiff(1:numel(names), black) ;
names = names(keep) ;
labels = labels(keep) ;
end
end
names = strcat(['val' filesep], names) ;
imdb.images.id = horzcat(imdb.images.id, (1:numel(names)) + 1e7 - 1) ;
imdb.images.name = horzcat(imdb.images.name, names) ;
imdb.images.set = horzcat(imdb.images.set, 2*ones(1,numel(names))) ;
imdb.images.label = horzcat(imdb.images.label, labels') ;
% -------------------------------------------------------------------------
% Test images
% -------------------------------------------------------------------------
ims = dir(fullfile(opts.dataDir, 'images', 'test', '*.JPEG')) ;
names = sort({ims.name}) ;
labels = zeros(1, numel(names)) ;
if numel(labels) ~= 100e3
warning('Found %d instead of 100,000 test images', numel(labels))
end
names = strcat(['test' filesep], names) ;
imdb.images.id = horzcat(imdb.images.id, (1:numel(names)) + 2e7 - 1) ;
imdb.images.name = horzcat(imdb.images.name, names) ;
imdb.images.set = horzcat(imdb.images.set, 3*ones(1,numel(names))) ;
imdb.images.label = horzcat(imdb.images.label, labels) ;
% -------------------------------------------------------------------------
% Postprocessing
% -------------------------------------------------------------------------
% sort categories by WNID (to be compatible with other implementations)
[imdb.classes.name,perm] = sort(imdb.classes.name) ;
imdb.classes.description = imdb.classes.description(perm) ;
relabel(perm) = 1:numel(imdb.classes.name) ;
ok = imdb.images.label > 0 ;
imdb.images.label(ok) = relabel(imdb.images.label(ok)) ;
if opts.lite
% pick a small number of images for the first 10 classes
% this cannot be done for test as we do not have test labels
clear keep ;
for i=1:10
sel = find(imdb.images.label == i) ;
train = sel(imdb.images.set(sel) == 1) ;
val = sel(imdb.images.set(sel) == 2) ;
train = train(1:256) ;
val = val(1:40) ;
keep{i} = [train val] ;
end
test = find(imdb.images.set == 3) ;
keep = sort(cat(2, keep{:}, test(1:1000))) ;
imdb.images.id = imdb.images.id(keep) ;
imdb.images.name = imdb.images.name(keep) ;
imdb.images.set = imdb.images.set(keep) ;
imdb.images.label = imdb.images.label(keep) ;
end
|
\documentclass[11pt]{article}
\usepackage[a4paper,margin= 2cm]{geometry}
\usepackage{graphicx}
\usepackage{caption}
\usepackage{subcaption}
\title{\LARGE{\bf{Task 4 - Quadrature}}}
\author{\Large{\bf{Kirtan Patel - AE19B038}}}
\date{}
\begin{document}
\maketitle
\section{Introduction}
In mathematics, quadrature is a historical term for the process of determining area. The term is still used today in the context of differential equations, where ``solving an equation by quadrature'' or ``reduction to quadrature'' means expressing the solution in terms of integrals.\\
There are several reasons why such approximations are useful. First, not every function can be integrated analytically. Second, even if a closed integration formula exists, it might still not be the most efficient way of calculating the integral. In addition, we may need to integrate an unknown function, of which only some samples are known.\\
This report covers the implementation and analysis of four quadrature rules:
\begin{enumerate}
\item Rectangular Method
\begin{enumerate}
\item Left Endpoint Rule
\item Right Endpoint Rule
\item Midpoint Rule
\end{enumerate}
\item Trapezoid Method
\end{enumerate}
\subsection{Rectangular Method}
In order to gain some insight on numerical integration, we review Riemann integration, a framework that can be viewed as an approach for approximating integrals.\\
We assume that $f(x)$ is a bounded function defined on $[a,b]$ and that $\{x_0,\dots,x_n\}$ is a partition (P) of $[a,b]$. For each $i$ we let
\[ M_i(f) = \sup_{x\in[x_{i-1},x_i]} f(x)\] and \[m_i(f) = \inf_{x\in[x_{i-1},x_i]} f(x)\]
Thus, $M_i(f)$ is the supremum of $f(x)$ on the $i$th interval $[x_{i-1},x_i]$, while $m_i(f)$ is the infimum of $f(x)$ on that interval.\\
Letting $\Delta x_i = x_i - x_{i-1}$, the \textbf{upper (Darboux) sum} of $f(x)$ with respect to the partition P is defined as :
\[ U(f,P) = \sum_{i=1}^{n}M_i\Delta x_i\]
while the \textbf{lower (Darboux) sum} of f(x) with respect to the partition P is defined as :
\[ L(f,P) = \sum_{i=1}^{n}m_i\Delta x_i\] \\
The upper (Darboux) integral of $f(x)$ on $[a,b]$ is defined as :
\[ U(f)= \inf_P U(f,P)\]
and the lower (Darboux) integral of $f(x)$ is defined as
\[ L(f)= \sup_P L(f,P)\] where both the infimum and the supremum are taken over all possible partitions, P, of the interval $[a,b]$. \\
As described above, the integral $\int_{a}^{b}f(x) \,dx$ can be bounded using the extreme values attained by the function on the given range. However, these sums are not defined in the most convenient way for an approximation algorithm, because we would need to find the extrema of the function on every subinterval, and finding the extrema of a function may be a complicated task on its own, which we would like to avoid.\\
A simpler approach is to approximate the integral over each interval by the product of the length of the interval and the value of the function at either of its endpoints. For an increasing function, taking the left endpoint gives a lower bound while the right endpoint gives an upper bound; the opposite is true if the function is decreasing.\\
A variation on the rectangular rule is the midpoint rule. As in the rectangular rule, we approximate the value of the integral by multiplying the length of the interval by the value of the function at one point, only this time we use the value of the function at the center point of the interval.\\
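For concreteness, the following is a minimal sketch of the composite left-endpoint, right-endpoint, and midpoint rules. (The snippet is written in Python purely for illustration; the function and parameter names are our own and not part of the task statement.)
\begin{verbatim}
def left_rule(f, a, b, n):
    # Composite left-endpoint rule on n equal sub-intervals.
    dx = (b - a) / n
    return dx * sum(f(a + i * dx) for i in range(n))

def right_rule(f, a, b, n):
    # Composite right-endpoint rule.
    dx = (b - a) / n
    return dx * sum(f(a + (i + 1) * dx) for i in range(n))

def midpoint_rule(f, a, b, n):
    # Composite midpoint rule: evaluate at the center of each sub-interval.
    dx = (b - a) / n
    return dx * sum(f(a + (i + 0.5) * dx) for i in range(n))
\end{verbatim}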
\begin{figure} [h!]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1.1\linewidth]{Sample1}
\label{fig1:sub1}
\end{subfigure}%
\hspace{1cm}
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=0.9\linewidth]{Sample2}
\label{fig1:sub2}
\end{subfigure}
\caption{Graphical Representation of Quadrature using Rectangular Rule}
\end{figure}
\subsection{Trapezoid Method}
This technique is a much more accurate way to approximate the area beneath a curve. To construct the trapezoids, we mark the height of the function at the beginning and at the end of each width interval and then connect the two points.
We get a better approximation for graphs that are close to linear by treating each interval as a trapezium; here we use both endpoints and average the values of the function at them.\\
On dividing the integration interval into sub-intervals, this is termed the composite trapezoid rule, obtained by applying the trapezoid rule on each sub-interval and then adding the resulting areas together.\\
The Trapezoidal Rule overestimates a curve that is concave up and underestimates one that is concave down.
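A corresponding sketch of the composite trapezoid rule (again in Python, with illustrative names only) is:
\begin{verbatim}
def trapezoid_rule(f, a, b, n):
    # Composite trapezoid rule: interior points are shared by two
    # trapezoids, so they carry full weight; the endpoints carry half.
    dx = (b - a) / n
    interior = sum(f(a + i * dx) for i in range(1, n))
    return dx * (0.5 * f(a) + interior + 0.5 * f(b))
\end{verbatim}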
\begin{figure} [h!]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.8\linewidth]{Sample3}
\label{fig2:sub2}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.8\linewidth]{Sample4}
\label{fig2:sub3}
\end{subfigure}
\caption{Graphical Representation of Quadrature using Trapezoid Rule}
\end{figure}
\vspace{1cm}
\subsection{Remarks}
There are several other methods which use weighted averages of the function values and give less erroneous results. The following are a few of them:
\begin{itemize}
\item \textbf{Newton-Cotes Methods}
Use interpolating polynomials. The trapezoid rule, Simpson's 1/3 and 3/8 rules, and Boole's (Bode's) rule are the special cases in which 1st-, 2nd-, 3rd- and 4th-order polynomials are used, respectively.
\item \textbf{Romberg Integration (Richardson Extrapolation)}
Uses knowledge of the error estimates to build a recursive higher-order scheme.
\item \textbf{Gauss Quadrature}
Like Newton-Cotes, but instead of a regular grid, the evaluation points are chosen so as to attain higher-order accuracy.
\item \textbf{Monte Carlo Integration}
Uses randomly selected grid points. Useful for higher-dimensional integrals (d>4).
\end{itemize}
\pagebreak
\section{Results and Analysis}
Taking the number of sub-intervals from the user, and finding the error in the quadrature found by each method, we get the following results.
\subsection{Absolute Error vs $\Delta$x}
\begin{figure}[h!]
\centering
\centering
\includegraphics[width=0.8\linewidth]{abs_error}
\caption{Absolute Error vs $\Delta x$ size of sub-interval}
\label{fig4}
\end{figure}
\subsection{log(Absolute Error) vs $\Delta$x}
\begin{figure} [h!]
\centering
\begin{subfigure}{0.55\textwidth}
\centering
\includegraphics[width=\linewidth]{log_abs_error}
\caption{log(absolute error) vs $\Delta$x}
\label{fig2:sub1}
\end{subfigure}%
\begin{subfigure}{0.55\textwidth}
\centering
\includegraphics[width=\linewidth]{log_error_vs_delta_x_Lin_Reg}
\caption{log(absolute error) vs $\Delta$x - Linear Regression}
\label{fig2:sub2}
\end{subfigure}
\caption{Results obtained by plotting log(Absolute Error) vs $\Delta$x}
\end{figure}
We can see that the Midpoint and Trapezoid Rules give far more accurate results than the Left and Right Endpoint Rules. This difference is significant for functions with large slopes and large slope changes.\\
Furthermore, the Midpoint Rule gives a slightly better result than the Trapezoid Rule. For more nearly linear functions this difference is negligible, while for highly non-linear functions the difference is large.\\
We can also see that the relation between the Absolute Error and the size of the sub-interval is almost linear. Naturally, if we use a finer sub-interval, we get closer and closer to the true value of the integral, and hence the error tends to zero.\\
It is important to note that the user inputs the number of intervals the integration interval is to be divided in. The sub-interval size $\Delta$x is given by : \[ \Delta x = \frac{(upper~integration~limit - lower~integration~limit)}{number~of~sub-intervals}\]
\subsection{Absolute Error vs Number of Sub-intervals}
\begin{figure}[h!]
\centering
\centering
\includegraphics[width=0.8\linewidth]{abs_error_vs_num_subintervals}
\caption{Result obtained by plotting Absolute Error vs number of sub-intervals}
\label{fig4}
\end{figure}
We can see that the Absolute Error is \textbf{inversely proportional} to the number of sub-intervals. \\
This is also verified by the Absolute Error vs $\Delta$x plot, which is linear. Together with the relation between $\Delta$x and the number of sub-intervals, this confirms that the plot follows the mathematical relation y $\propto$ $\frac{1}{x}$. \\
As the number of sub-intervals tends to $\infty$, the Absolute Error tends to 0.
\pagebreak
\section{Order of Accuracy}
The `order of accuracy' is a way to quantify the accuracy of a method, i.e., how close the output of the method is to the actual value. Since the value we get from a numerical method is an approximation to the actual value, there is an error value greater than zero, as the approximate value is not exactly the same as the actual value but close to it.\\
The degree of accuracy of a Quadrature Formula is the largest positive integer n such that the formula is exact for x$^k$ for each k = 0,1,\dots,n.\\
The integral is given as :
\[ \int_{a}^{b}f(x) \,dx = Numerical~Integral~+~Error\]
In terms of the above definition, the order of accuracy is the highest degree of a polynomial for which the Integration Method gives no error.
\subsection{Rectangular Rule}
The Left Endpoint Rectangular Rule follows :
\[\int_{a}^{b}f(x) \,dx = (b-a)\,f(a)~+~Error\]
The Right Endpoint Rectangular Rule follows :
\[\int_{a}^{b}f(x) \,dx = (b-a)\,f(b)~+~Error\]
The Midpoint Rectangular Rule follows :
\[\int_{a}^{b}f(x) \,dx = (b-a)\,f\left(\frac{a+b}{2}\right)~+~Error\] \\
Let f(x) = x$^0$\\
For which the integral value is~~$\int_{a}^{b} x^0 \,dx~=~\int_{a}^{b} 1 \,dx~=~ b-a$\\
And the Rectangular Rule (for all 3 sub-rules) gives the value $(b-a)\cdot[1]~=~b-a$ \\
Hence there is \textbf{No Error} for x$^0$. \\
For f(x) = x$^1$\\
For which the integral value is~~$\int_{a}^{b} x \,dx~=~\frac{b^2-a^2}{2}$\\
the Left Endpoint Rectangular Rule gives the value $(b-a)\cdot[a]~=~ab-a^2$\\
the Right Endpoint Rectangular Rule gives the value $(b-a)\cdot[b]~=~b^2-ab$\\
the Midpoint Rectangular Rule gives the value $(b-a)\cdot[\frac{a+b}{2}]~=~\frac{b^2-a^2}{2}$\\
Hence there is \textbf{No Error} for x$^1$ in the Midpoint Rule, while there is an \textbf{Error} for the Left Endpoint and the Right Endpoint Rules.\\
But for f(x) = x$^2$\\
For which the integral value is~~$\int_{a}^{b} x^2 \,dx~=~\frac{b^3-a^3}{3}$\\
the Midpoint Rectangular Rule gives the value~$(b-a)\left(\frac{a+b}{2}\right)^2$,\\
leaving an \textbf{Error} of $\frac{(b-a)^3}{12}$ for f(x) = x$^2$.\\
Therefore, the order of accuracy of the Midpoint Rectangular method is 1,\\
while the order of accuracy of the Left and Right Endpoint Rules is 0.
\subsection{Trapezoid Rule}
The Trapezoid Rule follows :
\[ \int_{a}^{b}f(x) \,dx = \frac{b-a}{2}[f(a)+f(b)]~+~Error\]
Let f(x) = x$^0$\\
For which the integral value is~~$\int_{a}^{b} x^0 \,dx~=~\int_{a}^{b} 1 \,dx~=~ b-a$\\
And the Trapezoid Rule gives the value~$\frac{b-a}{2}~[1~+~1]~=~b-a$\\
Hence there is \textbf{No Error} for x$^0$. \\
Similarly, let f(x) = x$^1$\\
For which the integral value is~~$\int_{a}^{b} x \,dx~=~\frac{b^2-a^2}{2}$\\
And the Trapezoid Rule gives the value~$\frac{b-a}{2}~[a~+~b]~=~\frac{b^2-a^2}{2}$\\
Hence there is \textbf{No Error} for x$^1$ as well.\\
But for f(x) = x$^2$\\
For which the integral value is~~$\int_{a}^{b} x^2 \,dx~=~\frac{b^3-a^3}{3}$\\
And the Trapezoid Rule gives the value~$\frac{b-a}{2}~[a^2~+~b^2]$,\\
leaving a cubic \textbf{Error} of $-\frac{(b-a)^3}{6}$ for f(x) = x$^2$.\\
Therefore, the order of accuracy of the Trapezoid method is 1.\\
As we begin to use interpolation and other mathematical techniques, we can attain a higher order of accuracy; e.g., Simpson's 1/3 rule has an order of accuracy of 3.\\
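As a check of that last claim, Simpson's 1/3 rule $\frac{b-a}{6}\left[f(a)+4f\left(\frac{a+b}{2}\right)+f(b)\right]$ is exact even for f(x) = x$^3$:
\[\frac{b-a}{6}\left[a^3+4\left(\frac{a+b}{2}\right)^3+b^3\right]=\frac{(b-a)(a+b)(a^2+b^2)}{4}=\frac{b^4-a^4}{4}=\int_{a}^{b}x^3\,dx,\]
while it fails for x$^4$, giving order of accuracy 3.\\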
Tabulating the orders of accuracy (OoA) calculated above:\\
\begin{table} [h!]
\centering
\begin{tabular}{| l || l |}
\hline
Quadrature Method & OoA \\
\hline \hline
Left Endpoint Rectangular Method & 0\\
\hline
Right Endpoint Rectangular Method & 0 \\
\hline
Midpoint Rectangular Method & 1 \\
\hline
Trapezoid Method & 1 \\
\hline
\end{tabular}
\end{table}
This is seen clearly in the Absolute Error vs $\Delta$x plots (Figures 3 and 4).
\section{References}
\begin{enumerate}
\item https://en.wikipedia.org/wiki/Quadrature$\_$(mathematics)
\item https://math.stackexchange.com/questions/2873291/what-is-the-intuitive-meaning-of-order-of-accuracy-and-order-of-approximation
\item http://www.math.pitt.edu/~sussmanm/2070Fall07/lab$\_$10/index.html
\item https://www.kth.se/social/upload/5287cd2bf27654543869d6c8/Ant-OrderofAccuracy.pdf
\item https://www3.nd.edu/~zxu2/acms40390F11/sec4-3-Numerical$\_$integration.pdf
\end{enumerate}
\end{document}
|
-- ---------------------------------------------------------------- [ Effs.idr ]
-- Module : Effs.idr
-- Copyright : (c) Jan de Muijnck-Hughes
-- License : see LICENSE
-- --------------------------------------------------------------------- [ EOH ]
module Frigg.Effs
-- ----------------------------------------------------------------- [ Imports ]
import public Effects
import public Effect.System
import public Effect.State
import public Effect.Exception
import public Effect.File
import public Effect.StdIO
import XML.DOM
import ArgParse
import Config.INI
import Freyja
import Frigg.Options
import Frigg.Error
-- -------------------------------------------------------------- [ Directives ]
%access export
-- ------------------------------------------------------------------- [ State ]
public export
record FriggState where
constructor MkFriggState
opts : FriggOpts
pdoc : Maybe PatternDoc
xdoc : Maybe XMLDoc
Default FriggState where
default = MkFriggState mkDefOpts Nothing Nothing
-- ----------------------------------------------------------------- [ Effects ]
public export
FriggEffs : List EFFECT
FriggEffs = [ FILE_IO ()
, SYSTEM
, STDIO
, 'ferr ::: EXCEPTION FriggError
, 'fstate ::: STATE FriggState
]
namespace Frigg
raise : FriggError -> Eff b ['ferr ::: EXCEPTION FriggError]
raise err = 'ferr :- Exception.raise err
public export
Frigg : Type -> Type
Frigg rTy = Eff rTy FriggEffs
getOptions : Eff FriggOpts ['fstate ::: STATE FriggState]
getOptions = pure $ opts !('fstate :- get)
putOptions : FriggOpts -> Eff () ['fstate ::: STATE FriggState]
putOptions o = 'fstate :- update (\st => record {opts = o} st)
processOptions : Frigg FriggOpts
processOptions =
case parseArgs mkDefOpts convOpts !getArgs of
Left err => Frigg.raise (ParseError (show err))
Right o => pure o
putPatternDoc : PatternDoc -> Eff () ['fstate ::: STATE FriggState]
putPatternDoc p = 'fstate :- update (\st => record {pdoc = Just p} st)
getPatternDoc : Frigg PatternDoc
getPatternDoc =
case pdoc !('fstate :- get) of
Nothing => Frigg.raise PatternDocMissing
Just p => pure p
getXMLDoc : Frigg XMLDoc
getXMLDoc =
case xdoc !('fstate :- get) of
Nothing => Frigg.raise PatternDocMissing
Just x => pure x
putXMLDoc : XMLDoc -> Eff () ['fstate ::: STATE FriggState]
putXMLDoc p = 'fstate :- update (\st => record {xdoc = Just p} st)
-- --------------------------------------------------------------------- [ EOF ]
|
!..
!..file contains fermi-dirac integral routines:
!..
!..function zfermim12 does a rational function fit for the order -1/2 integral
!..function zfermi12 does a rational function fit for the order 1/2 integral
!..function zfermi1 does a rational function fit for the order 1 integral
!..function zfermi32 does a rational function fit for the order 3/2 integral
!..function zfermi2 does a rational function fit for the order 2 integral
!..function zfermi52 does a rational function fit for the order 5/2 integral
!..function zfermi3 does a rational function fit for the order 3 integral
!..
!..function ifermim12 is a rational function fit for the inverse of order -1/2
!..function ifermi12 is a rational function fit for the inverse of order 1/2
!..function ifermi32 is a rational function fit for the inverse of order 3/2
!..function ifermi52 is a rational function fit for the inverse of order 5/2
module fermi
contains
double precision function zfermim12(x)
! include 'implno.dek'
!..
!..this routine applies a rational function expansion to get the fermi-dirac
!..integral of order -1/2 evaluated at x. maximum error is 1.23d-12.
!..reference: antia apjs 84,101 1993
!..
!..declare
integer i,m1,k1,m2,k2
double precision x,an,a1(12),b1(12),a2(12),b2(12),rn,den,xx
!..load the coefficients of the expansion
data an,m1,k1,m2,k2 /-0.5d0, 7, 7, 11, 11/
data (a1(i),i=1,8)/ 1.71446374704454d7, 3.88148302324068d7, &
3.16743385304962d7, 1.14587609192151d7, &
1.83696370756153d6, 1.14980998186874d5, &
1.98276889924768d3, 1.0d0/
data (b1(i),i=1,8)/ 9.67282587452899d6, 2.87386436731785d7, &
3.26070130734158d7, 1.77657027846367d7, &
4.81648022267831d6, 6.13709569333207d5, &
3.13595854332114d4, 4.35061725080755d2/
data (a2(i),i=1,12)/-4.46620341924942d-15, -1.58654991146236d-12, &
-4.44467627042232d-10, -6.84738791621745d-8, &
-6.64932238528105d-6, -3.69976170193942d-4, &
-1.12295393687006d-2, -1.60926102124442d-1, &
-8.52408612877447d-1, -7.45519953763928d-1, &
2.98435207466372d0, 1.0d0/
data (b2(i),i=1,12)/-2.23310170962369d-15, -7.94193282071464d-13, &
-2.22564376956228d-10, -3.43299431079845d-8, &
-3.33919612678907d-6, -1.86432212187088d-4, &
-5.69764436880529d-3, -8.34904593067194d-2, &
-4.78770844009440d-1, -4.99759250374148d-1, &
1.86795964993052d0, 4.16485970495288d-1/
if (x .lt. 2.0d0) then
xx = exp(x)
rn = xx + a1(m1)
do i=m1-1,1,-1
rn = rn*xx + a1(i)
enddo
den = b1(k1+1)
do i=k1,1,-1
den = den*xx + b1(i)
enddo
zfermim12 = xx * rn/den
!..
else
xx = 1.0d0/(x*x)
rn = xx + a2(m2)
do i=m2-1,1,-1
rn = rn*xx + a2(i)
enddo
den = b2(k2+1)
do i=k2,1,-1
den = den*xx + b2(i)
enddo
zfermim12 = sqrt(x)*rn/den
end if
return
end
double precision function zfermi12(x)
! include 'implno.dek'
!..
!..this routine applies a rational function expansion to get the fermi-dirac
!..integral of order 1/2 evaluated at x. maximum error is 5.47d-13.
!..reference: antia apjs 84,101 1993
!..
!..declare
integer i,m1,k1,m2,k2
double precision x,an,a1(12),b1(12),a2(12),b2(12),rn,den,xx
!..load the coefficients of the expansion
data an,m1,k1,m2,k2 /0.5d0, 7, 7, 10, 11/
data (a1(i),i=1,8)/5.75834152995465d6, 1.30964880355883d7, &
1.07608632249013d7, 3.93536421893014d6, &
6.42493233715640d5, 4.16031909245777d4, &
7.77238678539648d2, 1.0d0/
data (b1(i),i=1,8)/6.49759261942269d6, 1.70750501625775d7, &
1.69288134856160d7, 7.95192647756086d6, &
1.83167424554505d6, 1.95155948326832d5, &
8.17922106644547d3, 9.02129136642157d1/
data (a2(i),i=1,11)/4.85378381173415d-14, 1.64429113030738d-11, &
3.76794942277806d-9, 4.69233883900644d-7, &
3.40679845803144d-5, 1.32212995937796d-3, &
2.60768398973913d-2, 2.48653216266227d-1, &
1.08037861921488d0, 1.91247528779676d0, &
1.0d0/
data (b2(i),i=1,12)/7.28067571760518d-14, 2.45745452167585d-11, &
5.62152894375277d-9, 6.96888634549649d-7, &
5.02360015186394d-5, 1.92040136756592d-3, &
3.66887808002874d-2, 3.24095226486468d-1, &
1.16434871200131d0, 1.34981244060549d0, &
2.01311836975930d-1, -2.14562434782759d-2/
if (x .lt. 2.0d0) then
xx = exp(x)
rn = xx + a1(m1)
do i=m1-1,1,-1
rn = rn*xx + a1(i)
enddo
den = b1(k1+1)
do i=k1,1,-1
den = den*xx + b1(i)
enddo
zfermi12 = xx * rn/den
else
xx = 1.0d0/(x*x)
rn = xx + a2(m2)
do i=m2-1,1,-1
rn = rn*xx + a2(i)
enddo
den = b2(k2+1)
do i=k2,1,-1
den = den*xx + b2(i)
enddo
zfermi12 = x*sqrt(x)*rn/den
end if
return
end
double precision function zfermi1(x)
! include 'implno.dek'
!..
!..this routine applies a rational function expansion to get the fermi-dirac
!..integral of order 1 evaluated at x. maximum error is 1.0e-8.
!..reference: antia priv comm. 11sep94
!..
!..declare
integer i,m1,k1,m2,k2
double precision x,an,a1(12),b1(12),a2(12),b2(12),rn,den,xx
!..load the coefficients of the expansion
data an,m1,k1,m2,k2 /1.0, 7, 4, 9, 5/
data (a1(i),i=1,8)/-7.606458638543d7, -1.143519707857d8, &
-5.167289383236d7, -7.304766495775d6, &
-1.630563622280d5, 3.145920924780d3, &
-7.156354090495d1, 1.0d0/
data (b1(i),i=1,5)/-7.606458639561d7, -1.333681162517d8, &
-7.656332234147d7, -1.638081306504d7, &
-1.044683266663d6/
data (a2(i),i=1,10)/-3.493105157219d-7, -5.628286279892d-5, &
-5.188757767899d-3, -2.097205947730d-1, &
-3.353243201574d0, -1.682094530855d1, &
-2.042542575231d1, 3.551366939795d0, &
-2.400826804233d0, 1.0d0/
data (b2(i),i=1,6)/-6.986210315105d-7, -1.102673536040d-4, &
-1.001475250797d-2, -3.864923270059d-1, &
-5.435619477378d0, -1.563274262745d1/
if (x .lt. 2.0d0) then
xx = exp(x)
rn = xx + a1(m1)
do i=m1-1,1,-1
rn = rn*xx + a1(i)
enddo
den = b1(k1+1)
do i=k1,1,-1
den = den*xx + b1(i)
enddo
zfermi1 = xx * rn/den
else
xx = 1.0d0/(x*x)
rn = xx + a2(m2)
do i=m2-1,1,-1
rn = rn*xx + a2(i)
enddo
den = b2(k2+1)
do i=k2,1,-1
den = den*xx + b2(i)
enddo
zfermi1 = x*x*rn/den
end if
return
end
double precision function zfermi32(x)
! include 'implno.dek'
!..
!..this routine applies a rational function expansion to get the fermi-dirac
!..integral of order 3/2 evaluated at x. maximum error is 5.07d-13.
!..reference: antia apjs 84,101 1993
!..
!..declare
integer i,m1,k1,m2,k2
double precision x,an,a1(12),b1(12),a2(12),b2(12),rn,den,xx
!..load the coefficients of the expansion
data an,m1,k1,m2,k2 /1.5d0, 6, 7, 9, 10/
data (a1(i),i=1,7)/4.32326386604283d4, 8.55472308218786d4, &
5.95275291210962d4, 1.77294861572005d4, &
2.21876607796460d3, 9.90562948053193d1, &
1.0d0/
data (b1(i),i=1,8)/3.25218725353467d4, 7.01022511904373d4, &
5.50859144223638d4, 1.95942074576400d4, &
3.20803912586318d3, 2.20853967067789d2, &
5.05580641737527d0, 1.99507945223266d-2/
data (a2(i),i=1,10)/2.80452693148553d-13, 8.60096863656367d-11, &
1.62974620742993d-8, 1.63598843752050d-6, &
9.12915407846722d-5, 2.62988766922117d-3, &
3.85682997219346d-2, 2.78383256609605d-1, &
9.02250179334496d-1, 1.0d0/
data (b2(i),i=1,11)/7.01131732871184d-13, 2.10699282897576d-10, &
3.94452010378723d-8, 3.84703231868724d-6, &
2.04569943213216d-4, 5.31999109566385d-3, &
6.39899717779153d-2, 3.14236143831882d-1, &
4.70252591891375d-1, -2.15540156936373d-2, &
2.34829436438087d-3/
if (x .lt. 2.0d0) then
xx = exp(x)
rn = xx + a1(m1)
do i=m1-1,1,-1
rn = rn*xx + a1(i)
enddo
den = b1(k1+1)
do i=k1,1,-1
den = den*xx + b1(i)
enddo
zfermi32 = xx * rn/den
else
xx = 1.0d0/(x*x)
rn = xx + a2(m2)
do i=m2-1,1,-1
rn = rn*xx + a2(i)
enddo
den = b2(k2+1)
do i=k2,1,-1
den = den*xx + b2(i)
enddo
zfermi32 = x*x*sqrt(x)*rn/den
end if
return
end
double precision function zfermi2(x)
! include 'implno.dek'
!..
!..this routine applies a rational function expansion to get the fermi-dirac
!..integral of order 2 evaluated at x. maximum error is 1.0e-8.
!..reference: antia priv comm. 11sep94
!..
!..declare
integer i,m1,k1,m2,k2
double precision x,an,a1(12),b1(12),a2(12),b2(12),rn,den,xx
!..load the coefficients of the expansion
data an,m1,k1,m2,k2 /2.0, 7, 4, 5, 9/
data (a1(i),i=1,8)/-1.434885992395d8, -2.001711155617d8, &
-8.507067153428d7, -1.175118281976d7, &
-3.145120854293d5, 4.275771034579d3, &
-8.069902926891d1, 1.0d0/
data (b1(i),i=1,5)/-7.174429962316d7, -1.090535948744d8, &
-5.350984486022d7, -9.646265123816d6, &
-5.113415562845d5/
data (a2(i),i=1,6)/ 6.919705180051d-8, 1.134026972699d-5, &
7.967092675369d-4, 2.432500578301d-2, &
2.784751844942d-1, 1.0d0/
data (b2(i),i=1,10)/ 2.075911553728d-7, 3.197196691324d-5, &
2.074576609543d-3, 5.250009686722d-2, &
3.171705130118d-1, -1.147237720706d-1, &
6.638430718056d-2, -1.356814647640d-2, &
-3.648576227388d-2, 3.621098757460d-2/
if (x .lt. 2.0d0) then
xx = exp(x)
rn = xx + a1(m1)
do i=m1-1,1,-1
rn = rn*xx + a1(i)
enddo
den = b1(k1+1)
do i=k1,1,-1
den = den*xx + b1(i)
enddo
zfermi2 = xx * rn/den
else
xx = 1.0d0/(x*x)
rn = xx + a2(m2)
do i=m2-1,1,-1
rn = rn*xx + a2(i)
enddo
den = b2(k2+1)
do i=k2,1,-1
den = den*xx + b2(i)
enddo
zfermi2 = x*x*x*rn/den
end if
return
end
double precision function zfermi52(x)
! include 'implno.dek'
!..
!..this routine applies a rational function expansion to get the fermi-dirac
!..integral of order 5/2 evaluated at x. maximum error is 2.47d-13.
!..reference: antia apjs 84,101 1993
!..
!..declare
integer i,m1,k1,m2,k2
double precision x,an,a1(12),b1(12),a2(12),b2(12),rn,den,xx
!..load the coefficients of the expansion
data an,m1,k1,m2,k2 /2.5d0, 6, 7, 10, 9/
data (a1(i),i=1,7)/6.61606300631656d4, 1.20132462801652d5, &
7.67255995316812d4, 2.10427138842443d4, &
2.44325236813275d3, 1.02589947781696d2, &
1.0d0/
data (b1(i),i=1,8)/1.99078071053871d4, 3.79076097261066d4, &
2.60117136841197d4, 7.97584657659364d3, &
1.10886130159658d3, 6.35483623268093d1, &
1.16951072617142d0, 3.31482978240026d-3/
data (a2(i),i=1,11)/8.42667076131315d-12, 2.31618876821567d-9, &
3.54323824923987d-7, 2.77981736000034d-5, &
1.14008027400645d-3, 2.32779790773633d-2, &
2.39564845938301d-1, 1.24415366126179d0, &
3.18831203950106d0, 3.42040216997894d0, &
1.0d0/
data (b2(i),i=1,10)/2.94933476646033d-11, 7.68215783076936d-9, &
1.12919616415947d-6, 8.09451165406274d-5, &
2.81111224925648d-3, 3.99937801931919d-2, &
2.27132567866839d-1, 5.31886045222680d-1, &
3.70866321410385d-1, 2.27326643192516d-2/
if (x .lt. 2.0d0) then
xx = exp(x)
rn = xx + a1(m1)
do i=m1-1,1,-1
rn = rn*xx + a1(i)
enddo
den = b1(k1+1)
do i=k1,1,-1
den = den*xx + b1(i)
enddo
zfermi52 = xx * rn/den
else
xx = 1.0d0/(x*x)
rn = xx + a2(m2)
do i=m2-1,1,-1
rn = rn*xx + a2(i)
enddo
den = b2(k2+1)
do i=k2,1,-1
den = den*xx + b2(i)
enddo
zfermi52 = x*x*x*sqrt(x)*rn/den
end if
return
end
double precision function zfermi3(x)
! include 'implno.dek'
!..
!..this routine applies a rational function expansion to get the fermi-dirac
!..integral of order 3 evaluated at x. maximum error is 1.0e-8.
!..reference: antia priv comm. 11sep94
!..
!..declare
integer i,m1,k1,m2,k2
double precision x,an,a1(12),b1(12),a2(12),b2(12),rn,den,xx
!..load the coefficients of the expansion
data an,m1,k1,m2,k2 /3.0, 4, 6, 7, 7/
data (a1(i),i=1,5)/ 6.317036716422d2, 7.514163924637d2, &
2.711961035750d2, 3.274540902317d1, &
1.0d0/
data (b1(i),i=1,7)/ 1.052839452797d2, 1.318163114785d2, &
5.213807524405d1, 7.500064111991d0, &
3.383020205492d-1, 2.342176749453d-3, &
-8.445226098359d-6/
data (a2(i),i=1,8)/ 1.360999428425d-8, 1.651419468084d-6, &
1.021455604288d-4, 3.041270709839d-3, &
4.584298418374d-2, 3.440523212512d-1, &
1.077505444383d0, 1.0d0/
data (b2(i),i=1,8)/ 5.443997714076d-8, 5.531075760054d-6, &
2.969285281294d-4, 6.052488134435d-3, &
5.041144894964d-2, 1.048282487684d-1, &
1.280969214096d-2, -2.851555446444d-3/
if (x .lt. 2.0d0) then
xx = exp(x)
rn = xx + a1(m1)
do i=m1-1,1,-1
rn = rn*xx + a1(i)
enddo
den = b1(k1+1)
do i=k1,1,-1
den = den*xx + b1(i)
enddo
zfermi3 = xx * rn/den
else
xx = 1.0d0/(x*x)
rn = xx + a2(m2)
do i=m2-1,1,-1
rn = rn*xx + a2(i)
enddo
den = b2(k2+1)
do i=k2,1,-1
den = den*xx + b2(i)
enddo
zfermi3 = x*x*x*x*rn/den
end if
return
end
double precision function ifermim12(f)
! include 'implno.dek'
!..
!..this routine applies a rational function expansion to get the inverse
!..fermi-dirac integral of order -1/2 when it is equal to f.
!..maximum error is 3.03d-9. reference: antia apjs 84,101 1993
!..
!..declare
integer i,m1,k1,m2,k2
double precision f,an,a1(12),b1(12),a2(12),b2(12),rn,den,ff
!..load the coefficients of the expansion
data an,m1,k1,m2,k2 /-0.5d0, 5, 6, 6, 6/
data (a1(i),i=1,6)/-1.570044577033d4, 1.001958278442d4, &
-2.805343454951d3, 4.121170498099d2, &
-3.174780572961d1, 1.0d0/
data (b1(i),i=1,7)/-2.782831558471d4, 2.886114034012d4, &
-1.274243093149d4, 3.063252215963d3, &
-4.225615045074d2, 3.168918168284d1, &
-1.008561571363d0/
data (a2(i),i=1,7)/ 2.206779160034d-8, -1.437701234283d-6, &
6.103116850636d-5, -1.169411057416d-3, &
1.814141021608d-2, -9.588603457639d-2, &
1.0d0/
data (b2(i),i=1,7)/ 8.827116613576d-8, -5.750804196059d-6, &
2.429627688357d-4, -4.601959491394d-3, &
6.932122275919d-2, -3.217372489776d-1, &
3.124344749296d0/
if (f .lt. 4.0d0) then
rn = f + a1(m1)
do i=m1-1,1,-1
rn = rn*f + a1(i)
enddo
den = b1(k1+1)
do i=k1,1,-1
den = den*f + b1(i)
enddo
ifermim12 = log(f * rn/den)
else
ff = 1.0d0/f**(1.0d0/(1.0d0 + an))
rn = ff + a2(m2)
do i=m2-1,1,-1
rn = rn*ff + a2(i)
enddo
den = b2(k2+1)
do i=k2,1,-1
den = den*ff + b2(i)
enddo
ifermim12 = rn/(den*ff)
end if
return
end
double precision function ifermi12(f)
! include 'implno.dek'
!..
!..this routine applies a rational function expansion to get the inverse
!..fermi-dirac integral of order 1/2 when it is equal to f.
!..maximum error is 4.19d-9. reference: antia apjs 84,101 1993
!..
!..declare
integer i,m1,k1,m2,k2
double precision f,an,a1(12),b1(12),a2(12),b2(12),rn,den,ff
!..load the coefficients of the expansion
data an,m1,k1,m2,k2 /0.5d0, 4, 3, 6, 5/
data (a1(i),i=1,5)/ 1.999266880833d4, 5.702479099336d3, &
6.610132843877d2, 3.818838129486d1, &
1.0d0/
data (b1(i),i=1,4)/ 1.771804140488d4, -2.014785161019d3, &
9.130355392717d1, -1.670718177489d0/
data (a2(i),i=1,7)/-1.277060388085d-2, 7.187946804945d-2, &
-4.262314235106d-1, 4.997559426872d-1, &
-1.285579118012d0, -3.930805454272d-1, &
1.0d0/
data (b2(i),i=1,6)/-9.745794806288d-3, 5.485432756838d-2, &
-3.299466243260d-1, 4.077841975923d-1, &
-1.145531476975d0, -6.067091689181d-2/
if (f .lt. 4.0d0) then
rn = f + a1(m1)
do i=m1-1,1,-1
rn = rn*f + a1(i)
enddo
den = b1(k1+1)
do i=k1,1,-1
den = den*f + b1(i)
enddo
ifermi12 = log(f * rn/den)
else
ff = 1.0d0/f**(1.0d0/(1.0d0 + an))
rn = ff + a2(m2)
do i=m2-1,1,-1
rn = rn*ff + a2(i)
enddo
den = b2(k2+1)
do i=k2,1,-1
den = den*ff + b2(i)
enddo
ifermi12 = rn/(den*ff)
end if
return
end
double precision function ifermi32(f)
! include 'implno.dek'
!..
!..this routine applies a rational function expansion to get the inverse
!..fermi-dirac integral of order 3/2 when it is equal to f.
!..maximum error is 2.26d-9. reference: antia apjs 84,101 1993
!..
!..declare
integer i,m1,k1,m2,k2
double precision f,an,a1(12),b1(12),a2(12),b2(12),rn,den,ff
!..load the coefficients of the expansion
data an,m1,k1,m2,k2 /1.5d0, 3, 4, 6, 5/
data (a1(i),i=1,4)/ 1.715627994191d2, 1.125926232897d2, &
2.056296753055d1, 1.0d0/
data (b1(i),i=1,5)/ 2.280653583157d2, 1.193456203021d2, &
1.167743113540d1, -3.226808804038d-1, &
3.519268762788d-3/
data (a2(i),i=1,7)/-6.321828169799d-3, -2.183147266896d-2, &
-1.057562799320d-1, -4.657944387545d-1, &
-5.951932864088d-1, 3.684471177100d-1, &
1.0d0/
data (b2(i),i=1,6)/-4.381942605018d-3, -1.513236504100d-2, &
-7.850001283886d-2, -3.407561772612d-1, &
-5.074812565486d-1, -1.387107009074d-1/
if (f .lt. 4.0d0) then
rn = f + a1(m1)
do i=m1-1,1,-1
rn = rn*f + a1(i)
enddo
den = b1(k1+1)
do i=k1,1,-1
den = den*f + b1(i)
enddo
ifermi32 = log(f * rn/den)
else
ff = 1.0d0/f**(1.0d0/(1.0d0 + an))
rn = ff + a2(m2)
do i=m2-1,1,-1
rn = rn*ff + a2(i)
enddo
den = b2(k2+1)
do i=k2,1,-1
den = den*ff + b2(i)
enddo
ifermi32 = rn/(den*ff)
end if
return
end
double precision function ifermi52(f)
! include 'implno.dek'
!..
!..this routine applies a rational function expansion to get the inverse
!..fermi-dirac integral of order 5/2 when it is equal to f.
!..maximum error is 6.17d-9. reference: antia apjs 84,101 1993
!..
!..declare
integer i,m1,k1,m2,k2
double precision f,an,a1(12),b1(12),a2(12),b2(12),rn,den,ff
!..load the coefficients of the expansion
data an,m1,k1,m2,k2 /2.5d0, 2, 3, 6, 6/
data (a1(i),i=1,3)/ 2.138969250409d2, 3.539903493971d1, &
1.0d0/
data (b1(i),i=1,4)/ 7.108545512710d2, 9.873746988121d1, &
1.067755522895d0, -1.182798726503d-2/
data (a2(i),i=1,7)/-3.312041011227d-2, 1.315763372315d-1, &
-4.820942898296d-1, 5.099038074944d-1, &
5.495613498630d-1, -1.498867562255d0, &
1.0d0/
data (b2(i),i=1,7)/-2.315515517515d-2, 9.198776585252d-2, &
-3.835879295548d-1, 5.415026856351d-1, &
-3.847241692193d-1, 3.739781456585d-2, &
-3.008504449098d-2/
if (f .lt. 4.0d0) then
rn = f + a1(m1)
do i=m1-1,1,-1
rn = rn*f + a1(i)
enddo
den = b1(k1+1)
do i=k1,1,-1
den = den*f + b1(i)
enddo
ifermi52 = log(f * rn/den)
else
ff = 1.0d0/f**(1.0d0/(1.0d0 + an))
rn = ff + a2(m2)
do i=m2-1,1,-1
rn = rn*ff + a2(i)
enddo
den = b2(k2+1)
do i=k2,1,-1
den = den*ff + b2(i)
enddo
ifermi52 = rn/(den*ff)
end if
return
end
end module fermi
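!..
!..illustrative driver (an assumption, not part of the original file);
!..omit it when linking the module into a larger code
program test_fermi
use fermi
implicit none
double precision :: eta
eta = 0.5d0
!..forward fits of order 1/2 and -1/2 at eta
write(*,*) 'F12(eta)  = ', zfermi12(eta)
write(*,*) 'Fm12(eta) = ', zfermim12(eta)
!..the inverse fit recovers eta from F12(eta) up to the fit accuracy
write(*,*) 'eta back  = ', ifermi12(zfermi12(eta))
end program test_fermi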
|
!> @file
!! Include fortran file for metadata addresses of characters and other variables
!! file included in module metadata_interfaces of getadd.f90
!! @author
!! Copyright (C) 2012-2013 BigDFT group
!! This file is distributed under the terms of the
!! GNU General Public License, see ~/COPYING file
!! or http://www.gnu.org/copyleft/gpl.txt .
!! For the list of contributors, see ~/AUTHORS
integer(kind=8) :: la
!local variables
integer(kind=8), external :: f_loc
if (size(array)==0) then
la=int(0,kind=8)
return
end if
|
(*
* Copyright 2014, NICTA
*
* This software may be distributed and modified according to the terms of
* the BSD 2-Clause license. Note that NO WARRANTY is provided.
* See "LICENSE_BSD2.txt" for details.
*
* @TAG(NICTA_BSD)
*)
section "Signed Words"
theory Signed_Words
imports "HOL-Word.Word"
begin
text \<open>Signed words as separate (isomorphic) word length class. Useful for tagging words in C.\<close>
typedef ('a::len0) signed = "UNIV :: 'a set" ..
lemma card_signed [simp]: "CARD (('a::len0) signed) = CARD('a)"
unfolding type_definition.card [OF type_definition_signed]
by simp
instantiation signed :: (len0) len0
begin
definition
len_signed [simp]: "len_of (x::'a::len0 signed itself) = len_of TYPE('a)"
instance ..
end
instance signed :: (len) len
by (intro_classes, simp)
type_synonym 'a sword = "'a signed word"
type_synonym sword8 = "8 sword"
type_synonym sword16 = "16 sword"
type_synonym sword32 = "32 sword"
type_synonym sword64 = "64 sword"
end
|
State Before: α : Type u
β : Type v
σ τ : Perm α
hστ : Disjoint σ τ
n : ℕ
⊢ (σ * τ) ^ n = 1 ↔ σ ^ n = 1 ∧ τ ^ n = 1 State After: no goals Tactic: rw [hστ.commute.mul_pow, Disjoint.mul_eq_one_iff (hστ.pow_disjoint_pow n n)]
|
module PolygonOps
export inpolygon, HormannAgathos, HaoSun
include("validity_checks.jl")
include("inpolygon.jl")
include("area.jl")
end # module
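# A minimal usage sketch (an assumption, not part of the original module;
# check the package docs for `inpolygon`'s exact return convention):
#
#   using PolygonOps
#   square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (0.0, 0.0)]  # closed ring
#   inpolygon((0.5, 0.5), square)   # query a point inside the polygon
#   inpolygon((2.0, 2.0), square)   # query a point outside the polygon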
|
#pragma once
// Eigen includes
#include <Eigen/Core>
#include <Eigen/LU>
// STL includes
#include <vector>
#include <random>
#ifdef USE_THREADS
#include <thread>
#endif
// Local include
#include "DirectionsSampling.hpp"
/* This header provides multiple ways to project a spherical function to a
 * Spherical Harmonics vector.
 *
 *    + 'ProjectToSH' returns the SH coefficient vector when the Functor to be
 *      projected is bandlimited to the max order of SH. Warning: do not use
 *      with highly varying functions! This projection will probably behave
 *      very badly. In such cases, it is advised to use the 'ProjectToShMC'
 *      method.
 *
 *    + 'ProjectToShMC' returns the SH coefficient vector by integrating
 *      <f·Ylm> using Monte Carlo integration. It is possible to choose the
 *      type of random or quasi-random sequence for the integration.
 *
 *    + 'TripleTensorProduct' returns the premultiplied triple tensor product:
 *      the integral of <Ylm x Ylm x Ylm> multiplied by the SH coefficients
 *      'clm' of 'f'. The integral is computed using Monte Carlo, as
 *      'ProjectToShMC' does.
*
* TODO: Template the direction sequence.
*/
#define USE_FIBONACCI_SEQ
//#define USE_BLUENOISE_SEQ
/* From a set of `basis` vector directions, and a spherical function `Functor`,
* generate the Spherical Harmonics projection of the `Functor`. This algorithm
* works as follows:
*
* 1. Evaluate the matrix Ylm(w_i) for each SH index and each direction
* 2. Evaluate the vector of the functor [f(w_0), ..., f(w_n)]
* 3. Return the product Ylm(w_i)^{-1} [f(w_0), ..., f(w_n)]
*
 * Requirement:  `basis` must have the dimension of the output SH vector.
 *               `f` must be real valued. Higher-dimensional functors are not
 *               handled yet.
*/
template<class Functor, class Vector, class SH>
inline Eigen::VectorXf ProjectToSH(const Functor& f,
const std::vector<Vector>& basis) {
// Get the number of elements to compute
const int dsize = basis.size();
const int order = sqrt(dsize)-1;
Eigen::MatrixXf Ylm(dsize, dsize);
Eigen::VectorXf flm(dsize);
for(unsigned int i=0; i<basis.size(); ++i) {
const Vector& w = basis[i];
auto ylm = SH::FastBasis(w, order);
Ylm.row(i) = ylm;
flm[i] = f(w);
}
return Ylm.inverse() * flm;
}
/* Project a spherical function `f` onto Spherical Harmonics coefficients up
* to order `order` using Monte-Carlo integration.
*/
template<class Functor, class Vector, class SH>
inline Eigen::VectorXf ProjectToShMC(const Functor& f, int order, int M=100000) {
   // Accumulate the Monte Carlo estimate; Eigen vectors are not
   // zero-initialized on construction, so start from an explicit zero.
   Eigen::VectorXf shCoeffs = Eigen::VectorXf::Zero((order+1)*(order+1));
#if defined(USE_FIBONACCI_SEQ)
const std::vector<Vector> directions = SamplingFibonacci<Vector>(M);
#elif defined(USE_BLUENOISE_SEQ)
const std::vector<Vector> directions = SamplingBlueNoise<Vector>(M);
#else
const std::vector<Vector> directions = SamplingRandom<Vector>(M);
#endif
for(auto& w : directions) {
// Evaluate the function and the basis vector
shCoeffs += f(w) * SH::FastBasis(w, order);
}
shCoeffs *= 4.0*M_PI / float(M);
return shCoeffs;
}
/* Compute the triple tensor product \int Ylm * Ylm * Ylm
*
* ## Arguments:
*
 *    + 'ylm' holds the input spherical function's SH coefficients. They are
 *      prefactored into the triple product tensor to build the matrix.
 *
 *    + 'truncated' selects between exporting the truncated matrix (up to order
 *      'n', where 'n' is the order of the input SH coeffs 'ylm') and the full
 *      matrix of order '2n-1'.
*/
template<class SH, class Vector>
inline Eigen::MatrixXf TripleTensorProduct(const Eigen::VectorXf& ylm,
bool truncated=true,
int nDirections=100000) {
// Compute the max order
const int vsize = ylm.size();
const int order = (truncated) ? sqrt(vsize)-1 : 2*sqrt(vsize)-2;
const int msize = (truncated) ? vsize : SH::Terms(order);
Eigen::MatrixXf res = Eigen::MatrixXf::Zero(msize, msize);
Eigen::VectorXf clm(msize);
// Take a uniformly distributed point sequence and integrate the triple tensor
// for each SH band
#if defined(USE_FIBONACCI_SEQ)
const std::vector<Vector> directions = SamplingFibonacci<Vector>(nDirections);
#elif defined(USE_BLUENOISE_SEQ)
const std::vector<Vector> directions = SamplingBlueNoise<Vector>(nDirections);
#else
const std::vector<Vector> directions = SamplingRandom<Vector>(nDirections);
#endif
for(auto& w : directions) {
SH::FastBasis(w, order, clm);
res += clm.segment(0, vsize).dot(ylm) * clm * clm.transpose();
}
res *= 4.0f * M_PI / float(nDirections);
return res;
}
/* Compute the triple tensor product \int Ylm * Ylm * Ylm for a set of
 * functions projected on Spherical Harmonics. This method is especially
 * useful for constructing products of (R,G,B) components where each
 * component is stored in a separate vector.
*
* ## Arguments:
*
 *    + 'ylm' holds the input spherical function's SH coefficients. They are
 *      prefactored into the triple product tensor to build the matrix.
 *
 *    + 'truncated' selects between exporting the truncated matrix (up to order
 *      'n', where 'n' is the order of the input SH coeffs 'ylm') and the full
 *      matrix of order '2n-1'.
*/
template<class SH, class Vector>
inline std::vector<Eigen::MatrixXf> TripleTensorProduct(
const std::vector<Eigen::VectorXf>& ylms,
bool truncated=true,
int nDirections=100000) {
#ifdef USE_THREADS
struct TTPThread : public std::thread {
std::vector<Eigen::MatrixXf> res;
TTPThread(const std::vector<Vector>& dirs, unsigned int start, unsigned int end,
const std::vector<Eigen::VectorXf>& ylms, int order) :
std::thread(&TTPThread::run, this, dirs, start, end, ylms, order) {}
void run(const std::vector<Vector>& dirs, unsigned int start, unsigned int end,
const std::vector<Eigen::VectorXf>& ylms, int order) {
const int vsize = ylms[0].size();
const float fact = 4.0f * M_PI / float(dirs.size());
const int msize = SH::Terms(order);
Eigen::MatrixXf mat(msize, msize);
Eigen::VectorXf clm(msize);
res.reserve(3);
res.push_back(Eigen::MatrixXf::Zero(msize, msize));
res.push_back(Eigen::MatrixXf::Zero(msize, msize));
res.push_back(Eigen::MatrixXf::Zero(msize, msize));
for(unsigned int k=start; k<end; ++k) {
// Get the vector
const Vector& w = dirs[k];
// Construct the matrix
SH::FastBasis(w, order, clm);
mat = clm * clm.transpose();
// For each SH vector apply the weight to the matrix and sum it
for(unsigned int i=0; i<ylms.size(); ++i) {
res[i] += fact * clm.segment(0, vsize).dot(ylms[i]) * mat;
}
}
}
};
#endif
// Compute the max order
const int vsize = ylms[0].size();
const int order = (truncated) ? sqrt(vsize)-1 : 2*sqrt(vsize)-2;
const int msize = (truncated) ? vsize : SH::Terms(order);
std::vector<Eigen::MatrixXf> res(ylms.size(), Eigen::MatrixXf::Zero(msize, msize));
// Take a uniformly distributed point sequence and integrate the triple tensor
// for each SH band
#if defined(USE_FIBONACCI_SEQ)
const std::vector<Vector> directions = SamplingFibonacci<Vector>(nDirections);
#elif defined(USE_BLUENOISE_SEQ)
const std::vector<Vector> directions = SamplingBlueNoise<Vector>(nDirections);
#else
const std::vector<Vector> directions = SamplingRandom<Vector>(nDirections);
#endif
#ifdef USE_THREADS
std::vector<TTPThread*> threads;
const unsigned int nthreads = std::thread::hardware_concurrency();
for(unsigned int i=0; i<nthreads; ++i) {
const unsigned int block = nDirections / nthreads;
const unsigned int start = i * block;
const unsigned int end = (i<nthreads-1) ? (i+1)*block : nDirections;
threads.push_back(new TTPThread(directions, start, end, ylms, order));
}
for(TTPThread* thread : threads) {
thread->join();
for(unsigned int i=0; i<3; ++i) {
res[i] += thread->res[i];
}
}
#else
Eigen::VectorXf clm(msize);
const float fact = 4.0f * M_PI / float(nDirections);
for(auto& w : directions) {
// Construct the matrix
SH::FastBasis(w, order, clm);
const auto matrix = clm * clm.transpose();
// For each SH vector apply the weight to the matrix and sum it
for(unsigned int i=0; i<ylms.size(); ++i) {
res[i] += fact * clm.segment(0, vsize).dot(ylms[i]) * matrix ;
}
}
#endif
return res;
}
template<class SH, class Vector>
Eigen::MatrixXf TripleTensorProductCos(int order,
int nDirections=100000) {
// Compute the max order
const int msize = SH::Terms(order);
Eigen::MatrixXf res = Eigen::MatrixXf::Zero(msize, msize);
// Take a uniformly distributed point sequence and integrate the triple tensor
// for each SH band
#if defined(USE_FIBONACCI_SEQ)
const std::vector<Vector> directions = SamplingFibonacci<Vector>(nDirections);
#elif defined(USE_BLUENOISE_SEQ)
const std::vector<Vector> directions = SamplingBlueNoise<Vector>(nDirections);
#else
const std::vector<Vector> directions = SamplingRandom<Vector>(nDirections);
#endif
const float fact = 4.0f * M_PI / float(nDirections);
for(auto& w : directions) {
// Construct the matrix
const Eigen::VectorXf clm = SH::FastBasis(w, order);
const auto matrix = clm * clm.transpose();
// For each SH vector apply the weight to the matrix and sum it
if(w[2] > 0.0) {
res += fact * w[2] * matrix;
}
}
return res;
}
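/* A minimal usage sketch (not part of the original header). `MySH` is an
 * assumption: any SH class exposing `FastBasis(w, order)` and `Terms(order)`,
 * as used above, should work.
 *
 *    const int order = 2;
 *    const auto f = [](const Eigen::Vector3f& w) {
 *       return std::max(w.z(), 0.0f);      // clamped-cosine lobe
 *    };
 *    const Eigen::VectorXf clm =
 *       ProjectToShMC<decltype(f), Eigen::Vector3f, MySH>(f, order);
 *    const Eigen::MatrixXf M =
 *       TripleTensorProduct<MySH, Eigen::Vector3f>(clm);
 */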
|
-- Andreas, 2016-07-25, issue #2108
-- test case and report by Jesper
{-# OPTIONS --allow-unsolved-metas #-}
-- {-# OPTIONS -v tc.pos.occ:70 #-}
open import Agda.Primitive
open import Agda.Builtin.Equality
lone = lsuc lzero
record Level-zero-or-one : Set where
field
level : Level
is-lower : (level ⊔ lone) ≡ lone
open Level-zero-or-one public
Coerce : ∀ {a} → a ≡ lone → Set₁
Coerce refl = Set
data Test : Set₁ where
test : Coerce (is-lower _) → Test
-- WAS:
-- Meta variable here triggers internal error.
-- Should succeed with unsolved metas.
|
lemma emeasure_lborel_countable: fixes A :: "'a::euclidean_space set" assumes "countable A" shows "emeasure lborel A = 0"
|
%Seccion "Resumen"
%
\chapter*{Summary}
\addcontentsline{toc}{chapter}{Summary}
\vspace{-1cm}
Class Notes for the Course Introduction to the Finite Element Methods
\textbf{Keywords: } Scientific Computing, Computational Mechanics, Finite Element Methods, Numerical Methods.
|
Require Import VerdiRaft.Raft.
Require Import VerdiRaft.RaftRefinementInterface.
Section AppendEntriesRequestsCameFromLeaders.
Context {orig_base_params : BaseParams}.
Context {one_node_params : OneNodeParams orig_base_params}.
Context {raft_params : RaftParams orig_base_params}.
Definition append_entries_came_from_leaders (net : network) : Prop :=
forall p t n pli plt es ci,
In p (nwPackets net) ->
pBody p = AppendEntries t n pli plt es ci ->
exists ll,
In (t, ll) (leaderLogs (fst (nwState net (pSrc p)))).
Class append_entries_came_from_leaders_interface : Prop :=
{
append_entries_came_from_leaders_invariant :
forall net,
refined_raft_intermediate_reachable net ->
append_entries_came_from_leaders net
}.
End AppendEntriesRequestsCameFromLeaders.
|
{-# OPTIONS --without-K #-}
open import HoTT
open import cohomology.Exactness
open import cohomology.Choice
open import cohomology.FunctionOver
module cohomology.Theory where
-- [i] for the universe level of the group
record CohomologyTheory i : Type (lsucc i) where
field
C : ℤ → Ptd i → Group i
CEl : ℤ → Ptd i → Type i
CEl n X = Group.El (C n X)
Cid : (n : ℤ) (X : Ptd i) → CEl n X
Cid n X = GroupStructure.ident (Group.group-struct (C n X))
⊙CEl : ℤ → Ptd i → Ptd i
⊙CEl n X = ⊙[ CEl n X , Cid n X ]
field
CF-hom : (n : ℤ) {X Y : Ptd i} → fst (X ⊙→ Y) → (C n Y →ᴳ C n X)
CF-ident : (n : ℤ) {X : Ptd i}
→ CF-hom n {X} {X} (⊙idf X) == idhom (C n X)
CF-comp : (n : ℤ) {X Y Z : Ptd i} (g : fst (Y ⊙→ Z)) (f : fst (X ⊙→ Y))
→ CF-hom n (g ⊙∘ f) == CF-hom n f ∘ᴳ CF-hom n g
CF : (n : ℤ) {X Y : Ptd i} → fst (X ⊙→ Y) → CEl n Y → CEl n X
CF n f = GroupHom.f (CF-hom n f)
field
C-abelian : (n : ℤ) (X : Ptd i) → is-abelian (C n X)
C-Susp : (n : ℤ) (X : Ptd i) → C (succ n) (⊙Susp X) ≃ᴳ C n X
C-SuspF : (n : ℤ) {X Y : Ptd i} (f : fst (X ⊙→ Y))
→ fst (C-Susp n X) ∘ᴳ CF-hom (succ n) (⊙susp-fmap f)
== CF-hom n f ∘ᴳ fst (C-Susp n Y)
C-exact : (n : ℤ) {X Y : Ptd i} (f : fst (X ⊙→ Y))
→ is-exact (CF-hom n (⊙cfcod' f)) (CF-hom n f)
C-additive : (n : ℤ) {I : Type i} (Z : I → Ptd i)
→ ((W : I → Type i) → has-choice 0 I W)
→ is-equiv (GroupHom.f (Πᴳ-hom-in (CF-hom n ∘ ⊙bwin {X = Z})))
{- Alternate form of suspension axiom naturality -}
C-Susp-↓ : (n : ℤ) {X Y : Ptd i} (f : fst (X ⊙→ Y))
→ CF-hom (succ n) (⊙susp-fmap f) == CF-hom n f
[ uncurry _→ᴳ_ ↓ pair×= (group-ua (C-Susp n Y)) (group-ua (C-Susp n X)) ]
C-Susp-↓ n f =
hom-over-isos $ function-over-equivs _ _ $ ap GroupHom.f (C-SuspF n f)
record OrdinaryTheory i : Type (lsucc i) where
constructor ordinary-theory
field
cohomology-theory : CohomologyTheory i
open CohomologyTheory cohomology-theory public
field
C-dimension : (n : ℤ) → n ≠ 0 → C n (⊙Lift ⊙S⁰) == 0ᴳ
|
\name{AnnotationFunction-class}
\docType{class}
\alias{AnnotationFunction-class}
\title{
The AnnotationFunction Class
}
\description{
The AnnotationFunction Class
}
\details{
The heatmap annotation is basically a set of graphics aligned to the heatmap columns or rows.
There is no restriction on the graphic types, e.g., it can be a heatmap-like
annotation or points. The AnnotationFunction class is designed for
creating complex and flexible annotation graphics. As the main part of the class, it uses
a user-defined function to define the graphics. It also keeps information about
the size of the plotting regions of the annotation. Most importantly, it
allows subsetting of the annotation to draw only a subset of the graphics, which
is the basis for splitting annotations.
See \code{\link{AnnotationFunction}} constructor for details.
}
\examples{
# There is no runnable example on this page; a commented sketch follows.
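# A commented sketch (hypothetical values; see the AnnotationFunction()
# constructor page for the authoritative interface):
# fun = function(index, k, n) {
#     pushViewport(viewport(xscale = c(0.5, length(index) + 0.5)))
#     grid.points(seq_along(index), runif(length(index)))
#     popViewport()
# }
# anno = AnnotationFunction(fun = fun, which = "column")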
NULL
}
|
function RtOut = inverseRt(RtIn)
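%INVERSERT Inverse of a rigid-body transform given as a 3x4 matrix [R t].
%   Since R is orthonormal, the inverse transform is [R', -R'*t].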
RtOut = [RtIn(1:3,1:3)', - RtIn(1:3,1:3)'* RtIn(1:3,4)];
|
Require Import Framework FSParameters.
Require Export AuthenticationLayer ListLayer CachedDiskLayer ATCLayer.
Require TransactionCacheLayer.
Require LoggedDiskRefinement.
Require FileDiskNoninterference.
Import ListNotations.
Set Implicit Arguments.
Definition TCDCore := HorizontalComposition (ListOperation (addr * value)) CachedDiskOperation.
Definition TCDLang := Build_Layer TCDCore.
Import TransactionCacheLayer.
Import LoggedDiskRefinement.
Import AbstractOracles.
Definition TCD_valid_state := fun s => FileDiskNoninterference.CD_valid_state (fst s) (snd s).
(* Definition TCD_related_states u exc := fun s1 s2 : TCDLang.(state) => FileDiskNoninterference.CD_related_states u exc (fst s1) (snd s1) (snd s2).
*)
Definition TCD_reboot_f selector : TCDLang.(state) -> TCDLang.(state) :=
fun s: TCDLang.(state) => ([], (empty_mem,
(fst (snd (snd s)), select_total_mem selector (snd (snd (snd s)))))).
Definition TCD_reboot_list l_selector := map (fun f => fun s: TCDLang.(state) => ([]: list (addr * value), f (snd s))) (cached_disk_reboot_list l_selector).
Definition TCD_CoreRefinement := HC_Core_Refinement TCDLang TransactionCacheLang LoggedDiskCoreRefinement.
Definition TCD_Refinement := HC_Refinement TCDLang TransactionCacheLang LoggedDiskCoreRefinement.
(* ATCD *)
Definition ATCDCore := HorizontalComposition AuthenticationOperation TCDCore.
Definition ATCDLang := Build_Layer ATCDCore.
(*
Definition ATCD_valid_state := fun s => TCD_valid_state (fst s) (snd s).
Definition ATCD_related_states u exc := fun s1 s2 : ATCDLang.(state) => FileDiskNoninterference.CD_related_states u exc (fst s1) (snd s1) (snd s2).
*)
Definition ATCD_reboot_f selector := fun s: ATCDLang.(state) => (fst s, TCD_reboot_f selector (snd s)).
Definition ATCD_reboot_list l_selector := map (fun f => fun s: ATCDLang.(state) => (fst s, f (snd s))) (TCD_reboot_list l_selector).
Definition ATCD_refines_reboot selector s_imp s_abs :=
refines_reboot (snd (snd (ATCD_reboot_f selector s_imp))) (snd (snd (ATC_reboot_f s_abs))) /\
fst (ATCD_reboot_f selector s_imp) = fst (ATC_reboot_f s_abs) /\
fst (snd (ATCD_reboot_f selector s_imp)) = fst (snd (ATC_reboot_f s_abs)).
Definition ATCD_CoreRefinement := HC_Core_Refinement ATCDLang ATCLang TCD_CoreRefinement.
Definition ATCD_Refinement := HC_Refinement ATCDLang ATCLang TCD_CoreRefinement.
|
import numpy as np
from visual_dynamics.policies import Policy
from visual_dynamics.utils import transformations as tf
class PositionBasedServoingPolicy(Policy):
def __init__(self, lambda_, target_to_obj_T, straight_trajectory=True):
"""
target_to_obj_T is in inertial reference frame
"""
self.lambda_ = lambda_
self.target_to_obj_T = np.asarray(target_to_obj_T)
self.straight_trajectory = straight_trajectory
def act(self, obs):
curr_to_obj_pos, curr_to_obj_rot = obs[:2]
curr_to_obj_T = tf.quaternion_matrix(np.r_[curr_to_obj_rot[3], curr_to_obj_rot[:3]])
curr_to_obj_T[:3, 3] = curr_to_obj_pos
target_to_curr_T = self.target_to_obj_T.dot(tf.inverse_matrix(curr_to_obj_T))
target_to_curr_aa = tf.axis_angle_from_matrix(target_to_curr_T)
# project rotation so that it rotates around the up vector
up = np.array([0, 0, 1])
target_to_curr_aa = target_to_curr_aa.dot(up) * up
target_to_curr_T[:3, :3] = tf.matrix_from_axis_angle(target_to_curr_aa)[:3, :3]
if self.straight_trajectory:
linear_vel = -self.lambda_ * target_to_curr_T[:3, :3].T.dot(target_to_curr_T[:3, 3])
angular_vel = -self.lambda_ * target_to_curr_aa.dot(up)
else:
linear_vel = -self.lambda_ * ((self.target_to_obj_T[:3, 3] - curr_to_obj_pos) + np.cross(curr_to_obj_pos, target_to_curr_aa))
angular_vel = -self.lambda_ * target_to_curr_aa.dot(up)
action = np.r_[linear_vel, angular_vel]
return action
def reset(self):
return None
def _get_config(self):
config = super(PositionBasedServoingPolicy, self)._get_config()
config.update({'lambda_': self.lambda_,
'target_to_obj_T': self.target_to_obj_T.tolist(),
'straight_trajectory': self.straight_trajectory})
return config
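# A minimal usage sketch (an assumption, not part of the original module):
# `obs` packs the current object pose relative to the camera as
# (position, quaternion in (x, y, z, w) order), matching act() above.
if __name__ == '__main__':
    policy = PositionBasedServoingPolicy(lambda_=1.0, target_to_obj_T=np.eye(4))
    obs = [np.array([0.0, 0.0, 1.0]),        # curr_to_obj_pos
           np.array([0.0, 0.0, 0.0, 1.0])]   # curr_to_obj_rot
    print(policy.act(obs))                   # 6-vector [linear_vel, angular_vel]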
|
using MolecularPotentials
using StaticArrays  # added import: SMatrix is defined in StaticArrays

mutable struct Water{T} <: AbstractMolecule
    coords::SMatrix{3, 3, T, 9}  # 3x3 coordinate block; SMatrix needs the length parameter (9)
end
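# A minimal construction sketch (coordinates illustrative, in angstroms;
# the one-column-per-atom O,H,H layout is an assumption):
h2o = Water(@SMatrix [0.0  0.7572  -0.7572;
                      0.0  0.5865   0.5865;
                      0.0  0.0      0.0])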
|
“The Beatniks” comic strip series is a story about the human experience through the eyes of a group of middle school kids trying to find themselves as they carelessly venture through life. Opening themselves up to new experiences, they look to the world for answers, only to find out that they’re more lost than ever. They stumble through the awkwardness of young love, experience the power of true friendships, fearlessly announce their uncensored form of expression, lose themselves within their own imagination, get scolded for their misunderstood fun, and much more. This comic has continued to expand into other projects as an indirect connection to an exhibition I am working on called “Unwanted Sympathy.” Read a few comics here; if you want to read more, click the button below.
|
(************************************************************************)
(* v * The Coq Proof Assistant / The Coq Development Team *)
(* <O___,, * INRIA - CNRS - LIX - LRI - PPS - Copyright 1999-2011 *)
(* \VV/ **************************************************************)
(* // * This file is distributed under the terms of the *)
(* * GNU Lesser General Public License Version 2.1 *)
(************************************************************************)
(*i $Id: NProperties.v 14641 2011-11-06 11:59:10Z herbelin $ i*)
Require Export NAxioms NSub.
(** This functor summarizes all known facts about N.
For the moment it is only an alias to [NSubPropFunct], which
subsumes all others.
*)
Module Type NPropSig := NSubPropFunct.
Module NPropFunct (N:NAxiomsSig) <: NPropSig N.
Include NPropSig N.
End NPropFunct.
|
function M = nfchoa_order(nls,conf)
%NFCHOA_ORDER maximum order of spatial band-limited NFC-HOA
%
% Usage: M = nfchoa_order(nls,conf)
%
%   Input parameters:
%       nls     - number of secondary sources
%       conf    - configuration struct (see SFS_config)
%
%   Output parameters:
%       M       - spherical harmonics order
%
% NFCHOA_ORDER(nls,conf) returns the maximum order of spherical harmonics for
% the given number of secondary sources in order to avoid spectral repetitions
%   (spatial aliasing) of the driving signals. The order is
%
% / nls/2 - 1, even nls
% M = <
% \ (nls-1)/2 odd nls
%
% for a circular array and
% _____
% M = \|nls/2
%
% for a spherical array.
%
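%   Example (assuming the default 2.5D dimension set by SFS_config):
%       conf = SFS_config;
%       M = nfchoa_order(64,conf);  % even circular array: M = 64/2-1 = 31
%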
% See also: driving_function_imp_nfchoa, driving_function_mono_nfchoa
%
% References:
% Ahrens (2012) - "Analytic Methods of Sound Field Synthesis", Springer,
% ISBN 978-3-642-25743-8
%*****************************************************************************
% The MIT License (MIT) *
% *
% Copyright (c) 2010-2019 SFS Toolbox Developers *
% *
% Permission is hereby granted, free of charge, to any person obtaining a *
% copy of this software and associated documentation files (the "Software"), *
% to deal in the Software without restriction, including without limitation *
% the rights to use, copy, modify, merge, publish, distribute, sublicense, *
% and/or sell copies of the Software, and to permit persons to whom the *
% Software is furnished to do so, subject to the following conditions: *
% *
% The above copyright notice and this permission notice shall be included in *
% all copies or substantial portions of the Software. *
% *
% THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR *
% IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, *
% FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL *
% THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER *
% LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING *
% FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER *
% DEALINGS IN THE SOFTWARE. *
% *
% The SFS Toolbox allows to simulate and investigate sound field synthesis *
% methods like wave field synthesis or higher order ambisonics. *
% *
% https://sfs.readthedocs.io [email protected] *
%*****************************************************************************
%% ===== Checking input parameters =======================================
nargmin = 2;
nargmax = 2;
narginchk(nargmin,nargmax);
isargpositivescalar(nls);
isargstruct(conf);
%% ===== Configuration ===================================================
if conf.nfchoa.order
M = conf.nfchoa.order;
return;
end
dimension = conf.dimension;
%% ===== Computation =====================================================
% Get maximum order of spherical harmonics to avoid spatial aliasing
if strcmp('2D',dimension) || strcmp('2.5D',dimension)
% Ahrens (2012), p. 132
if isodd(nls)
M = (nls-1)/2;
else
M = nls/2 - 1;
end
elseif strcmp('3D',dimension)
% Ahrens (2012), p. 125
M = floor(sqrt(nls/2));
end
|
import streamlit as st
from streamlit_drawable_canvas import st_canvas
import pandas as pd
from helpers import Homography, VoronoiPitch, Play, PitchImage, PitchDraw, get_table_download_link
from pitch import FootballPitch
from narya.narya.tracker.full_tracker import FootballTracker
import cv2
import numpy as np
from mplsoccer.pitch import Pitch
import matplotlib.pyplot as plt
plt.style.use('dark_background')
import keras.backend.tensorflow_backend as tb
tb._SYMBOLIC_SCOPE.value = True
st.set_option('deprecation.showfileUploaderEncoding', False)
image = None
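# The tracker loads heavyweight pretrained models, so it is cached across
# Streamlit reruns; allow_output_mutation avoids re-hashing the mutable model.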
@st.cache(allow_output_mutation=True)
def create_tracker():
tracker = FootballTracker(pretrained=True,
frame_rate=23,
track_buffer = 60,
ctx=None)
return tracker
@st.cache(allow_output_mutation=True)
def run_tracker(tracker, img_list):
trajectories = tracker(img_list,
split_size = 512,
save_tracking_folder = 'narya_output/',
template = template,
skip_homo = [])
return trajectories
#tracker = create_tracker()
template = cv2.imread('narya/world_cup_template.png')
template = cv2.resize(template, (512,512))/255.
image_selection = st.selectbox('Choose image:', ['', 'Example Image', 'My own image'],
format_func=lambda x: 'Choose image' if x == '' else x)
if image_selection:
if image_selection == 'Example Image':
image = cv2.imread("atm_che_23022021_62_07_2.jpg")
else:
st.title('Upload Image or Video')
uploaded_file = st.file_uploader("Select Image file to open:", type=["png", "jpg", "mp4"])
pitch = FootballPitch()
if uploaded_file:
if uploaded_file.type == 'video/mp4':
play = Play(uploaded_file)
t = st.slider('You have uploaded a video. Choose the frame you want to process:', 0.0,60.0)
image = play.get_frame(t)
#image = cv2.imread("atm_che_23022021_62_07_2.jpg")
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
#image = PitchImage(pitch, image=play.get_frame(t))
else:
file_bytes = np.asarray(bytearray(uploaded_file.read()),dtype=np.uint8)
image = cv2.imdecode(file_bytes, 1)
#image = cv2.imread("atm_che_23022021_62_07_2.jpg")
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
if image is not None:
img_list = [image]
tracker = create_tracker()
trajectories = run_tracker(tracker, img_list)  # reuse the cached tracking wrapper
x_coords = [val[0][0] for val in trajectories.values()]
y_coords = [val[0][1] for val in trajectories.values()]
x_coords = [x/320*120 for x in x_coords]
y_coords = [y/320*80 for y in y_coords]
pitch = Pitch(view='full', figsize=(6.8, 10.5), orientation='horizontal')
fig, ax = pitch.draw()
pitch.scatter(x_coords, y_coords, ax=ax, c='#c34c45', s=150)
st.title('Tracking Results')
st.write('From left to right: the original image, overlaid bounding boxes + homography, and a schematic representation',
expanded=True)
col1, col2, col3 = st.beta_columns(3)
with col1:
st.image(image, use_column_width= 'always')
with col2:
st.image("narya_output/test_00000.jpg", use_column_width= 'always')
with col3:
st.pyplot(fig)
review = st.selectbox('Do the results look good?:', ['', 'Yes and export', 'No and manually fix'],
format_func=lambda x: 'Do the results look good?' if x == '' else x)
if review:
if review == 'Yes and export':
df = pd.DataFrame({'x': x_coords, 'y': y_coords})
st.markdown(get_table_download_link(df[['x', 'y']]), unsafe_allow_html=True)
else:
st.write("I hope to soon add the functionality to annotate players and homography manually")
|
suppressPackageStartupMessages(library(float))
x = matrix(1:30, 10)
y = matrix(1:10, 10, 2)
z = matrix(1:5, 5, 3)
xs = fl(x)
ys = fl(y)
zs = fl(z)
cpxy = crossprod(x, y)
tcpxz = tcrossprod(x, z)
test = dbl(crossprod(xs))
stopifnot(all.equal(test, crossprod(x)))
test = dbl(crossprod(xs, ys))
stopifnot(all.equal(test, cpxy))
test = crossprod(x, ys)
stopifnot(all.equal(test, cpxy))
test = crossprod(xs, y)
stopifnot(all.equal(test, cpxy))
test = dbl(tcrossprod(xs))
stopifnot(all.equal(test, tcrossprod(x)))
test = dbl(tcrossprod(xs, zs))
stopifnot(all.equal(test, tcpxz))
test = tcrossprod(x, zs)
stopifnot(all.equal(test, tcpxz))
test = tcrossprod(xs, z)
stopifnot(all.equal(test, tcpxz))
|
The Bulls returned to the Delta Center for Game 6 on June 14, 1998, leading the series 3–2. Jordan executed a series of plays, considered to be one of the greatest clutch performances in NBA Finals history. With the Bulls trailing 86–83 with 41.9 seconds remaining in the fourth quarter, Phil Jackson called a timeout. When play resumed, Jordan received the inbound pass, drove to the basket, and hit a shot over several Jazz defenders, cutting the Utah lead to 86–85. The Jazz brought the ball upcourt and passed the ball to forward Karl Malone, who was set up in the low post and was being guarded by Rodman. Malone jostled with Rodman and caught the pass, but Jordan cut behind him and took the ball out of his hands for a steal. Jordan then dribbled down the court and paused, eyeing his defender, Jazz guard Bryon Russell. With 10 seconds remaining, Jordan started to dribble right, then crossed over to his left, possibly pushing off Russell, although the officials did not call a foul. With 5.2 seconds left, Jordan gave Chicago an 87–86 lead with a game-winning jumper, the climactic shot of his Bulls career. Afterwards, John Stockton missed a game-winning three-pointer. Jordan and the Bulls won their sixth NBA championship and second three-peat. Once again, Jordan was voted the Finals MVP, having led all scorers averaging 33.5 points per game, including 45 in the deciding Game 6. Jordan's six Finals MVPs is a record; Shaquille O'Neal, Magic Johnson, LeBron James and Tim Duncan are tied for second place with three apiece. The 1998 Finals holds the highest television rating of any Finals series in history. Game 6 also holds the highest television rating of any game in NBA history.
|
%% Fibres
%
%%
%
% The set of all rotations that rotate a certain vector u onto a certain
% vector v defines a fibre in the rotation space. A discretisation of such a
% fibre is defined by
u = xvector;
v = yvector;
rot = rotation(fibre(u,v))
|
theory Typing
imports Set_Calculus
begin
section \<open>Typing and Branch Expansion\<close>
text \<open>
In this theory, we prove that the branch expansion rules
preserve well-typedness.
\<close>
no_notation Set.member ("(_/ : _)" [51, 51] 50)
lemma types_term_unique: "v \<turnstile> t : l1 \<Longrightarrow> v \<turnstile> t : l2 \<Longrightarrow> l2 = l1"
apply(induction arbitrary: l2 rule: types_pset_term.induct)
apply (metis types_pset_term_cases)+
done
lemma type_of_term_if_types_term:
"v \<turnstile> t : l \<Longrightarrow> type_of_term v t = l"
using types_term_unique unfolding type_of_term_def by blast
lemma types_term_if_mem_subterms_term:
assumes "s \<in> subterms t"
assumes "v \<turnstile> t : lt"
shows "\<exists>ls. v \<turnstile> s : ls"
using assms
by (induction t arbitrary: s lt) (auto elim: types_pset_term_cases)
lemma is_Var_if_types_term_0: "v \<turnstile> t : 0 \<Longrightarrow> is_Var t"
by (induction t) (auto elim: types_pset_term_cases)
lemma is_Var_if_urelem': "urelem' v \<phi> t \<Longrightarrow> is_Var t"
using is_Var_if_types_term_0 by blast
lemma is_Var_if_urelem: "urelem \<phi> t \<Longrightarrow> is_Var t"
unfolding urelem_def using is_Var_if_urelem' by blast
lemma types_fmD:
"v \<turnstile> And p q \<Longrightarrow> v \<turnstile> p"
"v \<turnstile> And p q \<Longrightarrow> v \<turnstile> q"
"v \<turnstile> Or p q \<Longrightarrow> v \<turnstile> p"
"v \<turnstile> Or p q \<Longrightarrow> v \<turnstile> q"
"v \<turnstile> Neg p \<Longrightarrow> v \<turnstile> p"
"v \<turnstile> Atom a \<Longrightarrow> v \<turnstile> a"
unfolding types_pset_fm_def using fm.set_intros by auto
lemma types_fmI:
"v \<turnstile> p \<Longrightarrow> v \<turnstile> q \<Longrightarrow> v \<turnstile> And p q"
"v \<turnstile> p \<Longrightarrow> v \<turnstile> q \<Longrightarrow> v \<turnstile> Or p q"
"v \<turnstile> p \<Longrightarrow> v \<turnstile> Neg p"
"v \<turnstile> a \<Longrightarrow> v \<turnstile> Atom a"
unfolding types_pset_fm_def using fm.set_intros by auto
lemma types_pset_atom_Member_D:
assumes "v \<turnstile> s \<in>\<^sub>s f t1 t2" "f \<in> {(\<squnion>\<^sub>s), (\<sqinter>\<^sub>s), (-\<^sub>s)}"
shows "v \<turnstile> s \<in>\<^sub>s t1" "v \<turnstile> s \<in>\<^sub>s t2"
proof -
from assms obtain ls where
"v \<turnstile> s : ls" "v \<turnstile> f t1 t2 : Suc ls"
using types_pset_atom.simps by fastforce
with assms have "v \<turnstile> s \<in>\<^sub>s t1 \<and> v \<turnstile> s \<in>\<^sub>s t2"
by (auto simp: types_pset_atom.simps elim: types_pset_term_cases)
then show "v \<turnstile> s \<in>\<^sub>s t1" "v \<turnstile> s \<in>\<^sub>s t2"
by blast+
qed
lemmas types_pset_atom_Member_Union_D = types_pset_atom_Member_D[where ?f="(\<squnion>\<^sub>s)", simplified]
and types_pset_atom_Member_Inter_D = types_pset_atom_Member_D[where ?f="(\<sqinter>\<^sub>s)", simplified]
and types_pset_atom_Member_Diff_D = types_pset_atom_Member_D[where ?f="(-\<^sub>s)", simplified]
lemma types_term_if_mem_subterms:
fixes \<phi> :: "'a pset_fm"
assumes "v \<turnstile> \<phi>"
assumes "f t1 t2 \<in> subterms \<phi>" "f \<in> {(\<squnion>\<^sub>s), (\<sqinter>\<^sub>s), (-\<^sub>s)}"
obtains lt where "v \<turnstile> t1 : lt" "v \<turnstile> t2 : lt"
proof -
from assms obtain a :: "'a pset_atom" where atom: "v \<turnstile> a" "f t1 t2 \<in> subterms a"
unfolding types_pset_fm_def by (induction \<phi>) auto
obtain t' l where "v \<turnstile> t' : l" "f t1 t2 \<in> subterms t'"
apply(rule types_pset_atom.cases[OF \<open>v \<turnstile> a\<close>])
using atom(2) by auto
then obtain lt where "v \<turnstile> t1 : lt \<and> v \<turnstile> t2 : lt"
by (induction t' arbitrary: l)
(use assms(3) in \<open>auto elim: types_pset_term_cases\<close>)
with that show ?thesis
by blast
qed
lemma types_if_types_Member_and_subterms:
fixes \<phi> :: "'a pset_fm"
assumes "v \<turnstile> s \<in>\<^sub>s t1 \<or> v \<turnstile> s \<in>\<^sub>s t2" "v \<turnstile> \<phi>"
assumes "f t1 t2 \<in> subterms \<phi>" "f \<in> {(\<squnion>\<^sub>s), (\<sqinter>\<^sub>s), (-\<^sub>s)}"
shows "v \<turnstile> s \<in>\<^sub>s f t1 t2"
proof -
from types_term_if_mem_subterms[OF assms(2-)] obtain lt where lt:
"v \<turnstile> t1 : lt" "v \<turnstile> t2 : lt"
by blast
moreover from assms(1) lt(1,2) obtain ls where "v \<turnstile> s : ls" "lt = Suc ls"
by (auto simp: types_pset_atom.simps dest: types_term_unique)
ultimately show ?thesis
using assms
by (auto simp: types_pset_term.intros types_pset_atom.simps)
qed
lemma types_subst_tlvl:
fixes l :: "'a pset_atom"
assumes "v \<turnstile> AT (t1 =\<^sub>s t2)" "v \<turnstile> l"
shows "v \<turnstile> subst_tlvl t1 t2 l"
proof -
from assms obtain lt where "v \<turnstile> t1 : lt" "v \<turnstile> t2 : lt"
by (auto simp: types_pset_atom.simps dest!: types_fmD(6))
with assms(2) show ?thesis
by (cases "(t1, t2, l)" rule: subst_tlvl.cases)
(auto simp: types_pset_atom.simps dest: types_term_unique)
qed
lemma types_sym_Equal:
assumes "v \<turnstile> t1 =\<^sub>s t2"
shows "v \<turnstile> t2 =\<^sub>s t1"
using assms by (auto simp: types_pset_atom.simps)
lemma types_lexpands:
fixes \<phi> :: "'a pset_fm"
assumes "lexpands b' b" "b \<noteq> []" "\<phi> \<in> set b'"
assumes "\<And>(\<phi> :: 'a pset_fm). \<phi> \<in> set b \<Longrightarrow> v \<turnstile> \<phi>"
shows "v \<turnstile> \<phi>"
using assms
proof(induction rule: lexpands.induct)
case (1 b' b)
then show ?case
apply(induction rule: lexpands_fm.induct)
apply(force dest: types_fmD intro: types_fmI(3))+
done
next
case (2 b' b)
then show ?case
proof(induction rule: lexpands_un.induct)
case (1 s t1 t2 b)
then show ?thesis
by (auto dest!: types_fmD(5,6) "1"(4) intro!: types_fmI(2,3,4)
intro: types_pset_atom_Member_Union_D)
next
case (2 s t1 b t2)
then have "v \<turnstile> last b"
by auto
from types_if_types_Member_and_subterms[OF _ this] 2 show ?case
by (auto dest!: "2"(5) types_fmD(6) intro: types_fmI(4))
next
case (3 s t2 b t1)
then have "v \<turnstile> last b"
by auto
from types_if_types_Member_and_subterms[OF _ this] 3 show ?case
by (auto dest!: "3"(5) types_fmD(6) intro: types_fmI(4))
next
case (4 s t1 t2 b)
then show ?case
by (auto dest!: types_fmD(5,6) "4"(5) intro!: types_fmI(2,3,4)
intro: types_pset_atom_Member_Union_D)
next
case (5 s t1 t2 b)
then show ?case
by (auto dest!: types_fmD(5,6) "5"(5) intro!: types_fmI(2,3,4)
intro: types_pset_atom_Member_Union_D)
next
case (6 s t1 b t2)
then have "v \<turnstile> last b"
by auto
note types_if_types_Member_and_subterms[where ?f="(\<squnion>\<^sub>s)", OF _ this "6"(3), simplified]
with 6 show ?case
by (auto dest!: types_fmD(5,6) "6"(6) intro!: types_fmI(2,3,4))
qed
next
case (3 b' b)
then show ?case
proof(induction rule: lexpands_int.induct)
case (1 s t1 t2 b)
then show ?thesis
by (auto dest!: types_fmD(5,6) "1"(4) intro!: types_fmI(2,3,4)
intro: types_pset_atom_Member_Inter_D)
next
case (2 s t1 b t2)
then have "v \<turnstile> last b"
by auto
from types_if_types_Member_and_subterms[OF _ this] 2 show ?case
by (auto dest!: "2"(5) types_fmD(5,6) intro!: types_fmI(3,4))
next
case (3 s t2 b t1)
then have "v \<turnstile> last b"
by auto
from types_if_types_Member_and_subterms[OF _ this] 3 show ?case
by (auto dest!: "3"(5) types_fmD(5,6) intro!: types_fmI(3,4))
next
case (4 s t1 t2 b)
then show ?case
by (auto dest!: types_fmD(5,6) "4"(5) intro!: types_fmI(2,3,4)
intro: types_pset_atom_Member_Inter_D)
next
case (5 s t1 t2 b)
then show ?case
by (auto dest!: types_fmD(5,6) "5"(5) intro!: types_fmI(2,3,4)
intro: types_pset_atom_Member_Inter_D)
next
case (6 s t1 b t2)
then have "v \<turnstile> last b"
by auto
note types_if_types_Member_and_subterms[where ?f="(\<sqinter>\<^sub>s)", OF _ this "6"(3), simplified]
with 6 show ?case
by (auto dest!: types_fmD(5,6) "6"(6) intro!: types_fmI(2,3,4))
qed
next
case (4 b' b)
then show ?case
proof(induction rule: lexpands_diff.induct)
case (1 s t1 t2 b)
then show ?thesis
by (auto dest!: types_fmD(5,6) "1"(4) intro!: types_fmI(2,3,4)
intro: types_pset_atom_Member_Diff_D)
next
case (2 s t1 b t2)
then have "v \<turnstile> last b"
by auto
from types_if_types_Member_and_subterms[OF _ this] 2 show ?case
by (auto dest!: "2"(5) types_fmD(5,6) intro!: types_fmI(3,4))
next
case (3 s t2 b t1)
then have "v \<turnstile> last b"
by auto
from types_if_types_Member_and_subterms[OF _ this] 3 show ?case
by (auto dest!: "3"(5) types_fmD(5,6) intro!: types_fmI(3,4))
next
case (4 s t1 t2 b)
then show ?case
by (auto dest!: types_fmD(5,6) "4"(5) intro!: types_fmI(2,3,4)
intro: types_pset_atom_Member_Diff_D)
next
case (5 s t1 t2 b)
then show ?case
by (auto dest!: types_fmD(5,6) "5"(5) intro!: types_fmI(2,3,4)
intro: types_pset_atom_Member_Diff_D)
next
case (6 s t1 b t2)
then have "v \<turnstile> last b"
by auto
note types_if_types_Member_and_subterms[where ?f="(-\<^sub>s)", OF _ this "6"(3), simplified]
with 6 show ?case
by (auto dest!: types_fmD(5,6) "6"(6) intro!: types_fmI(2,3,4))
qed
next
case (5 b' b)
then show ?case
proof(cases rule: lexpands_single.cases)
case (1 t1)
with 5 have "v \<turnstile> last b"
by auto
with \<open>Single t1 \<in> subterms (last b)\<close> obtain a :: "'a pset_atom"
where atom: "a \<in> atoms (last b)" "Single t1 \<in> subterms a" "v \<turnstile> a"
unfolding types_pset_fm_def by (metis UN_E subterms_fm_def)
obtain t' l where "Single t1 \<in> subterms t'" "v \<turnstile> t' : l"
apply(rule types_pset_atom.cases[OF \<open>v \<turnstile> a\<close>])
using atom(2) by auto
note types_term_if_mem_subterms_term[OF this]
then obtain lt1 where "v \<turnstile> t1 : lt1" "v \<turnstile> Single t1 : Suc lt1"
by (metis types_pset_term_cases(3))
with 5 1 show ?thesis
using types_pset_atom.intros(2) types_pset_fm_def by fastforce
qed (auto simp: types_pset_atom.simps elim!: types_pset_term_cases
dest!: types_fmD(5,6) "5"(4) intro!: types_fmI(3,4))
next
case (6 b' b)
then show ?case
proof(cases rule: lexpands_eq.cases)
case (5 s t s')
with 6 show ?thesis
by (auto 0 3 dest!: "6"(4) types_fmD(5,6) dest: types_term_unique
simp: types_pset_atom.simps types_term_unique intro!: types_fmI(3,4))
qed (auto simp: types_sym_Equal dest!: "6"(4) types_fmD(5,6)
intro!: types_fmI(3,4) types_subst_tlvl)
qed
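
text \<open>Branching expansion without witnesses introduces no fresh variables, so it
  also preserves typing under the unchanged valuation.\<close>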
lemma types_bexpands_nowit:
fixes \<phi> :: "'a pset_fm"
assumes "bexpands_nowit bs' b" "b' \<in> bs'" "\<phi> \<in> set b'"
assumes "\<And>(\<phi> :: 'a pset_fm). \<phi> \<in> set b \<Longrightarrow> v \<turnstile> \<phi>"
shows "v \<turnstile> \<phi>"
using assms(1)
proof(cases rule: bexpands_nowit.cases)
case (1 p q)
from assms "1"(2) show ?thesis
unfolding "1"(1)
by (auto dest!: assms(4) types_fmD(3) intro!: types_fmI(3))
next
case (2 p q)
from assms "2"(2) show ?thesis
unfolding "2"(1)
by (auto dest!: assms(4) types_fmD(5) dest: types_fmD(1,2) intro!: types_fmI(3))
next
case (3 s t1 t2)
from assms "3"(2,3) show ?thesis
unfolding "3"(1) using types_pset_atom_Member_Union_D(1)[of v s t1 t2]
by (auto dest!: types_fmD(6) assms(4) intro!: types_fmI(3,4))
next
case (4 s t1 t2)
with assms have "v \<turnstile> last b"
by (metis empty_iff empty_set last_in_set)
from assms "4"(2,3) show ?thesis
unfolding "4"(1)
using types_if_types_Member_and_subterms[where ?f="(\<sqinter>\<^sub>s)", OF _ \<open>v \<turnstile> last b\<close> "4"(3),
THEN types_pset_atom_Member_Inter_D(2)]
by (force dest!: types_fmD(6) assms(4) intro!: types_fmI(3,4))
next
case (5 s t1 t2)
with assms have "v \<turnstile> last b"
by (metis empty_iff empty_set last_in_set)
from assms "5"(2,3) show ?thesis
unfolding "5"(1)
using types_if_types_Member_and_subterms[where ?f="(-\<^sub>s)", OF _ \<open>v \<turnstile> last b\<close> "5"(3),
THEN types_pset_atom_Member_Diff_D(2)]
by (force dest!: types_fmD(6) assms(4) intro!: types_fmI(3,4))
qed
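
text \<open>Typing judgments depend only on the valuation's values at the variables that
  actually occur in the term, atom, or formula in question.\<close>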
lemma types_term_if_on_vars_eq:
assumes "\<forall>x \<in> vars t. v' x = v x"
shows "v' \<turnstile> t : l \<longleftrightarrow> v \<turnstile> t : l"
using assms
apply(induction t arbitrary: l)
apply(auto intro!: types_pset_term_intros' types_pset_term.intros(4-)
elim!: types_pset_term_cases)
done
lemma types_pset_atom_if_on_vars_eq:
fixes a :: "'a pset_atom"
assumes "\<forall>x \<in> vars a. v' x = v x"
shows "v' \<turnstile> a \<longleftrightarrow> v \<turnstile> a"
using assms
by (auto simp: ball_Un types_pset_atom.simps dest!: types_term_if_on_vars_eq)
lemma types_pset_fm_if_on_vars_eq:
fixes \<phi> :: "'a pset_fm"
assumes "\<forall>x \<in> vars \<phi>. v' x = v x"
shows "v' \<turnstile> \<phi> \<longleftrightarrow> v \<turnstile> \<phi>"
using assms types_pset_atom_if_on_vars_eq
unfolding types_pset_fm_def vars_fm_def by fastforce
lemma types_term_fun_upd:
assumes "x \<notin> vars t"
shows "v(x := l) \<turnstile> t : l \<longleftrightarrow> v \<turnstile> t : l"
using assms types_term_if_on_vars_eq by (metis fun_upd_other)
lemma types_pset_atom_fun_upd:
fixes a :: "'a pset_atom"
assumes "x \<notin> vars a"
shows "v(x := l) \<turnstile> a \<longleftrightarrow> v \<turnstile> a"
using assms types_pset_atom_if_on_vars_eq by (metis fun_upd_other)
lemma types_pset_fm_fun_upd:
fixes \<phi> :: "'a pset_fm"
assumes "x \<notin> vars \<phi>"
shows "v(x := l) \<turnstile> \<phi> \<longleftrightarrow> v \<turnstile> \<phi>"
using assms types_pset_fm_if_on_vars_eq by (metis fun_upd_other)
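
text \<open>Branching expansion with a witness introduces a fresh variable x, so the
  valuation must be updated at x; by the fun_upd lemmas above this leaves the
  typing of all formulas not mentioning x untouched.\<close>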
lemma types_bexpands_wit:
fixes b :: "'a branch" and bs' :: "'a branch set"
assumes "bexpands_wit t1 t2 x bs' b" "b \<noteq> []"
assumes "\<And>(\<phi> :: 'a pset_fm). \<phi> \<in> set b \<Longrightarrow> v \<turnstile> \<phi>"
obtains l where "\<forall>\<phi> \<in> set b. v(x := l) \<turnstile> \<phi>"
"\<forall>b' \<in> bs'. \<forall>\<phi> \<in> set b'. v(x := l) \<turnstile> \<phi>"
using assms(1)
proof(cases rule: bexpands_wit.cases)
case 1
from assms(3)[OF "1"(2)] obtain lt where lt: "v \<turnstile> t1 : lt" "v \<turnstile> t2 : lt"
by (auto dest!: types_fmD simp: types_pset_atom.simps)
with 1 assms(2,3) have "lt \<noteq> 0"
unfolding urelem_def using last_in_set by metis
with lt obtain ltp where ltp: "v \<turnstile> t1 : Suc ltp" "v \<turnstile> t2 : Suc ltp"
using not0_implies_Suc by blast
with assms(3) have "\<forall>\<phi> \<in> set b. v(x := ltp) \<turnstile> \<phi>"
using types_pset_fm_fun_upd \<open>x \<notin> vars b\<close> by (metis vars_fm_vars_branchI)
moreover from \<open>x \<notin> vars b\<close> \<open>AF (t1 =\<^sub>s t2) \<in> set b\<close> have not_in_vars: "x \<notin> vars t1" "x \<notin> vars t2"
using assms(2) by (auto simp: vars_fm_vars_branchI)
from this[THEN types_term_fun_upd] have "\<forall>b' \<in> bs'. \<forall>\<phi> \<in> set b'. v(x := ltp) \<turnstile> \<phi>"
using ltp unfolding "1"(1)
apply(auto intro!: types_fmI types_pset_term_intros'(2) simp: types_pset_atom.simps)
apply (metis fun_upd_same fun_upd_upd types_pset_term.intros(2))+
done
ultimately show ?thesis
using that by blast
qed
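
text \<open>Typing is preserved along arbitrary expansion sequences, for some valuation
  that agrees with the original one on the variables of the initial branch.\<close>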
lemma types_expandss:
fixes b b' :: "'a branch"
assumes "expandss b' b" "b \<noteq> []"
assumes "\<And>\<phi>. \<phi> \<in> set b \<Longrightarrow> v \<turnstile> \<phi>"
obtains v' where "\<forall>x \<in> vars b. v' x = v x" "\<forall>\<phi> \<in> set b'. v' \<turnstile> \<phi>"
using assms
proof(induction b' b arbitrary: thesis rule: expandss.induct)
case (1 b)
then show ?case by blast
next
case (2 b3 b2 b1)
then obtain v' where v': "\<forall>x \<in> vars b1. v' x = v x" "\<forall>\<phi> \<in> set b2. v' \<turnstile> \<phi>"
by blast
with types_lexpands[OF \<open>lexpands b3 b2\<close>] have "\<forall>\<phi> \<in> set b3. v' \<turnstile> \<phi>"
using expandss_not_Nil[OF \<open>expandss b2 b1\<close> \<open>b1 \<noteq> []\<close>] by blast
with v' "2.prems" show ?case
by force
next
case (3 bs b2 b3 b1)
then obtain v' where v': "\<forall>x \<in> vars b1. v' x = v x" "\<forall>\<phi> \<in> set b2. v' \<turnstile> \<phi>"
by blast
from \<open>bexpands bs b2\<close> show ?case
proof(cases rule: bexpands.cases)
case 1
from types_bexpands_nowit[OF this] v' \<open>b3 \<in> bs\<close> have "\<forall>\<phi> \<in> set b3. v' \<turnstile> \<phi>"
by blast
with v' "3.prems" show ?thesis
by force
next
case (2 t1 t2 x)
from types_bexpands_wit[OF this] v' \<open>b3 \<in> bs\<close> obtain l
where "\<forall>\<phi> \<in> set b3. v'(x := l) \<turnstile> \<phi>"
using expandss_not_Nil[OF \<open>expandss b2 b1\<close> \<open>b1 \<noteq> []\<close>] by metis
moreover from bexpands_witD(9)[OF 2] have "x \<notin> vars b1"
using expandss_mono[OF \<open>expandss b2 b1\<close>] unfolding vars_branch_def by blast
then have "\<forall>y \<in> vars b1. (v'(x := l)) y = v y"
using v'(1) by simp
moreover from \<open>x \<notin> vars b2\<close> v'(2) have "\<forall>\<phi> \<in> set b2. v'(x := l) \<turnstile> \<phi>"
by (meson types_pset_fm_fun_upd vars_fm_vars_branchI)
ultimately show ?thesis
using v' "3.prems"(1)[where ?v'="v'(x := l)"] by fastforce
qed
qed
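
text \<open>Hence being an urelement of the initial formula is invariant along
  well-formed branches.\<close>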
lemma urelem_invar_if_wf_branch:
assumes "wf_branch b"
assumes "urelem (last b) x" "x \<in> subterms (last b)"
shows "\<exists>v. \<forall>\<phi> \<in> set b. urelem' v \<phi> x"
proof -
from assms obtain v where v: "v \<turnstile> last b" "v \<turnstile> x : 0"
unfolding urelem_def by blast
moreover from assms have "expandss b [last b]"
by (metis expandss_last_eq last.simps list.distinct(1) wf_branch_def)
from types_expandss[OF this, simplified] v obtain v' where
"\<forall>x \<in> vars (last b). v' x = v x" "\<forall>\<phi> \<in> set b. v' \<turnstile> \<phi>"
by (metis list.set_intros(1) vars_fm_vars_branchI)
ultimately show ?thesis
unfolding urelem_def using assms
by (metis mem_vars_fm_if_mem_subterms_fm types_term_if_on_vars_eq)
qed
lemma not_types_term_0_if_types_term:
fixes s :: "'a pset_term"
assumes "f t1 t2 \<in> subterms s" "f \<in> {(\<sqinter>\<^sub>s), (\<squnion>\<^sub>s), (-\<^sub>s)}"
assumes "v \<turnstile> f t1 t2 : l"
shows "\<not> v \<turnstile> t1 : 0" "\<not> v \<turnstile> t2 : 0"
using assms
by (induction s arbitrary: l)
(auto elim: types_pset_term_cases dest: types_term_unique)
lemma types_term_subterms:
assumes "t \<in> subterms s"
assumes "v \<turnstile> s : ls"
obtains lt where "v \<turnstile> t : lt"
using assms
by (induction s arbitrary: ls) (auto elim: types_pset_term_cases dest: types_term_unique)
lemma types_atom_subterms:
fixes a :: "'a pset_atom"
assumes "t \<in> subterms a"
assumes "v \<turnstile> a"
obtains lt where "v \<turnstile> t : lt"
using assms
by (cases a) (fastforce elim: types_term_subterms simp: types_pset_atom.simps)+
lemma subterms_type_pset_fm_not_None:
fixes \<phi> :: "'a pset_fm"
assumes "t \<in> subterms \<phi>"
assumes "v \<turnstile> \<phi>"
obtains lt where "v \<turnstile> t : lt"
using assms
by (induction \<phi>) (auto elim: types_atom_subterms dest: types_fmD(1-5) dest!: types_fmD(6))
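
text \<open>Operands of the compound set operators (\<sqinter>\<^sub>s), (\<squnion>\<^sub>s)
  and (-\<^sub>s) are never urelements.\<close>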
lemma not_urelem_comps_if_compound:
assumes "f t1 t2 \<in> subterms \<phi>" "f \<in> {(\<sqinter>\<^sub>s), (\<squnion>\<^sub>s), (-\<^sub>s)}"
shows "\<not> urelem \<phi> t1" "\<not> urelem \<phi> t2"
proof -
from assms have "\<not> v \<turnstile> t1 : 0" "\<not> v \<turnstile> t2 : 0" if "v \<turnstile> \<phi>" for v
using that not_types_term_0_if_types_term[OF _ _ subterms_type_pset_fm_not_None]
using subterms_refl by metis+
then show "\<not> urelem \<phi> t1" "\<not> urelem \<phi> t2"
unfolding urelem_def by blast+
qed
end
|
[STATEMENT]
lemma points_index_pair_rep_num:
assumes "\<And> b. b \<in># B \<Longrightarrow> x \<in> b"
shows "B index {x, y} = B rep y"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. B index {x, y} = B rep y
[PROOF STEP]
using point_replication_number_def points_index_def
[PROOF STATE]
proof (prove)
using this:
?B rep ?x \<equiv> size (filter_mset ((\<in>) ?x) ?B)
?B index ?ps \<equiv> size (filter_mset ((\<subseteq>) ?ps) ?B)
goal (1 subgoal):
1. B index {x, y} = B rep y
[PROOF STEP]
by (metis assms empty_subsetI filter_mset_cong insert_subset)
|
{-# OPTIONS --cubical --no-import-sorts --safe #-}
module Cubical.Algebra.Ring.Properties where
open import Cubical.Foundations.Prelude
open import Cubical.Foundations.Equiv
open import Cubical.Foundations.Equiv.HalfAdjoint
open import Cubical.Foundations.HLevels
open import Cubical.Foundations.Isomorphism
open import Cubical.Foundations.Univalence
open import Cubical.Foundations.Transport
open import Cubical.Foundations.SIP
open import Cubical.Data.Sigma
open import Cubical.Structures.Axioms
open import Cubical.Structures.Auto
open import Cubical.Structures.Macro
open import Cubical.Algebra.Semigroup
open import Cubical.Algebra.Monoid
open import Cubical.Algebra.AbGroup
open import Cubical.Algebra.Ring.Base
private
  variable
    ℓ : Level

{-
  Some basic calculations (used for example in QuotientRing.agda)
  that should become obsolete or subject to change once we have a
  ring solver (see https://github.com/agda/cubical/issues/297).
-}
module Theory (R' : Ring {ℓ}) where
  open RingStr (snd R')

  private R = ⟨ R' ⟩

  implicitInverse : (x y : R)
                  → x + y ≡ 0r
                  → y ≡ - x
  implicitInverse x y p =
    y               ≡⟨ sym (+-lid y) ⟩
    0r + y          ≡⟨ cong (λ u → u + y) (sym (+-linv x)) ⟩
    (- x + x) + y   ≡⟨ sym (+-assoc _ _ _) ⟩
    (- x) + (x + y) ≡⟨ cong (λ u → (- x) + u) p ⟩
    (- x) + 0r      ≡⟨ +-rid _ ⟩
    - x             ∎

  equalByDifference : (x y : R)
                    → x - y ≡ 0r
                    → x ≡ y
  equalByDifference x y p =
    x               ≡⟨ sym (+-rid _) ⟩
    x + 0r          ≡⟨ cong (λ u → x + u) (sym (+-linv y)) ⟩
    x + ((- y) + y) ≡⟨ +-assoc _ _ _ ⟩
    (x - y) + y     ≡⟨ cong (λ u → u + y) p ⟩
    0r + y          ≡⟨ +-lid _ ⟩
    y               ∎

  0-selfinverse : - 0r ≡ 0r
  0-selfinverse = sym (implicitInverse _ _ (+-rid 0r))

  0-idempotent : 0r + 0r ≡ 0r
  0-idempotent = +-lid 0r

  +-idempotency→0 : (x : R) → x ≡ x + x → x ≡ 0r
  +-idempotency→0 x p =
    x               ≡⟨ sym (+-rid x) ⟩
    x + 0r          ≡⟨ cong (λ u → x + u) (sym (+-rinv _)) ⟩
    x + (x + (- x)) ≡⟨ +-assoc _ _ _ ⟩
    (x + x) + (- x) ≡⟨ cong (λ u → u + (- x)) (sym p) ⟩
    x + (- x)       ≡⟨ +-rinv _ ⟩
    0r              ∎
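
  -- x · 0r (and dually 0r · x) is shown to be additively idempotent, and by
  -- +-idempotency→0 the only additively idempotent element is 0r.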
  0-rightNullifies : (x : R) → x · 0r ≡ 0r
  0-rightNullifies x =
    let x·0-is-idempotent : x · 0r ≡ x · 0r + x · 0r
        x·0-is-idempotent =
          x · 0r              ≡⟨ cong (λ u → x · u) (sym 0-idempotent) ⟩
          x · (0r + 0r)       ≡⟨ ·-rdist-+ _ _ _ ⟩
          (x · 0r) + (x · 0r) ∎
    in (+-idempotency→0 _ x·0-is-idempotent)

  0-leftNullifies : (x : R) → 0r · x ≡ 0r
  0-leftNullifies x =
    let 0·x-is-idempotent : 0r · x ≡ 0r · x + 0r · x
        0·x-is-idempotent =
          0r · x              ≡⟨ cong (λ u → u · x) (sym 0-idempotent) ⟩
          (0r + 0r) · x       ≡⟨ ·-ldist-+ _ _ _ ⟩
          (0r · x) + (0r · x) ∎
    in +-idempotency→0 _ 0·x-is-idempotent

  -commutesWithRight-· : (x y : R) → x · (- y) ≡ - (x · y)
  -commutesWithRight-· x y = implicitInverse (x · y) (x · (- y))
    (x · y + x · (- y) ≡⟨ sym (·-rdist-+ _ _ _) ⟩
     x · (y + (- y))   ≡⟨ cong (λ u → x · u) (+-rinv y) ⟩
     x · 0r            ≡⟨ 0-rightNullifies x ⟩
     0r                ∎)

  -commutesWithLeft-· : (x y : R) → (- x) · y ≡ - (x · y)
  -commutesWithLeft-· x y = implicitInverse (x · y) ((- x) · y)
    (x · y + (- x) · y ≡⟨ sym (·-ldist-+ _ _ _) ⟩
     (x - x) · y       ≡⟨ cong (λ u → u · y) (+-rinv x) ⟩
     0r · y            ≡⟨ 0-leftNullifies y ⟩
     0r                ∎)

  -isDistributive : (x y : R) → (- x) + (- y) ≡ - (x + y)
  -isDistributive x y =
    implicitInverse _ _
      ((x + y) + ((- x) + (- y)) ≡⟨ sym (+-assoc _ _ _) ⟩
       x + (y + ((- x) + (- y))) ≡⟨ cong (λ u → x + (y + u)) (+-comm _ _) ⟩
       x + (y + ((- y) + (- x))) ≡⟨ cong (λ u → x + u) (+-assoc _ _ _) ⟩
       x + ((y + (- y)) + (- x)) ≡⟨ cong (λ u → x + (u + (- x))) (+-rinv _) ⟩
       x + (0r + (- x))          ≡⟨ cong (λ u → x + u) (+-lid _) ⟩
       x + (- x)                 ≡⟨ +-rinv _ ⟩
       0r                        ∎)

  translatedDifference : (x a b : R) → a - b ≡ (x + a) - (x + b)
  translatedDifference x a b =
    a - b                       ≡⟨ cong (λ u → a + u) (sym (+-lid _)) ⟩
    (a + (0r + (- b)))          ≡⟨ cong (λ u → a + (u + (- b))) (sym (+-rinv _)) ⟩
    (a + ((x + (- x)) + (- b))) ≡⟨ cong (λ u → a + u) (sym (+-assoc _ _ _)) ⟩
    (a + (x + ((- x) + (- b)))) ≡⟨ (+-assoc _ _ _) ⟩
    ((a + x) + ((- x) + (- b))) ≡⟨ cong (λ u → u + ((- x) + (- b))) (+-comm _ _) ⟩
    ((x + a) + ((- x) + (- b))) ≡⟨ cong (λ u → (x + a) + u) (-isDistributive _ _) ⟩
    ((x + a) - (x + b))         ∎

  +-assoc-comm1 : (x y z : R) → x + (y + z) ≡ y + (x + z)
  +-assoc-comm1 x y z = +-assoc x y z ∙∙ cong (λ x → x + z) (+-comm x y) ∙∙ sym (+-assoc y x z)

  +-assoc-comm2 : (x y z : R) → x + (y + z) ≡ z + (y + x)
  +-assoc-comm2 x y z = +-assoc-comm1 x y z ∙∙ cong (λ x → y + x) (+-comm x z) ∙∙ +-assoc-comm1 y z x
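
-- Ring homomorphisms preserve the additive unit and negation; both facts
-- follow from 0-idempotent, +-idempotency→0 and implicitInverse above.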
module HomTheory {R S : Ring {ℓ}} (f′ : RingHom R S) where
  open Theory ⦃...⦄
  open RingStr ⦃...⦄
  open RingHom f′

  private
    instance
      _ = R
      _ = S
      _ = snd R
      _ = snd S

  homPres0 : f 0r ≡ 0r
  homPres0 = +-idempotency→0 (f 0r)
               (f 0r        ≡⟨ sym (cong f 0-idempotent) ⟩
                f (0r + 0r) ≡⟨ isHom+ _ _ ⟩
                f 0r + f 0r ∎)

  -commutesWithHom : (x : ⟨ R ⟩) → f (- x) ≡ - (f x)
  -commutesWithHom x = implicitInverse _ _
    (f x + f (- x) ≡⟨ sym (isHom+ _ _) ⟩
     f (x + (- x)) ≡⟨ cong f (+-rinv x) ⟩
     f 0r          ≡⟨ homPres0 ⟩
     0r            ∎)
|
{-# OPTIONS --without-K --safe #-}
module Fragment.Examples.CSemigroup.Arith.Atomic where
open import Fragment.Examples.CSemigroup.Arith.Base
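
-- Each proof below is discharged by a single call to the `fragment` tactic,
-- instantiated with the free extension (frex) of commutative semigroups;
-- `+-csemigroup` and `*-csemigroup` come from the Base module imported above.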
-- Fully dynamic associativity
+-dyn-assoc₁ : ∀ {m n o} → (m + n) + o ≡ m + (n + o)
+-dyn-assoc₁ = fragment CSemigroupFrex +-csemigroup
+-dyn-assoc₂ : ∀ {m n o p} → ((m + n) + o) + p ≡ m + (n + (o + p))
+-dyn-assoc₂ = fragment CSemigroupFrex +-csemigroup
+-dyn-assoc₃ : ∀ {m n o p q} → (m + n) + o + (p + q) ≡ m + (n + o + p) + q
+-dyn-assoc₃ = fragment CSemigroupFrex +-csemigroup
*-dyn-assoc₁ : ∀ {m n o} → (m * n) * o ≡ m * (n * o)
*-dyn-assoc₁ = fragment CSemigroupFrex *-csemigroup
*-dyn-assoc₂ : ∀ {m n o p} → ((m * n) * o) * p ≡ m * (n * (o * p))
*-dyn-assoc₂ = fragment CSemigroupFrex *-csemigroup
*-dyn-assoc₃ : ∀ {m n o p q} → (m * n) * o * (p * q) ≡ m * (n * o * p) * q
*-dyn-assoc₃ = fragment CSemigroupFrex *-csemigroup
-- Fully dynamic commutativity
+-dyn-comm₁ : ∀ {m n} → m + n ≡ n + m
+-dyn-comm₁ = fragment CSemigroupFrex +-csemigroup
+-dyn-comm₂ : ∀ {m n o} → m + (n + o) ≡ (o + n) + m
+-dyn-comm₂ = fragment CSemigroupFrex +-csemigroup
+-dyn-comm₃ : ∀ {m n o p} → (m + n) + (o + p) ≡ (p + o) + (n + m)
+-dyn-comm₃ = fragment CSemigroupFrex +-csemigroup
*-dyn-comm₁ : ∀ {m n} → m * n ≡ n * m
*-dyn-comm₁ = fragment CSemigroupFrex *-csemigroup
*-dyn-comm₂ : ∀ {m n o} → m * (n * o) ≡ (o * n) * m
*-dyn-comm₂ = fragment CSemigroupFrex *-csemigroup
*-dyn-comm₃ : ∀ {m n o p} → (m * n) * (o * p) ≡ (p * o) * (n * m)
*-dyn-comm₃ = fragment CSemigroupFrex *-csemigroup
-- Fully dynamic associativity and commutativity
+-dyn-comm-assoc₁ : ∀ {m n o} → (m + n) + o ≡ n + (m + o)
+-dyn-comm-assoc₁ = fragment CSemigroupFrex +-csemigroup
+-dyn-comm-assoc₂ : ∀ {m n o p} → ((m + n) + o) + p ≡ p + (o + (n + m))
+-dyn-comm-assoc₂ = fragment CSemigroupFrex +-csemigroup
+-dyn-comm-assoc₃ : ∀ {m n o p q} → (m + n) + o + (p + q) ≡ q + (p + o + n) + m
+-dyn-comm-assoc₃ = fragment CSemigroupFrex +-csemigroup
*-dyn-comm-assoc₁ : ∀ {m n o} → (m * n) * o ≡ n * (m * o)
*-dyn-comm-assoc₁ = fragment CSemigroupFrex *-csemigroup
*-dyn-comm-assoc₂ : ∀ {m n o p} → ((m * n) * o) * p ≡ p * (o * (n * m))
*-dyn-comm-assoc₂ = fragment CSemigroupFrex *-csemigroup
*-dyn-comm-assoc₃ : ∀ {m n o p q} → (m * n) * o * (p * q) ≡ q * (p * o * n) * m
*-dyn-comm-assoc₃ = fragment CSemigroupFrex *-csemigroup
-- Partially static associativity
+-sta-assoc₁ : ∀ {m n} → (m + 2) + (3 + n) ≡ m + (5 + n)
+-sta-assoc₁ = fragment CSemigroupFrex +-csemigroup
+-sta-assoc₂ : ∀ {m n o p} → (((m + n) + 5) + o) + p ≡ m + (n + (2 + (3 + (o + p))))
+-sta-assoc₂ = fragment CSemigroupFrex +-csemigroup
+-sta-assoc₃ : ∀ {m n o p} → ((m + n) + 2) + (3 + (o + p)) ≡ m + ((n + 1) + (4 + o)) + p
+-sta-assoc₃ = fragment CSemigroupFrex +-csemigroup
*-sta-assoc₁ : ∀ {m n} → (m * 2) * (3 * n) ≡ m * (6 * n)
*-sta-assoc₁ = fragment CSemigroupFrex *-csemigroup
*-sta-assoc₂ : ∀ {m n o p} → (((m * n) * 6) * o) * p ≡ m * (n * (2 * (3 * (o * p))))
*-sta-assoc₂ = fragment CSemigroupFrex *-csemigroup
*-sta-assoc₃ : ∀ {m n o p} → ((m * n) * 2) * (6 * (o * p)) ≡ m * ((n * 2) * (6 * o)) * p
*-sta-assoc₃ = fragment CSemigroupFrex *-csemigroup
-- Partially static commutativity
+-sta-comm₁ : ∀ {m} → m + 1 ≡ 1 + m
+-sta-comm₁ = fragment CSemigroupFrex +-csemigroup
+-sta-comm₂ : ∀ {m n} → m + (2 + n) ≡ (n + 2) + m
+-sta-comm₂ = fragment CSemigroupFrex +-csemigroup
+-sta-comm₃ : ∀ {m n o p} → (1 + (m + n)) + ((o + p) + 2) ≡ ((p + o) + 2) + (1 + (n + m))
+-sta-comm₃ = fragment CSemigroupFrex +-csemigroup
*-sta-comm₁ : ∀ {m} → m * 4 ≡ 4 * m
*-sta-comm₁ = fragment CSemigroupFrex *-csemigroup
*-sta-comm₂ : ∀ {m n} → m * (2 * n) ≡ (n * 2) * m
*-sta-comm₂ = fragment CSemigroupFrex *-csemigroup
*-sta-comm₃ : ∀ {m n o p} → (4 * (m * n)) * ((o * p) * 2) ≡ ((p * o) * 2) * (4 * (n * m))
*-sta-comm₃ = fragment CSemigroupFrex *-csemigroup
-- Partially static associativity and commutativity
+-sta-comm-assoc₁ : ∀ {m n o} → 1 + (m + n) + o + 4 ≡ 5 + n + (m + o)
+-sta-comm-assoc₁ = fragment CSemigroupFrex +-csemigroup
+-sta-comm-assoc₂ : ∀ {m n o p} → 5 + ((m + n) + o) + p ≡ p + ((o + 1) + (n + m)) + 4
+-sta-comm-assoc₂ = fragment CSemigroupFrex +-csemigroup
+-sta-comm-assoc₃ : ∀ {m n o p q} → (m + n + 1) + o + (p + q + 4) ≡ (2 + q) + (p + o + n) + (m + 3)
+-sta-comm-assoc₃ = fragment CSemigroupFrex +-csemigroup
*-sta-comm-assoc₁ : ∀ {m n o} → 2 * (m * n) * o * 3 ≡ 6 * n * (m * o)
*-sta-comm-assoc₁ = fragment CSemigroupFrex *-csemigroup
*-sta-comm-assoc₂ : ∀ {m n o p} → 6 * ((m * n) * o) * p ≡ p * ((o * 2) * (n * m)) * 3
*-sta-comm-assoc₂ = fragment CSemigroupFrex *-csemigroup
*-sta-comm-assoc₃ : ∀ {m n o p q} → (m * n * 3) * o * (p * q * 4) ≡ (2 * q) * (p * o * n) * (m * 6)
*-sta-comm-assoc₃ = fragment CSemigroupFrex *-csemigroup
|