Market orders: with a market order, you instruct your broker to execute your buy or sell at the current market rate.

The forex market operates continuously, 24 hours a day except weekends: trading runs from 22:00 GMT on Sunday (Sydney open) until 22:00 GMT on Friday (New York close).

On a GBP/USD trade, the value per pip is $10 (per standard lot). Every pip the market moves is therefore worth $10, either in your favour or against you.

Traders also have to understand how to handle mistakes. Most mistakes come from taking a bad entry or managing a good entry poorly. Most trades, however, have at least a 40% chance of failing, so a trader has to know what to do when a trade is not doing what he expected. When do you decide that your premise is wrong and your trade is bad? And what do you do once you decide you are in a losing trade?

The Commodity Futures Trading Commission (CFTC) limits the leverage available to retail forex traders in the United States to 50:1 on major currency pairs and 20:1 for all others. OANDA Asia Pacific offers maximum leverage of 50:1 on FX products, and limits apply to the leverage offered on CFDs; maximum leverage for OANDA Canada clients is determined by IIROC and is subject to change.

Currency trading on margin involves high risk and is not suitable for all investors; as a leveraged product, losses can exceed initial deposits. Only trade with funds you can afford to lose: you can control the size of the risk you take, but not what the market does. A common guideline is to risk no more than 2% of equity per trade.
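To make the pip-value and 2%-of-equity figures above concrete, here is a minimal position-sizing sketch in Python. It is illustrative only: the function name and default values are assumptions, and the $10-per-pip figure applies to a standard GBP/USD lot as described above.

```python
def max_position_lots(equity, risk_fraction=0.02, stop_pips=40, pip_value_per_lot=10.0):
    """Hypothetical sketch: cap the position size so that a stop-loss of
    `stop_pips` pips loses at most `risk_fraction` of account equity,
    assuming roughly $10 per pip per standard lot (as on GBP/USD)."""
    risk_amount = equity * risk_fraction          # e.g. 2% of equity
    loss_per_lot = stop_pips * pip_value_per_lot  # loss per lot if the stop is hit
    return risk_amount / loss_per_lot

# A $10,000 account risking 2% on a 40-pip stop should trade at most 0.5 standard lots.
print(max_position_lots(10_000))  # 0.5
```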
|
```python
from __future__ import print_function
import os
import pandas as pd
%matplotlib inline
from matplotlib import pyplot as plt
```
```python
#Set current directory and work relative to it
os.chdir('../Data Files')
```
```python
#Load the dataset into a pandas.DataFrame
ibm_df = pd.read_csv('ibm-common-stock-closing-prices.csv')
ibm_df.index = ibm_df['Date']
```
```python
#Let's find out the shape of the DataFrame
print('Shape of the dataframe:', ibm_df.shape)
```
Shape of the dataframe: (1009, 2)
```python
#Let's see the top 10 rows
ibm_df.head(10)
```
| Date (index) | Date | IBM common stock closing prices |
|---|---|---|
| 1962-01-02 | 1962-01-02 | 572.00 |
| 1962-01-03 | 1962-01-03 | 577.00 |
| 1962-01-04 | 1962-01-04 | 571.25 |
| 1962-01-05 | 1962-01-05 | 560.00 |
| 1962-01-08 | 1962-01-08 | 549.50 |
| 1962-01-09 | 1962-01-09 | 556.00 |
| 1962-01-10 | 1962-01-10 | 557.00 |
| 1962-01-11 | 1962-01-11 | 563.00 |
| 1962-01-12 | 1962-01-12 | 564.00 |
| 1962-01-15 | 1962-01-15 | 566.50 |
```python
#Rename the second column
ibm_df.rename(columns={'IBM common stock closing prices': 'Close_Price'},
inplace=True)
ibm_df.head()
```
| Date (index) | Date | Close_Price |
|---|---|---|
| 1962-01-02 | 1962-01-02 | 572.00 |
| 1962-01-03 | 1962-01-03 | 577.00 |
| 1962-01-04 | 1962-01-04 | 571.25 |
| 1962-01-05 | 1962-01-05 | 560.00 |
| 1962-01-08 | 1962-01-08 | 549.50 |
```python
#Drop rows where both Date and Close_Price are missing
missing = (pd.isnull(ibm_df['Date'])) & (pd.isnull(ibm_df['Close_Price']))
print('No. of rows with missing values:', missing.sum())
ibm_df = ibm_df.loc[~missing, :]
```
No. of rows with missing values: 0
```python
#To illustrate the idea of a moving average, we compute a weekly moving average using
#a window of 5 days rather than 7, because trading happens only on weekdays.
ibm_df['5-Day Moving Avg'] = ibm_df['Close_Price'].rolling(5).mean()
```
```python
fig = plt.figure(figsize=(16, 8))
ax = fig.add_subplot(2,1,1)
ibm_df['Close_Price'].plot(ax=ax, color='b')
ax.set_title('IBM Common Stock Close Prices during 1962-1965')
ax = fig.add_subplot(2,1,2)
ibm_df['5-Day Moving Avg'].plot(ax=ax, color='r')
ax.set_title('5-day Moving Average')
plt.tight_layout(pad=0.4, w_pad=0.5, h_pad=2.0)
plt.savefig('../plots/ch2/B07887_02_14.png', format='png', dpi=300)
```
```python
#Calculate the moving averages using 'rolling' and 'mean' functions
MA2 = ibm_df['Close_Price'].rolling(window=2).mean()
TwoXMA2 = MA2.rolling(window=2).mean()
MA4 = ibm_df['Close_Price'].rolling(window=4).mean()
TwoXMA4 = MA4.rolling(window=2).mean()
MA3 = ibm_df['Close_Price'].rolling(window=3).mean()
ThreeXMA3 = MA3.rolling(window=3).mean()
```
```python
#Let's remove NaN from the above variables
MA2 = MA2.loc[~pd.isnull(MA2)]
TwoXMA2 = TwoXMA2.loc[~pd.isnull(TwoXMA2)]
MA4 = MA4.loc[~pd.isnull(MA4)]
TwoXMA4 = TwoXMA4.loc[~pd.isnull(TwoXMA4)]
MA3 = MA3.loc[~pd.isnull(MA3)]
ThreeXMA3 = ThreeXMA3.loc[~pd.isnull(ThreeXMA3)]
```
```python
f, axarr = plt.subplots(3, sharex=True)
f.set_size_inches(16, 8)
ibm_df['Close_Price'].iloc[:45].plot(color='b', linestyle = '-', ax=axarr[0])
MA2.iloc[:45].plot(color='r', linestyle = '-', ax=axarr[0])
TwoXMA2.iloc[:45].plot(color='r', linestyle = '--', ax=axarr[0])
axarr[0].set_title('2 day MA & 2X2 day MA')
ibm_df['Close_Price'].iloc[:45].plot(color='b', linestyle = '-', ax=axarr[1])
MA4.iloc[:45].plot(color='g', linestyle = '-', ax=axarr[1])
TwoXMA4.iloc[:45].plot(color='g', linestyle = '--', ax=axarr[1])
axarr[1].set_title('4 day MA & 2X4 day MA')
ibm_df['Close_Price'].iloc[:45].plot(color='b', linestyle = '-', ax=axarr[2])
MA3.iloc[:45].plot(color='k', linestyle = '-', ax=axarr[2])
ThreeXMA3.iloc[:45].plot(color='k', linestyle = '--', ax=axarr[2])
axarr[2].set_title('3 day MA & 3X3 day MA')
plt.savefig('../plots/ch2/B07887_02_15.png', format='png', dpi=300)
```
\begin{equation}
\hat{F}_t=\frac{x_{t-k} + x_{t-k+1}+\cdots+x_{t}+\cdots+x_{t+k-1} + x_{t+k}}{2k+1}
\end{equation}
\begin{equation}
\hat{F}^{(2)}_t=\frac{x_{t-1} +x_{t}}{2}
\end{equation}
\begin{equation}
2\times\hat{F}^{(2)}_t=\frac{\hat{F}^{(2)}_t + \hat{F}^{(2)}_{t+1}}{2} = \frac{1}{2}\left[\frac{x_{t-1}+x_t}{2} + \frac{x_t + x_{t+1}}{2}\right] = \frac{1}{4}x_{t-1} + \frac{1}{2}x_t + \frac{1}{4}x_{t+1}
\end{equation}
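As a quick numerical check of the weights just derived, the short sketch below (illustrative data; trailing pandas windows, exactly as in the code above) confirms that a 2x2-day moving average assigns weights 1/4, 1/2 and 1/4 to three consecutive observations:

```python
import pandas as pd

x = pd.Series([1.0, 4.0, 2.0, 8.0, 5.0])   # made-up prices
ma2 = x.rolling(2).mean()                   # (x[t-1] + x[t]) / 2, trailing window
two_x_ma2 = ma2.rolling(2).mean()           # average of two consecutive 2-day MAs

# With trailing windows the value at index t carries the symmetric weights
# 1/4, 1/2, 1/4 centred on x[t-1]:
t = 3
expected = 0.25 * x[t - 2] + 0.5 * x[t - 1] + 0.25 * x[t]
assert abs(two_x_ma2[t] - expected) < 1e-12
```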
|
{- Delay operator. -}
module TemporalOps.Delay where
open import CategoryTheory.Categories
open import CategoryTheory.Instances.Reactive
open import CategoryTheory.Functor
open import CategoryTheory.CartesianStrength
open import TemporalOps.Common
open import TemporalOps.Next
open import Data.Nat.Properties using (+-identityʳ ; +-comm ; +-assoc ; +-suc)
open import Relation.Binary.HeterogeneousEquality as ≅ using (_≅_ ; ≅-to-≡)
import Relation.Binary.PropositionalEquality as ≡
open import Data.Product
open import Data.Sum
-- General iteration
-- iter f n v = fⁿ(v)
iter : (τ -> τ) -> ℕ -> τ -> τ
iter F zero A = A
iter F (suc n) A = F (iter F n A)
-- Multi-step delay
delay_by_ : τ -> ℕ -> τ
delay A by zero = A
delay A by suc n = ▹ (delay A by n)
infix 67 delay_by_
-- || Lemmas for the delay operator
-- Extra delay is cancelled out by extra waiting.
delay-+ : ∀{A} -> (n l k : ℕ)
-> delay A by (n + l) at (n + k) ≡ delay A by l at k
delay-+ zero l k = refl
delay-+ (suc n) = delay-+ n
-- || Derived lemmas - they can all be expressed in terms of delay-+,
-- || but they are given explicitly for simplicity.
-- Delay by n is cancelled out by waiting n extra steps.
delay-+-left0 : ∀{A} -> (n k : ℕ)
-> delay A by n at (n + k) ≡ A at k
delay-+-left0 zero k = refl
delay-+-left0 (suc n) k = delay-+-left0 n k
-- delay-+-left0 can be converted to delay-+ (heterogeneously).
delay-+-left0-eq : ∀{A : τ} -> (n l : ℕ)
-> Proof-≡ (delay-+-left0 {A} n l) (delay-+ {A} n 0 l)
delay-+-left0-eq zero l v v′ pf = ≅-to-≡ pf
delay-+-left0-eq (suc n) l = delay-+-left0-eq n l
-- Extra delay by n steps is cancelled out by waiting for n steps.
delay-+-right0 : ∀{A} -> (n l : ℕ)
-> delay A by (n + l) at n ≡ delay A by l at 0
delay-+-right0 zero l = refl
delay-+-right0 (suc n) l = delay-+-right0 n l
-- Delaying by n is the same as delaying by (n + 0)
delay-+0-left : ∀{A} -> (k n : ℕ)
-> delay A by k at n ≡ delay A by (k + 0) at n
delay-+0-left {A} k n rewrite +-identityʳ k = refl
-- If the delay is greater than the wait amount, we get unit
delay-⊤ : ∀{A} -> (n k : ℕ)
-> ⊤ at n ≡ delay A by (n + suc k) at n
delay-⊤ {A} n k = sym (delay-+-right0 n (suc k))
-- Associativity of arguments in the delay lemma
delay-assoc-sym : ∀{A} (n k l j : ℕ)
-> Proof-≅ (sym (delay-+ {A} n (k + l) (k + j)))
(sym (delay-+ {A} (n + k) l j))
delay-assoc-sym zero zero l j v v′ pr = pr
delay-assoc-sym zero (suc k) l j = delay-assoc-sym zero k l j
delay-assoc-sym (suc n) k l j = delay-assoc-sym n k l j
-- Functor instance for delay
F-delay : ℕ -> Endofunctor ℝeactive
F-delay k = record
{ omap = delay_by k
; fmap = fmap-delay k
; fmap-id = λ {_ n a} -> fmap-delay-id k {_} {n} {a}
; fmap-∘ = fmap-delay-∘ k
; fmap-cong = fmap-delay-cong k
}
where
-- Lifting of delay
fmap-delay : {A B : τ} -> (k : ℕ) -> A ⇴ B -> delay A by k ⇴ delay B by k
fmap-delay zero f = f
fmap-delay (suc k) f = Functor.fmap F-▹ (fmap-delay k f)
-- Delay preserves identities
fmap-delay-id : ∀ (k : ℕ) {A : τ} {n : ℕ} {a : (delay A by k) n}
-> (fmap-delay k id at n) a ≡ a
fmap-delay-id zero = refl
fmap-delay-id (suc k) {A} {zero} = refl
fmap-delay-id (suc k) {A} {suc n} = fmap-delay-id k {A} {n}
-- Delay preserves composition
fmap-delay-∘ : ∀ (k : ℕ) {A B C : τ} {g : B ⇴ C} {f : A ⇴ B} {n : ℕ} {a : (delay A by k) n}
-> (fmap-delay k (g ∘ f) at n) a ≡ (fmap-delay k g ∘ fmap-delay k f at n) a
fmap-delay-∘ zero = refl
fmap-delay-∘ (suc k) {n = zero} = refl
fmap-delay-∘ (suc k) {n = suc n} = fmap-delay-∘ k {n = n}
-- Delay is congruent
fmap-delay-cong : ∀ (k : ℕ) {A B : τ} {f f′ : A ⇴ B}
-> ({n : ℕ} {a : A at n} -> f n a ≡ f′ n a)
-> ({n : ℕ} {a : delay A by k at n}
-> (fmap-delay k f at n) a
≡ (fmap-delay k f′ at n) a)
fmap-delay-cong zero e = e
fmap-delay-cong (suc k) e {zero} = refl
fmap-delay-cong (suc k) e {suc n} = fmap-delay-cong k e
-- || Lemmas for the interaction of fmap and delay-+
-- Lifted version of the delay-+ lemma
-- Arguments have different types, so we need heterogeneous equality
fmap-delay-+ : ∀ {A B : τ} {f : A ⇴ B} (n k l : ℕ)
-> Fun-≅ (Functor.fmap (F-delay (n + k)) f at (n + l))
(Functor.fmap (F-delay k) f at l)
fmap-delay-+ zero k l v .v ≅.refl = ≅.refl
fmap-delay-+ (suc n) k l v v′ pf = fmap-delay-+ n k l v v′ pf
-- Specialised version with v of type delay A by (n + k) at (n + l)
-- Uses explicit rewrites and homogeneous equality
fmap-delay-+-n+k : ∀ {A B : τ} {f : A ⇴ B} (n k l : ℕ)
-> (v : delay A by (n + k) at (n + l))
-> rew (delay-+ n k l) ((Functor.fmap (F-delay (n + k)) f at (n + l)) v)
≡ (Functor.fmap (F-delay k) f at l) (rew (delay-+ n k l) v)
fmap-delay-+-n+k {A} n k l v =
≅-to-rew-≡ (fmap-delay-+ n k l v v′ v≅v′) (delay-+ n k l)
where
v′ : delay A by k at l
v′ = rew (delay-+ n k l) v
v≅v′ : v ≅ v′
v≅v′ = rew-to-≅ (delay-+ n k l)
-- Lifted delay lemma with delay-+-left0
fmap-delay-+-n+0 : ∀ {A B : τ} {f : A ⇴ B} (n l : ℕ)
-> {v : delay A by n at (n + l)}
-> rew (delay-+-left0 n l) ((Functor.fmap (F-delay n) f at (n + l)) v)
≡ f l (rew (delay-+-left0 n l) v)
fmap-delay-+-n+0 {A} zero l = refl
fmap-delay-+-n+0 {A} (suc n) l = fmap-delay-+-n+0 n l
-- Specialised version with v of type delay A by k at l
-- Uses explicit rewrites and homogeneous equality
fmap-delay-+-k : ∀ {A B : τ} {f : A ⇴ B} (n k l : ℕ)
->(v : delay A by k at l)
-> Functor.fmap (F-delay (n + k)) f (n + l) (rew (sym (delay-+ n k l)) v)
≡ rew (sym (delay-+ n k l)) (Functor.fmap (F-delay k) f l v)
fmap-delay-+-k {A} {B} {f} n k l v =
sym (≅-to-rew-≡ (≅.sym (fmap-delay-+ n k l v′ v v≅v′)) (sym (delay-+ n k l)))
where
v′ : delay A by (n + k) at (n + l)
v′ = rew (sym (delay-+ n k l)) v
v≅v′ : v′ ≅ v
v≅v′ = ≅.sym (rew-to-≅ (sym (delay-+ n k l)))
-- Delay is a Cartesian functor
F-cart-delay : ∀ k -> CartesianFunctor (F-delay k) ℝeactive-cart ℝeactive-cart
F-cart-delay k = record
{ u = u-delay k
; m = m-delay k
; m-nat₁ = m-nat₁-delay k
; m-nat₂ = m-nat₂-delay k
; associative = assoc-delay k
; unital-right = unit-right-delay k
; unital-left = λ {B} {n} {a} -> unit-left-delay k {B} {n} {a}
}
where
open CartesianFunctor F-cart-▹
u-delay : ∀ k -> ⊤ ⇴ delay ⊤ by k
u-delay zero = λ n _ → top.tt
u-delay (suc k) zero top.tt = top.tt
u-delay (suc k) (suc n) top.tt = u-delay k n top.tt
m-delay : ∀ k (A B : τ) -> (delay A by k ⊗ delay B by k) ⇴ delay (A ⊗ B) by k
m-delay zero A B = λ n x → x
m-delay (suc k) A B = Functor.fmap F-▹ (m-delay k A B) ∘ m (delay A by k) (delay B by k)
m-nat₁-delay : ∀ k {A B C : τ} (f : A ⇴ B)
-> Functor.fmap (F-delay k) (f * id) ∘ m-delay k A C
≈ m-delay k B C ∘ Functor.fmap (F-delay k) f * id
m-nat₁-delay zero f = refl
m-nat₁-delay (suc k) f {zero} = refl
m-nat₁-delay (suc k) f {suc n} = m-nat₁-delay k f
m-nat₂-delay : ∀ k {A B C : τ} (f : A ⇴ B)
-> Functor.fmap (F-delay k) (id * f) ∘ m-delay k C A
≈ m-delay k C B ∘ id * Functor.fmap (F-delay k) f
m-nat₂-delay zero f = refl
m-nat₂-delay (suc k) f {zero} = refl
m-nat₂-delay (suc k) f {suc n} = m-nat₂-delay k f
assoc-delay : ∀ k {A B C : τ}
-> m-delay k A (B ⊗ C) ∘ id * m-delay k B C ∘ assoc-right
≈ Functor.fmap (F-delay k) assoc-right ∘ m-delay k (A ⊗ B) C ∘ m-delay k A B * id
assoc-delay zero = refl
assoc-delay (suc k) {A} {B} {C} {zero} = refl
assoc-delay (suc k) {A} {B} {C} {suc n} = assoc-delay k
unit-right-delay : ∀ k {A : τ} ->
Functor.fmap (F-delay k) unit-right ∘ m-delay k A ⊤ ∘ (id * u-delay k) ≈ unit-right
unit-right-delay zero {A} {n} = refl
unit-right-delay (suc k) {A} {zero} = refl
unit-right-delay (suc k) {A} {suc n} = unit-right-delay k
unit-left-delay : ∀ k {B : τ} ->
Functor.fmap (F-delay k) unit-left ∘ m-delay k ⊤ B ∘ (u-delay k * id) ≈ unit-left
unit-left-delay zero = refl
unit-left-delay (suc k) {B} {zero} = refl
unit-left-delay (suc k) {B} {suc n} = unit-left-delay k
m-delay-+-n+0 : ∀ {A B} k l {a b}
-> (rew (delay-+-left0 k l)
(CartesianFunctor.m (F-cart-delay k) A B (k + l) (a , b)))
≡ (rew (delay-+-left0 k l) a , rew (delay-+-left0 k l) b)
m-delay-+-n+0 zero l = refl
m-delay-+-n+0 (suc k) l = m-delay-+-n+0 k l
m-delay-+-sym : ∀ {A B} k l m{a b}
-> rew (sym (delay-+ k m l))
(CartesianFunctor.m (F-cart-delay m) A B l (a , b))
≡ CartesianFunctor.m (F-cart-delay (k + m)) A B (k + l)
((rew (sym (delay-+ k m l)) a) , (rew (sym (delay-+ k m l)) b))
m-delay-+-sym zero l m = refl
m-delay-+-sym (suc k) l m = m-delay-+-sym k l m
|
#include <uat/permit.hpp>
#include <boost/functional/hash.hpp>
namespace uat
{
region::region(const region& other) : interface_(other.interface_->clone()) {}
auto region::operator=(const region& other) -> region&
{
interface_ = other.interface_->clone();
return *this;
}
auto region::adjacent_regions() const -> std::vector<region> { return interface_->adjacent_regions(); }
auto region::hash() const -> std::size_t { return interface_->hash(); }
auto region::operator==(const region& other) const -> bool { return interface_->equals(*other.interface_); }
auto region::operator!=(const region& other) const -> bool { return !interface_->equals(*other.interface_); }
auto region::distance(const region& other) const -> uint_t { return interface_->distance(*other.interface_); }
auto region::heuristic_distance(const region& other) const -> value_t
{
return interface_->heuristic_distance(*other.interface_);
}
auto region::shortest_path(const region& other, int seed) const -> std::vector<region>
{
return interface_->shortest_path(*other.interface_, seed);
}
auto region::print_to(std::function<void(std::string_view, fmt::format_args)> f) const -> void
{
interface_->print_to(std::move(f));
}
auto region::turn(const region& before, const region& to) const -> bool
{
return interface_->turn(*before.interface_, *to.interface_);
}
auto region::climb(const region& to) const -> bool { return interface_->climb(*to.interface_); }
permit::permit(region s, uint_t time) noexcept : region_{std::move(s)}, time_{time} {}
auto permit::time() const noexcept -> uint_t { return time_; }
auto permit::location() const noexcept -> const region& { return region_; }
auto permit::location() noexcept -> region& { return region_; }
auto permit::operator==(const permit& other) const -> bool { return location() == other.location() && time() == other.time(); }
auto permit::operator!=(const permit& other) const -> bool { return !(*this == other); }
} // namespace uat
namespace std
{
auto hash<uat::region>::operator()(const uat::region& s) const noexcept -> size_t { return s.hash(); }
auto hash<uat::permit>::operator()(const uat::permit& p) const noexcept -> size_t
{
size_t seed = 0;
boost::hash_combine(seed, p.location().hash());
boost::hash_combine(seed, std::hash<std::size_t>{}(p.time()));
return seed;
}
} // namespace std
|
c
c
c ###################################################
c ## COPYRIGHT (C) 2000 by Jay William Ponder ##
c ## All Rights Reserved ##
c ###################################################
c
c ################################################################
c ## ##
c ## subroutine replica -- periodicity via cell replication ##
c ## ##
c ################################################################
c
c
c "replica" decides between images and replicates for generation
c of periodic boundary conditions, and sets the cell replicate
c list if the replicates method is to be used
c
c
subroutine replica (cutoff)
implicit none
include 'sizes.i'
include 'bound.i'
include 'boxes.i'
include 'cell.i'
include 'inform.i'
include 'iounit.i'
integer i,j,k
integer nx,ny,nz
real*8 cutoff,maximage
real*8 xlimit,ylimit,zlimit
c
c
c only necessary if periodic boundaries are in use
c
if (.not. use_bounds) return
c
c find the maximum sphere radius inscribed in periodic box
c
if (orthogonal) then
xlimit = xbox2
ylimit = ybox2
zlimit = zbox2
else if (monoclinic) then
xlimit = xbox2 * beta_sin
ylimit = ybox2
zlimit = zbox2 * beta_sin
else if (triclinic) then
xlimit = xbox2 * beta_sin * gamma_sin
ylimit = ybox2 * gamma_sin
zlimit = zbox2 * beta_sin
else if (octahedron) then
xlimit = (sqrt(3.0d0)/4.0d0) * xbox
ylimit = xlimit
zlimit = xlimit
end if
maximage = min(xlimit,ylimit,zlimit)
c
c use replicate method to handle cutoffs too large for images
c
if (cutoff .le. maximage) then
use_replica = .false.
else
use_replica = .true.
end if
c
c truncated octahedron cannot use the replicates method
c
if (octahedron .and. use_replica) then
write (iout,10)
10 format (/,' REPLICA -- Truncated Octahedron',
& ' cannot be Replicated')
call fatal
end if
c
c find the number of replicates needed based on cutoff
c
nx = int(cutoff/xlimit)
ny = int(cutoff/ylimit)
nz = int(cutoff/zlimit)
if (cutoff .gt. dble(nx)*xlimit) nx = nx + 1
if (cutoff .gt. dble(ny)*ylimit) ny = ny + 1
if (cutoff .gt. dble(nz)*zlimit) nz = nz + 1
if (nx .lt. 1) nx = 1
if (ny .lt. 1) ny = 1
if (nz .lt. 1) nz = 1
c
c set the replicated cell length and the half width
c
xcell = dble(nx) * xbox
ycell = dble(ny) * ybox
zcell = dble(nz) * zbox
xcell2 = 0.5d0 * xcell
ycell2 = 0.5d0 * ycell
zcell2 = 0.5d0 * zcell
c
c check the total number of replicated unit cells
c
ncell = nx*ny*nz - 1
if (ncell .gt. maxcell) then
write (iout,20)
20 format (/,' REPLICA -- Increase MAXCELL or Decrease',
& ' the Interaction Cutoffs')
call fatal
end if
c
c assign indices to the required cell replicates
c
ncell = 0
do k = 0, nz-1
do j = 0, ny-1
do i = 0, nx-1
if (k.ne.0 .or. j.ne.0 .or. i.ne.0) then
ncell = ncell + 1
icell(1,ncell) = i
icell(2,ncell) = j
icell(3,ncell) = k
end if
end do
end do
end do
c
c print a message indicating the number of replicates used
c
if (debug .and. ncell.ne.0) then
write (iout,30) nx,ny,nz
30 format (/,' REPLICA -- Periodic Boundary via',i3,' x',
& i3,' x',i3,' Set of Cell Replicates')
end if
return
end
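The replicate-count logic above amounts to taking the ceiling of the cutoff divided by each half-box limit, with a floor of one copy per axis. The following Python sketch restates that arithmetic for illustration only; the function name and the example values are not part of TINKER.

```python
import math

def replicate_counts(cutoff, xlimit, ylimit, zlimit):
    """Number of cell copies needed along each axis so that the interaction
    cutoff fits inside the replicated cell (minimum of one copy per axis)."""
    return tuple(max(1, math.ceil(cutoff / limit)) for limit in (xlimit, ylimit, zlimit))

# A 12 Angstrom cutoff in a box with 9 Angstrom half-widths needs a 2 x 2 x 2 set,
# i.e. ncell = 2*2*2 - 1 = 7 replicates in addition to the original cell.
print(replicate_counts(12.0, 9.0, 9.0, 9.0))  # (2, 2, 2)
```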
|
function [t,s]=zerocros(y,m,x)
%ZEROCROS finds the zero crossings in a signal [T,S]=ZEROCROS(Y,M,X)
% Inputs: y = input waveform
% m = mode string containing:
% 'p' - positive crossings only
% 'n' - negative crossings only
% 'b' - both (default)
% 'r' - round to sample values
% x = x-axis values corresponding to y [default 1:length(y)]
%
% Outputs: t = x-axis positions of zero crossings
% s = estimated slope of y at the zero crossing
%
% This routine uses linear interpolation to estimate the position of a zero crossing
% A zero crossing occurs between y(n) and y(n+1) iff (y(n)>=0) ~= (y(n+1)>=0)
% Example: y=sin(2*pi*(0:1000)/200); y(1:100:1001)=0; zerocros(y);
% Note that we get a zero crossing at the end but not at the start.
% Copyright (C) Mike Brookes 2003-2015
% Version: $Id: zerocros.m 6563 2015-08-16 16:56:24Z dmb $
%
% VOICEBOX is a MATLAB toolbox for speech processing.
% Home page: http://www.ee.ic.ac.uk/hp/staff/dmb/voicebox/voicebox.html
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% This program is free software; you can redistribute it and/or modify
% it under the terms of the GNU General Public License as published by
% the Free Software Foundation; either version 2 of the License, or
% (at your option) any later version.
%
% This program is distributed in the hope that it will be useful,
% but WITHOUT ANY WARRANTY; without even the implied warranty of
% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
% GNU General Public License for more details.
%
% You can obtain a copy of the GNU General Public License from
% http://www.gnu.org/copyleft/gpl.html or by writing to
% Free Software Foundation, Inc.,675 Mass Ave, Cambridge, MA 02139, USA.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
if nargin<2 || ~numel(m)
m='b';
end
s=y>=0;
k=s(2:end)-s(1:end-1);
if any(m=='p')
f=find(k>0);
elseif any(m=='n')
f=find(k<0);
else
f=find(k~=0);
end
s=y(f+1)-y(f);
t=f-y(f)./s;
if any(m=='r')
t=round(t);
end
if nargin>2
tf=t-f; % fractional sample
t=x(f).*(1-tf)+x(f+1).*tf;
s=s./(x(f+1)-x(f));
end
if ~nargout
n=length(y);
plot(1:n,y,'-',t,zeros(length(t),1),'o');
end
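The interpolation performed by zerocros can be restated compactly. Below is a minimal NumPy sketch of the same idea, using 0-based indices rather than MATLAB's 1-based indexing; the names are illustrative.

```python
import numpy as np

def zero_crossings(y):
    """Find zero crossings by linear interpolation: a crossing lies between
    samples n and n+1 whenever the sign of (y >= 0) changes there."""
    y = np.asarray(y, dtype=float)
    s = y >= 0
    f = np.flatnonzero(s[1:] != s[:-1])   # last index before each crossing
    slope = y[f + 1] - y[f]               # local slope per sample
    t = f - y[f] / slope                  # interpolated crossing positions
    return t, slope

y = np.sin(2 * np.pi * np.arange(1001) / 200.0)
t, s = zero_crossings(y)                  # crossings roughly every 100 samples
```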
|
/* permutation/gsl_permutation.h
*
* Copyright (C) 1996, 1997, 1998, 1999, 2000 Brian Gough
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or (at
* your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#ifndef __GSL_PERMUTATION_H__
#define __GSL_PERMUTATION_H__
#include <stdlib.h>
#include <gsl/gsl_errno.h>
#undef __BEGIN_DECLS
#undef __END_DECLS
#ifdef __cplusplus
# define __BEGIN_DECLS extern "C" {
# define __END_DECLS }
#else
# define __BEGIN_DECLS /* empty */
# define __END_DECLS /* empty */
#endif
__BEGIN_DECLS
struct gsl_permutation_struct
{
size_t size;
size_t *data;
};
typedef struct gsl_permutation_struct gsl_permutation;
gsl_permutation *gsl_permutation_alloc (const size_t n);
gsl_permutation *gsl_permutation_calloc (const size_t n);
void gsl_permutation_init (gsl_permutation * p);
void gsl_permutation_free (gsl_permutation * p);
int gsl_permutation_fread (FILE * stream, gsl_permutation * p);
int gsl_permutation_fwrite (FILE * stream, const gsl_permutation * p);
int gsl_permutation_fscanf (FILE * stream, gsl_permutation * p);
int gsl_permutation_fprintf (FILE * stream, const gsl_permutation * p, const char *format);
size_t gsl_permutation_size (const gsl_permutation * p);
size_t * gsl_permutation_data (const gsl_permutation * p);
size_t gsl_permutation_get (const gsl_permutation * p, const size_t i);
int gsl_permutation_swap (gsl_permutation * p, const size_t i, const size_t j);
int gsl_permutation_valid (gsl_permutation * p);
void gsl_permutation_reverse (gsl_permutation * p);
int gsl_permutation_inverse (gsl_permutation * inv, const gsl_permutation * p);
int gsl_permutation_next (gsl_permutation * p);
int gsl_permutation_prev (gsl_permutation * p);
extern int gsl_check_range;
#ifdef HAVE_INLINE
extern inline
size_t
gsl_permutation_get (const gsl_permutation * p, const size_t i)
{
#ifndef GSL_RANGE_CHECK_OFF
if (i >= p->size)
{
GSL_ERROR_VAL ("index out of range", GSL_EINVAL, 0);
}
#endif
return p->data[i];
}
#endif /* HAVE_INLINE */
__END_DECLS
#endif /* __GSL_PERMUTATION_H__ */
|
[STATEMENT]
lemma continuous_on_Shleg: "continuous_on A Shleg"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. continuous_on A local.Shleg
[PROOF STEP]
by (auto simp: Shleg_def intro!: continuous_intros)
|
After finishing the album, Townsend stated the project was "punishing" and an "absolute nightmare to complete" due to the amount of material against tight schedules. He also described the hardship of the project by saying that "if he was ever going to start drinking [again], the last months would have been it", but now "he's starting to get excited again". Later, "after the chaos of finishing it had subsided", Townsend stated he was really satisfied with the result.
|
% Determine the function's domain of definition
t=-10:10;
% Call the MATLAB function def_funct
f=def_funct(t,0,3,5,5,1)
|
State Before: α : Type u_1
s : ℕ
l : Ordnode α
x : α
r : Ordnode α
⊢ dual (dual (node s l x r)) = node s l x r
State After: no goals
Tactic: rw [dual, dual, dual_dual l, dual_dual r]
|
[STATEMENT]
lemma fixp_above_unfold:
assumes a: "a \<in> Field leq"
shows "fixp_above a = f (fixp_above a)" (is "?a = f ?a")
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. fixp_above a = f (fixp_above a)
[PROOF STEP]
proof(rule leq_antisym)
[PROOF STATE]
proof (state)
goal (2 subgoals):
1. (fixp_above a, f (fixp_above a)) \<in> leq
2. (f (fixp_above a), fixp_above a) \<in> leq
[PROOF STEP]
show "(?a, f ?a) \<in> leq"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (fixp_above a, f (fixp_above a)) \<in> leq
[PROOF STEP]
using fixp_above_Field[OF a]
[PROOF STATE]
proof (prove)
using this:
fixp_above a \<in> Field leq
goal (1 subgoal):
1. (fixp_above a, f (fixp_above a)) \<in> leq
[PROOF STEP]
by(rule increasing)
[PROOF STATE]
proof (state)
this:
(fixp_above a, f (fixp_above a)) \<in> leq
goal (1 subgoal):
1. (f (fixp_above a), fixp_above a) \<in> leq
[PROOF STEP]
have "f ?a \<in> iterates_above a"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. f (fixp_above a) \<in> iterates_above a
[PROOF STEP]
using fixp_iterates_above
[PROOF STATE]
proof (prove)
using this:
fixp_above ?a \<in> iterates_above ?a
goal (1 subgoal):
1. f (fixp_above a) \<in> iterates_above a
[PROOF STEP]
by(rule iterates_above.step)
[PROOF STATE]
proof (state)
this:
f (fixp_above a) \<in> iterates_above a
goal (1 subgoal):
1. (f (fixp_above a), fixp_above a) \<in> leq
[PROOF STEP]
with chain_iterates_above[OF a]
[PROOF STATE]
proof (chain)
picking this:
iterates_above a \<in> Chains leq
f (fixp_above a) \<in> iterates_above a
[PROOF STEP]
show "(f ?a, ?a) \<in> leq"
[PROOF STATE]
proof (prove)
using this:
iterates_above a \<in> Chains leq
f (fixp_above a) \<in> iterates_above a
goal (1 subgoal):
1. (f (fixp_above a), fixp_above a) \<in> leq
[PROOF STEP]
by(simp add: fixp_above_inside assms lub_upper)
[PROOF STATE]
proof (state)
this:
(f (fixp_above a), fixp_above a) \<in> leq
goal:
No subgoals!
[PROOF STEP]
qed
|
For any polynomials $x$ and $y$, $-x \div y = -(x \div y)$.
|
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝¹ : HeytingAlgebra α
x✝ : HeytingAlgebra β
inst✝ : HeytingHomClass F α β
src✝ : HeytingHomClass F α β := inst✝
f : F
⊢ ↑f ⊤ = ⊤
[PROOFSTEP]
rw [← @himp_self α _ ⊥, ← himp_self, map_himp]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝¹ : CoheytingAlgebra α
x✝ : CoheytingAlgebra β
inst✝ : CoheytingHomClass F α β
src✝ : CoheytingHomClass F α β := inst✝
f : F
⊢ ↑f ⊥ = ⊥
[PROOFSTEP]
rw [← @sdiff_self α _ ⊤, ← sdiff_self, map_sdiff]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝¹ : BiheytingAlgebra α
x✝ : BiheytingAlgebra β
inst✝ : BiheytingHomClass F α β
src✝ : BiheytingHomClass F α β := inst✝
f : F
⊢ ↑f ⊥ = ⊥
[PROOFSTEP]
rw [← @sdiff_self α _ ⊤, ← sdiff_self, BiheytingHomClass.map_sdiff]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝¹ : BiheytingAlgebra α
x✝ : BiheytingAlgebra β
inst✝ : BiheytingHomClass F α β
src✝ : BiheytingHomClass F α β := inst✝
f : F
⊢ ↑f ⊤ = ⊤
[PROOFSTEP]
rw [← @himp_self α _ ⊥, ← himp_self, map_himp]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝¹ : HeytingAlgebra α
x✝ : HeytingAlgebra β
inst✝ : OrderIsoClass F α β
src✝ : BoundedLatticeHomClass F α β := toBoundedLatticeHomClass
f : F
a b : α
c : (fun x => β) (a ⇨ b)
⊢ c ≤ ↑f (a ⇨ b) ↔ c ≤ ↑f a ⇨ ↑f b
[PROOFSTEP]
simp only [← map_inv_le_iff, le_himp_iff]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝¹ : HeytingAlgebra α
x✝ : HeytingAlgebra β
inst✝ : OrderIsoClass F α β
src✝ : BoundedLatticeHomClass F α β := toBoundedLatticeHomClass
f : F
a b : α
c : (fun x => β) (a ⇨ b)
⊢ EquivLike.inv f c ⊓ a ≤ b ↔ EquivLike.inv f (c ⊓ ↑f a) ≤ b
[PROOFSTEP]
rw [← OrderIsoClass.map_le_map_iff f]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝¹ : HeytingAlgebra α
x✝ : HeytingAlgebra β
inst✝ : OrderIsoClass F α β
src✝ : BoundedLatticeHomClass F α β := toBoundedLatticeHomClass
f : F
a b : α
c : (fun x => β) (a ⇨ b)
⊢ ↑f (EquivLike.inv f c ⊓ a) ≤ ↑f b ↔ EquivLike.inv f (c ⊓ ↑f a) ≤ b
[PROOFSTEP]
simp
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝¹ : CoheytingAlgebra α
x✝ : CoheytingAlgebra β
inst✝ : OrderIsoClass F α β
src✝ : BoundedLatticeHomClass F α β := toBoundedLatticeHomClass
f : F
a b : α
c : (fun x => β) (a \ b)
⊢ ↑f (a \ b) ≤ c ↔ ↑f a \ ↑f b ≤ c
[PROOFSTEP]
simp only [← le_map_inv_iff, sdiff_le_iff]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝¹ : CoheytingAlgebra α
x✝ : CoheytingAlgebra β
inst✝ : OrderIsoClass F α β
src✝ : BoundedLatticeHomClass F α β := toBoundedLatticeHomClass
f : F
a b : α
c : (fun x => β) (a \ b)
⊢ a ≤ b ⊔ EquivLike.inv f c ↔ a ≤ EquivLike.inv f (↑f b ⊔ c)
[PROOFSTEP]
rw [← OrderIsoClass.map_le_map_iff f]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝¹ : CoheytingAlgebra α
x✝ : CoheytingAlgebra β
inst✝ : OrderIsoClass F α β
src✝ : BoundedLatticeHomClass F α β := toBoundedLatticeHomClass
f : F
a b : α
c : (fun x => β) (a \ b)
⊢ ↑f a ≤ ↑f (b ⊔ EquivLike.inv f c) ↔ a ≤ EquivLike.inv f (↑f b ⊔ c)
[PROOFSTEP]
simp
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝¹ : BiheytingAlgebra α
x✝ : BiheytingAlgebra β
inst✝ : OrderIsoClass F α β
src✝ : LatticeHomClass F α β := toLatticeHomClass
f : F
a b : α
c : (fun x => β) (a ⇨ b)
⊢ c ≤ ↑f (a ⇨ b) ↔ c ≤ ↑f a ⇨ ↑f b
[PROOFSTEP]
simp only [← map_inv_le_iff, le_himp_iff]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝¹ : BiheytingAlgebra α
x✝ : BiheytingAlgebra β
inst✝ : OrderIsoClass F α β
src✝ : LatticeHomClass F α β := toLatticeHomClass
f : F
a b : α
c : (fun x => β) (a ⇨ b)
⊢ EquivLike.inv f c ⊓ a ≤ b ↔ EquivLike.inv f (c ⊓ ↑f a) ≤ b
[PROOFSTEP]
rw [← OrderIsoClass.map_le_map_iff f]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝¹ : BiheytingAlgebra α
x✝ : BiheytingAlgebra β
inst✝ : OrderIsoClass F α β
src✝ : LatticeHomClass F α β := toLatticeHomClass
f : F
a b : α
c : (fun x => β) (a ⇨ b)
⊢ ↑f (EquivLike.inv f c ⊓ a) ≤ ↑f b ↔ EquivLike.inv f (c ⊓ ↑f a) ≤ b
[PROOFSTEP]
simp
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝¹ : BiheytingAlgebra α
x✝ : BiheytingAlgebra β
inst✝ : OrderIsoClass F α β
src✝ : LatticeHomClass F α β := toLatticeHomClass
f : F
a b : α
c : (fun x => β) (a \ b)
⊢ ↑f (a \ b) ≤ c ↔ ↑f a \ ↑f b ≤ c
[PROOFSTEP]
simp only [← le_map_inv_iff, sdiff_le_iff]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝¹ : BiheytingAlgebra α
x✝ : BiheytingAlgebra β
inst✝ : OrderIsoClass F α β
src✝ : LatticeHomClass F α β := toLatticeHomClass
f : F
a b : α
c : (fun x => β) (a \ b)
⊢ a ≤ b ⊔ EquivLike.inv f c ↔ a ≤ EquivLike.inv f (↑f b ⊔ c)
[PROOFSTEP]
rw [← OrderIsoClass.map_le_map_iff f]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝¹ : BiheytingAlgebra α
x✝ : BiheytingAlgebra β
inst✝ : OrderIsoClass F α β
src✝ : LatticeHomClass F α β := toLatticeHomClass
f : F
a b : α
c : (fun x => β) (a \ b)
⊢ ↑f a ≤ ↑f (b ⊔ EquivLike.inv f c) ↔ a ≤ EquivLike.inv f (↑f b ⊔ c)
[PROOFSTEP]
simp
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝² : BooleanAlgebra α
inst✝¹ : BooleanAlgebra β
inst✝ : BoundedLatticeHomClass F α β
src✝ : BoundedLatticeHomClass F α β := inst✝
f : F
a b : α
⊢ ↑f (a ⇨ b) = ↑f a ⇨ ↑f b
[PROOFSTEP]
rw [himp_eq, himp_eq, map_sup, (isCompl_compl.map _).compl_eq]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝² : BooleanAlgebra α
inst✝¹ : BooleanAlgebra β
inst✝ : BoundedLatticeHomClass F α β
src✝ : BoundedLatticeHomClass F α β := inst✝
f : F
a b : α
⊢ ↑f (a \ b) = ↑f a \ ↑f b
[PROOFSTEP]
rw [sdiff_eq, sdiff_eq, map_inf, (isCompl_compl.map _).compl_eq]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝² : HeytingAlgebra α
inst✝¹ : HeytingAlgebra β
inst✝ : HeytingHomClass F α β
f : F
a : α
⊢ ↑f aᶜ = (↑f a)ᶜ
[PROOFSTEP]
rw [← himp_bot, ← himp_bot, map_himp, map_bot]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝² : HeytingAlgebra α
inst✝¹ : HeytingAlgebra β
inst✝ : HeytingHomClass F α β
f : F
a b : α
⊢ ↑f (a ⇔ b) = ↑f a ⇔ ↑f b
[PROOFSTEP]
simp_rw [bihimp, map_inf, map_himp]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝² : CoheytingAlgebra α
inst✝¹ : CoheytingAlgebra β
inst✝ : CoheytingHomClass F α β
f : F
a : α
⊢ ↑f (¬a) = ¬↑f a
[PROOFSTEP]
rw [← top_sdiff', ← top_sdiff', map_sdiff, map_top]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝² : CoheytingAlgebra α
inst✝¹ : CoheytingAlgebra β
inst✝ : CoheytingHomClass F α β
f : F
a b : α
⊢ ↑f (a ∆ b) = ↑f a ∆ ↑f b
[PROOFSTEP]
simp_rw [symmDiff, map_sup, map_sdiff]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝³ : HeytingAlgebra α
inst✝² : HeytingAlgebra β
inst✝¹ : HeytingAlgebra γ
inst✝ : HeytingAlgebra δ
f g : HeytingHom α β
h : (fun f => f.toFun) f = (fun f => f.toFun) g
⊢ f = g
[PROOFSTEP]
obtain ⟨⟨⟨_, _⟩, _⟩, _⟩ := f
[GOAL]
case mk.mk.mk
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝³ : HeytingAlgebra α
inst✝² : HeytingAlgebra β
inst✝¹ : HeytingAlgebra γ
inst✝ : HeytingAlgebra δ
g : HeytingHom α β
toFun✝ : α → β
map_sup'✝ : ∀ (a b : α), toFun✝ (a ⊔ b) = toFun✝ a ⊔ toFun✝ b
map_inf'✝ :
∀ (a b : α),
SupHom.toFun { toFun := toFun✝, map_sup' := map_sup'✝ } (a ⊓ b) =
SupHom.toFun { toFun := toFun✝, map_sup' := map_sup'✝ } a ⊓
SupHom.toFun { toFun := toFun✝, map_sup' := map_sup'✝ } b
map_bot'✝ :
SupHom.toFun { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ }.toSupHom ⊥ = ⊥
map_himp'✝ :
∀ (a b : α),
SupHom.toFun { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ }.toSupHom (a ⇨ b) =
SupHom.toFun { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ }.toSupHom a ⇨
SupHom.toFun { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ }.toSupHom b
h :
(fun f => f.toFun)
{ toLatticeHom := { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ },
map_bot' := map_bot'✝, map_himp' := map_himp'✝ } =
(fun f => f.toFun) g
⊢ { toLatticeHom := { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ },
map_bot' := map_bot'✝, map_himp' := map_himp'✝ } =
g
[PROOFSTEP]
obtain ⟨⟨⟨_, _⟩, _⟩, _⟩ := g
[GOAL]
case mk.mk.mk.mk.mk.mk
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝³ : HeytingAlgebra α
inst✝² : HeytingAlgebra β
inst✝¹ : HeytingAlgebra γ
inst✝ : HeytingAlgebra δ
toFun✝¹ : α → β
map_sup'✝¹ : ∀ (a b : α), toFun✝¹ (a ⊔ b) = toFun✝¹ a ⊔ toFun✝¹ b
map_inf'✝¹ :
∀ (a b : α),
SupHom.toFun { toFun := toFun✝¹, map_sup' := map_sup'✝¹ } (a ⊓ b) =
SupHom.toFun { toFun := toFun✝¹, map_sup' := map_sup'✝¹ } a ⊓
SupHom.toFun { toFun := toFun✝¹, map_sup' := map_sup'✝¹ } b
map_bot'✝¹ :
SupHom.toFun { toSupHom := { toFun := toFun✝¹, map_sup' := map_sup'✝¹ }, map_inf' := map_inf'✝¹ }.toSupHom ⊥ = ⊥
map_himp'✝¹ :
∀ (a b : α),
SupHom.toFun { toSupHom := { toFun := toFun✝¹, map_sup' := map_sup'✝¹ }, map_inf' := map_inf'✝¹ }.toSupHom (a ⇨ b) =
SupHom.toFun { toSupHom := { toFun := toFun✝¹, map_sup' := map_sup'✝¹ }, map_inf' := map_inf'✝¹ }.toSupHom a ⇨
SupHom.toFun { toSupHom := { toFun := toFun✝¹, map_sup' := map_sup'✝¹ }, map_inf' := map_inf'✝¹ }.toSupHom b
toFun✝ : α → β
map_sup'✝ : ∀ (a b : α), toFun✝ (a ⊔ b) = toFun✝ a ⊔ toFun✝ b
map_inf'✝ :
∀ (a b : α),
SupHom.toFun { toFun := toFun✝, map_sup' := map_sup'✝ } (a ⊓ b) =
SupHom.toFun { toFun := toFun✝, map_sup' := map_sup'✝ } a ⊓
SupHom.toFun { toFun := toFun✝, map_sup' := map_sup'✝ } b
map_bot'✝ :
SupHom.toFun { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ }.toSupHom ⊥ = ⊥
map_himp'✝ :
∀ (a b : α),
SupHom.toFun { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ }.toSupHom (a ⇨ b) =
SupHom.toFun { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ }.toSupHom a ⇨
SupHom.toFun { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ }.toSupHom b
h :
(fun f => f.toFun)
{ toLatticeHom := { toSupHom := { toFun := toFun✝¹, map_sup' := map_sup'✝¹ }, map_inf' := map_inf'✝¹ },
map_bot' := map_bot'✝¹, map_himp' := map_himp'✝¹ } =
(fun f => f.toFun)
{ toLatticeHom := { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ },
map_bot' := map_bot'✝, map_himp' := map_himp'✝ }
⊢ { toLatticeHom := { toSupHom := { toFun := toFun✝¹, map_sup' := map_sup'✝¹ }, map_inf' := map_inf'✝¹ },
map_bot' := map_bot'✝¹, map_himp' := map_himp'✝¹ } =
{ toLatticeHom := { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ },
map_bot' := map_bot'✝, map_himp' := map_himp'✝ }
[PROOFSTEP]
congr
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝³ : HeytingAlgebra α
inst✝² : HeytingAlgebra β
inst✝¹ : HeytingAlgebra γ
inst✝ : HeytingAlgebra δ
f : HeytingHom α β
f' : α → β
h : f' = ↑f
⊢ ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b
[PROOFSTEP]
simpa only [h] using map_sup f
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝³ : HeytingAlgebra α
inst✝² : HeytingAlgebra β
inst✝¹ : HeytingAlgebra γ
inst✝ : HeytingAlgebra δ
f : HeytingHom α β
f' : α → β
h : f' = ↑f
⊢ ∀ (a b : α),
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } (a ⊓ b) =
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } a ⊓
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } b
[PROOFSTEP]
simpa only [h] using map_inf f
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝³ : HeytingAlgebra α
inst✝² : HeytingAlgebra β
inst✝¹ : HeytingAlgebra γ
inst✝ : HeytingAlgebra δ
f : HeytingHom α β
f' : α → β
h : f' = ↑f
⊢ SupHom.toFun
{ toSupHom := { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) },
map_inf' :=
(_ :
∀ (a b : α),
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } (a ⊓ b) =
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } a ⊓
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } b) }.toSupHom
⊥ =
⊥
[PROOFSTEP]
simpa only [h] using map_bot f
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝³ : HeytingAlgebra α
inst✝² : HeytingAlgebra β
inst✝¹ : HeytingAlgebra γ
inst✝ : HeytingAlgebra δ
f : HeytingHom α β
f' : α → β
h : f' = ↑f
⊢ ∀ (a b : α),
SupHom.toFun
{ toSupHom := { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) },
map_inf' :=
(_ :
∀ (a b : α),
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } (a ⊓ b) =
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } a ⊓
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) }
b) }.toSupHom
(a ⇨ b) =
SupHom.toFun
{ toSupHom := { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) },
map_inf' :=
(_ :
∀ (a b : α),
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } (a ⊓ b) =
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } a ⊓
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) }
b) }.toSupHom
a ⇨
SupHom.toFun
{ toSupHom := { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) },
map_inf' :=
(_ :
∀ (a b : α),
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } (a ⊓ b) =
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } a ⊓
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) }
b) }.toSupHom
b
[PROOFSTEP]
simpa only [h] using map_himp f
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝³ : HeytingAlgebra α
inst✝² : HeytingAlgebra β
inst✝¹ : HeytingAlgebra γ
inst✝ : HeytingAlgebra δ
f : HeytingHom β γ
g : HeytingHom α β
src✝ : LatticeHom α γ := LatticeHom.comp f.toLatticeHom g.toLatticeHom
⊢ SupHom.toFun
{
toSupHom :=
{ toFun := ↑f ∘ ↑g,
map_sup' :=
(_ :
∀ (a b : α),
SupHom.toFun src✝.toSupHom (a ⊔ b) = SupHom.toFun src✝.toSupHom a ⊔ SupHom.toFun src✝.toSupHom b) },
map_inf' :=
(_ :
∀ (a b : α),
SupHom.toFun src✝.toSupHom (a ⊓ b) =
SupHom.toFun src✝.toSupHom a ⊓ SupHom.toFun src✝.toSupHom b) }.toSupHom
⊥ =
⊥
[PROOFSTEP]
simp
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝³ : HeytingAlgebra α
inst✝² : HeytingAlgebra β
inst✝¹ : HeytingAlgebra γ
inst✝ : HeytingAlgebra δ
f : HeytingHom β γ
g : HeytingHom α β
src✝ : LatticeHom α γ := LatticeHom.comp f.toLatticeHom g.toLatticeHom
a b : α
⊢ SupHom.toFun
{
toSupHom :=
{ toFun := ↑f ∘ ↑g,
map_sup' :=
(_ :
∀ (a b : α),
SupHom.toFun src✝.toSupHom (a ⊔ b) = SupHom.toFun src✝.toSupHom a ⊔ SupHom.toFun src✝.toSupHom b) },
map_inf' :=
(_ :
∀ (a b : α),
SupHom.toFun src✝.toSupHom (a ⊓ b) =
SupHom.toFun src✝.toSupHom a ⊓ SupHom.toFun src✝.toSupHom b) }.toSupHom
(a ⇨ b) =
SupHom.toFun
{
toSupHom :=
{ toFun := ↑f ∘ ↑g,
map_sup' :=
(_ :
∀ (a b : α),
SupHom.toFun src✝.toSupHom (a ⊔ b) =
SupHom.toFun src✝.toSupHom a ⊔ SupHom.toFun src✝.toSupHom b) },
map_inf' :=
(_ :
∀ (a b : α),
SupHom.toFun src✝.toSupHom (a ⊓ b) =
SupHom.toFun src✝.toSupHom a ⊓ SupHom.toFun src✝.toSupHom b) }.toSupHom
a ⇨
SupHom.toFun
{
toSupHom :=
{ toFun := ↑f ∘ ↑g,
map_sup' :=
(_ :
∀ (a b : α),
SupHom.toFun src✝.toSupHom (a ⊔ b) =
SupHom.toFun src✝.toSupHom a ⊔ SupHom.toFun src✝.toSupHom b) },
map_inf' :=
(_ :
∀ (a b : α),
SupHom.toFun src✝.toSupHom (a ⊓ b) =
SupHom.toFun src✝.toSupHom a ⊓ SupHom.toFun src✝.toSupHom b) }.toSupHom
b
[PROOFSTEP]
simp
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝³ : HeytingAlgebra α
inst✝² : HeytingAlgebra β
inst✝¹ : HeytingAlgebra γ
inst✝ : HeytingAlgebra δ
f f₁ f₂ : HeytingHom α β
g g₁ g₂ : HeytingHom β γ
hg : Injective ↑g
h : comp g f₁ = comp g f₂
a : α
⊢ ↑g (↑f₁ a) = ↑g (↑f₂ a)
[PROOFSTEP]
rw [← comp_apply, h, comp_apply]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝³ : CoheytingAlgebra α
inst✝² : CoheytingAlgebra β
inst✝¹ : CoheytingAlgebra γ
inst✝ : CoheytingAlgebra δ
f g : CoheytingHom α β
h : (fun f => f.toFun) f = (fun f => f.toFun) g
⊢ f = g
[PROOFSTEP]
obtain ⟨⟨⟨_, _⟩, _⟩, _⟩ := f
[GOAL]
case mk.mk.mk
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝³ : CoheytingAlgebra α
inst✝² : CoheytingAlgebra β
inst✝¹ : CoheytingAlgebra γ
inst✝ : CoheytingAlgebra δ
g : CoheytingHom α β
toFun✝ : α → β
map_sup'✝ : ∀ (a b : α), toFun✝ (a ⊔ b) = toFun✝ a ⊔ toFun✝ b
map_inf'✝ :
∀ (a b : α),
SupHom.toFun { toFun := toFun✝, map_sup' := map_sup'✝ } (a ⊓ b) =
SupHom.toFun { toFun := toFun✝, map_sup' := map_sup'✝ } a ⊓
SupHom.toFun { toFun := toFun✝, map_sup' := map_sup'✝ } b
map_top'✝ :
SupHom.toFun { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ }.toSupHom ⊤ = ⊤
map_sdiff'✝ :
∀ (a b : α),
SupHom.toFun { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ }.toSupHom (a \ b) =
SupHom.toFun { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ }.toSupHom a \
SupHom.toFun { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ }.toSupHom b
h :
(fun f => f.toFun)
{ toLatticeHom := { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ },
map_top' := map_top'✝, map_sdiff' := map_sdiff'✝ } =
(fun f => f.toFun) g
⊢ { toLatticeHom := { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ },
map_top' := map_top'✝, map_sdiff' := map_sdiff'✝ } =
g
[PROOFSTEP]
obtain ⟨⟨⟨_, _⟩, _⟩, _⟩ := g
[GOAL]
case mk.mk.mk.mk.mk.mk
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝³ : CoheytingAlgebra α
inst✝² : CoheytingAlgebra β
inst✝¹ : CoheytingAlgebra γ
inst✝ : CoheytingAlgebra δ
toFun✝¹ : α → β
map_sup'✝¹ : ∀ (a b : α), toFun✝¹ (a ⊔ b) = toFun✝¹ a ⊔ toFun✝¹ b
map_inf'✝¹ :
∀ (a b : α),
SupHom.toFun { toFun := toFun✝¹, map_sup' := map_sup'✝¹ } (a ⊓ b) =
SupHom.toFun { toFun := toFun✝¹, map_sup' := map_sup'✝¹ } a ⊓
SupHom.toFun { toFun := toFun✝¹, map_sup' := map_sup'✝¹ } b
map_top'✝¹ :
SupHom.toFun { toSupHom := { toFun := toFun✝¹, map_sup' := map_sup'✝¹ }, map_inf' := map_inf'✝¹ }.toSupHom ⊤ = ⊤
map_sdiff'✝¹ :
∀ (a b : α),
SupHom.toFun { toSupHom := { toFun := toFun✝¹, map_sup' := map_sup'✝¹ }, map_inf' := map_inf'✝¹ }.toSupHom (a \ b) =
SupHom.toFun { toSupHom := { toFun := toFun✝¹, map_sup' := map_sup'✝¹ }, map_inf' := map_inf'✝¹ }.toSupHom a \
SupHom.toFun { toSupHom := { toFun := toFun✝¹, map_sup' := map_sup'✝¹ }, map_inf' := map_inf'✝¹ }.toSupHom b
toFun✝ : α → β
map_sup'✝ : ∀ (a b : α), toFun✝ (a ⊔ b) = toFun✝ a ⊔ toFun✝ b
map_inf'✝ :
∀ (a b : α),
SupHom.toFun { toFun := toFun✝, map_sup' := map_sup'✝ } (a ⊓ b) =
SupHom.toFun { toFun := toFun✝, map_sup' := map_sup'✝ } a ⊓
SupHom.toFun { toFun := toFun✝, map_sup' := map_sup'✝ } b
map_top'✝ :
SupHom.toFun { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ }.toSupHom ⊤ = ⊤
map_sdiff'✝ :
∀ (a b : α),
SupHom.toFun { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ }.toSupHom (a \ b) =
SupHom.toFun { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ }.toSupHom a \
SupHom.toFun { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ }.toSupHom b
h :
(fun f => f.toFun)
{ toLatticeHom := { toSupHom := { toFun := toFun✝¹, map_sup' := map_sup'✝¹ }, map_inf' := map_inf'✝¹ },
map_top' := map_top'✝¹, map_sdiff' := map_sdiff'✝¹ } =
(fun f => f.toFun)
{ toLatticeHom := { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ },
map_top' := map_top'✝, map_sdiff' := map_sdiff'✝ }
⊢ { toLatticeHom := { toSupHom := { toFun := toFun✝¹, map_sup' := map_sup'✝¹ }, map_inf' := map_inf'✝¹ },
map_top' := map_top'✝¹, map_sdiff' := map_sdiff'✝¹ } =
{ toLatticeHom := { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ },
map_top' := map_top'✝, map_sdiff' := map_sdiff'✝ }
[PROOFSTEP]
congr
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝³ : CoheytingAlgebra α
inst✝² : CoheytingAlgebra β
inst✝¹ : CoheytingAlgebra γ
inst✝ : CoheytingAlgebra δ
f : CoheytingHom α β
f' : α → β
h : f' = ↑f
⊢ ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b
[PROOFSTEP]
simpa only [h] using map_sup f
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝³ : CoheytingAlgebra α
inst✝² : CoheytingAlgebra β
inst✝¹ : CoheytingAlgebra γ
inst✝ : CoheytingAlgebra δ
f : CoheytingHom α β
f' : α → β
h : f' = ↑f
⊢ ∀ (a b : α),
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } (a ⊓ b) =
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } a ⊓
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } b
[PROOFSTEP]
simpa only [h] using map_inf f
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝³ : CoheytingAlgebra α
inst✝² : CoheytingAlgebra β
inst✝¹ : CoheytingAlgebra γ
inst✝ : CoheytingAlgebra δ
f : CoheytingHom α β
f' : α → β
h : f' = ↑f
⊢ SupHom.toFun
{ toSupHom := { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) },
map_inf' :=
(_ :
∀ (a b : α),
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } (a ⊓ b) =
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } a ⊓
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } b) }.toSupHom
⊤ =
⊤
[PROOFSTEP]
simpa only [h] using map_top f
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝³ : CoheytingAlgebra α
inst✝² : CoheytingAlgebra β
inst✝¹ : CoheytingAlgebra γ
inst✝ : CoheytingAlgebra δ
f : CoheytingHom α β
f' : α → β
h : f' = ↑f
⊢ ∀ (a b : α),
SupHom.toFun
{ toSupHom := { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) },
map_inf' :=
(_ :
∀ (a b : α),
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } (a ⊓ b) =
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } a ⊓
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) }
b) }.toSupHom
(a \ b) =
SupHom.toFun
{ toSupHom := { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) },
map_inf' :=
(_ :
∀ (a b : α),
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } (a ⊓ b) =
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } a ⊓
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) }
b) }.toSupHom
a \
SupHom.toFun
{ toSupHom := { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) },
map_inf' :=
(_ :
∀ (a b : α),
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } (a ⊓ b) =
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } a ⊓
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) }
b) }.toSupHom
b
[PROOFSTEP]
simpa only [h] using map_sdiff f
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝³ : CoheytingAlgebra α
inst✝² : CoheytingAlgebra β
inst✝¹ : CoheytingAlgebra γ
inst✝ : CoheytingAlgebra δ
f : CoheytingHom β γ
g : CoheytingHom α β
src✝ : LatticeHom α γ := LatticeHom.comp f.toLatticeHom g.toLatticeHom
⊢ SupHom.toFun
{
toSupHom :=
{ toFun := ↑f ∘ ↑g,
map_sup' :=
(_ :
∀ (a b : α),
SupHom.toFun src✝.toSupHom (a ⊔ b) = SupHom.toFun src✝.toSupHom a ⊔ SupHom.toFun src✝.toSupHom b) },
map_inf' :=
(_ :
∀ (a b : α),
SupHom.toFun src✝.toSupHom (a ⊓ b) =
SupHom.toFun src✝.toSupHom a ⊓ SupHom.toFun src✝.toSupHom b) }.toSupHom
⊤ =
⊤
[PROOFSTEP]
simp
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝³ : CoheytingAlgebra α
inst✝² : CoheytingAlgebra β
inst✝¹ : CoheytingAlgebra γ
inst✝ : CoheytingAlgebra δ
f : CoheytingHom β γ
g : CoheytingHom α β
src✝ : LatticeHom α γ := LatticeHom.comp f.toLatticeHom g.toLatticeHom
a b : α
⊢ SupHom.toFun
{
toSupHom :=
{ toFun := ↑f ∘ ↑g,
map_sup' :=
(_ :
∀ (a b : α),
SupHom.toFun src✝.toSupHom (a ⊔ b) = SupHom.toFun src✝.toSupHom a ⊔ SupHom.toFun src✝.toSupHom b) },
map_inf' :=
(_ :
∀ (a b : α),
SupHom.toFun src✝.toSupHom (a ⊓ b) =
SupHom.toFun src✝.toSupHom a ⊓ SupHom.toFun src✝.toSupHom b) }.toSupHom
(a \ b) =
SupHom.toFun
{
toSupHom :=
{ toFun := ↑f ∘ ↑g,
map_sup' :=
(_ :
∀ (a b : α),
SupHom.toFun src✝.toSupHom (a ⊔ b) =
SupHom.toFun src✝.toSupHom a ⊔ SupHom.toFun src✝.toSupHom b) },
map_inf' :=
(_ :
∀ (a b : α),
SupHom.toFun src✝.toSupHom (a ⊓ b) =
SupHom.toFun src✝.toSupHom a ⊓ SupHom.toFun src✝.toSupHom b) }.toSupHom
a \
SupHom.toFun
{
toSupHom :=
{ toFun := ↑f ∘ ↑g,
map_sup' :=
(_ :
∀ (a b : α),
SupHom.toFun src✝.toSupHom (a ⊔ b) =
SupHom.toFun src✝.toSupHom a ⊔ SupHom.toFun src✝.toSupHom b) },
map_inf' :=
(_ :
∀ (a b : α),
SupHom.toFun src✝.toSupHom (a ⊓ b) =
SupHom.toFun src✝.toSupHom a ⊓ SupHom.toFun src✝.toSupHom b) }.toSupHom
b
[PROOFSTEP]
simp
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝³ : CoheytingAlgebra α
inst✝² : CoheytingAlgebra β
inst✝¹ : CoheytingAlgebra γ
inst✝ : CoheytingAlgebra δ
f f₁ f₂ : CoheytingHom α β
g g₁ g₂ : CoheytingHom β γ
hg : Injective ↑g
h : comp g f₁ = comp g f₂
a : α
⊢ ↑g (↑f₁ a) = ↑g (↑f₂ a)
[PROOFSTEP]
rw [← comp_apply, h, comp_apply]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝³ : BiheytingAlgebra α
inst✝² : BiheytingAlgebra β
inst✝¹ : BiheytingAlgebra γ
inst✝ : BiheytingAlgebra δ
f g : BiheytingHom α β
h : (fun f => f.toFun) f = (fun f => f.toFun) g
⊢ f = g
[PROOFSTEP]
obtain ⟨⟨⟨_, _⟩, _⟩, _⟩ := f
[GOAL]
case mk.mk.mk
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝³ : BiheytingAlgebra α
inst✝² : BiheytingAlgebra β
inst✝¹ : BiheytingAlgebra γ
inst✝ : BiheytingAlgebra δ
g : BiheytingHom α β
toFun✝ : α → β
map_sup'✝ : ∀ (a b : α), toFun✝ (a ⊔ b) = toFun✝ a ⊔ toFun✝ b
map_inf'✝ :
∀ (a b : α),
SupHom.toFun { toFun := toFun✝, map_sup' := map_sup'✝ } (a ⊓ b) =
SupHom.toFun { toFun := toFun✝, map_sup' := map_sup'✝ } a ⊓
SupHom.toFun { toFun := toFun✝, map_sup' := map_sup'✝ } b
map_himp'✝ :
∀ (a b : α),
SupHom.toFun { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ }.toSupHom (a ⇨ b) =
SupHom.toFun { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ }.toSupHom a ⇨
SupHom.toFun { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ }.toSupHom b
map_sdiff'✝ :
∀ (a b : α),
SupHom.toFun { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ }.toSupHom (a \ b) =
SupHom.toFun { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ }.toSupHom a \
SupHom.toFun { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ }.toSupHom b
h :
(fun f => f.toFun)
{ toLatticeHom := { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ },
map_himp' := map_himp'✝, map_sdiff' := map_sdiff'✝ } =
(fun f => f.toFun) g
⊢ { toLatticeHom := { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ },
map_himp' := map_himp'✝, map_sdiff' := map_sdiff'✝ } =
g
[PROOFSTEP]
obtain ⟨⟨⟨_, _⟩, _⟩, _⟩ := g
[GOAL]
case mk.mk.mk.mk.mk.mk
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝³ : BiheytingAlgebra α
inst✝² : BiheytingAlgebra β
inst✝¹ : BiheytingAlgebra γ
inst✝ : BiheytingAlgebra δ
toFun✝¹ : α → β
map_sup'✝¹ : ∀ (a b : α), toFun✝¹ (a ⊔ b) = toFun✝¹ a ⊔ toFun✝¹ b
map_inf'✝¹ :
∀ (a b : α),
SupHom.toFun { toFun := toFun✝¹, map_sup' := map_sup'✝¹ } (a ⊓ b) =
SupHom.toFun { toFun := toFun✝¹, map_sup' := map_sup'✝¹ } a ⊓
SupHom.toFun { toFun := toFun✝¹, map_sup' := map_sup'✝¹ } b
map_himp'✝¹ :
∀ (a b : α),
SupHom.toFun { toSupHom := { toFun := toFun✝¹, map_sup' := map_sup'✝¹ }, map_inf' := map_inf'✝¹ }.toSupHom (a ⇨ b) =
SupHom.toFun { toSupHom := { toFun := toFun✝¹, map_sup' := map_sup'✝¹ }, map_inf' := map_inf'✝¹ }.toSupHom a ⇨
SupHom.toFun { toSupHom := { toFun := toFun✝¹, map_sup' := map_sup'✝¹ }, map_inf' := map_inf'✝¹ }.toSupHom b
map_sdiff'✝¹ :
∀ (a b : α),
SupHom.toFun { toSupHom := { toFun := toFun✝¹, map_sup' := map_sup'✝¹ }, map_inf' := map_inf'✝¹ }.toSupHom (a \ b) =
SupHom.toFun { toSupHom := { toFun := toFun✝¹, map_sup' := map_sup'✝¹ }, map_inf' := map_inf'✝¹ }.toSupHom a \
SupHom.toFun { toSupHom := { toFun := toFun✝¹, map_sup' := map_sup'✝¹ }, map_inf' := map_inf'✝¹ }.toSupHom b
toFun✝ : α → β
map_sup'✝ : ∀ (a b : α), toFun✝ (a ⊔ b) = toFun✝ a ⊔ toFun✝ b
map_inf'✝ :
∀ (a b : α),
SupHom.toFun { toFun := toFun✝, map_sup' := map_sup'✝ } (a ⊓ b) =
SupHom.toFun { toFun := toFun✝, map_sup' := map_sup'✝ } a ⊓
SupHom.toFun { toFun := toFun✝, map_sup' := map_sup'✝ } b
map_himp'✝ :
∀ (a b : α),
SupHom.toFun { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ }.toSupHom (a ⇨ b) =
SupHom.toFun { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ }.toSupHom a ⇨
SupHom.toFun { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ }.toSupHom b
map_sdiff'✝ :
∀ (a b : α),
SupHom.toFun { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ }.toSupHom (a \ b) =
SupHom.toFun { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ }.toSupHom a \
SupHom.toFun { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ }.toSupHom b
h :
(fun f => f.toFun)
{ toLatticeHom := { toSupHom := { toFun := toFun✝¹, map_sup' := map_sup'✝¹ }, map_inf' := map_inf'✝¹ },
map_himp' := map_himp'✝¹, map_sdiff' := map_sdiff'✝¹ } =
(fun f => f.toFun)
{ toLatticeHom := { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ },
map_himp' := map_himp'✝, map_sdiff' := map_sdiff'✝ }
⊢ { toLatticeHom := { toSupHom := { toFun := toFun✝¹, map_sup' := map_sup'✝¹ }, map_inf' := map_inf'✝¹ },
map_himp' := map_himp'✝¹, map_sdiff' := map_sdiff'✝¹ } =
{ toLatticeHom := { toSupHom := { toFun := toFun✝, map_sup' := map_sup'✝ }, map_inf' := map_inf'✝ },
map_himp' := map_himp'✝, map_sdiff' := map_sdiff'✝ }
[PROOFSTEP]
congr
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝³ : BiheytingAlgebra α
inst✝² : BiheytingAlgebra β
inst✝¹ : BiheytingAlgebra γ
inst✝ : BiheytingAlgebra δ
f : BiheytingHom α β
f' : α → β
h : f' = ↑f
⊢ ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b
[PROOFSTEP]
simpa only [h] using map_sup f
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝³ : BiheytingAlgebra α
inst✝² : BiheytingAlgebra β
inst✝¹ : BiheytingAlgebra γ
inst✝ : BiheytingAlgebra δ
f : BiheytingHom α β
f' : α → β
h : f' = ↑f
⊢ ∀ (a b : α),
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } (a ⊓ b) =
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } a ⊓
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } b
[PROOFSTEP]
simpa only [h] using map_inf f
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝³ : BiheytingAlgebra α
inst✝² : BiheytingAlgebra β
inst✝¹ : BiheytingAlgebra γ
inst✝ : BiheytingAlgebra δ
f : BiheytingHom α β
f' : α → β
h : f' = ↑f
⊢ ∀ (a b : α),
SupHom.toFun
{ toSupHom := { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) },
map_inf' :=
(_ :
∀ (a b : α),
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } (a ⊓ b) =
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } a ⊓
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) }
b) }.toSupHom
(a ⇨ b) =
SupHom.toFun
{ toSupHom := { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) },
map_inf' :=
(_ :
∀ (a b : α),
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } (a ⊓ b) =
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } a ⊓
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) }
b) }.toSupHom
a ⇨
SupHom.toFun
{ toSupHom := { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) },
map_inf' :=
(_ :
∀ (a b : α),
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } (a ⊓ b) =
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } a ⊓
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) }
b) }.toSupHom
b
[PROOFSTEP]
simpa only [h] using map_himp f
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝³ : BiheytingAlgebra α
inst✝² : BiheytingAlgebra β
inst✝¹ : BiheytingAlgebra γ
inst✝ : BiheytingAlgebra δ
f : BiheytingHom α β
f' : α → β
h : f' = ↑f
⊢ ∀ (a b : α),
SupHom.toFun
{ toSupHom := { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) },
map_inf' :=
(_ :
∀ (a b : α),
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } (a ⊓ b) =
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } a ⊓
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) }
b) }.toSupHom
(a \ b) =
SupHom.toFun
{ toSupHom := { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) },
map_inf' :=
(_ :
∀ (a b : α),
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } (a ⊓ b) =
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } a ⊓
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) }
b) }.toSupHom
a \
SupHom.toFun
{ toSupHom := { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) },
map_inf' :=
(_ :
∀ (a b : α),
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } (a ⊓ b) =
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) } a ⊓
SupHom.toFun { toFun := f', map_sup' := (_ : ∀ (a b : α), f' (a ⊔ b) = f' a ⊔ f' b) }
b) }.toSupHom
b
[PROOFSTEP]
simpa only [h] using map_sdiff f
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝³ : BiheytingAlgebra α
inst✝² : BiheytingAlgebra β
inst✝¹ : BiheytingAlgebra γ
inst✝ : BiheytingAlgebra δ
f : BiheytingHom β γ
g : BiheytingHom α β
src✝ : LatticeHom α γ := LatticeHom.comp f.toLatticeHom g.toLatticeHom
a b : α
⊢ SupHom.toFun
{
toSupHom :=
{ toFun := ↑f ∘ ↑g,
map_sup' :=
(_ :
∀ (a b : α),
SupHom.toFun src✝.toSupHom (a ⊔ b) = SupHom.toFun src✝.toSupHom a ⊔ SupHom.toFun src✝.toSupHom b) },
map_inf' :=
(_ :
∀ (a b : α),
SupHom.toFun src✝.toSupHom (a ⊓ b) =
SupHom.toFun src✝.toSupHom a ⊓ SupHom.toFun src✝.toSupHom b) }.toSupHom
(a ⇨ b) =
SupHom.toFun
{
toSupHom :=
{ toFun := ↑f ∘ ↑g,
map_sup' :=
(_ :
∀ (a b : α),
SupHom.toFun src✝.toSupHom (a ⊔ b) =
SupHom.toFun src✝.toSupHom a ⊔ SupHom.toFun src✝.toSupHom b) },
map_inf' :=
(_ :
∀ (a b : α),
SupHom.toFun src✝.toSupHom (a ⊓ b) =
SupHom.toFun src✝.toSupHom a ⊓ SupHom.toFun src✝.toSupHom b) }.toSupHom
a ⇨
SupHom.toFun
{
toSupHom :=
{ toFun := ↑f ∘ ↑g,
map_sup' :=
(_ :
∀ (a b : α),
SupHom.toFun src✝.toSupHom (a ⊔ b) =
SupHom.toFun src✝.toSupHom a ⊔ SupHom.toFun src✝.toSupHom b) },
map_inf' :=
(_ :
∀ (a b : α),
SupHom.toFun src✝.toSupHom (a ⊓ b) =
SupHom.toFun src✝.toSupHom a ⊓ SupHom.toFun src✝.toSupHom b) }.toSupHom
b
[PROOFSTEP]
simp
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝³ : BiheytingAlgebra α
inst✝² : BiheytingAlgebra β
inst✝¹ : BiheytingAlgebra γ
inst✝ : BiheytingAlgebra δ
f : BiheytingHom β γ
g : BiheytingHom α β
src✝ : LatticeHom α γ := LatticeHom.comp f.toLatticeHom g.toLatticeHom
a b : α
⊢ SupHom.toFun
{
toSupHom :=
{ toFun := ↑f ∘ ↑g,
map_sup' :=
(_ :
∀ (a b : α),
SupHom.toFun src✝.toSupHom (a ⊔ b) = SupHom.toFun src✝.toSupHom a ⊔ SupHom.toFun src✝.toSupHom b) },
map_inf' :=
(_ :
∀ (a b : α),
SupHom.toFun src✝.toSupHom (a ⊓ b) =
SupHom.toFun src✝.toSupHom a ⊓ SupHom.toFun src✝.toSupHom b) }.toSupHom
(a \ b) =
SupHom.toFun
{
toSupHom :=
{ toFun := ↑f ∘ ↑g,
map_sup' :=
(_ :
∀ (a b : α),
SupHom.toFun src✝.toSupHom (a ⊔ b) =
SupHom.toFun src✝.toSupHom a ⊔ SupHom.toFun src✝.toSupHom b) },
map_inf' :=
(_ :
∀ (a b : α),
SupHom.toFun src✝.toSupHom (a ⊓ b) =
SupHom.toFun src✝.toSupHom a ⊓ SupHom.toFun src✝.toSupHom b) }.toSupHom
a \
SupHom.toFun
{
toSupHom :=
{ toFun := ↑f ∘ ↑g,
map_sup' :=
(_ :
∀ (a b : α),
SupHom.toFun src✝.toSupHom (a ⊔ b) =
SupHom.toFun src✝.toSupHom a ⊔ SupHom.toFun src✝.toSupHom b) },
map_inf' :=
(_ :
∀ (a b : α),
SupHom.toFun src✝.toSupHom (a ⊓ b) =
SupHom.toFun src✝.toSupHom a ⊓ SupHom.toFun src✝.toSupHom b) }.toSupHom
b
[PROOFSTEP]
simp
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
δ : Type u_5
inst✝³ : BiheytingAlgebra α
inst✝² : BiheytingAlgebra β
inst✝¹ : BiheytingAlgebra γ
inst✝ : BiheytingAlgebra δ
f f₁ f₂ : BiheytingHom α β
g g₁ g₂ : BiheytingHom β γ
hg : Injective ↑g
h : comp g f₁ = comp g f₂
a : α
⊢ ↑g (↑f₁ a) = ↑g (↑f₂ a)
[PROOFSTEP]
rw [← comp_apply, h, comp_apply]
|
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Wed Feb 26 15:01:50 2020
@author: parsotak
"""
#import data sets
import pygrib
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
#set input file
f = '/wx/storage/halpea17/wx272/lab5/gfs-data.grb2'
#open file
grib = pygrib.open(f)
#print inventories
#grib.seek(0)
#for grb in grib:
# print(grb)
#read in mslp (mb) data contour #1
MSL_con1 = grib.select(name = 'Pressure reduced to MSL', level = 0)[0]
#convert from Pa to mb
MSL_con1_mb = MSL_con1.values / 100
#read in geopotential heights at 500 and 1000 mb for the 1000-500 mb thickness (dam), contour #2
Geo_con2_500 = grib.select(name = 'Geopotential Height', level = 500)[0]
Geo_con2_1000 = grib.select(name = 'Geopotential Height', level = 1000)[0]
#convert each height from meters to decameters (dam)
Geo_con2_500mb = Geo_con2_500.values / 10
Geo_con2_1000mb = Geo_con2_1000.values / 10
#1000-500 mb thickness (dam)
thkn_level = Geo_con2_500mb - Geo_con2_1000mb
#read in the total precipitation field (contour fill)
totprecip_contf = grib.select(name = 'Total Precipitation', level = 0)[0]
#convert total precipitation from mm to inches (contour fill)
totprecip_cont = totprecip_contf.values / 25.4
#define lat and lon
lat, lon = MSL_con1.latlons()
#Define a figure
fig = plt.figure(figsize = (12,8))
ax = fig.add_axes([0.1,0.1,0.8,0.8])
#Define basemap: lat 20 N - 55 N, lon 130 W - 60 W, Mercator projection, gridlines every 10 degrees
m = Basemap(llcrnrlon = 230., llcrnrlat = 20., urcrnrlon = 300., urcrnrlat = 55., resolution = 'l', projection = 'merc', ax = ax)
m.drawparallels(np.arange(0.,81.,10), labels = [1,0,0,0], fontsize = 10, zorder = 12)
m.drawmeridians(np.arange(0.,351.,10), labels = [0,0,0,1], fontsize = 10, zorder = 12)
m.drawcoastlines()
m.drawstates()
m.drawcountries()
#convert from lat/lon to map plot coordinates
xi, yi = m(lon, lat)
#range for MSLP
mslp_Cont1 = np.arange(936, 1056, 4)
#ranges for 1000-500 mb thickness: full range (plotted blue) and a second range for values >= 546 dam (plotted red)
thkn_Contour = np.arange(474, 606, 6)
thkn_Contour2 = np.arange(546, 606, 6)
#contour #1 mslp range 936-1054 every 4 mb solid line gray inline labels size 10 whole numbers
Contour_MSLP1 = m.contour(xi, yi, MSL_con1_mb, mslp_Cont1, cmap = 'Greys')
#MSLP contour labels
mslplab = plt.clabel(Contour_MSLP1, mslp_Cont1, inline = True, fontsize = 10, fmt = '%1.0f')
#contour #2: 1000 - 500 mb thickness in dam, range 474 - 606 every 6 dam, dashed lines with inline labels size 10, whole numbers
Contour_thkn500mb = m.contour(xi, yi, thkn_level, thkn_Contour, colors = 'Blue', linestyles = 'dashed')
Contour_thkn1000mb = m.contour(xi, yi, thkn_level, thkn_Contour2, colors = 'Red', linestyles = 'dashed')
#plotting the inline labels for each contour thkn level
thknlab = plt.clabel(Contour_thkn500mb, thkn_Contour, inline = True, fontsize = 10, fmt = '%1.0f')
thknlab1 = plt.clabel(Contour_thkn1000mb, thkn_Contour2, inline = True, fontsize = 10, fmt = '%1.0f')
# total precip ticks range
ttprecip = [0.01, 0.1, 0.25, 0.5, 0.75, 1]
#contourfill total precip in inches, intervals = [0.01, 0.1, 0.25, 0.5, 0.75, 1], colormap = 'GnBu'
totprecipplot = m.contourf(xi, yi, totprecip_cont, ttprecip, cmap = 'GnBu')
#total precip colorbar
totprecipcbar = plt.colorbar(totprecipplot, orientation = 'horizontal', ax = ax, ticks = ttprecip, shrink = 0.75, pad = 0.05)
totprecipcbar.set_label('Total Precipitation (in.)')
totprecipcbar.ax.tick_params(labelsize = 12)
ax.set_title('MSLP (mb), 1000 - 500 mb thickness (dam), and Total precipitation (inches). Forecast valid 2020-02-18 at 06Z.')
plt.savefig("parsotak_lab5_MSLP_Geo-H_totprecip.png")
plt.show()
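#----------------------------------------------------------------------
#sanity check (a minimal sketch): list the parameter names, levels and units
#available in the GRIB file, to confirm the fields selected above and their raw
#units (typically Pa for MSLP and gpm for the heights) before the mb/dam conversions.
#note: the .units message attribute is a standard GRIB key and is assumed here.
check = pygrib.open(f)
for msg in check:
    print(msg.name, msg.level, msg.units)
check.close()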
|
section \<open> Building Entry System \<close>
theory BuildingEntry
imports "Z_Machines.Z_Machine"
begin
term subst_upd
type_synonym staff = \<nat>
consts
Staff :: "staff set"
maxentry :: "\<nat>"
schema ASystem =
s :: "\<bbbP> staff"
where "s \<in> \<bbbF> Staff" "#s < maxentry"
record_default ASystem
zoperation AEnterBuilding =
over ASystem
params p\<in>Staff
pre "#s < maxentry \<and> p \<notin> s"
update "[s \<leadsto> s \<union> {p}]"
zoperation ALeaveBuilding =
over ASystem
params p\<in>Staff
pre "p \<in> s"
update "[s \<leadsto> s - {p}]"
zmachine ABuildingEntry =
over ASystem
init "[s \<leadsto> {}]"
operations AEnterBuilding ALeaveBuilding
def_consts
Staff = "{0..10}"
maxentry = "5"
animate ABuildingEntry
schema CSystem =
l :: "staff list"
where
"l \<in> iseq Staff" "#l \<le> maxentry"
record_default CSystem
zoperation CEnterBuilding =
over CSystem
params p \<in> Staff
pre "#l < maxentry \<and> p \<notin> ran l"
update "[l \<leadsto> l @ [p]]"
definition ListRetrieveSet :: "CSystem \<Rightarrow> (_, ASystem) itree" where
"ListRetrieveSet = \<questiondown>CSystem? ;; \<langle>\<lblot>s \<leadsto> set l\<rblot>\<rangle>\<^sub>a"
definition SetRetrieveList :: "ASystem \<Rightarrow> (_, CSystem) itree" where
"SetRetrieveList = \<questiondown>ASystem? ;; \<langle>\<lblot>l \<leadsto> sorted_list_of_set s\<rblot>\<rangle>\<^sub>a"
find_theorems "(\<circ>\<^sub>s)"
lemma "ListRetrieveSet ;; SetRetrieveList = \<questiondown>CSystem?"
apply (simp add: ListRetrieveSet_def SetRetrieveList_def ASystem_inv_def assigns_seq kcomp_assoc assigns_assume assigns_seq_comp usubst usubst_eval)
oops
lemma "p \<in> Staff \<Longrightarrow> (ListRetrieveSet ;; AEnterBuilding p) \<sqsubseteq> (CEnterBuilding p ;; ListRetrieveSet)"
unfolding ListRetrieveSet_def AEnterBuilding_def CEnterBuilding_def
apply refine_auto
apply (simp add: distinct_card)
done
end
|
Require Import Undecidability.Axioms.EA.
Require Import Undecidability.Shared.Pigeonhole.
Require Import Undecidability.Shared.FinitenessFacts.
Require Import Undecidability.Synthetic.reductions Undecidability.Synthetic.truthtables.
Require Import Undecidability.Synthetic.DecidabilityFacts Undecidability.Synthetic.EnumerabilityFacts Undecidability.Synthetic.SemiDecidabilityFacts Undecidability.Synthetic.ReducibilityFacts.
Require Import Undecidability.Shared.ListAutomation.
Require Import List Arith.
Import ListNotations ListAutomationNotations.
Definition productive (p : nat -> Prop) := exists f : nat -> nat, forall c, (forall x, W c x -> p x) -> p (f c) /\ ~ W c (f c).
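(* Informally: p is productive when a total function f turns every code c with W c ⊆ p
   into a witness f c that lies in p but outside W c, so no single W c exhausts p. *)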
Lemma productive_nonenumerable p :
productive p -> ~ enumerable p.
Proof.
intros [f Hf] [c Hc] % W_spec.
destruct (Hf c) as [H1 % Hc H2].
eapply Hc. tauto.
Qed.
Lemma K0_productive :
productive (compl K0).
Proof.
exists (fun n => n). intros c H.
specialize (H c). unfold K0, compl in *. tauto.
Qed.
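(* The identity function witnesses productivity of compl K0: under the usual reading
   K0 c <-> W c c (as assumed from Undecidability.Axioms.EA), W c ⊆ compl K0 makes
   W c c contradictory, so c itself lies in compl K0 and outside W c. *)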
Lemma productive_dedekind_infinite p :
productive p -> dedekind_infinite p.
Proof.
specialize List_id as [c_l c_spec]. intros [f Hf].
eapply (weakly_generative_dedekind_infinite). econstructor.
intros l.
exists (f (c_l l)). intros Hl. split.
- specialize (Hf (c_l l)).
rewrite <- c_spec. eapply Hf.
intros x. rewrite c_spec. eapply Hl.
- eapply Hf. intros x. rewrite c_spec. eapply Hl.
Qed.
Lemma productive_subpredicate p :
productive p ->
exists q : nat -> Prop, enumerable q /\ dedekind_infinite q /\ (forall x, q x -> p x).
Proof.
intros H.
eapply dedekind_infinite_problem.
eapply productive_dedekind_infinite. eauto.
Qed.
Lemma productive_red p q :
p ⪯ₘ q -> productive p -> productive q.
Proof.
intros [f Hf] [g Hg].
specialize (SMN' f) as [k Hk].
exists (fun c => f (g (k c))). intros c Hs.
assert (Hkc : forall x, W (k c) x -> p x) by (intros; now eapply Hf, Hs, Hk).
split.
- now eapply Hf, Hg.
- intros ?. eapply Hg, Hk; eauto.
Qed.
Lemma many_one_complete_subpredicate p :
m-complete p ->
exists q : nat -> Prop, enumerable q /\ dedekind_infinite q /\ (forall x, q x -> compl p x).
Proof.
intros Hcomp. eapply productive_subpredicate.
eapply productive_red.
- eapply red_m_complement. eapply Hcomp. eapply K0_enum.
- eapply K0_productive.
Qed.
Definition simple (p : nat -> Prop) :=
enumerable p /\ ~ exhaustible (compl p) /\ ~ exists q, enumerable q /\ ~ exhaustible q /\ (forall x, q x -> compl p x).
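(* Informally: a simple predicate is enumerable, its complement is infinite (not
   exhaustible), yet the complement contains no infinite enumerable subpredicate. *)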
Lemma simple_non_enumerable p :
simple p -> ~ enumerable (compl p).
Proof.
intros (H1 & H2 & H3) H4.
apply H3. eauto.
Qed.
Lemma simple_undecidable p :
simple p -> ~ decidable p.
Proof.
intros ? ? % decidable_complement % decidable_enumerable; eauto.
now eapply simple_non_enumerable in H1.
Qed.
Lemma simple_m_incomplete p :
simple p -> ~ m-complete p.
Proof.
intros (H1 & H2 & H3) (q & Hq1 & Hq2 & Hq3) % many_one_complete_subpredicate.
eapply H3. exists q; repeat split; eauto.
now eapply unbounded_non_finite, dedekind_infinite_unbounded.
Qed.
Lemma non_finite_non_empty {X} (p : X -> Prop) :
~ exhaustible p -> ~~ exists x, p x.
Proof.
intros H1 H2. apply H1. exists []. firstorder.
Qed.
Lemma simple_no_cylinder p :
(fun '((x, n) : nat * nat) => p x) ⪯₁ p -> ~ simple p.
Proof.
intros [f [inj_f Hf]] (H1 & H2 & H3). red in Hf.
apply (non_finite_non_empty _ H2). intros [x0 Hx0].
apply H3.
exists (fun x => exists n, x = f (x0, n)). split. 2:split.
- eapply semi_decidable_enumerable; eauto.
exists (fun x n => Nat.eqb x (f (x0, n))).
intros x. split; intros [n H]; exists n; destruct (Nat.eqb_spec x (f (x0, n))); firstorder congruence.
- eapply unbounded_non_finite, dedekind_infinite_unbounded.
exists (fun n => f (x0, n)). intros. split. eauto.
now intros ? [=] % inj_f.
- intros ? [n ->]. red. now rewrite <- (Hf (x0, n)).
Qed.
|
{-# OPTIONS --cubical --no-import-sorts --safe #-}
module Cubical.Relation.Binary.Raw.Construct.Constant where
open import Cubical.Core.Everything
open import Cubical.Foundations.Prelude
open import Cubical.Foundations.HLevels using (hProp)
open import Cubical.Relation.Binary.Raw
------------------------------------------------------------------------
-- Definition
Const : ∀ {a b c} {A : Type a} {B : Type b} → Type c → RawREL A B c
Const I = λ _ _ → I
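-- Const C relates any two elements of A and B by a proof of C, so each of the
-- properties below amounts to producing or passing along an inhabitant of C.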
------------------------------------------------------------------------
-- Properties
module _ {a c} {A : Type a} {C : Type c} where
reflexive : C → Reflexive {A = A} (Const C)
reflexive c = c
symmetric : Symmetric {A = A} (Const C)
symmetric c = c
transitive : Transitive {A = A} (Const C)
transitive c d = c
isPartialEquivalence : IsPartialEquivalence {A = A} (Const C)
isPartialEquivalence = record
{ symmetric = λ {x} {y} → symmetric {x} {y}
; transitive = λ {x} {y} {z} → transitive {x} {y} {z}
}
partialEquivalence : PartialEquivalence A c
partialEquivalence = record { isPartialEquivalence = isPartialEquivalence }
isEquivalence : C → IsEquivalence {A = A} (Const C)
isEquivalence c = record
{ isPartialEquivalence = isPartialEquivalence
; reflexive = λ {x} → reflexive c {x}
}
equivalence : C → Equivalence A c
equivalence x = record { isEquivalence = isEquivalence x }
|
/-
Copyright (c) 2022 Violeta Hernández Palacios. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Violeta Hernández Palacios
! This file was ported from Lean 3 source module order.bounded
! leanprover-community/mathlib commit c3291da49cfa65f0d43b094750541c0731edc932
! Please do not edit these lines, except to modify the commit id
! if you have ported upstream changes.
-/
import Mathbin.Order.RelClasses
import Mathbin.Data.Set.Intervals.Basic
/-!
# Bounded and unbounded sets
> THIS FILE IS SYNCHRONIZED WITH MATHLIB4.
> Any changes to this file require a corresponding PR to mathlib4.
We prove miscellaneous lemmas about bounded and unbounded sets. Many of these are just variations on
the same ideas, or similar results with a few minor differences. The file is divided into these
different general ideas.
-/
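/- For orientation (the precise definitions are assumed to come from `Order.RelClasses`,
imported above): `Set.Bounded r s` means `∃ a, ∀ b ∈ s, r b a` and `Set.Unbounded r s`
means `∀ a, ∃ b ∈ s, ¬r b a`; the lemmas below move between these forms for the relations
`≤`, `<`, `≥` and `>`. -/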
namespace Set
variable {α : Type _} {r : α → α → Prop} {s t : Set α}
/-! ### Subsets of bounded and unbounded sets -/
#print Set.Bounded.mono /-
theorem Bounded.mono (hst : s ⊆ t) (hs : Bounded r t) : Bounded r s :=
hs.imp fun a ha b hb => ha b (hst hb)
#align set.bounded.mono Set.Bounded.mono
-/
#print Set.Unbounded.mono /-
theorem Unbounded.mono (hst : s ⊆ t) (hs : Unbounded r s) : Unbounded r t := fun a =>
let ⟨b, hb, hb'⟩ := hs a
⟨b, hst hb, hb'⟩
#align set.unbounded.mono Set.Unbounded.mono
-/
/-! ### Alternate characterizations of unboundedness on orders -/
/- warning: set.unbounded_le_of_forall_exists_lt -> Set.unbounded_le_of_forall_exists_lt is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : Preorder.{u1} α], (forall (a : α), Exists.{succ u1} α (fun (b : α) => Exists.{0} (Membership.Mem.{u1, u1} α (Set.{u1} α) (Set.hasMem.{u1} α) b s) (fun (H : Membership.Mem.{u1, u1} α (Set.{u1} α) (Set.hasMem.{u1} α) b s) => LT.lt.{u1} α (Preorder.toLT.{u1} α _inst_1) a b))) -> (Set.Unbounded.{u1} α (LE.le.{u1} α (Preorder.toLE.{u1} α _inst_1)) s)
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : Preorder.{u1} α], (forall (a : α), Exists.{succ u1} α (fun (b : α) => And (Membership.mem.{u1, u1} α (Set.{u1} α) (Set.instMembershipSet.{u1} α) b s) (LT.lt.{u1} α (Preorder.toLT.{u1} α _inst_1) a b))) -> (Set.Unbounded.{u1} α (fun ([email protected]._hyg.172 : α) ([email protected]._hyg.174 : α) => LE.le.{u1} α (Preorder.toLE.{u1} α _inst_1) [email protected]._hyg.172 [email protected]._hyg.174) s)
Case conversion may be inaccurate. Consider using '#align set.unbounded_le_of_forall_exists_lt Set.unbounded_le_of_forall_exists_ltₓ'. -/
theorem unbounded_le_of_forall_exists_lt [Preorder α] (h : ∀ a, ∃ b ∈ s, a < b) :
Unbounded (· ≤ ·) s := fun a =>
let ⟨b, hb, hb'⟩ := h a
⟨b, hb, fun hba => hba.not_lt hb'⟩
#align set.unbounded_le_of_forall_exists_lt Set.unbounded_le_of_forall_exists_lt
/- warning: set.unbounded_le_iff -> Set.unbounded_le_iff is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α], Iff (Set.Unbounded.{u1} α (LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) s) (forall (a : α), Exists.{succ u1} α (fun (b : α) => Exists.{0} (Membership.Mem.{u1, u1} α (Set.{u1} α) (Set.hasMem.{u1} α) b s) (fun (H : Membership.Mem.{u1, u1} α (Set.{u1} α) (Set.hasMem.{u1} α) b s) => LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1))))) a b)))
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α], Iff (Set.Unbounded.{u1} α (fun ([email protected]._hyg.248 : α) ([email protected]._hyg.250 : α) => LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.248 [email protected]._hyg.250) s) (forall (a : α), Exists.{succ u1} α (fun (b : α) => And (Membership.mem.{u1, u1} α (Set.{u1} α) (Set.instMembershipSet.{u1} α) b s) (LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) a b)))
Case conversion may be inaccurate. Consider using '#align set.unbounded_le_iff Set.unbounded_le_iffₓ'. -/
theorem unbounded_le_iff [LinearOrder α] : Unbounded (· ≤ ·) s ↔ ∀ a, ∃ b ∈ s, a < b := by
simp only [unbounded, not_le]
#align set.unbounded_le_iff Set.unbounded_le_iff
/- warning: set.unbounded_lt_of_forall_exists_le -> Set.unbounded_lt_of_forall_exists_le is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : Preorder.{u1} α], (forall (a : α), Exists.{succ u1} α (fun (b : α) => Exists.{0} (Membership.Mem.{u1, u1} α (Set.{u1} α) (Set.hasMem.{u1} α) b s) (fun (H : Membership.Mem.{u1, u1} α (Set.{u1} α) (Set.hasMem.{u1} α) b s) => LE.le.{u1} α (Preorder.toLE.{u1} α _inst_1) a b))) -> (Set.Unbounded.{u1} α (LT.lt.{u1} α (Preorder.toLT.{u1} α _inst_1)) s)
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : Preorder.{u1} α], (forall (a : α), Exists.{succ u1} α (fun (b : α) => And (Membership.mem.{u1, u1} α (Set.{u1} α) (Set.instMembershipSet.{u1} α) b s) (LE.le.{u1} α (Preorder.toLE.{u1} α _inst_1) a b))) -> (Set.Unbounded.{u1} α (fun ([email protected]._hyg.336 : α) ([email protected]._hyg.338 : α) => LT.lt.{u1} α (Preorder.toLT.{u1} α _inst_1) [email protected]._hyg.336 [email protected]._hyg.338) s)
Case conversion may be inaccurate. Consider using '#align set.unbounded_lt_of_forall_exists_le Set.unbounded_lt_of_forall_exists_leₓ'. -/
theorem unbounded_lt_of_forall_exists_le [Preorder α] (h : ∀ a, ∃ b ∈ s, a ≤ b) :
Unbounded (· < ·) s := fun a =>
let ⟨b, hb, hb'⟩ := h a
⟨b, hb, fun hba => hba.not_le hb'⟩
#align set.unbounded_lt_of_forall_exists_le Set.unbounded_lt_of_forall_exists_le
/- warning: set.unbounded_lt_iff -> Set.unbounded_lt_iff is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α], Iff (Set.Unbounded.{u1} α (LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) s) (forall (a : α), Exists.{succ u1} α (fun (b : α) => Exists.{0} (Membership.Mem.{u1, u1} α (Set.{u1} α) (Set.hasMem.{u1} α) b s) (fun (H : Membership.Mem.{u1, u1} α (Set.{u1} α) (Set.hasMem.{u1} α) b s) => LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1))))) a b)))
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α], Iff (Set.Unbounded.{u1} α (fun ([email protected]._hyg.412 : α) ([email protected]._hyg.414 : α) => LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.412 [email protected]._hyg.414) s) (forall (a : α), Exists.{succ u1} α (fun (b : α) => And (Membership.mem.{u1, u1} α (Set.{u1} α) (Set.instMembershipSet.{u1} α) b s) (LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) a b)))
Case conversion may be inaccurate. Consider using '#align set.unbounded_lt_iff Set.unbounded_lt_iffₓ'. -/
theorem unbounded_lt_iff [LinearOrder α] : Unbounded (· < ·) s ↔ ∀ a, ∃ b ∈ s, a ≤ b := by
simp only [unbounded, not_lt]
#align set.unbounded_lt_iff Set.unbounded_lt_iff
/- warning: set.unbounded_ge_of_forall_exists_gt -> Set.unbounded_ge_of_forall_exists_gt is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : Preorder.{u1} α], (forall (a : α), Exists.{succ u1} α (fun (b : α) => Exists.{0} (Membership.Mem.{u1, u1} α (Set.{u1} α) (Set.hasMem.{u1} α) b s) (fun (H : Membership.Mem.{u1, u1} α (Set.{u1} α) (Set.hasMem.{u1} α) b s) => LT.lt.{u1} α (Preorder.toLT.{u1} α _inst_1) b a))) -> (Set.Unbounded.{u1} α (GE.ge.{u1} α (Preorder.toLE.{u1} α _inst_1)) s)
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : Preorder.{u1} α], (forall (a : α), Exists.{succ u1} α (fun (b : α) => And (Membership.mem.{u1, u1} α (Set.{u1} α) (Set.instMembershipSet.{u1} α) b s) (LT.lt.{u1} α (Preorder.toLT.{u1} α _inst_1) b a))) -> (Set.Unbounded.{u1} α (fun ([email protected]._hyg.500 : α) ([email protected]._hyg.502 : α) => GE.ge.{u1} α (Preorder.toLE.{u1} α _inst_1) [email protected]._hyg.500 [email protected]._hyg.502) s)
Case conversion may be inaccurate. Consider using '#align set.unbounded_ge_of_forall_exists_gt Set.unbounded_ge_of_forall_exists_gtₓ'. -/
theorem unbounded_ge_of_forall_exists_gt [Preorder α] (h : ∀ a, ∃ b ∈ s, b < a) :
Unbounded (· ≥ ·) s :=
@unbounded_le_of_forall_exists_lt αᵒᵈ _ _ h
#align set.unbounded_ge_of_forall_exists_gt Set.unbounded_ge_of_forall_exists_gt
/- warning: set.unbounded_ge_iff -> Set.unbounded_ge_iff is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α], Iff (Set.Unbounded.{u1} α (GE.ge.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) s) (forall (a : α), Exists.{succ u1} α (fun (b : α) => Exists.{0} (Membership.Mem.{u1, u1} α (Set.{u1} α) (Set.hasMem.{u1} α) b s) (fun (H : Membership.Mem.{u1, u1} α (Set.{u1} α) (Set.hasMem.{u1} α) b s) => LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1))))) b a)))
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α], Iff (Set.Unbounded.{u1} α (fun ([email protected]._hyg.542 : α) ([email protected]._hyg.544 : α) => GE.ge.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.542 [email protected]._hyg.544) s) (forall (a : α), Exists.{succ u1} α (fun (b : α) => And (Membership.mem.{u1, u1} α (Set.{u1} α) (Set.instMembershipSet.{u1} α) b s) (LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) b a)))
Case conversion may be inaccurate. Consider using '#align set.unbounded_ge_iff Set.unbounded_ge_iffₓ'. -/
theorem unbounded_ge_iff [LinearOrder α] : Unbounded (· ≥ ·) s ↔ ∀ a, ∃ b ∈ s, b < a :=
⟨fun h a =>
let ⟨b, hb, hba⟩ := h a
⟨b, hb, lt_of_not_ge hba⟩,
unbounded_ge_of_forall_exists_gt⟩
#align set.unbounded_ge_iff Set.unbounded_ge_iff
/- warning: set.unbounded_gt_of_forall_exists_ge -> Set.unbounded_gt_of_forall_exists_ge is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : Preorder.{u1} α], (forall (a : α), Exists.{succ u1} α (fun (b : α) => Exists.{0} (Membership.Mem.{u1, u1} α (Set.{u1} α) (Set.hasMem.{u1} α) b s) (fun (H : Membership.Mem.{u1, u1} α (Set.{u1} α) (Set.hasMem.{u1} α) b s) => LE.le.{u1} α (Preorder.toLE.{u1} α _inst_1) b a))) -> (Set.Unbounded.{u1} α (GT.gt.{u1} α (Preorder.toLT.{u1} α _inst_1)) s)
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : Preorder.{u1} α], (forall (a : α), Exists.{succ u1} α (fun (b : α) => And (Membership.mem.{u1, u1} α (Set.{u1} α) (Set.instMembershipSet.{u1} α) b s) (LE.le.{u1} α (Preorder.toLE.{u1} α _inst_1) b a))) -> (Set.Unbounded.{u1} α (fun ([email protected]._hyg.668 : α) ([email protected]._hyg.670 : α) => GT.gt.{u1} α (Preorder.toLT.{u1} α _inst_1) [email protected]._hyg.668 [email protected]._hyg.670) s)
Case conversion may be inaccurate. Consider using '#align set.unbounded_gt_of_forall_exists_ge Set.unbounded_gt_of_forall_exists_geₓ'. -/
theorem unbounded_gt_of_forall_exists_ge [Preorder α] (h : ∀ a, ∃ b ∈ s, b ≤ a) :
Unbounded (· > ·) s := fun a =>
let ⟨b, hb, hb'⟩ := h a
⟨b, hb, fun hba => not_le_of_gt hba hb'⟩
#align set.unbounded_gt_of_forall_exists_ge Set.unbounded_gt_of_forall_exists_ge
/- warning: set.unbounded_gt_iff -> Set.unbounded_gt_iff is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α], Iff (Set.Unbounded.{u1} α (GT.gt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) s) (forall (a : α), Exists.{succ u1} α (fun (b : α) => Exists.{0} (Membership.Mem.{u1, u1} α (Set.{u1} α) (Set.hasMem.{u1} α) b s) (fun (H : Membership.Mem.{u1, u1} α (Set.{u1} α) (Set.hasMem.{u1} α) b s) => LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1))))) b a)))
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α], Iff (Set.Unbounded.{u1} α (fun ([email protected]._hyg.745 : α) ([email protected]._hyg.747 : α) => GT.gt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.745 [email protected]._hyg.747) s) (forall (a : α), Exists.{succ u1} α (fun (b : α) => And (Membership.mem.{u1, u1} α (Set.{u1} α) (Set.instMembershipSet.{u1} α) b s) (LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) b a)))
Case conversion may be inaccurate. Consider using '#align set.unbounded_gt_iff Set.unbounded_gt_iffₓ'. -/
theorem unbounded_gt_iff [LinearOrder α] : Unbounded (· > ·) s ↔ ∀ a, ∃ b ∈ s, b ≤ a :=
⟨fun h a =>
let ⟨b, hb, hba⟩ := h a
⟨b, hb, le_of_not_gt hba⟩,
unbounded_gt_of_forall_exists_ge⟩
#align set.unbounded_gt_iff Set.unbounded_gt_iff
/-! ### Relation between boundedness by strict and nonstrict orders. -/
/-! #### Less and less or equal -/
#print Set.Bounded.rel_mono /-
theorem Bounded.rel_mono {r' : α → α → Prop} (h : Bounded r s) (hrr' : r ≤ r') : Bounded r' s :=
let ⟨a, ha⟩ := h
⟨a, fun b hb => hrr' b a (ha b hb)⟩
#align set.bounded.rel_mono Set.Bounded.rel_mono
-/
#print Set.bounded_le_of_bounded_lt /-
theorem bounded_le_of_bounded_lt [Preorder α] (h : Bounded (· < ·) s) : Bounded (· ≤ ·) s :=
h.rel_mono fun _ _ => le_of_lt
#align set.bounded_le_of_bounded_lt Set.bounded_le_of_bounded_lt
-/
#print Set.Unbounded.rel_mono /-
theorem Unbounded.rel_mono {r' : α → α → Prop} (hr : r' ≤ r) (h : Unbounded r s) : Unbounded r' s :=
fun a =>
let ⟨b, hb, hba⟩ := h a
⟨b, hb, fun hba' => hba (hr b a hba')⟩
#align set.unbounded.rel_mono Set.Unbounded.rel_mono
-/
#print Set.unbounded_lt_of_unbounded_le /-
theorem unbounded_lt_of_unbounded_le [Preorder α] (h : Unbounded (· ≤ ·) s) : Unbounded (· < ·) s :=
h.rel_mono fun _ _ => le_of_lt
#align set.unbounded_lt_of_unbounded_le Set.unbounded_lt_of_unbounded_le
-/
#print Set.bounded_le_iff_bounded_lt /-
theorem bounded_le_iff_bounded_lt [Preorder α] [NoMaxOrder α] :
Bounded (· ≤ ·) s ↔ Bounded (· < ·) s :=
by
refine' ⟨fun h => _, bounded_le_of_bounded_lt⟩
cases' h with a ha
cases' exists_gt a with b hb
exact ⟨b, fun c hc => lt_of_le_of_lt (ha c hc) hb⟩
#align set.bounded_le_iff_bounded_lt Set.bounded_le_iff_bounded_lt
-/
#print Set.unbounded_lt_iff_unbounded_le /-
theorem unbounded_lt_iff_unbounded_le [Preorder α] [NoMaxOrder α] :
Unbounded (· < ·) s ↔ Unbounded (· ≤ ·) s := by
simp_rw [← not_bounded_iff, bounded_le_iff_bounded_lt]
#align set.unbounded_lt_iff_unbounded_le Set.unbounded_lt_iff_unbounded_le
-/
/-! #### Greater and greater or equal -/
#print Set.bounded_ge_of_bounded_gt /-
theorem bounded_ge_of_bounded_gt [Preorder α] (h : Bounded (· > ·) s) : Bounded (· ≥ ·) s :=
let ⟨a, ha⟩ := h
⟨a, fun b hb => le_of_lt (ha b hb)⟩
#align set.bounded_ge_of_bounded_gt Set.bounded_ge_of_bounded_gt
-/
#print Set.unbounded_gt_of_unbounded_ge /-
theorem unbounded_gt_of_unbounded_ge [Preorder α] (h : Unbounded (· ≥ ·) s) : Unbounded (· > ·) s :=
fun a =>
let ⟨b, hb, hba⟩ := h a
⟨b, hb, fun hba' => hba (le_of_lt hba')⟩
#align set.unbounded_gt_of_unbounded_ge Set.unbounded_gt_of_unbounded_ge
-/
#print Set.bounded_ge_iff_bounded_gt /-
theorem bounded_ge_iff_bounded_gt [Preorder α] [NoMinOrder α] :
Bounded (· ≥ ·) s ↔ Bounded (· > ·) s :=
@bounded_le_iff_bounded_lt αᵒᵈ _ _ _
#align set.bounded_ge_iff_bounded_gt Set.bounded_ge_iff_bounded_gt
-/
#print Set.unbounded_gt_iff_unbounded_ge /-
theorem unbounded_gt_iff_unbounded_ge [Preorder α] [NoMinOrder α] :
Unbounded (· > ·) s ↔ Unbounded (· ≥ ·) s :=
@unbounded_lt_iff_unbounded_le αᵒᵈ _ _ _
#align set.unbounded_gt_iff_unbounded_ge Set.unbounded_gt_iff_unbounded_ge
-/
/-! ### The universal set -/
#print Set.unbounded_le_univ /-
theorem unbounded_le_univ [LE α] [NoTopOrder α] : Unbounded (· ≤ ·) (@Set.univ α) := fun a =>
let ⟨b, hb⟩ := exists_not_le a
⟨b, ⟨⟩, hb⟩
#align set.unbounded_le_univ Set.unbounded_le_univ
-/
#print Set.unbounded_lt_univ /-
theorem unbounded_lt_univ [Preorder α] [NoTopOrder α] : Unbounded (· < ·) (@Set.univ α) :=
unbounded_lt_of_unbounded_le unbounded_le_univ
#align set.unbounded_lt_univ Set.unbounded_lt_univ
-/
#print Set.unbounded_ge_univ /-
theorem unbounded_ge_univ [LE α] [NoBotOrder α] : Unbounded (· ≥ ·) (@Set.univ α) := fun a =>
let ⟨b, hb⟩ := exists_not_ge a
⟨b, ⟨⟩, hb⟩
#align set.unbounded_ge_univ Set.unbounded_ge_univ
-/
#print Set.unbounded_gt_univ /-
theorem unbounded_gt_univ [Preorder α] [NoBotOrder α] : Unbounded (· > ·) (@Set.univ α) :=
unbounded_gt_of_unbounded_ge unbounded_ge_univ
#align set.unbounded_gt_univ Set.unbounded_gt_univ
-/
/-! ### Bounded and unbounded intervals -/
#print Set.bounded_self /-
theorem bounded_self (a : α) : Bounded r { b | r b a } :=
⟨a, fun x => id⟩
#align set.bounded_self Set.bounded_self
-/
/-! #### Half-open bounded intervals -/
#print Set.bounded_lt_Iio /-
theorem bounded_lt_Iio [Preorder α] (a : α) : Bounded (· < ·) (Set.Iio a) :=
bounded_self a
#align set.bounded_lt_Iio Set.bounded_lt_Iio
-/
#print Set.bounded_le_Iio /-
theorem bounded_le_Iio [Preorder α] (a : α) : Bounded (· ≤ ·) (Set.Iio a) :=
bounded_le_of_bounded_lt (bounded_lt_Iio a)
#align set.bounded_le_Iio Set.bounded_le_Iio
-/
#print Set.bounded_le_Iic /-
theorem bounded_le_Iic [Preorder α] (a : α) : Bounded (· ≤ ·) (Set.Iic a) :=
bounded_self a
#align set.bounded_le_Iic Set.bounded_le_Iic
-/
#print Set.bounded_lt_Iic /-
theorem bounded_lt_Iic [Preorder α] [NoMaxOrder α] (a : α) : Bounded (· < ·) (Set.Iic a) := by
simp only [← bounded_le_iff_bounded_lt, bounded_le_Iic]
#align set.bounded_lt_Iic Set.bounded_lt_Iic
-/
#print Set.bounded_gt_Ioi /-
theorem bounded_gt_Ioi [Preorder α] (a : α) : Bounded (· > ·) (Set.Ioi a) :=
bounded_self a
#align set.bounded_gt_Ioi Set.bounded_gt_Ioi
-/
#print Set.bounded_ge_Ioi /-
theorem bounded_ge_Ioi [Preorder α] (a : α) : Bounded (· ≥ ·) (Set.Ioi a) :=
bounded_ge_of_bounded_gt (bounded_gt_Ioi a)
#align set.bounded_ge_Ioi Set.bounded_ge_Ioi
-/
#print Set.bounded_ge_Ici /-
theorem bounded_ge_Ici [Preorder α] (a : α) : Bounded (· ≥ ·) (Set.Ici a) :=
bounded_self a
#align set.bounded_ge_Ici Set.bounded_ge_Ici
-/
#print Set.bounded_gt_Ici /-
theorem bounded_gt_Ici [Preorder α] [NoMinOrder α] (a : α) : Bounded (· > ·) (Set.Ici a) := by
simp only [← bounded_ge_iff_bounded_gt, bounded_ge_Ici]
#align set.bounded_gt_Ici Set.bounded_gt_Ici
-/
/-! #### Other bounded intervals -/
#print Set.bounded_lt_Ioo /-
theorem bounded_lt_Ioo [Preorder α] (a b : α) : Bounded (· < ·) (Set.Ioo a b) :=
(bounded_lt_Iio b).mono Set.Ioo_subset_Iio_self
#align set.bounded_lt_Ioo Set.bounded_lt_Ioo
-/
#print Set.bounded_lt_Ico /-
theorem bounded_lt_Ico [Preorder α] (a b : α) : Bounded (· < ·) (Set.Ico a b) :=
(bounded_lt_Iio b).mono Set.Ico_subset_Iio_self
#align set.bounded_lt_Ico Set.bounded_lt_Ico
-/
#print Set.bounded_lt_Ioc /-
theorem bounded_lt_Ioc [Preorder α] [NoMaxOrder α] (a b : α) : Bounded (· < ·) (Set.Ioc a b) :=
(bounded_lt_Iic b).mono Set.Ioc_subset_Iic_self
#align set.bounded_lt_Ioc Set.bounded_lt_Ioc
-/
#print Set.bounded_lt_Icc /-
theorem bounded_lt_Icc [Preorder α] [NoMaxOrder α] (a b : α) : Bounded (· < ·) (Set.Icc a b) :=
(bounded_lt_Iic b).mono Set.Icc_subset_Iic_self
#align set.bounded_lt_Icc Set.bounded_lt_Icc
-/
#print Set.bounded_le_Ioo /-
theorem bounded_le_Ioo [Preorder α] (a b : α) : Bounded (· ≤ ·) (Set.Ioo a b) :=
(bounded_le_Iio b).mono Set.Ioo_subset_Iio_self
#align set.bounded_le_Ioo Set.bounded_le_Ioo
-/
#print Set.bounded_le_Ico /-
theorem bounded_le_Ico [Preorder α] (a b : α) : Bounded (· ≤ ·) (Set.Ico a b) :=
(bounded_le_Iio b).mono Set.Ico_subset_Iio_self
#align set.bounded_le_Ico Set.bounded_le_Ico
-/
#print Set.bounded_le_Ioc /-
theorem bounded_le_Ioc [Preorder α] (a b : α) : Bounded (· ≤ ·) (Set.Ioc a b) :=
(bounded_le_Iic b).mono Set.Ioc_subset_Iic_self
#align set.bounded_le_Ioc Set.bounded_le_Ioc
-/
#print Set.bounded_le_Icc /-
theorem bounded_le_Icc [Preorder α] (a b : α) : Bounded (· ≤ ·) (Set.Icc a b) :=
(bounded_le_Iic b).mono Set.Icc_subset_Iic_self
#align set.bounded_le_Icc Set.bounded_le_Icc
-/
#print Set.bounded_gt_Ioo /-
theorem bounded_gt_Ioo [Preorder α] (a b : α) : Bounded (· > ·) (Set.Ioo a b) :=
(bounded_gt_Ioi a).mono Set.Ioo_subset_Ioi_self
#align set.bounded_gt_Ioo Set.bounded_gt_Ioo
-/
#print Set.bounded_gt_Ioc /-
theorem bounded_gt_Ioc [Preorder α] (a b : α) : Bounded (· > ·) (Set.Ioc a b) :=
(bounded_gt_Ioi a).mono Set.Ioc_subset_Ioi_self
#align set.bounded_gt_Ioc Set.bounded_gt_Ioc
-/
#print Set.bounded_gt_Ico /-
theorem bounded_gt_Ico [Preorder α] [NoMinOrder α] (a b : α) : Bounded (· > ·) (Set.Ico a b) :=
(bounded_gt_Ici a).mono Set.Ico_subset_Ici_self
#align set.bounded_gt_Ico Set.bounded_gt_Ico
-/
#print Set.bounded_gt_Icc /-
theorem bounded_gt_Icc [Preorder α] [NoMinOrder α] (a b : α) : Bounded (· > ·) (Set.Icc a b) :=
(bounded_gt_Ici a).mono Set.Icc_subset_Ici_self
#align set.bounded_gt_Icc Set.bounded_gt_Icc
-/
#print Set.bounded_ge_Ioo /-
theorem bounded_ge_Ioo [Preorder α] (a b : α) : Bounded (· ≥ ·) (Set.Ioo a b) :=
(bounded_ge_Ioi a).mono Set.Ioo_subset_Ioi_self
#align set.bounded_ge_Ioo Set.bounded_ge_Ioo
-/
#print Set.bounded_ge_Ioc /-
theorem bounded_ge_Ioc [Preorder α] (a b : α) : Bounded (· ≥ ·) (Set.Ioc a b) :=
(bounded_ge_Ioi a).mono Set.Ioc_subset_Ioi_self
#align set.bounded_ge_Ioc Set.bounded_ge_Ioc
-/
#print Set.bounded_ge_Ico /-
theorem bounded_ge_Ico [Preorder α] (a b : α) : Bounded (· ≥ ·) (Set.Ico a b) :=
(bounded_ge_Ici a).mono Set.Ico_subset_Ici_self
#align set.bounded_ge_Ico Set.bounded_ge_Ico
-/
#print Set.bounded_ge_Icc /-
theorem bounded_ge_Icc [Preorder α] (a b : α) : Bounded (· ≥ ·) (Set.Icc a b) :=
(bounded_ge_Ici a).mono Set.Icc_subset_Ici_self
#align set.bounded_ge_Icc Set.bounded_ge_Icc
-/
/-! #### Unbounded intervals -/
#print Set.unbounded_le_Ioi /-
theorem unbounded_le_Ioi [SemilatticeSup α] [NoMaxOrder α] (a : α) :
Unbounded (· ≤ ·) (Set.Ioi a) := fun b =>
let ⟨c, hc⟩ := exists_gt (a ⊔ b)
⟨c, le_sup_left.trans_lt hc, (le_sup_right.trans_lt hc).not_le⟩
#align set.unbounded_le_Ioi Set.unbounded_le_Ioi
-/
#print Set.unbounded_le_Ici /-
theorem unbounded_le_Ici [SemilatticeSup α] [NoMaxOrder α] (a : α) :
Unbounded (· ≤ ·) (Set.Ici a) :=
(unbounded_le_Ioi a).mono Set.Ioi_subset_Ici_self
#align set.unbounded_le_Ici Set.unbounded_le_Ici
-/
#print Set.unbounded_lt_Ioi /-
theorem unbounded_lt_Ioi [SemilatticeSup α] [NoMaxOrder α] (a : α) :
Unbounded (· < ·) (Set.Ioi a) :=
unbounded_lt_of_unbounded_le (unbounded_le_Ioi a)
#align set.unbounded_lt_Ioi Set.unbounded_lt_Ioi
-/
#print Set.unbounded_lt_Ici /-
theorem unbounded_lt_Ici [SemilatticeSup α] (a : α) : Unbounded (· < ·) (Set.Ici a) := fun b =>
⟨a ⊔ b, le_sup_left, le_sup_right.not_lt⟩
#align set.unbounded_lt_Ici Set.unbounded_lt_Ici
-/
/-! ### Bounded initial segments -/
/- warning: set.bounded_inter_not -> Set.bounded_inter_not is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {r : α -> α -> Prop} {s : Set.{u1} α}, (forall (a : α) (b : α), Exists.{succ u1} α (fun (m : α) => forall (c : α), (Or (r c a) (r c b)) -> (r c m))) -> (forall (a : α), Iff (Set.Bounded.{u1} α r (Inter.inter.{u1} (Set.{u1} α) (Set.hasInter.{u1} α) s (setOf.{u1} α (fun (b : α) => Not (r b a))))) (Set.Bounded.{u1} α r s))
but is expected to have type
forall {α : Type.{u1}} {r : α -> α -> Prop} {s : Set.{u1} α}, (forall (a : α) (b : α), Exists.{succ u1} α (fun (m : α) => forall (c : α), (Or (r c a) (r c b)) -> (r c m))) -> (forall (a : α), Iff (Set.Bounded.{u1} α r (Inter.inter.{u1} (Set.{u1} α) (Set.instInterSet.{u1} α) s (setOf.{u1} α (fun (b : α) => Not (r b a))))) (Set.Bounded.{u1} α r s))
Case conversion may be inaccurate. Consider using '#align set.bounded_inter_not Set.bounded_inter_notₓ'. -/
theorem bounded_inter_not (H : ∀ a b, ∃ m, ∀ c, r c a ∨ r c b → r c m) (a : α) :
Bounded r (s ∩ { b | ¬r b a }) ↔ Bounded r s :=
by
refine' ⟨_, bounded.mono (Set.inter_subset_left s _)⟩
rintro ⟨b, hb⟩
cases' H a b with m hm
exact ⟨m, fun c hc => hm c (or_iff_not_imp_left.2 fun hca => hb c ⟨hc, hca⟩)⟩
#align set.bounded_inter_not Set.bounded_inter_not
/- warning: set.unbounded_inter_not -> Set.unbounded_inter_not is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {r : α -> α -> Prop} {s : Set.{u1} α}, (forall (a : α) (b : α), Exists.{succ u1} α (fun (m : α) => forall (c : α), (Or (r c a) (r c b)) -> (r c m))) -> (forall (a : α), Iff (Set.Unbounded.{u1} α r (Inter.inter.{u1} (Set.{u1} α) (Set.hasInter.{u1} α) s (setOf.{u1} α (fun (b : α) => Not (r b a))))) (Set.Unbounded.{u1} α r s))
but is expected to have type
forall {α : Type.{u1}} {r : α -> α -> Prop} {s : Set.{u1} α}, (forall (a : α) (b : α), Exists.{succ u1} α (fun (m : α) => forall (c : α), (Or (r c a) (r c b)) -> (r c m))) -> (forall (a : α), Iff (Set.Unbounded.{u1} α r (Inter.inter.{u1} (Set.{u1} α) (Set.instInterSet.{u1} α) s (setOf.{u1} α (fun (b : α) => Not (r b a))))) (Set.Unbounded.{u1} α r s))
Case conversion may be inaccurate. Consider using '#align set.unbounded_inter_not Set.unbounded_inter_notₓ'. -/
theorem unbounded_inter_not (H : ∀ a b, ∃ m, ∀ c, r c a ∨ r c b → r c m) (a : α) :
Unbounded r (s ∩ { b | ¬r b a }) ↔ Unbounded r s := by
simp_rw [← not_bounded_iff, bounded_inter_not H]
#align set.unbounded_inter_not Set.unbounded_inter_not
/-! #### Less or equal -/
/- warning: set.bounded_le_inter_not_le -> Set.bounded_le_inter_not_le is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : SemilatticeSup.{u1} α] (a : α), Iff (Set.Bounded.{u1} α (LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeSup.toPartialOrder.{u1} α _inst_1)))) (Inter.inter.{u1} (Set.{u1} α) (Set.hasInter.{u1} α) s (setOf.{u1} α (fun (b : α) => Not (LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeSup.toPartialOrder.{u1} α _inst_1))) b a))))) (Set.Bounded.{u1} α (LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeSup.toPartialOrder.{u1} α _inst_1)))) s)
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : SemilatticeSup.{u1} α] (a : α), Iff (Set.Bounded.{u1} α (fun ([email protected]._hyg.3253 : α) ([email protected]._hyg.3255 : α) => LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeSup.toPartialOrder.{u1} α _inst_1))) [email protected]._hyg.3253 [email protected]._hyg.3255) (Inter.inter.{u1} (Set.{u1} α) (Set.instInterSet.{u1} α) s (setOf.{u1} α (fun (b : α) => Not (LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeSup.toPartialOrder.{u1} α _inst_1))) b a))))) (Set.Bounded.{u1} α (fun ([email protected]._hyg.3289 : α) ([email protected]._hyg.3291 : α) => LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeSup.toPartialOrder.{u1} α _inst_1))) [email protected]._hyg.3289 [email protected]._hyg.3291) s)
Case conversion may be inaccurate. Consider using '#align set.bounded_le_inter_not_le Set.bounded_le_inter_not_leₓ'. -/
theorem bounded_le_inter_not_le [SemilatticeSup α] (a : α) :
Bounded (· ≤ ·) (s ∩ { b | ¬b ≤ a }) ↔ Bounded (· ≤ ·) s :=
bounded_inter_not (fun x y => ⟨x ⊔ y, fun z h => h.elim le_sup_of_le_left le_sup_of_le_right⟩) a
#align set.bounded_le_inter_not_le Set.bounded_le_inter_not_le
/- warning: set.unbounded_le_inter_not_le -> Set.unbounded_le_inter_not_le is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : SemilatticeSup.{u1} α] (a : α), Iff (Set.Unbounded.{u1} α (LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeSup.toPartialOrder.{u1} α _inst_1)))) (Inter.inter.{u1} (Set.{u1} α) (Set.hasInter.{u1} α) s (setOf.{u1} α (fun (b : α) => Not (LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeSup.toPartialOrder.{u1} α _inst_1))) b a))))) (Set.Unbounded.{u1} α (LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeSup.toPartialOrder.{u1} α _inst_1)))) s)
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : SemilatticeSup.{u1} α] (a : α), Iff (Set.Unbounded.{u1} α (fun ([email protected]._hyg.3350 : α) ([email protected]._hyg.3352 : α) => LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeSup.toPartialOrder.{u1} α _inst_1))) [email protected]._hyg.3350 [email protected]._hyg.3352) (Inter.inter.{u1} (Set.{u1} α) (Set.instInterSet.{u1} α) s (setOf.{u1} α (fun (b : α) => Not (LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeSup.toPartialOrder.{u1} α _inst_1))) b a))))) (Set.Unbounded.{u1} α (fun ([email protected]._hyg.3386 : α) ([email protected]._hyg.3388 : α) => LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeSup.toPartialOrder.{u1} α _inst_1))) [email protected]._hyg.3386 [email protected]._hyg.3388) s)
Case conversion may be inaccurate. Consider using '#align set.unbounded_le_inter_not_le Set.unbounded_le_inter_not_leₓ'. -/
theorem unbounded_le_inter_not_le [SemilatticeSup α] (a : α) :
Unbounded (· ≤ ·) (s ∩ { b | ¬b ≤ a }) ↔ Unbounded (· ≤ ·) s :=
by
rw [← not_bounded_iff, ← not_bounded_iff, not_iff_not]
exact bounded_le_inter_not_le a
#align set.unbounded_le_inter_not_le Set.unbounded_le_inter_not_le
/- warning: set.bounded_le_inter_lt -> Set.bounded_le_inter_lt is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] (a : α), Iff (Set.Bounded.{u1} α (LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) (Inter.inter.{u1} (Set.{u1} α) (Set.hasInter.{u1} α) s (setOf.{u1} α (fun (b : α) => LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1))))) a b)))) (Set.Bounded.{u1} α (LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) s)
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] (a : α), Iff (Set.Bounded.{u1} α (fun ([email protected]._hyg.3459 : α) ([email protected]._hyg.3461 : α) => LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.3459 [email protected]._hyg.3461) (Inter.inter.{u1} (Set.{u1} α) (Set.instInterSet.{u1} α) s (setOf.{u1} α (fun (b : α) => LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) a b)))) (Set.Bounded.{u1} α (fun ([email protected]._hyg.3492 : α) ([email protected]._hyg.3494 : α) => LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.3492 [email protected]._hyg.3494) s)
Case conversion may be inaccurate. Consider using '#align set.bounded_le_inter_lt Set.bounded_le_inter_ltₓ'. -/
theorem bounded_le_inter_lt [LinearOrder α] (a : α) :
Bounded (· ≤ ·) (s ∩ { b | a < b }) ↔ Bounded (· ≤ ·) s := by
simp_rw [← not_le, bounded_le_inter_not_le]
#align set.bounded_le_inter_lt Set.bounded_le_inter_lt
/- warning: set.unbounded_le_inter_lt -> Set.unbounded_le_inter_lt is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] (a : α), Iff (Set.Unbounded.{u1} α (LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) (Inter.inter.{u1} (Set.{u1} α) (Set.hasInter.{u1} α) s (setOf.{u1} α (fun (b : α) => LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1))))) a b)))) (Set.Unbounded.{u1} α (LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) s)
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] (a : α), Iff (Set.Unbounded.{u1} α (fun ([email protected]._hyg.3537 : α) ([email protected]._hyg.3539 : α) => LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.3537 [email protected]._hyg.3539) (Inter.inter.{u1} (Set.{u1} α) (Set.instInterSet.{u1} α) s (setOf.{u1} α (fun (b : α) => LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) a b)))) (Set.Unbounded.{u1} α (fun ([email protected]._hyg.3570 : α) ([email protected]._hyg.3572 : α) => LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.3570 [email protected]._hyg.3572) s)
Case conversion may be inaccurate. Consider using '#align set.unbounded_le_inter_lt Set.unbounded_le_inter_ltₓ'. -/
theorem unbounded_le_inter_lt [LinearOrder α] (a : α) :
Unbounded (· ≤ ·) (s ∩ { b | a < b }) ↔ Unbounded (· ≤ ·) s :=
by
convert unbounded_le_inter_not_le a
ext
exact lt_iff_not_le
#align set.unbounded_le_inter_lt Set.unbounded_le_inter_lt
/- warning: set.bounded_le_inter_le -> Set.bounded_le_inter_le is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] (a : α), Iff (Set.Bounded.{u1} α (LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) (Inter.inter.{u1} (Set.{u1} α) (Set.hasInter.{u1} α) s (setOf.{u1} α (fun (b : α) => LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1))))) a b)))) (Set.Bounded.{u1} α (LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) s)
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] (a : α), Iff (Set.Bounded.{u1} α (fun ([email protected]._hyg.3680 : α) ([email protected]._hyg.3682 : α) => LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.3680 [email protected]._hyg.3682) (Inter.inter.{u1} (Set.{u1} α) (Set.instInterSet.{u1} α) s (setOf.{u1} α (fun (b : α) => LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) a b)))) (Set.Bounded.{u1} α (fun ([email protected]._hyg.3713 : α) ([email protected]._hyg.3715 : α) => LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.3713 [email protected]._hyg.3715) s)
Case conversion may be inaccurate. Consider using '#align set.bounded_le_inter_le Set.bounded_le_inter_leₓ'. -/
theorem bounded_le_inter_le [LinearOrder α] (a : α) :
Bounded (· ≤ ·) (s ∩ { b | a ≤ b }) ↔ Bounded (· ≤ ·) s :=
by
refine' ⟨_, Bounded.mono (Set.inter_subset_left s _)⟩
rw [← @bounded_le_inter_lt _ s _ a]
exact Bounded.mono fun x ⟨hx, hx'⟩ => ⟨hx, le_of_lt hx'⟩
#align set.bounded_le_inter_le Set.bounded_le_inter_le
/- warning: set.unbounded_le_inter_le -> Set.unbounded_le_inter_le is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] (a : α), Iff (Set.Unbounded.{u1} α (LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) (Inter.inter.{u1} (Set.{u1} α) (Set.hasInter.{u1} α) s (setOf.{u1} α (fun (b : α) => LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1))))) a b)))) (Set.Unbounded.{u1} α (LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) s)
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] (a : α), Iff (Set.Unbounded.{u1} α (fun ([email protected]._hyg.3822 : α) ([email protected]._hyg.3824 : α) => LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.3822 [email protected]._hyg.3824) (Inter.inter.{u1} (Set.{u1} α) (Set.instInterSet.{u1} α) s (setOf.{u1} α (fun (b : α) => LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) a b)))) (Set.Unbounded.{u1} α (fun ([email protected]._hyg.3855 : α) ([email protected]._hyg.3857 : α) => LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.3855 [email protected]._hyg.3857) s)
Case conversion may be inaccurate. Consider using '#align set.unbounded_le_inter_le Set.unbounded_le_inter_leₓ'. -/
theorem unbounded_le_inter_le [LinearOrder α] (a : α) :
Unbounded (· ≤ ·) (s ∩ { b | a ≤ b }) ↔ Unbounded (· ≤ ·) s :=
by
rw [← not_bounded_iff, ← not_bounded_iff, not_iff_not]
exact bounded_le_inter_le a
#align set.unbounded_le_inter_le Set.unbounded_le_inter_le
/-! #### Less than -/
/- warning: set.bounded_lt_inter_not_lt -> Set.bounded_lt_inter_not_lt is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : SemilatticeSup.{u1} α] (a : α), Iff (Set.Bounded.{u1} α (LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeSup.toPartialOrder.{u1} α _inst_1)))) (Inter.inter.{u1} (Set.{u1} α) (Set.hasInter.{u1} α) s (setOf.{u1} α (fun (b : α) => Not (LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeSup.toPartialOrder.{u1} α _inst_1))) b a))))) (Set.Bounded.{u1} α (LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeSup.toPartialOrder.{u1} α _inst_1)))) s)
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : SemilatticeSup.{u1} α] (a : α), Iff (Set.Bounded.{u1} α (fun ([email protected]._hyg.3929 : α) ([email protected]._hyg.3931 : α) => LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeSup.toPartialOrder.{u1} α _inst_1))) [email protected]._hyg.3929 [email protected]._hyg.3931) (Inter.inter.{u1} (Set.{u1} α) (Set.instInterSet.{u1} α) s (setOf.{u1} α (fun (b : α) => Not (LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeSup.toPartialOrder.{u1} α _inst_1))) b a))))) (Set.Bounded.{u1} α (fun ([email protected]._hyg.3965 : α) ([email protected]._hyg.3967 : α) => LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeSup.toPartialOrder.{u1} α _inst_1))) [email protected]._hyg.3965 [email protected]._hyg.3967) s)
Case conversion may be inaccurate. Consider using '#align set.bounded_lt_inter_not_lt Set.bounded_lt_inter_not_ltₓ'. -/
theorem bounded_lt_inter_not_lt [SemilatticeSup α] (a : α) :
Bounded (· < ·) (s ∩ { b | ¬b < a }) ↔ Bounded (· < ·) s :=
bounded_inter_not (fun x y => ⟨x ⊔ y, fun z h => h.elim lt_sup_of_lt_left lt_sup_of_lt_right⟩) a
#align set.bounded_lt_inter_not_lt Set.bounded_lt_inter_not_lt
/- warning: set.unbounded_lt_inter_not_lt -> Set.unbounded_lt_inter_not_lt is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : SemilatticeSup.{u1} α] (a : α), Iff (Set.Unbounded.{u1} α (LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeSup.toPartialOrder.{u1} α _inst_1)))) (Inter.inter.{u1} (Set.{u1} α) (Set.hasInter.{u1} α) s (setOf.{u1} α (fun (b : α) => Not (LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeSup.toPartialOrder.{u1} α _inst_1))) b a))))) (Set.Unbounded.{u1} α (LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeSup.toPartialOrder.{u1} α _inst_1)))) s)
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : SemilatticeSup.{u1} α] (a : α), Iff (Set.Unbounded.{u1} α (fun ([email protected]._hyg.4026 : α) ([email protected]._hyg.4028 : α) => LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeSup.toPartialOrder.{u1} α _inst_1))) [email protected]._hyg.4026 [email protected]._hyg.4028) (Inter.inter.{u1} (Set.{u1} α) (Set.instInterSet.{u1} α) s (setOf.{u1} α (fun (b : α) => Not (LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeSup.toPartialOrder.{u1} α _inst_1))) b a))))) (Set.Unbounded.{u1} α (fun ([email protected]._hyg.4062 : α) ([email protected]._hyg.4064 : α) => LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeSup.toPartialOrder.{u1} α _inst_1))) [email protected]._hyg.4062 [email protected]._hyg.4064) s)
Case conversion may be inaccurate. Consider using '#align set.unbounded_lt_inter_not_lt Set.unbounded_lt_inter_not_ltₓ'. -/
theorem unbounded_lt_inter_not_lt [SemilatticeSup α] (a : α) :
Unbounded (· < ·) (s ∩ { b | ¬b < a }) ↔ Unbounded (· < ·) s :=
by
rw [← not_bounded_iff, ← not_bounded_iff, not_iff_not]
exact bounded_lt_inter_not_lt a
#align set.unbounded_lt_inter_not_lt Set.unbounded_lt_inter_not_lt
/- warning: set.bounded_lt_inter_le -> Set.bounded_lt_inter_le is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] (a : α), Iff (Set.Bounded.{u1} α (LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) (Inter.inter.{u1} (Set.{u1} α) (Set.hasInter.{u1} α) s (setOf.{u1} α (fun (b : α) => LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1))))) a b)))) (Set.Bounded.{u1} α (LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) s)
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] (a : α), Iff (Set.Bounded.{u1} α (fun ([email protected]._hyg.4135 : α) ([email protected]._hyg.4137 : α) => LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.4135 [email protected]._hyg.4137) (Inter.inter.{u1} (Set.{u1} α) (Set.instInterSet.{u1} α) s (setOf.{u1} α (fun (b : α) => LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) a b)))) (Set.Bounded.{u1} α (fun ([email protected]._hyg.4168 : α) ([email protected]._hyg.4170 : α) => LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.4168 [email protected]._hyg.4170) s)
Case conversion may be inaccurate. Consider using '#align set.bounded_lt_inter_le Set.bounded_lt_inter_leₓ'. -/
theorem bounded_lt_inter_le [LinearOrder α] (a : α) :
Bounded (· < ·) (s ∩ { b | a ≤ b }) ↔ Bounded (· < ·) s :=
by
convert bounded_lt_inter_not_lt a
ext
exact not_lt.symm
#align set.bounded_lt_inter_le Set.bounded_lt_inter_le
/- warning: set.unbounded_lt_inter_le -> Set.unbounded_lt_inter_le is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] (a : α), Iff (Set.Unbounded.{u1} α (LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) (Inter.inter.{u1} (Set.{u1} α) (Set.hasInter.{u1} α) s (setOf.{u1} α (fun (b : α) => LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1))))) a b)))) (Set.Unbounded.{u1} α (LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) s)
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] (a : α), Iff (Set.Unbounded.{u1} α (fun ([email protected]._hyg.4278 : α) ([email protected]._hyg.4280 : α) => LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.4278 [email protected]._hyg.4280) (Inter.inter.{u1} (Set.{u1} α) (Set.instInterSet.{u1} α) s (setOf.{u1} α (fun (b : α) => LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) a b)))) (Set.Unbounded.{u1} α (fun ([email protected]._hyg.4311 : α) ([email protected]._hyg.4313 : α) => LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.4311 [email protected]._hyg.4313) s)
Case conversion may be inaccurate. Consider using '#align set.unbounded_lt_inter_le Set.unbounded_lt_inter_leₓ'. -/
theorem unbounded_lt_inter_le [LinearOrder α] (a : α) :
Unbounded (· < ·) (s ∩ { b | a ≤ b }) ↔ Unbounded (· < ·) s :=
by
convert unbounded_lt_inter_not_lt a
ext
exact not_lt.symm
#align set.unbounded_lt_inter_le Set.unbounded_lt_inter_le
/- warning: set.bounded_lt_inter_lt -> Set.bounded_lt_inter_lt is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] [_inst_2 : NoMaxOrder.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))] (a : α), Iff (Set.Bounded.{u1} α (LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) (Inter.inter.{u1} (Set.{u1} α) (Set.hasInter.{u1} α) s (setOf.{u1} α (fun (b : α) => LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1))))) a b)))) (Set.Bounded.{u1} α (LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) s)
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] [_inst_2 : NoMaxOrder.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1))))))] (a : α), Iff (Set.Bounded.{u1} α (fun ([email protected]._hyg.4424 : α) ([email protected]._hyg.4426 : α) => LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.4424 [email protected]._hyg.4426) (Inter.inter.{u1} (Set.{u1} α) (Set.instInterSet.{u1} α) s (setOf.{u1} α (fun (b : α) => LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) a b)))) (Set.Bounded.{u1} α (fun ([email protected]._hyg.4457 : α) ([email protected]._hyg.4459 : α) => LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.4457 [email protected]._hyg.4459) s)
Case conversion may be inaccurate. Consider using '#align set.bounded_lt_inter_lt Set.bounded_lt_inter_ltₓ'. -/
theorem bounded_lt_inter_lt [LinearOrder α] [NoMaxOrder α] (a : α) :
Bounded (· < ·) (s ∩ { b | a < b }) ↔ Bounded (· < ·) s :=
by
rw [← bounded_le_iff_bounded_lt, ← bounded_le_iff_bounded_lt]
exact bounded_le_inter_lt a
#align set.bounded_lt_inter_lt Set.bounded_lt_inter_lt
/- warning: set.unbounded_lt_inter_lt -> Set.unbounded_lt_inter_lt is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] [_inst_2 : NoMaxOrder.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))] (a : α), Iff (Set.Unbounded.{u1} α (LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) (Inter.inter.{u1} (Set.{u1} α) (Set.hasInter.{u1} α) s (setOf.{u1} α (fun (b : α) => LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1))))) a b)))) (Set.Unbounded.{u1} α (LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) s)
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] [_inst_2 : NoMaxOrder.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1))))))] (a : α), Iff (Set.Unbounded.{u1} α (fun ([email protected]._hyg.4532 : α) ([email protected]._hyg.4534 : α) => LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.4532 [email protected]._hyg.4534) (Inter.inter.{u1} (Set.{u1} α) (Set.instInterSet.{u1} α) s (setOf.{u1} α (fun (b : α) => LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) a b)))) (Set.Unbounded.{u1} α (fun ([email protected]._hyg.4565 : α) ([email protected]._hyg.4567 : α) => LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.4565 [email protected]._hyg.4567) s)
Case conversion may be inaccurate. Consider using '#align set.unbounded_lt_inter_lt Set.unbounded_lt_inter_ltₓ'. -/
theorem unbounded_lt_inter_lt [LinearOrder α] [NoMaxOrder α] (a : α) :
Unbounded (· < ·) (s ∩ { b | a < b }) ↔ Unbounded (· < ·) s :=
by
rw [← not_bounded_iff, ← not_bounded_iff, not_iff_not]
exact bounded_lt_inter_lt a
#align set.unbounded_lt_inter_lt Set.unbounded_lt_inter_lt
/-! #### Greater or equal -/
/- warning: set.bounded_ge_inter_not_ge -> Set.bounded_ge_inter_not_ge is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : SemilatticeInf.{u1} α] (a : α), Iff (Set.Bounded.{u1} α (GE.ge.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α _inst_1)))) (Inter.inter.{u1} (Set.{u1} α) (Set.hasInter.{u1} α) s (setOf.{u1} α (fun (b : α) => Not (LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α _inst_1))) a b))))) (Set.Bounded.{u1} α (GE.ge.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α _inst_1)))) s)
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : SemilatticeInf.{u1} α] (a : α), Iff (Set.Bounded.{u1} α (fun ([email protected]._hyg.4639 : α) ([email protected]._hyg.4641 : α) => GE.ge.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α _inst_1))) [email protected]._hyg.4639 [email protected]._hyg.4641) (Inter.inter.{u1} (Set.{u1} α) (Set.instInterSet.{u1} α) s (setOf.{u1} α (fun (b : α) => Not (LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α _inst_1))) a b))))) (Set.Bounded.{u1} α (fun ([email protected]._hyg.4675 : α) ([email protected]._hyg.4677 : α) => GE.ge.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α _inst_1))) [email protected]._hyg.4675 [email protected]._hyg.4677) s)
Case conversion may be inaccurate. Consider using '#align set.bounded_ge_inter_not_ge Set.bounded_ge_inter_not_geₓ'. -/
theorem bounded_ge_inter_not_ge [SemilatticeInf α] (a : α) :
Bounded (· ≥ ·) (s ∩ { b | ¬a ≤ b }) ↔ Bounded (· ≥ ·) s :=
@bounded_le_inter_not_le αᵒᵈ s _ a
#align set.bounded_ge_inter_not_ge Set.bounded_ge_inter_not_ge
/- warning: set.unbounded_ge_inter_not_ge -> Set.unbounded_ge_inter_not_ge is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : SemilatticeInf.{u1} α] (a : α), Iff (Set.Unbounded.{u1} α (GE.ge.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α _inst_1)))) (Inter.inter.{u1} (Set.{u1} α) (Set.hasInter.{u1} α) s (setOf.{u1} α (fun (b : α) => Not (LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α _inst_1))) a b))))) (Set.Unbounded.{u1} α (GE.ge.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α _inst_1)))) s)
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : SemilatticeInf.{u1} α] (a : α), Iff (Set.Unbounded.{u1} α (fun ([email protected]._hyg.4718 : α) ([email protected]._hyg.4720 : α) => GE.ge.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α _inst_1))) [email protected]._hyg.4718 [email protected]._hyg.4720) (Inter.inter.{u1} (Set.{u1} α) (Set.instInterSet.{u1} α) s (setOf.{u1} α (fun (b : α) => Not (LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α _inst_1))) a b))))) (Set.Unbounded.{u1} α (fun ([email protected]._hyg.4754 : α) ([email protected]._hyg.4756 : α) => GE.ge.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α _inst_1))) [email protected]._hyg.4754 [email protected]._hyg.4756) s)
Case conversion may be inaccurate. Consider using '#align set.unbounded_ge_inter_not_ge Set.unbounded_ge_inter_not_geₓ'. -/
theorem unbounded_ge_inter_not_ge [SemilatticeInf α] (a : α) :
Unbounded (· ≥ ·) (s ∩ { b | ¬a ≤ b }) ↔ Unbounded (· ≥ ·) s :=
@unbounded_le_inter_not_le αᵒᵈ s _ a
#align set.unbounded_ge_inter_not_ge Set.unbounded_ge_inter_not_ge
/- warning: set.bounded_ge_inter_gt -> Set.bounded_ge_inter_gt is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] (a : α), Iff (Set.Bounded.{u1} α (GE.ge.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) (Inter.inter.{u1} (Set.{u1} α) (Set.hasInter.{u1} α) s (setOf.{u1} α (fun (b : α) => LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1))))) b a)))) (Set.Bounded.{u1} α (GE.ge.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) s)
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] (a : α), Iff (Set.Bounded.{u1} α (fun ([email protected]._hyg.4797 : α) ([email protected]._hyg.4799 : α) => GE.ge.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.4797 [email protected]._hyg.4799) (Inter.inter.{u1} (Set.{u1} α) (Set.instInterSet.{u1} α) s (setOf.{u1} α (fun (b : α) => LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) b a)))) (Set.Bounded.{u1} α (fun ([email protected]._hyg.4830 : α) ([email protected]._hyg.4832 : α) => GE.ge.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.4830 [email protected]._hyg.4832) s)
Case conversion may be inaccurate. Consider using '#align set.bounded_ge_inter_gt Set.bounded_ge_inter_gtₓ'. -/
theorem bounded_ge_inter_gt [LinearOrder α] (a : α) :
Bounded (· ≥ ·) (s ∩ { b | b < a }) ↔ Bounded (· ≥ ·) s :=
@bounded_le_inter_lt αᵒᵈ s _ a
#align set.bounded_ge_inter_gt Set.bounded_ge_inter_gt
/- warning: set.unbounded_ge_inter_gt -> Set.unbounded_ge_inter_gt is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] (a : α), Iff (Set.Unbounded.{u1} α (GE.ge.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) (Inter.inter.{u1} (Set.{u1} α) (Set.hasInter.{u1} α) s (setOf.{u1} α (fun (b : α) => LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1))))) b a)))) (Set.Unbounded.{u1} α (GE.ge.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) s)
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] (a : α), Iff (Set.Unbounded.{u1} α (fun ([email protected]._hyg.4873 : α) ([email protected]._hyg.4875 : α) => GE.ge.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.4873 [email protected]._hyg.4875) (Inter.inter.{u1} (Set.{u1} α) (Set.instInterSet.{u1} α) s (setOf.{u1} α (fun (b : α) => LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) b a)))) (Set.Unbounded.{u1} α (fun ([email protected]._hyg.4906 : α) ([email protected]._hyg.4908 : α) => GE.ge.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.4906 [email protected]._hyg.4908) s)
Case conversion may be inaccurate. Consider using '#align set.unbounded_ge_inter_gt Set.unbounded_ge_inter_gtₓ'. -/
theorem unbounded_ge_inter_gt [LinearOrder α] (a : α) :
Unbounded (· ≥ ·) (s ∩ { b | b < a }) ↔ Unbounded (· ≥ ·) s :=
@unbounded_le_inter_lt αᵒᵈ s _ a
#align set.unbounded_ge_inter_gt Set.unbounded_ge_inter_gt
/- warning: set.bounded_ge_inter_ge -> Set.bounded_ge_inter_ge is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] (a : α), Iff (Set.Bounded.{u1} α (GE.ge.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) (Inter.inter.{u1} (Set.{u1} α) (Set.hasInter.{u1} α) s (setOf.{u1} α (fun (b : α) => LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1))))) b a)))) (Set.Bounded.{u1} α (GE.ge.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) s)
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] (a : α), Iff (Set.Bounded.{u1} α (fun ([email protected]._hyg.4949 : α) ([email protected]._hyg.4951 : α) => GE.ge.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.4949 [email protected]._hyg.4951) (Inter.inter.{u1} (Set.{u1} α) (Set.instInterSet.{u1} α) s (setOf.{u1} α (fun (b : α) => LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) b a)))) (Set.Bounded.{u1} α (fun ([email protected]._hyg.4982 : α) ([email protected]._hyg.4984 : α) => GE.ge.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.4982 [email protected]._hyg.4984) s)
Case conversion may be inaccurate. Consider using '#align set.bounded_ge_inter_ge Set.bounded_ge_inter_geₓ'. -/
theorem bounded_ge_inter_ge [LinearOrder α] (a : α) :
Bounded (· ≥ ·) (s ∩ { b | b ≤ a }) ↔ Bounded (· ≥ ·) s :=
@bounded_le_inter_le αᵒᵈ s _ a
#align set.bounded_ge_inter_ge Set.bounded_ge_inter_ge
/- warning: set.unbounded_ge_iff_unbounded_inter_ge -> Set.unbounded_ge_iff_unbounded_inter_ge is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] (a : α), Iff (Set.Unbounded.{u1} α (GE.ge.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) (Inter.inter.{u1} (Set.{u1} α) (Set.hasInter.{u1} α) s (setOf.{u1} α (fun (b : α) => LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1))))) b a)))) (Set.Unbounded.{u1} α (GE.ge.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) s)
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] (a : α), Iff (Set.Unbounded.{u1} α (fun ([email protected]._hyg.5025 : α) ([email protected]._hyg.5027 : α) => GE.ge.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.5025 [email protected]._hyg.5027) (Inter.inter.{u1} (Set.{u1} α) (Set.instInterSet.{u1} α) s (setOf.{u1} α (fun (b : α) => LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) b a)))) (Set.Unbounded.{u1} α (fun ([email protected]._hyg.5058 : α) ([email protected]._hyg.5060 : α) => GE.ge.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.5058 [email protected]._hyg.5060) s)
Case conversion may be inaccurate. Consider using '#align set.unbounded_ge_iff_unbounded_inter_ge Set.unbounded_ge_iff_unbounded_inter_geₓ'. -/
theorem unbounded_ge_iff_unbounded_inter_ge [LinearOrder α] (a : α) :
Unbounded (· ≥ ·) (s ∩ { b | b ≤ a }) ↔ Unbounded (· ≥ ·) s :=
@unbounded_le_inter_le αᵒᵈ s _ a
#align set.unbounded_ge_iff_unbounded_inter_ge Set.unbounded_ge_iff_unbounded_inter_ge
/-! #### Greater than -/
/- warning: set.bounded_gt_inter_not_gt -> Set.bounded_gt_inter_not_gt is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : SemilatticeInf.{u1} α] (a : α), Iff (Set.Bounded.{u1} α (GT.gt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α _inst_1)))) (Inter.inter.{u1} (Set.{u1} α) (Set.hasInter.{u1} α) s (setOf.{u1} α (fun (b : α) => Not (LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α _inst_1))) a b))))) (Set.Bounded.{u1} α (GT.gt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α _inst_1)))) s)
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : SemilatticeInf.{u1} α] (a : α), Iff (Set.Bounded.{u1} α (fun ([email protected]._hyg.5102 : α) ([email protected]._hyg.5104 : α) => GT.gt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α _inst_1))) [email protected]._hyg.5102 [email protected]._hyg.5104) (Inter.inter.{u1} (Set.{u1} α) (Set.instInterSet.{u1} α) s (setOf.{u1} α (fun (b : α) => Not (LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α _inst_1))) a b))))) (Set.Bounded.{u1} α (fun ([email protected]._hyg.5138 : α) ([email protected]._hyg.5140 : α) => GT.gt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α _inst_1))) [email protected]._hyg.5138 [email protected]._hyg.5140) s)
Case conversion may be inaccurate. Consider using '#align set.bounded_gt_inter_not_gt Set.bounded_gt_inter_not_gtₓ'. -/
theorem bounded_gt_inter_not_gt [SemilatticeInf α] (a : α) :
Bounded (· > ·) (s ∩ { b | ¬a < b }) ↔ Bounded (· > ·) s :=
@bounded_lt_inter_not_lt αᵒᵈ s _ a
#align set.bounded_gt_inter_not_gt Set.bounded_gt_inter_not_gt
/- warning: set.unbounded_gt_inter_not_gt -> Set.unbounded_gt_inter_not_gt is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : SemilatticeInf.{u1} α] (a : α), Iff (Set.Unbounded.{u1} α (GT.gt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α _inst_1)))) (Inter.inter.{u1} (Set.{u1} α) (Set.hasInter.{u1} α) s (setOf.{u1} α (fun (b : α) => Not (LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α _inst_1))) a b))))) (Set.Unbounded.{u1} α (GT.gt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α _inst_1)))) s)
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : SemilatticeInf.{u1} α] (a : α), Iff (Set.Unbounded.{u1} α (fun ([email protected]._hyg.5181 : α) ([email protected]._hyg.5183 : α) => GT.gt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α _inst_1))) [email protected]._hyg.5181 [email protected]._hyg.5183) (Inter.inter.{u1} (Set.{u1} α) (Set.instInterSet.{u1} α) s (setOf.{u1} α (fun (b : α) => Not (LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α _inst_1))) a b))))) (Set.Unbounded.{u1} α (fun ([email protected]._hyg.5217 : α) ([email protected]._hyg.5219 : α) => GT.gt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α _inst_1))) [email protected]._hyg.5217 [email protected]._hyg.5219) s)
Case conversion may be inaccurate. Consider using '#align set.unbounded_gt_inter_not_gt Set.unbounded_gt_inter_not_gtₓ'. -/
theorem unbounded_gt_inter_not_gt [SemilatticeInf α] (a : α) :
Unbounded (· > ·) (s ∩ { b | ¬a < b }) ↔ Unbounded (· > ·) s :=
@unbounded_lt_inter_not_lt αᵒᵈ s _ a
#align set.unbounded_gt_inter_not_gt Set.unbounded_gt_inter_not_gt
/- warning: set.bounded_gt_inter_ge -> Set.bounded_gt_inter_ge is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] (a : α), Iff (Set.Bounded.{u1} α (GT.gt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) (Inter.inter.{u1} (Set.{u1} α) (Set.hasInter.{u1} α) s (setOf.{u1} α (fun (b : α) => LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1))))) b a)))) (Set.Bounded.{u1} α (GT.gt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) s)
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] (a : α), Iff (Set.Bounded.{u1} α (fun ([email protected]._hyg.5260 : α) ([email protected]._hyg.5262 : α) => GT.gt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.5260 [email protected]._hyg.5262) (Inter.inter.{u1} (Set.{u1} α) (Set.instInterSet.{u1} α) s (setOf.{u1} α (fun (b : α) => LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) b a)))) (Set.Bounded.{u1} α (fun ([email protected]._hyg.5293 : α) ([email protected]._hyg.5295 : α) => GT.gt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.5293 [email protected]._hyg.5295) s)
Case conversion may be inaccurate. Consider using '#align set.bounded_gt_inter_ge Set.bounded_gt_inter_geₓ'. -/
theorem bounded_gt_inter_ge [LinearOrder α] (a : α) :
Bounded (· > ·) (s ∩ { b | b ≤ a }) ↔ Bounded (· > ·) s :=
@bounded_lt_inter_le αᵒᵈ s _ a
#align set.bounded_gt_inter_ge Set.bounded_gt_inter_ge
/- warning: set.unbounded_inter_ge -> Set.unbounded_inter_ge is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] (a : α), Iff (Set.Unbounded.{u1} α (GT.gt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) (Inter.inter.{u1} (Set.{u1} α) (Set.hasInter.{u1} α) s (setOf.{u1} α (fun (b : α) => LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1))))) b a)))) (Set.Unbounded.{u1} α (GT.gt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) s)
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] (a : α), Iff (Set.Unbounded.{u1} α (fun ([email protected]._hyg.5336 : α) ([email protected]._hyg.5338 : α) => GT.gt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.5336 [email protected]._hyg.5338) (Inter.inter.{u1} (Set.{u1} α) (Set.instInterSet.{u1} α) s (setOf.{u1} α (fun (b : α) => LE.le.{u1} α (Preorder.toLE.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) b a)))) (Set.Unbounded.{u1} α (fun ([email protected]._hyg.5369 : α) ([email protected]._hyg.5371 : α) => GT.gt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.5369 [email protected]._hyg.5371) s)
Case conversion may be inaccurate. Consider using '#align set.unbounded_inter_ge Set.unbounded_inter_geₓ'. -/
theorem unbounded_inter_ge [LinearOrder α] (a : α) :
Unbounded (· > ·) (s ∩ { b | b ≤ a }) ↔ Unbounded (· > ·) s :=
@unbounded_lt_inter_le αᵒᵈ s _ a
#align set.unbounded_inter_ge Set.unbounded_inter_ge
/- warning: set.bounded_gt_inter_gt -> Set.bounded_gt_inter_gt is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] [_inst_2 : NoMinOrder.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))] (a : α), Iff (Set.Bounded.{u1} α (GT.gt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) (Inter.inter.{u1} (Set.{u1} α) (Set.hasInter.{u1} α) s (setOf.{u1} α (fun (b : α) => LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1))))) b a)))) (Set.Bounded.{u1} α (GT.gt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) s)
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] [_inst_2 : NoMinOrder.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1))))))] (a : α), Iff (Set.Bounded.{u1} α (fun ([email protected]._hyg.5415 : α) ([email protected]._hyg.5417 : α) => GT.gt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.5415 [email protected]._hyg.5417) (Inter.inter.{u1} (Set.{u1} α) (Set.instInterSet.{u1} α) s (setOf.{u1} α (fun (b : α) => LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) b a)))) (Set.Bounded.{u1} α (fun ([email protected]._hyg.5448 : α) ([email protected]._hyg.5450 : α) => GT.gt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.5448 [email protected]._hyg.5450) s)
Case conversion may be inaccurate. Consider using '#align set.bounded_gt_inter_gt Set.bounded_gt_inter_gtₓ'. -/
theorem bounded_gt_inter_gt [LinearOrder α] [NoMinOrder α] (a : α) :
Bounded (· > ·) (s ∩ { b | b < a }) ↔ Bounded (· > ·) s :=
@bounded_lt_inter_lt αᵒᵈ s _ _ a
#align set.bounded_gt_inter_gt Set.bounded_gt_inter_gt
/- warning: set.unbounded_gt_inter_gt -> Set.unbounded_gt_inter_gt is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] [_inst_2 : NoMinOrder.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))] (a : α), Iff (Set.Unbounded.{u1} α (GT.gt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) (Inter.inter.{u1} (Set.{u1} α) (Set.hasInter.{u1} α) s (setOf.{u1} α (fun (b : α) => LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1))))) b a)))) (Set.Unbounded.{u1} α (GT.gt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (LinearOrder.toLattice.{u1} α _inst_1)))))) s)
but is expected to have type
forall {α : Type.{u1}} {s : Set.{u1} α} [_inst_1 : LinearOrder.{u1} α] [_inst_2 : NoMinOrder.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1))))))] (a : α), Iff (Set.Unbounded.{u1} α (fun ([email protected]._hyg.5494 : α) ([email protected]._hyg.5496 : α) => GT.gt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.5494 [email protected]._hyg.5496) (Inter.inter.{u1} (Set.{u1} α) (Set.instInterSet.{u1} α) s (setOf.{u1} α (fun (b : α) => LT.lt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) b a)))) (Set.Unbounded.{u1} α (fun ([email protected]._hyg.5527 : α) ([email protected]._hyg.5529 : α) => GT.gt.{u1} α (Preorder.toLT.{u1} α (PartialOrder.toPreorder.{u1} α (SemilatticeInf.toPartialOrder.{u1} α (Lattice.toSemilatticeInf.{u1} α (DistribLattice.toLattice.{u1} α (instDistribLattice.{u1} α _inst_1)))))) [email protected]._hyg.5527 [email protected]._hyg.5529) s)
Case conversion may be inaccurate. Consider using '#align set.unbounded_gt_inter_gt Set.unbounded_gt_inter_gtₓ'. -/
theorem unbounded_gt_inter_gt [LinearOrder α] [NoMinOrder α] (a : α) :
Unbounded (· > ·) (s ∩ { b | b < a }) ↔ Unbounded (· > ·) s :=
@unbounded_lt_inter_lt αᵒᵈ s _ _ a
#align set.unbounded_gt_inter_gt Set.unbounded_gt_inter_gt
end Set
|
State Before: α : Type u_1
inst✝ : LinearOrderedField α
a b c d : α
n : ℤ
hn : Odd n
⊢ 0 < a ^ n ↔ 0 < a
State After: case intro
α : Type u_1
inst✝ : LinearOrderedField α
a b c d : α
n k : ℤ
hk : n = 2 * k + 1
⊢ 0 < a ^ n ↔ 0 < a
Tactic: cases' hn with k hk
State Before: case intro
α : Type u_1
inst✝ : LinearOrderedField α
a b c d : α
n k : ℤ
hk : n = 2 * k + 1
⊢ 0 < a ^ n ↔ 0 < a
State After: no goals
Tactic: simpa only [hk, two_mul] using zpow_bit1_pos_iff
|
import columnround
open columnround
open utils
namespace columnround_examples
/-!
# Columnround examples
## Spec examples
-/
/-! ### Example 1 -/
/-- Input for example 1 --/
def input1 : matrixType :=
(
(0x00000001, 0x00000000, 0x00000000, 0x00000000),
(0x00000001, 0x00000000, 0x00000000, 0x00000000),
(0x00000001, 0x00000000, 0x00000000, 0x00000000),
(0x00000001, 0x00000000, 0x00000000, 0x00000000)
)
/-- Output for example 1 --/
def output1 : matrixType :=
(
(0x10090288, 0x00000000, 0x00000000, 0x00000000),
(0x00000101, 0x00000000, 0x00000000, 0x00000000),
(0x00020401, 0x00000000, 0x00000000, 0x00000000),
(0x40a04001, 0x00000000, 0x00000000, 0x00000000)
)
--#eval if columnround_output (columnround (columnround_input input1)) = output1 then "pass" else "fail"
/-! ### Example 2 -/
/-- Input for example 2 -/
def input2 : matrixType :=
(
(0x08521bd6, 0x1fe88837, 0xbb2aa576, 0x3aa26365),
(0xc54c6a5b, 0x2fc74c2f, 0x6dd39cc3, 0xda0a64f6),
(0x90a2f23d, 0x067f95a6, 0x06b35f61, 0x41e4732e),
(0xe859c100, 0xea4d84b7, 0x0f619bff, 0xbc6e965a)
)
/-- Output for example 2 -/
def output2 : matrixType :=
(
(0x8c9d190a, 0xce8e4c90, 0x1ef8e9d3, 0x1326a71a),
(0x90a20123, 0xead3c4f3, 0x63a091a0, 0xf0708d69),
(0x789b010c, 0xd195a681, 0xeb7d5504, 0xa774135c),
(0x481c2027, 0x53a8e4b5, 0x4c1f89c5, 0x3f78c9c8)
)
--#eval if columnround_output (columnround (columnround_input input2)) = output2 then "pass" else "fail"
/-!
## Inverse examples
We use the same test vectors as the spec but going backwards.
-/
-- example 1
--#eval if columnround_output (columnround_inv (columnround_input output1)) = input1 then "pass" else "fail"
-- example 2
--#eval if columnround_output (columnround_inv (columnround_input output2)) = input2 then "pass" else "fail"
end columnround_examples
|
/-
Copyright (c) 2019 Seul Baek. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Author: Seul Baek
-/
import Mathlib.PrePort
import Mathlib.Lean3Lib.init.default
import Mathlib.data.list.func
import Mathlib.tactic.ring
import Mathlib.tactic.omega.misc
import Mathlib.PostPort
namespace Mathlib
/-
Non-constant terms of linear constraints are represented
by storing their coefficients in integer lists.
-/
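/-
Illustrative sketch (added for clarity; not part of the original file): under
this representation a linear term such as `3 * x₀ + 5 * x₂` is stored as the
coefficient list `[3, 0, 5]`, so `val v [3, 0, 5]` evaluates to
`3 * v 0 + 0 * v 1 + 5 * v 2`. The constant component of a term is kept
separately, which is why the docstrings below refer to "the term represented
by `(0, as)`".
-/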
namespace omega
namespace coeffs
/-- `val_between v as l o` is the value (under valuation `v`) of the term
obtained taking the term represented by `(0, as)` and dropping all
subterms that include variables outside the range `[l,l+o)` -/
@[simp] def val_between (v : ℕ → ℤ) (as : List ℤ) (l : ℕ) : ℕ → ℤ := sorry
@[simp] theorem val_between_nil {v : ℕ → ℤ} {l : ℕ} (m : ℕ) : val_between v [] l m = 0 := sorry
/-- Evaluation of the nonconstant component of a normalized linear arithmetic term. -/
def val (v : ℕ → ℤ) (as : List ℤ) : ℤ := val_between v as 0 (list.length as)
@[simp] theorem val_nil {v : ℕ → ℤ} : val v [] = 0 := rfl
theorem val_between_eq_of_le {v : ℕ → ℤ} {as : List ℤ} {l : ℕ} (m : ℕ) :
list.length as ≤ l + m → val_between v as l m = val_between v as l (list.length as - l) :=
sorry
theorem val_eq_of_le {v : ℕ → ℤ} {as : List ℤ} {k : ℕ} :
list.length as ≤ k → val v as = val_between v as 0 k :=
sorry
theorem val_between_eq_val_between {v : ℕ → ℤ} {w : ℕ → ℤ} {as : List ℤ} {bs : List ℤ} {l : ℕ}
{m : ℕ} :
(∀ (x : ℕ), l ≤ x → x < l + m → v x = w x) →
(∀ (x : ℕ), l ≤ x → x < l + m → list.func.get x as = list.func.get x bs) →
val_between v as l m = val_between w bs l m :=
sorry
theorem val_between_set {v : ℕ → ℤ} {a : ℤ} {l : ℕ} {n : ℕ} {m : ℕ} :
l ≤ n → n < l + m → val_between v (list.func.set a [] n) l m = a * v n :=
sorry
@[simp] theorem val_set {v : ℕ → ℤ} {m : ℕ} {a : ℤ} : val v (list.func.set a [] m) = a * v m :=
sorry
theorem val_between_neg {v : ℕ → ℤ} {as : List ℤ} {l : ℕ} {o : ℕ} :
val_between v (list.func.neg as) l o = -val_between v as l o :=
sorry
@[simp] theorem val_neg {v : ℕ → ℤ} {as : List ℤ} : val v (list.func.neg as) = -val v as := sorry
theorem val_between_add {v : ℕ → ℤ} {is : List ℤ} {js : List ℤ} {l : ℕ} (m : ℕ) :
val_between v (list.func.add is js) l m = val_between v is l m + val_between v js l m :=
sorry
@[simp] theorem val_add {v : ℕ → ℤ} {is : List ℤ} {js : List ℤ} :
val v (list.func.add is js) = val v is + val v js :=
sorry
theorem val_between_sub {v : ℕ → ℤ} {is : List ℤ} {js : List ℤ} {l : ℕ} (m : ℕ) :
val_between v (list.func.sub is js) l m = val_between v is l m - val_between v js l m :=
sorry
@[simp] theorem val_sub {v : ℕ → ℤ} {is : List ℤ} {js : List ℤ} :
val v (list.func.sub is js) = val v is - val v js :=
sorry
/-- `val_except k v as` is the value (under valuation `v`) of the term
obtained taking the term represented by `(0, as)` and dropping the
subterm that includes the `k`th variable. -/
def val_except (k : ℕ) (v : ℕ → ℤ) (as : List ℤ) : ℤ :=
val_between v as 0 k + val_between v as (k + 1) (list.length as - (k + 1))
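-- Worked example (added for illustration; not in the original source): with
-- `as = [3, 4, 5]` and `k = 1`, unfolding the definition gives
-- `val_except 1 v as = val_between v as 0 1 + val_between v as 2 1`,
-- i.e. the value of `3 * x₀ + 4 * x₁ + 5 * x₂` with the `x₁` subterm dropped.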
theorem val_except_eq_val_except {k : ℕ} {is : List ℤ} {js : List ℤ} {v : ℕ → ℤ} {w : ℕ → ℤ} :
(∀ (x : ℕ), x ≠ k → v x = w x) →
(∀ (x : ℕ), x ≠ k → list.func.get x is = list.func.get x js) →
val_except k v is = val_except k w js :=
sorry
theorem val_except_update_set {v : ℕ → ℤ} {n : ℕ} {as : List ℤ} {i : ℤ} {j : ℤ} :
val_except n (update n i v) (list.func.set j as n) = val_except n v as :=
val_except_eq_val_except update_eq_of_ne (list.func.get_set_eq_of_ne n)
theorem val_between_add_val_between {v : ℕ → ℤ} {as : List ℤ} {l : ℕ} {m : ℕ} {n : ℕ} :
val_between v as l m + val_between v as (l + m) n = val_between v as l (m + n) :=
sorry
theorem val_except_add_eq {v : ℕ → ℤ} (n : ℕ) {as : List ℤ} :
val_except n v as + list.func.get n as * v n = val v as :=
sorry
@[simp] theorem val_between_map_mul {v : ℕ → ℤ} {i : ℤ} {as : List ℤ} {l : ℕ} {m : ℕ} :
val_between v (list.map (Mul.mul i) as) l m = i * val_between v as l m :=
sorry
theorem forall_val_dvd_of_forall_mem_dvd {i : ℤ} {as : List ℤ} :
(∀ (x : ℤ), x ∈ as → i ∣ x) → ∀ (n : ℕ), i ∣ list.func.get n as :=
fun (ᾰ : ∀ (x : ℤ), x ∈ as → i ∣ x) (n : ℕ) =>
idRhs ((fun (x : ℤ) => i ∣ x) (list.func.get n as))
(list.func.forall_val_of_forall_mem (dvd_zero i) ᾰ n)
theorem dvd_val_between {v : ℕ → ℤ} {i : ℤ} {as : List ℤ} {l : ℕ} {m : ℕ} :
(∀ (x : ℤ), x ∈ as → i ∣ x) → i ∣ val_between v as l m :=
sorry
theorem dvd_val {v : ℕ → ℤ} {as : List ℤ} {i : ℤ} : (∀ (x : ℤ), x ∈ as → i ∣ x) → i ∣ val v as :=
dvd_val_between
@[simp] theorem val_between_map_div {v : ℕ → ℤ} {as : List ℤ} {i : ℤ} {l : ℕ}
(h1 : ∀ (x : ℤ), x ∈ as → i ∣ x) {m : ℕ} :
val_between v (list.map (fun (x : ℤ) => x / i) as) l m = val_between v as l m / i :=
sorry
@[simp] theorem val_map_div {v : ℕ → ℤ} {as : List ℤ} {i : ℤ} :
(∀ (x : ℤ), x ∈ as → i ∣ x) → val v (list.map (fun (x : ℤ) => x / i) as) = val v as / i :=
sorry
theorem val_between_eq_zero {v : ℕ → ℤ} {is : List ℤ} {l : ℕ} {m : ℕ} :
(∀ (x : ℤ), x ∈ is → x = 0) → val_between v is l m = 0 :=
sorry
theorem val_eq_zero {v : ℕ → ℤ} {is : List ℤ} : (∀ (x : ℤ), x ∈ is → x = 0) → val v is = 0 :=
val_between_eq_zero
end Mathlib
|
library(lme4)
# Packages below are required by functions used later in this script
# (fread/fwrite, dplyr verbs, spread, read_tsv, tibble helpers).
library(data.table)
library(dplyr)
library(tidyr)
library(readr)
library(tibble)
source("R/functions/get_score.r")
fn.si = file.path(PROJECT_DIR, "generated_data", "SLE", "SLE_sample_info_2_sle.txt")
info = fread(fn.si, data.table = F)
info = info %>%
mutate(DA=factor(DA, levels=c("low","mid","high")),
SUBJECT = factor(SUBJECT, levels=
unique(SUBJECT[order(as.numeric(sub("SLE-","",SUBJECT)))])))
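# The mutate above recodes DA as an ordered factor (low, mid, high) and orders
# SUBJECT levels by the numeric part of the "SLE-<id>" identifier.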
treatment.fields = c("STEROID_IV_CATEGORY","CYCLOPHOSPHAMIDE_CATEGORY",
"ORAL_STEROIDS_CATEGORY","MYCOPHENOLATE_CATEGORY",
"HYDROXYCHLOROQUINE_CATEGORY")
for (f in treatment.fields) {
info[,f] = factor(info[,f])
}
fn.ge = file.path(PROJECT_DIR, "generated_data", "SLE", "SLE_ge_matrix_gene_sle.txt")
dat = fread(fn.ge, data.table = F) %>%
tibble::remove_rownames() %>% tibble::column_to_rownames("gene") %>%
data.matrix()
pb.file = "PB_DC.M4.11_ge_sig.txt"
pb.label = "PB.DC"
fn.pg = file.path(PROJECT_DIR, "data", "SLE", "phenotypes", "SLE_SUBJECT_PG.txt")
df.pg = read_tsv(fn.pg)
subj.lowDA = info %>%
dplyr::select(SUBJECT, DA) %>%
dplyr::filter(DA=="low") %>%
distinct()
fn.sig = file.path(PROJECT_DIR, "generated_data", "signatures", pb.file)
pb.genes = fread(fn.sig, header = F) %>% unlist(use.names=F)
gi = toupper(rownames(dat)) %in% toupper(pb.genes)
sum(gi)
info = mutate(info, PB=get_score(dat[gi,]))
form = paste0("PB"," ~ 1 + ",
paste(treatment.fields,sep="",collapse=" + "), " + (SLEDAI|SUBJECT)")
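# With the treatment fields above, the assembled formula is equivalent to:
#   PB ~ 1 + STEROID_IV_CATEGORY + CYCLOPHOSPHAMIDE_CATEGORY +
#        ORAL_STEROIDS_CATEGORY + MYCOPHENOLATE_CATEGORY +
#        HYDROXYCHLOROQUINE_CATEGORY + (SLEDAI | SUBJECT)
# i.e. fixed effects for each treatment category plus correlated random
# intercept and SLEDAI slope for each subject.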
m.lmer = lmer(as.formula(form), data=info)
mre = ranef(m.lmer, condVar=T)
x = mre$SUBJECT
pv = attr(x, "postVar")
se = unlist(lapply(1:ncol(x), function(i) sqrt(pv[i, i, ])))
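# The block below collects the per-subject random effects with approximate 95%
# confidence intervals (1.96 * SE from the conditional variances), keeps only
# the SLEDAI slopes, and scales each estimate by its CI half-width (z = y/ci).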
mre.df = data.frame(y=unlist(x),
se=se,
ci=1.96*se,
nQQ=rep(qnorm(ppoints(nrow(x))), ncol(x)),
SUBJECT=factor(rep(rownames(x), ncol(x)), levels=rownames(x)),
group=gl(ncol(x), nrow(x), labels=names(x))) %>%
dplyr::filter(group %in% c("SLEDAI")) %>%
mutate(z = y/ci)
mre.z = mre.df %>%
dplyr::select(SUBJECT,group,z) %>%
spread(group,z) %>%
dplyr::rename(PB_SLEDAI_corr_score=SLEDAI)
df.out = left_join(mre.z, df.pg, by="SUBJECT")
fn.sig = file.path(PROJECT_DIR, "generated_data", "SLE",
sprintf("SLE_%s_SLEDAI_corr_score.txt",pb.label))
fwrite(df.out, fn.sig, sep="\t", quote=F)
df.out = df.out %>% dplyr::filter(PG %in% 2:4, SUBJECT %in% subj.lowDA$SUBJECT)
fn.sig.2 = file.path(PROJECT_DIR, "generated_data", "SLE",
sprintf("SLE_%s_SLEDAI_corr_score_PG234_lowDA.txt",pb.label))
fwrite(df.out, fn.sig.2, sep="\t", quote=F)
|
module despob
use basic
use algor
implicit none
real(real_dp) :: uno = 1.0_real_dp
contains
! Makes them all equal (same size n)
subroutine new_income(daty,n,p)
implicit none
type(income):: daty
integer n,p
daty%p = p
daty%n = n
allocate(daty%y(n,p))
end subroutine
subroutine add_income_basic(daty)
implicit none
type(income):: daty
integer p
p = daty%p
allocate(daty%z(p))
allocate(daty%min(p))
allocate(daty%max(p))
allocate(daty%med(p))
allocate(daty%ave(p))
allocate(daty%adev(p))
allocate(daty%sdev(p))
allocate(daty%var(p))
allocate(daty%skew(p))
allocate(daty%kurt(p))
end subroutine
subroutine add_income_hist(daty,nb)
implicit none
type(income):: daty
integer nb,p
p = daty%p
daty%nb = nb
if(allocated(daty%vx)) deallocate(daty%vx)
if(allocated(daty%vf)) deallocate(daty%vf)
if(allocated(daty%vfa)) deallocate(daty%vfa)
allocate(daty%vx(nb,p))
allocate(daty%vf(nb,p))
allocate(daty%vfa(nb,p))
end subroutine
subroutine add_income_pov(daty,ni)
implicit none
type(income):: daty
integer ni
if(allocated(daty%pov)) deallocate(daty%pov)
allocate(daty%pov(ni,daty%p))
if(allocated(daty%rpg)) deallocate(daty%rpg)
allocate(daty%rpg(daty%n,daty%p))
if(allocated(daty%tip)) deallocate(daty%tip)
allocate(daty%tip(daty%n+1,daty%p))
if(allocated(daty%q)) deallocate(daty%q)
allocate(daty%q(daty%p))
if(allocated(daty%dtip)) deallocate(daty%dtip)
allocate(daty%dtip(daty%p))
if(allocated(daty%dpov)) deallocate(daty%dpov)
allocate(daty%dpov(ni,daty%p))
end subroutine
subroutine add_income_ineq(daty,ni)
implicit none
type(income):: daty
integer ni
if(allocated(daty%ineq)) deallocate(daty%ineq)
allocate(daty%ineq(ni,daty%p))
if(allocated(daty%dineq)) deallocate(daty%dineq)
allocate(daty%dineq(ni,daty%p))
if(allocated(daty%lorenz)) deallocate(daty%lorenz)
allocate(daty%lorenz(daty%n+1,daty%p))
if(allocated(daty%dlorenz)) deallocate(daty%dlorenz)
allocate(daty%dlorenz(daty%p))
if(allocated(daty%lorenzg)) deallocate(daty%lorenzg)
allocate(daty%lorenzg(daty%n+1,daty%p))
if(allocated(daty%dlorenzg)) deallocate(daty%dlorenzg)
allocate(daty%dlorenzg(daty%p))
end subroutine
subroutine clear_income(daty)
implicit none
type(income):: daty
deallocate(daty%y)
deallocate(daty%z)
deallocate(daty%min)
deallocate(daty%max)
deallocate(daty%med)
deallocate(daty%ave)
deallocate(daty%adev)
deallocate(daty%sdev)
deallocate(daty%var)
deallocate(daty%skew)
deallocate(daty%kurt)
end subroutine
!=======================================================================
! basic measures; also sorts the income data
subroutine basic_measure(daty)
implicit none
type(income) :: daty
real(real_dp) :: med,ave,adev,sdev,var,skew,kurt
integer :: np,i,n,nb
np = daty%p
n = daty%n
call nhist(n,nb)
call add_income_hist(daty,nb)
do i=1,np
! sort
call sort(daty%y(:,i))
! basic statistics (these could be more efficient)
call median(daty%y(:,i), n, med)
call moment(daty%y(:,i),ave,adev,sdev,var,skew,kurt)
daty%min(i) = minval(daty%y(:,i))
daty%max(i) = maxval(daty%y(:,i))
daty%med(i) = med
daty%ave(i) = ave
daty%adev(i) = adev
daty%sdev(i) = sdev
daty%var(i) = var
daty%skew(i) = skew
daty%kurt(i) = kurt
!hist
call histogram(daty%y(:,i),daty%vx(:,i),daty%vf(:,i),daty%vfa(:,i))
enddo
end subroutine
! data assumed sorted
subroutine poverty_measure(daty)
implicit none
type(income) :: daty
integer :: ni,i
ni = 13 ! this value is not guaranteed!!!!
call add_income_pov(daty,ni)
do i=1,daty%p
call poverty_measure_aux(daty%y(:,i),daty%z(i),daty%rpg(:,i),daty%tip(:,i),daty%q(i),daty%pov(:,i))
! compare indicators
daty%dpov(:,i) = compare_i(daty%pov(:,1),daty%pov(:,i))
! compare TIP curves
daty%dtip(i) = compare_s(daty%tip(:,1),daty%tip(:,i))
enddo
end subroutine
! TODO: check the size of pov
! computes rpg, q, pov
subroutine poverty_measure_aux(y,z,rpg,tip,q,pov)
implicit none
real(real_dp) :: y(:),z,rpg(:),tip(:),pov(:)
integer :: q,n,i
real(real_dp) :: p0,p1,p2,gp,u,a,f,atkp,yedep,ug
real(real_dp) :: c,k,ep,b ! user-selectable parameters
n = size(y)
q = count(y<z)
rpg(1:q) = 1-y(1:q)/z
rpg(q+1:) = 0.0_real_dp
tip(1) = 0.0_real_dp
do i=1,n
tip(i+1) = tip(i)+rpg(i)
enddo
tip = tip/n
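! tip now holds the cumulative normalized poverty gaps, i.e. the ordinates of the TIP curve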
! Indices p0,p1,p2
p0 = real(q,real_dp)/n
p1 = sum(rpg(1:q))/n
p2 = sum(rpg(1:q)**2)/n
call gini_s(y(1:q),gp) ! already sorted
! P0
pov(1)= p0
pov(2)= p1
pov(3)= p2
! Watts index (1968)
pov(4) = sum(log(z/y(1:q)))/n
! Sen index
pov(5) = P0*Gp + P1*(1-Gp) ! check whether this is the asymptotic form
! Sen-Shorrocks-Thon index:
pov(6) = P0*P1*(1+Gp)
! Clark, Ulph and Hemming index (1981)
!c = 0.25 ! c<= 0.5 ! selectable
!pov(7) = sum((1-(y(1:q)/z)**c))/(n*c);
c = 0.5_real_dp ! 0<=c<=1
if(c==0) then
pov(7) = 1 - (product(y(1:q)/z)/n)**(uno/n)
else
pov(7) = 1 - ((sum((y(1:q)/z)**c)+(n-q))/n)**(uno/c)
endif
! Takayama index
u = (sum(y(1:q))+(n-q)*z)/n
a = 0
do i=1,q
a = a+(n-i+1)*y(i)
enddo
do i=q+1,n
a = a+(n-i+1)*z
enddo
pov(8) = 1+1/real(n,real_dp) - (2/(u*n*n))*a
! Kakwani index
k = 2 ! selectable parameter
a = 0
u = 0
do i=1,q
f = (q-i+1)**k
a = a+f
u = u+f*(z-y(i))
enddo
pov(9) = (q/(n*z*a))*u
! Thon index
u = 0
do i=1,q
u = u+(n-i+1)*(z-y(i))
enddo
pov(10) = (2/(n*(n+1)*z))*u
! Blackorby and Donaldson index
ep = 2 ! selectable parameter
u = sum(y(1:q))/q
call atkinson(y(1:q),ep,atkp)
yedep = u*(1-atkp)
pov(11) = (real(q,real_dp)/n)*(z-yedep)/z
!Hagenaars
!ug = product(y(1:q))**(1.0_real_dp/q)
ug = exp(sum(log(y(1:q)))/q) ! or normalize with the maximum
pov(12) = (q/real(n,real_dp))*((log(z)-log(ug))/log(z))
!Chakravarty (1983)
b = 0.5_real_dp ! 0<b<1
pov(13) = sum(uno-((y(1:q)/z)**b))/n
! Zheng index (2000) - to check
!ab = 0.1 ! ab>0
!Pzhe = sum(exp(ab*(z-yp))-1)/q ! check
end subroutine
subroutine inequality_measure(daty)
implicit none
type(income) :: daty
integer :: ni,i
ni = 20 ! this value is not guaranteed!!!!
call add_income_ineq(daty,ni)
do i=1,daty%p
call inequality_measure_aux(daty%y(:,i),daty%ave(i),daty%ineq(:,i), &
daty%lorenz(:,i),daty%lorenzg(:,i))
! compare indicators
daty%dineq(:,i) = compare_i(daty%ineq(:,1),daty%ineq(:,i))
! compare Lorenz curves
daty%dlorenz(i) = compare_s(daty%lorenz(:,1),daty%lorenz(:,i))
daty%dlorenzg(i) = compare_s(daty%lorenzg(:,1),daty%lorenzg(:,i))
enddo
end subroutine
! Data already sorted
subroutine inequality_measure_aux(y,u,ineq,vlorenz,vlorenzg)
implicit none
real(real_dp) :: y(:),ineq(:),u,vlorenz(:),vlorenzg(:),a
integer :: n,i
n = size(y)
! Lorenz
vlorenz(1) = 0.0_real_dp
do i=1,n-1
vlorenz(i+1) = vlorenz(i)+y(i)
enddo
vlorenz = vlorenz/sum(y)
vlorenz(n+1) = 1.0_real_dp
vlorenzg = vlorenz*u
! Gini
call gini_s(y,ineq(1)) ! y already sorted
! Relative range
!ineq(2) = (maxval(y)-minval(y))/u
!ineq(2) = (y(n)-y(1))/u
call ineq_basic(y,ineq(2),'rr')
! Relative mean deviation - Pietra or Schutz index
!ineq(3) = sum(abs(y-u))/(2*n*u)
call ineq_basic(y,ineq(3),'dmr')
! Coefficient of variation
!ineq(4) = std(y)/u
call ineq_basic(y,ineq(4),'cv')
! Standard deviation of the logarithms
!ineq(5) = sqrt(sum((log(u)-log(y))**2)/n)
call ineq_basic(y,ineq(5),'dslog')
! General Entropy
call gentropy(y,0.0_real_dp,ineq(6))
call gentropy(y,1.0_real_dp,ineq(7))
call gentropy(y,2.0_real_dp,ineq(8))
! Atkinson
call atkinson(y,0.5_real_dp,ineq(9))
call atkinson(y,1.0_real_dp,ineq(10))
call atkinson(y,2.0_real_dp,ineq(11))
! Kuznets ratios
call ratios(y,ineq(12:16))
! Kolm Family
a = 2 ! inequality aversion
call kolm(y,a,ineq(17))
!Bonferroni
call bonferroni(y,ineq(18))
! Linear indices
call ineq_linear(y,ineq(19),'merhan')
call ineq_linear(y,ineq(20),'piesch')
! call lorenz(ys)
end subroutine
subroutine ineq_basic(y,m,c)
implicit none
real(real_dp) :: y(:),m,u
integer :: n
character(len=*) :: c
n = size(y)
u = sum(y)/n
select case(c)
! Relative range
case('rr')
m = (y(n)-y(1))/u !(maxval(y)-minval(y))/u
! Relative mean deviation - Pietra or Schutz index
case('dmr')
m = sum(abs(y-u))/(2*n*u)
! Coefficient of variation
case('cv')
m = std(y)/u
! Standard deviation of the logarithms
case('dslog')
m = sqrt(sum((log(u)-log(y))**2)/n)
case default
m = -1
end select
end subroutine
! linear indices
subroutine ineq_linear(y,m,c)
implicit none
real(real_dp) :: y(:),u,syi,pi,f,qi,m,nn
integer :: i,n
character(len=*) :: c
! Lorenz
n = size(y)
nn = real(n,real_dp)
u = sum(y)/nn
syi = y(1)
pi = uno/nn
f = uno/(nn*u)
qi = f*y(1)
m = pi-qi
select case(c)
case('gini')
do i=2,n-1
pi = i/nn
syi = syi+y(i)
qi = f*syi
m = m + (pi-qi)
enddo
m = m*2/n
case('merhan')
do i=2,n-1
pi = i/nn
syi = syi+y(i)
qi = f*syi
m = m + (1-pi)*(pi-qi)
enddo
m = m*6/n
case('piesch')
do i=2,n-1
pi = i/nn
syi = syi+y(i)
qi = f*syi
m = m + pi*(pi-qi)
enddo
m = m*3/n
case default
m = -1
end select
! write(*,*) trim(c),' ',m
end subroutine
subroutine bonferroni(y,r)
implicit none
real(real_dp) :: y(:),s,u,x,r
integer :: i,n
n = size(y)
s = y(1)
x = y(1)
do i=2,n-1
x = x+y(i)
s = s+x/i
enddo
u = (x+y(n))/n
r = 1 - (1/((n-1)*u))*s
end subroutine
subroutine kolm(y,a,r)
implicit none
real(real_dp) :: y(:),r,u
real(real_dp) :: a ! > 0
integer :: n
n = size(y)
u = sum(y)/n
r = (1/a)*(log((1.0_real_dp/n)*sum(exp(a*(u-y)))))
end subroutine
! Gini (pre-sorted version)
subroutine gini_s(y,g)
implicit none
real(real_dp) :: y(:),g
real(real_dp) :: u,a
integer :: n,i
n = size(y)
u = sum(y)/n
!call sort(x)
a = 0
do i=1,n
a = a + (n-i+1)*y(i)
enddo
g = real(n+1,real_dp)/(n-1) - 2/(n*(n-1)*u)*a ! real() avoids integer division
g = g*(n-1)/n ! check whether this is correct
end subroutine
! Kuznets ratios
! Assumes sorted data
! improve results for small numbers (a range can be imposed
! according to the values of n and k)
subroutine ratios(ys,r)
implicit none
real(real_dp) :: ys(:),r(:)
integer :: n,k,i,d
n = size(ys)
k = size(r)
do i=1,k
d = int(i*(n/(2*real(k))))
r(i) = sum(ys(n-d+1:n))/sum(ys(1:d))
enddo
end subroutine
! Generalized entropy
subroutine gentropy(y,a,p)
implicit none
real(real_dp) :: y(:),a,p,u
integer :: n
n = size(y)
u = sum(y)/n
if(a==0.0) then
p = sum(log(u/y))/n
elseif(a==1.0) then
p = sum((y/u)*log(y/u))/n
else
p = (1/(a*(a-1)))*(sum((y/u)**a)/n-1);
endif
end subroutine
! Atkinson (inequality measure)
subroutine atkinson(y,e,p)
implicit none
real(real_dp) :: y(:),e,p,u
integer :: n
n = size(y)
u = sum(y)/n
if(e==1) then
!p = 1-sum(log(y)/log(u))/n ! verify
p = 1-product(y**(1.0_real_dp/n))/u
else
p = 1-(sum((y/u)**(1-e))/n)**(1.0/(1-e))
endif
end subroutine
! Assumes sorted income
! Add an option for the generalized Lorenz curve
subroutine lorenz(y)
implicit none
real(real_dp) :: y(:)
real(real_dp) :: pa(size(y)+1),ya(size(y)+1)
integer :: n,i
n = size(y)
pa(1) = 0.0_real_dp ! initialize before the loop so ya(1) is defined when accumulating
ya(1) = 0.0_real_dp
do i=1,n-1
pa(i+1) = i
ya(i+1) = ya(i)+y(i)
enddo
pa = pa/real(n,real_dp)
ya = ya/sum(y)
pa(n+1) = 1.0
ya(n+1) = 1.0
! do i=1,n+1
! write(*,*) i, pa(i),ya(i)
! enddo
! call plot_lorenz_curve(pa,ya)
end subroutine
end module despob
#ifdef undef
! Gini
subroutine gini(y,g)
implicit none
real(real_dp) :: y(:),g
real(real_dp) :: x(size(y)),u,a
integer :: n,i
n = size(y)
u = sum(y)/n
x = y
call sort(x)
a = 0
do i=1,n
a = a + (n-i+1)*x(i)
enddo
g = real(n+1,real_dp)/(n-1) - 2/(n*(n-1)*u)*a ! real() avoids integer division
g = g*(n-1)/n ! check whether this is correct
end subroutine
#endif
|
\name{widthDetails.packed_legends}
\alias{widthDetails.packed_legends}
\title{
Grob width for packed_legends
}
\description{
Grob width for packed_legends
}
\usage{
\method{widthDetails}{packed_legends}(x)
}
\arguments{
\item{x}{A packed_legends object.}
}
\examples{
# There is no example
NULL
}
|
#!/usr/local/bin/julia
using go
debug_log = joinpath(splitdir(@__FILE__)[1], "./gtp_debug.log")
model_dir = joinpath(splitdir(@__FILE__)[1], "../models")
f = open(debug_log, "w") # stderr log
redirect_stderr(f)
function usage()
println(STDOUT, "Usage: ./gtp_bot.jl {random|keras [model name]}")
exit()
end
nargs = length(ARGS)
if nargs != 0
if ARGS[1] == "random"
policy = go.RandomPolicy()
elseif ARGS[1] == "keras"
if nargs != 2
usage()
end
policy = go.KerasNetwork(model_dir, ARGS[2])
else
usage()
end
else
usage()
end
go.gtp(policy)
|
library("broom")
library("ggplot2")
library("reshape2")
setwd("~/dev/BitFunnel/src/Scripts")
args = commandArgs(trailingOnly=TRUE)
if (length(args) != 4) {
stop("Required args: [interpreter QueryPipelineStats filename], [compiler QPS filename], [cachelines vs time filename], [matches vs. time filename]", call.=FALSE)
}
int_name = args[1]
comp_name = args[2]
out_name1 = args[3]
out_name2 = args[4]
print("Reading input.")
interpreter <- read.csv(header=TRUE, file=int_name)
compiler <- read.csv(header=TRUE, file=comp_name)
df <- data.frame(interpreter$cachelines, compiler$match)
names(df)[names(df) == 'interpreter.cachelines'] <- 'Cachelines'
names(df)[names(df) == 'compiler.match'] <- 'MatchTime'
# print("Plotting cachelines vs. time.")
# png(filename=out_name1,width=1600,height=1200)
# ggplot(df, aes(x=Cachelines,y=MatchTime)) +
# theme_minimal() +
# geom_point(alpha=1/100) +
# theme(axis.text = element_text(size=40),
# axis.title = element_text(size=40)) +
# ylim(0, 0.001)
# dev.off()
# print("Plotting matches vs. time.")
# png(filename=out_name2,width=1600,height=1200)
# ggplot(compiler, aes(x=matches,y=match)) +
# theme_minimal() +
# geom_point(alpha=1/20) +
# theme(axis.text = element_text(size=40),
# axis.title = element_text(size=40))
# dev.off()
# print("Computing cacheline regression.")
# df <- data.frame(interpreter$cachelines, compiler$matches, compiler$match)
# names(df)[names(df) == 'interpreter.cachelines'] <- 'Cachelines'
# names(df)[names(df) == 'compiler.matches'] <- 'Matches'
# names(df)[names(df) == 'compiler.match'] <- 'Time'
# fit <- lm(Time ~ Matches, data=df)
# print(summary(fit))
# fit <- lm(Time ~ Cachelines, data=df)
# print(summary(fit))
# fit <- lm(Time ~ ., data=df)
# print(summary(fit))
# print("Residual plot.")
# df <- augment(fit)
# # TODO: don't hardcode filename.
# png(filename="time-residual.png",width=1600,height=1200)
# ggplot(df, aes(x = .fitted, y = .resid)) +
# theme_minimal() +
# geom_point(alpha=1/10) +
# theme(axis.text = element_text(size=40),
# axis.title = element_text(size=40))
# dev.off()
print("Computing quadword regression.")
df <- data.frame(interpreter$quadwords, compiler$matches, compiler$match)
names(df)[names(df) == 'interpreter.quadwords'] <- 'Quadwords'
names(df)[names(df) == 'compiler.matches'] <- 'Matches'
names(df)[names(df) == 'compiler.match'] <- 'Time'
fit <- lm(Time ~ Matches, data=df)
print(summary(fit))
fit <- lm(Time ~ Quadwords, data=df)
print(summary(fit))
fit <- lm(Time ~ ., data=df)
print(summary(fit))
df <- data.frame(interpreter$quadwords, compiler$match)
names(df)[names(df) == 'interpreter.quadwords'] <- 'Quadwords'
names(df)[names(df) == 'compiler.match'] <- 'MatchTime'
print("Plotting quadwords vs. time.")
# png(filename=out_name1,width=1600,height=1200)
ggplot(df, aes(x=Quadwords,y=MatchTime)) +
theme_minimal() +
geom_smooth(method = "lm", se = FALSE) +
theme(aspect.ratio=1/2) +
geom_point(alpha=1/10) +
theme(axis.text = element_text(size=20),
axis.title = element_text(size=20)) +
# ylim(0, 0.0005) +
scale_y_continuous(name="Match Time (us)", labels=c("0", "100", "200", "300", "400", "500"), breaks=c(0, 0.0001, 0.0002, 0.0003, 0.0004, 0.0005), limits=c(0, 0.0005))
# dev.off()
ggsave(out_name1, width = 10, height=5)
# df <- data.frame(interpreter$matches, compiler$match)
# names(df)[names(df) == 'interpreter.matches'] <- 'Matches'
# names(df)[names(df) == 'compiler.match'] <- 'MatchTime'
# print("Plotting quadwords vs. time.")
# png(filename=out_name1,width=1600,height=1200)
# ggplot(df, aes(x=Matches,y=MatchTime)) +
# theme_minimal() +
# geom_point(alpha=1/10) +
# theme(axis.text = element_text(size=40),
# axis.title = element_text(size=40)) +
# ylim(0, 0.002)
# dev.off()
|
-- Andreas, 2019-07-23, issue #3937 reported by gallais
f : let ... = ? in ?
f = ?
-- WAS: internal error
-- Expected: Could not parse the left-hand side ...
|
data Vect : Nat -> Type -> Type where
Nil : Vect Z a
(::) : (x : a) -> (xs : Vect k a) -> Vect (S k) a
%name Vect xs, ys, zs
append : Vect n elem -> Vect m elem -> Vect (n + m) elem
append [] ys = ys
append (x :: xs) ys = x :: append xs ys
zip : Vect n a -> Vect n b -> Vect n (a, b)
zip [] [] = []
zip (x :: xs) (y :: ys) = (x, y) :: zip xs ys
|
Shop Hundreds of Families at Once!
Sell with Us & Clear the Clutter!
Four Ways to Join the Fun!
SELL with Us & Earn Cash!
Earn cash for your child’s outgrown or unused clothes, toys, baby equipment and more without all the hassles of holding a yard sale, meeting with strangers or selling online!
Sign-up as a consignor, prepare and drop off your items – we’ll do the rest of the work and you’ll collect at least 60% of the profit (more if you volunteer at the sale).
Help out at the event and get a chance to shop first! The more shifts you work, the earlier you get to shop at our private presale. Consignors who volunteer will not only be able to shop early, but will also earn a greater percentage of their sales. And it’ll be fun, too!
SHOP & Save Big on Gently-Used Items!
Shop 40,000+ new and gently used items at a fraction of retail!
All items are pre-inspected to ensure quality, so you don’t need to worry about coming home with stained clothes or broken toys. All items that need batteries will have them so you can try them out, and everything will be organized so you can find that bargain you’re looking for.
Families who need to stretch their dollars further in this economy.
Those in need, as consignors are given the opportunity to donate their unsold items to our charitable partners. We will also be collecting non-perishable donations for the local Food Pantry.
The environment by giving people an opportunity to recycle items that someone else could use.
Follow Us on Facebook for Updates & Contests!
|
/***************************************************************************
Copyright 2015 Ufora Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
****************************************************************************/
#include "Hash.hpp"
#include <boost/python.hpp>
#include "../../native/Registrar.hpp"
#include "../python/CPPMLWrapper.hpp"
#include "../python/ScopedPyThreads.hpp"
#include "../containers/ImmutableTreeVector.py.hpp"
#include "../containers/ImmutableTreeSet.py.hpp"
namespace Ufora {
class HashWrapper :
public native::module::Exporter<HashWrapper> {
public:
std::string getModuleName(void)
{
return "Hash";
}
void getDefinedTypes(std::vector<std::string>& outTypes)
{
outTypes.push_back(typeid(Hash).name());
}
static Hash hashString(std::string s)
{
ScopedPyThreads threads;
return Hash::CityHash(&s[0], s.size());
}
static int32_t hash(Hash& in)
{
return ((int32_t*)&in)[0];
}
static Hash add(Hash& h, Hash& r)
{
ScopedPyThreads threads;
return h + r;
}
static Hash add2(Hash& h, std::string r)
{
ScopedPyThreads threads;
return h + hashString(r);
}
static uint32_t hashGetItem(Hash& h, uint32_t ix)
{
if (ix < 5)
return h[ix];
return 0;
}
static Hash* CreateHash(uint32_t ix)
{
return new Hash(ix);
}
static int HashCMP(Hash& h, boost::python::object o)
{
boost::python::extract<Hash> otherHash(o);
if (otherHash.check())
return h.cmp(otherHash());
return -1;
}
static Hash xorHash(Hash& h1, Hash& h2)
{
Hash out;
for (long k = 0; k < 5;k++)
out[k] = h1[k] ^ h2[k];
return out;
}
template<class T>
static std::string simpleSerializer(const T& in)
{
ScopedPyThreads threads;
return ::serialize<T>(in);
}
template<class T>
static void simpleDeserializer(T& out, std::string inByteBlock)
{
ScopedPyThreads threads;
out = ::deserialize<T>(inByteBlock);
}
void exportPythonWrapper()
{
using namespace boost::python;
class_<Hash >("Hash", init<>() )
.def("__init__", make_constructor(CreateHash) )
.def("__str__", &hashToString)
.def("__add__", &add)
.def("__add__", &add2)
.def("xor", &xorHash)
.def("sha1", &hashString)
.staticmethod("sha1")
.def("stringToHash", &stringToHash)
.staticmethod("stringToHash")
.def("__repr__", &hashToString)
.def("__hash__", &hash)
.def("__cmp__", &HashCMP)
.def("__getitem__", &hashGetItem)
.def("__getstate__", simpleSerializer<Hash>)
.def("__setstate__", simpleDeserializer<Hash>)
;
PythonWrapper<ImmutableTreeSet<Hash> >::exportPythonInterface("Hash");
PythonWrapper<ImmutableTreeVector<Hash> >::exportPythonInterface("Hash");
}
};
}
//explicitly instantiating the registration element causes the linker to need
//this file
template<>
char native::module::Exporter<
Ufora::HashWrapper>::mEnforceRegistration =
native::module::ExportRegistrar<
Ufora::HashWrapper>::registerWrapper();
|
[STATEMENT]
lemma DERIV_cdivide:
"(f has_field_derivative D) (at x within s) \<Longrightarrow>
((\<lambda>x. f x / c) has_field_derivative D / c) (at x within s)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (f has_field_derivative D) (at x within s) \<Longrightarrow> ((\<lambda>x. f x / c) has_field_derivative D / c) (at x within s)
[PROOF STEP]
using DERIV_cmult_right[of f D x s "1 / c"]
[PROOF STATE]
proof (prove)
using this:
(f has_field_derivative D) (at x within s) \<Longrightarrow> ((\<lambda>x. f x * ((1::'a) / c)) has_field_derivative D * ((1::'a) / c)) (at x within s)
goal (1 subgoal):
1. (f has_field_derivative D) (at x within s) \<Longrightarrow> ((\<lambda>x. f x / c) has_field_derivative D / c) (at x within s)
[PROOF STEP]
by simp
|
Pattycake moved permanently to the Bronx Zoo on December 20, 1982. For a few years, she lived in a cage with Pansy, a chimpanzee. In June 1999, Pattycake moved into the Wildlife Conservation Society's $43 million Congo Gorilla Forest exhibit. The exhibit includes a Great Gorilla Forest viewing area that separates gorillas and visitors with a glass window. Two troops of gorillas inhabited the 6.5-acre exhibit, with a dozen gorillas in Pattycake's troop alone, including <unk>, Pattycake, <unk>, <unk>, Halima, Fran, Layla, Kumi, Suki, Babatunde, Barbara, and M <unk>. The general curator of the Bronx Zoo, James Doherty, described Pattycake as "independent" with "few close friends" in the Congo Gorilla Forest. "It may have something to do with the fact that she didn't live with her parents that long, and lived with that chimpanzee for a few years," Doherty said.
|
last edited by Claire Valva on April 22, 2019, with cleanup on July 24, 2019
# Tests simulation that includes phase locking
This didn't really work; phase locking turned out to be a non-factor.
```python
#!jupyter nbconvert --to python phase_lock_sim.ipynb
```
[NbConvertApp] Converting notebook phase_lock_sim.ipynb to python
[NbConvertApp] Writing 8151 bytes to phase_lock_sim.py
```python
import numpy as np
from netCDF4 import Dataset, num2date
from scipy.signal import get_window, csd
from scipy.fftpack import fft, ifft, fftshift, fftfreq
import pandas as pd
import datetime
from math import pi
from scipy import optimize
from astropy.stats.circstats import circmean, circvar
from os import walk
from multiprocessing import Pool
from sympy import Poly, Eq, Function, exp, re, im
import pickle
import time
import random
import multiprocessing as mp
from joblib import Parallel, delayed
```
```python
#get detrended parts
f = []
for (dirpath, dirnames, filenames) in walk('detrended/'):
f.extend(filenames)
break
```
```python
# root finders
#use these
def solve_f(X, solution):
#function to solve f coeff equation for trend analysis
x,y = X
f = x*np.exp(1j*y) - solution
return [np.real(f), np.imag(f)]
def real_f(X, solution):
#function to wrap solve_f so that it can be used with fsolve
x,y = X
z = [x+0j,y+0j]
actual_f = solve_f(z, solution)
return(actual_f)
# solve for phase
def root_find(sol):
return optimize.root(real_f, [np.real(sol), 0], args=sol).x
```
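As a quick sanity check (not part of the original notebook), `root_find` should recover the polar form of a complex coefficient; for `1 + 1j` the amplitude is roughly `sqrt(2)` and the phase roughly `pi/4`:
```python
# hypothetical check of the amplitude/phase solver defined above
sol = 1 + 1j
amp, phase = root_find(sol)
print(amp, phase)  # expect approximately 1.414 and 0.785 (pi/4)
```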
```python
# get function to generate random coeffs
def entry_fft(amp, phase = random.uniform(0, 2*pi)):
# takes amplitude and phase to give corresponding fourier coeff
entry = amp*np.exp(1j*phase)
return entry
# write functions to make a longer ifft
def ext_row(row, n):
ext_f = np.zeros(((len(row) - 1) * n + 1,), dtype="complex128")
ext_f[::n] = row * n
return ext_f
def ext_ifft_new(n, input_array):
# add the zeros onto each end
ext_f = [ext_row(entry,n) for entry in input_array]
# make up for the formulat multiplying for array length
olddim = len(input_array[5])
newdim = len(ext_f[0])
mult = newdim/olddim
ext_f = np.multiply(ext_f, mult)
adjusted_tested = np.fft.ifft2(ext_f)
return adjusted_tested
season_titles = ["Winter", "Spring", "Summer", "Fall"]
# flatten season for plotting
flatten = lambda l: [item for sublist in l for item in sublist]
```
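For reference (an illustrative check, not in the original notebook), `ext_row` interleaves `n-1` zeros between the scaled coefficients, which is what lets the inverse FFT be evaluated on a finer time grid:
```python
# hypothetical example of the zero-interleaving used by ext_ifft_new
row = np.array([1.0, 2.0, 3.0])
print(ext_row(row, 2))  # -> [2.+0.j 0.+0.j 4.+0.j 0.+0.j 6.+0.j], length (3-1)*2 + 1 = 5
```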
```python
file_pickle = open("spectra_copy/spectra_02_45.0Narr.pickle", "rb")
d2_touse, d2_seasons, d2_averages = pickle.load(file_pickle, encoding='latin1')
```
```python
filed = ["spectra/spectra_02_45.0Sarr.pickle",
"spectra/spectra_02_45.0Narr.pickle"]
```
```python
mean_phases_lock = [[-1.20929458e-16, 1.65918271e-01, -2.17292412e-01, -2.40352609e-01,
8.64205449e-02, 1.07202695e-02],[-1.21105919e-16, 3.96836386e-01, -5.77513605e-01, 5.62200988e-01,
3.64883992e-01, -7.35447431e-02]]
```
```python
stds_lock = [[0. , 0.35092645, 0.37481109, 0.36100874, 0.35869798,
0.36139656], [0. , 0.22681627, 0.40726111, 0.39961749, 0.37641118,
0.36154913]]
```
```python
for heythere in range(1):
# get function to generate random coeffs
def entry_fft(amp,std,wavenum, phase = random.uniform(0, 2*pi)):
# takes amplitude and phase to give corresponding fourier coeff
if np.abs(wavenum) <= 6:
phase = np.random.normal(loc = mean_phases_lock[ko][wavenum], scale = stds_lock[ko][wavenum])
amp_new = np.random.normal(loc = amp, scale = std)
entry = amp_new*np.exp(1j*phase)
return entry
# write functions to make a longer ifft
def ext_row(row, n):
ext_f = np.zeros(((len(row) - 1) * n + 1,), dtype="complex128")
ext_f[::n] = row * n
return ext_f
def ext_ifft_new(n, input_array):
# add the zeros onto each end
ext_f = [ext_row(entry,n) for entry in input_array]
# make up for the formulat multiplying for array length
olddim = len(input_array[5])
newdim = len(ext_f[0])
mult = newdim/olddim
# ext_f = np.multiply(mult, ext_f)
adjusted_tested = np.fft.ifft2(ext_f)
return adjusted_tested
def combined(amps,stds, length):
# combines generation of random phase with inverse transform
newarray = [[entry_fft(amp = amps[wave][timed],
std = stds[wave][timed], wavenum = wave,
phase = random.uniform(0, 2*pi))
for timed in range(len(amps[wave]))]
for wave in range(len(amps))]
newarray = [np.array(leaf) for leaf in newarray]
iffted = ext_ifft_new(length, newarray)
return iffted
```
```python
for ko in range(2):
file_pickle = open(filed[ko], "rb")
d2_touse, d2_seasons, d2_averages = pickle.load(file_pickle, encoding='latin1')
alled = [[[[root_find(entry) for entry in sublist]
for sublist in year]
for year in season]
for season in d2_seasons]
roots = np.array(alled)  # assumption: convert the nested list above to an array ("roots" was undefined in the original cell)
phases = roots[:,:,:,1]
amps = roots[:,:,:,0]
def padded(to_pad, index):
length = len(to_pad)
if index == 0:
zeros = longl - length
to_pad = list(to_pad)
for i in range(zeros):
to_pad.append(0)
return to_pad
else:
return to_pad
#pad rows with zeros to account for leap year
season_amps_adj = [[[padded(row, index = i)
for row in entry]
for entry in amps[i]]
for i in range(4)]
#get average amplitude and phases for each season
avg_amps = [np.average(season, axis = 0)
for season in season_amps_adj]
#get average amplitude and phases for each season
std_amps = [np.std(season, axis = 0)
for season in season_amps_adj]
def entry_fft(amp,std,wavenum, phase = random.uniform(0, 2*pi)):
# takes amplitude and phase to give corresponding fourier coeff
if np.abs(wavenum) <= 6:
phase = np.random.normal(loc = mean_phases_lock[ko][wavenum], scale = stds_lock[ko][wavenum])
amp_new = np.random.normal(loc = amp, scale = std)
entry = amp_new*np.exp(1j*phase)
return entry
# write functions to make a longer ifft
def ext_row(row, n):
ext_f = np.zeros(((len(row) - 1) * n + 1,), dtype="complex128")
ext_f[::n] = row * n
return ext_f
def ext_ifft_new(n, input_array):
# add the zeros onto each end
ext_f = [ext_row(entry,n) for entry in input_array]
# make up for the formulat multiplying for array length
olddim = len(input_array[5])
newdim = len(ext_f[0])
mult = newdim/olddim
# ext_f = np.multiply(mult, ext_f)
adjusted_tested = np.fft.ifft2(ext_f)
return adjusted_tested
def combined(amps,stds, length):
# combines generation of random phase with inverse transform
newarray = [[entry_fft(amp = amps[wave][timed],
std = stds[wave][timed], wavenum = wave,
phase = random.uniform(0, 2*pi))
for timed in range(len(amps[wave]))]
for wave in range(len(amps))]
newarray = [np.array(leaf) for leaf in newarray]
iffted = ext_ifft_new(length, newarray)
return iffted
def repeater(season, stds, length, times):
# repeats the phase creation and inverse transform
newarray = [combined(season,stds,length) for leaf in range(times)]
return(newarray)
def repeater_2(amps,stds, length, times):
#do procedure
repeated_comp = [repeater(amps[i],stds[i], length, times)
for i in range(4)]
#output.put(repeated_comp)
#listed_parts.append(repeated_comp)
import pickle
file_name2 = "sim_samp/"
file_pickle = open(file_name2,'wb')
pickle.dump(repeated_comp,file_pickle)
file_pickle.close()
return repeated_comp
runlen = 70
runtimes = 4
toplot = repeater_2(avg_amps,std_amps, runlen, runtimes)
```
```python
```
```python
```
|
from __future__ import print_function, division, absolute_import
import numpy as np
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from openmdao.api import Problem, Group
from dymos import Phase, GaussLobatto
from dymos.examples.brachistochrone.brachistochrone_ode import BrachistochroneODE
p = Problem(model=Group())
phase = Phase(ode_class=BrachistochroneODE,
transcription=GaussLobatto(num_segments=4, order=[3, 5, 3, 5]))
p.model.add_subsystem('phase0', phase)
p.setup()
p['phase0.t_initial'] = 1.0
p['phase0.t_duration'] = 9.0
p.run_model()
grid_data = phase.options['transcription'].grid_data
t_all = p.get_val('phase0.timeseries.time')
t_disc = t_all[grid_data.subset_node_indices['state_disc'], 0]
t_col = t_all[grid_data.subset_node_indices['col'], 0]
def f(x): # pragma: no cover
return np.sin(x) / x + 1
def fu(x): # pragma: no cover
return (np.cos(x) * x - np.sin(x))/x**2
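# note: fu is the analytic derivative of f; the plots below use f as the "state" curve and fu as its slope/"control" curve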
def plot_01(): # pragma: no cover
fig, axes = plt.subplots(1, 1)
ax = axes
x = np.linspace(1, 10, 100)
# Plot the segment boundaries
segends = np.linspace(1, 10, 5)
for i in range(len(segends)):
ax.plot((segends[i], segends[i]), (0, 1), linestyle='--', color='gray')
if i > 0:
ax.annotate('', xy=(segends[i], 0.05), xytext=(segends[i-1], 0.05),
arrowprops=dict(arrowstyle='<->'))
ax.text((segends[i]+segends[i-1])/2, 0.15, 'Segment {0}'.format(i-1),
ha='center', va='center')
# Set the bounding box properties
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
# Set the labels
ax.set_xlabel('time')
# Remove the axes ticks
ax.tick_params(
axis='both', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False,
right=False,
labelbottom=False,
labelleft=False) # labels along the bottom edge are off
plt.savefig('01_segments.png')
def plot_02(): # pragma: no cover
fig, axes = plt.subplots(1, 1)
ax = axes
x = np.linspace(1, 10, 100)
# Plot the state time history
# ax.plot(x, y, 'b-')
ax.plot(t_all, 0.5*np.ones_like(t_all), 'kx')
# Plot the segment boundaries
segends = np.linspace(1, 10, 5)
for i in range(len(segends)):
ax.plot((segends[i], segends[i]), (0, 1), linestyle='--', color='gray')
if i > 0:
ax.annotate('', xy=(segends[i], 0.05), xytext=(segends[i-1], 0.05),
arrowprops=dict(arrowstyle='<->'))
ax.text((segends[i]+segends[i-1])/2, 0.15, 'Segment {0}'.format(i-1),
ha='center', va='center')
# Set the bounding box properties
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
# Set the labels
ax.set_xlabel('time')
# ax.set_ylabel('state value')
# Remove the axes ticks
ax.tick_params(
axis='both', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False,
right=False,
labelbottom=False,
labelleft=False) # labels along the bottom edge are off
plt.savefig('02_nodes.png')
def plot_03(): # pragma: no cover
fig, axes = plt.subplots(2, 1)
for i in range(2):
ax = axes[i]
x = np.linspace(1, 10, 100)
if i == 0:
# Plot the state time history
y = f(x)
ax.plot(t_disc, f(t_disc), 'bo')
elif i == 1:
y = fu(x)
ax.plot(t_all, fu(t_all), 'rs')
y_max = np.max(y)
y_min = np.min(y)
# Plot the segment boundaries
segends = np.linspace(1, 10, 5)
for j in range(len(segends)):
ax.plot((segends[j], segends[j]), (y_min, y_max), linestyle='--', color='gray',
zorder=-100)
# if i > 0:
# # ax.annotate('', xy=(segends[i], 0.05), xytext=(segends[i-1], 0.05),
# # arrowprops=dict(arrowstyle='<->'))
# # ax.text((segends[i]+segends[i-1])/2, 0.15, 'Segment {0}'.format(i-1),
# # ha='center', va='center')
# Set the bounding box properties
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
# print(i)
# if i==0:
# ax.spines['bottom'].set_color('none')
# Set the labels
ax.set_xlabel('time')
if i == 0:
ax.set_ylabel('state value')
else:
ax.set_ylabel('control value')
# Remove the axes ticks
ax.tick_params(
axis='both', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False,
right=False,
labelbottom=False,
labelleft=False) # labels along the bottom edge are off
plt.savefig('03_inputs.png')
def plot_04(): # pragma: no cover
fig, axes = plt.subplots(2, 1)
for i in range(2):
ax = axes[i]
x = np.linspace(1, 10, 100)
if i == 0:
# Plot the state time history
y = f(x)
# ax.plot(x, y, 'b-')
f_all = f(t_all)
ax.plot(t_all, f_all, 'bo')
elif i == 1:
y = fu(x)
ax.plot(x, y, 'r-')
ax.plot(t_all, fu(t_all), 'rs')
y_max = np.max(y)
y_min = np.min(y)
# Plot the segment boundaries
segends = np.linspace(1, 10, 5)
for j in range(len(segends)):
ax.plot((segends[j], segends[j]), (y_min, y_max), linestyle='--', color='gray',
zorder=-100)
# Set the bounding box properties
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
# Set the labels
ax.set_xlabel('time')
if i == 0:
ax.set_ylabel('state value')
else:
ax.set_ylabel('control value')
# Remove the axes ticks
ax.tick_params(
axis='both', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False,
right=False,
labelbottom=False,
labelleft=False) # labels along the bottom edge are off
plt.savefig('04_control_rate_interpolation.png')
def plot_05(): # pragma: no cover
fig, axes = plt.subplots(2, 1)
for i in range(2):
ax = axes[i]
x = np.linspace(1, 10, 100)
if i == 0:
# Plot the state time history
y = f(x)
f_disc = f(t_disc)
df_dx_disc = fu(t_disc)
# ax.plot(x, y, 'b-')
ax.plot(t_disc, f_disc, 'bo')
for j in range(len(t_disc)):
dx = 0.3
ax.plot((t_disc[j]-dx, t_disc[j]+dx),
(f_disc[j]-dx*df_dx_disc[j], f_disc[j]+dx*df_dx_disc[j]), 'r--')
elif i == 1:
y = fu(x)
# ax.plot(x, y, 'r-')
ax.plot(t_all, fu(t_all), 'rs')
y_max = np.max(y)
y_min = np.min(y)
# Plot the segment boundaries
segends = np.linspace(1, 10, 5)
for j in range(len(segends)):
ax.plot((segends[j], segends[j]), (y_min, y_max), linestyle='--', color='gray',
zorder=-100)
# if i > 0:
# # ax.annotate('', xy=(segends[i], 0.05), xytext=(segends[i-1], 0.05),
# # arrowprops=dict(arrowstyle='<->'))
# # ax.text((segends[i]+segends[i-1])/2, 0.15, 'Segment {0}'.format(i-1),
# # ha='center', va='center')
# Set the bounding box properties
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
# print(i)
# if i==0:
# ax.spines['bottom'].set_color('none')
# Set the labels
ax.set_xlabel('time')
if i == 0:
ax.set_ylabel('state value')
else:
ax.set_ylabel('control value')
# Remove the axes ticks
ax.tick_params(
axis='both', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False,
right=False,
labelbottom=False,
labelleft=False) # labels along the bottom edge are off
plt.savefig('05_ode_eval_disc.png')
def plot_06(): # pragma: no cover
fig, axes = plt.subplots(2, 1)
for i in range(2):
ax = axes[i]
x = np.linspace(1, 10, 100)
if i == 0:
# Plot the state time history
y = f(x)
ax.plot(x, y, 'b-')
f_disc = f(t_disc)
f_all = f(t_all)
df_dx_disc = fu(t_disc)
# ax.plot(x, y, 'b-')
ax.plot(t_all, f_all, 'bo')
for j in range(len(t_disc)):
dx = 0.3
ax.plot((t_disc[j]-dx, t_disc[j]+dx),
(f_disc[j]-dx*df_dx_disc[j], f_disc[j]+dx*df_dx_disc[j]), 'r--')
elif i == 1:
y = fu(x)
ax.plot(x, y, 'r-')
ax.plot(t_all, fu(t_all), 'rs')
y_max = np.max(y)
y_min = np.min(y)
# Plot the segment boundaries
segends = np.linspace(1, 10, 5)
for j in range(len(segends)):
ax.plot((segends[j], segends[j]), (y_min, y_max), linestyle='--', color='gray',
zorder=-100)
# if i > 0:
# # ax.annotate('', xy=(segends[i], 0.05), xytext=(segends[i-1], 0.05),
# # arrowprops=dict(arrowstyle='<->'))
# # ax.text((segends[i]+segends[i-1])/2, 0.15, 'Segment {0}'.format(i-1),
# # ha='center', va='center')
# Set the bounding box properties
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
# print(i)
# if i==0:
# ax.spines['bottom'].set_color('none')
# Set the labels
ax.set_xlabel('time')
if i == 0:
ax.set_ylabel('state value')
else:
ax.set_ylabel('control value')
# Remove the axes ticks
ax.tick_params(
axis='both', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False,
right=False,
labelbottom=False,
labelleft=False) # labels along the bottom edge are off
plt.savefig('06_interpolation.png')
def plot_07(): # pragma: no cover
fig, axes = plt.subplots(2, 1)
for i in range(2):
ax = axes[i]
x = np.linspace(1, 10, 100)
if i == 0:
# Plot the state time history
y = f(x)
ax.plot(x, y, 'b-')
f_col = f(t_col)
f_all = f(t_all)
df_dx_col_approx = fu(t_col)
df_dx_col_computed = -0.5 * fu(t_col)
# ax.plot(x, y, 'b-')
ax.plot(t_all, f_all, 'bo')
for j in range(len(t_col)):
dx = 0.3
ax.plot((t_col[j]-dx, t_col[j]+dx),
(f_col[j]-dx*df_dx_col_approx[j], f_col[j]+dx*df_dx_col_approx[j]), 'r--')
ax.plot((t_col[j]-dx, t_col[j]+dx),
(f_col[j]-dx*df_dx_col_computed[j], f_col[j]+dx*df_dx_col_computed[j]),
'k--')
elif i == 1:
y = fu(x)
ax.plot(x, y, 'r-')
ax.plot(t_all, fu(t_all), 'rs')
y_max = np.max(y)
y_min = np.min(y)
# Plot the segment boundaries
segends = np.linspace(1, 10, 5)
for j in range(len(segends)):
ax.plot((segends[j], segends[j]), (y_min, y_max), linestyle='--', color='gray',
zorder=-100)
# if i > 0:
# # ax.annotate('', xy=(segends[i], 0.05), xytext=(segends[i-1], 0.05),
# # arrowprops=dict(arrowstyle='<->'))
# # ax.text((segends[i]+segends[i-1])/2, 0.15, 'Segment {0}'.format(i-1),
# # ha='center', va='center')
# Set the bounding box properties
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
# print(i)
# if i==0:
# ax.spines['bottom'].set_color('none')
# Set the labels
ax.set_xlabel('time')
if i == 0:
ax.set_ylabel('state value')
else:
ax.set_ylabel('control value')
# Remove the axes ticks
ax.tick_params(
axis='both', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False,
right=False,
labelbottom=False,
labelleft=False) # labels along the bottom edge are off
plt.savefig('07_ode_eval_col.png')
if __name__ == '__main__': # pragma: no cover
plot_01()
plot_02()
plot_03()
plot_04()
plot_05()
plot_06()
plot_07()
plt.show()
|
Formal statement is: lemma connected_component_eq: "y \<in> connected_component_set S x \<Longrightarrow> (connected_component_set S y = connected_component_set S x)" Informal statement is: If $y$ is in the connected component of $x$, then the connected component of $y$ is the same as the connected component of $x$.
|
lemma reflect_poly_pCons': "p \<noteq> 0 \<Longrightarrow> reflect_poly (pCons c p) = reflect_poly p + monom c (Suc (degree p))"
|
section \<open>Diophantine Equations\<close>
theory Parametric_Polynomials
imports Main
abbrevs ++ = "\<^bold>+" and
-- = "\<^bold>-" and
** = "\<^bold>*" and
00 = "\<^bold>0" and
11 = "\<^bold>1"
begin
subsection \<open>Parametric Polynomials\<close>
text \<open>This section defines parametric polynomials and builds up the infrastructure to later prove
that a given predicate or relation is Diophantine. The formalization follows
\<^cite>\<open>"h10lecturenotes"\<close>.\<close>
type_synonym assignment = "nat \<Rightarrow> nat"
text \<open>Definition of parametric polynomials with natural number coefficients and their evaluation
function\<close>
datatype ppolynomial =
Const nat |
Param nat |
Var nat |
Sum ppolynomial ppolynomial (infixl "\<^bold>+" 65) |
NatDiff ppolynomial ppolynomial (infixl "\<^bold>-" 65) |
Prod ppolynomial ppolynomial (infixl "\<^bold>*" 70)
fun ppeval :: "ppolynomial \<Rightarrow> assignment \<Rightarrow> assignment \<Rightarrow> nat" where
"ppeval (Const c) p v = c" |
"ppeval (Param x) p v = p x" |
"ppeval (Var x) p v = v x" |
"ppeval (D1 \<^bold>+ D2) p v = (ppeval D1 p v) + (ppeval D2 p v)" |
(* The next line lifts subtraction of type "nat \<Rightarrow> nat \<Rightarrow> nat" *)
"ppeval (D1 \<^bold>- D2) p v = (ppeval D1 p v) - (ppeval D2 p v)" |
"ppeval (D1 \<^bold>* D2) p v = (ppeval D1 p v) * (ppeval D2 p v)"
definition Sq_pp ("_ \<^bold>^\<^bold>2" [99] 75) where "Sq_pp P = P \<^bold>* P"
definition is_dioph_set :: "nat set \<Rightarrow> bool" where
"is_dioph_set A = (\<exists>P1 P2::ppolynomial. \<forall>a. (a \<in> A)
\<longleftrightarrow> (\<exists>v. ppeval P1 (\<lambda>x. a) v = ppeval P2 (\<lambda>x. a) v))"
datatype polynomial =
Const nat |
Param nat |
Sum polynomial polynomial (infixl "[+]" 65) |
NatDiff polynomial polynomial (infixl "[-]" 65) |
Prod polynomial polynomial (infixl "[*]" 70)
fun peval :: "polynomial \<Rightarrow> assignment \<Rightarrow> nat" where
"peval (Const c) p = c" |
"peval (Param x) p = p x" |
"peval (Sum D1 D2) p = (peval D1 p) + (peval D2 p)" |
(* The next line lifts subtraction of type "nat \<Rightarrow> nat \<Rightarrow> nat" *)
"peval (NatDiff D1 D2) p = (peval D1 p) - (peval D2 p)" |
"peval (Prod D1 D2) p = (peval D1 p) * (peval D2 p)"
definition sq_p :: "polynomial \<Rightarrow> polynomial" ("_ [^2]" [99] 75) where "sq_p P = P [*] P"
definition zero_p :: "polynomial" ("\<^bold>0") where "zero_p = Const 0"
definition one_p :: "polynomial" ("\<^bold>1") where "one_p = Const 1"
lemma sq_p_eval: "peval (P[^2]) p = (peval P p)^2"
unfolding sq_p_def by (simp add: power2_eq_square)
fun convert :: "polynomial \<Rightarrow> ppolynomial" where
"convert (Const c) = (ppolynomial.Const c)" |
"convert (Param x) = (ppolynomial.Param x)" |
"convert (D1 [+] D2) = (convert D1) \<^bold>+ (convert D2)" |
"convert (D1 [-] D2) = (convert D1) \<^bold>- (convert D2)" |
"convert (D1 [*] D2) = (convert D1) \<^bold>* (convert D2)"
lemma convert_eval: "peval P a = ppeval (convert P) a v" (* implicit for all v *)
by (induction P, auto)
definition list_eval :: "polynomial list \<Rightarrow> assignment \<Rightarrow> (nat \<Rightarrow> nat)" where
"list_eval PL a = nth (map (\<lambda>x. peval x a) PL)"
end
|
import tactic
import tactic.slim_check
mk_simp_attribute props ""
variables {α : Type*}
@[props] def is_reflexive (r : α → α → Prop) : Prop := ∀ a, r a a
@[props] def is_irreflexive (r : α → α → Prop) : Prop := ∀ a, ¬ r a a
@[props] def is_symmetric (r : α → α → Prop) : Prop := ∀ a b, r a b → r b a
@[props] def is_transitive (r : α → α → Prop) : Prop := ∀ a b c, r a b → r b c → r a c
@[props] def is_asymmetric (r : α → α → Prop) : Prop := ∀ a b, r a b → ¬ r b a
@[props] def is_total' (r : α → α → Prop) : Prop := ∀ a b, r a b ∨ r b a
@[props] def is_antisymmetric (r : α → α → Prop) : Prop := ∀ a b, r a b → r b a → a = b
@[props] def is_partial_equivalence (r : α → α → Prop) : Prop := is_symmetric r ∧ is_transitive r
@[props] def is_trichotomous' (r : α → α → Prop) : Prop := ∀ a b, r a b ∨ a = b ∨ r b a
@[props] def is_preorder' (r : α → α → Prop) : Prop := is_reflexive r ∧ is_transitive r
@[props] def is_total_preorder' (r : α → α → Prop) : Prop := is_transitive r ∧ is_total' r
@[props] def is_partial_order' (r : α → α → Prop) : Prop := is_preorder' r ∧ is_antisymmetric r
@[props] def is_linear_order' (r : α → α → Prop) : Prop := is_partial_order' r ∧ is_total' r
@[props] def is_equivalence (r : α → α → Prop) : Prop := is_preorder' r ∧ is_symmetric r
@[props] def is_strict_order' (r : α → α → Prop) : Prop := is_irreflexive r ∧ is_transitive r
-- def is_strict_weak_order' (r : α → α → Prop) : Prop := is_strict_order' r ∧ is_incomp_trans' r
-- def is_strict_total_order' (r : α → α → Prop) : Prop := is_trichotomous' r ∧ is_strict_weak_order' r
open tactic expr sum
meta def props : list (name × string) :=
[(`is_reflexive, "reflexive"),
(`is_irreflexive, "irreflexive"),
(`is_symmetric, "symmetric"),
(`is_transitive, "transitive"),
(`is_asymmetric, "asymmetric"),
(`is_total', "total"),
(`is_antisymmetric, "antisymmetric"),
(`is_trichotomous', "trichotomous")]
meta def omega_that_fails_on_false : tactic unit :=
done <|> all_goals' (target >>= λ tgt, guard (tgt.const_name ≠ `false) >> interactive.omega [])
meta def try_slim_check (cfg : slim_check.slim_check_cfg) :
tactic (option string) := do
result ← try_or_report_error (interactive.slim_check {quiet := tt, ..cfg}),
inr n ← return result | return none,
when (n.starts_with "Failed to create a") failed,
return (some n)
meta def finds_slim_check_example (cfg : slim_check.slim_check_cfg := {}) : tactic unit := do
n ← try_slim_check cfg,
guard n.is_some
meta def test_init : tactic unit := do
t ← target,
let ⟨f, args⟩ := t.get_app_fn_args,
let prop_name := f.const_name,
let rel_name := args.ilast.const_name,
dsimp_target none [prop_name, rel_name] {fail_if_unchanged := ff}
meta def test_init_main (t : expr) (R : name) : tactic expr := do
t.dsimp {} ff [`props] [simp_arg_type.expr $ expr.const R []]
meta def test_main (t : expr) (cfg : slim_check.slim_check_cfg := {}) : tactic (ℕ ⊕ string) := do
-- trace t,
let not_t := const `not [] t,
retrieve (tactic.assert `this t >> focus1 (tidy >> omega_that_fails_on_false >> done) >> return (inl 0)) <|>
retrieve (tactic.assert `this not_t >> focus1 (tidy >> omega_that_fails_on_false >> done) >> return (inl 1)) <|>
retrieve (do tactic.assert `this t, some error ← focus1 (try_slim_check cfg), return (inr error)) <|>
return (inl 2)
meta def test_core (cfg : slim_check.slim_check_cfg := {}) : tactic (ℕ ⊕ string) := do
test_init,
t ← target,
r ← test_main t cfg,
when (r = inl 0) (get_local `this >>= exact),
return r
meta def test (cfg : slim_check.slim_check_cfg := {}) : tactic unit := do
result ← test_core cfg,
when (result = inl 0) (trace "succeeded!"),
when (result = inl 1) (trace "proved the negation!"),
when (result.is_right) (trace "found a counterexample!" >> trace result.get_right.iget),
when (result = inl 2) (trace "couldn't make progress."),
skip
meta def test_all (R : name) (cfg : slim_check.slim_check_cfg := {}) : tactic unit :=
props.mmap' $ λ ⟨prop_nm, prop_str⟩, retrieve $ do {
-- let t : expr := (const rel [level.zero]) (const `nat []) (const R []),
rel ← mk_const R,
prop ← mk_const prop_nm,
t ← to_expr ``(%%prop %%rel),
t ← test_init_main t R,
result ← test_main t cfg,
-- trace result,
when (result = inl 0) (trace!"{R} is {prop_str}!"),
when (result = inl 1) (trace!"{R} is not {prop_str}!"),
when (result.is_right) (trace!"{R} is not {prop_str}!"),
when (result = inl 2) (trace!"Cannot resolve whether {R} is {prop_str}.") }
@[user_attribute] meta def test_attr : user_attribute :=
{ name := `test,
descr := "Automatically tests whether a relation `ℕ → ℕ → Prop` satisfies some properties.",
after_set := some $
λ R _ _, do test_all R }
-- def dummy : false ↔ 0 = 1 := by simp
@[test]
def R (n m : ℕ) : Prop :=
n < m ∨ n > 10
example : is_transitive R :=
by { test, sorry }
|
function rd = jed_to_rd ( jed )
%*****************************************************************************80
%
%% JED_TO_RD converts a JED to an RD.
%
% Licensing:
%
% This code is distributed under the GNU LGPL license.
%
% Modified:
%
% 24 June 2012
%
% Author:
%
% John Burkardt
%
% Reference:
%
% Edward Reingold, Nachum Dershowitz,
% Calendrical Calculations, the Millennium Edition,
% Cambridge, 2002.
%
% Parameters:
%
% Input, real JED, the Julian Ephemeris Date.
%
% Output, real RD, the RD date.
%
rd_epoch = epoch_to_jed_rd ( );
rd = jed - rd_epoch;
return
end
|
rm(list=ls(all=TRUE))
Sys.setlocale(locale="chinese")
setwd('./data')
#library(geojsonio)
library(ropencc)
library(dplyr)
library(jsonlite)
library(leaflet)
library(geojson)
library(sp)
library(sf)
library(rgeos)
library(rgdal)
library(stringdist)
library(data.table)
library(fasttime)
library(tidyr)
arr_dep <- fread('1213.txt',header=F,quote='"',encoding = 'UTF-8',sep=",")
arr_dep$time <- fastPOSIXct(arr_dep$V6,'UTC')
jsonfile <- "osm_bus_stops_zhuhai.geojson"
stops<- readOGR(jsonfile, require_geomType="wkbPoint",use_iconv = TRUE, encoding = "UTF-8")
# read stations defined by polygons
stations <- readOGR(jsonfile, require_geomType="wkbPolygon",use_iconv = TRUE, encoding = "UTF-8")
# convert station polygons to points
stations <- SpatialPointsDataFrame(gCentroid(stations, byid=TRUE),
stations@data, match.ID=FALSE)
all_stops=rbind(stops,stations)
#all_stops=as.data.frame(rbind(stops,stations))
noname_stops <- all_stops[is.na(all_stops$name),]
named_stops <- all_stops[!is.na(all_stops$name),]
ccts = converter(T2S)
named_stops$name <- sapply(as.character(named_stops$name),function(x) {
ccts[x]
})
###################################################
#filter out stops points with at most 4 spottings
stop_names=as.data.frame(tapply(named_stops$name,named_stops$name,FUN=length))
names(stop_names) <- c("spots")
stop_names$name <- row.names(stop_names)
good_stops <- subset(stop_names,spots<=4)
################################################################
coupled_polygon=lapply(good_stops$name,
FUN=function(x,SpPointsDF){
points <- coordinates(subset(SpPointsDF,name==x))
Polygons(list(Polygon(rbind(points,points[1,]))),x)
},
SpPointsDF=named_stops)
coupled_polygon=SpatialPolygons(coupled_polygon,proj4string = CRS("+proj=longlat +ellps=WGS84"))
coupled_polygon=SpatialPolygonsDataFrame(coupled_polygon,good_stops, match.ID = TRUE)
simple_stops=gCentroid(coupled_polygon,byid=TRUE)
simple_stops <- SpatialPointsDataFrame(simple_stops,good_stops)
#####################################################################
#write out stops.txt
stop_output <- as.data.frame(simple_stops)
stop_output$stop_id <- seq(1,nrow(stop_output))
stop_output <- stop_output[,c(5,2,4,3)]
names(stop_output) <- c("stop_id","stop_name","stop_lat","stop_lon")
setwd('../build_GTFS')
cat("stop_id,stop_name,stop_lat,stop_lon","\n",file="stops.txt")
write.table(stop_output,file="stops.txt",sep = ",",row.names=FALSE,col.names=FALSE,append=TRUE)
setwd('../data')
#####################################################################
#write out routes.txt
routes_output <- data.frame(route_id=seq(1,length(unique(arr_dep$V1))),
route_short_name=unique(arr_dep$V1),
route_long_name=unique(arr_dep$V1),
route_type=3
)
setwd('../build_GTFS')
cat("route_id,route_short_name,route_long_name,route_type","\n",file="routes.txt")
write.table(routes_output,file="routes.txt",sep = ",",row.names=FALSE,col.names=FALSE,append=TRUE,fileEncoding = "UTF-8")
setwd('../data')
#####################################################################
#write out trips.txt
#first eliminate duplicate records
trip_holder <- data.table(V1=character(),
V2=character(),
V3=character(),
V5=character(),
seq=integer(),
arr=double(),
dep=double())
trip_holder$arr <- fastPOSIXct(trip_holder$arr,'UTC')
trip_holder$dep <- fastPOSIXct(trip_holder$dep,'UTC')
for (i in routes_output$route_id)
#for (i in c(1))
{
route_id=i
route_name=routes_output[routes_output$route_id==route_id,]$route_short_name
temp_dep <- arr_dep[V1==route_name]
setkey(temp_dep,V2,time)
temp_dep[,CID:=paste0(V4,V5)]
temp_dep$seq <- with(temp_dep,
rep(seq(length(rle(CID)$values)),rle(CID)$lengths)
)
temp_dep$inner_seq <- with(temp_dep,
ave(seq_along(seq),seq,FUN=seq_along)
)
temp_dep <- temp_dep[inner_seq==1]
temp_dep[,c("V6","CID","seq","inner_seq"):=NULL]
temp_dep$seq <- with(temp_dep,
rep(seq(length(rle(V5)$values)),rle(V5)$lengths)
)
temp_dep$inner_seq <- with(temp_dep,
ave(seq_along(seq),seq,FUN=seq_along)
)
temp_dep$instance <- with(temp_dep,
rep(rle(seq)$lengths,rle(seq)$lengths)
)
temp_dep <- temp_dep[inner_seq==1 |inner_seq==instance ]
temp_faulty=temp_dep %>% group_by(seq) %>% mutate(value=paste(V4,collapse = ""))
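# V4 values: "到站" = arrived at stop, "离站" = departed from stop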
temp_faulty_list <- unique(temp_faulty[temp_faulty$value=="到站到站",]$seq)
temp_faulty_list1 <- unique(temp_faulty[temp_faulty$value=="离站离站",]$seq)
temp_dep[(seq %in% temp_faulty_list) & inner_seq==instance & inner_seq>1]$V4 <- "离站"
temp_dep[(seq %in% temp_faulty_list1) & inner_seq==1]$V4 <- "到站"
temp_dep_stripped <- temp_dep[,c(seq(1,7))]
temp_dep=temp_dep_stripped %>% spread(V4,time)
#tryCatch(temp_dep_stripped %>% spread(V4,time),error=c)
setkey(temp_dep,seq)
names(temp_dep)[c(6,7)] <-c("arr","dep")
temp_dep[is.na(arr),arr:=dep]
temp_dep[is.na(dep),dep:=arr]
trip_holder <- rbind(trip_holder,temp_dep)
}
#temp_dep <- temp_dep[order(seq)]
#temp_dep[c(seq(10458, 10459))]
#temp_dep[c(seq(16363, 16364))]
trip_holder[,diff:=dep-arr]
trip_holder <- trip_holder[diff>=0]
trip_holder[,CID:=paste0(V2,V3)]
trip_holder$trip_id <- with(trip_holder,
rep(seq(length(rle(CID)$values)),rle(CID)$lengths)
)
routes_output <- as.data.table(routes_output)
setkey(routes_output,route_short_name)
setkey(trip_holder,V1)
trip_data <- routes_output[trip_holder]
setkey(trip_data,route_id,seq)
trips_output <- trip_data[,c("trip_id","route_id")]
trips_output[,service_id:=0]
setkey(trips_output)
trips_output <- unique(trips_output)
setwd('../build_GTFS')
cat("trip_id,route_id,service_id","\n",file="trips.txt")
write.table(trips_output,file="trips.txt",sep = ",",row.names=FALSE,col.names=FALSE,append=TRUE,fileEncoding = "UTF-8")
setwd('../data')
#####################################################################
#write out stop_times.txt
# establish correspondence between archived stop name and osm stop names (ids)
record_name <- data.table(id=seq(1,length(unique(trip_data$V5))),stopname=unique(trip_data$V5))
record_name[,match1:='']
record_name[,match2:='']
record_name[,stop_id:=0]
for (i in 1:nrow(record_name))
{
record_name[i,3]=stop_output[amatch(record_name[i,2],stop_output$stop_name,method='jw',maxDist=0.3),2]
record_name[i,4]=stop_output[amatch(record_name[i,2],stop_output$stop_name,method='jw',maxDist=0.01),2]
record_name[i,5]=stop_output[amatch(record_name[i,2],stop_output$stop_name,method='jw',maxDist=0.01),1]
}
record_name_lkp <- record_name[!is.na(record_name$match2),c("match2","stop_id")]
setkey(record_name_lkp,match2)
valid_stops=record_name[!is.na(record_name$match2)]$match2
stop_time_output <- trip_data[(V5 %in% valid_stops),]
setkey(stop_time_output,V5)
stop_time_output <- stop_time_output[record_name_lkp]
setkey(stop_time_output,trip_id,seq)
stop_time_output$stop_sequence <- with(stop_time_output,
ave(seq_along(trip_id),trip_id,FUN=seq_along)
)
stop_time_output[,arrival_time:=strftime(arr, format = "%H:%M:%S",tz="GMT")]
stop_time_output[,departure_time:=strftime(dep, format = "%H:%M:%S",tz="GMT")]
stop_time_output <- stop_time_output[,c("trip_id","arrival_time","departure_time","stop_id","stop_sequence")]
setwd('../build_GTFS')
cat("trip_id,arrival_time,departure_time,stop_id,stop_sequence","\n",file="stop_times.txt")
write.table(stop_time_output,file="stop_times.txt",sep = ",",quote=FALSE,row.names=FALSE,col.names=FALSE,append=TRUE,fileEncoding = "UTF-8")
setwd('../data')
######################################################################
#visualize stops
m <- leaflet(data=simple_stops) %>%
addTiles() %>%
addMarkers(popup = ~as.character(name), label = ~as.character(name))
m
#########################################
|
The layout and style of the Song tombs resemble those found in the contemporary Tangut kingdom of the Western Xia, which also had an auxiliary burial site associated with each tomb. At the center of each burial site is a truncated pyramidal tomb, each having once been guarded by a four-walled enclosure with four centered gates and four corner towers. About 100 km (62 mi) from Gongxian is the Baisha Tomb, which contains "elaborate facsimiles in brick of Chinese timber frame construction, from door lintels to pillars and pedestals to bracket sets, that adorn interior walls." The Baisha Tomb has two large separate chambers with conical ceilings; a large staircase leads down to the entrance doors of the subterranean tomb.
|
\documentclass[a4paper,12pt]{report}
\usepackage[utf8x]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{color}
\usepackage{graphicx}
\newcommand{\norm}[1]{\lVert#1\rVert}
\usepackage{listings}
%\usepackage{enumitem}
\usepackage{enumerate}
\usepackage{url}
\definecolor{dkgreen}{rgb}{0,0.6,0}
\definecolor{gray}{rgb}{0.5,0.5,0.5}
\definecolor{mauve}{rgb}{0.58,0,0.82}
\lstset{frame=tb,
language=Matlab,
aboveskip=3mm,
belowskip=3mm,
showstringspaces=false,
columns=flexible,
basicstyle={\small\ttfamily},
numbers=none,
numberstyle=\tiny\color{gray},
keywordstyle=\color{blue},
commentstyle=\color{dkgreen},
stringstyle=\color{mauve},
breaklines=true,
breakatwhitespace=true,
tabsize=3
}
% Title Page
\title{CMSC828J Project 3 report \\ \ \\ Theme: \\ \ \\ Practical issues \& insights \\ with Isomap}
\author{Jason Filippou \\ \ \\ [email protected]}
\begin{document}
\maketitle
%\section*{Preliminaries}
\begin{abstract}
For this project, we have implemented Isomap, arguably the most commonly used algorithm for Manifold Learning. We outline the
steps of the algorithm and our accompanying implementation. We discuss the practical issues that an implementation of Isomap would need to address. The algorithm is first validated on synthetic data and subsequently applied on the face data used in \cite{Tenenbaum2000}, where we are able to reproduce some of the authors' results. In some sense, our version of Isomap extends that of \cite{Tenenbaum2000} in that, if multiple connected components arise from the $k$-nearest neighbor graph, we embed almost all connected components (up to some very small ones which we discard as noise), whereas the original Isomap algorithm required the user to specify the index of the connected component that would be embedded.
This write-up is organized as follows: First, we mention every step of the Isomap algorithm briefly, accompanying it with MATLAB code snippets. Then, we describe our synthetic data results, while simultaneously explaining the nature of the embedding for various values of $k$. Finally, we present the results on the face data. The results themselves are very easy to reproduce; as is mentioned in this write-up, it suffices to simply run the scripts \texttt{swissroll.m}, \texttt{sinusoid.m} and \texttt{expFaceData.m} in any preferred order.
Some acknowledgements are due for this improved version of the project. For the synthetic data part of our experiments, classmate Konstantinos Zambogiannis helped us produce a swissroll which appeared to be sampled densely enough for Isomap to work correctly. \footnote{Recall that dense sampling is among the requirements of Isomap.}
\end{abstract}
\section*{1. Steps of Isomap}
The key steps of Isomap are outlined below. We accompany the analysis with MATLAB code snippets.
\subsection*{1.1 KNN graph}
In this step, we form the $k$-nearest-neighbors graph of the data by assigning the edge weight $W_{i,j} = \norm{x_i - x_j}_2$ whenever $x_j$ is among the $k$ nearest neighbors of $x_i$, and $\infty$ otherwise. This reflects the \textit{local structure} of the manifold. As we will see later on, the choice of $k$ affects the form of the embedding significantly. The following MATLAB function, \texttt{build\_KNN\_Graph.m}, implements this step.
\vspace{.2in}
\begin{lstlisting}
function [ G ] = build_KNN_Graph( data, k )
%BUILD_KNN_GRAPH Find the k-nearest neighbors of every data point
% and build the knn graph G.
% The knn graph is represented as an NxN weighted sparse matrix.
% This is the format expected by MATLAB graph functions such as
% graphshortestpath.
% The first column of D should always be 0.
[IDX, D] = knnsearch(data, data, 'k', k, 'distance', 'euclidean');
% Initialize the weight matrix with zeroes
G = zeros(size(data,1));
% For every datapoint
for i = 1:size(data,1)
% For every neighbor other than ourselves
for j = 2:k
% Set the arc weight to be the Euclidean distance calculated.
G(i, IDX(i, j)) = D(i, j);
end
end
G = sparse(G); % Compact storage, needed by graphshortestpath and other functions.
end
\end{lstlisting}
This function returns a sparse matrix G, which is the preferred weighted graph format of MATLAB. We can use this format as input to other functions,
such as \texttt{graphshortestpath}.
\subsection*{1.2 Shortest Path distances}
In this step, we compute the shortest path distances between all pairs of points, and then store the squares of these distances in a symmetric matrix D. The idea is that those shortest path distances approach the true geodesics on the manifold. \cite{Tenenbaum2000} proves that, as the sampling density grows without bound (that is, the number of points $N \to +\infty$), the geodesics are approximated arbitrarily well. Initially, we used Dijkstra's algorithm to implement this step. However, we found that MATLAB solves this problem much more efficiently through the built-in function \texttt{graphallshortestpaths}, which implements Johnson's algorithm. This algorithm is much faster since it can operate over the entire graph at once, instead of needing a source point, like Dijkstra's algorithm does. For documentation consistency, we show the source code of \texttt{build\_distance\_matrix.m} below:
\vspace{.2in}
\begin{lstlisting}
function [ D ] = build_distance_matrix(n, G )
%BUILD_DISTANCE_MATRIX Builds a distance matrix between points on the graph G by measuring their distances as that of the shortest path between them on G. Warning: Using this function is much slower than using graphallshortestpath.
D = zeros(n);
for i = 1:n
for j = 1:n
%fprintf('%d, %d\n', i, j); % For tracing our progress.
if j > i
[D(i, j), ~, ~] = graphshortestpath(G, i, j, 'Directed', false);
else
D(i, j) = D(j, i); % Just a reference copy to speed things up, since the matrix is symmetric.
end
end
end
%fprintf('%s\n', 'Done!'); % For tracing our progress.
end
\end{lstlisting}
\subsection*{1.3 Classic Multi-dimensional Scaling (cMDS)}
The final step of Isomap is the low-dimensional embedding, achieved by Classic Multi-dimensional Scaling (cMDS) on the shortest-path distance matrix
$D$. cMDS differs from metric MDS in that it projects $D$ to the cone of positive semi-definite matrices by making its negative eigenvalues zero \cite{Cayton2005}. The function \texttt{cMDS.m}, outlined below, computes the Singular Value Decomposition of $B = -\frac{1}{2}HDH = USU^T$ and returns the first $dim$ columns of $US^{\frac{1}{2}}$. Note that the matrix $S^{\frac{1}{2}}$ is not well-defined in the real number domain unless we make its negative eigenvalues zero.
\vspace{.2in}
\begin{lstlisting}
function [ X, retEigvals ] = cMDS( D, dim )
%cMDS Classical Multi-Dimensional Scaling (MDS)
% Parameters:
% D, a symmetric matrix of distances (not necessarily Euclidean)
% dim: the target dimensionality to compute the low-rank approximation with.
% Returns: X = (U * S^(1/2))[n x dim], where n is the number of rows and dim signifies the number of first columns of the matrix product to return
% eigVals the first 5 eigenvalues returned by e
N = size(D, 1);
H = eye(N) - 1/N * ones(N, 1) * ones(N,1)'; % centering matrix
B = (-1/2)*H*D*H;
[U, S, ~] = svd(B);
% Because the distances are not necessarily Euclidean, we need to project B onto the cone of p.s.d matrices. To do that, we need to make the negative eigenvalues of B zero.
eigvals = diag(S);
eigvals(eigvals < 0) = 0;
S = diag(sqrt(eigvals));
% We now take the first "dim" columns of (U * S^(1/2).
M = U * S;
X = M(:, 1:dim);
retEigvals = eigvals(1: 5)';
\end{lstlisting}
\section*{2. Experiments on synthetic data}
In this section, we detail our results on synthetic 2D data embedded in 3D. For this purpose, we produce a custom swissroll and 3D sinusoid.
\begin{figure}
\centering
\includegraphics[width=.7\textwidth]{figures/swissroll_data.eps}
\caption{The sampled swissroll.}
\label{fig:swissroll}
\end{figure}
\subsection*{2.1 Swissroll}
Our original swissroll data (figure \ref{fig:swissroll}) consists of 2500 points randomly sampled from a 2D uniform distribution and then mapped to a swissroll by using the swissroll map:
\begin{equation}
(X, Y) \rightarrow (X \cos(X),\ Y,\ X \sin(X))
\label{eq:swiss_map}
\end{equation}
This data is generated in \texttt{gen\_swissroll.m}. Our swissroll experiments can be reproduced fully by running the top-level script \texttt{swissroll.m}. Figure \ref{fig:swiss_embeddings} shows embeddings and eigenvalue magnitudes for various values of the parameter $k$. The eigenvalue magnitudes ``give away'' the dominant dimensionality of the produced embedding. As our results show, the value of $k$ has
a tremendous effect on the embedding, both from a visual and a mathematical perspective. For $k=4$, we tend to favor a one-dimensional embedding. Also note that for small values of $k$ (in this case, $k < 7$), the $k$-nearest-neighbor graph contains multiple connected components. We handle this by using MATLAB's connected-component functionality. For more details, we refer the reader to the comments in \texttt{swissroll.m}.
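For reference, the following is a minimal sketch of how such data can be generated; the sampling ranges are illustrative, and \texttt{gen\_swissroll.m} may differ in these details.
\vspace{.2in}
\begin{lstlisting}
% Sketch of a swissroll generator; the sampling ranges are illustrative only.
N = 2500;
X = 3*pi/2 * (1 + 2*rand(N, 1)); % "unrolled" coordinate, sampled uniformly
Y = 20 * rand(N, 1);             % width coordinate, sampled uniformly
data = [X .* cos(X), Y, X .* sin(X)]; % apply the swissroll map
scatter3(data(:,1), data(:,2), data(:,3), 5, X, 'filled');
\end{lstlisting}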
\begin{figure}
\centering
\includegraphics[width = .4\textwidth]{figures/swissroll_embedding_k=4}
\includegraphics[width = .4\textwidth]{figures/swissroll_barplot_k=4}\\
\includegraphics[width = .4\textwidth]{figures/swissroll_embedding_k=6}
\includegraphics[width = .4\textwidth]{figures/swissroll_barplot_k=6} \\
\includegraphics[width = .4\textwidth]{figures/swissroll_embedding_k=8}
\includegraphics[width = .4\textwidth]{figures/swissroll_barplot_k=8}\\
\includegraphics[width = .4\textwidth]{figures/swissroll_embedding_k=10}
\includegraphics[width = .4\textwidth]{figures/swissroll_barplot_k=10}\\
\caption{Swissroll embeddings and eigenvalue spectra for $k=4, 6, 8, 10$.}
\label{fig:swiss_embeddings}
\end{figure}
For all $k > 4$, the eigenvalue spectrum still shows only 2 appreciable eigenvalues. This is expected, since our data
is essentially just a curved 2D plane embedded in 3D.
\subsection*{2.2 Sinusoid}
The MATLAB function file \texttt{gen\_sinusoid.m} was used to generate a 2D sinusoid of 2000 points embedded in 3D (Figure \ref{fig:sinusoid}). Figure
\ref{fig:sinus_embeddings} shows the embeddings attained for this sinusoid for various values of $k$. We again see that $k$ has a direct consequence
on the form of the embedding. For $k=4$, we obtain an almost 1D embedding. For larger values of $k$, Isomap converges to the intrinsic dimensionality
of the dataset. The results are reproducible by running the script \texttt{sinusoid.m}.
\begin{figure}
\centering
\includegraphics[width=.7\textwidth]{figures/sinusoid_data.eps}
\caption{The sampled sinusoid.}
\label{fig:sinusoid}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = .4\textwidth]{figures/sinusoid_embedding_k=4}
\includegraphics[width = .4\textwidth]{figures/sinusoid_barplot_k=4}\\
\includegraphics[width = .4\textwidth]{figures/sinusoid_embedding_k=6}
\includegraphics[width = .4\textwidth]{figures/sinusoid_barplot_k=6} \\
\includegraphics[width = .4\textwidth]{figures/sinusoid_embedding_k=8}
\includegraphics[width = .4\textwidth]{figures/sinusoid_barplot_k=8}\\
\includegraphics[width = .4\textwidth]{figures/sinusoid_embedding_k=10}
\includegraphics[width = .4\textwidth]{figures/sinusoid_barplot_k=10}\\
\caption{Sinusoid embeddings and eigenvalue spectra for $k=4, 6, 8, 10$.}
\label{fig:sinus_embeddings}
\end{figure}
\section*{3. Face data}
Finally, we apply our implementation of Isomap to the face data used in \cite{Tenenbaum2000}, downloadable from \url{
http://isomap.stanford.edu/}. This data consists of 698 grayscale images of the same face, with rotations along the X and Y axis and variation in
lighting. These degrees of freedom are the dominant ones in this dataset, which is used as one of the running examples of \cite{Tenenbaum2000}.
The authors have been able to recover these degrees of freedom through the eigenvalue spectrum returned by Isomap, and as is evident from
Figure \ref{fig:faceDataSpectra}, so can our own implementation of the algorithm. It should be pointed out that the 4th and 5th eigenvalues are not
significantly smaller than the 3rd: this mirrors the experimental results of \cite{Tenenbaum2000} and reflects the fact that the degrees of freedom are
distorted by phenomena such as self-shadowing and non-symmetries in the face. Running the script \texttt{expFaceData.m} reproduces our results.
\begin{figure}
\centering
\includegraphics[width=.4\textwidth]{figures/faceEigSpectrum_k=4}
\includegraphics[width=.4\textwidth]{figures/faceEigSpectrum_k=6}\\
\includegraphics[width=.4\textwidth]{figures/faceEigSpectrum_k=8}
\includegraphics[width=.4\textwidth]{figures/faceEigSpectrum_k=10}
\caption{Face data eigenvalue spectra for $k=4, 6, 8, 10$.}
\label{fig:faceDataSpectra}
\end{figure}
\section*{4. Conclusions}
In this project, we implemented Isomap, the original and most widely used algorithm for Manifold Learning. We showed the steps of Isomap
and hinted towards code snippets that implement those steps. We experimented on synthetic and real data and found that the results matched our
intuitions (for the synthetic data) as well as prior, published results (for the real data).
Isomap has been studied extensively in the manifold learning literature, and most of its theoretical strengths and weaknesses have been discussed. With this work,
we aimed to highlight some practical challenges that an implementation of the algorithm needs to address. Specifically:
\begin{enumerate}
\item We deal with the issue of multiple connected components more thoroughly than the original Isomap implementation, which was only capable of
projecting one connected component, whenever multiple exist. We find that this issue is prevalent in sparsely sampled data or when the value of $k$ is
small. For example, for $k=4$, the 4-nearest neighbor graph produced for our densely sampled ($N=2500$) swissroll introduced 36 connected components.
\item We perform a visual and mathematical examination of the effect that the parameter $k$ has on the form of the embeddings produced. We determine that, if $k$ is not ``sufficiently" large, the data tends to be projected in a linear subspace of lower dimensionality than the intrinsic dimensionality of the data. In cases where the user has some prior information about this intrinsic dimensionality (e.g face data), then tuning $k$ such
that this dimensionality is well-reflected is possible. However, in the general case, where there is no prior information about the manifold whatsoever,
it does not seem obvious how $k$ should be selected. To the best of our knowledge, there do not exist theoretical guarantees that link the value of $k$ to the intrinsic dimensionality of the dataset.
\item We experimentally verify that the biggest bottleneck of Isomap is the shortest-path computation. Our original implementation used Dijkstra's algorithm and iterated over half of the node pairs, for a total complexity of $O\left(\binom{N}{2} \times \log(N) \times E\right)$, where $N$ is the number of nodes and $E$ is the number of edges in the graph. As the value of $k$ increases, $E$ explodes, making this choice of algorithm problematic. We therefore replaced Dijkstra's algorithm (via $\texttt{graphshortestpath}$) with Johnson's algorithm (via $\texttt{graphallshortestpaths}$), whose complexity depends only linearly on $E$: $O(N \times \log(N) + N \times E)$. Furthermore,
our implementation is easily parallelizable by using MATLAB's \texttt{parfor} loops such that multiple values of $k$ are tested simultaneously. One would only need to take one of the three top-level scripts, add
\texttt{matlabpool(n)} with \texttt{n} the required number of MATLAB workers (processing threads) before the for loops, change the for loops into \texttt{parfor} loops, and add \texttt{matlabpool close} after the loops (a sketch of this transformation follows the list). Prior to employing Johnson's algorithm, this is how we structured these scripts; after switching to Johnson's algorithm, the computational benefit was so significant that there was no longer a good reason to burden the host machine's threads with parallel computation.
\end{enumerate}
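For concreteness, the parallelization described in the last item amounts to roughly the following transformation; \texttt{run\_one\_k} is a placeholder for the per-$k$ embedding code in the top-level scripts.
\vspace{.2in}
\begin{lstlisting}
ks = [4 6 8 10];
matlabpool(4);             % open 4 MATLAB workers
parfor i = 1:length(ks)    % previously a plain for loop
    run_one_k(data, ks(i));  % placeholder for the per-k embedding code
end
matlabpool close;
\end{lstlisting}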
\bibliography{../828_refs}
\bibliographystyle{plain}
\end{document}
|
If two loops are homotopic, then they have the same value at any point in the interval $[0,1]$.
|
# Copyright (c) 2018-2021, Carnegie Mellon University
# See LICENSE for details
# LocalConfig is configured in the user's local _spiral.g (Linux/Unix)
# or _spiral_win.g (Windows) in the Spiral root directory
#
# At a minimum, cpuinfo and osinfo need to be set, see one of those
# above mentioned config files for reference
Declare(LocalConfig, SpiralDefaults);
Class(LocalConfig, rec(
info := meth(self)
Print("\nPID: ", GetPid(), "\n");
end,
appendSym := self >> When(IsBound(self.osinfo.isWindows) and self.osinfo.isWindows(), "%*", "$*"),
compilerinfo := rec(
compiler:= "",
defaultMode := "none",
modes := rec(none := ""),
alignmentSpecifier := ()->"",
info := ()->Print("compiler info not set"),
),
cpuinfo := rec(
cpuname:="",
vendor:="",
freq:=0,
default_lang:="",
isWindows := False,
isLinux := False,
is32bit := False,
is64bit := False,
nproc := 1,
SIMDname := False,
info := ()->Print("CPU info not set"),
SIMD := () -> spiral.platforms.SIMDArchitectures
),
osinfo := rec(info := ()->Print("OS info not set"),
isWindows := False,
isLinux := False,
isDarwin := False,
isCygwin := False),
svninfo := rec(version := "unknown",
modified := "unknown",
mixed := "unknown",
isInit := false,
info := self >> When(self.isInit, Print("SVN: ", self.version, When(self.modified, " (modified)", "")), Print("SVN info not set"))
),
getOpts := arg >> arg[1].cpuinfo.getOpts(Drop(arg, 1)),
setTitle := meth(arg)
local self, title;
self := arg[1];
if not IsBound(self.osinfo.setTitle) then return false; fi;
if Length(arg)=1 then title := ""; else title := Concat(" - ", arg[2]); fi;
self.osinfo.setTitle(Concat("Spiral 5.0", title));
return true;
end
));
HighPerfMixin := rec(
useDeref := true,
compileStrategy := compiler.IndicesCS2,
propagateNth := false
);
SpiralDefaults := CopyFields(SpiralDefaults, rec(
includes := ["<include/omega64.h>"],
precision := "double",
generateInitFunc := true,
XType := code.TPtr(code.TReal),
YType := code.TPtr(code.TReal),
unifyStoredDataType := false, # false | "input" | "output"
# if non-false, then unifies the datatype of
# precomputed data with the datatype of input (X)
# or output (Y)
# we implement complex transforms using real vector of 2x size
dataType := "real",
globalUnrolling := 32,
faultTolerant := false,
printWebMeasure := false,
# compiler options
finalBinSplit := false,
declareConstants := false,
doScalarReplacement := false,
propagateNth := true,
inplace := false,
doSumsUnification := false,
arrayDataModifier := "static",
scalarDataModifier := "",
arrayBufModifier := "static",
funcModifier := "", # for example "__decl" or "__fastcall"
valuePostfix := "",
# How much information Spiral is printing on the terminal.
# Currently rather few functions are using this.
verbosity := 1,
# list of include files in generated C code, eg. ["<math.h>"]
includes := [],
formulaStrategies := rec(
sigmaSpl := [ sigma.StandardSumsRules ],
preRC := [],
rc := [ sigma.StandardSumsRules ],
postProcess := [
(s, opts) -> compiler.BlockSums(opts.globalUnrolling, s),
(s, opts) -> sigma.Process_fPrecompute(s, opts)
]
),
baseHashes := [],
subParams := [],
sumsgen := sigma.DefaultSumsGen,
# breakdownRules limits the used breakdown rules.
# It must be a record of the form
# rec(
# nonTerm := [ breakdown_rule1, breakdown_rule2, ...],
# DFT := [ DFT_Base, DFT_CT ] <-- example
# ).
#
# By default we set it to ApplicableTable for backwards compatibility with
# older svn revisions.
#
# Functions SwitchRulesOn/Off will work only with breakdownRules==ApplicableTable.
breakdownRules := formgen.ApplicableTable,
compileStrategy := compiler.IndicesCS,
simpIndicesInside := [code.nth, code.tcast, code.deref],
useDeref := true,
generateComplexCode := false,
unparser := compiler.CUnparserProg,
codegen := compiler.DefaultCodegen,
TCharCtype := "char",
TUCharCtype := "unsigned char",
TUIntCtype := "unsigned int",
TULongLongCtype := "unsigned long long",
TRealCtype := "double",
operations := rec(Print := s -> Print("<Spiral options record>")),
highPerf := self >> CopyFields(self, HighPerfMixin),
coldcache := false,
));
CplxSpiralDefaults := CopyFields(SpiralDefaults, rec(
includes := ["<include/complex_gcc_sse2.h>"],
unparser := compiler.CMacroUnparserProg,
XType := code.TPtr(code.TComplex),
YType := code.TPtr(code.TComplex),
dataType := "complex",
generateComplexCode := true,
c99 := rec(
I := "__I__",
re := "creal",
im := "cimag"
)
));
IntelC99Mixin := rec(
includes := ["<include/omega64c.h>"],
unparser := compiler.CUnparserProg,
XType := code.TPtr(code.TComplex),
YType := code.TPtr(code.TComplex),
dataType := "complex",
generateComplexCode := true,
c99 := rec(
I := "__I__",
re := "creal",
im := "cimag"
),
TComplexCtype := "_Complex double",
TRealCtype := "double",
);
IBMC99Mixin := rec(
includes := ["<include/omega64c.h>"],
unparser := compiler.CUnparserProg,
XType := code.TPtr(code.TComplex),
YType := code.TPtr(code.TComplex),
dataType := "complex",
generateComplexCode := true,
c99 := rec(
I := "__I",
re := "_creal",
im := "_cimag"
),
TComplexCtype := "_Complex double",
TRealCtype := "double",
postalign := n -> Print(" __alignx(16,", n, ");\n")
);
# How do we determine if we're running on Windows or Linux?
#Try(Load(iswindows));
#Try(Load(islinux));
# NOTE: I'd like to pull that out into a function call, but failed to use load/include/read inside a function... => ask YSV
#if LocalConfig.osinfo.isWindows() then
# Exec(let(sdir:=Conf("spiral_dir"), Concat("SubWCRev.exe ", sdir, " ", Concat(sdir, "\\spiral\\svn_win.src ", sdir, "\\spiral\\svn_info.g > NUL"))));
# Load(svn_info);
#fi;
#if LocalConfig.osinfo.isLinux() then
# NOTE: For now, assume that any non-windows system is a linux system.
#else
# Exec(let(sdir:=Conf("spiral_dir"), Concat(". ", sdir, "/spiral/svn_linux.src ", sdir, " > ", sdir, "/spiral/svn_info.g")));
# Load(svn_info);
#fi;
compiler.Unparser.fileinfo := meth(self, opts)
local info;
# if IsBound(opts.fileinfo) then
# info := opts.fileinfo;
# Print("/*\tCPU: ");
# LocalConfig.cpuinfo.info();
# Print("\n\tOS: ");
# LocalConfig.osinfo.info();
# Print("\n\t");
# LocalConfig.svninfo.info();
# if IsBound(opts.profile) then
# Print("\n\tprofile: ", opts.profile.name, ", ", opts.profile.makeopts.CFLAGS);
# else
# Print("\n\tlanguage: ", opts.language);
# fi;
# PrintLine("\n\ttimestamp: ", let(t:=Date(), Concat(t[2]," ",StringInt(t[3]),", ",StringInt(t[1]), "; ",StringInt(t[4]),":",StringInt(t[5]),":",StringInt(t[6]))),
# "\n\ttransform: ", info.algorithm.node , "\n\t",
# "source file: \"", info.file, "(.c)\"\n\t",
# "performance: ", info.cycles, " cycles, ", spiral._compute_mflops(info.flops, info.cycles), " Mflop/s\n",
# "\nalgorithm: ", info.algorithm, "\n",
# "*/\n");
# fi;
if IsBound(opts.fileinfo) then
info := opts.fileinfo;
if IsBound(opts.profile) then
Print("/*\tprofile: ", opts.profile.name, ", ", opts.profile.makeopts.CFLAGS);
else
Print("/*\tlanguage: ", opts.language);
fi;
PrintLine("\n\ttimestamp: ", let(t:=Date(), Concat(t[2]," ",StringInt(t[3]),", ",StringInt(t[1]), "; ",StringInt(t[4]),":",StringInt(t[5]),":",StringInt(t[6]))),
"\n",
"\nalgorithm: ", info.algorithm, "\n",
"*/\n");
fi;
end;
|
\chapter{User Guide}
\label{ch:user-guide}
This chapter is about building, configuring, and using Dangless.
Most of this information in less detail is also described in the repository \path{README} file.
\section{System requirements}
Most requirements are posed by Dune itself:
\begin{itemize}
\item A 64-bit x86 Linux environment
\item A relatively recent Intel CPU with VT-x support
\item Kernel version of 4.4.0 or older
\item Installed kernel headers for the running kernel
\item Root (sudo) privileges
\item Enabled and sufficient number of hugepages (see below)
\end{itemize}
The remaining requirements posed by Dangless itself are fairly usual:
\begin{itemize}
\item A recent C compiler that supports C11 and the GNU extensions (either GCC or Clang will work)
\item Python 3.6.1 or newer
\item CMake 3.5.2 or newer
\end{itemize}
\subsection{Hugepages}
Besides the above, Dune requires some 2 MB hugepages to be available during initialization for setting up its safe stacks. It will also try to use huge pages to acquire memory for the guest's page allocator, although it will gracefully fall back if there are not enough huge pages available.
To make sure that some huge pages remain available, it is recommended to limit or disable transparent hugepages by setting \path{/sys/kernel/mm/transparent_hugepage/enabled} to \emph{madvise} or \emph{never} (you will need to use \texttt{su} if you want to change it).
Then, you can check the number of huge pages available:
\begin{lstlisting}[breaklines, language=, style=]
$ cat /proc/meminfo | grep Huge
AnonHugePages: 49152 kB
HugePages_Total: 512
HugePages_Free: 512
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
\end{lstlisting}
In my tests, it appears that at minimum \textbf{71} free huge pages are required to satisfy Dune, although it is not quite clear to me as to why: by default for 2 safe stacks of size 2 MB each, we should only need 2 hugepages.
You can dedicate more huge pages by modifying \path{/proc/sys/vm/nr_hugepages} (again, you will need to use \texttt{su} to do so), or by executing:
\begin{lstlisting}[breaklines, language=, style=]
sudo sysctl -w vm.nr_hugepages=<NUM>
\end{lstlisting}
... where \texttt{<NUM>} should be replaced by the desired number, of course.
When there is not a sufficient number of huge pages available, Dangless will fail while trying to enter Dune mode, and you will see output much like this:
\begin{lstlisting}[breaklines, language=, style=]
dune: failed to mmap() hugepage of size 2097152 for safe stack 0
dune: setup_safe_stack() failed
dune: create_percpu() failed
Dangless: failed to enter Dune mode: Cannot allocate memory
\end{lstlisting}
\section{Building and configuring}
The full Dangless source code is available on GitHub at \url{https://github.com/shdnx/dangless-malloc}. After cloning, you will have to start by setting up its dependencies (such as Dune) which are registered as git submodules in the \path{vendor} folder:
\begin{lstlisting}[breaklines, language=bash, style=]
git submodule init
git submodule update
\end{lstlisting}
Then we have to apply the Dune patches as described in Section~\ref{sec:bg-dune} and build it:
\begin{lstlisting}[breaklines, language=bash, style=]
cd vendor/dune-ix
# patch dune, so that the physical page metadata is accessible inside the guest, allowing us to e.g. mess with the pagetables
git apply ../dune-ix-guestppages.patch
# patch dune, so that we can register a prehook function to run before system calls are passed to the host kernel
git apply ../dune-ix-vmcallprehook.patch
# patch dune, so that it does not kill the process with SIGTERM when handling the exit_group syscall - this causes runs to be registered as failures when they succeeded
git apply ../dune-ix-nosigterm.patch
# need sudo, because it is building a kernel module
sudo make
\end{lstlisting}
Now configure and build Dangless using CMake:
\begin{lstlisting}[breaklines, language=bash, style=]
cd ../../sources
mkdir build
cd build
# you can specify your configuration options here, or e.g. use ninja (-GNinja) instead of make
cmake \
-D CMAKE_BUILD_TYPE=Debug \
-D OVERRIDE_SYMBOLS=ON \
-D REGISTER_PREINIT=ON \
-D COLLECT_STATISTICS=OFF \
..
cmake --build .
\end{lstlisting}
You should be able to see \path{libdangless_malloc.a} and \path{dangless_user.make} afterwards in the build directory.
You can see what configuration options were used to build Dangless by listing the CMake cache:
\begin{lstlisting}[breaklines, language=C, style=]
$ cmake -LH
-- Cache values
// Whether to allow dangless to gracefully handle running out of virtual memory and continue operating as a proxy to the underlying memory allocator.
ALLOW_SYSMALLOC_FALLBACK:BOOL=ON
// Whether Dangless should automatically dedicate any unused PML4 pagetable entries (large unused virtual memory regions) for its virtual memory allocator. If disabled, user code will have to call dangless_dedicate_vmem().
AUTODEDICATE_PML4ES:BOOL=ON
// Choose the type of build, options are: None(CMAKE_CXX_FLAGS or CMAKE_C_FLAGS used) Debug Release RelWithDebInfo MinSizeRel.
CMAKE_BUILD_TYPE:STRING=Debug
// Install path prefix, prepended onto install directories.
CMAKE_INSTALL_PREFIX:PATH=/usr/local
// Whether to collect statistics during runtime about Dangless usage. If enabled, statistics are printed after every run to stderr. These are only for local developer use and are not uploaded anywhere.
COLLECT_STATISTICS:BOOL=OFF
// Debug mode for dangless_malloc.c
DEBUG_DGLMALLOC:BOOL=OFF
// Debug mode for vmcall_fixup.c
DEBUG_DUNE_VMCALL_FIXUP:BOOL=OFF
...
\end{lstlisting}
You can also use a CMake GUI such as CCMake~\cite{ccmake-website}, or check the main CMake file (\path{sources/CMakeLists.txt}) for the list of available configuration options, their descriptions and default values.
\section{API overview}
Dangless is a Linux static library \path{libdangless_malloc.a} that can be linked to any application during build time. It defines a set of functions for allocating and deallocating memory:
\begin{lstlisting}
// sources/include/dangless/dangless_malloc.h
void *dangless_malloc(size_t sz) __attribute__((malloc));
void *dangless_calloc(size_t num, size_t size) __attribute__((malloc));
void *dangless_realloc(void *p, size_t new_size);
int dangless_posix_memalign(void **pp, size_t align, size_t size);
void dangless_free(void *p);
\end{lstlisting}
These functions have the exact same signature and behaviour as their standard counterparts \lstinline!malloc()!, \lstinline!calloc()!, and \lstinline!free()!. In fact, because the GNU C Library defines these standard functions as weak symbols~\cite{glibc-malloc-is-weak}, Dangless provides an option (\lstinline!OVERRIDE_SYMBOLS!) to override the symbols with its own implementation, enabling the user code to perform memory management without even being aware that it is using Dangless in the background.
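For instance, with \lstinline!OVERRIDE_SYMBOLS! enabled, ordinary allocation code such as the following (purely illustrative) is serviced by Dangless without any source changes:
\begin{lstlisting}
// Illustrative user code: with symbol overriding enabled, these calls
// resolve to dangless_malloc() / dangless_free() at link time.
int *buf = malloc(64 * sizeof(int));
/* ... use buf ... */
free(buf);
\end{lstlisting}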
Besides the above, Dangless defines a few more functions, out of which the following two are important.
\begin{lstlisting}
void dangless_init(void);
\end{lstlisting}
Initializes Dangless as described in Section~\ref{sec:dangless-init}. Whether or not this function is called automatically during application start-up is controlled by the \lstinline!REGISTER_PREINIT! option, defaulting to On.
\begin{lstlisting}
int dangless_dedicate_vmem(void *start, void *end);
\end{lstlisting}
Dedicates a memory region to Dangless's virtual memory allocator, as described in Section~\ref{sec:dangless-alloc-virtmem}.
Whether or not any dedication happens automatically is controlled by the \lstinline!AUTODEDICATE_PML4ES! option.
\section{Integrating into existing applications}
Dangless can be integrated into any application which relies, directly or indirectly, on the C memory management API. To do this:
\begin{itemize}
\item link to \path{libdangless_malloc.a} using the \texttt{--whole-archive} linker option
\item link to \path{libdune.a}
\item link to \path{libdl} to allow Dangless to resolve the symbols for the system allocator
\item disable building a position-independent executable (PIE) using \texttt{-no-pie}, due to the limitation of Dune
\end{itemize}
When building, Dangless generates a \path{dangless_user.make} file that makes this easier to do with Makefile-based build systems.
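As an illustration, a hand-written integration would look roughly like the following sketch; the paths shown are placeholders, and the generated \path{dangless_user.make} file itself is the authoritative reference for what it defines:
\begin{lstlisting}[breaklines, language=, style=]
# Illustrative sketch only -- adjust the paths to your own checkout and build directory.
DANGLESS_BUILD := path/to/dangless-malloc/sources/build
include $(DANGLESS_BUILD)/dangless_user.make

myapp: main.o
	$(CC) -no-pie -o $@ $^ \
		-Wl,--whole-archive $(DANGLESS_BUILD)/libdangless_malloc.a -Wl,--no-whole-archive \
		path/to/libdune.a -ldl
\end{lstlisting}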
|
=== Domestic ===
|
#Worldclim2 precipitation model.
#clean environment, load packages.
rm(list=ls())
library(runjags)
#output path for precipitation summary object.
out.path <- '/fs/data3/caverill/NEFI_microbial/worldclim2_uncertainty/precipitation_JAGS_model.rds'
#load data
prec <- readRDS('/fs/data3/caverill/NEFI_microbial/worldclim2_uncertainty/worldclim2_prec_uncertainty.rds')
#subset prec for prelim fitting.
prec <- prec[sample(nrow(prec),30000),]
#setup runjags model. Same model form for temperature and precipitation; here sigma depends on elevation.
prec.model = "
model {
#setup priors for model predictors.
for(j in 1:N.preds){
m[j] ~ dnorm(0, .0001) #flat uninformative priors.
}
#elevation affects uncertainty.
#tau is still a function of sigma.
#sigma is now a function of an intercept (as before) and elevation, with a parameter that gets fitted.
k0 ~ dunif(0,100)
k1 ~ dnorm(0, .0001)
#fit a linear model
#should be a way to store predictors as a list with an intercept.
#then just multiply the m vector by the predictor list for the linear component of the model.
for(i in 1:N){
y.hat[i] <- m[1]*1 + m[2]*x[i]
sigma[i] <- k0 + k1*elev[i]
tau[i] <- pow(sigma[i], -2)
y[i] ~ dnorm(y.hat[i], tau[i])
}
} #close model loop.
"
#Setup data objects.
prec.data <- list( y = prec$observed,
x = prec$predicted,
elev = prec$elev,
N = length(prec$observed),
N.preds = 2)
#pick starting values based on lm fits to help chain mixing and speed convergence.
start.prec <- list()
prec.freq <- lm(observed ~ predicted, data = prec)
start.prec[[1]] <- list(k0 = 47, k1 = 0.003, m = c(coef(prec.freq)))
start.prec[[2]] <- lapply(start.prec[[1]],'*',1.01)
start.prec[[3]] <- lapply(start.prec[[1]],'*',0.99)
#fit precipitation model. This number of samples gives clean looking distributions.
prec.out <- run.jags(prec.model,
data = prec.data,
adapt = 200,
burnin = 1000,
sample = 3000,
n.chains = 3,
monitor = c('m','k0','k1'),
inits = start.prec
)
#summarize and plot output.
prec.summary <- data.frame(summary(prec.out))
plot(prec.out)
#save output.
saveRDS(prec.summary,out.path)
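#The saved summary can later be used to reconstruct the fitted uncertainty model, e.g. (sketch):
#pars <- prec.summary$Mean; names(pars) <- rownames(prec.summary)
#site.sd <- pars['k0'] + pars['k1']*site.elev #predicted sd of observed precipitation at elevation site.elev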
|
function [im2, H] = repeatability_saltnpepper(im, n_pixels)
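% REPEATABILITY_SALTNPEPPER Corrupt n_pixels randomly chosen pixels of im with
% salt-and-pepper noise (dark pixels become white, bright pixels become black).
% The RNG is seeded from the image content so the corruption is repeatable.
% Returns the corrupted image im2 and the identity homography H.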
seed = sum(im(:));
s = RandStream('mt19937ar', 'Seed', seed);
[h,w,~] = size(im);
num_missing = n_pixels;
coords = [];
while num_missing > 0
x = randi(s, [1 w], num_missing, 1);
y = randi(s, [1 h], num_missing, 1);
coords = unique([coords; [x y]], 'rows');
num_missing = n_pixels - size(coords, 1);
end
im2 = im;
H = eye(3);
if isempty(coords)
return;
end
im_gray = rgb2gray(im);
coord_inds = sub2ind(size(im_gray), coords(:,2), coords(:,1));
is_dark = im_gray(coord_inds) < 128;
chan_stride = h * w;
for ch = 1:3
% dark goes bright
im2(coord_inds(is_dark) + (ch - 1) * chan_stride) = 255;
% bright goes dark
im2(coord_inds(~is_dark) + (ch - 1) * chan_stride) = 0;
end
end
|
@testset "Transformations" begin
function test_reverse(T, seq)
revseq = reverse(T(seq))
@test String(revseq) == reverse(seq)
end
function test_dna_complement(T, seq)
comp = complement(T(seq))
@test String(comp) == dna_complement(seq)
end
function test_rna_complement(T, seq)
comp = complement(T(seq))
@test String(comp) == rna_complement(seq)
end
function test_dna_revcomp(T, seq)
revcomp = reverse_complement(T(seq))
@test String(revcomp) == reverse(dna_complement(seq))
end
function test_rna_revcomp(T, seq)
revcomp = reverse_complement(T(seq))
@test String(revcomp) == reverse(rna_complement(seq))
end
@testset "Reverse" begin
for len in 1:64, _ in 1:10
if len <= 32
test_reverse(DNAMer{len}, random_dna_kmer(len))
test_reverse(RNAMer{len}, random_rna_kmer(len))
end
test_reverse(BigDNAMer{len}, random_dna_kmer(len))
test_reverse(BigRNAMer{len}, random_rna_kmer(len))
end
seq = dna"AAAAAAAAAAAAAAAAAAAAAAAAAAAAGATAC"
@test reverse(seq[(length(seq)-9):length(seq)]) == dna"CATAGAAAAA"
end
@testset "Complement" begin
for len in 1:64, _ in 1:10
if len <= 32
test_dna_complement(DNAMer{len}, random_dna_kmer(len))
test_rna_complement(RNAMer{len}, random_rna_kmer(len))
end
test_dna_complement(BigDNAMer{len}, random_dna_kmer(len))
test_rna_complement(BigRNAMer{len}, random_rna_kmer(len))
end
end
@testset "Reverse Complement" begin
for len in 1:64, _ in 1:10
if len <= 32
test_dna_revcomp(DNAMer{len}, random_dna_kmer(len))
test_rna_revcomp(RNAMer{len}, random_rna_kmer(len))
end
test_dna_revcomp(BigDNAMer{len}, random_dna_kmer(len))
test_rna_revcomp(BigRNAMer{len}, random_rna_kmer(len))
end
end
end
|
module Data.List.Predicates.Unique
import Data.List
%access public export
%default total
||| Proof that a list `xs` is unique.
data Unique : (xs : List type) -> Type where
||| Empty lists are 'unique'
Empty : Unique Nil
  ||| Only add an element to the list if it doesn't appear in the
  ||| rest of the list.
Cons : (x : type)
-> (prfU : Not (Elem x xs))
-> (rest : Unique xs)
-> Unique (x::xs)
-- ----------------------------------------------------------------- [ Helpers ]
duplicateElement : (prf : Elem x xs) -> Unique (x :: xs) -> Void
duplicateElement prf (Cons x f rest) = f prf
restNotUnique : (f : Unique xs -> Void) -> (contra : Elem x xs -> Void) -> Unique (x :: xs) -> Void
restNotUnique f contra (Cons x g rest) = f rest
||| Is the list `xs` unique?
unique : DecEq type
=> (xs : List type)
-> Dec (Unique xs)
unique [] = Yes Empty
unique (x :: xs) with (isElem x xs)
unique (x :: xs) | (Yes prf) = No (duplicateElement prf)
unique (x :: xs) | (No contra) with (unique xs)
unique (x :: xs) | (No contra) | (Yes prf) = Yes (Cons x contra prf)
unique (x :: xs) | (No contra) | (No f) = No (restNotUnique f contra)
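-- Example: `unique [1,2,3]` yields a `Yes`, whereas `unique [1,2,1]` yields a `No`.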
-- --------------------------------------------------------------------- [ EOF ]
|
```python
%matplotlib inline
```
Band Ratio Measures
===================
Exploring how band ratio measures relate to periodic & aperiodic activity.
Introduction
------------
Band ratio measures are a relatively common metric, proposed to capture oscillatory,
or periodic, activity.
They are typically calculated as:
\begin{align}BR = \frac{\text{avg(low band power)}}{\text{avg(high band power)}}\end{align}
In this notebook we will explore this measure in the context of conceptualizing
neural power spectra as a combination of aperiodic and periodic activity.
Band Ratios Project
~~~~~~~~~~~~~~~~~~~
This example offers a relatively quick demonstration of how band ratios measures
relate to periodic and aperiodic activity.
We have completed a full project investigating methodological properties of band
ratio measures, which is available
`here <https://github.com/voytekresearch/BandRatios>`_.
```python
# Import numpy and matplotlib
import numpy as np
import matplotlib.pyplot as plt
# Import simulation, utility, and plotting tools
from fooof.bands import Bands
from fooof.utils import trim_spectrum
from fooof.sim.gen import gen_power_spectrum
from fooof.sim.utils import set_random_seed
from fooof.plts.spectra import plot_spectrum_shading, plot_spectra_shading
```
```python
# General Settings
# Define band definitions
bands = Bands({'theta' : [4, 8], 'beta' : [20, 30]})
# Define helper variables for indexing peak data
icf, ipw, ibw = 0, 1, 2
# Plot settings
shade_color = '#0365C0'
```
Simulating Data
~~~~~~~~~~~~~~~
For this example, we will use simulated data. Let's start by simulating a
baseline power spectrum.
```python
# Simulation Settings
nlv = 0
f_res = 0.1
f_range = [1, 35]
# Define default aperiodic values
ap = [0, 1]
# Define default periodic values for band specific peaks
theta = [6, 0.4, 1]
alpha = [10, 0.5, 0.75]
beta = [25, 0.3, 1.5]
# Set random seed, for consistency generating simulated data
set_random_seed(21)
```
```python
# Simulate a power spectrum
freqs, powers = gen_power_spectrum(f_range, ap, [theta, alpha, beta], nlv, f_res)
```
Calculating Band Ratios
~~~~~~~~~~~~~~~~~~~~~~~
Band ratio measures are a ratio of power between defined frequency bands.
We can now define a function we can use to calculate band ratio measures, and
apply it to our baseline power spectrum.
For this example, we will be using the theta / beta ratio, which is the
most commonly applied band ratio measure.
Note that it doesn't matter exactly which ratio measure or which frequency band
definitions we use, as the general properties demonstrated here generalize
to different bands and ranges.
```python
def calc_band_ratio(freqs, powers, low_band, high_band):
"""Helper function to calculate band ratio measures."""
# Extract frequencies within each specified band
_, low_band = trim_spectrum(freqs, powers, low_band)
_, high_band = trim_spectrum(freqs, powers, high_band)
# Calculate average power within each band, and then the ratio between them
ratio = np.mean(low_band) / np.mean(high_band)
return ratio
```
```python
# Plot the power spectrum, shading the frequency bands used for the ratio
plot_spectrum_shading(freqs, powers, [bands.theta, bands.beta],
color='black', shade_colors=shade_color,
log_powers=True, linewidth=3.5)
```
```python
# Calculate a band ratio measure
tbr = calc_band_ratio(freqs, powers, bands.theta, bands.beta)
print('Calculated theta / beta ratio is:\t {:1.2f}'.format(tbr))
```
Periodic Impacts on Band Ratio Measures
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Typical investigations involving band ratios compare differences in band ratio measures
within and between subjects. The typical interpretation of band ratio measures is that
they relate to the relative power between two bands.
Next, lets simulate data that varies across different periodic parameters of the data, and
see how this changes our measured theta / beta ratio, as compared to our baseline
power spectrum.
```python
# Define a helper function for updating parameters
from copy import deepcopy
def upd(data, index, value):
"""Return a updated copy of an array."""
out = deepcopy(data)
out[index] = value
return out
```
```python
# Simulate and collect power spectra with changes in each periodic parameter
spectra = {
'Theta Frequency' : None,
'Theta Power' : gen_power_spectrum(\
f_range, ap, [upd(theta, ipw, 0.5041), alpha, beta], nlv, f_res)[1],
'Theta Bandwidth' : gen_power_spectrum(\
f_range, ap, [upd(theta, ibw, 1.61), alpha, beta], nlv, f_res)[1],
'Alpha Frequency' : gen_power_spectrum(\
f_range, ap, [theta, upd(alpha, icf, 8.212), beta], nlv, f_res)[1],
'Alpha Power' : None,
'Alpha Bandwidth' : gen_power_spectrum(\
f_range, ap, [theta, upd(alpha, ibw, 1.8845), beta], nlv, f_res)[1],
'Beta Frequency' : gen_power_spectrum(\
f_range, ap, [theta, alpha, upd(beta, icf, 19.388)], nlv, f_res)[1],
'Beta Power' : gen_power_spectrum(\
f_range, ap, [theta, alpha, upd(beta, ipw, 0.1403)], nlv, f_res)[1],
'Beta Bandwidth' : gen_power_spectrum(\
f_range, ap, [theta, alpha, upd(beta, ibw, 0.609)], nlv, f_res)[1],
}
```
```python
# Calculate changes in theta / beta ratios
for label, spectrum in spectra.items():
if spectrum is not None:
print('TBR difference from {:20} is \t {:1.3f}'.format(\
label, tbr - calc_band_ratio(freqs, spectrum, bands.theta, bands.beta)))
```
```python
# Create figure of periodic changes
title_settings = {'fontsize': 16, 'fontweight': 'bold'}
fig, axes = plt.subplots(3, 3, figsize=(15, 14))
for ax, (label, spectrum) in zip(axes.flatten(), spectra.items()):
if spectrum is None: continue
plot_spectra_shading(freqs, [powers, spectrum],
[bands.theta, bands.beta], shade_colors=shade_color,
log_freqs=False, log_powers=True, ax=ax)
ax.set_title(label, **title_settings)
ax.set_xlim([0, 35]); ax.set_ylim([-1.75, 0])
ax.xaxis.label.set_visible(False)
ax.yaxis.label.set_visible(False)
# Turn off empty axes & space out axes
fig.subplots_adjust(hspace=.3, wspace=.3)
_ = [ax.axis('off') for ax in [axes[0, 0], axes[1, 1]]]
```
In the simulations above, we systematically manipulated each parameter of each of the
three different band peaks present in our data. For 7 of the 9 possible changes, we can
do so in a way that creates an identical change in the measured band ratio measure.
Band ratio measures are therefore not specific to band power differences, but rather
can reflect multiple different changes across multiple different periodic parameters.
Aperiodic Impacts on Band Ratio Measures
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Next, we can also examine if changes in aperiodic properties of the data can also
impact band ratio measures. We will explore changes in the aperiodic exponent, with
and without overlying peaks.
To do so, we will use the same approach to simulating, comparing, and plotting
data as above (though note that the code to do so has been condensed in the
next section).
```python
# Simulate and collect power spectra with changes in aperiodic parameters
exp_spectra = {
'Exponent w Peaks' : \
[powers,
gen_power_spectrum(f_range, [0.13, 1.1099],
[theta, alpha, beta], nlv, f_res)[1]],
'Exponent w/out Peaks' : \
[gen_power_spectrum(f_range, ap, [], nlv, f_res)[1],
gen_power_spectrum(f_range, [0.13, 1.1417], [], nlv, f_res)[1]]}
```
```python
# Calculate & plot changes in theta / beta ratios
fig, axes = plt.subplots(1, 2, figsize=(15, 6))
fig.subplots_adjust(wspace=.3)
for ax, (label, (comparison, spectrum)) in zip(axes, exp_spectra.items()):
print('\tTBR difference from {:20} is \t {:1.3f}'.format(label, \
calc_band_ratio(freqs, comparison, bands.theta, bands.beta) - \
calc_band_ratio(freqs, spectrum, bands.theta, bands.beta)))
plot_spectra_shading(freqs, [comparison, spectrum],
[bands.theta, bands.beta],
shade_colors=shade_color,
log_freqs=False, log_powers=True, ax=ax)
ax.set_title(label, **title_settings)
ax.set_xlim([0, 35]); ax.set_ylim([-1.75, 0])
```
In these simulations, we again see that we can obtain the same measured difference in
band ratio measures from differences in the aperiodic properties of the data. This is
true even if there are no periodic peaks present at all.
This shows that band ratio measures are not even specific to periodic activity,
and can be driven by changes in aperiodic activity.
Conclusion
----------
Band ratio measures are supposed to reflect the relative power of rhythmic neural activity.
However, here we can see that band ratio measures are actually under-determined
in that many different changes of both periodic and aperiodic parameters can affect
band ratio measurements - including aperiodic changes when there is no periodic activity.
For this reason, we conclude that band-ratio measures, by themselves, are
an insufficient measure of neural activity. We propose that approaches such as
parameterizing power spectra are more specific for adjudicating what is changing
in neural data.
For more investigation into band ratios, their methodological issues, applications to real
data, and a comparison to parameterizing power spectra, see the full project
`here <https://github.com/voytekresearch/BandRatios>`_.
|
#ifndef ASLAM_PER_ITERATION_CALLBACK_HPP
#define ASLAM_PER_ITERATION_CALLBACK_HPP
#include <math.h>
#include <boost/foreach.hpp>
#include <boost/shared_ptr.hpp>
#include <aslam/backend/ErrorTerm.hpp>
#include <vector>
namespace aslam {
namespace backend {
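// Interface for a hook that an optimizer can invoke once per iteration;
// implement callback() to observe or act on intermediate optimization state.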
class PerIterationCallback {
public:
virtual void callback() = 0;
virtual ~PerIterationCallback() {};
};
} // namespace backend
} // namespace aslam
#endif /*ASLAM_PER_ITERATION_CALLBACK_HPP*/
|
lemma le_measureD1: "A \<le> B \<Longrightarrow> space A \<le> space B"
|
from __future__ import division
import pandas as pd
import numpy as np
from sklearn.preprocessing import OneHotEncoder
from sklearn.cluster import KMeans
import pyspark
def ohe_columns(series, name):
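    # Note: `series` is expected to be 2-D (e.g. df[['col']]), since OneHotEncoder requires a 2-D array.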
ohe = OneHotEncoder(categories='auto')
ohe.fit(series)
cols = ohe.get_feature_names(name)
ohe = ohe.transform(series)
final_df = pd.DataFrame(ohe.toarray(), columns=cols)
return final_df
def add_clusters_to_users(n_clusters=8):
"""
parameters:number of clusters
return: user dataframe
"""
# Get the user data
    user_df = pd.read_csv('data/users.dat', sep='::', header=None, engine='python',
                          names=['id', 'gender', 'age', 'occupation', 'zip'])
# OHE for clustering
my_cols = ['gender', 'age', 'occupation']
ohe_multi = OneHotEncoder(categories='auto')
ohe_multi.fit(user_df[my_cols])
ohe_mat = ohe_multi.transform(user_df[my_cols])
# Then KMeans cluster
    k_clusters = KMeans(n_clusters=n_clusters, random_state=42)
k_clusters.fit(ohe_mat)
preds = k_clusters.predict(ohe_mat)
# Add clusters to user df
user_df['cluster'] = preds
return user_df
def add_cluster_to_ratings(user_df):
"""
given user_df with clusters, add clusters to ratings data
parameters
---------
user_df: df with user data
returns
-------
ratings_df: ratings_df with cluster column
"""
# Read in ratings file
#Get ratings file - create Spark instance for loading JSON
spark = pyspark.sql.SparkSession.builder.getOrCreate()
sc = spark.sparkContext
ratings_df = spark.read.json('data/ratings.json').toPandas()
# Set up clusters
cluster_dict = {}
for k, v in zip(user_df['id'].tolist(), user_df['cluster'].tolist()):
cluster_dict[k] = v
# Add cluster to ratings
ratings_df['cluster'] = ratings_df['user_id'].apply(lambda x: cluster_dict[x])
return ratings_df
def get_cold_start_rating(user_id, movie_id):
"""
Given user_id and movie_id, return a predicted rating
parameters
----------
user_id, movie_id
returns
-------
movie rating (float)
If current user, current movie = average rating of movie by cluster
If current user, NOT current movie = average rating for cluster
If NOT current user, and current movie = average for the movie
If NOT current user, and NOT current movie = average for all ratings
"""
if user_id in u_info['id'].tolist():
if movie_id in ratings_df['movie_id'].tolist():
cluster = u_info.loc[u_info['id'] == user_id]['cluster'].tolist()[0]
            # restrict to this movie's ratings within the user's cluster (matches the docstring)
            cluster_df = ratings_df.loc[(ratings_df['cluster'] == cluster) & (ratings_df['movie_id'] == movie_id)]['rating']
cluster_avg = cluster_df.mean()
else:
cluster = u_info.loc[u_info['id'] == user_id]['cluster'].tolist()[0]
cluster_rating = ratings_df.loc[ratings_df['cluster'] == cluster]['rating'].tolist()
if len(cluster_rating) > 1:
cluster_avg = sum(cluster_rating)/len(cluster_rating)
else:
cluster_avg = cluster_rating[0]
return cluster_avg
else:
if movie_id in ratings_df['movie_id'].tolist():
movie_rating = ratings_df.loc[ratings_df['movie_id'] == movie_id]['rating'].tolist()
if len(movie_rating) > 1:
movie_avg = sum(movie_rating)/len(movie_rating)
else:
movie_avg = movie_rating[0]
return movie_avg
else:
return ratings_df['rating'].mean()
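
# Example wiring (a sketch): get_cold_start_rating() reads the module-level
# u_info and ratings_df, so build them first. The IDs below are placeholders.
if __name__ == '__main__':
    u_info = add_clusters_to_users(n_clusters=8)
    ratings_df = add_cluster_to_ratings(u_info)
    print(get_cold_start_rating(user_id=1, movie_id=1))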
|
Formal statement is: lemma LIMSEQ_inverse_real_of_nat: "(\<lambda>n. inverse (real (Suc n))) \<longlonglongrightarrow> 0" Informal statement is: The sequence $\frac{1}{n+1}$ converges to $0$.
|
Formal statement is: lemma holomorphic_on_minus [holomorphic_intros]: "f holomorphic_on s \<Longrightarrow> (\<lambda>z. -(f z)) holomorphic_on s" Informal statement is: If $f$ is holomorphic on a set $s$, then $-f$ is holomorphic on $s$.
|
module Test
import Mult
thing : Nat -> Nat
thing x = mult x (plus x x)
|
State Before: l : Type ?u.913128
m : Type u_1
n : Type u_2
o : Type ?u.913137
m' : o → Type ?u.913142
n' : o → Type ?u.913147
R : Type ?u.913150
S : Type ?u.913153
α : Type v
β : Type w
γ : Type ?u.913160
inst✝¹ : NonUnitalNonAssocRing α
inst✝ : Fintype m
A B : Matrix m n α
x : m → α
⊢ vecMul x (A - B) = vecMul x A - vecMul x B State After: no goals Tactic: simp [sub_eq_add_neg, vecMul_add, vecMul_neg]
|
myMatrix <- matrix(1:12, ncol = 4)
myMatrix
|
\def\InformationPageTitle{Information Page}
\providecommand\InformationPageTitle{Information Page}
\InputIfFileExists{infopage.txt}{%
\def\InfoMissingWarning{}
}{%
% \def\InfoMissingWarning{
% {\centering \Large \bf The information in this page is not correct, please
% provide the information in the document infopage.txt in the root of this TeX project.}
% You can find an example infopage.text file name infopage.txt-example. One approach is copy and edit the copy.
% }
  % fall back to an empty definition so \InfoMissingWarning is always defined
  \def\InfoMissingWarning{}
}
%\addsec{\InformationPageTitle}
% \section*{\InformationPageTitle}
\InfoMissingWarning
%\section*{\InformationPageTitle}
%% provide default values
Fontys Hogeschool Techniek en Logistiek\\
Postbus 141, 5900 AC Venlo
\vspace*{1cm}
\noindent
{\centering \Large%\bfseries
\documentname
}
\vspace{1cm}
\begin{infoblock}
Name of student: & \studentname\\
Student number: & \snumber\\
Course: & \course\\
Period: & \period\\
\end{infoblock}
\begin{infoblock}
Company name: & \companyname\\
Address: & \companyaddress\\
Postcode, City: & \companypostcodecity\\
Country: & \companycountry\\
\end{infoblock}
\begin{infoblock}
Company coach: & \companycoach\\
Email: & \texttt{\href{mailto:\companycoachmail}{\companycoachmail}}\\
University coach: & \universitytutor\\
Email: & \texttt{\href{mailto:\universitytutormail}{\universitytutormail}}\\
\end{infoblock}
\ifx\examiner\empty
\relax
\else
\ifx\externalexpert\empty
\relax
\else
\begin{infoblock}
Examinator: & \examiner\\
External domain expert: & \externalexpert\\
\end{infoblock}
\fi
\fi
\begin{infoblock}
Non-disclosure agreement: & \hasnda
\end{infoblock}
\clearpage
|
(*
* Copyright 2020, Data61, CSIRO (ABN 41 687 119 230)
*
* SPDX-License-Identifier: GPL-2.0-only
*)
(* Arch specific lemmas that should be moved into theory files before CRefine *)
theory ArchMove_C
imports Move_C
begin
lemma ps_clear_is_aligned_ksPSpace_None:
"\<lbrakk>ps_clear p n s; is_aligned p n; 0<d; d \<le> mask n\<rbrakk>
\<Longrightarrow> ksPSpace s (p + d) = None"
apply (simp add: ps_clear_def add_diff_eq[symmetric] mask_2pm1[symmetric])
apply (drule equals0D[where a="p + d"])
apply (simp add: dom_def word_gt_0 del: word_neq_0_conv)
apply (drule mp)
apply (rule word_plus_mono_right)
apply simp
apply (simp add: mask_2pm1)
apply (erule is_aligned_no_overflow')
apply (drule mp)
apply (case_tac "(0::machine_word)<2^n")
apply (frule le_m1_iff_lt[of "(2::machine_word)^n" d, THEN iffD1])
apply (simp add: mask_2pm1[symmetric])
apply (erule (1) is_aligned_no_wrap')
apply (simp add: is_aligned_mask mask_2pm1 not_less word_bits_def
power_overflow)
by assumption
lemma ps_clear_is_aligned_ctes_None:
assumes "ps_clear p tcbBlockSizeBits s"
and "is_aligned p tcbBlockSizeBits"
shows "ksPSpace s (p + 2*2^cteSizeBits) = None"
and "ksPSpace s (p + 3*2^cteSizeBits) = None"
and "ksPSpace s (p + 4*2^cteSizeBits) = None"
by (auto intro: assms ps_clear_is_aligned_ksPSpace_None
simp: objBits_defs mask_def)+
lemma word_shift_by_3:
"x * 8 = (x::'a::len word) << 3"
by (simp add: shiftl_t2n)
lemma unat_mask_3_less_8:
"unat (p && mask 3 :: word64) < 8"
apply (rule unat_less_helper)
apply (rule order_le_less_trans, rule word_and_le1)
apply (simp add: mask_def)
done
lemma ucast_le_ucast_6_64:
"(ucast x \<le> (ucast y :: word64)) = (x \<le> (y :: 6 word))"
by (simp add: ucast_le_ucast)
definition
user_word_at :: "machine_word \<Rightarrow> machine_word \<Rightarrow> kernel_state \<Rightarrow> bool"
where
"user_word_at x p \<equiv> \<lambda>s. is_aligned p 3
\<and> pointerInUserData p s
\<and> x = word_rcat (map (underlying_memory (ksMachineState s))
[p + 7, p + 6, p + 5, p + 4, p + 3, p + 2, p + 1, p])"
definition
device_word_at :: "machine_word \<Rightarrow> machine_word \<Rightarrow> kernel_state \<Rightarrow> bool"
where
"device_word_at x p \<equiv> \<lambda>s. is_aligned p 3
\<and> pointerInDeviceData p s
\<and> x = word_rcat (map (underlying_memory (ksMachineState s))
[p + 7, p + 6, p + 5, p + 4, p + 3, p + 2, p + 1, p])"
(* FIXME: move to GenericLib *)
lemmas unat64_eq_of_nat = unat_eq_of_nat[where 'a=64, folded word_bits_def]
context begin interpretation Arch .
crunch inv'[wp]: archThreadGet P
(* FIXME MOVE near thm tg_sp' *)
lemma atg_sp':
"\<lbrace>P\<rbrace> archThreadGet f p \<lbrace>\<lambda>t. obj_at' (\<lambda>t'. f (tcbArch t') = t) p and P\<rbrace>"
including no_pre
apply (simp add: archThreadGet_def)
apply wp
apply (rule hoare_strengthen_post)
apply (rule getObject_tcb_sp)
apply clarsimp
apply (erule obj_at'_weakenE)
apply simp
done
(* FIXME: MOVE to EmptyFail *)
lemma empty_fail_archThreadGet [intro!, wp, simp]:
"empty_fail (archThreadGet f p)"
by (fastforce simp: archThreadGet_def getObject_def split_def)
(* FIXME: move to ainvs? *)
lemma sign_extend_canonical_address:
"(x = sign_extend 38 x) = canonical_address x"
by (fastforce simp: sign_extended_iff_sign_extend canonical_address_sign_extended canonical_bit_def)
lemma ptr_range_mask_range:
"{ptr..ptr + 2 ^ bits - 1} = mask_range ptr bits"
unfolding mask_def
by simp
lemma valid_untyped':
notes usableUntypedRange.simps[simp del]
assumes pspace_distinct': "pspace_distinct' s" and
pspace_aligned': "pspace_aligned' s" and
al: "is_aligned ptr bits"
shows "valid_untyped' d ptr bits idx s =
(\<forall>p ko. ksPSpace s p = Some ko \<longrightarrow>
obj_range' p ko \<inter> {ptr..ptr + 2 ^ bits - 1} \<noteq> {} \<longrightarrow>
obj_range' p ko \<subseteq> {ptr..ptr + 2 ^ bits - 1} \<and>
obj_range' p ko \<inter>
usableUntypedRange (UntypedCap d ptr bits idx) = {})"
apply (simp add: valid_untyped'_def)
apply (simp add: ko_wp_at'_def)
apply (rule arg_cong[where f=All])
apply (rule ext)
apply (rule arg_cong[where f=All])
apply (rule ext)
apply (case_tac "ksPSpace s ptr' = Some ko", simp_all)
apply (frule pspace_alignedD'[OF _ pspace_aligned'])
apply (frule pspace_distinctD'[OF _ pspace_distinct'])
apply (simp add: ptr_range_mask_range)
apply (frule aligned_ranges_subset_or_disjoint[OF al])
apply (simp only: ptr_range_mask_range)
apply (fold obj_range'_def)
apply (rule iffI)
apply auto[1]
apply (rule conjI)
apply (rule ccontr, simp)
apply (simp add: Set.psubset_eq)
apply (erule conjE)
apply (case_tac "obj_range' ptr' ko \<inter> mask_range ptr bits \<noteq> {}", simp)
apply (cut_tac is_aligned_no_overflow[OF al])
apply (clarsimp simp add: obj_range'_def mask_def add_diff_eq)
apply (clarsimp simp add: usableUntypedRange.simps Int_commute)
apply (case_tac "obj_range' ptr' ko \<inter> mask_range ptr bits \<noteq> {}", simp+)
apply (cut_tac is_aligned_no_overflow[OF al])
apply (clarsimp simp add: obj_range'_def mask_def add_diff_eq)
apply (frule is_aligned_no_overflow)
by (metis al intvl_range_conv' le_m1_iff_lt less_is_non_zero_p1
nat_le_linear power_overflow sub_wrap add_0
add_0_right word_add_increasing word_less_1 word_less_sub_1)
lemma more_pageBits_inner_beauty:
fixes x :: "9 word"
fixes p :: machine_word
assumes x: "x \<noteq> ucast (p && mask pageBits >> 3)"
shows "(p && ~~ mask pageBits) + (ucast x * 8) \<noteq> p"
apply clarsimp
apply (simp add: word_shift_by_3)
apply (subst (asm) word_plus_and_or_coroll)
apply (word_eqI_solve dest: test_bit_size simp: pageBits_def)
apply (insert x)
apply (erule notE)
apply word_eqI
apply (erule_tac x="3+n" in allE)
apply (clarsimp simp: word_size pageBits_def)
done
(* FIXME x64: figure out where these are needed and adjust appropriately *)
lemma mask_pageBits_inner_beauty:
"is_aligned p 3 \<Longrightarrow>
(p && ~~ mask pageBits) + (ucast ((ucast (p && mask pageBits >> 3)):: 9 word) * 8) = (p::machine_word)"
apply (simp add: is_aligned_nth word_shift_by_3)
apply (subst word_plus_and_or_coroll)
apply (rule word_eqI)
apply (clarsimp simp: word_size word_ops_nth_size nth_ucast nth_shiftr nth_shiftl)
apply (rule word_eqI)
apply (clarsimp simp: word_size word_ops_nth_size nth_ucast nth_shiftr nth_shiftl
pageBits_def)
apply (rule iffI)
apply (erule disjE)
apply clarsimp
apply clarsimp
apply simp
apply clarsimp
apply (rule context_conjI)
apply (rule leI)
apply clarsimp
apply simp
apply arith
done
lemmas mask_64_id[simp] = mask_len_id[where 'a=64, folded word_bits_def]
mask_len_id[where 'a=64, simplified]
lemma prio_ucast_shiftr_wordRadix_helper: (* FIXME generalise *)
"(ucast (p::priority) >> wordRadix :: machine_word) < 4"
unfolding maxPriority_def numPriorities_def wordRadix_def
using unat_lt2p[where x=p]
apply (clarsimp simp add: word_less_nat_alt shiftr_div_2n' unat_ucast_upcast is_up word_le_nat_alt)
apply arith
done
lemma prio_ucast_shiftr_wordRadix_helper': (* FIXME generalise *)
"(ucast (p::priority) >> wordRadix :: machine_word) \<le> 3"
unfolding maxPriority_def numPriorities_def wordRadix_def
using unat_lt2p[where x=p]
apply (clarsimp simp add: word_less_nat_alt shiftr_div_2n' unat_ucast_upcast is_up word_le_nat_alt)
apply arith
done
lemma prio_unat_shiftr_wordRadix_helper': (* FIXME generalise *)
"unat ((p::priority) >> wordRadix) \<le> 3"
unfolding maxPriority_def numPriorities_def wordRadix_def
using unat_lt2p[where x=p]
apply (clarsimp simp add: word_less_nat_alt shiftr_div_2n' unat_ucast_upcast is_up word_le_nat_alt)
apply arith
done
lemma prio_ucast_shiftr_wordRadix_helper2: (* FIXME possibly unused *)
"(ucast (p::priority) >> wordRadix :: machine_word) < 0x20"
by (rule order_less_trans[OF prio_ucast_shiftr_wordRadix_helper]; simp)
lemma prio_ucast_shiftr_wordRadix_helper3:
"(ucast (p::priority) >> wordRadix :: machine_word) < 0x40"
by (rule order_less_trans[OF prio_ucast_shiftr_wordRadix_helper]; simp)
lemma unat_ucast_prio_L1_cmask_simp:
"unat (ucast (p::priority) && 0x3F :: machine_word) = unat (p && 0x3F)"
using unat_ucast_prio_mask_simp[where m=6]
by (simp add: mask_def)
lemma machine_word_and_3F_less_40:
"(w :: machine_word) && 0x3F < 0x40"
by (rule word_and_less', simp)
lemmas setEndpoint_obj_at_tcb' = setEndpoint_obj_at'_tcb
(* FIXME: Move to Schedule_R.thy. Make Arch_switchToThread_obj_at a specialisation of this *)
lemma Arch_switchToThread_obj_at_pre:
"\<lbrace>obj_at' (Not \<circ> tcbQueued) t\<rbrace>
Arch.switchToThread t
\<lbrace>\<lambda>rv. obj_at' (Not \<circ> tcbQueued) t\<rbrace>"
apply (simp add: RISCV64_H.switchToThread_def)
apply (wp asUser_obj_at_notQ doMachineOp_obj_at hoare_drop_imps|wpc)+
done
lemma loadWordUser_submonad_fn:
"loadWordUser p = submonad_fn ksMachineState (ksMachineState_update \<circ> K)
(pointerInUserData p) (loadWord p)"
by (simp add: loadWordUser_def submonad_doMachineOp.fn_is_sm submonad_fn_def)
lemma storeWordUser_submonad_fn:
"storeWordUser p v = submonad_fn ksMachineState (ksMachineState_update \<circ> K)
(pointerInUserData p) (storeWord p v)"
by (simp add: storeWordUser_def submonad_doMachineOp.fn_is_sm submonad_fn_def)
lemma threadGet_tcbFault_loadWordUser_comm:
"do x \<leftarrow> threadGet tcbFault t; y \<leftarrow> loadWordUser p; n x y od =
do y \<leftarrow> loadWordUser p; x \<leftarrow> threadGet tcbFault t; n x y od"
apply (rule submonad_comm [OF tcbFault_submonad_args _
threadGet_tcbFault_submonad_fn
loadWordUser_submonad_fn])
apply (simp add: submonad_args_def pointerInUserData_def)
apply (simp add: thread_replace_def Let_def)
apply simp
apply (clarsimp simp: thread_replace_def Let_def typ_at'_def ko_wp_at'_def
ps_clear_upd ps_clear_upd_None pointerInUserData_def
split: option.split kernel_object.split)
apply (simp add: get_def empty_fail_def)
apply (simp add: ef_loadWord)
done
lemma threadGet_tcbFault_storeWordUser_comm:
"do x \<leftarrow> threadGet tcbFault t; y \<leftarrow> storeWordUser p v; n x y od =
do y \<leftarrow> storeWordUser p v; x \<leftarrow> threadGet tcbFault t; n x y od"
apply (rule submonad_comm [OF tcbFault_submonad_args _
threadGet_tcbFault_submonad_fn
storeWordUser_submonad_fn])
apply (simp add: submonad_args_def pointerInUserData_def)
apply (simp add: thread_replace_def Let_def)
apply simp
apply (clarsimp simp: thread_replace_def Let_def typ_at'_def ko_wp_at'_def
ps_clear_upd ps_clear_upd_None pointerInUserData_def
split: option.split kernel_object.split)
apply (simp add: get_def empty_fail_def)
apply (simp add: ef_storeWord)
done
lemma asUser_getRegister_discarded:
"(asUser t (getRegister r)) >>= (\<lambda>_. n) =
stateAssert (tcb_at' t) [] >>= (\<lambda>_. n)"
apply (rule ext)
apply (clarsimp simp: submonad_asUser.fn_is_sm submonad_fn_def
submonad_asUser.args assert_def select_f_def
gets_def get_def modify_def put_def
getRegister_def bind_def split_def
return_def fail_def stateAssert_def)
done
crunch pspace_canonical'[wp]: setThreadState pspace_canonical'
lemma obj_at_kernel_mappings':
"\<lbrakk>pspace_in_kernel_mappings' s; obj_at' P p s\<rbrakk>
\<Longrightarrow> p \<in> kernel_mappings"
by (clarsimp simp: pspace_in_kernel_mappings'_def obj_at'_def dom_def)
crunches Arch.switchToThread
for valid_queues'[wp]: valid_queues'
(simp: crunch_simps wp: hoare_drop_imps)
crunches switchToIdleThread
for ksCurDomain[wp]: "\<lambda>s. P (ksCurDomain s)"
crunches switchToIdleThread, switchToThread
for valid_pspace'[wp]: valid_pspace'
(simp: whenE_def crunch_simps wp: hoare_drop_imps)
lemma getMessageInfo_less_4:
"\<lbrace>\<top>\<rbrace> getMessageInfo t \<lbrace>\<lambda>rv s. msgExtraCaps rv < 4\<rbrace>"
including no_pre
apply (simp add: getMessageInfo_def)
apply wp
apply (rule hoare_strengthen_post, rule hoare_vcg_prop)
apply (simp add: messageInfoFromWord_def Let_def
Types_H.msgExtraCapBits_def)
apply (rule word_leq_minus_one_le, simp)
apply simp
apply (rule word_and_le1)
done
lemma getMessageInfo_msgLength':
"\<lbrace>\<top>\<rbrace> getMessageInfo t \<lbrace>\<lambda>rv s. msgLength rv \<le> 0x78\<rbrace>"
including no_pre
apply (simp add: getMessageInfo_def)
apply wp
apply (rule hoare_strengthen_post, rule hoare_vcg_prop)
apply (simp add: messageInfoFromWord_def Let_def msgMaxLength_def not_less
Types_H.msgExtraCapBits_def split: if_split )
done
definition
"isPTCap' cap \<equiv> \<exists>p asid. cap = (ArchObjectCap (PageTableCap p asid))"
lemma asid_shiftr_low_bits_less[simplified]:
"(asid :: machine_word) \<le> mask asid_bits \<Longrightarrow> asid >> asid_low_bits < 2^LENGTH(asid_high_len)"
apply (rule_tac y="2 ^ 7" in order_less_le_trans)
apply (rule shiftr_less_t2n)
apply (simp add: le_mask_iff_lt_2n[THEN iffD1] asid_bits_def asid_low_bits_def)
apply simp
done
lemma getActiveIRQ_neq_Some0x3FF':
"\<lbrace>\<top>\<rbrace> getActiveIRQ in_kernel \<lbrace>\<lambda>rv s. rv \<noteq> Some 0x3FF\<rbrace>"
apply (simp add: getActiveIRQ_def)
apply (wp alternative_wp select_wp)
apply simp
done
lemma getActiveIRQ_neq_Some0x3FF:
"\<lbrace>\<top>\<rbrace> doMachineOp (getActiveIRQ in_kernel) \<lbrace>\<lambda>rv s. rv \<noteq> Some 0x3FF\<rbrace>"
apply (wpsimp simp: doMachineOp_def split_def)
apply (auto dest: use_valid intro: getActiveIRQ_neq_Some0x3FF')
done
(* We don't have access to n_msgRegisters from C here, but the number of msg registers in C should
be equivalent to what we have in the abstract/design specs. We want a number for this definition
that automatically updates if the number of registers changes, and we sanity check it later
in msgRegisters_size_sanity *)
definition size_msgRegisters :: nat where
size_msgRegisters_pre_def: "size_msgRegisters \<equiv> size (RISCV64.msgRegisters)"
schematic_goal size_msgRegisters_def:
"size_msgRegisters = numeral ?x"
unfolding size_msgRegisters_pre_def RISCV64.msgRegisters_def
by (simp add: upto_enum_red fromEnum_def enum_register del: Suc_eq_numeral)
(simp only: Suc_eq_plus1_left, simp del: One_nat_def)
lemma length_msgRegisters[simplified size_msgRegisters_def]:
"length RISCV64_H.msgRegisters = size_msgRegisters"
by (simp add: size_msgRegisters_pre_def RISCV64_H.msgRegisters_def)
lemma empty_fail_loadWordUser[intro!, simp]:
"empty_fail (loadWordUser x)"
by (fastforce simp: loadWordUser_def ef_loadWord ef_dmo')
lemma empty_fail_getMRs[iff]:
"empty_fail (getMRs t buf mi)"
by (auto simp add: getMRs_def split: option.split)
lemma empty_fail_getReceiveSlots:
"empty_fail (getReceiveSlots r rbuf)"
proof -
note
empty_fail_resolveAddressBits[wp]
empty_fail_rethrowFailure[wp]
empty_fail_rethrowFailure[wp]
show ?thesis
unfolding getReceiveSlots_def loadCapTransfer_def lookupCap_def lookupCapAndSlot_def
by (wpsimp simp: emptyOnFailure_def unifyFailure_def lookupSlotForThread_def
capTransferFromWords_def getThreadCSpaceRoot_def locateSlot_conv bindE_assoc
lookupSlotForCNodeOp_def lookupErrorOnFailure_def rangeCheck_def)
qed
lemma user_getreg_rv:
"\<lbrace>obj_at' (\<lambda>tcb. P ((user_regs o atcbContextGet o tcbArch) tcb r)) t\<rbrace>
asUser t (getRegister r)
\<lbrace>\<lambda>rv s. P rv\<rbrace>"
apply (simp add: asUser_def split_def)
apply (wp threadGet_wp)
apply (clarsimp simp: obj_at'_def getRegister_def in_monad atcbContextGet_def)
done
crunches insertNewCap, Arch_createNewCaps, threadSet, Arch.createObject, setThreadState,
updateFreeIndex, preemptionPoint
for gsCNodes[wp]: "\<lambda>s. P (gsCNodes s)"
(wp: crunch_wps setObject_ksPSpace_only
simp: unless_def updateObject_default_def crunch_simps
ignore_del: preemptionPoint)
end
end
|
# NRPy+'s Reference Metric Interface
## Author: Zach Etienne
### Formatting improvements courtesy Brandon Clark
### NRPy+ Source Code for this module: [reference_metric.py](../edit/reference_metric.py)
## Introduction:
### Why use a reference metric? Benefits of choosing the best coordinate system for the problem
When solving a partial differential equation on the computer, it is useful to first pick a coordinate system well-suited to the geometry of the problem. For example, if we are modeling a spherically-symmetric star, it would be hugely wasteful to model the star in 3-dimensional Cartesian coordinates ($x$,$y$,$z$). This is because in Cartesian coordinates, we would need to choose high sampling in all three Cartesian directions. If instead we chose to model the star in spherical coordinates ($r$,$\theta$,$\phi$), so long as the star is centered at $r=0$, we would not need to model the star with more than one point in the $\theta$ and $\phi$ directions!
A similar argument holds for stars that are *nearly* spherically symmetric. Such stars may exhibit density distributions that vary slowly in $\theta$ and $\phi$ directions (e.g., isolated neutron stars or black holes). In these cases the number of points needed to sample the angular directions will still be much smaller than in the radial direction.
Thus choice of an appropriate reference metric may directly mitigate the [Curse of Dimensionality](https://en.wikipedia.org/wiki/Curse_of_dimensionality).
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This module is organized as follows:
1. [Step 1](#define_ref_metric): Defining a reference metric, [`reference_metric.py`](../edit/reference_metric.py)
1. [Step 2](#define_geometric): Defining geometric quantities, **`ref_metric__hatted_quantities()`**
1. [Step 3](#prescribed_ref_metric): Prescribed reference metrics in [`reference_metric.py`](../edit/reference_metric.py)
1. [Step 3.a](#sphericallike): Spherical-like coordinate systems
1. [Step 3.a.i](#spherical): **`reference_metric::CoordSystem = "Spherical"`**
1. [Step 3.a.ii](#sinhspherical): **`reference_metric::CoordSystem = "SinhSpherical"`**
1. [Step 3.a.iii](#sinhsphericalv2): **`reference_metric::CoordSystem = "SinhSphericalv2"`**
1. [Step 3.b](#cylindricallike): Cylindrical-like coordinate systems
1. [Step 3.b.i](#cylindrical): **`reference_metric::CoordSystem = "Cylindrical"`**
1. [Step 3.b.ii](#sinhcylindrical): **`reference_metric::CoordSystem = "SinhCylindrical"`**
1. [Step 3.b.iii](#sinhcylindricalv2): **`reference_metric::CoordSystem = "SinhCylindricalv2"`**
1. [Step 3.c](#cartesianlike): Cartesian-like coordinate systems
1. [Step 3.c.i](#cartesian): **`reference_metric::CoordSystem = "Cartesian"`**
1. [Step 3.d](#prolatespheroidal): Prolate spheroidal coordinates
1. [Step 3.d.i](#symtp): **`reference_metric::CoordSystem = "SymTP"`**
1. [Step 3.d.ii](#sinhsymtp): **`reference_metric::CoordSystem = "SinhSymTP"`**
1. [Step 4](#latex_pdf_output): Output this module to $\LaTeX$-formatted PDF file
<a id='define_ref_metric'></a>
# Step 1: Defining a reference metric, [`reference_metric.py`](../edit/reference_metric.py) \[Back to [top](#toc)\]
$$\label{define_ref_metric}$$
***Note that currently only orthogonal reference metrics of dimension 3 or fewer are supported. This can be extended if desired.***
NRPy+ assumes all curvilinear coordinate systems map directly from a uniform, Cartesian numerical grid with coordinates $(x,y,z)$=(`xx[0]`,`xx[1]`,`xx[2]`). Thus when defining reference metrics, all defined coordinate quantities must be in terms of the `xx[]` array. As we will see, this adds a great deal of flexibility.
For example, [**reference_metric.py**](../edit/reference_metric.py) requires that the *orthogonal coordinate scale factors* be defined. As described [here](https://en.wikipedia.org/wiki/Curvilinear_coordinates), the $i$th scale factor is the positive root of the metric element $g_{ii}$. In ordinary spherical coordinates $(r,\theta,\phi)$, with line element $ds^2 = g_{ij} dx^i dx^j = dr^2+ r^2 d \theta^2 + r^2 \sin^2\theta \ d\phi^2$, we would first define
* $r = xx_0$
* $\theta = xx_1$
* $\phi = xx_2$,
so that the scale factors are defined as
* `scalefactor_orthog[0]` = $1$
* `scalefactor_orthog[1]` = $r$
* `scalefactor_orthog[2]` = $r \sin \theta$
Here is the corresponding code:
```python
import sympy as sp
import NRPy_param_funcs as par
import reference_metric as rfm
r = rfm.xx[0]
th = rfm.xx[1]
ph = rfm.xx[2]
rfm.scalefactor_orthog[0] = 1
rfm.scalefactor_orthog[1] = r
rfm.scalefactor_orthog[2] = r*sp.sin(th)
# Notice that the scale factor will be given
# in terms of the fundamental Cartesian
# grid variables, and not {r,th,ph}:
print("r*sin(th) = "+str(rfm.scalefactor_orthog[2]))
```
r*sin(th) = xx0*sin(xx1)
Next suppose we wish to modify our radial coordinate $r(xx_0)$ to be an exponentially increasing function, so that our numerical grid $(xx_0,xx_1,xx_2)$ will map to a spherical grid with radial grid spacing ($\Delta r$) that *increases* with $r$. Generally we will find it useful to define $r(xx_0)$ to be an odd function, so let's choose
$$r(xx_0) = a \sinh(xx_0/s),$$
where $a$ is an overall radial scaling factor, and $s$ denotes the scale (in units of $xx_0$) over which exponential growth will take place. In our implementation below, note that we use the relation
$$\sinh(x) = \frac{e^x - e^{-x}}{2},$$
as SymPy finds it easier to evaluate exponentials than hyperbolic trigonometric functions.
```python
a,s = sp.symbols('a s',positive=True)
xx0_rescaled = rfm.xx[0] / s
r = a*(sp.exp(xx0_rescaled) - sp.exp(-xx0_rescaled))/2
# Must redefine the scalefactors since 'r' has been updated!
rfm.scalefactor_orthog[0] = 1
rfm.scalefactor_orthog[1] = r
rfm.scalefactor_orthog[2] = r*sp.sin(th)
print(rfm.scalefactor_orthog[2])
```
a*(exp(xx0/s) - exp(-xx0/s))*sin(xx1)/2
Often we will find it useful to also define the appropriate mappings from (`xx[0]`,`xx[1]`,`xx[2]`) to Cartesian coordinates (for plotting purposes) and to ordinary spherical coordinates (e.g., in case the initial data for a PDE are naturally written in spherical coordinates). For this purpose, reference_metric.py also declares lists **`xxCart[]`** and **`xxSph[]`**, which in this case are defined as
```python
rfm.xxSph[0] = r
rfm.xxSph[1] = th
rfm.xxSph[2] = ph
rfm.xxCart[0] = r*sp.sin(th)*sp.cos(ph)
rfm.xxCart[1] = r*sp.sin(th)*sp.sin(ph)
rfm.xxCart[2] = r*sp.cos(th)
# Here we show off SymPy's pretty_print()
# and simplify() functions. Nice, no?
sp.pretty_print(sp.simplify(rfm.xxCart[0]))
```
a⋅sin(xx₁)⋅cos(xx₂)⋅sinh(xx₀/s)
<a id='define_geometric'></a>
# Step 2: Define geometric quantities, `ref_metric__hatted_quantities()` \[Back to [top](#toc)\]
$$\label{define_geometric}$$
Once `scalefactor_orthog[]` has been defined, the function **`ref_metric__hatted_quantities()`** within [reference_metric.py](../edit/reference_metric.py) can be called to define a number of geometric quantities useful for solving PDEs in curvilinear coordinate systems.
Adopting the notation of [Baumgarte, Montero, Cordero-Carrión, and Müller, PRD 87, 044026 (2012)](https://arxiv.org/abs/1211.6632), geometric quantities related to the reference metric are named "hatted" quantities. For example, the reference metric is defined as $\hat{g}_{ij}$=`ghatDD[i][j]`:
```python
rfm.ref_metric__hatted_quantities()
sp.pretty_print(sp.Matrix(sp.simplify(rfm.ghatDD)))
```
$$\hat{g}_{ij} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \dfrac{a^2 \left(e^{xx_0/s} - e^{-xx_0/s}\right)^2}{4} & 0 \\ 0 & 0 & \dfrac{a^2 \left(e^{xx_0/s} - e^{-xx_0/s}\right)^2 \sin^2(xx_1)}{4} \end{pmatrix}$$
In addition to $\hat{g}_{ij}$, **`ref_metric__hatted_quantities()`** also provides:
* The rescaling "matrix" `ReDD[i][j]`, used for separating singular (due to chosen coordinate system) pieces of smooth rank-2 tensor components from the smooth parts, so that the smooth parts can be used within temporal and spatial differential operators.
* Inverse reference metric: $\hat{g}^{ij}$=`ghatUU[i][j]`.
* Reference metric determinant: $\det\left(\hat{g}_{ij}\right)$=`detgammahat`.
* First and second derivatives of the reference metric: $\hat{g}_{ij,k}$=`ghatDD_dD[i][j][k]`; $\hat{g}_{ij,kl}$=`ghatDD_dDD[i][j][k][l]`
* Christoffel symbols associated with the reference metric, $\hat{\Gamma}^i_{jk}$ = `GammahatUDD[i][j][k]` and their first derivatives $\hat{\Gamma}^i_{jk,l}$ = `GammahatUDD_dD[i][j][k][l]`
For example, the Christoffel symbol $\hat{\Gamma}^{xx_1}_{xx_2 xx_2}=\hat{\Gamma}^1_{22}$ is given by `GammahatUDD[1][2][2]`:
```python
sp.pretty_print(sp.simplify(rfm.GammahatUDD[1][2][2]))
```
-sin(2⋅xx₁)/2
Given the trigonometric identity $2\sin(x)\cos(x) = \sin(2x)$, notice that the above expression is equivalent to Eq. 18 of [Baumgarte, Montero, Cordero-Carrión, and Müller, PRD 87, 044026 (2012)](https://arxiv.org/abs/1211.6632). This is expected since the sinh-radial spherical coordinate system is equivalent to ordinary spherical coordinates in the angular components.
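As a quick sanity check on the quantities listed above, we can also verify symbolically that `ghatUU` really is the matrix inverse of `ghatDD`. This is a minimal sketch; `sp` and `rfm` are assumed to be imported as in the cells above:
```python
# Multiply ghat^{ik} by ghat_{kj}; the product should simplify to the identity.
identity_check = (sp.Matrix(rfm.ghatUU) * sp.Matrix(rfm.ghatDD)).applyfunc(sp.simplify)
sp.pretty_print(identity_check)  # expect the 3x3 identity matrix
```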
<a id='prescribed_ref_metric'></a>
# Step 3: Prescribed reference metrics in [`reference_metric.py`](../edit/reference_metric.py) \[Back to [top](#toc)\]
$$\label{prescribed_ref_metric}$$
One need not manually define scale factors or other quantities for reference metrics, as a number of prescribed reference metrics are already defined in [reference_metric.py](../edit/reference_metric.py). These can be accessed by first setting the parameter **reference_metric::CoordSystem** to one of the following, and then calling the function **`rfm.reference_metric()`**.
```python
import NRPy_param_funcs as par
import indexedexp as ixp
import grid as gri
# Step 0a: Initialize parameters
thismodule = __name__
par.initialize_param(par.glb_param("char", thismodule, "CoordSystem", "Spherical"))
# Step 0b: Declare global variables
xx = gri.xx
xxCart = ixp.zerorank1(DIM=4) # Must be set in terms of xx[]s
Cart_to_xx = ixp.zerorank1(DIM=4) # Must be set in terms of xx[]s
Cartx,Carty,Cartz = sp.symbols("Cartx Carty Cartz", real=True)
Cart = [Cartx,Carty,Cartz]
xxSph = ixp.zerorank1(DIM=4) # Must be set in terms of xx[]s
scalefactor_orthog = ixp.zerorank1(DIM=4) # Must be set in terms of xx[]s
have_already_called_reference_metric_function = False
CoordSystem = par.parval_from_str("reference_metric::CoordSystem")
M_PI,M_SQRT1_2 = par.Cparameters("#define",thismodule,["M_PI","M_SQRT1_2"],"")
global xxmin
global xxmax
global UnitVectors
UnitVectors = ixp.zerorank2(DIM=3)
```
We will find the following plotting function useful for analyzing coordinate systems in which the radial coordinate is rescaled.
```python
def create_r_of_xx0_plots(CoordSystem, r_of_xx0,rprime_of_xx0):
import matplotlib.pyplot as plt
plt.clf()
Nr = 20
dxx0 = 1.0 / float(Nr)
xx0s = []
rs = []
deltars = []
rprimes = []
for i in range(Nr):
xx0 = (float(i) + 0.5)*dxx0
xx0s.append(xx0)
rs.append( sp.sympify(str(r_of_xx0 ).replace("xx0",str(xx0))))
rprimes.append(sp.sympify(str(rprime_of_xx0).replace("xx0",str(xx0))))
if i>0:
deltars.append(sp.log(rs[i]-rs[i-1],10))
else:
deltars.append(sp.log(2*rs[0],10))
# fig, ax = plt.subplots()
fig = plt.figure(figsize=(12,12)) # 12 in x 12 in
ax = fig.add_subplot(221)
ax.set_title('$r(xx_0)$ for '+CoordSystem,fontsize='x-large')
ax.set_xlabel('$xx_0$',fontsize='x-large')
ax.set_ylabel('$r(xx_0)$',fontsize='x-large')
ax.plot(xx0s, rs, 'k.', label='Spacing between\nadjacent gridpoints')
# legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large')
# legend.get_frame().set_facecolor('C1')
ax = fig.add_subplot(222)
ax.set_title('Grid spacing for '+CoordSystem,fontsize='x-large')
ax.set_xlabel('$xx_0$',fontsize='x-large')
ax.set_ylabel('$\log_{10}(\Delta r)$',fontsize='x-large')
ax.plot(xx0s, deltars, 'k.', label='Spacing between\nadjacent gridpoints\nin $r(xx_0)$ plot')
legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large')
legend.get_frame().set_facecolor('C1')
ax = fig.add_subplot(223)
ax.set_title('$r\'(xx_0)$ for '+CoordSystem,fontsize='x-large')
ax.set_xlabel('$xx_0$',fontsize='x-large')
ax.set_ylabel('$r\'(xx_0)$',fontsize='x-large')
ax.plot(xx0s, rprimes, 'k.', label='Nr=96')
# legend = ax.legend(loc='upper left', shadow=True, fontsize='x-large')
# legend.get_frame().set_facecolor('C1')
plt.tight_layout(pad=2)
plt.show()
```
<a id='sphericallike'></a>
## Step 3.a: Spherical-like coordinate systems \[Back to [top](#toc)\]
$$\label{sphericallike}$$
<a id='spherical'></a>
### Step 3.a.i: **`reference_metric::CoordSystem = "Spherical"`** \[Back to [top](#toc)\]
$$\label{spherical}$$
Standard spherical coordinates, with $(r,\theta,\phi)=(xx_0,xx_1,xx_2)$
```python
if CoordSystem == "Spherical":
# Adding assumption real=True can help simplify expressions involving xx[0] & xx[1] below.
xx[0] = sp.symbols("xx0", real=True)
xx[1] = sp.symbols("xx1", real=True)
RMAX = par.Cparameters("REAL", thismodule, ["RMAX"],10.0)
xxmin = [sp.sympify(0), sp.sympify(0), -M_PI]
xxmax = [ RMAX, M_PI, M_PI]
r = xx[0]
th = xx[1]
ph = xx[2]
Cart_to_xx[0] = sp.sqrt(Cartx ** 2 + Carty ** 2 + Cartz ** 2)
Cart_to_xx[1] = sp.acos(Cartz / Cart_to_xx[0])
Cart_to_xx[2] = sp.atan2(Carty, Cartx)
xxSph[0] = r
xxSph[1] = th
xxSph[2] = ph
# Now define xCart, yCart, and zCart in terms of x0,xx[1],xx[2].
# Note that the relation between r and x0 is not necessarily trivial in SinhSpherical coordinates. See above.
xxCart[0] = xxSph[0]*sp.sin(xxSph[1])*sp.cos(xxSph[2])
xxCart[1] = xxSph[0]*sp.sin(xxSph[1])*sp.sin(xxSph[2])
xxCart[2] = xxSph[0]*sp.cos(xxSph[1])
scalefactor_orthog[0] = sp.diff(xxSph[0],xx[0])
scalefactor_orthog[1] = xxSph[0]
scalefactor_orthog[2] = xxSph[0]*sp.sin(xxSph[1])
# Set the unit vectors
UnitVectors = [[ sp.sin(xxSph[1])*sp.cos(xxSph[2]), sp.sin(xxSph[1])*sp.sin(xxSph[2]), sp.cos(xxSph[1])],
[ sp.cos(xxSph[1])*sp.cos(xxSph[2]), sp.cos(xxSph[1])*sp.sin(xxSph[2]), -sp.sin(xxSph[1])],
[ -sp.sin(xxSph[2]), sp.cos(xxSph[2]), sp.sympify(0) ]]
```
Now let's analyze $r(xx_0)$ for **"Spherical"** coordinates.
```python
%matplotlib inline
import sympy as sp
import reference_metric as rfm
import NRPy_param_funcs as par
CoordSystem = "Spherical"
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
rfm.reference_metric()
RMAX = 10.0
r_of_xx0 = sp.sympify(str(rfm.xxSph[0] ).replace("RMAX",str(RMAX)))
rprime_of_xx0 = sp.sympify(str(sp.diff(rfm.xxSph[0],rfm.xx[0])).replace("RMAX",str(RMAX)))
create_r_of_xx0_plots(CoordSystem, r_of_xx0,rprime_of_xx0)
```
<a id='sinhspherical'></a>
### Step 3.a.ii: **`reference_metric::CoordSystem = "SinhSpherical"`** \[Back to [top](#toc)\]
$$\label{sinhspherical}$$
Spherical coordinates, but with $$r(xx_0) = \text{AMPL} \frac{\sinh\left(\frac{xx_0}{\text{SINHW}}\right)}{\sinh\left(\frac{1}{\text{SINHW}}\right)}.$$
SinhSpherical uses two parameters: `AMPL` and `SINHW`. `AMPL` sets the outer boundary distance; and `SINHW` sets the focusing of the coordinate points near $r=0$, where a small `SINHW` ($\sim 0.125$) will greatly focus the points near $r=0$ and a large `SINHW` will look more like an ordinary spherical polar coordinate system.
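To get a feel for these parameters, here is a quick numerical check (a plain-Python sketch, independent of NRPy+): with `AMPL=10` and `SINHW=0.2`, the outer boundary sits exactly at $r(1)=\text{AMPL}$, while the half of the grid points with $xx_0 \le 0.5$ are squeezed inside $r \lesssim 0.82$.
```python
import math
AMPL, SINHW = 10.0, 0.2
r = lambda xx0: AMPL * math.sinh(xx0/SINHW) / math.sinh(1.0/SINHW)
print(r(1.0), r(0.5))   # -> 10.0  ~0.815
```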
```python
if CoordSystem == "SinhSpherical":
xxmin = [sp.sympify(0), sp.sympify(0), -M_PI]
xxmax = [sp.sympify(1), M_PI, M_PI]
AMPL, SINHW = par.Cparameters("REAL",thismodule,["AMPL","SINHW"],[10.0,0.2])
# Set SinhSpherical radial coordinate by default; overwrite later if CoordSystem == "SinhSphericalv2".
r = AMPL * (sp.exp(xx[0] / SINHW) - sp.exp(-xx[0] / SINHW)) / \
(sp.exp(1 / SINHW) - sp.exp(-1 / SINHW))
th = xx[1]
ph = xx[2]
Cart_to_xx[0] = SINHW*sp.asinh(sp.sqrt(Cartx ** 2 + Carty ** 2 + Cartz ** 2)*sp.sinh(1/SINHW)/AMPL)
Cart_to_xx[1] = sp.acos(Cartz / sp.sqrt(Cartx ** 2 + Carty ** 2 + Cartz ** 2))
Cart_to_xx[2] = sp.atan2(Carty, Cartx)
xxSph[0] = r
xxSph[1] = th
xxSph[2] = ph
# Now define xCart, yCart, and zCart in terms of x0,xx[1],xx[2].
# Note that the relation between r and x0 is not necessarily trivial in SinhSpherical coordinates. See above.
xxCart[0] = xxSph[0]*sp.sin(xxSph[1])*sp.cos(xxSph[2])
xxCart[1] = xxSph[0]*sp.sin(xxSph[1])*sp.sin(xxSph[2])
xxCart[2] = xxSph[0]*sp.cos(xxSph[1])
scalefactor_orthog[0] = sp.diff(xxSph[0],xx[0])
scalefactor_orthog[1] = xxSph[0]
scalefactor_orthog[2] = xxSph[0]*sp.sin(xxSph[1])
# Set the unit vectors
UnitVectors = [[ sp.sin(xxSph[1])*sp.cos(xxSph[2]), sp.sin(xxSph[1])*sp.sin(xxSph[2]), sp.cos(xxSph[1])],
[ sp.cos(xxSph[1])*sp.cos(xxSph[2]), sp.cos(xxSph[1])*sp.sin(xxSph[2]), -sp.sin(xxSph[1])],
[ -sp.sin(xxSph[2]), sp.cos(xxSph[2]), sp.sympify(0) ]]
```
Now we explore $r(xx_0)$ for `SinhSpherical` assuming `AMPL=10.0` and `SINHW=0.2`:
```python
%matplotlib inline
import sympy as sp
import reference_metric as rfm
import NRPy_param_funcs as par
CoordSystem = "SinhSpherical"
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
rfm.reference_metric()
AMPL = 10.0
SINHW = 0.2
r_of_xx0 = sp.sympify(str(rfm.xxSph[0] ).replace("AMPL",str(AMPL)).replace("SINHW",str(SINHW)))
rprime_of_xx0 = sp.sympify(str(sp.diff(rfm.xxSph[0],rfm.xx[0])).replace("AMPL",str(AMPL)).replace("SINHW",str(SINHW)))
create_r_of_xx0_plots(CoordSystem, r_of_xx0,rprime_of_xx0)
```
<a id='sinhsphericalv2'></a>
### Step 3.a.iii: **`reference_metric::CoordSystem = "SinhSphericalv2"`** \[Back to [top](#toc)\]
$$\label{sinhsphericalv2}$$
The same as SinhSpherical coordinates, but with an additional `AMPL*const_dr*xx_0` term:
$$r(xx_0) = \text{AMPL} \left[\text{const_dr}\ xx_0 + \frac{\sinh\left(\frac{xx_0}{\text{SINHW}}\right)}{\sinh\left(\frac{1}{\text{SINHW}}\right)}\right].$$
```python
if CoordSystem == "SinhSphericalv2":
# SinhSphericalv2 adds the parameter "const_dr", which allows for a region near xx[0]=0 to have
# constant radial resolution of const_dr, provided the sinh() term does not dominate near xx[0]=0.
xxmin = [sp.sympify(0), sp.sympify(0), -M_PI]
xxmax = [sp.sympify(1), M_PI, M_PI]
AMPL, SINHW = par.Cparameters("REAL",thismodule,["AMPL","SINHW"],[10.0,0.2])
const_dr = par.Cparameters("REAL",thismodule,["const_dr"],0.0625)
r = AMPL*( const_dr*xx[0] + (sp.exp(xx[0] / SINHW) - sp.exp(-xx[0] / SINHW)) /
(sp.exp(1 / SINHW) - sp.exp(-1 / SINHW)) )
th = xx[1]
ph = xx[2]
# NO CLOSED-FORM EXPRESSION FOR RADIAL INVERSION.
# Cart_to_xx[0] = "NewtonRaphson"
# Cart_to_xx[1] = sp.acos(Cartz / sp.sqrt(Cartx ** 2 + Carty ** 2 + Cartz ** 2))
# Cart_to_xx[2] = sp.atan2(Carty, Cartx)
xxSph[0] = r
xxSph[1] = th
xxSph[2] = ph
# Now define xCart, yCart, and zCart in terms of x0,xx[1],xx[2].
# Note that the relation between r and x0 is not necessarily trivial in SinhSpherical coordinates. See above.
xxCart[0] = xxSph[0]*sp.sin(xxSph[1])*sp.cos(xxSph[2])
xxCart[1] = xxSph[0]*sp.sin(xxSph[1])*sp.sin(xxSph[2])
xxCart[2] = xxSph[0]*sp.cos(xxSph[1])
scalefactor_orthog[0] = sp.diff(xxSph[0],xx[0])
scalefactor_orthog[1] = xxSph[0]
scalefactor_orthog[2] = xxSph[0]*sp.sin(xxSph[1])
# Set the unit vectors
UnitVectors = [[ sp.sin(xxSph[1])*sp.cos(xxSph[2]), sp.sin(xxSph[1])*sp.sin(xxSph[2]), sp.cos(xxSph[1])],
[ sp.cos(xxSph[1])*sp.cos(xxSph[2]), sp.cos(xxSph[1])*sp.sin(xxSph[2]), -sp.sin(xxSph[1])],
[ -sp.sin(xxSph[2]), sp.cos(xxSph[2]), sp.sympify(0) ]]
```
Now we explore $r(xx_0)$ for `SinhSphericalv2` assuming `AMPL=10.0`, `SINHW=0.2`, and `const_dr=0.05`. Notice that the `const_dr` term significantly increases the grid spacing near $xx_0=0$ relative to `SinhSpherical` coordinates.
```python
%matplotlib inline
import sympy as sp
import reference_metric as rfm
import NRPy_param_funcs as par
CoordSystem = "SinhSphericalv2"
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
rfm.reference_metric()
AMPL = 10.0
SINHW = 0.2
const_dr = 0.05
r_of_xx0 = sp.sympify(str(rfm.xxSph[0] ).replace("AMPL",str(AMPL)).replace("SINHW",str(SINHW)).replace("const_dr",str(const_dr)))
rprime_of_xx0 = sp.sympify(str(sp.diff(rfm.xxSph[0],rfm.xx[0])).replace("AMPL",str(AMPL)).replace("SINHW",str(SINHW)).replace("const_dr",str(const_dr)))
create_r_of_xx0_plots(CoordSystem, r_of_xx0,rprime_of_xx0)
```
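A rough numerical check of the `const_dr` effect noted above (a plain-Python sketch, using the analytic derivative of $r(xx_0)$ at $xx_0=0$): near the origin the radial grid spacing is proportional to $r'(0)$, which the `const_dr` term raises from $\text{AMPL}/\left(\text{SINHW}\sinh(1/\text{SINHW})\right)$ to that value plus $\text{AMPL}\cdot\text{const\_dr}$.
```python
import math
AMPL, SINHW, const_dr = 10.0, 0.2, 0.05
rprime0_sinh   = AMPL / (SINHW * math.sinh(1.0/SINHW))   # SinhSpherical:   r'(0)
rprime0_sinhv2 = AMPL * const_dr + rprime0_sinh          # SinhSphericalv2: r'(0)
print(rprime0_sinh, rprime0_sinhv2)   # -> ~0.67 vs ~1.17
```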
<a id='cylindricallike'></a>
## Step 3.b: Cylindrical-like coordinate systems \[Back to [top](#toc)\]
$$\label{cylindricallike}$$
<a id='cylindrical'></a>
### Step 3.b.i: **`reference_metric::CoordSystem = "Cylindrical"`** \[Back to [top](#toc)\]
$$\label{cylindrical}$$
Standard cylindrical coordinates, with $(\rho,\phi,z)=(xx_0,xx_1,xx_2)$
```python
if CoordSystem == "Cylindrical":
# Assuming the cylindrical radial coordinate
# is positive makes nice simplifications of
# unit vectors possible.
xx[0] = sp.symbols("xx0", real=True)
RHOMAX,ZMIN,ZMAX = par.Cparameters("REAL",thismodule,["RHOMAX","ZMIN","ZMAX"],[10.0,-10.0,10.0])
xxmin = [sp.sympify(0), -M_PI, ZMIN]
xxmax = [ RHOMAX, M_PI, ZMAX]
RHOCYL = xx[0]
PHICYL = xx[1]
ZCYL = xx[2]
Cart_to_xx[0] = sp.sqrt(Cartx ** 2 + Carty ** 2)
Cart_to_xx[1] = sp.atan2(Carty, Cartx)
Cart_to_xx[2] = Cartz
xxCart[0] = RHOCYL*sp.cos(PHICYL)
xxCart[1] = RHOCYL*sp.sin(PHICYL)
xxCart[2] = ZCYL
xxSph[0] = sp.sqrt(RHOCYL**2 + ZCYL**2)
xxSph[1] = sp.acos(ZCYL / xxSph[0])
xxSph[2] = PHICYL
scalefactor_orthog[0] = sp.diff(RHOCYL,xx[0])
scalefactor_orthog[1] = RHOCYL
scalefactor_orthog[2] = sp.diff(ZCYL,xx[2])
# Set the unit vectors
UnitVectors = [[ sp.cos(PHICYL), sp.sin(PHICYL), sp.sympify(0)],
[-sp.sin(PHICYL), sp.cos(PHICYL), sp.sympify(0)],
[ sp.sympify(0), sp.sympify(0), sp.sympify(1)]]
```
Next let's plot **"Cylindrical"** coordinates.
```python
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
R = np.linspace(0, 2, 24)
h = 2
u = np.linspace(0, 2*np.pi, 24)
x = np.outer(R, np.cos(u))
y = np.outer(R, np.sin(u))
z = h * np.outer(np.ones(np.size(u)), np.ones(np.size(u)))
r = np.arange(0,2,0.25)
theta = 2*np.pi*r*0
fig = plt.figure(figsize=(12,12)) # 12 in x 12 in
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1 = plt.axes(projection='polar')
ax1.set_rmax(2)
ax1.set_rgrids(r,labels=[])
thetas = np.linspace(0,360,24, endpoint=True)
ax1.set_thetagrids(thetas,labels=[])
# ax.grid(True)
ax1.grid(True,linewidth='1.0')
ax1.set_title("Top Down View")
plt.show()
ax2 = plt.axes(projection='3d', xticklabels=[], yticklabels=[], zticklabels=[])
#ax2.plot_surface(x,y,z, alpha=.75, cmap = 'viridis') # z in case of disk which is parallel to XY plane is constant and you can directly use h
x=np.linspace(-2, 2, 100)
z=np.linspace(-2, 2, 100)
Xc, Zc=np.meshgrid(x, z)
Yc = np.sqrt(4-Xc**2)
rstride = 10
cstride = 10
ax2.plot_surface(Xc, Yc, Zc, alpha=1.0, rstride=rstride, cstride=cstride, cmap = 'viridis')
ax2.plot_surface(Xc, -Yc, Zc, alpha=1.0, rstride=rstride, cstride=cstride, cmap = 'viridis')
ax2.set_title("Standard Cylindrical Grid in 3D")
ax2.grid(False)
plt.axis('off')
plt.show()
```
<a id='sinhcylindrical'></a>
### Step 3.b.ii" **`reference_metric::CoordSystem = "SinhCylindrical"`** \[Back to [top](#toc)\]
$$\label{sinhcylindrical}$$
Cylindrical coordinates, but with
$$\rho(xx_0) = \text{AMPLRHO} \frac{\sinh\left(\frac{xx_0}{\text{SINHWRHO}}\right)}{\sinh\left(\frac{1}{\text{SINHWRHO}}\right)}$$
and
$$z(xx_2) = \text{AMPLZ} \frac{\sinh\left(\frac{xx_2}{\text{SINHWZ}}\right)}{\sinh\left(\frac{1}{\text{SINHWZ}}\right)}$$
```python
if CoordSystem == "SinhCylindrical":
# Assuming the cylindrical radial coordinate
# is positive makes nice simplifications of
# unit vectors possible.
xx[0] = sp.symbols("xx0", real=True)
xxmin = [sp.sympify(0), -M_PI, sp.sympify(-1)]
xxmax = [sp.sympify(1), M_PI, sp.sympify(+1)]
AMPLRHO, SINHWRHO, AMPLZ, SINHWZ = par.Cparameters("REAL",thismodule,
["AMPLRHO","SINHWRHO","AMPLZ","SINHWZ"],
[ 10.0, 0.2, 10.0, 0.2])
# Set SinhCylindrical radial & z coordinates by default; overwrite later if CoordSystem == "SinhCylindricalv2".
RHOCYL = AMPLRHO * (sp.exp(xx[0] / SINHWRHO) - sp.exp(-xx[0] / SINHWRHO)) / (sp.exp(1 / SINHWRHO) - sp.exp(-1 / SINHWRHO))
# phi coordinate remains unchanged.
PHICYL = xx[1]
ZCYL = AMPLZ * (sp.exp(xx[2] / SINHWZ) - sp.exp(-xx[2] / SINHWZ)) / (sp.exp(1 / SINHWZ) - sp.exp(-1 / SINHWZ))
Cart_to_xx[0] = SINHWRHO*sp.asinh(sp.sqrt(Cartx ** 2 + Carty ** 2)*sp.sinh(1/SINHWRHO)/AMPLRHO)
Cart_to_xx[1] = sp.atan2(Carty, Cartx)
Cart_to_xx[2] = SINHWZ*sp.asinh(Cartz*sp.sinh(1/SINHWZ)/AMPLZ)
xxCart[0] = RHOCYL*sp.cos(PHICYL)
xxCart[1] = RHOCYL*sp.sin(PHICYL)
xxCart[2] = ZCYL
xxSph[0] = sp.sqrt(RHOCYL**2 + ZCYL**2)
xxSph[1] = sp.acos(ZCYL / xxSph[0])
xxSph[2] = PHICYL
scalefactor_orthog[0] = sp.diff(RHOCYL,xx[0])
scalefactor_orthog[1] = RHOCYL
scalefactor_orthog[2] = sp.diff(ZCYL,xx[2])
# Set the unit vectors
UnitVectors = [[ sp.cos(PHICYL), sp.sin(PHICYL), sp.sympify(0)],
[-sp.sin(PHICYL), sp.cos(PHICYL), sp.sympify(0)],
[ sp.sympify(0), sp.sympify(0), sp.sympify(1)]]
```
Next let's plot **"SinhCylindrical"** coordinates.
```python
fig=plt.figure()
plt.clf()
fig = plt.figure()
ax = plt.subplot(1,1,1, projection='polar')
ax.set_rmax(2)
Nr = 20
xx0s = np.linspace(0,2,Nr, endpoint=True) + 1.0/(2.0*Nr)
rs = []
AMPLRHO = 1.0
SINHW = 0.4
for i in range(Nr):
rs.append(AMPLRHO * (np.exp(xx0s[i] / SINHW) - np.exp(-xx0s[i] / SINHW)) / \
(np.exp(1.0 / SINHW) - np.exp(-1.0 / SINHW)))
ax.set_rgrids(rs,labels=[])
thetas = np.linspace(0,360,25, endpoint=True)
ax.set_thetagrids(thetas,labels=[])
# ax.grid(True)
ax.grid(True,linewidth='1.0')
plt.show()
```
<a id='sinhcylindricalv2'></a>
### Step 3.b.iii: **`reference_metric::CoordSystem = "SinhCylindricalv2"`** \[Back to [top](#toc)\]
$$\label{sinhcylindricalv2}$$
Cylindrical coordinates, but with
$$\rho(xx_0) = \text{AMPLRHO} \left[\text{const_drho}\ xx_0 + \frac{\sinh\left(\frac{xx_0}{\text{SINHWRHO}}\right)}{\sinh\left(\frac{1}{\text{SINHWRHO}}\right)}\right]$$
and
$$z(xx_2) = \text{AMPLZ} \left[\text{const_dz}\ xx_2 + \frac{\sinh\left(\frac{xx_2}{\text{SINHWZ}}\right)}{\sinh\left(\frac{1}{\text{SINHWZ}}\right)}\right]$$
```python
if CoordSystem == "SinhCylindricalv2":
# Assuming the cylindrical radial coordinate
# is positive makes nice simplifications of
# unit vectors possible.
xx[0] = sp.symbols("xx0", real=True)
# SinhCylindricalv2 adds the parameters "const_drho", "const_dz", which allows for regions near xx[0]=0
# and xx[2]=0 to have constant rho and z resolution of const_drho and const_dz, provided the sinh() terms
# do not dominate near xx[0]=0 and xx[2]=0.
xxmin = [sp.sympify(0), -M_PI, sp.sympify(-1)]
xxmax = [sp.sympify(1), M_PI, sp.sympify(+1)]
AMPLRHO, SINHWRHO, AMPLZ, SINHWZ = par.Cparameters("REAL",thismodule,
["AMPLRHO","SINHWRHO","AMPLZ","SINHWZ"],
[ 10.0, 0.2, 10.0, 0.2])
const_drho, const_dz = par.Cparameters("REAL",thismodule,["const_drho","const_dz"],[0.0625,0.0625])
RHOCYL = AMPLRHO * ( const_drho*xx[0] + (sp.exp(xx[0] / SINHWRHO) - sp.exp(-xx[0] / SINHWRHO)) / (sp.exp(1 / SINHWRHO) - sp.exp(-1 / SINHWRHO)) )
PHICYL = xx[1]
ZCYL = AMPLZ * ( const_dz *xx[2] + (sp.exp(xx[2] / SINHWZ ) - sp.exp(-xx[2] / SINHWZ )) / (sp.exp(1 / SINHWZ ) - sp.exp(-1 / SINHWZ )) )
# NO CLOSED-FORM EXPRESSION FOR RADIAL OR Z INVERSION.
# Cart_to_xx[0] = "NewtonRaphson"
# Cart_to_xx[1] = sp.atan2(Carty, Cartx)
# Cart_to_xx[2] = "NewtonRaphson"
xxCart[0] = RHOCYL*sp.cos(PHICYL)
xxCart[1] = RHOCYL*sp.sin(PHICYL)
xxCart[2] = ZCYL
xxSph[0] = sp.sqrt(RHOCYL**2 + ZCYL**2)
xxSph[1] = sp.acos(ZCYL / xxSph[0])
xxSph[2] = PHICYL
scalefactor_orthog[0] = sp.diff(RHOCYL,xx[0])
scalefactor_orthog[1] = RHOCYL
scalefactor_orthog[2] = sp.diff(ZCYL,xx[2])
# Set the unit vectors
UnitVectors = [[ sp.cos(PHICYL), sp.sin(PHICYL), sp.sympify(0)],
[-sp.sin(PHICYL), sp.cos(PHICYL), sp.sympify(0)],
[ sp.sympify(0), sp.sympify(0), sp.sympify(1)]]
```
For example, let's set up **`SinhCylindricalv2`** coordinates and output the Christoffel symbol $\hat{\Gamma}^{xx_2}_{xx_2 xx_2}$, or more simply $\hat{\Gamma}^2_{22}$:
```python
par.set_parval_from_str("reference_metric::CoordSystem","SinhCylindricalv2")
rfm.reference_metric()
sp.pretty_print(sp.simplify(rfm.GammahatUDD[2][2][2]))
```
$$\hat{\Gamma}^2_{22} = \frac{-\left(e^{2 xx_2/\text{SINHWZ}} - 1\right) e^{1/\text{SINHWZ}}}{\text{SINHWZ}\left[-\text{SINHWZ}\,\text{const\_dz}\left(e^{2/\text{SINHWZ}} - 1\right) e^{xx_2/\text{SINHWZ}} - \left(e^{2 xx_2/\text{SINHWZ}} + 1\right) e^{1/\text{SINHWZ}}\right]}$$
As we will soon see, defining these "hatted" quantities will be quite useful when expressing hyperbolic ([wave-equation](https://en.wikipedia.org/wiki/Wave_equation)-like) PDEs in non-Cartesian coordinate systems.
<a id='cartesianlike'></a>
## Step 3.c: Cartesian-like coordinate systems \[Back to [top](#toc)\]
$$\label{cartesianlike}$$
<a id='cartesian'></a>
### Step 3.c.i: **`reference_metric::CoordSystem = "Cartesian"`** \[Back to [top](#toc)\]
$$\label{cartesian}$$
Standard Cartesian coordinates, with $(x,y,z)=$ `(xx0,xx1,xx2)`
```python
if CoordSystem == "Cartesian":
xmin, xmax, ymin, ymax, zmin, zmax = par.Cparameters("REAL",thismodule,
["xmin","xmax","ymin","ymax","zmin","zmax"],
[ -10.0, 10.0, -10.0, 10.0, -10.0, 10.0])
xxmin = ["xmin", "ymin", "zmin"]
xxmax = ["xmax", "ymax", "zmax"]
xxCart[0] = xx[0]
xxCart[1] = xx[1]
xxCart[2] = xx[2]
xxSph[0] = sp.sqrt(xx[0] ** 2 + xx[1] ** 2 + xx[2] ** 2)
xxSph[1] = sp.acos(xx[2] / xxSph[0])
xxSph[2] = sp.atan2(xx[1], xx[0])
Cart_to_xx[0] = Cartx
Cart_to_xx[1] = Carty
Cart_to_xx[2] = Cartz
scalefactor_orthog[0] = sp.sympify(1)
scalefactor_orthog[1] = sp.sympify(1)
scalefactor_orthog[2] = sp.sympify(1)
# Set the transpose of the matrix of unit vectors
UnitVectors = [[sp.sympify(1), sp.sympify(0), sp.sympify(0)],
[sp.sympify(0), sp.sympify(1), sp.sympify(0)],
[sp.sympify(0), sp.sympify(0), sp.sympify(1)]]
```
```python
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.clf()
fig = plt.figure()
ax = fig.gca()
Nx = 16
ax.set_xticks(np.arange(0, 1., 1./Nx))
ax.set_yticks(np.arange(0, 1., 1./Nx))
# plt.scatter(x, y)
ax.set_aspect('equal')
plt.grid()
# plt.savefig("Cartgrid.png",dpi=300)
plt.show()
# plt.close(fig)
```
<a id='prolatespheroidal'></a>
## Step 3.d: [Prolate spheroidal](https://en.wikipedia.org/wiki/Prolate_spheroidal_coordinates)-like coordinate systems \[Back to [top](#toc)\]
$$\label{prolatespheroidal}$$
<a id='symtp'></a>
### Step 3.d.i: **`reference_metric::CoordSystem = "SymTP"`** \[Back to [top](#toc)\]
$$\label{symtp}$$
Symmetric TwoPuncture coordinates, with $(\rho,\phi,z)=(xx_0\sin(xx_1), xx_2, \sqrt{xx_0^2 + \text{bScale}^2}\cos(xx_1))$
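Before looking at the code, note that SymTP reduces to ordinary spherical coordinates in the limit $\text{bScale}\to 0$. A minimal SymPy sketch of this limit (the symbol names below are illustrative only, not taken from reference_metric.py):
```python
import sympy as sp
xx0, xx1, bScale = sp.symbols("xx0 xx1 bScale", positive=True)
rho = xx0 * sp.sin(xx1)                          # rho(xx0, xx1)
z   = sp.sqrt(xx0**2 + bScale**2) * sp.cos(xx1)  # z(xx0, xx1)
print(sp.simplify(z.subs(bScale, 0)))            # -> xx0*cos(xx1), i.e. spherical
```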
```python
if CoordSystem == "SymTP":
var1, var2= sp.symbols('var1 var2',real=True)
bScale, AW, AMAX, RHOMAX, ZMIN, ZMAX = par.Cparameters("REAL",thismodule,
["bScale","AW","AMAX","RHOMAX","ZMIN","ZMAX"],
[0.5, 0.2, 10.0, 10.0, -10.0, 10.0])
# Assuming xx0, xx1, and bScale
# are positive makes nice simplifications of
# unit vectors possible.
xx[0],xx[1] = sp.symbols("xx0 xx1", real=True)
xxmin = [sp.sympify(0), sp.sympify(0),-M_PI]
xxmax = [ AMAX, M_PI, M_PI]
AA = xx[0]
if CoordSystem == "SinhSymTP":
AA = (sp.exp(xx[0]/AW)-sp.exp(-xx[0]/AW))/2
var1 = sp.sqrt(AA**2 + (bScale * sp.sin(xx[1]))**2)
var2 = sp.sqrt(AA**2 + bScale**2)
RHOSYMTP = AA*sp.sin(xx[1])
PHSYMTP = xx[2]
ZSYMTP = var2*sp.cos(xx[1])
xxCart[0] = AA *sp.sin(xx[1])*sp.cos(xx[2])
xxCart[1] = AA *sp.sin(xx[1])*sp.sin(xx[2])
xxCart[2] = ZSYMTP
xxSph[0] = sp.sqrt(RHOSYMTP**2 + ZSYMTP**2)
xxSph[1] = sp.acos(ZSYMTP / xxSph[0])
xxSph[2] = PHSYMTP
rSph = sp.sqrt(Cartx ** 2 + Carty ** 2 + Cartz ** 2)
thSph = sp.acos(Cartz / rSph)
phSph = sp.atan2(Carty, Cartx)
# Mathematica script to compute Cart_to_xx[]
# AA = x1;
# var2 = Sqrt[AA^2 + bScale^2];
# RHOSYMTP = AA*Sin[x2];
# ZSYMTP = var2*Cos[x2];
# Solve[{rSph == Sqrt[RHOSYMTP^2 + ZSYMTP^2],
# thSph == ArcCos[ZSYMTP/Sqrt[RHOSYMTP^2 + ZSYMTP^2]],
# phSph == x3},
# {x1, x2, x3}]
Cart_to_xx[0] = sp.sqrt(-bScale**2 + rSph**2 +
sp.sqrt(bScale**4 + 2*bScale**2*rSph**2 + rSph**4 -
4*bScale**2*rSph**2*sp.cos(thSph)**2))*M_SQRT1_2 # M_SQRT1_2 = 1/sqrt(2); define this way for UnitTesting
# The sign() function in the following expression ensures the correct root is taken.
Cart_to_xx[1] = sp.acos(sp.sign(Cartz)*(
sp.sqrt(1 + rSph**2/bScale**2 -
sp.sqrt(bScale**4 + 2*bScale**2*rSph**2 + rSph**4 -
4*bScale**2*rSph**2*sp.cos(thSph)**2)/bScale**2)*M_SQRT1_2)) # M_SQRT1_2 = 1/sqrt(2); define this way for UnitTesting
Cart_to_xx[2] = phSph
```
<a id='sinhsymtp'></a>
### Step 3.d.ii: **`reference_metric::CoordSystem = "SinhSymTP"`** \[Back to [top](#toc)\]
$$\label{sinhsymtp}$$
Symmetric TwoPuncture coordinates, but with $$xx_0 \to \sinh(xx_0/\text{AW})$$
```python
if CoordSystem == "SinhSymTP":
var1, var2= sp.symbols('var1 var2',real=True)
bScale, AW, AMAX, RHOMAX, ZMIN, ZMAX = par.Cparameters("REAL",thismodule,
["bScale","AW","AMAX","RHOMAX","ZMIN","ZMAX"],
[0.5, 0.2, 10.0, 10.0, -10.0, 10.0])
# Assuming xx0, xx1, and bScale
# are positive makes nice simplifications of
# unit vectors possible.
xx[0],xx[1] = sp.symbols("xx0 xx1", real=True)
xxmin = [sp.sympify(0), sp.sympify(0),-M_PI]
xxmax = [ AMAX, M_PI, M_PI]
AA = xx[0]
if CoordSystem == "SinhSymTP":
# With xxmax[0] == AMAX, sinh(xx0/AMAX) will evaluate to a number between 0 and 1.
# Similarly, sinh(xx0/(AMAX*SINHWAA)) / sinh(1/SINHWAA) will also evaluate to a number between 0 and 1.
# Then AA = AMAX*sinh(xx0/(AMAX*SINHWAA)) / sinh(1/SINHWAA) will evaluate to a number between 0 and AMAX.
AA = AMAX * (sp.exp(xx[0] / (AMAX*SINHWAA)) - sp.exp(-xx[0] / (AMAX*SINHWAA))) / (sp.exp(1 / SINHWAA) - sp.exp(-1 / SINHWAA))
var1 = sp.sqrt(AA**2 + (bScale * sp.sin(xx[1]))**2)
var2 = sp.sqrt(AA**2 + bScale**2)
RHOSYMTP = AA*sp.sin(xx[1])
PHSYMTP = xx[2]
ZSYMTP = var2*sp.cos(xx[1])
xxCart[0] = AA *sp.sin(xx[1])*sp.cos(xx[2])
xxCart[1] = AA *sp.sin(xx[1])*sp.sin(xx[2])
xxCart[2] = ZSYMTP
xxSph[0] = sp.sqrt(RHOSYMTP**2 + ZSYMTP**2)
xxSph[1] = sp.acos(ZSYMTP / xxSph[0])
xxSph[2] = PHSYMTP
scalefactor_orthog[0] = sp.diff(AA,xx[0]) * var1 / var2
scalefactor_orthog[1] = var1
scalefactor_orthog[2] = AA * sp.sin(xx[1])
# Set the transpose of the matrix of unit vectors
UnitVectors = [[sp.sin(xx[1]) * sp.cos(xx[2]) * var2 / var1,
sp.sin(xx[1]) * sp.sin(xx[2]) * var2 / var1,
AA * sp.cos(xx[1]) / var1],
[AA * sp.cos(xx[1]) * sp.cos(xx[2]) / var1,
AA * sp.cos(xx[1]) * sp.sin(xx[2]) / var1,
-sp.sin(xx[1]) * var2 / var1],
[-sp.sin(xx[2]), sp.cos(xx[2]), sp.sympify(0)]]
```
<a id='latex_pdf_output'></a>
# Step 4: Output this module to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-Reference_Metric.pdf](Tutorial-Reference_Metric.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```python
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx Tutorial-Reference_Metric.ipynb
!pdflatex -interaction=batchmode Tutorial-Reference_Metric.tex
!pdflatex -interaction=batchmode Tutorial-Reference_Metric.tex
!pdflatex -interaction=batchmode Tutorial-Reference_Metric.tex
!rm -f Tut*.out Tut*.aux Tut*.log
```
[NbConvertApp] Converting notebook Tutorial-Reference_Metric.ipynb to latex
[NbConvertApp] Support files will be in Tutorial-Reference_Metric_files/
[NbConvertApp] Making directory Tutorial-Reference_Metric_files
[NbConvertApp] Making directory Tutorial-Reference_Metric_files
[NbConvertApp] Making directory Tutorial-Reference_Metric_files
[NbConvertApp] Making directory Tutorial-Reference_Metric_files
[NbConvertApp] Making directory Tutorial-Reference_Metric_files
[NbConvertApp] Making directory Tutorial-Reference_Metric_files
[NbConvertApp] Making directory Tutorial-Reference_Metric_files
[NbConvertApp] Writing 132807 bytes to Tutorial-Reference_Metric.tex
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
|
### Quaternion type heavily based off of Base lib Complex.jl
"""
Quaternion{T<:Real} <: Number
Quaternion number type with real and imaginary parts of type `T`.
`QuatF64` and `QuatF32` or `QuaternionF64` and `QuaternionF32` are aliases for
`Quaternion{Float64}` and `Quaternion{Float32}` respectively.
"""
struct Quaternion{T<:Real} <: Number
re::T
im::T
jm::T
km::T
end
### Constructors
Quaternion(a::Real, b::Real, c::Real, d::Real) = Quaternion(promote(a,b,c,d)...)
Quaternion(x::Real) = Quaternion(x, zero(x), zero(x), zero(x))
Quaternion(z::Complex) = Quaternion(real(z), imag(z), zero(real(z)), zero(real(z)))
Quaternion(q::Quaternion) = q
(::Type{Quaternion{T}})(x::Real) where {T} = Quaternion{T}(x, 0, 0, 0)
(::Type{Quaternion{T}})(z::Complex) where {T} = Quaternion{T}(z.re, z.im, 0, 0)
(::Type{Quaternion{T}})(q::Quaternion) where {T} = Quaternion{T}(q.re, q.im, q.jm, q.km)
### For representation as q = z + c*j for complex z and c
Quaternion(z1::Complex, z2::Complex) = Quaternion(real(z1), imag(z1), real(z2), imag(z2))
# Doesn't seem like a good design, fix this ?
Quaternion(v::Vector{<:Real}) = Quaternion(v[1], v[2], v[3], v[4])
"""
jm
Quaternion imaginary unit.
"""
const jm = Quaternion(false,false,true,false)
"""
km
Quaternion imaginary unit.
"""
const km = Quaternion(false,false,false,true)
const QuaternionF16 = Quaternion{Float16}
const QuaternionF32 = Quaternion{Float32}
const QuaternionF64 = Quaternion{Float64}
const QuatF16 = Quaternion{Float16}
const QuatF32 = Quaternion{Float32}
const QuatF64 = Quaternion{Float64}
convert(::Type{Quaternion{T}}, x::Real) where {T<:Real} = Quaternion{T}(x,0,0,0)
convert(::Type{Quaternion{T}}, z::Complex) where {T<:Real} = Quaternion{T}(real(z),imag(z),0,0)
convert(::Type{Quaternion{T}}, q::Quaternion) where {T<:Real} = Quaternion{T}(q.re, q.im, q.jm, q.km)
promote_rule(::Type{Quaternion{T}}, ::Type{S}) where {T<:Real,S<:Real} =
Quaternion{promote_type(T,S)}
promote_rule(::Type{Quaternion{T}}, ::Type{Complex{S}}) where {T<:Real,S<:Real} =
Quaternion{promote_type(T,S)}
promote_rule(::Type{Quaternion{T}}, ::Type{Quaternion{S}}) where {T<:Real,S<:Real} =
Quaternion{promote_type(T,S)}
widen(::Type{Quaternion{T}}) where {T} = Quaternion{widen(T)}
bswap(q::Quaternion) = Quaternion(bswap(q.re), bswap(q.im), bswap(q.jm), bswap(q.km))
### Redefine/overload methods for Complex
real(q::Quaternion) = q.re
imag(q::Quaternion) = q.im
"""
jmag(q)
Return the j-imaginary part of the quaternion number `q`.
# Example
```jldoctest
julia> jmag(1 + 3im + 2jm + 5km)
2
```
"""
jmag(q::Quaternion) = q.jm
"""
kmag(q)
Return the k-imaginary part of the quaternion number `q`.
# Example
```jldoctest
julia> kmag(1 + 3im + 2jm + 5km)
5
```
"""
kmag(q::Quaternion) = q.km
real(::Type{Quaternion{T}}) where {T<:Real} = T
isreal(q::Quaternion) = iszero(q.im) && iszero(q.jm) && iszero(q.km)
isinteger(q::Quaternion) = isreal(q) && isinteger(q.re)
isfinite(q::Quaternion) = isfinite(q.re) && isfinite(q.im) && isfinite(q.jm) && isfinite(q.km)
isnan(q::Quaternion) = isnan(q.re) || isnan(q.im) || isnan(q.jm) || isnan(q.km)
isinf(q::Quaternion) = isinf(q.re) || isinf(q.im) || isinf(q.jm) || isinf(q.km)
iszero(q::Quaternion) = iszero(q.re) && iszero(q.im) && iszero(q.jm) && iszero(q.km)
iscomplex(q::Quaternion) = iszero(q.jm) && iszero(q.km)
isone(q::Quaternion) = isreal(q) && isone(q.re)
zero(::Type{Quaternion{T}}) where {T<:Real} = Quaternion{T}(zero(T), zero(T), zero(T), zero(T))
"""
quat(r::Real [, i::Real, j::Real, k::Real])
quat(c::Complex [, z::Complex])
Convert real numbers or arrays to a quaternion; `i`, `j`, `k` default to zero.
Convert a complex number to a quaternion, with `j` and `k` parts zero.
Convert a pair of complex numbers to a quaternion `c + z*j`.
Mostly just a more convenient way to call the constructor `Quaternion(...)`.
"""
quat(a::Real, b::Real, c::Real, d::Real) = Quaternion(a, b, c, d)
quat(x::Real) = Quaternion(x)
quat(z::Complex) = Quaternion(z)
quat(q::Quaternion) = q
quat(z1::Complex, z2::Complex) = Quaternion(z1, z2)
quat(v::Vector{<:Real}) = Quaternion(v)
"""
complex(q::Quaternion)
Returns the complex part of a quaternion, equivalent to `complex(q.re, q.im)`.
"""
complex(::Type{Quaternion{T}}) where {T<:Real} = Complex{T}
complex(q::Quaternion) = complex(q.re, q.im)
"""
quat(T::Type)
Returns an appropriate type which can represent a value of type `T` as a quaternion.
Equivalent to `typeof(quat(zero(T)))`.
"""
quat(::Type{T}) where {T<:Real} = Quaternion{T}
quat(::Type{Complex{T}}) where {T<:Real} = Quaternion{T}
quat(::Type{Quaternion{T}}) where {T<:Real} = Quaternion{T}
vec(q::Quaternion) = vcat(q.re, q.im, q.jm, q.km)
function show(io::IO, q::Quaternion)
r, i, j, k = vec(q)
compact = get(io, :compact, false)
show(io, r)
for (x, xm) in [(i, "im"), (j, "jm"), (k, "km")]
if signbit(x) && !isnan(x)
x = -x
print(io, compact ? "-" : " - ")
else
print(io, compact ? "+" : " + ")
end
show(io, x)
if !(isa(x,Integer) && !isa(x,Bool) || isa(x,AbstractFloat) && isfinite(x))
print(io, "*")
end
print(io, xm)
end
end
function show(io::IO, q::Quaternion{Bool})
if q == im
print(io, "im")
elseif q == jm
print(io, "jm")
elseif q == km
print(io, "km")
else
print(io, "Quaternion($(q.re),$(q.im),$(q.jm),$(q.km))")
end
end
function show(io::IO, ::MIME"text/html", q::Quaternion)
r, i, j, k = vec(q)
compact = get(io, :compact, false)
show(io, r)
for (x, xm) in [(i, "i"), (j, "j"), (k, "k")]
if signbit(x) && !isnan(x)
x = -x
print(io, compact ? "-" : " - ")
else
print(io, compact ? "+" : " + ")
end
show(io, x)
if !(isa(x,Integer) && !isa(x,Bool) || isa(x,AbstractFloat) && isfinite(x))
print(io, "*")
end
print(io, "<b><i>" * xm * "</i></b>")
end
end
function show(io::IO, ::MIME"text/html", q::Quaternion{Bool})
if q == im
print(io, "<b><i>i</i></b>")
elseif q == jm
print(io, "<b><i>j</i></b>")
elseif q == km
print(io, "<b><i>k</i></b>")
else
print(io, "Quaternion($(q.re),$(q.im),$(q.jm),$(q.km))")
end
end
function read(s::IO, ::Type{Quaternion{T}}) where {T}
r = read(s,T)
i = read(s,T)
j = read(s,T)
k = read(s,T)
Quaternion{T}(r,i,j,k)
end
function write(s::IO, q::Quaternion)
write(s,q.re,q.im,q.jm,q.km)
end
==(q::Quaternion, w::Quaternion) = q.re == w.re && q.im == w.im && q.jm == w.jm && q.km == w.km
==(q::Quaternion, z::Complex) = iscomplex(q) && q.im == imag(z) && q.re == real(z)
==(q::Quaternion, x::Real) = isreal(q) && q.re == x
isequal(q::Quaternion, w::Quaternion) = isequal(q.re,w.re) & isequal(q.im,w.im) &
isequal(q.jm,w.jm) & isequal(q.km,w.km)
#TODO: hash
"""
conj(q::Quaternion)
Compute the quaternion conjugate of `q`.
"""
conj(q::Quaternion) = Quaternion(q.re,-q.im,-q.jm,-q.km)
"""
abs(q::Quaternion)
Compute the absolute value (euclidean norm) of a quaternion.
"""
abs(q::Quaternion) = sqrt(abs2(q))
"""
abs2(q::Quaternion)
Compute the squared euclidean norm of a quaternion.
Equivalent to `q * conj(q)` but computationally faster and returns a real.
"""
abs2(q::Quaternion) = q.re*q.re + q.im*q.im + q.jm*q.jm + q.km*q.km
inv(q::Quaternion) = conj(q)/abs2(q)
inv(q::Quaternion{<:Integer}) = inv(float(q))
-(q::Quaternion) = Quaternion(-q.re, -q.im, -q.jm, -q.km)
+(q::Quaternion, w::Quaternion) = Quaternion(q.re + w.re, q.im + w.im,
q.jm + w.jm, q.km + w.km)
-(q::Quaternion, w::Quaternion) = Quaternion(q.re - w.re, q.im - w.im,
q.jm - w.jm, q.km - w.km)
*(q::Quaternion, w::Quaternion) = Quaternion(q.re*w.re - q.im*w.im - q.jm*w.jm - q.km*w.km,
q.re*w.im + q.im*w.re + q.jm*w.km - q.km*w.jm,
q.re*w.jm - q.im*w.km + q.jm*w.re + q.km*w.im,
q.re*w.km + q.im*w.jm - q.jm*w.im + q.km*w.re)
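# Quick orientation check (sketch of expected REPL behaviour): the Hamilton
# product above is non-commutative and follows the usual sign conventions, e.g.
#   im*jm ==  km   # true
#   jm*im == -km   # true
#   jm*km ==  im   # true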
+(x::Real, q::Quaternion) = Quaternion(x + q.re, q.im, q.jm, q.km)
+(q::Quaternion, x::Real) = Quaternion(q.re + x, q.im, q.jm, q.km)
+(z::Complex, q::Quaternion) = Quaternion(real(z) + q.re, imag(z) + q.im, q.jm, q.km)
+(q::Quaternion, z::Complex) = Quaternion(q.re + real(z), q.im + imag(z), q.jm, q.km)
function -(x::Real, q::Quaternion)
# we don't want the default type for -(Bool)
re = x - q.re
Quaternion(re, - oftype(re, q.im), - oftype(re, q.jm), - oftype(re, q.km))
end
function -(z::Complex, q::Quaternion)
# we don't want the default type for -(Bool)
re = real(z) - q.re
Quaternion(re, imag(z) - q.im, - oftype(re, q.jm), - oftype(re, q.km))
end
-(q::Quaternion, x::Real) = Quaternion(q.re - x, q.im, q.jm, q.km)
-(q::Quaternion, z::Complex) = Quaternion(q.re - real(z), q.im - imag(z), q.jm, q.km)
*(x::Real, q::Quaternion) = Quaternion(x * q.re, x * q.im, x * q.jm, x * q.km)
*(q::Quaternion, x::Real) = Quaternion(q.re * x, q.im * x, q.jm * x, q.km * x)
*(q::Quaternion, z::Complex) = Quaternion(q.re*z.re - q.im*z.im, q.re*z.im + q.im*z.re,
q.jm*z.re + q.km*z.im, q.km*z.re - q.jm*z.im)
*(z::Complex, q::Quaternion) = Quaternion(z.re*q.re - z.im*q.im, z.re*q.im + z.im*q.re,
z.re*q.jm - z.im*q.km, z.re*q.km + z.im*q.jm)
/(a::R, q::S) where {R<:Real,S<:Quaternion} = (T = promote_type(R,S); a*inv(T(q)))
/(q::Quaternion, x::Real) = Quaternion(q.re/x, q.im/x, q.jm/x, q.km/x)
/(q::Quaternion, w::Quaternion) = q * inv(w)
rand(r::AbstractRNG, ::SamplerType{Quaternion{T}}) where {T<:Real} =
Quaternion(rand(r,T), rand(r,T), rand(r,T), rand(r,T))
"""
When the type argument is quaternion, the values are drawn
from the circularly symmetric quaternion normal distribution.
This is std normal in each component but with variance scaled by 1/4.
"""
randn(r::AbstractRNG, ::Type{Quaternion{T}}) where {T<:AbstractFloat} =
Quaternion(T(0.5)*randn(r,T), T(0.5)*randn(r,T), T(0.5)*randn(r,T), T(0.5)*randn(r,T))
"""
norm(q::Quaternion, p::Real=2)
Computes the `p` norm of the quaternion components of `q` as a vector.
For the default `p=2` euclidean norm, this is same as the quaternion `abs()`.
"""
norm(q::Quaternion{T}, p::Real=2) where {T<:Real} = norm(vec(q), p)
"""
normalize(q::Quaternion, p::Real=2)
Computes the normalized unit quaternion. Optional choice of `p` norm.
"""
normalize(q::Quaternion{T}, p::Real=2) where {T<:Real} = Quaternion(normalize(vec(q), p))
exp(q::Quaternion{<:Integer}) = exp(float(q))
log(q::Quaternion{<:Integer}) = log(float(q))
### See Neil Dantam - Quaternion Computation - for exp and log
function exp(q::Quaternion{<:AbstractFloat})
V = norm([q.im q.jm q.km])
s = if V < 1e-8
1 - V*V/6 + V*V*V*V/120
else
sin(V)/V
end
exp(q.re)*(cos(V) + quat(zero(q.re), q.im*s, q.jm*s, q.km*s))
end
function log(q::Quaternion{<:AbstractFloat})
V = norm([q.im q.jm q.km])
Q = abs(q)
ϕ = atan(V, q.re)
s = if V < 1e-8
(1+ϕ*ϕ/6 +7*ϕ*ϕ*ϕ*ϕ/360)/Q
else
ϕ/V
end
quat(log(Q), q.im*s, q.jm*s, q.km*s)
end
### NOTE: sqrt(q^2) == q is NOT true for pure quaternions with this computation
sqrt(q::Quaternion) = exp(0.5*log(q))
^(q::Quaternion, w::Quaternion) = exp(w*log(q))
function round(q::Quaternion, rr::RoundingMode=RoundNearest, ri::RoundingMode=rr; kwargs...)
Quaternion(round(real(q), rr; kwargs...),
round(imag(q), ri; kwargs...),
round(jmag(q), ri; kwargs...),
round(kmag(q), ri; kwargs...))
end
float(q::Quaternion{<:AbstractFloat}) = q
float(q::Quaternion) = Quaternion(float(q.re), float(q.im), float(q.jm), float(q.km))
big(::Type{Quaternion{T}}) where {T<:Real} = Quaternion{big(T)}
big(q::Quaternion{T}) where {T<:Real} = Quaternion{big(T)}(q)
## Matrix representations of Quaternions
"""
cmatrix(q::Quaternion)
cmatrix(q::Matrix{Quaternion})
Returns the complex matrix representation of a quaternion.
If we write `q = x + y*j` then this is `[x y; -conj(y) conj(x)]`.
This works if `q` is a quaternion or a matrix of quaternions.
For a matrix of size `(n,m)` this gives a complex matrix of size `(2n, 2m)`.
"""
function cmatrix(q::Quaternion)
[complex( q.re, q.im) complex(q.jm, q.km);
complex(-q.jm, q.km) complex(q.re, -q.im)]
end
function cmatrix(Q::Matrix{Quaternion{T}}) where {T}
[complex.( real.(Q), imag.(Q)) complex.( jmag.(Q), kmag.(Q));
complex.(-jmag.(Q), kmag.(Q)) complex.( real.(Q), -imag.(Q))]
end
"""
qmatrix(c::Matrix{Complex})
Given the complex matrix representation of a quaternion or quaternion matrix,
returns the quaternion representation of the matrix of size `(n÷2, m÷2)`.
This gives an inverse of the `cmatrix` function.
This function does no error checking; make sure your matrix is valid beforehand.
"""
function qmatrix(C::Matrix{Complex{T}}) where {T}
n, m = size(C)
quat.(C[1:n÷2, 1:m÷2], C[1:n÷2, m÷2+1:m])
end
## Array operations on quaternions ##
quat(A::AbstractArray{<:Quaternion}) = A
|
"""Ensemble plan manually split by type moode/theme."""
import json
from dbispipeline.evaluators import FixedSplitEvaluator
from dbispipeline.evaluators import ModelCallbackWrapper
import numpy as np
from sklearn.pipeline import Pipeline
from mediaeval2021 import common
from mediaeval2021.dataloaders.melspectrograms import MelSpectPickleLoader
from mediaeval2021.models.ensemble import Ensemble
from mediaeval2021.models.wrapper import TorchWrapper
dataloader = MelSpectPickleLoader('data/mediaeval2020/melspect_1366.pickle')
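# The 56 target label indices are split into four blocks of 14; the Ensemble below
# receives these splits (presumably training one base estimator per block).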
label_splits = [
np.arange(0, 14, 1),
np.arange(14, 28, 1),
np.arange(28, 42, 1),
np.arange(42, 56, 1),
]
pipeline = Pipeline([
('model',
Ensemble(
base_estimator=TorchWrapper(
model_name='ResNet-18',
dataloader=dataloader,
batch_size=64,
early_stopping=True,
),
label_splits=label_splits,
epochs=100,
)),
])
evaluator = ModelCallbackWrapper(
FixedSplitEvaluator(**common.fixed_split_params()),
lambda model: common.store_prediction(model, dataloader),
)
result_handlers = [
lambda results: print(json.dumps(results, indent=4)),
]
|
%%%%%%%%%%%%%%%%%%%%%%% file template.tex %%%%%%%%%%%%%%%%%%%%%%%%%
%
% This is a general template file for the LaTeX package SVJour3
% for Springer journals. Springer Heidelberg 2010/09/16
%
% Copy it to a new file with a new name and use it as the basis
% for your article. Delete % signs as needed.
%
% This template includes a few options for different layouts and
% content for various journals. Please consult a previous issue of
% your journal as needed.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\RequirePackage{fix-cm}
%\documentclass[twocolumn]{svjour3} % twocolumn
\documentclass[smallextended]{svjour3}       % onecolumn, extended width
\smartqed % flush right qed marks, e.g. at end of proof
\usepackage{graphicx} % adjustbox loads it
\usepackage{epstopdf} % loads eps
\usepackage[normalem]{ulem}
% insert here the call for the packages your document requires
\usepackage{geometry}
\usepackage[export]{adjustbox}
\usepackage[labelfont=bf, labelsep=space]{caption}
%\usepackage{subcaption}
\usepackage{subfig}
\usepackage[utf8]{inputenc}
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{natbib} % enables author year and other citation styles
\usepackage[bookmarks,bookmarksopen,bookmarksdepth=2]{hyperref} % active links
\hypersetup{backref,
colorlinks=true,
citecolor=blue,
linkcolor=blue}
\usepackage{tikz} % graphics,
\usetikzlibrary{fit,positioning} % tikz elements positioning
\usepackage{soul} % annotations
\DeclareMathOperator*{\argmin}{arg\,min}
\newcommand\alberto[1]{\textcolor{red}{#1}}
\newcommand{\Muo}{\boldsymbol{\mu}_{0}^\text{(a)}}
\newcommand{\Ro}{\mathbf{R}_{0}^\text{(a)}}
\newcommand{\invRo}{\left(\mathbf{R}_{0}^\text{(a)}\right)^{-1}}
\newcommand{\Wo}{\mathbf{W}_{0}^\text{(a)}}
\newcommand{\invWo}{\left(\mathbf{W}_{0}^{\text{(a)}}\right)^{-1}}
\newcommand{\betaoa}{\beta_{0}^\text{(a)}}
\newcommand{\muo}{\mu_{0}^\text{(f)}}
\newcommand{\ro}{r_{0}^\text{(f)}}
\newcommand{\invro}{\left(r_{0}^\text{(f)}\right)^{-1}}
\newcommand{\wo}{w_{0}^\text{(f)}}
\newcommand{\invwo}{\left(w_{0}^\text{(f)}\right)^{-1}}
\newcommand{\betaof}{\beta_{0}^\text{(f)}}
\newcommand{\Muk}{\boldsymbol{\mu}_{k}^\text{(a)}}
\newcommand{\Sk}{\mathbf{S}_{k}^\text{(a)}}
\newcommand{\muk}{\mu_{k}^\text{(f)}}
\newcommand{\sk}{s_{k}^\text{(f)}}
\newcommand{\Szu}{\mathbf{S}_{z_u}^\text{(a)}}
\newcommand{\invSzu}{\left(\mathbf{S}_{z_u}^{\text{(a)}}\right)^{-1}}
\newcommand{\szu}{s_{z_u}^\text{(f)}}
\newcommand{\invszu}{\left(s_{z_u}^{\text{(f)}}\right)^{-1}}
\newcommand{\Muzu}{\boldsymbol{\mu}_{z_u}^\text{(a)}}
\newcommand{\muzu}{\mu_{z_u}^\text{(f)}}
%\DeclareGraphicsRule{.tif}{png}{.png}{`convert #1 `dirname #1`/`basename #1 .tif`.png}
%\graphicspath{ {./img/} }
\DeclareMathOperator*{\argmax}{arg\,max}
\journalname{Comput Stat}
\begin{document}
\title{Non-parametric clustering over user features and latent behavioral functions with dual-view mixture models}
%\subtitle{Do you have a subtitle?\\ If so, write it here}
\titlerunning{Non-parametric user clustering with dual-view mixture models} % if too long for running head
\author{Alberto Lumbreras \and
Julien Velcin \and\\
Marie Guégan \and
Bertrand Jouve
}
%\authorrunning{Short form of author list} % if too long for running head
\institute{Alberto Lumbreras \and Marie Guégan \at
Technicolor\\
975 Avenue des Champs Blancs, \\35576 Cesson-Sévigné,\\ France\\
\email{[email protected]}\\
\email{[email protected]}
\and
Julien Velcin \at
Laboratoire ERIC, Université de Lyon,\\
5, avenue Pierre Mendès France, 69676 Bron,\\ France\\
\email{[email protected]}
\and
Bertrand Jouve \at
Université de Toulouse; UT2; FRAMESPA/IMT; 5 allée Antonio Machado, 31058 Toulouse, cedex 9\\
CNRS; FRAMESPA; F-31000 Toulouse\\
CNRS; IMT; F-31000 Toulouse\\ France\\
\email{[email protected]}
}
\date{Received: date / Accepted: date}
% The correct dates will be entered by the editor
\maketitle
\begin{abstract}
We present a dual-view mixture model to cluster users based on their features and latent behavioral functions. Every component of the mixture model represents a probability density over a feature view for observed user attributes and a behavior view for latent behavioral functions that are indirectly observed through user actions or behaviors. Our task is to infer the groups of users as well as their latent behavioral functions. We also propose a non-parametric version based on a Dirichlet Process to automatically infer the number of clusters. We test the properties and performance of the model on a synthetic dataset that represents the participation of users in the threads of an online forum. Experiments show that dual-view models outperform single-view ones when one of the views lacks information.
\keywords{Multi-view clustering \and Model-based clustering \and Dirichlet Process (DP) \and Chinese Restaurant Process (CRP) }
\end{abstract}
%Nonparametric mixture model
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}\label{sec:introduction}
%%%%%%%%%% General problem of science
We consider the problem of clustering users over both their observed features and their latent behavioral functions. The goal is to infer the behavioral function of each user and a clustering of users that takes into account both their features and their behavioral functions. In our model, users in the same cluster are assumed to have similar features \textit{and} behavioral functions, and thus the inference of the clusters depends on the inference of the behavioral functions, and vice versa.
% Motivation/Applications
Latent behavioral functions are used to model individual behaviors such as pairwise choices over products, reactions to medical treatments or user activities in online discussions. The inclusion of latent behavioral functions in clustering methods has several potential applications. On the one hand, it allows richer clusterings in settings where users, besides being described through feature vectors, also perform some observable actions that can be modelled as the output of a latent function. On the other hand, by assuming that users in the same cluster share similar latent functions, it may improve the inference of these functions. In the case, for instance, of recommender systems, this may be used to alleviate the \textit{cold-start problem} ---the fact that we cannot infer user preferences until they have interacted with the system for a while--- if we have a set of features describing user profiles. In that context, a user with low activity will be assigned to the same cluster as users with similar features. Then, the inference of its behavioral function (e.g., its preferences) will be based on the behavioral functions of the users in the same cluster.
%%%% Sneak peak
One of the difficulties in dealing with latent behavioral functions is that, since these functions are latent, they are not representable in a feature-like way and therefore traditional clustering methods are not directly applicable.
Our approach is to think of features and behaviors as two different \textit{views} or representations of users, and to find the partition that is most consensual between the different views. In this sense, this is a multi-view clustering problem \citep{SteffenandTobiasScheffer}. However, the clustering in one of the views depends on latent variables that need to be inferred. In this regard, it has similarities to Latent Dirichlet Allocation when used for clustering (e.g., clustering together documents that belong to the same latent topics).
%%%%%%%%%%%%%% MULTIVIEW REFS
% multiple observed views, one consensual clustering.
Multi-view clustering ranges from \cite{Kumar2011}, which finds a consensual partition by co-training \citep{Mitchell1998}, to \cite{Greene2009a}, which proposes a two-step multi-view clustering that allows both consensual groups and groups that only exist in one of the views. In \cite{Niu2012}, the multi-view approach is presented as the problem of finding multiple cluster solutions for a single description of features.
%%%%%%%%%%%%% OUR METHOD
% Our model
In this paper, we extend the idea of multi-view clustering to deal with cases where one of the views comprises latent functions that are only indirectly observed through their outputs. The proposed method consists of a dual-view mixture model where every component represents two probability densities: one over the space of features and the other over the space of latent behavioral functions. The model allows us to infer both the clusters and the latent functions. Moreover, the inferred latent functions can be used to make predictions about future user behaviors. The main assumption is that users in the same cluster share both similar features and similar latent functions, and that users with similar features and behaviors belong to the same cluster. Under this assumption, we show that this dual-view model requires fewer examples than single-view models to make good inferences.
%%%%%%%%%% VERY RELATED %%%%%%%%%%%
% Clustrering over latent features
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Multiple observed views, one single clustering of functions. (no clustering over views)
The idea of using similarities in one view to enhance inference in the other view is not new. In bioinformatics, \cite{Eisen1998} found evidence suggesting that genes with similar DNA microarray expression data may have similar functions. \cite{Brown2000} exploit this evidence to train a Support Vector Machine (SVM) for each functional class and predict whether an unclassified gene belongs to that class given its expression data as an input. \cite{Pavlidis2002} extend the method of Brown et al. by also exploiting evidence that similar phylogenetic profiles between genes suggest a common functional class \citep{Pellegrini1999a}. Pavlidis et al. propose a multi-view SVM method that uses both types of gene data as input.
More recently, \cite{Cheng2014} applied multi-view techniques to predict user labels in social networks such as LinkedIn (e.g., engineer, professor) or IMDb (e.g., directors, actors). Their method is based on the maximization of an objective function with a co-regularization term that penalizes predictions of different labels for users that are similar either in terms of profile features or in terms of graph connectivity.
In the context of preference learning, \cite{Bonilla2010} also work with the assumption that similar users may have similar preferences, and model this assumption via a Gaussian Process prior over user utility functions. This prior favors utility functions that account for user similarity and item similarity. To alleviate the computational cost of this model, \cite{Abbasnejad2013a} propose an infinite mixture of Gaussian Processes that generates utility functions for groups of users, assuming that users in each community share one single utility function.
%
The main difference between our model and \cite{Abbasnejad2013a} is that their model clusters users based only on their utility functions, while ours considers user features as well. In short, ours is a multi-view clustering model that also infers latent behavioral functions, while theirs is a single-view model focused on the inference of latent functions.
%%% Structure of the paper
The paper is structured as follows: first, we briefly recall mixture models. Second, we present our model as an extension of classic mixture models. The description of the model ends up with a generalization to an infinite number of clusters, which makes the model non-parametric. We finally describe an application to cluster users in online forums and end the paper with experiments on synthetic data to demonstrate the properties of the model.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Model description}\label{sec:modeldescription}
In this section, we introduce our model through three sequential steps. First, we start with a simple mixture model. Second, we extend the mixture model to build a \textit{dual-view} mixture model. And third, we extend the \textit{dual-view} mixture model so that the number of clusters is automatically inferred.
\subsection{Mixture models}\label{sec:mixturemodels}
When a set of observations $x_1,x_2,...x_n$ cannot be properly fitted by a single distribution, we may get a better fit by considering that different subsets of observations come from different component distributions. Then, instead of a unique set of parameters $\boldsymbol{\theta}$ of a single distribution, we need to infer $K$ sets of parameters $\boldsymbol{\theta}_1,...,\boldsymbol{\theta}_K$ of $K$ components and the assignments $z_1,...,z_n$ of individual observations to one of these components. The model can be expressed as follows:
\begin{align}
x_i | z_i, \boldsymbol{\theta}_{z_i} &\sim F(\boldsymbol{\theta}_{z_i})\notag\\
z_i &\sim \text{Discrete}(\boldsymbol{\pi})
\end{align}
where $\boldsymbol{\pi} = (\pi_1,...\pi_K)$ contains the probability of belonging to each component and $F$ is the likelihood function over the observations. In Bayesian settings it is common to add priors over these parameters, resulting in a model such as:
\begin{align}
x_i | z_i, \boldsymbol{\theta}_{z_i} &\sim F(\boldsymbol{\theta}_{z_i})\notag\\
\boldsymbol{\theta}_j &\sim G_0\notag\\
z_i | \boldsymbol{\pi} &\sim \text{Discrete}(\boldsymbol{\pi})\notag\\
\boldsymbol{\pi} &\sim \text{Dirichlet}(\alpha)
\end{align}
where $G_0$ is the \emph{base distribution} and $\alpha$ the \textit{concentration parameter}. Mixture models are mostly used for \textit{density estimation}. Nonetheless, inference over $\mathbf{z}$ allows them to be used as \textit{clustering} methods. In this case, every component is often associated with a cluster.
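Marginalizing the assignments makes the role of $\boldsymbol{\pi}$ explicit: writing, with a slight abuse of notation, $F(x_i | \boldsymbol{\theta}_k)$ for the density of $F(\boldsymbol{\theta}_k)$ evaluated at $x_i$, each observation is drawn from the mixture density
\begin{align*}
p(x_i | \boldsymbol{\pi}, \boldsymbol{\theta}_1,...,\boldsymbol{\theta}_K) = \sum_{k=1}^{K} \pi_k F(x_i | \boldsymbol{\theta}_k).
\end{align*}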
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Dual-view mixture model}\label{sec:dual-view}
In this section, we present an extension of mixture models to account both for features and latent behavioral functions. We denote by \textit{behavioral functions} any function which, if known, can be used to predict the behavior of a user in a given situation. In the context of preference learning \citep{Bonilla2010,Abbasnejad2013a} or recommender systems \citep{Cheung2004}, behavioral functions may indicate hidden preference patterns, such as utility functions over the items, and the observed behavior may be a list of pairwise choices or ratings. In the context of online forums, behavioral functions may indicate the reaction of a user to a certain type of post and the observed behavior may be the set of replies to different posts. In general, behavioral functions are linked to observed behaviors through a likelihood function $p(y | f)$ where $y$ represents an observation and $f$ the latent behavioral function.
Let $\mathbf{a}_u$ be the set of (observed) features of user $u$. Let $f_u$ be a (latent) function of user $u$. Let $y_u$ be the (observed) outcome of $f_u$. By slightly adapting the notation from the last section, we can describe the variables of our dual model as follows:
\begin{align}
\mathbf{a}_u | z_u, \boldsymbol{\theta}_{z_u}^{\text{(a)}} &\sim F^{\text{(a)}}(\boldsymbol{\theta}_{z_u}^{\text{(a)}})\notag\\
f_u | z_u, \boldsymbol{\theta}_{z_u}^{\text{(f)}} &\sim F^{\text{(f)}}(\boldsymbol{\theta}_{z_u}^{\text{(f)}})\notag\\
y_u | f_u &\sim p(y_u | f_u)\notag\\
\boldsymbol{\theta}_j^{\text{(a)}} &\sim G_0^{\text{(a)}}\notag\\
\boldsymbol{\theta}_j^{\text{(f)}} &\sim G_0^{\text{(f)}}\notag\\
z_u | \boldsymbol{\pi} &\sim \text{Discrete}(\boldsymbol{\pi})\notag\\
\boldsymbol{\pi} &\sim \text{Dirichlet}(\alpha)
\end{align}
where we use the superscript $(a)$ for elements in the \textit{feature view} and the superscript $(f)$ for elements in the latent-function view, henceforth the \textit{behavior view}. Otherwise, the structure is the same except for $y_u$, which represents the set of observed behaviors for user $u$. The corresponding Probabilistic Graphical Model is shown in Figure \ref{fig:general}.
\begin{figure}
\center
\scalebox{1}{
\begin{tikzpicture}
\tikzstyle{main}=[circle, minimum size = 12mm, thick, draw =black!80, node distance = 8mm]
\tikzstyle{connect}=[-latex, thick]
\tikzstyle{box}=[rectangle, draw=black!100]
%% Dirichlet
\node (alpha) {$\alpha$};
\node [main](pi) [above=0.5cm of alpha] {$\boldsymbol{\pi}$};
\node [main](z) [above=of pi] {$z_u$};
%% Clustering
\node[main, fill = black!10] (a_u) [left=0.5cm of z] {$\mathbf{a}_u$};
\node[main] (theta_ar) [above=1cm of a_u] {$\boldsymbol{\theta_k^{\text{(a)}}}$};
%% Prediction
\node[main] (f_u) [right=0.5cm of z] {$\mathbf{f_u}$};
\node[main] (theta_fr) [above=1cm of f_u] {$\boldsymbol{\theta}_k^{\text{(f)}}$};
\node[main, fill = black!10] (y) [right=0.6cm of f_u] {$\boldsymbol{y}_u$};
\node [main](G_a0) [above=1cm of theta_ar] {$G_0^{\text{(a)}}$};
\node [main](G_f0) [above=1cm of theta_fr] {$G_0^{\text{(f)}}$};
\path
%% DP
(alpha) edge [connect] (pi)
(pi) edge [connect] (z)
(z) edge [connect] (f_u)
(z) edge [connect] (a_u)
%% Clustering
(G_a0) edge [connect] (theta_ar)
(theta_ar) edge [connect] (a_u)
%% Prediction
(G_f0) edge [connect] (theta_fr)
(theta_fr) edge [connect] (f_u)
(f_u) edge [connect] (y);
% User draws
\node[rectangle, inner sep=0mm, fit= (a_u) (z) (f_u), label=below right:U, yshift=1mm, xshift=34mm] {};
\node[rectangle, inner sep=3mm,draw=black!100, fit= (z) (f_u) (a_u) (y)] {};
% Mixture specific parameters
\node[rectangle, inner sep=1mm, fit= (theta_ar) (theta_fr), label=above left:K, yshift=-17mm, xshift=34mm] {};
\node[rectangle, inner sep=3mm, draw=black!100, fit= (theta_fr) (theta_ar)] {};
\end{tikzpicture}
}
\caption{Graphical model of the generative process for $U$ users and $K$ clusters. Shaded circles represent observations and white circles represent latent variables. Views are connected through the latent assignments $\mathbf{z}$. A user $u$ draws a feature vector $\mathbf{a}_u$ and a behavior $\mathbf{f}_u$ from the cluster indicated by $z_u$ (the $u$-th element of $\mathbf{z}$).}
\label{fig:general}
\end{figure}
Every component has two distributions: one for features and one for latent behavioral functions. Latent behavioral functions are not directly observable, but they may be inferred through some observations if we have a likelihood function of observations given the latent functions.
The model can also be expressed in terms of a generative process:
\begin{itemize}
\item For every component $k$:
\begin{itemize}
\item Draw feature and function parameters from their base distributions $\boldsymbol{\theta}_k^{\text{(a)}} \sim G_0^{\text{(a)}}$ and $ \boldsymbol{\theta}_k^{\text{(f)}} \sim G_0^{\text{(f)}}$.
\end{itemize}
\item Draw the mixture proportions $\boldsymbol{\pi} \sim \text{Dirichlet}(\alpha)$.
\item For every user $u$:
\begin{itemize}
\item Draw a component assignment $z_u \sim \text{Multinomial}(\boldsymbol{\pi})$.
\item Draw user features $\mathbf{a}_u \sim F^{\text{(a)}}(\boldsymbol{\theta}_{z_u}^{\text{(a)}})$.
\item Draw a user latent function $f_u \sim F^{\text{(f)}}(\boldsymbol{\theta}_{z_u}^{\text{(f)}})$.
\item Draw a set of observed behaviors $y_u \sim p(y_u | f_u)$.
\end{itemize}
\end{itemize}
Left and right branches of Figure \ref{fig:general} correspond to the \textit{feature view} and the \textit{behavior view}, respectively. Note that every component contains two sets of parameters, one for features and one for behaviors, so that the two views can be generated from the same component. This encodes our prior assumption that users who belong to the same cluster should be similar in both views.
% z decisions
Given the user assignments $\mathbf{z}$, variables in one view are conditionally independent of variables in the other view. That means their inferences can be considered separately. However, inference of $\mathbf{z}$ uses information from both views. The conditional probability of $\mathbf{z}$ given all the other variables is proportional to the product of its prior and the likelihoods of the two views:
\begin{align}
p(\mathbf{z} | \cdot)
\propto
p(\mathbf{z} | \boldsymbol{\pi})
p(\mathbf{a} | \boldsymbol{\theta^{\text{(a)}}}, \mathbf{z})
p(\mathbf{f} | \boldsymbol{\theta^{\text{(f)}}}, \mathbf{z})
\label{eq:z_posterior}
\end{align}
\sloppy
The information given by each view is conveyed through the likelihood factors $p(\mathbf{a} | \boldsymbol{\theta^{\text{(a)}}}, \mathbf{z})$ and $p(\mathbf{f} | \boldsymbol{\theta^{\text{(f)}}}, \mathbf{z})$. The ratio between the conditional probability of a partition $\mathbf{z}$ and the conditional probability of a partition $\mathbf{z'}$ is:
\begin{align}
\frac
{p(\mathbf{z}| \cdot)}
{p(\mathbf{z'}| \cdot)}
=
\frac{
p(\mathbf{z} | \boldsymbol{\pi})
}{
p(\mathbf{z'} | \boldsymbol{\pi})
}
\frac{
p(\mathbf{a} | \boldsymbol{\theta^{\text{(a)}}}, \mathbf{z})
}{
{p(\mathbf{a} | \boldsymbol{\theta^{\text{(a)}}}, \mathbf{z'})}
}
\frac{
p(\mathbf{f} | \boldsymbol{\theta^{\text{(f)}}}, \mathbf{z})
}{
{p(\mathbf{f} | \boldsymbol{\theta^{\text{(f)}}}, \mathbf{z'})}
}
\label{eq:z_posterior_ratio}
\end{align}
\fussy
where the contribution of each view depends on how much more likely $\mathbf{z}$ is over the other assignments in that view. An extreme case would be a uniform likelihood in one of the views, meaning that all partitions $\mathbf{z}$ are equally likely. In that case, the other view leads the inference.
The two views provide reciprocal feedback to each other through $\mathbf{z}$. This means that if one view is more confident about a given $\mathbf{z}$, it will not only have more influence on $\mathbf{z}$ but will also force the other view to reconsider its beliefs and adapt its latent parameters to fit the suggested $\mathbf{z}$.
Note also that inference of latent behavioral functions may be used for prediction of future behaviors.
%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Infinite number of clusters}\label{sec:CRP}
So far, we have considered the number of components $K$ to be known. Nonetheless, if we let $K \rightarrow \infty$ and marginalize over the mixture weights $\boldsymbol{\pi}$, it becomes a non-parametric model with a Dirichlet Process (DP) based on a Chinese Restaurant Process (CRP) prior over the user assignments, which automatically infers the number of \textit{active} components (see the Appendix for the full derivation). Since we integrate out $\boldsymbol{\pi}$, the user assignments are no longer independent.
Instead, the probability of a user $u$ to be assigned to a non-empty (active) component $k$, given the assignments of all other users $\mathbf{z}_{-u}$, is:
\begin{align}
p(z_u = k | \mathbf{z}_{-u}) &\propto n_k \qquad \text{ for } k = 1,...,c
\end{align}
%
where $c$ denotes the number of non-empty components and $n_k$ is the number of users already assigned to the $k$-th component. The probability of assigning a user $u$ to an empty (non-active) component, which would be labelled $c+1$, is:
\begin{align}
p(z_u = k | \mathbf{z}_{-u}) &\propto \alpha \qquad \text{ for } k=c+1
\end{align}
%
These two equations reflect a generative process that assigns users to clusters in a \textit{rich get richer} manner. The more users in a component, the more attractive this component becomes. Empty components also have a chance of being filled.
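As an illustration, suppose that $\alpha = 1$ and that two components are active, with $n_1 = 3$ and $n_2 = 1$ users. The next user is then assigned to the first component with probability $3/5$, to the second component with probability $1/5$, and to a new (empty) component with probability $1/5$.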
Despite the appearance of these equations, the idea behind the inference of $\mathbf{z}$ remains the same. The only differences between the finite and the infinite cases are the number of components and the probabilities to be assigned to each component.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Application to role detection in online forums}\label{sec:forums}
% From general case to our particular use case.
The above model provides a general framework that can be adapted to many scenarios. In this section, we apply our model to the clustering of users in online forums. Clustering users in online communities may be used for latent \textit{role detection}. Although clustering based on user features may provide interesting insights, we think that the notion of \textit{role} should include information that allows behaviors to be predicted. After all, this is what roles are useful for. We expect that a person holding a role behaves according to that role.
% feature view and behavior view
Let us specify the two views of our model for this scenario, given $U$ users who participate in a forum composed of $T$ discussion threads. For the feature view, we describe every user through a feature vector $\mathbf{a}_u=(a_{u1}, a_{u2},...,a_{uD})^T$ that will typically contain features such as centrality metrics or number of posts. For the behavior view, we define a latent behavioral function that we call \textit{catalytic power} and denote by $b_u$, which represents the ability of a user to promote long discussions; we refer to $\mathbf{b}$ as the vector of all user catalytic powers. Let the \textit{participation vector} of the discussion thread $\mathbf{p}_t = (p_{1t},...,p_{Ut})^T$ be a binary array indicating which users participated among the first $m$ posts of the thread $t$. Assuming that the dynamic of a discussion is strongly conditioned by the first participants, we model the final length of a thread $y_t$:
\begin{equation*}
y_t \sim \mathcal{N}(\mathbf{p}_t^T\mathbf{b}, s_{\text{y}}^{-1})
\end{equation*}
where $\mathbf{p}_t^T\mathbf{b}$ is the cumulative catalytic power of the users who participated in its first $m$ posts and $s_{\text{y}}$ represents the precision (inverse of the variance) of the unknown level of noise.
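For instance, if only users $u$ and $v$ appear among the first $m$ posts of thread $t$, then $\mathbf{p}_t^T\mathbf{b} = b_u + b_v$ and the expected length of the thread is simply the sum of their catalytic powers.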
% What does the model infer
If the assumption that there exist groups of users sharing similar features and similar catalytic power holds, then our model will not only find a clustering based on features and behaviors (catalytic powers), but it will also exploit feature information to infer catalytic powers and, vice versa, the inferred catalytic powers will be treated by the model as an extra feature dimension.
% Clarification: little modification from reference model
Note that, unlike the model presented in Figure \ref{fig:general}, the observed output $y_t$ is common to all users who participated in the first $m$ posts of thread $t$. Moreover, $y_t$ depends on the observed participations $\mathbf{p_t}$. We also introduced the noise factor $s_{\text{y}}$, which was not explicitly present in the general model. The graphical model would be similar to that of Figure~\ref{fig:general} but with the thread lengths $\mathbf{y}$, the participation matrix $\mathbf{P}$ and the noise $s_{\text{y}}$ outside the user plate. In the remainder of this section we provide more details about the components of the two views.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Feature view}\label{sec:forums_features}
In this section we specify the component parameters $\boldsymbol{\theta}_k^{\text{(a)}}$ and the base distribution $G_0^{\text{(a)}}$ of the feature view. Our choice of priors follows that of the Infinite Gaussian Mixture Model (IGMM) as described by \cite{Rasmussen2000a} and \cite{Gorur2010}.
The feature vectors are assumed to come from a mixture of Gaussian distributions:
\begin{align}
\boldsymbol{a}_u &\sim \mathcal{N}\left(\Muzu, \invSzu\right)
\end{align}
where the mean $\Muzu$ and the precision $\Szu$ are component parameters common to all users assigned to the same component. The component parameters are given Normal and Wishart priors:
\begin{align}
\Muk &\sim \mathcal{N}\left(\Muo, \invRo\right)\\
\Sk &\sim \mathcal{W}\left( \betaoa, \left( \betaoa \Wo \right)^{-1}\right)
\label{eq:wishart_sar}
\end{align}
where the mean $\Muo$, the precision $\Ro$, the covariance $\Wo$, and the degrees of freedom $\betaoa$ are hyperparameters common to all components. The hyperparameters themselves are given non-informative priors centered at the observed data:
\begin{align}
\Muo &\sim \mathcal{N}(\boldsymbol{\mu_a, \Sigma_a}) \\
\Ro &\sim \mathcal{W}(D, (D \boldsymbol{\Sigma_a})^{-1})\\
\Wo &\sim \mathcal{W}(D, \frac{1}{D} \boldsymbol{\Sigma_a})\\
\frac{1}{\betaoa - D + 1} &\sim \mathcal{G}(1, \frac{1}{D}) \label{eq:gamma_ba0}
\end{align}
where $\boldsymbol{\mu_a}$ and $\boldsymbol{\Sigma_a}$ are the mean and covariance of all the features vectors and $\mathcal{G}$ is the Gamma distribution.
Note that the expectation of a random matrix drawn from a Wishart distribution $X \sim \mathcal{W}(\upsilon, W)$ is $\mathbb{E}[\mathbf{X}] = \mathbf{\upsilon W}$. Our parametrization of the Gamma distribution corresponds to a one-dimensional Wishart. Its density function is therefore given by $\mathcal{G}(\alpha, \beta) \propto
x^{\alpha/2-1} \exp(-\frac{x}{2\beta})$
and the expectation of a random scalar drawn from a Gamma distribution $x \sim \mathcal{G}(\alpha, \beta)$ is ${\mathbb{E}[x] = \alpha\beta}$.
As pointed out in \cite{Gorur2010}, this choice of hyperparameters, which is equivalent to scaling the data and using unit priors, makes the model invariant to translations, rotations, and rescaling of the data.
% Why these priors?
Conjugate priors are chosen whenever possible to make the posteriors analytically accessible. As for $\betaoa$, the prior in Equation \ref{eq:gamma_ba0} guarantees that the degrees of freedom in the Wishart distribution in Equation~\ref{eq:wishart_sar} are greater than or equal to $D-1$. The density $p(\betaoa)$ is obtained by a simple transformation of variables (see Appendix).
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Behavior view}\label{sec:forums_behaviors}
In this section we specify the component parameters $\boldsymbol{\theta}_k^{\text{(f)}}$ and the base distribution $G_0^{\text{(f)}}$ of the behavior view. Our choice corresponds to a Bayesian linear regression where coefficients are drawn not from a single multivariate Gaussian but from a \emph{mixture} of one-dimensional Gaussians.
The thread lengths are assumed to come from a Gaussian distribution whose mean is determined by the catalytic power of users who participated in the first posts and whose variance represents the unknown level of noise:
\begin{align}
y_t &\sim \mathcal{N}(\mathbf{p}_t^T \mathbf{b}, s_{\text{y}}^{-1})
\end{align}
where the precision $s_{\text{y}}$ is given a Gamma prior centered at the sample precision $\sigma_0^{-2}$:
\begin{align}
s_{\text{y}} \sim \mathcal{G}(1,\sigma_{\text{0}}^{-2})
\end{align}
The power coefficients $b_u$ come from a mixture of Gaussians:
\begin{align}
b_u &\sim \mathcal{N}\left(\muzu, \invszu\right)
\end{align}
where the mean $\muzu$ and the precision $\szu$ are component parameters common to all users assigned to the same component $z_u$. The component parameters are given Normal and Gamma priors:
\begin{align}
\muk &\sim \mathcal{N}\left(\muo, \invro \right)\\
\sk &\sim \mathcal{G}\left( \betaof, \left(\betaof\wo\right)^{-1}\right)
\end{align}
where the mean $\muo$, the precision $\ro$, the variance $\wo$, and the degrees of freedom $\betaof$ are hyperparameters common to all components. Because the coefficients are not observed, we cannot center the hyperparameters on the data as we did in the feature view. Instead, we use their (regularized) Maximum Likelihood Estimates, computed as $\hat{\mathbf{b}} = (\mathbf{P}\mathbf{P}^T + \lambda\mathbf{I})^{-1} \mathbf{P} \mathbf{y}$, with a regularization parameter $\lambda=0.01$, where $\mathbf{P} = \{\mathbf{p}_1,...,\mathbf{p}_T\}$ is the participation matrix. Then the hyperparameters are given non-informative priors centered at $\hat{\mathbf{b}}$:
\begin{align}
\muo &\sim \mathcal{N}(\mu_{\hat{b}}, \sigma_{\hat{b}}^2) \\
\ro &\sim \mathcal{G}(1, \sigma_{\hat{b}}^{-2})\\
\wo &\sim \mathcal{G}(1, \sigma_{\hat{b}}^{2})\\
\frac{1}{\betaof}&\sim \mathcal{G}(1, 1)
\end{align}
where $\mu_{\hat{b}}$ and $\sigma_{\hat{b}}^2$ are the mean and the variance of the Maximum Likelihood Estimators $\mathbf{\hat{b}}$ of the coefficients. Note that this choice is data-driven and, at the same time, the risk of overfitting is reduced since hyperparameters are high in the model hierarchy.
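The estimate $\hat{\mathbf{b}}$ above is simply the ridge (regularized least-squares) solution of the stacked linear model implied by the thread-length likelihood,
\begin{align*}
\mathbf{y} = \mathbf{P}^T\mathbf{b} + \boldsymbol{\varepsilon},
\qquad
\hat{\mathbf{b}} = \argmin_{\mathbf{b}} \lVert \mathbf{y}-\mathbf{P}^T\mathbf{b} \rVert^2 + \lambda \lVert \mathbf{b} \rVert^2
= (\mathbf{P}\mathbf{P}^T + \lambda\mathbf{I})^{-1}\mathbf{P}\mathbf{y},
\end{align*}
where $\boldsymbol{\varepsilon}$ collects the Gaussian noise terms with precision $s_{\text{y}}$.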
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Shared parameters}\label{sec:sharedparams}
As for the common variables $\mathbf{z}$ and $\alpha$, $\mathbf{z}$ is given a Chinese Restaurant Process prior:
\begin{align}
\label{eq:prior_z_CRP}
p(z_u = k | \mathbf{z}_{-u}, \alpha) \propto
\begin{cases}
n_{k} & \text{for } k=1,...,c\\
\alpha & \text{for } k=c+1
\end{cases}
\end{align}
where $c$ denotes the number of non-empty components before the assignment of $z_u$ and $n_k$ is the number of users already assigned to the $k$-th component. The concentration parameter $\alpha$ is given a vague inverse gamma prior:
\begin{align*}
\frac{1}{\alpha} \sim \mathcal{G}(1,1)
\end{align*}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Inference}\label{sec:inference}
The latent parameters of our model can be inferred by Gibbs sampling, sequentially sampling each variable conditional on all the others. The conditional distributions are detailed in the appendices. A single iteration of the Gibbs sampler goes as follows:
\begin{itemize}
\item Sample component parameters $\Sk, \Muk$ conditional on the indicators $\mathbf{z}$ and all the other variables of the two views.
\item Sample hyperparameters $\Muo$, $\Ro$, $\Wo$, $\betaoa$ conditional on the indicators $\mathbf{z}$ and all the other variables of the two views.
\item Sample component parameters $\sk, \muk$ conditional on the indicators $\mathbf{z}$ and all the other variables of the two views.
\item Sample hyperparameters $\muo, \ro, \wo, \betaof$ conditional on the indicators $\mathbf{z}$ and all the other variables of the two views.
\item Sample coefficients $\mathbf{b}$ conditional on the indicators $\mathbf{z}$ and all the other variables of the two views.
\item Sample $s_{\text{y}}$ conditional on the indicators $\mathbf{z}$ and all the other variables of the two views.
\item Sample indicators $\mathbf{z}$ conditional on all the variables of the two views.
\end{itemize}
Since we use conjugate priors for almost all the variables, their conditional probabilities given all the other variables are analytically accessible. The degrees of freedom $\betaoa, \betaof$ and the concentration parameter $\alpha$ can be sampled by Adaptive Rejection Sampling \citep{Gilks1992}, which exploits the log-concavity of $p(\log \beta_0^{\text{(a)}} | \cdot)$, $p(\log \beta_0^{\text{(f)}} | \cdot)$ and $p(\log \alpha | \cdot)$ (see Appendix). As for the sampling of $\mathbf{z}$, the conditional probability of assigning user $u$ to an active component $k$ is proportional to the prior times the likelihoods:
\begin{align}
p(z_u = k | \mathbf{z_{-u}}, \alpha, \cdot)
%\\&
\propto
n_{k}
p(\mathbf{a}_{u} |\Muk, \Sk)
p({b_u} | \muk, \sk) \qquad \text{for } k=1,...,c
\end{align}
and for the conditional probability of assigning $z_u$ to a non-active component:
\begin{align}
p(z_u = k | \mathbf{z}_{-u}, \alpha, \cdot)
%\\
\propto&
\;\alpha
\int
p(\mathbf{a}_{u} |\Muk, \Sk) p(\Muk) p(\Sk)
\text{d}\Muk
\text{d}\Sk\notag\\
&\times
\int
p(b_u | \muk, \sk)
p(\muk)p(\sk)
\text{d}\muk
\text{d}\sk \qquad \text{for } k=c+1
\end{align}
Unfortunately, these integrals are not analytically tractable because the product of the factors does not give a familiar distribution. \cite{Neal2000} proposes to create $m$ auxiliary empty components with parameters drawn from the base distribution, and then to compute the likelihoods of $\mathbf{a}$ and $\mathbf{b}$ given those parameters. The higher $m$ is, the closer we get to the real integral and the less autocorrelated the cluster assignments are. However, the equilibrium distribution of the Markov Chain is exactly correct for any value of $m$. To speed up the computations, $m$ is usually small. For our experiments, we chose $m=3$. That is, we generate $3$ additional empty tables, each with its own parameters $\boldsymbol{\mu'}, \mathbf{S'}, \mu', s'$. We also ran some experiments with $m=4$ and $m=5$ without observing significant differences in either the clustering or the predictions, while the computational time increased significantly. See \cite{Neal2000} for a more systematic study of the effect of $m$.
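Note that, in this auxiliary-component scheme, each of the $m$ empty components receives a prior weight proportional to $\alpha/m$, so that the total prior mass for opening a new component remains $\alpha$, in agreement with Equation~\ref{eq:prior_z_CRP}.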
\subsection{Predictive distribution}
We are also interested in the ability of the model to predict new thread lengths. The posterior predictive distribution over the length of a new thread is:
\begin{align}
p(y_* | \mathbf{p_*}, & \mathbf{P}, \mathbf{y}) =
%\\ &
\int_{\boldsymbol{\theta}}
p(y_* | \mathbf{p_*}, \boldsymbol{\theta})
p(\boldsymbol{\theta} | \mathbf{y}, \mathbf{P}) \text{d}\boldsymbol{\theta}
\end{align}
where $\mathbf{p_*}$ is the participation vector of the new thread, and $y_*$ its predicted length.
If we have samples $\boldsymbol{\theta}^{(1)},...,\boldsymbol{\theta}^{(N)}$ from the posterior $p(\boldsymbol{\theta}| \mathbf{y}, \mathbf{P})$, we can use them to approximate the predictive distribution:
\begin{align}
p(y_* | \mathbf{p_*}, & \mathbf{P}, \mathbf{y})
\approx
\frac{1}{N}
\sum_{i=1}^N
p(y_* | \mathbf{p_*}, \boldsymbol{\theta}^{(i)})
\label{eq:predictive_posterior_approx}
\end{align}
where $\boldsymbol{\theta}^{(i)}$ denotes the $i$-th sample of $\mathbf{b}$ and $s_{\text{y}}$.
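Each factor in this sum follows directly from the thread-length likelihood of Section~\ref{sec:forums}:
\begin{align*}
p(y_* | \mathbf{p_*}, \boldsymbol{\theta}^{(i)}) = \mathcal{N}\left(y_* ; \; \mathbf{p_*}^T\mathbf{b}^{(i)}, \; \big(s_{\text{y}}^{(i)}\big)^{-1}\right)
\end{align*}
where $\mathbf{b}^{(i)}$ and $s_{\text{y}}^{(i)}$ denote the coefficients and the noise precision in the $i$-th posterior sample.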
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Experiments}\label{sec:experiments}
We generated three scenarios to study in which situations dual-view models outperform single-view ones. The data reproduces the scenario of online forums presented in Section~\ref{sec:forums}. In the first scenario, users belong to five clusters and those who belong to the same cluster in one view also share the same cluster in the other view (\textit{agreement between views}). In the second scenario, users belong to five clusters in the behavior view but two of the clusters completely overlap in the feature view (\textit{disagreement between views}). In order to reproduce a more realistic clustering structure, in the last scenario the user features are taken from the \textit{iris} dataset.
We will see in Section~\ref{sec:cost} that the main bottleneck of the model is the sampling of coefficients $b_1,...,b_U$ since they are obtained by sampling from a $U$-dimensional Gaussian distribution that requires, at each Gibbs iteration, inverting a $U\times U$ matrix to get its covariance. This issue would disappear if the inference of the behavioral function parameters for a user were independent from the parameters of the other users. In this paper, we use the \textit{iris} dataset to demonstrate the properties of the model as a whole, without making any statement on the convenience of the presented behavioral functions.
%%%%%%%%%%%%%%%%%%
\subsection{Compared models}
We compared two dual-view models and one single-view model. We call them \texttt{dual-fixed}, \texttt{dual-DP} and \texttt{single}. The \texttt{dual-fixed} corresponds to the present model where the number of clusters is set to the ground truth (five clusters). The \texttt{dual-DP} corresponds to the present model where the number of clusters is also inferred (Section \ref{sec:CRP}). The \texttt{single} model corresponds to a Bayesian linear regression over the coefficients $\mathbf{b}$, which is equivalent to the behavior view specified in Section \ref{sec:forums_behaviors} where the number of clusters is set to one (that is, no clusters at all) and therefore there is no information flowing from the feature view to the behavior view; this model can only learn the latent coefficients $\mathbf{b}$.
Table \ref{tab:models} presents these three models as well as other related models that appear when blinding the models to one of the views. Note that we left out of the analysis those models that use clustering but are blind to one view. The first two of these (IGMM and GMM) are regular clustering methods over feature vectors; we discarded them because they do not make inferences on latent behaviors. The last two (we call them latent-IGMM and latent-GMM) are Bayesian linear regressions where coefficients are assumed to come from a mixture of Gaussians; because these are in practice very similar to a simple Bayesian linear regression (they can be seen as Bayesian linear regressions with priors that tend to create groups of coefficients), we chose to benchmark only against the \texttt{single} model.
Posterior distributions of parameters are obtained by Gibbs sampling. We used the \texttt{coda} package \citep{coda, coda2015} in R \citep{R} to examine the traces of the Gibbs sampler. For the convergence diagnostics, we used Geweke's test available in the same package. After some initial runs and examinations of the chains to see how long it took to converge to the stationary distribution for the dual models, we observed that convergence for all the models is usually reached before 5,000 samples. Since we run a large number of automatized experiments, we set a conservative number of 30,000 samples for every run, from which the first 15,000 are discarded as burn-in samples.
For the first two experiments we initialized our samplers with all users in the same cluster. For the \textit{iris} experiment we used the result of a k-means with 10 clusters over the feature view. We did not systematically benchmark the two initialisation strategies. Nevertheless, the second strategy is, in general, more convenient in order to reach the target distribution within fewer iterations.
\begin{table}[ht]
\caption{Compared and related models. Both single models are the same since if the number of clusters is fixed to one they cannot use the feature view. The row marked as $-$ corresponds to a model that has no interest in this context since it simply finds the Gaussian distribution that best fits the observed features and makes neither clustering nor predictions.}
\begin{center}
\tabcolsep = 1\tabcolsep
\begin{tabular}{lccc}
\hline\hline
& features & behaviors & clusters\\
\hline
\textbf{dual-DP} & yes & yes & $\infty$\\
\textbf{dual-fixed} & yes & yes & fixed \\
\textbf{single} & yes & yes & 1 \\
IGMM & yes & no & $\infty$ \\
GMM & yes & no & fixed \\
- & yes & no & 1 \\
latent-IGMM & no & yes & $\infty$ \\
latent-GMM & no & yes & fixed \\
\textbf{single} & no & yes & 1 \\
\hline
\end{tabular}
\label{tab:models}
\end{center}
\end{table}
%%%%%%%%
\subsection{Metrics}
We used two metrics for evaluation, one for clustering (within dual models) and one for predictions of thread lengths (within the three models).
\paragraph{Metric for clustering:}
Clustering by mixture models suffers from an identifiability problem. The posterior distribution of $\mathbf{z}$ has $k!$ reflections corresponding to the $k!$ possible relabellings of the $k$ components. Due to this, different MCMC samples of $\mathbf{z}$ may come from different reflections, making it hard to average the samples. A common practice is to summarize the pairwise posterior probability matrix of clustering, denoted by $\hat{\pi}$, which indicates the probability of every pair of users being in the same component (no matter the label of the component). In order to obtain a full clustering $\mathbf{z}$ from $\hat{\pi}$, \cite{Dahl2006} proposes a \textit{least-squares model-based clustering}, which consists of choosing as $\mathbf{z}$ the sample $\mathbf{z}^{(i)}$ whose corresponding pairwise matrix has the smallest least-squares distance to $\hat{\pi}$:
\begin{align}
\mathbf{z}_{LS} = \argmin_{\mathbf{z} \in \{\mathbf{z}^{(1)},..,\mathbf{z}^{(N)}\}} \sum_{i=1}^U \sum_{j=1}^U \left(\delta_{i,j}(\mathbf{z}) - \hat{\pi}_{i,j}\right)^2
\end{align}
where $\delta_{i,j}(\mathbf{z})$ indicates whether $i$ and $j$ share the same component in $\mathbf{z}$.
Finally, to assess the quality of the proposed clustering $\mathbf{z}_{LS}$ we use the \textit{Adjusted Rand Index}, a measure of pair agreements between the true and the estimated clusterings.
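The index compares the observed pair agreements (the Rand Index, RI) with their expected value under a random clustering with the same cluster sizes,
\begin{align*}
\text{ARI} = \frac{\text{RI} - \mathbb{E}[\text{RI}]}{\max(\text{RI}) - \mathbb{E}[\text{RI}]},
\end{align*}
so that a value of $1$ indicates a perfect recovery of the true clustering while values close to $0$ indicate a clustering no better than chance.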
%%%%%%
\paragraph{Metric for predictions:}
For predictions, we use the \textit{negative loglikelihood},
which measures how likely the lengths are according to the predictive posterior:
\begin{align}
p(y^{\text{test}} | \mathbf{p}_t^{\text{test}}, \mathbf{P}^{\text{train}}, \mathbf{y}^{\text{train}})
\end{align}
and that can be approximated from Equation~\ref{eq:predictive_posterior_approx}. Negative loglikelihoods are computed on test sets of 100 threads.
%%%%%%
\subsection{Agreement between views}
To set up the first scenario, for a set of $U$ users and five clusters we generated an assignment $z_u$ to one of the clusters so that the same number of users is assigned to every cluster.
% Generate features
Once all assignments $\mathbf{z}$ had been generated, we generated the data for each of the views. For the feature view, every user was given a two-dimensional feature vector $\mathbf{a}_u=(a_{u_1}, a_{u_2})^T$ drawn independently from:
\begin{equation}
\mathbf{a}_u \sim \mathcal{N}\left(\boldsymbol{\mu}_{z_u}, \Sigma_a \right)
\end{equation}
where
$\mu_{z_u}=\left(\cos(2\pi \frac{z_u}{5}), \sin(2\pi \frac{z_u}{5})\right)^T \text{for } z_u=1,..,5$
(see Figure~\ref{fig:data_agreement}). For the behavior view, every user was given a coefficient drawn independently from:
\begin{equation}
b_u \sim \mathcal{N}(-50 +25z_u, \sigma^2)
\qquad\qquad \text{for } z_u=1,..,5
\end{equation}
where coefficients for users in the same cluster are generated from the same Normal distribution and the means of these distributions are equidistantly spread in a [-200,200] interval (see Figure~\ref{fig:data_agreement}).
% Participations
To simulate user participations in a forum we generated, for each user $u$, a binary vector $\mathbf{p}_u = (p_{u1},...,p_{uT})^T$ of length $T$ that represents in which threads the user participated among the first $m$ posts. We supposed that each user participated, on average, in half of the threads:
\begin{equation}
p_{ut} \sim \text{Bernoulli}(0.5)
\end{equation}
% Thread lengths
Finally, we assumed that the final length of a thread is a linear combination of the coefficients of users who participated among the first posts:
\begin{equation}
y_t \sim \mathcal{N}(\mathbf{p}_t^T \mathbf{b}, \sigma_\text{y})
\end{equation}
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{Fig2_data_agreement_bw}
\caption{Dataset for agreement between the views. User features (left) and user coefficients (right). Every group (shape) of users has a well differentiated set of features and coefficients.}
\label{fig:data_agreement}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{Fig3_results_agreement_bw}%
\caption{Results for agreement between the views. Comparison of models under different threads/users ratios (50 users and variable number of threads). Means and standard errors over 5 runs.}
\label{fig:results_agreement}
\end{figure}
If both views agree and there is enough data for both of them, we expect dual models to find the true clusters and true latent coefficients, and the single model to also find the true latent coefficients. In this case, the feature view brings no competitive advantage when there is enough information in the behavior view (and conversely, dual models should not outperform simple IGMM and GMM for the clustering task since there is enough information in the feature view).
%%%%
On the contrary, when one of the views lacks information, then dual-view models should outperform single-view ones. In our model, the lack of information may come either from having too few threads to learn from or from having a high ratio of users versus threads since we have too many user behavior coefficients to learn.
Figure \ref{fig:results_agreement} shows how increasing the threads vs users ratio affects the accuracy of each model. When the number of threads is too low with respect to the number of users, neither view has enough information and inference is difficult for the three models. Yet, dual-view models need fewer threads than the single model to make good inferences. When the number of threads is high enough, the three models tend to converge.
The number of users and threads in the experiments ranges from 10 to 100 users and from 10 to 100 threads. We avoided larger numbers to prevent the experiments from taking too long. 30,000 iterations of the Gibbs sampler described in Section~\ref{sec:inference} for 50 users and 100 threads take around three hours on an Intel Core i7-4810MQ @2.80GHz. Nevertheless, the ratio of users to threads remains realistic. In the real forums that we analyzed from \texttt{www.reddit.com}, a common ratio is 1/10 for a window of one month.
%%%
\subsection{Disagreement between views}
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{Fig4_data_disagreement_bw}
\caption{Dataset for disagreement between the views. User features (left) and user coefficients (right). Two of the groups (shapes) of users have similar features but different coefficients.}
\label{fig:data_disagreement}
\end{figure*}
\begin{figure}
\centering
\subfloat[]{\includegraphics[width=0.33\textwidth]
{Fig5a.eps}}%
\subfloat[]{\includegraphics[width=0.33\textwidth]
{Fig5b.eps}}%
\subfloat[]{\includegraphics[width=0.33\textwidth]
{Fig5c.eps}}
\caption{Posteriors for DP-dual when the two views see a different number of clusters. (a) 10 users and 100 threads. (b) 25 users and 25 threads. (c) 100 users and 10 threads. Figures on the left: examples of posterior distributions of thread length predictions over 100 test threads with 50\% and 95\% credible intervals in test set with 50 users and 10, 50 and 100 threads. The x-axis corresponds to threads sorted by their (true) length and the y-axis to predicted thread lengths. True lengths are plotted in red (smoothest lines). Figures on the right: examples of posterior pairwise clustering matrices $\hat{\pi}$. The x- and y-axes correspond to the users. A dark point means a high probability of being in the same cluster. The worst case is (c), which makes a similar clustering to (b) but worse predictions, because the behavior view receives misleading information from the feature view and the number of threads is not high enough to compensate for it.}
\label{fig:confused}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{Fig6_results_disagreement_bw}%
\caption{Results for disagreement between the views. Comparison of models under different threads/users ratios (50 users and variable number of threads) when the two views see a different number of clusters. Means and standard errors over 5 runs.}
\label{fig:results_disagreement}
\end{figure*}
If each view suggests a different clustering $\mathbf{z}$, dual models should find a consensus between them (recall Equation \ref{eq:z_posterior}). We generated a new dataset (Figure \ref{fig:data_disagreement}) where there are four clusters according to the feature view and five clusters according to the behavior view.
Figure~\ref{fig:confused} shows the posterior distributions (over thread lengths and over pairwise clusterings) when (a) the behavior view has more information, (b) both views lack data, and (c) the feature view has more information. By \textit{having more information} we mean that a view dominates the inference over the posterior of the clustering $\mathbf{z}$.
\paragraph{(a) Lack of information in feature view:}
If the number of users is low but they participated in a sufficient number of threads, the behavior view (which sees five clusters) will be stronger than the feature view (which sees four clusters) and will impose a clustering in five groups. User coefficients (used for thread length predictions) are also well estimated since the number of threads is high enough to learn them despite the lack of help from the other view (Figure~\ref{fig:confused}a).
\paragraph{(b) Lack of information in both views:}
In the middle ground where neither view has enough evidence, the model recovers four clusters and the predictive posterior over thread lengths gets wider though still making good inferences (Figure~\ref{fig:confused}b).
\paragraph{(c) Lack of information in behavior view:}
If the number of users is high and the number of threads is low, the feature view (four clusters) will have more influence on the posterior than the behavior view (five clusters) (Figure~\ref{fig:confused}c). Predictions get worse because the model imposes a four-cluster prior over coefficients that are clearly grouped in five clusters.
In order to compare between the performance in case of agreement and the performance in case of disagreement, we repeated the experiments of the last section with the current configuration.
Figure~\ref{fig:results_disagreement} shows the performance of the models for 50 users and a varying number of threads. While predictions and clustering improve with the number of threads, clustering with a small number of threads is worse in the case of disagreement since the feature view imposes its four-cluster view. To recover the five clusters we would need either more threads or fewer users.
For the predictions, the dual models still outperform the single one because the feature view mostly agrees with the behavior view except for the users in one of the clusters. If all user features were in the same cluster (no clustering structure), the performance of the predictions would be similar for the three models since the feature view would add no extra information. If we randomized the features so that, for instance, there were five clusters in the feature view very different from the clusters in the behavior view, we might expect the dual-view models to give worse predictions than the single-view one in those cases where they now perform better. In those cases, dual models would be getting noise in the feature view (or very bad priors) and only a large enough number of threads could compensate for it.
\subsection{Iris dataset}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{Fig7_data_iris_features_bw}%
\includegraphics[width=0.5\textwidth]{Fig7_data_iris_behaviors_bw}
\caption{Iris dataset. User features (left) and synthetic user coefficients (right)}
\label{fig:iris_data}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{Fig8_results_iris_bw}
\caption{Results for the iris dataset. Comparison of models under different threads/users ratios (50 users and variable number of threads). Means and standard errors over 5 runs.}
\label{fig:iris_results}
\end{figure}
To reproduce a more realistic clustering structure we performed a last experiment based on the \textit{iris} dataset. We used the \textit{iris} data available in R, which corresponds to the original dataset reported in \cite{IrisData1935}. In our experiment, features correspond to three of the features of the \textit{iris} dataset (Figure \ref{fig:iris_data}). We chose three out of the four features (sepal width, petal length and petal width) as well as a random subset of 50 observations so that the clustering task is harder if we only use the feature view. The coefficients of the behavior view are similar to those used in the former experiments. The selected observations are assigned to the same cluster as in the \textit{iris} dataset (species). We ran a standard EM-GMM, from the R package \texttt{mclust} \citep{mclust}, over the features to get an idea of what we should expect from our model when there are almost no threads and the coefficients are difficult to infer. We also ran the same EM-GMM over the features and the true coefficients to get an idea of what we should expect from our model when the number of threads is high enough to make a good inference of the coefficients. This gave us an ARI of 0.48 and 0.79, respectively. Indeed, starting near 0.48 when the number of threads is small, our model gets closer to 0.79 as we keep adding threads (Figure \ref{fig:iris_results}). Of course, the inference of the coefficients, and thus the predictions over the test set, also improves with the number of threads. Since the single model does not exploit the feature view, it needs more threads to reach the same levels as its dual counterparts. Figure~\ref{fig:iris_posteriors} shows two examples of confusion matrices and predicted lengths for 10 and 100 threads.
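For readers who prefer Python, an analogous feature-only baseline can be sketched with scikit-learn's Gaussian mixture and the adjusted Rand index. This is only an illustrative substitute for the R \texttt{mclust} call used in the experiments (a different EM implementation, random subset and initialization), so the exact ARI values will differ:
\begin{verbatim}
import numpy as np
from sklearn.datasets import load_iris
from sklearn.metrics import adjusted_rand_score
from sklearn.mixture import GaussianMixture

iris = load_iris()
rng = np.random.default_rng(0)
idx = rng.choice(len(iris.data), size=50, replace=False)  # random subset of 50 rows
X = iris.data[idx][:, [1, 2, 3]]   # sepal width, petal length, petal width
y = iris.target[idx]               # species labels (the "true" clusters)

gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
print("ARI (features only):", adjusted_rand_score(y, gmm.predict(X)))
\end{verbatim}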
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Fig9_iris_posterior_10}%
\includegraphics[width=0.5\textwidth]{Fig9_iris_posterior_100}
\caption{Predictive posteriors given by the \texttt{DP-dual} model over thread lengths and pairwise clustering matrices with 50 users and a training set of 10 threads (left) and 100 threads (right). Shadowed areas indicate 95\% and 50\% credible intervals. True lengths are plotted in red (smoothest lines). As the number of threads increases, the model starts to see the three-cluster structure as well as to make better predictions.}
\label{fig:iris_posteriors}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{Fig10_ncomponents}
\caption{Left: histogram of the number of active clusters estimated by the \texttt{DP-model} in the \textit{iris} scenario with 50 threads. Right: mean and standard deviations of the number of users assigned to each cluster during the chain. Most users are assigned to the three major clusters and a small group of users is assigned to a fourth cluster.}
\label{fig:nclusters}
\end{figure}
Figure~\ref{fig:nclusters} shows the histogram of the number of clusters within the MCMC chain and the distribution of cluster sizes. The model infers three clusters but it also places some probability over a two-cluster structure due to the closeness of two of the clusters in the feature view.
\subsection{Computational cost}\label{sec:cost}
% Comment on autocorrelations, convergence...
We analyzed the computational cost of the \texttt{dual-DP} model since it is the most complex of the three models compared. Unlike the \texttt{single} model, it makes inferences in the two views, meaning about twice the number of variables. And unlike the \texttt{fixed-dual}, it has to infer the number of clusters and does so by creating $m$ empty candidate clusters every time we sample a cluster assignment for a user at each iteration of the Gibbs sampler. This means creating $U \times \text{iterations} \times m$ empty clusters and computing, as many times, whether a user belongs to one of these auxiliary clusters ($m$ possible extra clusters at each iteration), which makes it the slowest of the three models in terms of time per iteration.
We look at the autocorrelation time to estimate the distance between two uncorrelated samples:
\begin{equation*}
\tau = 1 + 2\sum_{n=1}^{1000} |\rho_n|
\end{equation*}
where $\rho_n$ is the autocorrelation at lag $n$. The variable with the highest autocorrelation is $\muo$, which has an autocorrelation time of 79. Since we drop the first 15000 samples for burn-in, we get an Effective Sample Size of 189 independent samples.
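As an illustration, the autocorrelation time and the resulting effective sample size can be estimated with a short Python sketch such as the one below (our own illustration, assuming the post-burn-in samples of one variable are stored in a one-dimensional NumPy array; it is not the code used for the paper):
\begin{verbatim}
import numpy as np

def autocorrelation_time(chain, max_lag=1000):
    # tau = 1 + 2 * sum_{n=1}^{max_lag} |rho_n|
    x = np.asarray(chain, dtype=float) - np.mean(chain)
    var, n = x.var(), len(x)
    tau = 1.0
    for lag in range(1, min(max_lag, n - 1) + 1):
        rho = np.dot(x[:-lag], x[lag:]) / ((n - lag) * var)
        tau += 2.0 * abs(rho)
    return tau

def effective_sample_size(chain, max_lag=1000):
    return len(chain) / autocorrelation_time(chain, max_lag)
\end{verbatim}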
The bottlenecks of the algorithm are computing the likelihoods of the features and the coefficients given a cluster assignment, and the sampling of the coefficients. The sampling of the coefficients is relatively slow because it implies sampling from a multivariate Gaussian distribution with a $U\times U$ covariance matrix (see Appendix). This is due to the fact that the coefficient of each user depends on the coefficients of the users who have co-participated in the same thread. Note that this bottleneck would disappear in other scenarios where the inference of the behavioral function of a user is independent from the other users.
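For completeness, a sample from a multivariate Gaussian parameterized by its precision matrix can be drawn through a Cholesky factorization, which avoids explicitly inverting the $U\times U$ matrix. The sketch below is a generic illustration of this standard trick, not the paper's implementation:
\begin{verbatim}
import numpy as np

def sample_mvn_from_precision(mu, Lambda, rng=None):
    # Draw one sample from N(mu, Lambda^{-1}) given the precision matrix Lambda.
    rng = np.random.default_rng(rng)
    L = np.linalg.cholesky(Lambda)        # Lambda = L L^T
    z = rng.standard_normal(len(mu))
    x = np.linalg.solve(L.T, z)           # x ~ N(0, Lambda^{-1})
    return np.asarray(mu) + x
\end{verbatim}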
% Wrap-up paragraph for the whole section
\subsection{Summary of the experiments}
To summarize, the experiments show on the one hand that dual-view models outperform single-view ones when users can be grouped in clusters that share similar features and latent behavioral functions and, on the other hand, that even when this assumption is not true, as long as there is enough data, the inference will lead to a consensual partition and a good estimation of latent functions. Indeed, each view acts as a prior, or a regularization factor, on the other view. Good priors improve inference, while bad priors misguide the inferences unless there is sufficient amount of evidence to ignore the prior.
\section{Conclusions}\label{sec:conclusions}
% Model summary
We presented a dual-view mixture model to cluster users based on features and latent behavioral functions. Every component of the mixture model represents a probability density over two \textit{views}: a feature view for observed user attributes and a
behavior view for latent behavioral functions that are indirectly observed through user actions or behaviors. The posterior distribution of the clustering represents a consensual clustering between the two views. Inference of the parameters in each view depends on the other view through this common clustering, which can be seen as a proxy that passes information between the views. An appealing property of the model is that inference on latent behavioral functions may be used to predict users' future behaviors. We presented two versions of the model: a parametric one, where the number of clusters is treated as a fixed parameter, and a nonparametric one, based on a Dirichlet Process, where the number of clusters is also inferred.
% Results and findings
We have adapted the model to a hypothetical case of online forums where behaviors correspond to the ability of users to generate long discussions. We clustered users and inferred their behavioral functions in three datasets to understand the properties of the model. We inferred the posteriors of interest by Gibbs sampling for all the variables but two, which were inferred by Adaptive Rejection Sampling. Experiments confirm that the proposed dual-view model is able to learn with fewer instances than its single-view counterpart because dual-view models use more information. Moreover, inferences with the dual-view model based on a Dirichlet Process are as good as inferences with the parametric model even if the latter knows the true number of clusters.
% Future directions
In our future research we plan to adapt and apply the model to more realistic tasks such as learning user preferences based on choices and user features. In particular, we would like to compare our model to that of \cite{Abbasnejad2013a} on the \textit{sushi} dataset \citep{Kamishima2009}. It
might also be interesting to consider latent functions at a group level, that is, users in the same cluster sharing \textit{exactly} the same latent behavior. Not only would it reduce the computational cost but, if we have little data about each user, a group-level inference may also be better grounded and statistically sound.
Finally, in order to apply the model to large scale data we will also explore alternative and faster inference methods such as Bayesian Variational Inference.
\newpage
%%%%%%%%%%%%%%%%%%%%%%%%%
% APPENDIX
%%%%%%%%%%%%%%%%%%%%%%%%%
\appendix
\noindent \textbf{\Large Appendix}
\vspace{-0.1cm}
\section{Chinese Restaurant Process}
In this section we recall the derivation of a Chinese Restaurant Process. Such a process will be used as the prior over cluster assignments in the model. This prior will then be updated through the likelihoods of the observations through the different views.
Imagine that every user $u$ belongs to one of $K$ clusters. $z_u$ is the cluster of user $u$ and $\mathbf{z}$ is a vector that indicates the cluster of every user. Let us assume that $z_u$ is a random variable drawn from a multinomial distribution with probabilities $\boldsymbol{\pi}= (\pi_1,...,\pi_K)$. Let us also assume that the vector $\boldsymbol{\pi}$ is a random variable drawn from a Dirichlet distribution with a symmetric concentration parameter $\boldsymbol{\alpha} = (\alpha/K,...,\alpha/K)$. We have:
\begin{align*}
z_u | \boldsymbol{\pi} &\sim \text{Multinomial}(\boldsymbol{\pi})\notag\\
\boldsymbol{\pi} &\sim \text{Dirichlet}(\boldsymbol{\alpha})
\end{align*}
The marginal probability of the set of cluster assignments $\mathbf{z}$ is:
\begin{align*}
p(\mathbf{z}) =&
\int \prod_{u=1}^U p(z_u | \boldsymbol{\pi})p(\boldsymbol{\pi} | \boldsymbol{\alpha})
\text{d}\boldsymbol{\pi}\\
=&\int
\prod_{i=1}^K \pi_i^{n_i}
\frac{1}{B(\boldsymbol{\alpha})}
\prod_{j=1}^K \pi_j^{\alpha/K-1}
\text{d}\boldsymbol{\pi}\\
=&
\frac{1}{B(\boldsymbol{\alpha})}
\int
\prod_{i=1}^K \pi_i^{\alpha/K + n_i - 1}
\text{d}\boldsymbol{\pi}
\end{align*}
where $n_i$ is the number of users in cluster $i$ and $B$ denotes the Beta function. Noticing that the integrated factor is a Dirichlet distribution with concentration parameter $\boldsymbol{\alpha} + \mathbf{n}$ but without its normalizing factor:
\begin{align*}
p(\mathbf{z})=&
\frac{
B(\boldsymbol{\alpha} + \mathbf{n})
}{
B(\boldsymbol{\alpha})
}%\\&\times #uncomment to break the line in two column mode
\int
\frac{1}{
B(\boldsymbol{\alpha + \mathbf{n}})
}
\prod_{i=1}^K \pi_i^{\alpha/K + n_i - 1}
\text{d}\boldsymbol{\pi}\\
=&
\frac{
B(\boldsymbol{\alpha} + \mathbf{n})
}{
B(\boldsymbol{\alpha})
}
\end{align*}
which expanding the definition of the Beta function becomes:
\begin{align}
p(\mathbf{z})=
\frac{
\prod_{i=1}^K \Gamma(\alpha/K + n_i)
}
{
\Gamma \left(\sum_{i=1}^K \alpha/K + n_i \right)
}
\frac{
\Gamma \left(\sum_{i=1}^K \alpha/K \right)
}
{
\prod_{i=1}^K \Gamma(\alpha/K)
}
=
\frac{
\prod_{i=1}^K \Gamma(\alpha/K + n_i)
}
{
\Gamma \left(\alpha + U \right)
}
\frac{
\Gamma \left(\alpha \right)
}
{
\prod_{i=1}^K \Gamma(\alpha/K)
}
\label{eq:p_z}
\end{align}
where $U=\sum_{i=1}^{K}n_i$. Note that by marginalizing out $\boldsymbol{\pi}$ we introduce dependencies between the individual cluster assignments in the form of the counts $n_i$. The conditional distribution of an individual assignment given the others is:
\begin{align}
\label{eq:z_cond}
p(z_u = j| \mathbf{z_{-u}})
=
\frac{p(\mathbf{z})}
{p(\mathbf{z}_{-u})}
\end{align}
To compute the denominator we assume cluster assignments are exchangeable, that is, the joint distribution $p(\mathbf{z})$ is the same regardless of the order in which clusters are assigned. This allows us to assume that $z_u$ is the last assignment, and therefore to obtain $p(\mathbf{z}_{-u})$ by considering how Equation \ref{eq:p_z} looked before $z_u$ was assigned to cluster $j$.
\begin{align}
\label{eq:p_z_minus}
p(\mathbf{z}_{-u}) =&
\frac
{
\Gamma(\alpha/K + n_j-1)
\prod_{i\neq j} \Gamma(\alpha/K + n_i)
}
{\Gamma
\left(
\alpha + U -1
\right)
}
%\\ &\times
\frac{
\Gamma \left(\alpha \right)
}
{
\prod_{i=1}^K \Gamma(\alpha/K)
}
\end{align}
Finally, plugging Equations \ref{eq:p_z_minus} and \ref{eq:p_z} into Equation \ref{eq:z_cond}, cancelling out the factors that do not depend on the cluster assignment $z_u$, and using the identity $a \Gamma(a) = \Gamma(a+1)$, we get:
\begin{align*}
p(z_u = j| \mathbf{z}_{-u})
&=
\frac
{\alpha/K + n_j-1}
{\alpha + U -1}
=
\frac
{\alpha/K + n_{-j}}
{\alpha + U -1}
\end{align*}
where $n_{-j}$ is the number of users in cluster $j$ before the assignment of $z_u$.
The Chinese Restaurant Process is the consequence of considering $K \rightarrow \infty$. For clusters where $n_{-j}>0$, we have:
\begin{align*}
p(z_u = j \text{ s.t } n_{-j}>0 | \mathbf{z}_{-u})
&=
\frac
{n_{-j}}
{\alpha + U -1}
\end{align*}
and the probability of assigning $z_u$ to any of the (infinite) empty clusters is:
\begin{align*}
p(z_u = j \text{ s.t } n_{-j}=0 | \mathbf{z_{-u}})
=\;& \lim_{K\rightarrow \infty}
(K - p)\frac
{\alpha/K}
{\alpha + U -1}
=
\frac
{\alpha}
{\alpha + U -1}
\end{align*}
where $p$ is the number of non-empty components.
It can be shown that the generative process composed of a Chinese Restaurant Process where every component $j$ is associated with a probability distribution with parameters $\boldsymbol{\theta}_j$ is equivalent to a Dirichlet Process.
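To make the process concrete, the following Python sketch (ours, purely illustrative) draws cluster assignments sequentially with exactly the probabilities derived above:
\begin{verbatim}
import numpy as np

def sample_crp(num_users, alpha, seed=None):
    # Sequentially seat users: an occupied cluster j is chosen with
    # probability n_j / (alpha + u), a new cluster with alpha / (alpha + u),
    # where u users have already been seated.
    rng = np.random.default_rng(seed)
    assignments, counts = [], []
    for u in range(num_users):
        probs = np.array(counts + [alpha], dtype=float) / (alpha + u)
        j = rng.choice(len(probs), p=probs)
        if j == len(counts):
            counts.append(1)      # open a new cluster
        else:
            counts[j] += 1
        assignments.append(j)
    return assignments, counts
\end{verbatim}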
\section{Conditionals for the feature view}
In this appendix we provide the conditional distributions for the feature view to be plugged into the Gibbs sampler. Note that, except for $\betaoa$, conjugacy can be exploited in every case, so the derivations are straightforward and well known. The derivation for $\betaoa$ is left for a later section.
\subsection{Component parameters}
\subsubsection*{Components means $p(\Muk | \cdot )$:}
\begin{align*}
p(\Muk | \cdot )
&\propto
p\left(\Muk | \Muo, \invRo\right)
\prod_{u \in k} p\left(\mathbf{a}_u | \Muk, \Sk, \mathbf{z}\right)\\
&\propto
\mathcal{N}\left(\Muk | \Muo, \invRo\right)
\prod_{u \in k} \mathcal{N}\left(\mathbf{a}_u | \Muk, \Sk\right)\\
&=
\mathcal{N}(\boldsymbol{\mu'}, \boldsymbol{\Lambda'}^{-1})
\end{align*}
where:
\begin{align*}
\boldsymbol{\Lambda'} &= \Ro + n_k \Sk\\
\boldsymbol{\mu'} &= \boldsymbol{\Lambda'^{-1}} \left(\Ro \Muo + \Sk \sum_{u\in k} \mathbf{a}_u\right)
\end{align*}
\subsubsection*{Components precisions $p(\Sk | \cdot )$:}
\begin{align*}
p(\Sk | \cdot )
\propto\;&
p\left(\Sk |\betaoa, \Wo\right)
\prod_{u \in k} p\left(\mathbf{a}_u | \Muk, \Sk, \mathbf{z}\right)\\
\propto\;&
\mathcal{W}\left(\Sk |\betaoa, (\betaoa
\Wo)^{-1}\right)%\\&\times
\prod_{u \in k} \mathcal{N}\left(\mathbf{a}_u | \Muk, \Sk\right)\\
=\;& \mathcal{W}(\beta', \mathbf{W}')
\end{align*}
where:
\begin{align*}
\beta' &= \betaoa + n_k\\
\mathbf{W}' &=
\left[ \betaoa\Wo + \sum_{u \in k} (\mathbf{a}_u - \Muk)(\mathbf{a}_u- \Muk)^T \right]^{-1}
\end{align*}
\subsection{Shared hyper-parameters}
\subsubsection*{Shared base means $p(\Muo | \cdot)$:}
\begin{align*}
p(\Muo | \cdot)
&\propto
p\left(\Muo | \boldsymbol{\mu_a}, \boldsymbol{\Sigma_a}\right)
\prod_{k = 1}^K p\left(\Muk | \Muo, \Ro \right) \\
&\propto
\mathcal{N}\left(\Muo | \boldsymbol{\mu_a, \Sigma_a}\right)
\prod_{k = 1}^K\mathcal{N}\left(\Muk | \Muo, \invRo\right) \\
&=\mathcal{N}\left(\boldsymbol{\mu'}, \boldsymbol{\Lambda'}^{-1}\right)
\end{align*}
where:
\begin{align*}
\boldsymbol{\Lambda'} &= \boldsymbol{\Lambda_{a}} + K \Ro\\
\boldsymbol{\mu'} &= \boldsymbol{\Lambda'}^{-1} \left(\boldsymbol{\Lambda_{a}} \boldsymbol{\mu_{a}} + K \Ro \overline{\Muk}\right)
\end{align*}
\subsubsection*{Shared base precisions $p(\Ro | \cdot)$:}
\begin{align*}
p(\Ro | \cdot)
\propto\;&
p\left(\Ro | D, \boldsymbol{\Sigma_a^{-1}}\right)
\prod_{k = 1}^K p\left(\Muk | \Muo, \Ro\right) \\
\propto\;&
\mathcal{W}\left(\Ro | D, (D\boldsymbol{\Sigma_a})^{-1}\right)
%\\&\times
\prod_{k = 1}^K \mathcal{N}\left(\Muk | \Muo, \invRo \right) \\
=\;&
\mathcal{W}(\upsilon', \boldsymbol{\Psi}')
\end{align*}
where:
\begin{align*}
\upsilon' &= D+K\\
\boldsymbol{\Psi'} &=
\left[D\boldsymbol{\Sigma_a} + \sum_k (\Muk- \Muo)(\Muk- \Muo)^T \right]^{-1}
\end{align*}
\subsubsection*{Shared base covariances $p(\Wo | \cdot)$:}
\begin{align*}
p(\Wo | \cdot)
\propto\;&
p\left(\Wo | D, \frac{1}{D} \boldsymbol{\Sigma_a}\right)
\prod_{k=1}^K p\left(\Sk | \betaoa, \invWo\right)\\
\propto\;&
\mathcal{W}\left(\Wo | D, \frac{1}{D} \boldsymbol{\Sigma_a}\right)
%\\&\times
\prod_{k=1}^K \mathcal{W}\left(\Sk | \betaoa, \left(\betaoa\Wo\right)^{-1}\right)\\
=\;&
\mathcal{W}(\upsilon', \boldsymbol{\Psi}')
\end{align*}
where:
\begin{align*}
\upsilon' &=D + K\betaoa\\
\boldsymbol{\Psi}' &=
\left[D\boldsymbol{\Sigma_a}^{-1} + \betaoa\sum_{k=1}^K\Sk\right]^{-1}
\end{align*}
\subsubsection*{Shared base degrees of freedom $p(\betaoa | \cdot)$:}
\begin{align*}
p(\betaoa | \cdot)
&\propto
p(\betaoa) \prod_{k=1}^K p\left(\Sk | \Wo , \betaoa\right)\\
&=p(\betaoa | 1, \frac{1}{D})\prod_{k=1}^K \mathcal{W} \left(\Sk | \Wo , \betaoa\right)
\end{align*}
where there is no conjugacy to exploit. We sample from this distribution with Adaptive Rejection Sampling.
%%%%%%%%%%%%%%%%%%%%%%%%
\section{Conditionals for the behavior view}
In this appendix we provide the conditional distributions for the behavior view to be plugged into the Gibbs sampler. Except for $\beta_{b_0}$, conjugacy can be exploited in every case, so the derivations are straightforward and well known. The derivation for $\beta_{b_0}$ is left for a later section.
\subsection{Users parameters}
\subsubsection*{Users latent coefficient $p(b_u | \cdot )$:}
%%%%%%%%%%%%%%
% User by user (this derivation is wrong. To avoid messing up with interdependencies between users participations use the matricial derivation computing all coefficients at once)
%%%%%%%%%%%%%%
%\begin{align*}
%p(b_u | \cdot )
%=&
%p(b_u | \mu_{z_u}, s_{z_u}, \mathbf{z_u}) \prod_{t=1}^T p(y_t | \mathbf{p, b})\\
%&= \mathcal{N}(b_u | \mu_{z_u}, s_{z_u}) \prod_{t=1}^T \mathcal{N}(y_t|\mathbf{p^T b, \sigma_y})\\
%=&
%\mathcal{N}(\mu', \sigma'^2)
%\end{align*}
%\begin{align}
%\sigma'^{-2} &= s_{z_u} + T_u \sigma_y^{-2} \\
%\mu' &= \sigma'^{2}(s_{z_u}\mu_{z_u}+ T_u \sigma_y^{-2} \overline{y_t})
%\end{align}
Let $\mathbf{Z}$ be a $K\times U$ binary matrix where $\mathbf{Z}_{k,u}=1$ indicates that user $u$ is assigned to cluster $k$. Let $\mathbf{I_{[T]}}$ and $\mathbf{I_{[U]}}$ be identity matrices of sizes $T$ and $U$, respectively. Let $\boldsymbol{\mu}^\text{(f)} = (\mu_1^{\text{(f)}},...,\mu_K^{\text{(f)}})$ and $\mathbf{s}^\text{(f)} = (s_1^{\text{(f)}},...,s_K^{\text{(f)}})$. Then:
\begin{align*}
p(\mathbf{b} | \cdot )
\propto&\;
p(\mathbf{b} | \boldsymbol{\mu}^\text{(f)}, \mathbf{s}^\text{(f)}, \mathbf{Z}) p(\mathbf{y} | \mathbf{P, b})\\
\propto&\;
\mathcal{N}(\mathbf{b} | \mathbf{Z}^T\boldsymbol{\mu}^\text{(f)}, \mathbf{Z}^T \mathbf{s}^\text{(f)} \mathbf{I_{[U]}})
\mathcal{N}(\mathbf{y}|\mathbf{P}^T \mathbf{b, \sigma_y I_{[T]}})\\
=&\;
\mathcal{N}(\mathbf{\boldsymbol{\mu'}, \boldsymbol{\Lambda'}^{-1}})
\end{align*}
where:
\begin{align*}
\mathbf{\Lambda'} &= \mathbf{Z}^T \mathbf{s}^\text{(f)} \mathbf{I_{[U]}} + \mathbf{P}\sigma_\textbf{y}^{-2} \mathbf{I}_{[T]} \mathbf{P}^T \\
\boldsymbol{\mu'} &= \mathbf{\Lambda'}^{-1}(\mathbf{Z}^T \mathbf{s}^\text{(f)} \mathbf{Z}^T\boldsymbol{\mu}^\text{(f)}+ \mathbf{P} \sigma_\text{y}^{-2} \mathbf{I_{[T]} y})
\end{align*}
\subsection{Component parameters}
\subsubsection*{Components means $p(\muk| \cdot )$:}
\begin{align*}
p(\muk | \cdot )
&\propto
p\left(\muk | \muo, \invro\right)
\prod_{u \in k} p(b_u | \muk, \sk, \mathbf{z})\\
&\propto
\mathcal{N}\left(\muk | \muo, \invro\right)
\prod_{u \in k} \mathcal{N}(b_u | \muk, \sk)\\
&=\mathcal{N}(\boldsymbol{\mu'}, \boldsymbol{\Lambda'}^{-1})
\end{align*}
where:
\begin{align*}
\boldsymbol{\Lambda'} &= \ro + n_k \sk\\
\boldsymbol{\mu'} &= \boldsymbol{\Lambda'^{-1}} \left(\ro \muo + \sk \sum_{u\in k} b_u\right)
\end{align*}
\subsubsection*{Components precisions $p(\sk | \cdot )$:}
\begin{align*}
p(\sk | \cdot )
&\propto
p(\sk |\betaof, \wo)
\prod_{u \in k} p(b_u | \muk, \sk, \mathbf{z})\\
&\propto
\mathcal{G}\left(\sk |\betaof, \left(\betaof\wo\right)^{-1}\right)
\prod_{u \in k} \mathcal{N}(b_u | \muk, \sk)\\
&=
\mathcal{G}(\upsilon', \psi')
\end{align*}
where:
\begin{align*}
\upsilon' &= \betaof + n_k\\
\psi' &=
\left[ \betaof \wo + \sum_{u \in k} \left(b_u - \muk\right)^2 \right]^{-1}
\end{align*}
\subsection{Shared hyper-parameters}
\subsubsection*{Shared base mean $p(\muo | \cdot)$:}
\begin{align*}
p(\muo | \cdot)
&\propto
p(\muo | \mu_{\hat{b}}, \sigma_{\hat{b}})
\prod_{k = 1}^K p(\muk | \muo, \ro ) \\
&\propto
\mathcal{N}(\muo | \mu_{\hat{b}}, \sigma_{\hat{b}})
\prod_{k = 1}^K\mathcal{N}\left(\muk | \muo, \invro\right) \\
&= \mathcal{N}(\mu', \sigma'^{2})
\end{align*}
where:
\begin{align*}
\sigma'^{-2} &= \sigma_{\hat{b}}^{-2} + K \ro\\
\mu' &= \sigma'^{2} \left(\sigma_{\hat{b}}^{-2} \mu_{\hat{b}} + K \ro \overline{\muk}\right)
\end{align*}
\subsubsection*{Shared base precision $p(\ro | \cdot)$}
\begin{align*}
p(\ro | \cdot)
&\propto
p(\ro | 1, \sigma_{\hat{b}}^{-2})
\prod_{k = 1}^K p(\muk | \muo, \ro) \\
&\propto
\mathcal{G}(\ro | 1, \sigma_{\hat{b}}^{-2})
\prod_{k = 1}^K \mathcal{N}\left(\muk | \muo, \invro\right) \\
&= \mathcal{G}(\upsilon', \psi')
\end{align*}
where:
\begin{align*}
\upsilon' =& 1+K\\
\psi' =& \left[\sigma_{\hat{b}}^{-2} + \sum_{k=1}^K \left(\muk- \muo\right)^2\right]^{-1}
\end{align*}
\subsubsection*{Shared base variance $p(\wo | \cdot)$:}
\begin{align*}
p(\wo | \cdot)
&\propto
p(\wo | 1, \sigma_{\hat{b}})
\prod_{k=1}^K p\left(\sk | \betaof, \wo\right)\\
&\propto
\mathcal{G}(\wo | 1, \sigma_{\hat{b}})
\prod_{k=1}^K \mathcal{G}\left(\sk | \betaof, \left(\betaof\wo\right)^{-1}\right)\\
&= \mathcal{G}(\upsilon', \psi')
\end{align*}
\begin{align*}
\upsilon' =& 1 + K\betaof\\
\psi' =&
\left[\sigma_{\hat{b}}^{-2} + \betaof \sum_{k=1}^K \sk\right]^{-1}
\end{align*}
\subsubsection*{Shared base degrees of freedom $p(\betaof | \cdot)$:}
\begin{align*}
p(\betaof | \cdot)
&\propto
p(\betaof) \prod_{k=1}^K p(\sk | \wo , \betaof)\\
&=p(\betaof | 1, 1)\prod_{k=1}^K \mathcal{G} \left(\sk |\betaof, \left(\betaof \wo \right)^{-1}\right)
\end{align*}
where there is no conjugacy to exploit. We sample from this distribution with Adaptive Rejection Sampling.
\subsection{Regression noise}
Let the precision $s_{\text{y}}$ be the inverse of the variance $\sigma_{\text{y}}^{2}$. Then:
\begin{align*}
p(s_{\text{y}} | \cdot) &\propto
p(s_{\text{y}} | 1,\sigma_{0}^{-2}) \prod_{t=1}^T p(y_t | \mathbf{p^T b}, s_{\text{y}})\\
&\propto \mathcal{G}(s_{\text{y}} | 1,\sigma_{\text{0}}^{-2}) \prod_{t=1}^T \mathcal{N}( y_t | \mathbf{p^T b}, s_{\text{y}})\\
&= \mathcal{G}(\upsilon', \psi')
\end{align*}
\begin{align*}
\upsilon' &= 1+T\\
\psi' &= \left[\sigma_{\text{0}}^{2} + \sum_{t=1}^{T}\left(y_t-\mathbf{p^Tb}\right)^2\right]^{-1}
\end{align*}
%%%%%%%%%%%%
\section{Sampling $\betaoa$}
For the feature view, if:
\begin{align*}
\frac{1}{\beta - D + 1} \sim \mathcal{G}(1, \frac{1}{D})
\end{align*}
we can get the prior distribution of $\beta$ by variable transformation:
\begin{align*}
p(\beta) =\;& \mathcal{G}(\frac{1}{\beta-D+1})|\frac{\partial}{\partial \beta}\frac{1}{\beta-D+1}|\\
&\propto \left(\frac{1}{\beta-D+1}\right)^{-1/2} \exp\left(-\frac{D}{2(\beta-D+1)}\right)
%\\&\times
\frac{1}{(\beta-D+1)^2}\\
&\propto \left(\frac{1}{\beta-D+1}\right)^{3/2} \exp\left(-\frac{D}{2(\beta-D+1)}\right)
\end{align*}
Then:
\begin{align*}
p(\beta) &\propto (\beta - D + 1)^{-3/2} \exp\left( -\frac{D}{2(\beta - D +1)}\right)
\end{align*}
The Wishart likelihood is:
\begin{align*}
\mathcal{W}(\mathbf{S}_k | \beta, (\beta\mathbf{W})^{-1})
=&
\frac{(|\mathbf{W}| (\beta/2)^D)^{\beta/2}}{\Gamma_D(\beta/2)}
|\mathbf{S}_k|^{(\beta-D-1)/2}
%\\&\times
\exp\left(- \frac{\beta}{2}\text{Tr}(\mathbf{S}_k\mathbf{W})\right)\\
=&
\frac{(|\mathbf{W}| (\beta/2)^D)^{\beta/2}}{\prod_{d=1}^{D} \Gamma(\frac{\beta+d-D}{2})}
|\mathbf{S}_k|^{(\beta-D-1)/2}
%\\&\times
\exp\left(- \frac{\beta}{2}\text{Tr}(\mathbf{S}_k\mathbf{W})\right)\\
\end{align*}
We multiply the Wishart likelihood (its $K$ factors) by the prior to get the posterior:
\begin{align*}
p(\beta | \cdot) =&
\left(\prod_{d=1}^D \Gamma (\frac{\beta}{2} + \frac{d-D}{2}) \right)^{-K}
%\\&\times
\exp\left(-\frac{D}{2(\beta-D+1)} \right)
(\beta-D+1)^{-3/2}
\\&\times
(\frac{\beta}{2})^{\frac{KD\beta}{2}}
%\\&\times
\prod_{k=1}^K (|\mathbf{S}_k||\mathbf{W}|)^{\beta/2} \exp\left(-\frac{\beta}{2} \text{Tr}(\mathbf{S}_k\mathbf{W})\right)
\end{align*}
Then if $y = \ln\beta$:
\begin{align*}
p(y | \cdot) =\;&
e^y
\left(\prod_{d=1}^D \Gamma (\frac{e^y}{2} + \frac{d-D}{2}) \right)^{-K}
%\\&\times
\exp\left(-\frac{D}{2(e^y-D+1)} \right)
(e^y-D+1)^{-3/2}
\\&\times
(\frac{e^y}{2})^{\frac{KDe^y}{2}}
%\\&\times
\prod_{k=1}^K (|\mathbf{S}_k||\mathbf{W}|)^{e^y/2} \exp\left(-\frac{e^y}{2} \text{Tr}(\mathbf{S}_k\mathbf{W})\right)
\end{align*}
and its logarithm is:
\begin{align*}
\ln p(y | \cdot)
=\;&
y
-K \sum_{d=1}^D \ln\Gamma (\frac{e^y}{2} + \frac{d-D}{2})
%\\&\times
-\frac{D}{2(e^y-D+1)}
-\frac{3}{2}\ln(e^y-D+1)
\\&
+\frac{KDe^y}{2}(y - \ln2)
%\\&
+\frac{e^y}{2} \sum_{k=1}^K \left( \ln (|\mathbf{S}_k||\mathbf{W}|) - \text{Tr}(\mathbf{S}_k\mathbf{W})\right)
\end{align*}
which is a concave function and therefore we can use Adaptive Rejection Sampling (ARS). ARS works with the derivative of the log density:
\begin{align*}
\frac{\partial}{\partial y} \ln p(y | \cdot)
=&
1-K \frac{e^y}{2} \sum_{d=1}^D \Psi (\frac{e^y}{2} + \frac{d-D}{2})
%\\&
+\frac{De^y}{2(e^y-D+1)^2}
-\frac{3}{2}\frac{e^y}{e^y-D+1}
\\&
+\frac{KDe^y}{2}(y - \ln2) + \frac{KDe^y}{2}
%\\&
+\frac{e^y}{2} \sum_{k=1}^K \left(\ln (|\mathbf{S}_k||\mathbf{W}|) - \text{Tr}(\mathbf{S}_k\mathbf{W})\right)
\end{align*}
where $\Psi(x)$ is the digamma function.
%%% Wolfram Alpha
%% d/dx(((D*x*K-3)/2)*ln(x/2))
%% d/dx(-\frac{D}{2(x-D+1)})
%% - K*(d/dx(\sum_{d=1}^{D}(log(\Gamma ((x + d - D)/2)))))
%%%%%%%%%%%%%%%%%%%%%%%%
\section{Sampling $\betaof$}
For the behavior view, if
\begin{align*}
\frac{1}{\beta} \sim \mathcal{G}(1,1)
\end{align*}
the posterior of $\beta$ is:
\begin{align*}
p(\beta | \cdot)
=&
\Gamma(\frac{\beta}{2})^{-K}\exp\left(\frac{-1}{2\beta}\right) \left(\frac{\beta}{2}\right)
^{(K \beta -3)/2}
%\\&\times
\prod_{k=1}^{K} (s_k w)^{\beta/2} \exp\left(-\frac{\beta s_k w}{2}\right)
\end{align*}
Then if $y=\ln\beta$:
\begin{align*}
p(y | \cdot) =& e^y \Gamma(\frac{e^y}{2})^{-K}\exp\left(\frac{-1}{2e^y}\right) \left(\frac{e^y}{2}\right)
^{(K e^y -3)/2}
%\\&\times
\prod_{k=1}^{K} (s_k w)^{e^y/2} \exp\left(-\frac{e^y s_k w}{2}\right)
\end{align*}
and its logarithm:
\begin{align*}
\ln p(y | \cdot) =& y -K\ln\Gamma \left(\frac{e^y}{2}\right) + \left(\frac{-1}{2e^y}\right)%\\&
+\frac{Ke^y-3}{2}\left(y - \ln2\right)%\\&
+ \frac{e^y}{2}\sum_{k=1}^{K} \left(\ln (s_k w) - s_k w \right)
\end{align*}
which is a concave function and therefore we can use Adaptive Rejection Sampling. The derivative is:
\begin{align*}
\frac{\partial}{\partial y} \ln p(y | \cdot) =&
1
-K \Psi \left(\frac{e^y}{2}\right) \frac{e^y}{2}
+ \left(\frac{1}{2e^y}\right)
%\\&
+\frac{Ke^y}{2} \left(y - \ln2\right) + \frac{Ke^y-3}{2}
%\\&
+ \frac{e^y}{2}\sum_{k=1}^{K}\left(\ln (s_k w) - s_k w\right)
\end{align*}
where $\Psi(x)$ is the digamma function.
%%%%%%%%%%%%%%%%%%%%%%%%
\section{Sampling $\alpha$}
Since the inverse of the concentration parameter $\alpha$ is given a Gamma prior
\begin{align*}
\frac{1}{\alpha} \sim \mathcal{G}(1,1)
\end{align*}
we can get the prior over $\alpha$ by variable transformation:
\begin{align*}
p(\alpha)
\propto~&
\alpha^{-3/2} \exp \left(-1/(2\alpha)\right)
\end{align*}
Multiplying the prior of $\alpha$ by its likelihood we get the posterior:
\begin{align*}
p(\alpha | \cdot)
\propto~&
\alpha^{-3/2} \exp \left(-1/(2\alpha)\right)
\times
\frac{\Gamma(\alpha)}{\Gamma(\alpha+U)}
\prod_{j=1}^{K}
\frac{\Gamma(n_j + \alpha/K)}{\alpha/K}\\
\propto~&
\alpha^{-3/2} \exp \left(-1/(2\alpha)\right)
\frac{\Gamma(\alpha)}{\Gamma(\alpha+U)}\alpha^K \\
\propto~&\alpha^{K-3/2} \exp \left(-1/(2\alpha)\right)
\frac{\Gamma(\alpha)}{\Gamma(\alpha+U)}
\end{align*}
Then if $y=\ln \alpha$:
\begin{align*}
p(y | \cdot) = e^{y(K-3/2)}
\exp(-1/(2e^y))
\frac{\Gamma(e^y)}{\Gamma(e^y +U)}
\end{align*}
and its logarithm is:
\begin{align*}
\ln p(y | \cdot) =
y(K-3/2)
-1/(2e^y)+
\ln\Gamma(e^y) - \ln\Gamma(e^y+U)
\end{align*}
which is a concave function and therefore we can use Adaptive Rejection Sampling. The derivative is:
\begin{align*}
\frac{\partial}{\partial y} \ln p(y | \cdot) =
(K-3/2)
+1/(2e^y)+
e^y\Psi(e^y) - e^y\Psi(e^y+U)
\end{align*}
\bibliographystyle{spbasic}
\bibliography{library}
\end{document}
|
----------------------------------------------------------------
-- This file contains the definition of categories. --
----------------------------------------------------------------
module Category.Category where
open import Level public renaming (suc to lsuc)
open import Data.Product
open import Setoid.Total public
open import Relation.Relation
open Setoid public
record Cat {l : Level} : Set (lsuc l) where
field
Obj : Set l
Hom : Obj → Obj → Setoid {l}
comp : {A B C : Obj} → BinSetoidFun (Hom A B) (Hom B C) (Hom A C)
id : {A : Obj} → el (Hom A A)
assocPf : ∀{A B C D}{f : el (Hom A B)}{g : el (Hom B C)}{h : el (Hom C D)}
→ ⟨ Hom A D ⟩[ f ○[ comp ] (g ○[ comp ] h) ≡ (f ○[ comp ] g) ○[ comp ] h ]
idPfCom : ∀{A B}{f : el (Hom A B)} → ⟨ Hom A B ⟩[ id ○[ comp ] f ≡ f ○[ comp ] id ]
idPf : ∀{A B}{f : el (Hom A B)} → ⟨ Hom A B ⟩[ id ○[ comp ] f ≡ f ]
open Cat public
objectSetoid : {l : Level} → (ℂ : Cat {l}) → Setoid {l}
objectSetoid {l} ℂ = record { el = Obj ℂ;
eq = λ a b → _≅_ a b;
eqRpf = record { parEqPf = record { symPf = λ x₁ → sym x₁ ; transPf = λ x₁ x₂ → trans x₁ x₂ } ; refPf = refl }
}
-- Identities composed on the left cancel.
idPf-left : ∀{l}{ℂ : Cat {l}}{A B : Obj ℂ}{f : el (Hom ℂ A B)} → ⟨ Hom ℂ A B ⟩[ f ○[ comp ℂ ] id ℂ ≡ f ]
idPf-left {_}{ℂ}{A}{B}{f} =
transPf (parEqPf (eqRpf (Hom ℂ A B)))
(symPf (parEqPf (eqRpf (Hom ℂ A B))) (idPfCom ℂ {A} {B} {f}))
(idPf ℂ)
-- Congruence results for composition.
eq-comp-right : ∀{l}
{ℂ : Cat {l}}
{A B C : Obj ℂ}
{f : el (Hom ℂ A B)}
{g₁ g₂ : el (Hom ℂ B C)}
→ ⟨ Hom ℂ B C ⟩[ g₁ ≡ g₂ ]
→ ⟨ Hom ℂ A C ⟩[ f ○[ comp ℂ ] g₁ ≡ f ○[ comp ℂ ] g₂ ]
eq-comp-right {_}{ℂ}{A}{B}{C}{f}{g₁}{g₂} eq =
extT (appT {_}{_}{Hom ℂ A B}{SetoidFunSpace (Hom ℂ B C) (Hom ℂ A C)} (comp ℂ) f) eq
eq-comp-left : ∀{l}
{ℂ : Cat {l}}
{A B C : Obj ℂ}
{f₁ f₂ : el (Hom ℂ A B)}
{g : el (Hom ℂ B C)}
→ ⟨ Hom ℂ A B ⟩[ f₁ ≡ f₂ ]
→ ⟨ Hom ℂ A C ⟩[ f₁ ○[ comp ℂ ] g ≡ f₂ ○[ comp ℂ ] g ]
eq-comp-left {_}{ℂ}{A}{B}{C}{f₁}{f₂}{g} eq = extT (comp ℂ) {f₁}{f₂} eq g
eq-comp-all : ∀{l}
{ℂ : Cat {l}}
{A B C : Obj ℂ}
{f₁ f₂ : el (Hom ℂ A B)}
{g₁ g₂ : el (Hom ℂ B C)}
→ ⟨ Hom ℂ A B ⟩[ f₁ ≡ f₂ ]
→ ⟨ Hom ℂ B C ⟩[ g₁ ≡ g₂ ]
→ ⟨ Hom ℂ A C ⟩[ f₁ ○[ comp ℂ ] g₁ ≡ f₂ ○[ comp ℂ ] g₂ ]
eq-comp-all {_}{ℂ}{A}{B}{C}{f₁}{f₂}{g₁}{g₂} eq₁ eq₂
with eq-comp-left {_}{ℂ}{A}{B}{C}{f₁}{f₂}{g₁} eq₁ |
eq-comp-right {_}{ℂ}{A}{B}{C}{f₂}{g₁}{g₂} eq₂
... | eq₃ | eq₄ = transPf (parEqPf (eqRpf (Hom ℂ A C))) eq₃ eq₄
|
function decim=decimage(im,M)
% IMAGE DECIMATION
% LSS MATLAB WEBSERVER
[imysize,imxsize]=size(im);
decim = im([1:M:imysize],[1:M:imxsize]);
|
function r = exprand(mu,m,n)
% EXPRAND Random matrices from exponential distribution.
%
% R = EXPRAND(MU) returns a matrix of random numbers chosen
% from the exponential distribution with parameter MU.
% The size of R is the size of MU.
% Alternatively, R = EXPRAND(MU,M,N) returns an M by N matrix.
%
% Copyright (c) 1998-2004 Aki Vehtari
% This software is distributed under the GNU General Public
% License (version 3 or later); please refer to the file
% License.txt, included with the software, for details.
error('No mex-file for this architecture. See Matlab help and convert.m in ./linuxCsource or ./winCsource for help.')
|
r"""
Join features
"""
from . import Feature, FeatureTestResult
class JoinFeature(Feature):
r"""
Join of several :class:`~sage.features.Feature` instances.
EXAMPLES::
sage: from sage.features import Executable
sage: from sage.features.join_feature import JoinFeature
sage: F = JoinFeature("shell-boolean",
....: (Executable('shell-true', 'true'),
....: Executable('shell-false', 'false')))
sage: F.is_present()
FeatureTestResult('shell-boolean', True)
sage: F = JoinFeature("asdfghjkl",
....: (Executable('shell-true', 'true'),
....: Executable('xxyyyy', 'xxyyyy-does-not-exist')))
sage: F.is_present()
FeatureTestResult('xxyyyy', False)
"""
def __init__(self, name, features, spkg=None, url=None, description=None):
"""
TESTS:
The empty join feature is present::
sage: from sage.features.join_feature import JoinFeature
sage: JoinFeature("empty", ()).is_present()
FeatureTestResult('empty', True)
"""
if spkg is None:
spkgs = set(f.spkg for f in features if f.spkg)
if len(spkgs) > 1:
raise ValueError('given features have more than one spkg; provide spkg argument')
elif len(spkgs) == 1:
spkg = next(iter(spkgs))
if url is None:
urls = set(f.url for f in features if f.url)
if len(urls) > 1:
raise ValueError('given features have more than one url; provide url argument')
elif len(urls) == 1:
url = next(iter(urls))
super().__init__(name, spkg=spkg, url=url, description=description)
self._features = features
def _is_present(self):
r"""
Test for the presence of the join feature.
EXAMPLES::
sage: from sage.features.latte import Latte
sage: Latte()._is_present() # optional - latte_int
FeatureTestResult('latte_int', True)
"""
for f in self._features:
test = f._is_present()
if not test:
return test
return FeatureTestResult(self, True)
def is_functional(self):
r"""
Test whether the join feature is functional.
EXAMPLES::
sage: from sage.features.latte import Latte
sage: Latte().is_functional() # optional - latte_int
FeatureTestResult('latte_int', True)
"""
for f in self._features:
test = f.is_functional()
if not test:
return test
return FeatureTestResult(self, True)
|
## Data-driven Design and Analyses of Structures and Materials (3dasm)
## Lecture 7
### Miguel A. Bessa | <a href = "mailto: [email protected]">[email protected]</a> | Associate Professor
**What:** A lecture of the "3dasm" course
**Where:** This notebook comes from this [repository](https://github.com/bessagroup/3dasm_course)
**Reference for entire course:** Murphy, Kevin P. *Probabilistic machine learning: an introduction*. MIT press, 2022. Available online [here](https://probml.github.io/pml-book/book1.html)
**How:** We try to follow Murphy's book closely, but the sequence of Chapters and Sections is different. The intention is to use notebooks as an introduction to the topic and Murphy's book as a resource.
* If working offline: Go through this notebook and read the book.
* If attending class in person: listen to me (!) but also go through the notebook in your laptop at the same time. Read the book.
* If attending lectures remotely: listen to me (!) via Zoom and (ideally) use two screens where you have the notebook open in 1 screen and you see the lectures on the other. Read the book.
**Optional reference (the "bible" by the "bishop"... pun intended 😆) :** Bishop, Christopher M. *Pattern recognition and machine learning*. Springer Verlag, 2006.
**References/resources to create this notebook:**
* [Car figure](https://korkortonline.se/en/theory/reaction-braking-stopping/)
Apologies in advance if I missed some reference used in this notebook. Please contact me if that is the case, and I will gladly include it here.
## **OPTION 1**. Run this notebook **locally in your computer**:
1. Confirm that you have the 3dasm conda environment (see Lecture 1).
2. Go to the 3dasm_course folder in your computer and pull the last updates of the [repository](https://github.com/bessagroup/3dasm_course):
```
git pull
```
3. Open command window and load jupyter notebook (it will open in your internet browser):
```
conda activate 3dasm
jupyter notebook
```
4. Open notebook of this Lecture.
## **OPTION 2**. Use **Google's Colab** (no installation required, but times out if idle):
1. go to https://colab.research.google.com
2. login
3. File > Open notebook
4. click on Github (no need to login or authorize anything)
5. paste the git link: https://github.com/bessagroup/3dasm_course
6. click search and then click on the notebook for this Lecture.
```python
# Basic plotting tools needed in Python.
import matplotlib.pyplot as plt # import plotting tools to create figures
import numpy as np # import numpy to handle a lot of things!
from IPython.display import display, Math # to print with Latex math
%config InlineBackend.figure_format = "retina" # render higher resolution images in the notebook
plt.style.use("seaborn") # style for plotting that comes from seaborn
plt.rcParams["figure.figsize"] = (8,4) # rescale figure size appropriately for slides
```
## Outline for today
* Understanding the Posterior Predictive Distribution (PPD)
- Solution and discussion of Homework of Lecture 6
**Reading material**: This notebook
## Solution to Homework of Lecture 6
### Summary of the model
1. The **observation distribution**:
$$
p(y|z) = \mathcal{N}\left(y | \mu_{y|z}=w z+b, \sigma_{y|z}^2\right) = \frac{1}{C_{y|z}} \exp\left[ -\frac{1}{2\sigma_{y|z}^2}(y-\mu_{y|z})^2\right]
$$
where $C_{y|z} = \sqrt{2\pi \sigma_{y|z}^2}$ is the **normalization constant** of the Gaussian pdf, and where $\mu_{y|z}=w z+b$, with $w$, $b$ and $\sigma_{y|z}^2$ being constants.
2. <font color='blue'>but now assuming a different **prior distribution**</font>: $p(z) = \mathcal{N}\left(z| \overset{\scriptscriptstyle <}{\mu}_z=3, \overset{\scriptscriptstyle <}{\sigma}_z^2=2^2
\right)$
As in Lecture 6, we start by using Bayes' rule applied to data to determine the <font color='green'>posterior</font>:
$\require{color}$
$$
{\color{green}p(z|y=\mathcal{D}_y)} = \frac{ {\color{blue}p(y=\mathcal{D}_y|z)}{\color{red}p(z)} } {p(y=\mathcal{D}_y)}
$$
The <font color='blue'>likelihood</font> is the same as in Lecture 6:
$$
{\color{blue}p(y=\mathcal{D}_y | z)} = \frac{1}{|w|^N} \cdot C \cdot \frac{1}{\sqrt{2\pi \sigma^2}} \exp\left[ -\frac{1}{2\sigma^2}(z-\mu)^2\right]
$$
where $\mu = \frac{w^2\sigma^2}{\sigma_{y|z}^2} \sum_{i=1}^N \mu_i = \frac{\sum_{i=1}^N y_i}{w N}-\frac{b}{w}$
$\sigma^2 = \frac{\sigma_{y|z}^2}{w^2 N}$, and
$C = \frac{1}{2\pi^{(N-1)/2}} \sqrt{\frac{\sigma^2}{\left( \frac{\sigma_{y|z}^2}{w^2}\right)^N}}
$
But now the marginal likelihood is different from Lecture 6 because we have a different prior:
$$
p(y=\mathcal{D}_y) = \frac{C\cdot C_M}{|w|^N}
$$
where $C_M = \frac{1}{\sqrt{2\pi\left(\sigma^2+\overset{\scriptscriptstyle <}{\sigma}_z^2\right)}} \exp\left[-\frac{1}{2\left(\sigma^2+\overset{\scriptscriptstyle <}{\sigma}_z^2\right)}\left(\mu - \overset{\scriptscriptstyle <}{\mu}_z\right)^2 \right]$.
(Algebra to get this result is in the notes below.)
#### Note: calculation of the marginal likelihood for Homework of Lecture 6
$$\begin{align}
p(y=\mathcal{D}_y) &= \int p(y=\mathcal{D}_y | z) p(z) dz \\
&= \int \frac{1}{|w|^N} C \cdot \mathcal{N}(z|\mu, \sigma^2)\cdot \mathcal{N}\left(z| \overset{\scriptscriptstyle <}{\mu}_z, \overset{\scriptscriptstyle <}{\sigma}_z^2\right) dz\\
&= \frac{C}{|w|^N} \int C_M\mathcal{N}\left(z\left|\frac{1}{\frac{1}{\sigma^2} + \frac{1}{\overset{\scriptscriptstyle <}{\sigma}_z^2}} \left( \frac{\mu}{\sigma^2} + \frac{\overset{\scriptscriptstyle <}{\mu}_z}{\overset{\scriptscriptstyle <}{\sigma}_z^2}\right), \frac{1}{\frac{1}{\sigma^2} + \frac{1}{\overset{\scriptscriptstyle <}{\sigma}_z^2}}\right.\right) dz = \frac{C\cdot C_M}{|w|^N} \\
\end{align}
$$
where $C_M = \frac{1}{\sqrt{2\pi\left(\sigma^2+\overset{\scriptscriptstyle <}{\sigma}_z^2\right)}} \exp\left[-\frac{1}{2\left(\sigma^2+\overset{\scriptscriptstyle <}{\sigma}_z^2\right)}\left(\mu - \overset{\scriptscriptstyle <}{\mu}_z\right)^2 \right]$
Therefore, the <font color='green'>posterior</font> will also be different:
$$\require{color}\begin{align}
{\color{green}p(z|y=\mathcal{D}_y)} &= \frac{ p(y=\mathcal{D}_y|z)p(z) } {p(y=\mathcal{D}_y)} \\
&= \frac{|w|^N}{C\cdot C_M} \cdot \frac{1}{|w|^N} C \cdot \mathcal{N}(z|\mu,\sigma^2) \cdot \mathcal{N}\left(z| \overset{\scriptscriptstyle <}{\mu}_z, \overset{\scriptscriptstyle <}{\sigma}_z^2\right) \\
&= \mathcal{N}\left(z| \overset{\scriptscriptstyle >}{\mu}_z, \overset{\scriptscriptstyle >}{\sigma}_z^2\right)
\end{align}
$$
where $\overset{\scriptscriptstyle >}{\mu}_z = \frac{1}{\frac{1}{\sigma^2} + \frac{1}{\overset{\scriptscriptstyle <}{\sigma}_z^2}} \left( \frac{\mu}{\sigma^2} + \frac{\overset{\scriptscriptstyle <}{\mu}_z}{\overset{\scriptscriptstyle <}{\sigma}_z^2}\right)$
and $\overset{\scriptscriptstyle >}{\sigma}_z^2 = \frac{1}{\frac{1}{\sigma^2} + \frac{1}{\overset{\scriptscriptstyle <}{\sigma}_z^2}}$
are the parameters of the <font color='green'>posterior</font> distribution, symbolized by the superscript $\overset{\scriptscriptstyle >}{(\cdot)}$.
Let us reflect on the differences between the posteriors we obtain for the two different priors we considered.
* When using the noninformative Uniform prior $p(z) = \frac{1}{C_z}$ (Lecture 6):
$$\require{color}\begin{align}
{\color{green}p(z|y=\mathcal{D}_y)}
&= \mathcal{N}(z|\mu, \sigma^2)
\end{align}
$$
* When using a Gaussian prior $p(z) = \mathcal{N}\left(z| \overset{\scriptscriptstyle <}{\mu}_z, \overset{\scriptscriptstyle <}{\sigma}_z^2\right)$ (this Lecture):
$$\require{color}\begin{align}
{\color{green}p(z|y=\mathcal{D}_y)} &= \mathcal{N}\left(z| \overset{\scriptscriptstyle >}{\mu}_z, \overset{\scriptscriptstyle >}{\sigma}_z^2\right) = \mathcal{N}\left(z\left|\frac{1}{\frac{1}{\sigma^2} + \frac{1}{\overset{\scriptscriptstyle <}{\sigma}_z^2}} \left( \frac{\mu}{\sigma^2} + \frac{\overset{\scriptscriptstyle <}{\mu}_z}{\overset{\scriptscriptstyle <}{\sigma}_z^2}\right), \frac{1}{\frac{1}{\sigma^2} + \frac{1}{\overset{\scriptscriptstyle <}{\sigma}_z^2}}\right.\right)
\end{align}
$$
The posterior is still a Gaussian but its mean and variance have been updated by the influence of the prior!
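As a quick numerical sanity check (our own illustrative numbers, not part of the homework statement), we can verify that the posterior mean is a precision-weighted average of the likelihood mean and the prior mean, and that the posterior variance shrinks below both:
```python
# Illustrative check of the Gaussian posterior update (made-up numbers)
mu, sigma = 6.0, 1.5                  # "likelihood" mean and std
mu_prior_z, sigma_prior_z = 3.0, 2.0  # Gaussian prior parameters
sigma2_post = 1.0/(1.0/sigma**2 + 1.0/sigma_prior_z**2)            # posterior variance
mu_post = sigma2_post*(mu/sigma**2 + mu_prior_z/sigma_prior_z**2)  # posterior mean
print(mu_post, np.sqrt(sigma2_post))  # mean lies between 3.0 and 6.0; variance shrinks
```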
Finally, the goal of calculating the posterior is to use it to determine the <font color='orange'>Posterior Predictive Distribution (<a title="Posterior Predictive Distribution">PPD</a>)</font> :
$$\require{color}
{\color{orange}p(y|\mathcal{D}_y)} = \int \underbrace{p(y|z)}_{\text{observation}\\ \text{distribution}} \overbrace{p(z|y=\mathcal{D}_y)}^{\text{posterior}} dz
$$
Considering the terms we found before, we get:
$$\begin{align}
p(y|\mathcal{D}_y) &= \int \underbrace{\frac{1}{|w|}\frac{1}{\sqrt{2\pi \left(\frac{\sigma_{y|z}}{w}\right)^2}} \exp\left\{ -\frac{1}{2\left(\frac{\sigma_{y|z}}{w}\right)^2}\left[z-\left(\frac{y-b}{w}\right)\right]^2\right\} }_{\text{observation}\\ \text{distribution}} \overbrace{\mathcal{N}\left(z| \overset{\scriptscriptstyle >}{\mu}_z, \overset{\scriptscriptstyle >}{\sigma}_z^2\right)}^{\text{posterior}} dz
\end{align}
$$
The calculation of this integral is similar to what we did in Lecture 6! The difference is that the posterior has a different mean and variance (indicated with the superscript) that originated from the choice of different prior!
So, we can fast forward to the result we obtained before! We just need to replace the symbols $\mu_z$ by $\overset{\scriptscriptstyle >}{\mu}_z$, and $\sigma_z^2$ for $\overset{\scriptscriptstyle >}{\sigma}_z^2$:
$$\require{color}
{\color{orange}p(y|\mathcal{D}_y)} = \frac{\tilde{C}}{|w|}
$$
where
$$\tilde{C} = \frac{1}{\sqrt{2\pi \left( \overset{\scriptscriptstyle >}{\sigma}_z^2 + \frac{\sigma_{y|z}^2}{w^2} \right)}}\exp\left[ - \frac{\left(\overset{\scriptscriptstyle >}{\mu}_z - \frac{y-b}{w}\right)^2}{2\left( \overset{\scriptscriptstyle >}{\sigma}_z^2+\frac{\sigma_{y|z}^2}{w^2}\right)}\right]$$
is the same constant as $C^*$ in Lecture 6, but replacing $\mu_z$ by $\overset{\scriptscriptstyle >}{\mu}_z$, and $\sigma_z^2$ for $\overset{\scriptscriptstyle >}{\sigma}_z^2$.
After a bit of algebra, we get to the following expression for the PPD:
$$\require{color}
\begin{align}
{\color{orange}p(y|\mathcal{D}_y)} &= \frac{1}{\sqrt{2\pi \left( \sigma_{y|z}^2 + w^2\overset{\scriptscriptstyle >}{\sigma}_z^2\right)}}\exp\left\{ - \frac{1}{2\left( \sigma_{y|z}^2 + w^2\overset{\scriptscriptstyle >}{\sigma}_z^2\right)}\left[y-\left(w\overset{\scriptscriptstyle >}{\mu}_z+b\right)\right]^2\right\} \\
&= \mathcal{N}\left(y \left| w\overset{\scriptscriptstyle >}{\mu}_z+b , \sigma_{y|z}^2 + w^2\overset{\scriptscriptstyle >}{\sigma}_z^2 \right.\right) \\
\end{align}
$$
where all of the terms have been defined before (for convenience, see them in the next cell as notes).
#### Note: PPD when using a Gaussian prior (Homework of Lecture 6)
$$\require{color}
{\color{orange}p(y|\mathcal{D}_y)} = \mathcal{N}\left(y \left| w\overset{\scriptscriptstyle >}{\mu}_z+b , \sigma_{y|z}^2 + w^2\overset{\scriptscriptstyle >}{\sigma}_z^2 \right.\right)$$
$b = \mu_{z_2} x^2 = 0.1 \cdot 75^2 = 562.5$
$w = x = 75$
$\sigma_{y|z}^2 = (x^2 \sigma_{z_2})^2=(75^2\cdot0.01)^2=56.25^2$.
$\overset{\scriptscriptstyle >}{\mu}_z = \frac{1}{\frac{1}{\sigma^2} + \frac{1}{\overset{\scriptscriptstyle <}{\sigma}_z^2}} \left( \frac{\mu}{\sigma^2} + \frac{\overset{\scriptscriptstyle <}{\mu}_z}{\overset{\scriptscriptstyle <}{\sigma}_z^2}\right) = \frac{\overset{\scriptscriptstyle <}{\sigma}_z^2 \sigma^2}{\overset{\scriptscriptstyle <}{\sigma}_z^2+\sigma^2} \left( \frac{\mu}{\sigma^2} + \frac{\overset{\scriptscriptstyle <}{\mu}_z}{\overset{\scriptscriptstyle <}{\sigma}_z^2}\right) = \frac{1}{\overset{\scriptscriptstyle <}{\sigma}_z^2+\sigma^2}\left( \mu \overset{\scriptscriptstyle <}{\sigma}_z^2 + \overset{\scriptscriptstyle <}{\mu}_z \sigma^2\right)$
$\overset{\scriptscriptstyle >}{\sigma}_z^2 = \frac{1}{\frac{1}{\sigma^2} + \frac{1}{\overset{\scriptscriptstyle <}{\sigma}_z^2}} = \frac{\overset{\scriptscriptstyle <}{\sigma}_z^2 \sigma^2}{\overset{\scriptscriptstyle <}{\sigma}_z^2+\sigma^2}$
$\sigma^2 = \frac{\sigma_{y|z}^2}{w^2 N} = \frac{(x^2\cdot\sigma_{z_2})^2}{x^2 N} = \frac{x^2\cdot\sigma_{z_2}^2}{N} $
$\mu = \frac{w^2 \sigma^2}{\sigma_{y|z}^2} \sum_{i=1}^{N} \mu_i = \cdots = \frac{\sum_{i=1}^N y_i}{w N}-\frac{b}{w}$
In order to see the explicit dependence on the observed data $\mathcal{D}_y$, we can also rewrite the PPD as:
$$\require{color}
{\color{orange}p(y|\mathcal{D}_y)} = \mathcal{N}\left(y \left| \mu^*, \sigma^* \right.\right)
$$
where
$
\mu^* = \frac{1}{1+\frac{x^2\sigma_{z_2}^2}{N \overset{\scriptscriptstyle <}{\sigma}_z^2}}\left[\frac{\sum_{i=1}^N y_i}{N} + \frac{x^2\sigma_{z_2}^2}{N \overset{\scriptscriptstyle <}{\sigma}_z^2}\left( w \overset{\scriptscriptstyle <}{\mu}_z + b\right) \right]
$
$
\left(\sigma^*\right)^2 = \sigma_{y|z}^2 + \frac{w^2\sigma^2\overset{\scriptscriptstyle <}{\sigma}_z^2}{\overset{\scriptscriptstyle <}{\sigma}_z^2+\sigma^2} = \sigma_{y|z}^2 + \frac{w^2 x^2 \sigma_{z_2}^2 \overset{\scriptscriptstyle <}{\sigma}_z^2}{N \overset{\scriptscriptstyle <}{\sigma}_z^2 + x^2 \sigma_{z_2}^2}
$
* What happens when $N \rightarrow \infty$ ?
When $N \rightarrow \infty$ the mean and variance of the PPD become:
$$
\mu^* = \frac{\sum_{i=1}^N y_i}{N} \equiv \text{Empirical mean}
$$
$$
\left(\sigma^*\right)^2 = \sigma_{y|z}^2 = (x^2 \sigma_{z_2})^2 \equiv \text{Variance caused only by the } z_2 \text{ rv}
$$
So, in this limit the PPD is simply:
$$\require{color}
{\color{orange}p(y|\mathcal{D}_y)} = \mathcal{N}\left(y \left| \frac{\sum_{i=1}^N y_i}{N}, \sigma_{y|z}^2 \right.\right) \quad \text{when } N\rightarrow \infty
$$
* <font color='blue'>This means that in the limit of $N \rightarrow \infty$ we have exactly the same result obtained when we used the noninformative Uniform prior!</font> Were you expecting this? Let's debate!
```python
# This cell is hidden during presentation. It's just to define a function to plot the governing model of
# the car stopping distance problem. Defining a function that creates a plot allows to repeatedly run
# this function on cells used in this notebook.
def car_fig_2rvs(ax):
x = np.linspace(3, 83, 1000)
mu_z1 = 1.5; sigma_z1 = 0.5; # parameters of the "true" p(z_1)
mu_z2 = 0.1; sigma_z2 = 0.01; # parameters of the "true" p(z_2)
mu_y = mu_z1*x + mu_z2*x**2 # From Homework of Lecture 4
sigma_y = np.sqrt( (x*sigma_z1)**2 + (x**2*sigma_z2)**2 ) # From Homework of Lecture 4
ax.set_xlabel("x (m/s)", fontsize=20) # create x-axis label with font size 20
ax.set_ylabel("y (m)", fontsize=20) # create y-axis label with font size 20
ax.set_title("Car stopping distance problem with two rv's", fontsize=20); # create title with font size 20
ax.plot(x, mu_y, 'k:', label="Governing model $\mu_y$")
ax.fill_between(x, mu_y - 1.9600 * sigma_y,
mu_y + 1.9600 * sigma_y,
color='k', alpha=0.2,
label='95% confidence interval ($\mu_y \pm 1.96\sigma_y$)') # plot 95% confidence interval
ax.legend(fontsize=15)
```
```python
# This cell is hidden during the presentation
from scipy.stats import norm # import the normal dist, as we learned before!
def samples_y_with_2rvs(N_samples,x): # observations/measurements/samples for car stop. dist. prob. with 2 rv's
mu_z1 = 1.5; sigma_z1 = 0.5;
mu_z2 = 0.1; sigma_z2 = 0.01;
samples_z1 = norm.rvs(mu_z1, sigma_z1, size=N_samples) # randomly draw samples from the normal dist.
samples_z2 = norm.rvs(mu_z2, sigma_z2, size=N_samples) # randomly draw samples from the normal dist.
samples_y = samples_z1*x + samples_z2*x**2 # compute the stopping distance for samples of z_1 and z_2
return samples_y # return samples of y
```
```python
# This cell is hidden during presentation
def HW_Lec6_PPD_comparison(N_samples): # PLOT PPD for Homework and compare it to data and PPD of Lecture 6
fig_car_PPD, ax_car_PPD = plt.subplots(1,2)
x = 75
mu_z2 = 0.1; sigma_z2 = 0.01
# Observation of N_samples from the true data:
empirical_y = samples_y_with_2rvs(N_samples, x) # Empirical measurements of N_samples at x=75
# Empirical mean and std directly calculated from observations:
empirical_mu_y = np.mean(empirical_y); empirical_sigma_y = np.std(empirical_y);
#
# Now define all the constants needed in the calculation of the PPD's obtained with each prior.
w = x
b = mu_z2*x**2
sigma_yGIVENz = np.sqrt((x**2*sigma_z2)**2) # sigma_y|z (comes from the stochastic influence of the z_2 rv)
sigma = np.sqrt(sigma_yGIVENz**2/(w**2*N_samples)) # std arising from the likelihood
mu = empirical_mu_y/w - b/w # mean arising from the likelihood (product of Gaussian densities for the data)
#
# Now, calculate PPD when using a UNIFORM prior (Lecture 6):
PPD_mu_y_UniformPrior = mu*w + b # same result if using: np.mean(empirical_y)
PPD_sigma_y_UniformPrior = np.sqrt(w**2*sigma**2+sigma_yGIVENz**2) # same as: np.sqrt((x**2*sigma_z2)**2*(1/N_samples + 1))
# Now, calculate PPD when using the GAUSSIAN prior (Homework of Lecture 6):
mu_prior_z = 3; sigma_prior_z = 2 # parameters of the Gaussian prior distribution
sigma_posterior_z = np.sqrt( (sigma_prior_z**2*sigma**2)/(sigma_prior_z**2+sigma**2) )# std of posterior
mu_posterior_z = sigma_posterior_z**2*( mu/(sigma**2) + mu_prior_z/(sigma_prior_z**2) ) # mean of posterior
PPD_mu_y_GaussianPrior = mu_posterior_z*w + b
PPD_sigma_y_GaussianPrior = np.sqrt(w**2*sigma_posterior_z**2+sigma_yGIVENz**2)
#
car_fig_2rvs(ax_car_PPD[0]) # a function I created to include the background plot of the governing model
for i in range(2): # create two plots (one is zooming in on the error bar)
ax_car_PPD[i].errorbar(x , empirical_mu_y,yerr=1.96*empirical_sigma_y, fmt='m*',
markersize=30, elinewidth=9);
ax_car_PPD[i].errorbar(x , PPD_mu_y_UniformPrior,yerr=1.96*PPD_sigma_y_UniformPrior,
color='#F39C12', fmt='*', markersize=15, elinewidth=6);
ax_car_PPD[i].errorbar(x , PPD_mu_y_GaussianPrior,yerr=1.96*PPD_sigma_y_GaussianPrior,
fmt='b*', markersize=10, elinewidth=3);
ax_car_PPD[i].scatter(x*np.ones_like(empirical_y),empirical_y, s=150,facecolors='none',
edgecolors='k', linewidths=2.0)
print("Ground truth : mean[y] = 675 & std[y] = 67.6")
print("Empirical values (purple) : mean[y] = %.2f & std[y] = %.2f" % (empirical_mu_y,empirical_sigma_y) )
print("PPD with Uniform Prior (orange): mean[y] = %.2f & std[y] = %.2f" % (PPD_mu_y_UniformPrior, PPD_sigma_y_UniformPrior))
print("PPD with Gaussian Prior (blue) : mean[y] = %.2f & std[y] = %.2f" % (PPD_mu_y_GaussianPrior,PPD_sigma_y_GaussianPrior))
fig_car_PPD.set_size_inches(15, 6) # scale figure to be wider (since there are 2 subplots)
```
```python
HW_Lec6_PPD_comparison(N_samples=2) # Plot data and the two PPD's considering different priors
```
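To see the effect of more data, you can re-run the same comparison with a larger number of observations (try it yourself); with more samples the likelihood dominates and the two PPDs become nearly indistinguishable:
```python
HW_Lec6_PPD_comparison(N_samples=100)  # more data: both priors give (almost) the same PPD
```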
### See you next class
Have fun!
|
open import FRP.JS.Nat using ( ℕ ; suc ; _+_ ; _≟_ )
open import FRP.JS.RSet
open import FRP.JS.Time using ( Time ; epoch )
open import FRP.JS.Delay using ( _ms )
open import FRP.JS.Behaviour
open import FRP.JS.Event using ( ∅ )
open import FRP.JS.Bool using ( Bool ; not ; true )
open import FRP.JS.QUnit using ( TestSuite ; ok◇ ; test ; _,_ ; ε )
module FRP.JS.Test.Behaviour where
infixr 2 _≟*_
_≟*_ : ⟦ Beh ⟨ ℕ ⟩ ⇒ Beh ⟨ ℕ ⟩ ⇒ Beh ⟨ Bool ⟩ ⟧
_≟*_ = map2 _≟_
tests : TestSuite
tests =
( test "≟*"
( ok◇ "[ 1 ] ≟* [ 1 ]" ([ 1 ] ≟* [ 1 ])
, ok◇ "[ 1 ] ≟* [ 0 ]" (not* ([ 1 ] ≟* [ 0 ]))
, ok◇ "[ 0 ] ≟* [ 1 ]" (not* ([ 0 ] ≟* [ 1 ])) )
, test "map"
( ok◇ "map suc [ 0 ] ≟* [ 1 ]" (map suc [ 0 ] ≟* [ 1 ])
, ok◇ "map suc [ 1 ] ≟* [ 1 ]" (not* (map suc [ 1 ] ≟* [ 1 ])) )
, test "join"
( ok◇ "join (map [ suc ] [ 0 ] ) ≟* [ 1 ]" (join (map (λ n → [ suc n ]) [ 0 ]) ≟* [ 1 ])
, ok◇ "join (map [ suc ] [ 1 ]) ≟* [ 1 ]" (not* (join (map (λ n → [ suc n ]) [ 1 ]) ≟* [ 1 ])) )
, test "hold"
( ok◇ "hold 1 ∅ ≟* [ 1 ]" (hold 1 ∅ ≟* [ 1 ])
, ok◇ "hold 0 ∅ ≟* [ 1 ]" (not* (hold 0 ∅ ≟* [ 1 ])) ) )
|
[GOAL]
α : Type u
inst✝³ : OrderedCancelCommMonoid α
β : Type u_1
inst✝² : One β
inst✝¹ : Mul β
inst✝ : Pow β ℕ
f : β → α
hf : Injective f
one : f 1 = 1
mul : ∀ (x y : β), f (x * y) = f x * f y
npow : ∀ (x : β) (n : ℕ), f (x ^ n) = f x ^ n
src✝ : OrderedCommMonoid β := orderedCommMonoid f hf one mul npow
a b c : β
bc : f (a * b) ≤ f (a * c)
⊢ f a * f b ≤ f a * f c
[PROOFSTEP]
rwa [← mul, ← mul]
|
Module NatPlayground.
Fixpoint beq_nat (n m : nat) : bool :=
match n with
| O => match m with
| O => true
| S m' => false
end
| S n' => match m with
| O => false
| S m' => beq_nat n' m'
end
end.
Fixpoint leb (n m : nat) : bool :=
match n with
| O => true
| S n' =>
match m with
| O => false
| S m' => leb n' m'
end
end.
Definition not_b (a : bool) :=
match a with
| true => false
| false => true
end.
Definition blt_nat (n m : nat) : bool := andb (leb n m) (not_b (beq_nat n m)).
Example test_blt_nat1: (blt_nat 2 2) = false.
Proof.
reflexivity.
Qed.
Example test_blt_nat2: (blt_nat 2 4) = true.
Proof.
reflexivity.
Qed.
Example test_blt_nat3: (blt_nat 4 2) = false.
Proof.
reflexivity.
Qed.
Theorem simple_minmax : forall n m, n <= m -> min n m <= max n m.
Proof.
intros.
rewrite min_l.
rewrite max_r.
assumption. assumption. assumption.
Qed.
End NatPlayground.
|
{-# LANGUAGE BangPatterns #-}
module MnistData (loadDataWrapper) where
import Codec.Compression.GZip (decompress)
import qualified Data.ByteString.Lazy as BL
import qualified Numeric.LinearAlgebra as H
loadDataWrapper :: IO ([(H.Vector H.R, H.Vector H.R)], [(H.Vector H.R, Int)], [(H.Vector H.R, Int)])
loadDataWrapper = do
(_, _, testImage) <- loadImage "data/t10k-images-idx3-ubyte.gz"
(_, testLabel) <- loadLabel "data/t10k-labels-idx1-ubyte.gz"
(_, _, trainImage) <- loadImage "data/train-images-idx3-ubyte.gz"
(_, trainLabel) <- loadLabel "data/train-labels-idx1-ubyte.gz"
let !testData = zip testImage testLabel
let (!trainData', !validateDate) = splitAt 50000 $ zip trainImage trainLabel
let trainData = map (\(a, b)-> (a, labelToVec b)) trainData'
return (trainData, validateDate, testData)
labelToVec :: Int -> H.Vector H.R
labelToVec n = H.fromList $ replicate n 0 ++ [1.0] ++ replicate (9-n) 0
loadLabel :: FilePath -> IO (Int, [Int])
loadLabel fp = do
content <- decompress <$> BL.readFile fp
let (hd, dat) = BL.splitAt 8 content
let (magic, size) = BL.splitAt 4 hd
case BL.unpack magic of
[0, 0, 8, 1] -> return (readInt size, take (readInt size) (parseData dat))
_ -> return (0, [])
loadImage :: FilePath -> IO (Int, (Int, Int), [H.Vector H.R])
loadImage fp = do
content <- decompress <$> BL.readFile fp
let (hd, dat) = BL.splitAt 16 content
let (magic, tl) = BL.splitAt 4 hd
let (size, xy) = BL.splitAt 4 tl
let (rowW, colW) = BL.splitAt 4 xy
let (row, col) = (readInt rowW, readInt colW)
let images = map (H.fromList . map ((/256.0). fromIntegral)) (every (row * col) $ BL.unpack dat)
case BL.unpack magic of
[0, 0, 8, 3] -> return (readInt size, (row, col), images)
_ -> return (0, (0, 0), images)
parseData :: BL.ByteString -> [Int]
parseData = map fromIntegral . BL.unpack
readInt :: Integral a => BL.ByteString -> a
readInt bs = foldl (\a b -> a * 256 + b) 0 $ map fromIntegral $ BL.unpack bs
every :: Int -> [a] -> [[a]]
every _ [] = []
every n xs = hd : every n tl
  where
    (hd, tl) = splitAt n xs
|
CONGRATULATIONS TO OUR 2018 SCHOLARSHIP RECIPIENTS!
Imani Ma'at is a visionary, artist, dancer, choreographer, and photographer from Norfolk, VA. She has trained in multiple dance styles, including Improvisation, Experimental, Hip-hop, Modern, Contemporary, and traditional West African dance practices. Her primary focus is West African dance and its musical traditions. Through these practices she produces many works with the intention of creating safe spaces, unifying the community, and helping others develop higher consciousness through the performing and visual arts. Her goals are to bring forth spiritual healing and to encourage others through her work as a versatile artist.
|
\documentclass[11pt]{article}
\usepackage[english]{babel}
\usepackage{a4}
\usepackage{latexsym}
\usepackage[
colorlinks,
pdftitle={IGV solutions week 20},
pdfsubject={Werkcollege Inleiding Gegevensverwerking week 20},
pdfauthor={Laurens Bronwasser, Martijn Vermaat}
]{hyperref}
\title{IGV solutions week 20}
\author{
Laurens Bronwasser\footnote{E-mail: [email protected], homepage: http://www.cs.vu.nl/\~{}lmbronwa/}
\and
Martijn Vermaat\footnote{E-mail: [email protected], homepage: http://www.cs.vu.nl/\~{}mvermaat/}
}
\date{16th May 2003}
\begin{document}
\maketitle
\begin{abstract}
In this document we present the solutions to the assignment for werkcollege \emph{\mbox{Inleiding Gegevensverwerking}} \mbox{week 20}.
\end{abstract}
\tableofcontents
\newpage
\section{Assignment}
Perform ``data structure extraction'' for the Cobol program \verb|order.cob| from the DBMAIN examples directory. More specifically, you are asked to perform the following actions:
\begin{itemize}
\item
Start a new DB-MAIN project.
\item
Extract the raw physical schema from \verb|order.cob| with File $\rightarrow$ Extract $\rightarrow$ Cobol.
\item
\textbf{Refinement of field structure:} There are unusually long fields \verb|CUS-DESCR|, \verb|CUS-HIST|, \verb|ORD-DETAIL| and \verb|STK-NAME|. Determine the \emph{variable dependency graph} for these fields to see if they are used in any way that allows us to refine their structure. Hint: Check, for example, how the attribute \verb|CUS-DESCR| of the entity \verb|CUS| and the data field \verb|DESCRIPTION| from the program relate to each other. Perform all your refinements of the entities by defining appropriate nested attributes for the structured attributes.
\item
\textbf{Explicit foreign keys:} The physical schema is unconnected in the sense that we have no explicit \emph{ref}erence constraints between the entities. Investigate the Cobol program to propose foreign key attributes. Hint: Check, for example, if \verb|ORD.ORD-CUSTOMER| could possibly refer to \verb|CUS|. Use ``add constraint'' to make explicit all your foreign key findings.
\end{itemize}
You are requested to perform the steps in every detail by using DB-MAIN, to document
all your findings, arguments, and specific actions. Please deliver your documentation
and the resulting physical model (i.e., printout of diagram). Note that the documentation
of your extraction is as crucial as the final model if you want to get your solution
accepted.
\textbf{Note:} There is a designated reader chapter for this week.
\newpage
\section{Solution}
A printout of the physical model can be found in a separate document. In this document we describe the steps we took to create the desired diagram.
\subsection{Refinement of field structure}
We look at some unusually long fields and examine how they relate to some data fields. We define appropriate nested attributes for these structured attributes. Below we will look at these fields one by one.
\subsubsection{CUS-DESCR}
\end{document}
|
Load LFindLoad.
From lfind Require Import LFind.
From QuickChick Require Import QuickChick.
From adtind Require Import goal33.
Derive Show for natural.
Derive Arbitrary for natural.
Instance Dec_Eq_natural : Dec_Eq natural.
Proof. dec_eq. Qed.
Lemma conj18synthconj4 : forall (lv0 : natural) (lv1 : natural) (lv2 : natural) (lv3 : natural), (@eq natural (Succ (plus lv0 (plus lv1 lv2))) (plus lv3 (plus (Succ Zero) lv2))).
Admitted.
QuickChick conj18synthconj4.
|
[STATEMENT]
lemma Cons_in_shuffles_iff:
"z # zs \<in> shuffles xs ys \<longleftrightarrow>
(xs \<noteq> [] \<and> hd xs = z \<and> zs \<in> shuffles (tl xs) ys \<or>
ys \<noteq> [] \<and> hd ys = z \<and> zs \<in> shuffles xs (tl ys))"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (z # zs \<in> shuffles xs ys) = (xs \<noteq> [] \<and> hd xs = z \<and> zs \<in> shuffles (tl xs) ys \<or> ys \<noteq> [] \<and> hd ys = z \<and> zs \<in> shuffles xs (tl ys))
[PROOF STEP]
by (induct xs ys rule: shuffles.induct) auto
|
import numpy as np
import matplotlib.pyplot as plt
M = 101 # number of grid nodes in the metal
N = 601 # total number of grid nodes
t = np.loadtxt('result/t.txt', dtype=float)
r = np.loadtxt('result/r.txt', dtype=float)
#u = np.loadtxt('result/u.txt', dtype=float)
fig = plt.figure()
plt.plot(t*(10 ** 6), r[:, 0]*(10 ** 3), label = 'r_l(t)', linestyle = "-", color = 'k')
plt.plot(t*(10 ** 6), r[:, N - 1]*(10 ** 3), label = 'ex_r(t)', linestyle = "--", color = 'k')
plt.plot(t*(10 ** 6), r[:, M - 1]*(10 ** 3), label = 'r_r(t)', linestyle = "-.", color = 'k')
plt.xlabel('t, µs')
plt.ylabel('r, mm')
plt.vlines(2.944292622937318e-05*(10**6), 0, 250, linewidth = 1, color = "r", linestyle = '--')
plt.text(2.944292622937318e-05*(10**6) - 1, 125, '29.4')
plt.grid()
plt.legend()
plt.show()
|
#include <stdio.h>
#include <unistd.h>
#include <assert.h>
#include <math.h>
#include <gsl/gsl_vector.h>
#include <gsl/gsl_rng.h>
#include "dlib.h"
#include "svec.h"
#include "rng.h"
const char *usage = "Usage: scode [OPTIONS] < file\n"
"file should have columns of arbitrary tokens\n"
"-r RESTART: number of restarts (default 1)\n"
"-i NITER: number of iterations over data (default UINT32_MAX)\n"
"-t THRESHOLD: quit if logL increase for iter <= this (default .001)\n"
"-d NDIM: number of dimensions (default 25)\n"
"-z Z: partition function approximation (default 0.166)\n"
"-p PHI0: learning rate parameter (default 50.0)\n"
"-u ETA0: learning rate parameter (default 0.2)\n"
"-s SEED: random seed (default 0)\n"
"-c calculate real Z (default false)\n"
"-w The first line of the input is weights (default false)\n"
"-v verbose messages (default false)\n";
//typedef uint32_t u32;
//typedef uint64_t u64;
u32 RESTART = 1;
u32 NITER = UINT32_MAX;
double THRESHOLD = 0.001;
u32 NDIM = 25;
double Z = 0.166;
double PHI0 = 50.0;
double ETA0 = 0.2;
unsigned long int SEED = 0;
bool CALCZ = false;
bool WEIGHT = false;
bool VERBOSE = false;
u32 NTOK = 0;
u64 NTUPLE = 0;
const gsl_rng_type *rng_T;
gsl_rng *rng_R = NULL;
darr_t data;
u64 **update_cnt;
double * weight = NULL;
double * uweight = NULL; /*Updated weights*/
u64 **cnt;
#define frq(i,j) ((double)cnt[i][j]*NTOK/len(data))
svec **vec;
svec **best_vec;
svec dummy_vec;
sym_t qmax;
sym_t NULLFEATID;
#define NULLFEATMARKER "/XX/"
int main(int argc, char **argv);
void init_rng();
void free_rng();
u64 init_data();
u32 init_weight();
void free_weight();
void randomize_vectors();
void copy_best_vec();
void free_data();
void update_tuple(sym_t *t);
double logL();
double calcZ();
#define vmsg(...) if(VERBOSE)msg(__VA_ARGS__)
int main(int argc, char **argv) {
int opt;
while((opt = getopt(argc, argv, "r:i:t:d:z:p:u:s:cwv")) != -1) {
switch(opt) {
case 'r': RESTART = atoi(optarg); break;
case 'i': NITER = atoi(optarg); break;
case 't': THRESHOLD = atof(optarg); break;
case 'd': NDIM = atoi(optarg); break;
case 'z': Z = atof(optarg); break;
case 'p': PHI0 = atof(optarg); break;
case 'u': ETA0 = atof(optarg); break;
case 's': SEED = atoi(optarg); break;
case 'c': CALCZ = true; break;
case 'w': WEIGHT = true; break;
case 'v': VERBOSE = true; break;
default: die("%s",usage);
}
}
vmsg("scode -r %u -i %u -t %g -d %u -z %g -p %g -u %g -s %lu %s%s%s",
RESTART, NITER, THRESHOLD, NDIM, Z, PHI0, ETA0, SEED,
(CALCZ ? "-c " : ""), (WEIGHT ? "-w " : ""), (VERBOSE ? "-v " : ""));
init_rng();
if (SEED) gsl_rng_set(rng_R, SEED);
if (WEIGHT) NTOK = init_weight();
NTUPLE = init_data();
vmsg("Read %zu tuples %u uniq tokens", NTUPLE, qmax);
double best_logL = 0;
for (u32 start = 0; start < RESTART; start++) {
randomize_vectors();
double ll = logL();
vmsg("Restart %u/%u logL0=%g best=%g", 1+start, RESTART, ll, best_logL);
if (CALCZ) vmsg("Z=%g (approx %g)", calcZ(), Z);
for (u32 iter = 0; iter < NITER; iter++) {
for (u64 di = 0; di < NTUPLE; di++) {
update_tuple(&val(data, di * NTOK, sym_t));
}
double ll0 = ll;
ll = logL();
vmsg("Iteration %u/%u logL=%g", 1+iter, NITER, ll);
if (ll - ll0 <= THRESHOLD) break;
}
if (start == 0 || ll > best_logL) {
vmsg("Updating best_vec with logL=%g", ll);
best_logL = ll;
copy_best_vec();
}
vmsg("Restart %u/%u logL1=%g best=%g", 1+start, RESTART, ll, best_logL);
if (CALCZ) vmsg("Z=%g (approx %g)", calcZ(), Z);
}
for (u32 t = 0; t < NTOK; t++) {
for (sym_t q = 1; q <= qmax; q++) {
if (best_vec[t][q] == NULL) continue;
printf("%u:%s\t%zu\t", t, sym2str(q), cnt[t][q]);
svec_print(best_vec[t][q]);
putchar('\n');
}
}
fflush(stdout);
free_data();
free_rng();
if (WEIGHT) free_weight();
symtable_free();
dfreeall();
fprintf(stderr, "%f\n", best_logL);
vmsg("bye");
}
double logL() {
double l = 0;
for (u64 i = 0; i < NTUPLE; i++) {
sym_t *t = &val(data, i * NTOK, sym_t);
sym_t x = t[0];
sym_t y = t[1];
float px = frq(0, x);
float py = frq(1, y);
svec vx = vec[0][x];
svec vy = vec[1][y];
float xy = svec_sqdist(vx, vy);
l += log(px * py) - xy;
}
return (l / NTUPLE - log(Z));
}
double calcZ() {
double z = 0;
for (sym_t x = 1; x <= qmax; x++) {
if (VERBOSE && (x % 1000 == 0)) fputc('.', stderr);
if (cnt[0][x] == 0) continue;
float px = frq(0, x);
svec vx = vec[0][x];
for (sym_t y = 1; y <= qmax; y++) {
if (cnt[1][y] == 0) continue;
float py = frq(1, y);
svec vy = vec[1][y];
float xy = svec_sqdist(vx, vy);
z += px * py * exp(-xy);
}
}
if (VERBOSE) fputc('\n', stderr);
return z;
}
void update_tuple(sym_t *t) {
/*weighted update*/
static svec *u = NULL;
static svec *v = NULL;
static svec dx = NULL;
if (u == NULL) u = _d_malloc(NTOK * sizeof(svec));
if (v == NULL) v = _d_malloc(NTOK * sizeof(svec));
if (dx == NULL) dx = svec_alloc(NDIM);
for (u32 i = 0; i < NTOK; i++) u[i] = vec[i][t[i]];
for (u32 i = 0; i < NTOK; i++) {
/* Sampling values from the marginal distributions. */
/* Can this be done once, or do we have to resample for every x? */
if(i > 0 && t[i] == NULLFEATID) continue;
for (u32 j = 0; j < NTOK; j++) {
if (j==i) { v[j] = u[i]; continue;}
u64 r = gsl_rng_get(rng_R);
r = (r << 32) | gsl_rng_get(rng_R);
r = r % NTUPLE;
sym_t y = val(data, r * NTOK + j, sym_t);
v[j] = vec[j][y];
if(i > 0) break;
}
/* Compute the move for u[i] */
svec_set_zero(dx);
double ww;
for (u32 j = 0; j < NTOK; j++) {
if (j == i) continue;
ww = weight == NULL ? 1 : (i > 0 ? weight[i] : weight[j]);
double push = 0, pull = 0;
if (v[j] == NULL) v[j] = dummy_vec;
else push = exp(-svec_sqdist(u[i], v[j])) / Z;
if(u[j] == NULL) u[j] = dummy_vec;
else pull = 1;
if(push != 0 || pull != 0){
for (u32 d = 0; d < NDIM; d++) {
float dxd = svec_get(dx, d);
float x = svec_get(u[i], d);
float y = svec_get(u[j], d);
float z = svec_get(v[j], d);
svec_set(dx, d, dxd + ww * ( pull * (y - x) + push * (x - z)));
}
}
/*restore the vectors to original forms*/
if(push == 0) v[j] = NULL;
if(pull == 0) u[j] = NULL;
if(i > 0) break;
}
/* Apply the move scaled by learning parameter */
u64 cx = update_cnt[i][t[i]]++;
float nx = ETA0 * (PHI0 / (PHI0 + cx));
svec_scale(dx, nx);
svec_add(u[i], dx);
svec_normalize(u[i]);
}
}
u32 init_weight(){
u32 size = 100, i = 0;
weight = _d_malloc(size * sizeof(double));
forline (buf, NULL) {
fortok (tok, buf) {
weight[i] = atof(tok);
assert(weight[i++] >= 0);
if(i >= size) { /* grow when the buffer is actually full (was a hard-coded 100) */
size *= 2;
weight = _d_realloc(weight, size * sizeof(double)); /* assuming _d_realloc takes a byte count like realloc */
}
}
assert(i > 0);
break;
}
return i;
}
void free_weight() {
if (weight != NULL) _d_free(weight);
}
u64 init_data() {
qmax = 0;
data = darr(0, sym_t);
forline (buf, NULL) {
u32 ntok = 0;
fortok (tok, buf) {
sym_t q = str2sym(tok, true);
if (q > qmax) qmax = q;
size_t lendata = len(data);
val(data, lendata, sym_t) = q;
if(strcmp(tok, NULLFEATMARKER) == 0) NULLFEATID = q;
ntok++;
}
if(NTOK == 0) NTOK = ntok;
assert(ntok == NTOK); //Each line has equal number of tokens
}
assert(NTOK > 0);
update_cnt = _d_malloc(NTOK * sizeof(ptr_t));
cnt = _d_malloc(NTOK * sizeof(ptr_t));
vec = _d_malloc(NTOK * sizeof(ptr_t));
best_vec = _d_malloc(NTOK * sizeof(ptr_t));
dummy_vec = svec_alloc(NDIM);
svec_zero(dummy_vec);
uweight = _d_calloc(NTOK, sizeof(double));
for (u32 i = 0; i < NTOK; i++) {
update_cnt[i] = _d_calloc(qmax+1, sizeof(u64));
cnt[i] = _d_calloc(qmax+1, sizeof(u64));
vec[i] = _d_calloc(qmax+1, sizeof(svec));
best_vec[i] = _d_calloc(qmax+1, sizeof(svec));
}
u64 N = len(data) / NTOK;
for (u64 i = 0; i < N; i++) {
sym_t *p = &val(data, i * NTOK, sym_t);
for (u32 j = 0; j < NTOK; j++) {
sym_t k = p[j];
assert(k <= qmax);
cnt[j][k]++;
if(k == NULLFEATID){
vec[j][k] = best_vec[j][k] = NULL;
}
else if (vec[j][k] == NULL) {
vec[j][k] = svec_alloc(NDIM);
best_vec[j][k] = svec_alloc(NDIM);
}
}
}
return N;
}
void free_data() {
for (u32 i = 0; i < NTOK; i++) {
for (sym_t j = 0; j <= qmax; j++) {
if (vec[i][j] != NULL) {
svec_free(vec[i][j]);
svec_free(best_vec[i][j]);
}
}
_d_free(best_vec[i]);
_d_free(vec[i]);
_d_free(cnt[i]);
_d_free(update_cnt[i]);
}
_d_free(uweight);
svec_free(dummy_vec);
_d_free(best_vec);
_d_free(vec);
_d_free(cnt);
_d_free(update_cnt);
darr_free(data);
}
void randomize_vectors() {
for (u32 j = 0; j < NTOK; j++) {
for (sym_t q = 1; q <= qmax; q++) {
if (vec[j][q] != NULL) {
svec_randomize(vec[j][q]);
update_cnt[j][q] = 0;
}
}
}
}
void copy_best_vec() {
for (u32 j = 0; j < NTOK; j++) {
for (sym_t q = 1; q <= qmax; q++) {
if (vec[j][q] != NULL) {
svec_memcpy(best_vec[j][q], vec[j][q]);
}
}
}
}
void init_rng() {
gsl_rng_env_setup();
rng_T = gsl_rng_mt19937;
rng_R = gsl_rng_alloc(rng_T);
}
void free_rng() {
gsl_rng_free(rng_R);
}
|
If $S$ is a nonempty set of real numbers that is bounded below, then the infimum of $S$ is in the closure of $S$.
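A hedged formalization sketch of this statement in Lean 4 with Mathlib; the lemma name csInf_mem_closure and its argument order are assumptions about the Mathlib API rather than something stated here.

import Mathlib

-- If S is a nonempty set of reals that is bounded below, its infimum lies in
-- the closure of S; csInf_mem_closure is assumed to be the relevant Mathlib lemma.
example (S : Set ℝ) (hne : S.Nonempty) (hbdd : BddBelow S) :
    sInf S ∈ closure S :=
  csInf_mem_closure hne hbdd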
|
A life reinsurer is seeking to hire a Senior Actuarial Manager to manage a team responsible for the group's Solvency II reporting processes and activities. You will act as the key liaison for internal and external stakeholders, take an active part in group projects aimed at process improvements, and be responsible for the continued coaching and development of your direct reports. You will also manage the activities of the sub-team that handles SII cashflow projections while leading the production of actuarial results. You will be a qualified actuary with relevant experience. Some working from home is possible within this role, alongside a genuinely good approach to work-life balance.
|
#ifndef __INCLUDE_EMUPLUSPLUS__
#define __INCLUDE_EMUPLUSPLUS__
#include <gsl/gsl_vector.h>
#include <gsl/gsl_matrix.h>
#include <string>
#include <vector>
extern "C"{
#include "multi_modelstruct.h"
#include "multivar_support.h"
}
using namespace std;
/**
 * @file emuplusplus.h
 * \brief A simple C++ interface to a trained emulator.
 *
 * Instances of the emulator class are built from a trained emulator;
 * an instance can then be sampled at a point in the parameter space
 * with QueryEmulator.
 *
 * Instances must be initialized from an interactive_emulator state file,
 * which contains all the data for a trained emulator.
 *
 */
class emulator{
public:
emulator(string StateFilePath); // default constructor will return Y values
emulator(string StateFilePath, bool PcaOnly);
~emulator();
/** query the emulator with a vector xpoint in the parameter space
* could add a method to output the means, errors and a covariance matrix */
void QueryEmulator(const vector<double> &xpoint, vector<double> &Means, vector<double> &Errors);
/**
* get the emulator pca decomp
*/
void getEmulatorPCA(vector<double> *pca_evals, vector< vector<double> > *pca_evecs, vector<double> *pca_mean);
int getRegressionOrder(void){return(the_model->regression_order);};
int getCovFnIndex(void){return(the_model->cov_fn_index);};
int number_params;
int number_outputs;
private:
// if true the values are output in the pca space, otherwise they're in the real space
bool outputPCAValues;
string StateFilePath;
multi_modelstruct *the_model; // the c structure which defines the model
multi_emulator *the_emulator; // the c structure which defines the emulator
gsl_vector *the_emulate_point;
gsl_vector *the_emulate_mean;
gsl_vector *the_emulate_var;
};
#endif
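// ---------------------------------------------------------------------------
// Hedged usage sketch (not part of the original header): how a caller in a
// separate .cpp file might load and query a trained emulator through this
// interface. The state-file name "emulator_state.dat" and the choice of query
// point are illustrative assumptions.
// ---------------------------------------------------------------------------
#include <iostream>
#include <vector>
#include "emuplusplus.h"

int main() {
    emulator emu("emulator_state.dat");                  // trained emulator state file
    std::vector<double> xpoint(emu.number_params, 0.5);  // a point in parameter space
    std::vector<double> means, errors;
    emu.QueryEmulator(xpoint, means, errors);            // mean and error per output
    for (int i = 0; i < emu.number_outputs; ++i) {
        std::cout << means[i] << " +/- " << errors[i] << "\n";
    }
    return 0;
}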
|
(************************************************************************)
(* * The Coq Proof Assistant / The Coq Development Team *)
(* v * INRIA, CNRS and contributors - Copyright 1999-2018 *)
(* <O___,, * (see CREDITS file for the list of authors) *)
(* \VV/ **************************************************************)
(* // * This file is distributed under the terms of the *)
(* * GNU Lesser General Public License Version 2.1 *)
(* * (see LICENSE file for the text of the license) *)
(************************************************************************)
Require Import XRbase.
Require Import XRfunctions.
Require Import XRseries.
Require Import XPartSum.
Require Import Omega.
Require Import Setoid Morphisms.
Local Open Scope XR_scope.
Set Implicit Arguments.
Local Open Scope nat_scope.
Lemma plus_S_minus : forall n m : nat, m<=n -> S n = m + S(n - m).
Proof.
induction n as [ | n i].
{ simpl. intros m le. apply le_n_0_eq in le. subst m. simpl. reflexivity. }
destruct m.
{ simpl. reflexivity. }
intros. simpl. rewrite (i m).
{ reflexivity. }
apply Peano.le_S_n. assumption.
Qed.
Lemma SnPSm_SSnPm : forall n m : nat, S n + S m = S (S n) + m.
Proof.
intros. simpl. rewrite <- plus_n_Sm. reflexivity.
Qed.
Lemma nMSm_nMmM1 : forall n m, n - (S m) = (n - m) - 1.
Proof.
induction n as [ | n i ]. simpl. reflexivity.
intro m. destruct m. simpl. reflexivity.
rewrite Nat.sub_succ.
rewrite (i m). clear i.
rewrite Nat.sub_succ.
reflexivity.
Qed.
Local Close Scope nat_scope.
Section Sigma.
Variable f : nat -> R.
Definition sigma (low high:nat) : R :=
sum_f_R0 (fun k:nat => f (low + k)) (high - low).
Lemma specific1 : forall (k m:nat),
sum_f_R0 (fun x:nat => f (S (S k) + x)) m =
sum_f_R0 (fun x:nat => f (S k + S x)) m.
Proof.
intros k m. apply sum_eq. intros i im.
rewrite <- plus_n_Sm.
reflexivity.
Qed.
Lemma specific2 : forall (k m:nat),
sum_f_R0 (fun x:nat => f (S (k + x))) m =
sum_f_R0 (fun x:nat => f (k + S x)) m.
Proof.
intros k m. apply sum_eq. intros i im.
rewrite plus_n_Sm. reflexivity.
Qed.
Lemma sigma_S_r : forall (low high:nat),
(low <= high)%nat ->
sigma low high + f (S high) = sigma low (S high).
Proof.
intros low high h. unfold sigma.
rewrite <- minus_Sn_m.
simpl. rewrite <- plus_S_minus.
reflexivity.
assumption. assumption.
Qed.
Lemma sigma_S_l : forall low high : nat,
(S low < high)%nat ->
sigma (S low) high = f (S low) + sigma (S (S low)) high.
Proof.
intros low high kh.
unfold sigma.
rewrite (Nat.sub_succ_r high (S low)).
rewrite <- Nat.add_0_r at 2.
rewrite specific1.
apply decomp_sum.
apply lt_minus_O_lt.
assumption.
Qed.
Lemma snn : forall n : nat, sigma n n = f n.
Proof.
intro n.
unfold sigma.
rewrite <- minus_n_n.
simpl.
rewrite <- plus_n_O.
reflexivity.
Qed.
Lemma sigma_0_sumf : forall high : nat, sigma 0 high = sum_f_R0 f high.
Proof.
intro high.
unfold sigma. simpl. rewrite <- minus_n_O.
reflexivity.
Qed.
Definition shift {T:Type} (f:nat->T) := fun n => f (S n).
Definition shift' {T:Type} (f:nat->T) (base:nat) := fun n => f (base + S n)%nat.
Lemma sigma_S_sumf : forall (low high : nat), sigma (S low) high = sum_f_R0 (shift' f low) (high - S low).
Proof.
intros low high.
unfold sigma. unfold shift'. simpl.
rewrite specific2. reflexivity.
Qed.
Lemma sigma_1_sumf : forall high : nat, sigma 1 high = sum_f_R0 (shift f) (high-1).
Proof.
intro high.
unfold sigma. simpl. unfold shift.
reflexivity.
Qed.
Lemma decomp_sum_with_shift : forall (An : nat -> R) (N : nat),
(0 < N)%nat -> sum_f_R0 An N = An 0%nat + sum_f_R0 (shift An) (Nat.pred N).
Proof.
apply decomp_sum.
Qed.
Theorem sigma_split :
forall low high k:nat,
(low <= k)%nat ->
(k < high)%nat -> sigma low high = sigma low k + sigma (S k) high.
Proof.
intros low high k lk kh.
induction k as [| k i].
{
apply le_n_0_eq in lk. (* x <= 0 -> x = 0 *)
rewrite <- lk. (* low = 0 *)
rewrite snn.
rewrite sigma_0_sumf.
rewrite sigma_1_sumf.
rewrite decomp_sum_with_shift. (* sum f n = f 0 + sum f' n *)
{
rewrite Nat.sub_1_r. (* n - 1 = pred n *)
reflexivity.
}
assumption.
}
apply Compare.le_le_S_eq in lk. (* a <= S b -> S a <= S b \/ a = S b *)
destruct lk as [lk | lk].
{
apply Peano.le_S_n in lk. (* S n <= S m -> n <= m *)
rewrite <- sigma_S_r. (* sum n (S m) = sum n m + f(S m) *)
{
rewrite Rplus_assoc.
rewrite <- sigma_S_l. (* f n + sum (S n) m = sum n m *)
{
apply i. (* induction hypothesis *)
{ assumption. }
apply lt_trans with (S k).
{ apply lt_n_Sn. }
assumption.
}
assumption.
}
assumption.
}
rewrite <- lk. (* low = S k *)
rewrite snn.
rewrite plus_n_O at 2.
rewrite sigma_S_sumf.
simpl.
rewrite Nat.sub_succ_r. (* n - S m = pred (n - m) *)
apply decomp_sum.
apply lt_minus_O_lt. (* m < n -> 0 < n - m *)
apply le_lt_trans with (S k).
{
rewrite lk.
apply le_n. (* n <= n *)
}
assumption.
Qed.
Theorem sigma_diff :
forall low high k:nat,
(low <= k)%nat ->
(k < high)%nat -> sigma low high - sigma low k = sigma (S k) high.
Proof.
intros low high k H1 H2; symmetry ; rewrite (sigma_split H1 H2).
unfold Rminus.
rewrite Rplus_comm.
rewrite <- Rplus_assoc.
rewrite Rplus_opp_l.
rewrite Rplus_0_l.
reflexivity.
Qed.
Theorem sigma_diff_neg :
forall low high k:nat,
(low <= k)%nat ->
(k < high)%nat -> sigma low k - sigma low high = - sigma (S k) high.
Proof.
intros low high k H1 H2; rewrite (sigma_split H1 H2).
unfold Rminus.
rewrite Ropp_plus_distr.
repeat rewrite <- Rplus_assoc.
rewrite Rplus_opp_r.
rewrite Rplus_0_l.
reflexivity.
Qed.
Theorem sigma_first :
forall low high:nat,
(low < high)%nat -> sigma low high = f low + sigma (S low) high.
Proof.
intros low high H1; generalize (lt_le_S low high H1); intro H2;
generalize (lt_le_weak low high H1); intro H3;
replace (f low) with (sigma low low).
apply sigma_split.
apply le_n.
assumption.
unfold sigma; rewrite <- minus_n_n.
simpl.
replace (low + 0)%nat with low; [ reflexivity | ring ].
Qed.
Theorem sigma_last :
forall low high:nat,
(low < high)%nat -> sigma low high = f high + sigma low (pred high).
Proof.
intros low high H1; generalize (lt_le_S low high H1); intro H2;
generalize (lt_le_weak low high H1); intro H3;
replace (f high) with (sigma high high).
rewrite Rplus_comm; cut (high = S (pred high)).
intro; pattern high at 3; rewrite H.
apply sigma_split.
apply le_S_n; rewrite <- H; apply lt_le_S; assumption.
apply lt_pred_n_n; apply le_lt_trans with low; [ apply le_O_n | assumption ].
apply S_pred with 0%nat; apply le_lt_trans with low;
[ apply le_O_n | assumption ].
unfold sigma; rewrite <- minus_n_n; simpl;
replace (high + 0)%nat with high; [ reflexivity | ring ].
Qed.
Theorem sigma_eq_arg : forall low:nat, sigma low low = f low.
Proof. apply snn. Qed.
End Sigma.
|
(* Author: Florian Haftmann, TU Muenchen *)
section \<open>A huge collection of equations to generate code from\<close>
theory Candidates
imports
Complex_Main
"~~/src/HOL/Library/Library"
"~~/src/HOL/Library/Sublist_Order"
"~~/src/HOL/Number_Theory/Euclidean_Algorithm"
"~~/src/HOL/Library/Polynomial_Factorial"
"~~/src/HOL/Data_Structures/Tree_Map"
"~~/src/HOL/Data_Structures/Tree_Set"
"~~/src/HOL/Number_Theory/Eratosthenes"
"~~/src/HOL/ex/Records"
begin
text \<open>Drop technical stuff from @{theory Quickcheck_Narrowing} which is tailored towards Haskell\<close>
setup \<open>
fn thy =>
let
val tycos = Sign.logical_types thy;
val consts = map_filter (try (curry (Axclass.param_of_inst thy)
@{const_name "Quickcheck_Narrowing.partial_term_of"})) tycos;
in fold Code.del_eqns consts thy end
\<close>
text \<open>Simple example for the predicate compiler.\<close>
inductive sublist :: "'a list \<Rightarrow> 'a list \<Rightarrow> bool"
where
empty: "sublist [] xs"
| drop: "sublist ys xs \<Longrightarrow> sublist ys (x # xs)"
| take: "sublist ys xs \<Longrightarrow> sublist (x # ys) (x # xs)"
code_pred sublist .
text \<open>Avoid popular infix.\<close>
code_reserved SML upto
text \<open>Explicit check in \<open>OCaml\<close> for correct precedence of let expressions in list expressions\<close>
definition funny_list :: "bool list"
where
"funny_list = [let b = True in b, False]"
definition funny_list' :: "bool list"
where
"funny_list' = funny_list"
definition check_list :: unit
where
"check_list = (if funny_list = funny_list' then () else undefined)"
text \<open>Explicit check in \<open>Scala\<close> for correct bracketing of abstractions\<close>
definition funny_funs :: "(bool \<Rightarrow> bool) list \<Rightarrow> (bool \<Rightarrow> bool) list"
where
"funny_funs fs = (\<lambda>x. x \<or> True) # (\<lambda>x. x \<or> False) # fs"
end
|
//
// TFibioTransport.h
// fibio
//
// Created by Chen Xu on 14/11/12.
// Copyright (c) 2014 0d0a.com. All rights reserved.
//
#ifndef fibio_thrift_hpp
#define fibio_thrift_hpp
#include <boost/asio/read.hpp>
#include <boost/asio/write.hpp>
#include <fibio/iostream.hpp>
#include <thrift/transport/TVirtualTransport.h>
#include <thrift/transport/TServerTransport.h>
#include <thrift/server/TServer.h>
namespace apache {
namespace thrift {
namespace server {
template <typename Stream, bool buffered>
class TFibioServer;
} // End of namespace server
namespace transport {
namespace detail {
template <bool buffered>
struct io_ops;
template <>
struct io_ops<true>
{
template <typename Stream>
static uint32_t read(fibio::stream::iostream<Stream>& s, uint8_t* buf, uint32_t len)
{
if (!s) {
throw TTransportException(TTransportException::NOT_OPEN, "Cannot read.");
}
s.read((char*)buf, len);
if (s.gcount() < len) {
throw TTransportException(TTransportException::END_OF_FILE, "No more data to read.");
}
return len;
}
template <typename Stream>
static void write(fibio::stream::iostream<Stream>& s, const uint8_t* buf, uint32_t len)
{
if (!s) throw TTransportException(TTransportException::NOT_OPEN, "Cannot write.");
s.write((const char*)buf, len);
}
template <typename Stream>
static bool peek(fibio::stream::iostream<Stream>& s)
{
if (!s) return false;
return s.peek() != fibio::stream::iostream<Stream>::traits_type::eof();
}
};
template <>
struct io_ops<false>
{
template <typename Stream>
static uint32_t read(fibio::stream::iostream<Stream>& s, uint8_t* buf, uint32_t len)
{
boost::system::error_code ec;
uint32_t ret = boost::asio::async_read(
s.stream_descriptor(), boost::asio::buffer(buf, len), fibio::asio::yield[ec]);
if (ec)
throw TTransportException(TTransportException::END_OF_FILE, "No more data to read.");
return ret;
}
template <typename Stream>
static void write(fibio::stream::iostream<Stream>& s, const uint8_t* buf, uint32_t len)
{
boost::system::error_code ec;
boost::asio::async_write(
s.stream_descriptor(), boost::asio::buffer(buf, len), fibio::asio::yield[ec]);
if (ec) throw TTransportException(TTransportException::NOT_OPEN, "Cannot write.");
}
template <typename Stream>
static bool peek(fibio::stream::iostream<Stream>& s)
{
return true;
}
};
} // End of namespace detail
template <typename Stream, bool buffered>
class TFibioTransport : public TVirtualTransport<TFibioTransport<Stream, buffered>>
{
public:
typedef fibio::stream::iostream<Stream> fibio_stream;
typedef fibio::stream::stream_traits<fibio_stream> traits_type;
typedef typename traits_type::endpoint_type endpoint_type;
typedef fibio::stream::listener<fibio_stream> listener_type;
TFibioTransport(const std::string& access_point)
: ep_(fibio::stream::detail::make_endpoint<endpoint_type>(access_point))
, stream_obj_(new fibio_stream)
, stream_(*stream_obj_)
{
}
virtual bool isOpen() override { return stream_.is_open(); }
virtual bool peek() override { return detail::io_ops<buffered>::peek(stream_); }
virtual void open() override { stream_.connect(ep_); }
virtual void close() override { stream_.close(); }
virtual void flush() override
{
if (buffered) stream_.flush();
}
uint32_t read(uint8_t* buf, uint32_t len)
{
return detail::io_ops<buffered>::read(stream_, buf, len);
}
void write(const uint8_t* buf, uint32_t len)
{
return detail::io_ops<buffered>::write(stream_, buf, len);
}
private:
friend class server::TFibioServer<Stream, buffered>;
TFibioTransport(fibio_stream& s) : stream_obj_(), stream_(s) {}
endpoint_type ep_;
std::unique_ptr<fibio_stream> stream_obj_;
fibio_stream& stream_;
};
typedef TFibioTransport<boost::asio::ip::tcp::socket, true> TFibioTCPBufferedTransport;
typedef TFibioTransport<boost::asio::local::stream_protocol::socket, true>
TFibioLocalBufferedTransport;
typedef TFibioTransport<boost::asio::ip::tcp::socket, false> TFibioTCPTransport;
typedef TFibioTransport<boost::asio::local::stream_protocol::socket, false> TFibioLocalTransport;
} // End of namespace transport
namespace server {
template <typename Stream, bool buffered>
class TFibioServer : public TServer
{
public:
typedef fibio::stream::iostream<Stream> fibio_stream;
typedef fibio::stream::stream_traits<fibio_stream> traits_type;
typedef typename traits_type::endpoint_type endpoint_type;
typedef fibio::stream::listener<fibio_stream> listener_type;
typedef transport::TFibioTransport<Stream, buffered> transport_type;
template <typename Processor>
TFibioServer(const boost::shared_ptr<Processor>& processor,
const std::string& access_point,
const boost::shared_ptr<TProtocolFactory>& protocolFactory,
THRIFT_OVERLOAD_IF(Processor, TProcessor))
: TServer(processor,
boost::shared_ptr<TServerTransport>(), // Not used
boost::shared_ptr<TTransportFactory>(new TTransportFactory),
protocolFactory)
, listener_(access_point)
{
}
virtual void serve() override
{
if (eventHandler_) {
eventHandler_->preServe();
}
listener_.start([this](fibio_stream& s) {
boost::shared_ptr<transport_type> client(new transport_type(s));
boost::shared_ptr<TProtocol> inputProtocol = inputProtocolFactory_->getProtocol(client);
boost::shared_ptr<TProtocol> outputProtocol
= outputProtocolFactory_->getProtocol(client);
boost::shared_ptr<TProcessor> processor
= getProcessor(inputProtocol, outputProtocol, client);
void* connectionContext = NULL;
if (eventHandler_) {
connectionContext = eventHandler_->createContext(inputProtocol, outputProtocol);
}
for (;;) {
try {
if (eventHandler_) {
eventHandler_->processContext(connectionContext, client);
}
if (!processor->process(inputProtocol, outputProtocol, connectionContext) ||
// Peek ahead, is the remote side closed?
!inputProtocol->getTransport()->peek()) {
break;
}
} catch (transport::TTransportException& ttx) {
break;
}
}
if (eventHandler_) {
eventHandler_->deleteContext(connectionContext, inputProtocol, outputProtocol);
}
});
listener_.join();
}
virtual void stop() override { listener_.stop(); }
private:
listener_type listener_;
};
typedef TFibioServer<boost::asio::ip::tcp::socket, true> TFibioTCPBufferedServer;
typedef TFibioServer<boost::asio::local::stream_protocol::socket, true> TFibioLocalBufferedServer;
typedef TFibioServer<boost::asio::ip::tcp::socket, false> TFibioTCPServer;
typedef TFibioServer<boost::asio::local::stream_protocol::socket, false> TFibioLocalServer;
} // End of namespace server
} // End of namespace thrift
} // End of namespace apache
#endif /* defined(fibio_thrift_hpp) */
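// ---------------------------------------------------------------------------
// Hedged usage sketch (not part of the original header): serving a generated
// Thrift service over the fibio TCP transport. "Calculator", its handler and
// processor, and the access-point string are illustrative assumptions; the
// exact access-point syntax is decided by fibio::stream::detail::make_endpoint,
// which is not shown in this header.
//
//   #include <boost/shared_ptr.hpp>
//   #include <thrift/protocol/TBinaryProtocol.h>
//   #include "TFibioTransport.h"
//   #include "gen-cpp/Calculator.h"   // hypothetical generated service
//
//   int main() {
//       using namespace apache::thrift;
//       boost::shared_ptr<CalculatorHandler> handler(new CalculatorHandler());
//       boost::shared_ptr<CalculatorProcessor> processor(new CalculatorProcessor(handler));
//       boost::shared_ptr<protocol::TProtocolFactory> protocolFactory(
//           new protocol::TBinaryProtocolFactory());
//       server::TFibioTCPBufferedServer srv(processor, "127.0.0.1:9090", protocolFactory);
//       srv.serve();   // blocks; call srv.stop() from another fiber to shut down
//   }
// ---------------------------------------------------------------------------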
|
SUKRA MACHINES - Manufacturer, Supplier and Exporter of Sugarcane Crusher, Sugarcane Juice Crusher, Sugar Cane Extractor, Automatic Sugarcane Extractor.
Bracketed with the top-most Manufacturers, Exporters and Suppliers, we consider it our prime responsibility to make available an efficient Sugarcane Crusher.
Detailed introduction to Sugarcane Crusher (Domestic): Domestic Sugarcane Juice Machine. Please confirm the supplier's qualification and product quality before placing your order.
CHANDAN ENGINEERING WORKS - Manufacturer and supplier of sugar cane crusher machine, sugar cane juice crusher machine in Gujarat, India.
Contact verified Sugar Cane Crusher Manufacturers, Sugar Cane Crusher suppliers, Sugar Cane Crusher exporters, wholesalers, producers, traders in India.
SITAKANT MACHINES offer Sugarcane Juice Crusher and Fruit Juice Crusher. Get the best quality Sugarcane Juice Crusher Manufacturer, Fruit Juice Crusher Supplier in Mumbai.
KIRAN ENGINEERING WORKS - Manufacturer and supplier of sugar cane crusher machine, sugar cane juice crusher machine in Gujarat, India.
|
open import FRP.JS.Delay using ( Delay )
module FRP.JS.Time.Core where
record Time : Set where
constructor epoch
field toDelay : Delay
{-# COMPILED_JS Time function(x,v) { return v.epoch(x); } #-}
{-# COMPILED_JS epoch function(d) { return d; } #-}
|
\documentclass[usepdftitle=false,professionalfonts,compress ]{beamer}
%Packages to be included
\usepackage[latin1]{inputenc}
\usepackage{graphics,epsfig, subfigure}
\usepackage{url}
\usepackage[T1]{fontenc}
%\usepackage{listings}
\usepackage{hyperref}
\usepackage[english]{babel}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%% PDF meta data inserted here %%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\hypersetup{
pdftitle={Code Generation - Part I},
pdfauthor={Kenneth Sundberg}
}
%%%%%% Beamer Theme %%%%%%%%%%%%%
\usetheme[]{Warsaw}
\title{Code Generation - Part I}
\subtitle{CS 5300}
\author{Kenneth Sundberg}
\date{}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%% Begin Document %%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
\frame[plain]{
\frametitle{}
\titlepage
\vspace{-0.5cm}
\begin{center}
%\frontpagelogo
\end{center}
}
\frame{
\tableofcontents[hideallsubsections]
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%% Content starts here %%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{General Tools}
\subsection{Variable Declarations}
{
\begin{frame}\frametitle{Memory Allocation}
\begin{itemize}
\item Variable declaration determines:
\begin{itemize}
\item size of variable
\item location of variable (local or global)
\item offset to variable
\end{itemize}
\end{itemize}
\end{frame}}
{
\begin{frame}\frametitle{Example:}
\end{frame}}
\subsection{Register Allocation}
{
\begin{frame}\frametitle{Register Pool}
\begin{itemize}
\item Generates bad code -- but it's simple
\item At the end of every statement the pool is cleared
\item Registers can be requested from pool and released back to pool
\end{itemize}
\end{frame}}
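% A hedged illustration (not from the original slides): a minimal register
% pool in C++. The class name, the std::string register names and the MIPS
% temporaries $t0-$t9 are assumptions made for illustration only.
{
\begin{frame}[fragile]\frametitle{Register Pool -- A Possible Sketch}
\begin{verbatim}
#include <stdexcept>
#include <string>
#include <vector>

class RegisterPool {
  std::vector<std::string> free_ =
    {"$t0","$t1","$t2","$t3","$t4",
     "$t5","$t6","$t7","$t8","$t9"};
public:
  std::string request() {        // hand out a free register
    if (free_.empty())
      throw std::runtime_error("out of registers");
    std::string r = free_.back();
    free_.pop_back();
    return r;
  }
  void release(std::string r) {  // return it to the pool
    free_.push_back(std::move(r));
  }
};  // refill the pool at the end of every statement
\end{verbatim}
\end{frame}}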
\subsection{Boilerplate}
{
\begin{frame}\frametitle{Initial Code}
\begin{itemize}
\item Declare text segment
\begin{itemize}
\item .text
\end{itemize}
\item Declare global symbol main
\begin{itemize}
\item .globl main
\end{itemize}
\item Initialize global, frame, and stack pointers
\item Jump to main program
\end{itemize}
\end{frame}}
{
\begin{frame}\frametitle{String Area}
\begin{itemize}
\item Declare data segment
\begin{itemize}
\item .data
\end{itemize}
\item Output labled list of strings
\begin{itemize}
\item .asciiz "string"
\end{itemize}
\item End with label for global area
\end{itemize}
\end{frame}}
\section{Simple Statements}
\subsection{Expressions}
{
\begin{frame}\frametitle{L-Values}
\begin{itemize}
\item Look up name in symbol table
\item Request register
\item Load value from memory into register
\end{itemize}
\end{frame}}
{
\begin{frame}\frametitle{Constant Expressions}
\begin{itemize}
\item Constant Folding
\item Return new constant expression
\item No register is allocated
\end{itemize}
\end{frame}}
{
\begin{frame}\frametitle{Non Constant Expressions}
\begin{itemize}
\item Each operand is in a register
\item Request register
\item Generate code to put result in register
\item Release operand's registers
\end{itemize}
\end{frame}}
{
\begin{frame}\frametitle{Mixed Expressions}
\begin{itemize}
\item One operand in register
\item One operand is constant
\item Generate immediate mode code to put result in register
\end{itemize}
\end{frame}}
\subsection{Assignment}
{
\begin{frame}\frametitle{Assignment}
\begin{itemize}
\item Inverse of L-Value expressions
\item Look up name
\item Store value of right hand side expression into location
\end{itemize}
\end{frame}}
\subsection{Intrinsic Functions}
{
\begin{frame}\frametitle{Write}
\begin{itemize}
\item Based on expression type:
\begin{itemize}
\item Move value of expression to proper register
\item Make write system call
\end{itemize}
\item Each argument can be handled independently
\end{itemize}
\end{frame}}
{
\begin{frame}\frametitle{Read}
\begin{itemize}
\item Like Write done individually on each argument
\item Based on type of expression (L-Value)
\begin{itemize}
\item Make read system call
\item Move value to proper memory location
\end{itemize}
\end{itemize}
\end{frame}}
{
\begin{frame}\frametitle{Char}
\begin{itemize}
\item Changes type of expression
\item Does not emit code
\end{itemize}
\end{frame}}
{
\begin{frame}\frametitle{Ord}
\begin{itemize}
\item Like Char - Changes type
\item No code emitted
\end{itemize}
\end{frame}}
{
\begin{frame}\frametitle{Prec}
\begin{itemize}
\item Emit code based on type to decrement the value (predecessor)
\end{itemize}
\end{frame}}
{
\begin{frame}\frametitle{Succ}
\begin{itemize}
\item Emit code based on type to increment the value (successor)
\end{itemize}
\end{frame}}
\end{document}
|