path | concatenated_notebook
---|---|
docs/tensorflow2/tf2_keras_classification.ipynb | ###Markdown
Copyright 2020 Digital Advantage - Deep Insider.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
"Classification problems (single installment)" ― Notebook (4) of the series "Introduction to TensorFlow 2.0 + Keras (tf.keras)". Read the article on Deep Insider / Run on Google Colab / View the source code on GitHub. Note: run the cells from top to bottom; some cells reuse objects created by earlier cells, so skipping cells can cause errors. To run everything at once, click [Runtime] - [Run all] in the menu bar. Note: use "Python 3". To select Python 3, open [Runtime] - [Change runtime type] in the menu bar and, in the [Notebook settings] dialog that appears, set [Runtime type] to "Python 3" and click [Save]. Part 8: Solving classification problems with deep learning (a basic DNN) ■ Purpose and approach of this article: show how to implement a "DNN (deep neural network)", the basic form of deep learning, in TensorFlow 2.x, and confirm that the neural-network and deep-learning knowledge already gained in parts 1-3 of the series is enough for a variety of machine-learning tasks. - Prerequisite: being able to build a basic neural network; concretely, the level of parts 1-6 of "[TensorFlow 2+Keras(tf.keras)入門 - @IT](https://www.atmarkit.co.jp/ait/subtop/features/di/tf2keras_index.html)" - Today's tasks: "multi-class classification" of fashion-product photos and "binary classification" of handwritten digits - We use the activation functions / loss functions / optimization algorithms that are commonly used in practice (previous installments used more basic, simpler ones) - Related: part 7 explains how to solve a regression problem with a basic DNN ■ Rough flow of this article - (0) Preparation for running this notebook **―――【Multi-class classification】―――** - (1) Data preparation - (2) Model definition - (3) Training/optimization (optimizer) - (4) Evaluation/validation - (5) Inference/testing on unseen data - A picture of production use **―――[【Binary classification】](scrollTo=E4mSTkCUodUv&line=1&uniqifier=1)―――** (only the changes from the steps above are explained) - (6) Data preparation - (7) Model definition - (8) Training/optimization (optimizer) - (9) Evaluation/validation - (10) Inference/testing on unseen data - A picture of production use 【Multi-class classification】---------- ■(0) Preparation for running this notebook
###Code
# Google Colabで最新の2.xを使う場合、2.xに切り替える(Colab専用)
%tensorflow_version 2.x
###Output
_____no_output_____
###Markdown
● Prerequisites 【Check】Python version (use the 3.x series). Use the one preinstalled on Colab. If it is still the 2.x series, switch it by clicking [Runtime] - [Change runtime type] in the menu bar.
###Code
import sys
print('Python', sys.version)
# Python 3.6.9 (default, Apr 18 2020, 01:56:04) …… などと表示される
###Output
_____no_output_____
###Markdown
【Check】TensorFlow version (use the 2.x series). Basically, use the one preinstalled on Colab. If it is still the 1.x series, run List 0-1 below to switch to version 2.x.
###Code
import tensorflow as tf
print('TensorFlow', tf.__version__)
# TensorFlow 2.2.0 ……などと表示される
###Output
_____no_output_____
###Markdown
List 0-1 [Optional] Installing the latest version of the TensorFlow library
###Code
# Google Colabで最新の2.xを使う場合(Colab専用)
%tensorflow_version 2.x
# 最新バージョンにアップグレードする場合
!pip install --upgrade tensorflow
# バージョンを明示してアップグレードする場合
#!pip install --upgrade tensorflow==2.1.0
# 最新バージョンをインストールする場合
#!pip install tensorflow
# バージョンを明示してインストールする場合
#!pip install tensorflow==2.1.0
###Output
_____no_output_____
###Markdown
[Optional]【Check】TensorFlow version (check after installation). Check again that the version is 2.x.
###Code
import tensorflow as tf
print('TensorFlow', tf.__version__)
# TensorFlow 2.2.0 ……などと表示される
###Output
_____no_output_____
###Markdown
■(1) Data preparation As the image dataset of fashion-product photos we use "[Fashion-MNIST](https://www.atmarkit.co.jp/ait/articles/2005/28/news016.html)". List 1-1 Fetching the Fashion-MNIST (fashion-product photo) image data
###Code
# TensorFlowライブラリのtensorflowパッケージを「tf」という別名でインポート
import tensorflow as tf
import matplotlib.pyplot as plt # グラフ描画ライブラリ(データ画像の表示に使用)
import numpy as np # 数値計算ライブラリ(データのシャッフルに使用)
# Fashion-MNISTデータ(※NumPyの多次元配列型)を取得する
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
# データ分割は自動で、訓練用が6万枚、テスト用が1万枚(ホールドアウト法)。
# さらにそれぞれを「入力データ(X:行列)」と「ラベル(y:ベクトル)」に分ける
# ※訓練データは、学習時のfit関数で訓練用と精度検証用に分割する。
# そのため、あらかじめ訓練データをシャッフルしておく
p = np.random.permutation(len(X_train)) # ランダムなインデックス順の取得
X_train, y_train = X_train[p], y_train[p] # その順で全行を抽出する(=シャッフル)
# [内容確認]データのうち、最初の10枚だけを表示
classes_name = ['T-shirt/top [0]', 'Trouser [1]', 'Pullover [2]',
'Dress [3]', 'Coat [4]', 'Sandal [5]', 'Shirt [6]',
'Sneaker [7]', 'Bag [8]', 'Ankle boot [9]']
plt.figure(figsize=(10,4)) # 横:10インチ、縦:4インチの図
for i in range(10):
plt.subplot(2,5,i+1) # 図内にある(sub)2行5列の描画領域(plot)の何番目かを指定
plt.xticks([]) # X軸の目盛りを表示しない
plt.yticks([]) # y軸の目盛りを表示しない
plt.grid(False) # グリッド線を表示しない
plt.imshow( # 画像を表示する
X_train[i], # 1つの訓練用入力データ(28行×28列)
cmap=plt.cm.binary) # 白黒(2値:バイナリ)の配色
plt.xlabel(classes_name[y_train[i]]) # X軸のラベルに分類名を表示
plt.show()
###Output
_____no_output_____
###Markdown
Points about this code: - In TensorFlow, the Fashion-MNIST dataset can be fetched with the [`tf.keras.datasets.fashion_mnist.load_data()` function](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/fashion_mnist/load_data) - The return value of `load_data()` is a tuple of the form ((training data, training labels), (test data, test labels)) (hold-out method) - Deep learning basically also needs "validation data" for hyperparameter tuning; the split into validation data is left to the `validation_split` argument of the fit function, so it is not done here, and the training data is shuffled in preparation for it - For multi-class classification, categorical variables must be integer encoded or one-hot encoded, but the Fashion-MNIST labels are already integer-encoded "class indices", so no encoding is needed List 1-2 Checking the contents of a single image
###Code
import pandas as pd # データ解析支援「pandas」
# 1件の訓練データの、ラベルと入力データを表示する
print('y_train(正解ラベル): 「',y_train[0],'」');
print('X_train:');
display(pd.DataFrame(X_train[0])) # NumPy多次元配列をPandasデータフレームに変換して表示
###Output
_____no_output_____
###Markdown
Points about this code: - For the sample output of tabular data, the `display()` function for formatted rendering is used instead of the string-oriented `print()` function - In Colab, `display()` is available without an import because it is a built-in object; it corresponds to the [`IPython.display.display()` function](https://ipython.readthedocs.io/en/stable/api/generated/IPython.display.html#IPython.display.display) - Note that the data is represented as 0-255 (8-bit grayscale) - 256 levels from white "0" to black "255" (following the [MNIST DATABASE file format definition](http://yann.lecun.com/exdb/mnist/)) - That means the image must be drawn with a colormap where 0 is white and 255 is black (this is what `cmap=plt.cm.binary` in List 1-1 does) - Careful: an RGB-style grayscale is the opposite, with 0 black and 255 white. If you want to invert black and white (e.g. "to match RGB"), change `cmap=plt.cm.binary` in List 1-1 to `cmap=plt.cm.gray` List 1-3 Normalizing the input data (Normalization)
###Code
X_train = (X_train / 255.0).astype(np.float32)
X_test = (X_test / 255.0).astype(np.float32)
###Output
_____no_output_____
###Markdown
Points about this code: - Normalization: a technique that rescales the training-data values so they fall into a given range such as 0-1 - Each value of the matrix is divided by 255 - As shown in List 1-2, the original data has 256 levels from white "0" to black "255"; to make it easier for the neural network to process, it is converted to the range white "0.0" to black "1.0" - It is important to preprocess the training dataset and the test dataset in the same way ■(2) Model definition As already explained several times, we proceed as follows: - define the model by **subclassing** the `tf.keras.Model` class (**recommended for intermediate users and above**) - use the basic tf.keras `compile()` & `fit()` methods (no custom training loop is needed this time) ● Design of the deep neural network - The number of inputs (`INPUT_FEATURES`) is **28 rows** × **28 columns** (=784), so the input is flattened (Flatten) to **784** values - There are **2** hidden layers - the number of neurons in the first hidden layer (`LAYER1_NEURONS`) is **128** - the number of neurons in the second hidden layer (`LAYER2_NEURONS`) is **32** - The number of neurons in the output layer (`OUTPUT_RESULTS`) is **10** List 2-1 Model definition
###Code
import tensorflow as tf # ライブラリ「TensorFlow」のtensorflowパッケージをインポート
from tensorflow.keras import layers # レイヤー関連モジュールのインポート
# 定数(モデル定義時に必要となるもの)
INPUT_ROWS = 28 # 入力行の数: 28行
INPUT_COLS = 28 # 入力列の数: 28列
# 入力(特徴)の数: 784(=28行×28列)
LAYER1_NEURONS = 128 # ニューロンの数: 128
LAYER2_NEURONS = 32 # ニューロンの数: 32
OUTPUT_RESULTS = 10 # 出力結果の数: 10(=「0」~「9」の10クラスに分類)
#OUTPUT_RESULTS = 1 # 後述する二値分類の場合: 1(=「0.0」~「1.0」の2値に分類)
# 過学習対策でドロップアウトを使う場合はコメントオフ:
#DROPOUT1_RATE = 0.2 # 第1隠れ層から第2隠れ層へのドロップ率: 0.2(20%)
# 変数(モデル定義時に必要となるもの)
activation1 = layers.ReLU(name='activation1') # 活性化関数(隠れ層用): ReLU関数(変更可能)
activation2 = layers.ReLU(name='activation2') # 活性化関数(隠れ層用): ReLU関数(変更可能)
act_output = layers.Softmax(name='act_output') # 活性化関数(出力層用): Softmax関数(固定)
# tf.keras.Modelによるモデルの定義
class NeuralNetwork(tf.keras.Model):
# レイヤー(層)を定義
def __init__(self):
super().__init__()
# 入力層:入力データのフラット化(Flatten)
self.flatten_input = layers.Flatten( # 行列データのフラット化
input_shape=(INPUT_ROWS, INPUT_COLS), # 入力の形状(=入力層)※タプル形式
name='flatten_input')
# 隠れ層:1つ目のレイヤー(layer)
self.layer1 = layers.Dense( # 全結合層(線形変換)
# 入力ユニット数は、前の出力ユニット数が使われるので、指定不要
LAYER1_NEURONS, # 次のレイヤーへの出力ユニット数
name='layer1')
# 第1レイヤーの後でドロップアウトを使う場合はコメントオフ:
#self.dropput1 = layers.Dropout( # ドロップアウト
# DROPOUT1_RATE, # 何%ドロップするか
# name='dropput1')
# 隠れ層:2つ目のレイヤー(layer)
self.layer2 = layers.Dense( # 全結合層
LAYER2_NEURONS, # 次のレイヤーへの出力ユニット数
name='layer2')
# 出力層
self.layer_out = layers.Dense( # 全結合層
OUTPUT_RESULTS, # 出力結果への出力ユニット数
name='layer_out')
# フォワードパスを定義
def call(self, x, train_mode=True):
x = self.flatten_input(x) # 入力データのフラット化
# 「出力=活性化関数(第n層(入力))」の形式で記述
x = activation1(self.layer1(x)) # 活性化関数は変数として定義
#ドロップアウトを使う場合はコメントオフ:
#if train_mode: # 訓練時のみ……
# x = self.dropput2(x) # ……ドロップアウト(不活性化)
x = activation2(self.layer2(x)) # 活性化関数は変数として定義
x = act_output(self.layer_out(x)) # ※活性化関数は「softmax」固定
return x
# モデル内容の出力を行う独自メソッド
def get_static_model(self):
x = layers.Input(shape=(28,28), name='input_features')
static_model = tf.keras.Model(inputs=[x], outputs=self.call(x))
return static_model
###Output
_____no_output_____
###Markdown
Points about this code: - This code is written in almost the same way (subclassing style) as explained in "[第5回 お勧めの、TensorFlow 2.0最新の書き方入門(エキスパート向け) (1/2):TensorFlow 2+Keras(tf.keras)入門 - @IT](https://www.atmarkit.co.jp/ait/articles/2003/10/news016.html)" - How to define a neural network model is explained in "[第2回 ニューラルネットワーク最速入門 ― 仕組み理解×初実装(中編):TensorFlow 2+Keras(tf.keras)入門 - @IT](https://www.atmarkit.co.jp/ait/articles/1910/17/news026.html)" - The hidden layers use the most common activation, the [ReLU function](https://www.atmarkit.co.jp/ait/articles/2003/11/news016.html) (previous installments used the more basic tanh function) - Sample code for inserting a dropout layer (drop rate: 20%) to prevent overfitting is included in commented-out form - The output layer uses the [softmax function](https://www.atmarkit.co.jp/ait/articles/2004/08/news016.html), the usual choice for multi-class classification; as its counterpart, the loss function is the "cross-entropy for multi-class classification" (described later) - The `get_static_model()` method is a custom helper prepared only to draw the model in List 2-2 below (it is not needed for the actual processing) List 2-2 Checking the model (as a diagram)
###Code
# モデル(NeuralNetworkクラス)のインスタンス化
model = NeuralNetwork()
# モデル概要の図を描画する
f_model = model.get_static_model()
filename = 'model.png';
tf.keras.utils.plot_model(f_model, show_shapes=True, show_layer_names=True, to_file=filename)
from IPython.display import Image
Image(retina=False, filename=filename) # 図で描画
#f_model.summary() # テキストで出力
###Output
_____no_output_____
###Markdown
■(3) Training/optimization (optimizer) List 3-1 Defining the training setup (loss function / optimizer / learning rate)
###Code
# 定数(学習方法設計時に必要となる数値)
LOSS = 'sparse_categorical_crossentropy' # 損失関数:多クラス分類用の交差エントロピー
METRICS = ['accuracy'] # 評価関数:正解率
OPTIMIZER = tf.keras.optimizers.Adam # 最適化:Adam
LEARNING_RATE = 0.001 # 学習率: 0.001(学習率の調整)
# 学習方法を定義する
model.compile(optimizer=OPTIMIZER(learning_rate=LEARNING_RATE),
loss=LOSS,
metrics=METRICS) # 精度(分類では正解率。回帰では損失)
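# Illustration: 'sparse_categorical_crossentropy' expects integer class indices such as
# y = [9, 2, 0, ...], which is exactly what fashion_mnist.load_data() returns. If the labels
# were one-hot encoded instead (e.g. with tf.keras.utils.to_categorical(y_train, 10)),
# the matching loss would be 'categorical_crossentropy'.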
###Output
_____no_output_____
###Markdown
Points about this code: - The loss function is the "cross-entropy for multi-class classification" that pairs with the softmax activation - As the metric, classification basically uses "accuracy" - The optimizer is the most common one, "Adam" (previous installments used the more basic "SGD") - The learning rate is set to "0.001" List 3-2 Training (mini-batch learning)
###Code
# 定数(ミニバッチ学習時に必要となるもの)
BATCH_SIZE = 96 # バッチサイズ: 96
EPOCHS = 100 # エポック数: 100
# 早期終了
#es = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3)
# 学習する
hist = model.fit(x=X_train, # 訓練用データ
y=y_train, # 訓練用ラベル
validation_split=0.2, # 精度検証用の割合:20%
batch_size=BATCH_SIZE, # バッチサイズ
epochs=EPOCHS, # エポック数
verbose=1, # 実行状況表示
callbacks=[ # コールバック
#es # 早期終了(する場合はコメントアウトを解除)
])
###Output
_____no_output_____
###Markdown
Points about this code: - The `validation_split` argument splits the training data into a training part and a validation part ■(4) Evaluation/validation List 4-1 Plotting the loss and metric curves
###Code
import matplotlib.pyplot as plt
# 学習結果(損失=交差エントロピー)のグラフを描画
plt.figure()
train_loss = hist.history['loss']
valid_loss = hist.history['val_loss']
epochs = len(train_loss)
plt.plot(range(epochs), train_loss, marker='.', label='loss (training data)')
plt.plot(range(epochs), valid_loss, marker='.', label='loss (validation data)')
plt.legend(loc='best')
plt.grid()
plt.xlabel('epoch')
plt.ylabel('loss (cross entropy)')
# 評価関数(正解率)のグラフを描画
plt.figure()
train_mae = hist.history['accuracy']
valid_mae = hist.history['val_accuracy']
epochs = len(train_mae)
plt.plot(range(epochs), train_mae, marker='.', label='accuracy (training data)')
plt.plot(range(epochs), valid_mae, marker='.', label='accuracy (validation data)')
plt.legend(loc='best')
plt.grid()
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
###Output
_____no_output_____
###Markdown
Points about the result: - On the training data the loss keeps decreasing - On the validation data, however, the loss starts rising again from around the 4th epoch - In other words, around the 4th epoch the model starts to "overfit" the training data - To detect such overfitting and tune the hyperparameters, validation data must be split off from the training data - Incidentally, if validation were done on the test data, a final evaluation of generalization performance would no longer be possible - That is why, especially in deep learning, it is recommended to split the data into three parts: "training data", "validation data" and "test data" - Note that in statistics/data-science settings such as multiple regression analysis there is basically no hyperparameter tuning, so validation data is usually not needed; this point is specific to deep learning ■(5) Inference/testing on unseen data List 5-1 Testing on unseen data (evaluation on the test data)
###Code
#BATCH_SIZE = 96 # バッチサイズ(リスト3-2で定義済み)
# 未知のテストデータで学習済みモデルの汎化性能を評価
score = model.evaluate(X_test, y_test, batch_size=BATCH_SIZE)
print('test score([loss, accuracy]):', score)
# 出力例:
# 105/105 [==============================] - 0s 1ms/step - loss: 0.7703 - accuracy: 0.8766
# test score([loss, accuracy]): [0.7702566981315613, 0.8766000270843506]
###Output
_____no_output_____
###Markdown
■ A picture of production use List 5-2 Inference: creating mock photo data
###Code
import matplotlib.pyplot as plt # グラフ描画ライブラリ(データ画像の表示に使用)
#import pandas as pd # データ解析支援「pandas」
temp_data = np.array([[ # 9番:アンクルブーツ
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 31,178,162,153,151,142,138, 65, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 78,232,209,202,198,194,203,179, 97, 89, 73, 59, 47, 28, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,121,228,202,196,189,185,175,198,244,245,232,223,218,160, 4, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,157,223,203,199,192,176,186,215,235,228,220,216,214,164, 8, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 12,192,216,204,198,185,179,211,228,232,225,220,216,213,159, 6, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 40,214,210,201,195,182,191,223,232,233,227,224,219,216,150, 2, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 75,224,203,197,193,182,202,229,231,233,230,228,220,217,140, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,113,228,198,188,188,187,208,229,230,234,232,230,220,215,164, 4, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,154,226,197,182,184,189,210,228,231,234,233,231,221,213,196, 35, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 10,190,219,193,183,184,190,210,228,232,234,233,232,223,212,211, 96, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 39,214,212,186,181,183,189,207,225,230,232,233,232,223,212,208,151, 2],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 80,227,208,185,179,183,192,205,210,222,229,229,231,225,214,204,177, 21],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,122,229,206,185,180,184,192,196,192,215,226,232,234,226,215,205,177, 24],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 3,169,225,208,188,180,189,187,180,170,218,235,234,224,217,212,210,160, 10],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 38,212,220,206,179,179,190,186,173,182,229,229,220,213,211,210,207, 84, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0,101,229,216,197,177,188,191,180,187,219,222,214,218,216,211,212,177, 19, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 5,174,226,211,190,179,189,191,189,215,217,213,216,167,174,211,210,131, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 62,221,218,203,183,179,185,198,214,214,214,214, 96, 2, 25,198,214, 79, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 11,166,225,211,192,177,184,203,216,216,212,212, 69, 0, 0, 14,190,204, 34, 0, 0],
[ 0, 0, 0, 0, 0, 0, 22,127,219,219,206,187,178,198,223,221,212,219, 86, 0, 0, 0, 11,190,181, 9, 0, 0],
[ 0, 0, 0, 2, 33, 92,165,204,211,208,191,185,201,220,228,220,224,134, 0, 0, 0, 0, 14,198,157, 0, 0, 0],
[ 0, 21,102,160,192,212,214,199,193,193,182,199,220,226,224,219,202, 26, 0, 0, 0, 0, 16,205,137, 0, 0, 0],
[ 44,193,227,222,217,212,205,190,183,186,198,225,230,228,218,224,111, 0, 0, 0, 0, 0, 17,209,122, 0, 0, 0],
[109,230,213,208,205,205,204,198,196,205,215,227,229,225,221,205, 24, 0, 0, 0, 0, 0, 15,208,113, 0, 0, 0],
[ 25,103,159,193,213,218,215,215,213,213,215,217,220,225,229, 97, 0, 3, 3, 3, 2, 0, 25,225,114, 0, 1, 0],
[ 0, 0, 7, 44,100,150,177,192,203,209,212,215,216,198,110, 17, 15, 12, 10, 9, 8, 4, 30,153, 77, 1, 1, 1],
[ 0, 0, 0, 0, 0, 0, 5, 15, 23, 31, 35, 37, 34, 17, 2, 4, 2, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
]], dtype=float)
print(temp_data.shape) # 多次元配列の形状: (1, 28, 28)
# 図を描画
plt.imshow( # 画像を表示する
temp_data[0], # 1つの訓練用入力データ(28行×28列)
cmap=plt.cm.binary) # 白黒(2値:バイナリ)の配色
plt.xlabel('Ankle boot') # X軸のラベルに分類名を表示
plt.show()
#display(pd.DataFrame(temp_data[0])) # 表形式で表示する場合
###Output
_____no_output_____
###Markdown
List 5-3 Inference: getting the predicted values for the mock data
###Code
import matplotlib.pyplot as plt # グラフ描画ライブラリ(データ画像の表示に使用)
# 推論(予測)する
predictions = model.predict(temp_data)
print(predictions)
# 以下のように出力される(インデックス番号「9」が99.88%)
# array([[1.50212283e-20, 1.48110773e-15, 1.04932664e-13, 2.96827862e-12,
# 6.80513210e-08, 5.95744408e-04, 1.75191891e-18, 6.33274554e-04,
# 6.95163068e-12, 9.98770893e-01]], dtype=float32)
# 数値が最大のインデックス番号を取得(=分類を決定する)
pred_class = np.argmax(predictions, axis=-1)
print(pred_class) # 9 (=Ankle boot)……などと表示される
###Output
_____no_output_____
###Markdown
List 5-4 Inference: plotting the prediction as a bar chart
###Code
x = range(10) # 0, 1, 2, ……, 9
thisplot = plt.barh(x, predictions[0])
plt.xlim([0.0, 1.0])
classes_name = ['T-shirt/top [0]', 'Trouser [1]', 'Pullover [2]',
'Dress [3]', 'Coat [4]', 'Sandal [5]', 'Shirt [6]',
'Sneaker [7]', 'Bag [8]', 'Ankle boot [9]']
plt.yticks(x, classes_name) # X軸のラベル
plt.gca().invert_yaxis()
plt.show()
###Output
_____no_output_____
###Markdown
Points about the result: - Ankle boot (class index: 9) comes out at (almost) 100% 【Binary classification】---------- - Only the changes from the steps above are explained ■(6) Data preparation As the image dataset of handwritten digits we use "[MNIST](https://www.atmarkit.co.jp/ait/articles/2001/22/news012.html)". List 6-1 Fetching the MNIST (handwritten digit) image data
###Code
# TensorFlowライブラリのtensorflowパッケージを「tf」という別名でインポート
import tensorflow as tf
import matplotlib.pyplot as plt # グラフ描画ライブラリ(データ画像の表示に使用)
import numpy as np # 数値計算ライブラリ(データのシャッフルに使用)
# Fetch the MNIST data (as NumPy multidimensional arrays)
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()
# データ分割は自動で、訓練用が6万枚、テスト用が1万枚(ホールドアウト法)。
# さらにそれぞれを「入力データ(X:行列)」と「ラベル(y:ベクトル)」に分ける
# データのフィルタリング
b = np.where(y_train < 2)[0] # 訓練データから「0」「1」の全インデックスの取得
X_train, y_train = X_train[b], y_train[b] # そのインデックス行を抽出(=フィルタリング)
c = np.where(y_test < 2)[0] # テストデータから「0」「1」の全インデックスの取得
X_test, y_test = X_test[c], y_test[c] # そのインデックス行を抽出(=フィルタリング)
# ※訓練データは、学習時のfit関数で訓練用と精度検証用に分割する。
# そのため、あらかじめ訓練データをシャッフルしておく
p = np.random.permutation(len(X_train)) # ランダムなインデックス順の取得
X_train, y_train = X_train[p], y_train[p] # その順で全行を抽出する(=シャッフル)
# [内容確認]データのうち、最初の10枚だけを表示
classes_name = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
plt.figure(figsize=(10,4)) # 横:10インチ、縦:4インチの図
for i in range(10):
plt.subplot(2,5,i+1) # 図内にある(sub)2行5列の描画領域(plot)の何番目かを指定
plt.xticks([]) # X軸の目盛りを表示しない
plt.yticks([]) # y軸の目盛りを表示しない
plt.grid(False) # グリッド線を表示しない
plt.imshow( # 画像を表示する
X_train[i], # 1つの訓練用入力データ(28行×28列)
cmap=plt.cm.binary) # 白黒(2値:バイナリ)の配色
plt.xlabel(classes_name[y_train[i]]) # X軸のラベルに分類名を表示
plt.show()
###Output
_____no_output_____
###Markdown
Points about this code: - `fashion_mnist.load_data()` is replaced with `mnist.load_data()` - In TensorFlow, the MNIST dataset can be fetched with the [`tf.keras.datasets.mnist.load_data()` function](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/mnist/load_data) - The `np.where()` function filters the data down to the two classes "0" and "1" only (the remaining digits "2" to "9" are not used) List 6-2 Checking the contents of a single image
###Code
import pandas as pd # データ解析支援「pandas」
# 1件の訓練データの、ラベルと入力データを表示する
print('y_train(正解ラベル): 「',y_train[0],'」');
print('X_train:');
display(pd.DataFrame(X_train[0])) # NumPy多次元配列をPandasデータフレームに変換して表示
###Output
_____no_output_____
###Markdown
Points about this code: - Exactly the same as List 1-2 List 6-3 Normalizing the input data (Normalization)
###Code
X_train = (X_train / 255.0).astype(np.float32)
X_test = (X_test / 255.0).astype(np.float32)
###Output
_____no_output_____
###Markdown
Points about this code: - Exactly the same as List 1-3 ■(7) Model definition We proceed as follows: - define the model by **subclassing** the `tf.keras.Model` class (**recommended for intermediate users and above**) - use the basic tf.keras `compile()` & `fit()` methods (no custom training loop is needed this time) ● Design of the deep neural network - The number of inputs (`INPUT_FEATURES`) is **28 rows** × **28 columns** (=784), so the input is flattened (Flatten) to **784** values - There are **2** hidden layers - the number of neurons in the first hidden layer (`LAYER1_NEURONS`) is **128** - the number of neurons in the second hidden layer (`LAYER2_NEURONS`) is **32** - The number of neurons in the output layer (`OUTPUT_RESULTS`) is **1** (a single value between 0 and 1 is enough for the decision, so two output classes are not needed) List 7-1 Model definition
###Code
import tensorflow as tf # ライブラリ「TensorFlow」のtensorflowパッケージをインポート
from tensorflow.keras import layers # レイヤー関連モジュールのインポート
from IPython.display import Image
# 定数(モデル定義時に必要となるもの)
INPUT_ROWS = 28 # 入力行の数: 28行
INPUT_COLS = 28 # 入力列の数: 28列
# 入力(特徴)の数: 784(=28行×28列)
LAYER1_NEURONS = 128 # ニューロンの数: 128
LAYER2_NEURONS = 32 # ニューロンの数: 32
#OUTPUT_RESULTS = 10 # 前述する多クラス分類の場合: 10(=「0」~「9」の10クラスに分類)
OUTPUT_RESULTS = 1 # 出力結果の数: 1(=「0.0」~「1.0」の2値に分類)
# 過学習対策でドロップアウトを使う場合はコメントオフ:
#DROPOUT1_RATE = 0.2 # 第1隠れ層から第2隠れ層へのドロップ率: 0.2(20%)
# 変数(モデル定義時に必要となるもの)
activation1 = layers.ReLU(name='activation1') # 活性化関数(隠れ層用): ReLU関数(変更可能)
activation2 = layers.ReLU(name='activation2') # 活性化関数(隠れ層用): ReLU関数(変更可能)
act_output = layers.Activation('sigmoid', name='act_output') # 活性化関数(出力層用): Sigmoid関数(固定)
# tf.keras.Modelによるモデルの定義
class NeuralNetwork(tf.keras.Model):
# レイヤー(層)を定義
def __init__(self):
super(NeuralNetwork, self).__init__()
# 入力層:入力データのフラット化(Flatten)
self.flatten_input = layers.Flatten( # 行列データのフラット化
input_shape=(INPUT_ROWS, INPUT_COLS), # 入力の形状(=入力層)※タプル形式
name='flatten_input')
# 隠れ層:1つ目のレイヤー(layer)
self.layer1 = layers.Dense( # 全結合層(線形変換)
# 入力ユニット数は、前の出力ユニット数が使われるので、指定不要
LAYER1_NEURONS, # 次のレイヤーへの出力ユニット数
name='layer1')
# 第1レイヤーの後でドロップアウトを使う場合はコメントオフ:
#self.dropput1 = layers.Dropout( # ドロップアウト
# DROPOUT1_RATE, # 何%ドロップするか
# name='dropput1')
# 隠れ層:2つ目のレイヤー(layer)
self.layer2 = layers.Dense( # 全結合層
LAYER2_NEURONS, # 次のレイヤーへの出力ユニット数
name='layer2')
# 出力層
self.layer_out = layers.Dense( # 全結合層
OUTPUT_RESULTS, # 出力結果への出力ユニット数
name='layer_out')
# フォワードパスを定義
def call(self, x, train_mode=True):
x = self.flatten_input(x) # 入力データのフラット化
# 「出力=活性化関数(第n層(入力))」の形式で記述
x = activation1(self.layer1(x)) # 活性化関数は変数として定義
#ドロップアウトを使う場合はコメントオフ:
#if train_mode: # 訓練時のみ……
# x = self.dropput2(x) # ……ドロップアウト(不活性化)
x = activation2(self.layer2(x)) # 活性化関数は変数として定義
x = act_output(self.layer_out(x)) # the output activation here is fixed to the sigmoid function
return x
# モデル内容の出力を行う独自メソッド
def get_static_model(self):
x = layers.Input(shape=(28,28), name='input_features')
static_model = tf.keras.Model(inputs=[x], outputs=self.call(x))
return static_model
###Output
_____no_output_____
###Markdown
Points about this code: - The number of neurons in the output layer changed from 10 to 1 - The output activation is the [sigmoid function](https://www.atmarkit.co.jp/ait/articles/2003/04/news021.html), the usual choice for binary classification (the multi-class model above used the softmax function); as its counterpart, the loss function is the "cross-entropy for binary classification" (described later) List 7-2 Checking the model (as a diagram)
###Code
# モデル(NeuralNetworkクラス)のインスタンス化
model = NeuralNetwork()
# モデル概要の図を描画する
f_model = model.get_static_model()
filename = 'model.png';
tf.keras.utils.plot_model(f_model, show_shapes=True, show_layer_names=True, to_file=filename)
from IPython.display import Image
Image(retina=False, filename=filename) # 図で描画
#f_model.summary() # テキストで出力
###Output
_____no_output_____
###Markdown
■(8) Training/optimization (optimizer) List 8-1 Defining the training setup (loss function / optimizer / learning rate)
###Code
# 定数(学習方法設計時に必要となる数値)
LOSS = 'binary_crossentropy' # 損失関数:二値分類用の交差エントロピー
METRICS = ['accuracy'] # 評価関数:正解率
OPTIMIZER = tf.keras.optimizers.Adam # 最適化:Adam
LEARNING_RATE = 0.001 # 学習率: 0.001(学習率の調整)
# 学習方法を定義する
model.compile(optimizer=OPTIMIZER(learning_rate=LEARNING_RATE),
loss=LOSS,
metrics=METRICS) # 精度(分類では正解率。回帰では損失)
###Output
_____no_output_____
###Markdown
Points about this code: - The loss function is the "binary cross-entropy (binary_crossentropy)" that pairs with the sigmoid activation - For multi-class classification there are two cross-entropy variants (sparse_categorical_crossentropy for class indices, or categorical_crossentropy for one-hot labels) List 8-2 Training (mini-batch learning)
###Code
# 定数(ミニバッチ学習時に必要となるもの)
BATCH_SIZE = 96 # バッチサイズ: 96
EPOCHS = 100 # エポック数: 100
# 早期終了
#es = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3)
# 学習する
hist = model.fit(x=X_train, # 訓練用データ
y=y_train, # 訓練用ラベル
validation_split=0.2, # 精度検証用の割合:20%
batch_size=BATCH_SIZE, # バッチサイズ
epochs=EPOCHS, # エポック数
verbose=1, # 実行状況表示
callbacks=[]) # コールバック(早期終了しない場合)
#callbacks=[es]) # コールバック(早期終了する場合)
###Output
_____no_output_____
###Markdown
Points about this code: - Exactly the same as List 3-2 ■(9) Evaluation/validation List 9-1 Plotting the loss and metric curves
###Code
import matplotlib.pyplot as plt
# 学習結果(損失=交差エントロピー)のグラフを描画
plt.figure()
train_loss = hist.history['loss']
valid_loss = hist.history['val_loss']
epochs = len(train_loss)
plt.plot(range(epochs), train_loss, marker='.', label='loss (training data)')
plt.plot(range(epochs), valid_loss, marker='.', label='loss (validation data)')
plt.legend(loc='best')
plt.grid()
plt.xlabel('epoch')
plt.ylabel('loss (cross entropy)')
# 評価関数(正解率)のグラフを描画
plt.figure()
train_mae = hist.history['accuracy']
valid_mae = hist.history['val_accuracy']
epochs = len(train_mae)
plt.plot(range(epochs), train_mae, marker='.', label='accuracy (training data)')
plt.plot(range(epochs), valid_mae, marker='.', label='accuracy (validation data)')
plt.legend(loc='best')
plt.grid()
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
###Output
_____no_output_____
###Markdown
Points about this code: - Exactly the same as List 4-1 - Overfitting sets in at a very early stage ■(10) Inference/testing on unseen data List 10-1 Testing on unseen data (evaluation on the test data)
###Code
#BATCH_SIZE = 96 # バッチサイズ(リスト3-2で定義済み)
# 未知のテストデータで学習済みモデルの汎化性能を評価
score = model.evaluate(X_test, y_test, batch_size=BATCH_SIZE)
print('test score([loss, accuracy]):', score)
# 出力例:
# 23/23 [==============================] - 0s 2ms/step - loss: 0.0030 - accuracy: 0.9995
# test score([loss, accuracy]): [0.002963468199595809, 0.9995272159576416]
###Output
_____no_output_____
###Markdown
Points about this code: - Exactly the same as List 5-1 ■ A picture of production use List 10-2 Inference: drawing your own handwritten digit data
###Code
#@title 「0」か「1」を手書きしてください。
# This code will be hidden when the notebook is loaded.
from IPython.core.display import HTML
from IPython.display import Image
from PIL import Image
from io import BytesIO
import base64
import numpy as np
import matplotlib.pyplot as plt
import google.colab.output
temp_data = np.zeros((1, 28, 28))
def Base64image(encoded):
print(encoded)
binary = base64.b64decode(encoded.split(',')[1])
im224 = Image.open(BytesIO(binary))
im28 = im224.resize(
(28, 28),
Image.BICUBIC).convert('L') # L= 8ビット RGB黒白
global temp_data
temp_data = np.array(im28).reshape((1, 28, 28)) # 多次元配列の形状
temp_data = 255 - temp_data #「白=0」~「黒=255」に変換
google.colab.output.register_callback('Base64image', Base64image)
HTML('''
<canvas id="myCanvas" width="224" height="224" style="border:1px solid #d3d3d3;">
お使いのブラウザーではHTML canvasをサポートしていないようです。
</canvas>
<p>
<button id="clear">削除</button>
<button id="submit">このデータを保存</button>
<span id="infobar"></span>
</p>
<script>
// Canvas描画領域
var canvas = document.getElementById("myCanvas");
var ctx = canvas.getContext("2d");
ctx.strokeStyle = "#000";
ctx.lineJoin = "round";
ctx.lineWidth = 15;
ctx.fillStyle = "#FFF";
ctx.fillRect(0, 0, canvas.width, canvas.height);
// メッセージ表示領域
var infobar = document.getElementById("infobar");
// [削除]ボタン
var clearbtn = document.getElementById("clear");
clearbtn.addEventListener("click", function(){
ctx.clearRect(0, 0, canvas.width, canvas.height);
ctx.fillStyle = "#FFF";
ctx.fillRect(0, 0, canvas.width, canvas.height);
infobar.textContent = "";
});
// [このデータを保存]ボタン
var submitbtn = document.getElementById("submit");
submitbtn.addEventListener("click", function(){
var base64image = canvas.toDataURL('image/jpeg', 1.0);
google.colab.kernel.invokeFunction('Base64image', [base64image])
infobar.textContent = "保存しました!";
});
// マウスカーソルを管理
var mouse = {x: 0, y: 0};
// マウスカーソルを描画する
var onPaint = function() {
ctx.lineTo(mouse.x, mouse.y);
ctx.stroke();
};
// マウスの移動を捕捉する
canvas.addEventListener('mousemove', function(e) {
var rect = e.target.getBoundingClientRect();
mouse.x = e.clientX - rect.left - 1;
mouse.y = e.clientY - rect.top - 1;
}, false);
// マウスボタンが押し下げられたら描画を開始させる
canvas.addEventListener('mousedown', function(e) {
ctx.moveTo(mouse.x, mouse.y);
ctx.beginPath();
canvas.addEventListener('mousemove', onPaint, false);
}, false);
// マウスボタンが離されたら描画を終了させる
canvas.addEventListener('mouseup', function() {
canvas.removeEventListener('mousemove', onPaint, false);
}, false);
</script>
''')
###Output
_____no_output_____
###Markdown
List 10-3 Inference: displaying the contents of your own data
###Code
import matplotlib.pyplot as plt # グラフ描画ライブラリ(データ画像の表示に使用)
#import pandas as pd # データ解析支援「pandas」
print(temp_data.shape) # 多次元配列の形状: (1, 28, 28)
# 図を描画
plt.imshow( # 画像を表示する
temp_data[0], # 1つの訓練用入力データ(28行×28列)
cmap=plt.cm.binary) # 白黒(2値:バイナリ)の配色
plt.show()
#display(pd.DataFrame(temp_data[0])) # 表形式で表示する場合
###Output
_____no_output_____
###Markdown
List 10-4 Inference: getting the prediction for your own data
###Code
import matplotlib.pyplot as plt # グラフ描画ライブラリ(データ画像の表示に使用)
# 推論(予測)する
predictions = model.predict(temp_data)
predictions
# 以下のように出力される(「1.」は100%「1」、「0.」なら100%「0」)
# array([[1.]], dtype=float32)
###Output
_____no_output_____ |
.ipynb_checkpoints/visualization_and_data_analysis-checkpoint.ipynb | ###Markdown
We are only interested in sensors with the following parameters: (sensor_name = sds011, sensor_id = 6127) and (sensor_name = bme280, sensor_id = 6128).
###Code
# Imports used by the helper functions and the analysis cells below
import datetime
import os
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import weathercom  # weather-forecast data, used near the end of the notebook

def part_day(x):
""" Returns part of day based on the timestamp hour """
x = x.hour
if (x > 4) and (x <= 8):
return 1
elif (x > 8) and (x <= 12):
return 2
elif (x > 12) and (x <= 14):
return 3
elif (x > 14) and (x <= 18):
return 4
elif (x > 18) and (x <= 22):
return 5
else:
return 6
def season(x):
"""Returns season based on month"""
x = x.month
if (x > 3) and (x <= 6):
return 1
elif (x > 6) and (x <= 9):
return 2
elif (x > 9) and (x <= 11):
return 3
else:
return 4
def is_workday(x):
""" Returns if day is workday"""
if x <= 4:
return 1
else:
return 0
def mean_print_plot(df, category: str, data_col1: str, data_col2: str) -> None:
"""Function which prints the mean of each category in a data column and plots the difference between the 2 datasets
Parameters:
df: data frame
category: category name
data_col1: data to calculate mean on
data_col2: data to calculate mean on
Returns:
None
"""
print(f"P1 data stats: {df.groupby([category]).mean()[data_col1].sort_index()}")
print('------------------------------------------------')
print(f"P2 data stats: {df.groupby([category]).mean()[data_col2].sort_index()}")
plt.plot(df.groupby([category]).mean()[data_col1].sort_index(), label='P1')
plt.plot(df.groupby([category]).mean()[data_col2].sort_index(), label='P2')
plt.title(f'Mean of {data_col1} and {data_col2} per {category} category')
plt.legend()
plt.grid()
def is_p1_high(x):
if x > 35:
return 1
else:
return 0
def is_holiday(x):
""" Returns if it is holiday if date is 3 days around a holiday"""
    for holiday in HOLIDAYS:
        if (x >= holiday - datetime.timedelta(days=3)) and (x <= holiday + datetime.timedelta(days=3)):
            return 1
    # only report "not a holiday" after every holiday has been checked
    return 0
###Output
_____no_output_____
###Markdown
Also, we are going to create a list with all public holidays:

| Date | Official Name |
|---|---|
| 1 January | New Year's Day |
| 3 March | Liberation Day |
| 1 May | International Workers' Day |
| 6 May | Saint George's Day |
| 24 May | Bulgarian Education and Culture and Slavonic Literature Day |
| 6 September | Unification Day |
| 22 September | Independence Day |
| 24 December | Christmas Eve |
| 25 & 26 December | Christmas Day |
| Moveable | Orthodox Good Friday, Holy Saturday & Easter |
###Code
# TODO: Automatically take it from G calendar API
HOLIDAYS = [datetime.date(2020,4,19), datetime.date(2021,1,1), datetime.date(2021,3,3),datetime.date(2020,5,1),
datetime.date(2020,5,6),datetime.date(2020,5,24),datetime.date(2020,9,6),
datetime.date(2020,9,22),datetime.date(2020,12,24),datetime.date(2020,12,25),datetime.date(2020,12,26)]
START_DATE = datetime.date(2020,4,1) # start time for the analysis
###Output
_____no_output_____
###Markdown
Data download now
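The `download_data` helper called in the next cell is not defined anywhere in the cells shown here. A minimal sketch of what such a function could look like is given below for reference; the archive URL pattern is an assumption (it only mirrors the `<date>_<sensor>_...` file names used when the data is loaded later) and may need adjusting.

```python
# Hypothetical sketch of the missing helper - the URL pattern is an assumption.
import datetime
import os
import requests

def download_data(sensor_name, sensor_id, start=START_DATE, out_dir='./data/'):
    os.makedirs(out_dir, exist_ok=True)
    day = start
    while day <= datetime.date.today():
        fname = f"{day}_{sensor_name}_sensor_{sensor_id}.csv"
        url = f"https://archive.sensor.community/{day}/{fname}"  # assumed archive location
        print("Downloading:", day)
        response = requests.get(url)
        if response.ok:
            with open(os.path.join(out_dir, fname), 'wb') as f:
                f.write(response.content)
        day += datetime.timedelta(days=1)
```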
###Code
download_data(sensor_name='sds011', sensor_id=6127)
download_data(sensor_name='bme280', sensor_id=6128)
###Output
Downloading: 2020-04-01
Downloading: 2020-04-02
Downloading: 2020-04-03
Downloading: 2020-04-04
Downloading: 2020-04-05
Downloading: 2020-04-06
Downloading: 2020-04-07
Downloading: 2020-04-08
Downloading: 2020-04-09
Downloading: 2020-04-10
Downloading: 2020-04-11
Downloading: 2020-04-12
Downloading: 2020-04-13
Downloading: 2020-04-14
Downloading: 2020-04-15
Downloading: 2020-04-16
Downloading: 2020-04-17
Downloading: 2020-04-18
Downloading: 2020-04-19
Downloading: 2020-04-20
Downloading: 2020-04-21
Downloading: 2020-04-22
Downloading: 2020-04-23
Downloading: 2020-04-24
Downloading: 2020-04-25
Downloading: 2020-04-26
Downloading: 2020-04-27
Downloading: 2020-04-28
Downloading: 2020-04-29
Downloading: 2020-04-30
Downloading: 2020-05-01
Downloading: 2020-05-02
Downloading: 2020-05-03
Downloading: 2020-05-04
Downloading: 2020-05-05
Downloading: 2020-05-06
Downloading: 2020-05-07
Downloading: 2020-05-08
Downloading: 2020-05-09
Downloading: 2020-05-10
Downloading: 2020-05-11
Downloading: 2020-05-12
Downloading: 2020-05-13
Downloading: 2020-05-14
Downloading: 2020-05-15
Downloading: 2020-05-16
Downloading: 2020-05-17
Downloading: 2020-05-18
Downloading: 2020-05-19
Downloading: 2020-05-20
Downloading: 2020-05-21
Downloading: 2020-05-22
Downloading: 2020-05-23
Downloading: 2020-05-24
Downloading: 2020-05-25
Downloading: 2020-05-26
Downloading: 2020-05-27
Downloading: 2020-05-28
Downloading: 2020-05-29
Downloading: 2020-05-30
Downloading: 2020-05-31
Downloading: 2020-06-01
Downloading: 2020-06-02
Downloading: 2020-06-03
Downloading: 2020-06-04
Downloading: 2020-06-05
Downloading: 2020-06-06
Downloading: 2020-06-07
Downloading: 2020-06-08
Downloading: 2020-06-09
Downloading: 2020-06-10
Downloading: 2020-06-11
Downloading: 2020-06-12
Downloading: 2020-06-13
Downloading: 2020-06-14
Downloading: 2020-06-15
Downloading: 2020-06-16
Downloading: 2020-06-17
Downloading: 2020-06-18
Downloading: 2020-06-19
Downloading: 2020-06-20
Downloading: 2020-06-21
###Markdown
We have all the data, let us now load the data in dataframe
###Code
file_list = os.listdir('./data/')
date_list = set([file.split('_')[0] for file in file_list]) # get unique dates
df = pd.DataFrame()
for date in date_list:
for file in file_list:
if file.find(date) != -1:
if file.find('bme280') != -1:
df_temp_1 = pd.read_csv('./data/'+file, sep=';')
df_temp_1.timestamp = pd.to_datetime(df_temp_1.timestamp, errors='ignore', infer_datetime_format=True)
elif file.find('sds011') != -1:
df_temp_2 = pd.read_csv('./data/'+file, sep=';')
df_temp_2.timestamp = pd.to_datetime(df_temp_2.timestamp, errors='ignore', infer_datetime_format=True)
df_1 = pd.merge_asof(df_temp_1, df_temp_2, on='timestamp', direction='nearest', tolerance=datetime.timedelta(seconds=20), allow_exact_matches=False)
df_1.drop(['altitude', 'pressure', 'durP2', 'ratioP2', 'durP1', 'ratioP1', "ratioP2",
'sensor_id_x', 'sensor_type_x', 'location_x', 'lat_x', 'lon_x', 'pressure_sealevel',
'sensor_id_y', 'sensor_type_y', 'location_y', 'lat_y','lon_y'], axis=1, inplace=True);
df_1.dropna(inplace=True)
df = pd.concat([df, df_1])
df.reset_index(inplace=True)
df.drop('index', axis=1, inplace=True)
df['IsHoliday'] = df['timestamp'].apply(is_holiday)
df['PartDay'] = df['timestamp'].apply(part_day)
df['WeekDay'] = df['timestamp'].dt.dayofweek
df['IsWorking'] = df['WeekDay'].apply(is_workday)
df['Season'] = df['timestamp'].apply(season)
###Output
_____no_output_____
###Markdown
Check if the P1 and P2 values are higher in any part of the day or if it is a holiday.
###Code
mean_print_plot(df, category='IsHoliday', data_col1='P1', data_col2='P2')
locs, ticks = plt.xticks()
plt.xticks([locs[1], locs[-2]], ['Normal day', 'Holiday'])
mean_print_plot(df, category='WeekDay', data_col1='P1', data_col2='P2')
locs, ticks = plt.xticks()
plt.xticks(locs, ['', 'Mon','Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun', ''])
df_blah = df.loc[df['PartDay']==6]
mean_print_plot(df_blah, category='Season', data_col1='P1', data_col2 = 'P2')
mean_print_plot(df, category='PartDay', data_col1='P1', data_col2='P2')
locs, ticks = plt.xticks()
plt.xticks(locs, [' ','Early Morning', 'Morning', 'Noon', 'Afternoon', 'Evening', 'Night', ''])
mean_print_plot(df, category='Season', data_col1='P1', data_col2='P2')
locs, ticks = plt.xticks()
plt.xticks(locs[1:-1:2], ['Spring', 'Summer', 'Autumn', 'Winter'])
###Output
_____no_output_____
###Markdown
As we see there are correlations between the season, week day, time of day and holidays and the air quality. I will now check with a seaborn correlation plot
###Code
sns.heatmap(df.corr(), annot=True)
###Output
_____no_output_____
###Markdown
Strangely enough, there also seems to be a correlation between humidity and air quality, but not that much with temperature. Regression: I think this problem may be best generalized with a random forest model. First I will start with a regressor, then I will use a classifier.
###Code
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
from sklearn.preprocessing import StandardScaler
X = df[['temperature', 'humidity', 'IsHoliday', 'WeekDay', 'Season']]
y = df['P1']
today = [12, 47, 0, 1, 1]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3)
sc_regr = StandardScaler()
X_train = sc_regr.fit_transform(X_train)
X_test = sc_regr.transform(X_test)
today = sc_regr.transform([today])
regr = RandomForestRegressor()
regr_svm = SVR()
regr.fit(X_train, y_train)
regr_svm.fit(X_train, y_train)
regr.predict(today)
regr_svm.predict(today)
y_pred_svm = regr_svm.predict(X_test)
###Output
_____no_output_____
###Markdown
Regression ANN:
###Code
import tensorflow as tf
ann_regr = tf.keras.models.Sequential()
ann_regr.add(tf.keras.layers.Dense(units = 5, activation='relu'))
ann_regr.add(tf.keras.layers.Dense(units = 8, activation='relu'))
ann_regr.add(tf.keras.layers.Dense(units=1))
ann_regr.compile(optimizer='adam', loss='mean_squared_error')
ann_regr.fit(X, y, batch_size=64, epochs=100)
y_pred = ann_regr.predict(X_test)
print(y_test)
ann_regr.predict(today)
###Output
_____no_output_____
###Markdown
Later training XGBoost
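As a rough sketch of how that could look later (assuming the `xgboost` package is installed; it is not used anywhere else in this notebook), reusing the regression train/test split from above:

```python
# Hedged sketch - XGBoost is only mentioned as future work here.
from xgboost import XGBRegressor

xgb_regr = XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
xgb_regr.fit(X_train, y_train)
print(xgb_regr.predict(X_test[:5]))
```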
###Code
df['HighP1'] = df['P1'].apply(is_p1_high)
###Output
_____no_output_____
###Markdown
Now I will run a classification RF model
###Code
from sklearn.ensemble import RandomForestClassifier
y = df['HighP1']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3)
today = [1, 100, 0, 6, 1]
classifier = RandomForestClassifier()
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
print(accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test,y_pred))
print(classification_report(y_test, y_pred))
classifier.predict_proba([today])
regr.predict([today])
regr.feature_importances_
classifier.feature_importances_
###Output
_____no_output_____
###Markdown
Results are not great - this may be due to overfitting of the clean air data as we have roughly 5 times more data for that case
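One quick way to test the imbalance hypothesis is to re-fit the random forest with balanced class weights, so the rare "high P1" class counts more during training (a sketch, reusing the split from above):

```python
# Sketch: counter the roughly 5:1 class imbalance with balanced class weights.
balanced_clf = RandomForestClassifier(class_weight='balanced')
balanced_clf.fit(X_train, y_train)
print(classification_report(y_test, balanced_clf.predict(X_test)))
```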
###Code
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
classifier_svc = SVC()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
today = sc.transform([today])
classifier_svc.fit(X_train, y_train)
y_pred_svc = classifier_svc.predict(X_test)
print(classification_report(y_test, y_pred_svc))
classifier_svc.predict(today)
###Output
_____no_output_____
###Markdown
Now I will try with ANN regression/classification
###Code
import tensorflow as tf
ann = tf.keras.models.Sequential()
ann.add(tf.keras.layers.Dense(units = 8, activation='relu'))
ann.add(tf.keras.layers.Dense(units = 8, activation='relu'))
ann.add(tf.keras.layers.Dense(units=1, activation='sigmoid'))
ann.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
ann.fit(X_train, y_train, batch_size=64, epochs=100)
y_pred = ann.predict(X_test)
ann.predict(today)
y_pred = (y_pred > 0.5)
print(classification_report(y_test, y_pred))
data = weathercom.getCityWeatherDetails(city='Sofia',queryType="ten-days-data")
df = pd.read_json(data)
weather = pd.DataFrame()
date = pd.DataFrame()
weather['Temperature'] = df['vt1dailyForecast']['day']['temperature']
weather['Humidity'] = df['vt1dailyForecast']['day']['humidityPct']
date['Date'] = pd.to_datetime(df['vt1dailyForecast']['validDate'])
weather['IsHoliday'] = date['Date'].apply(is_holiday)
weather['Weekday'] = date['Date'].dt.dayofweek
weather['Season'] = date['Date'].apply(season)
weather = sc_regr.transform(weather)
ann_regr.predict(weather)
###Output
_____no_output_____
###Markdown
Saving the model so it can be reused
###Code
import joblib
joblib.dump(sc_regr, 'std_scaler.bin', compress=True)
ann_regr.save('ann_regr_weather.h5')
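# To reuse them later (illustrative, mirroring the two save calls above):
# sc_regr = joblib.load('std_scaler.bin')
# ann_regr = tf.keras.models.load_model('ann_regr_weather.h5')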
###Output
_____no_output_____ |
biostatistics/stats1.ipynb | ###Markdown
Confidence intervals of a proportion. Source: Intuitive Biostatistics. See also: https://en.wikipedia.org/wiki/Confidence_interval First let's import some packages:
###Code
import numpy as np
import scipy.stats as st
import matplotlib.pyplot as plt
import pandas as pd
import statsmodels.api as sm
###Output
_____no_output_____
###Markdown
Now let's generate some example data: counting weird cells on a slide. Let's assume you have done an experiment and you are looking at your cells under the microscope and you notice that some cells look weird - they have deformed nuclei. You are excited and you count how many they are, so you count a hundred cells and 10 are weird. But then you wonder: "Damn, I haven't done any replicates, couldn't it just be 10 by chance?" "How sure am I that it is really that many?". Decide how sure you want to be:
###Code
confidence_level = 0.95
###Output
_____no_output_____
###Markdown
This means that with your statement in the end you want to be 95% sure. So with a 5% probability you will be wrong. You think that is ok. Input your data
###Code
n = 100
a = 10
prop = a/n
print(prop)
###Output
0.1
###Markdown
Visualise your data. For example as a pie chart:
###Code
labels = 'normal', 'deformed'
sizes = [n-a,a]
fig1, ax1 = plt.subplots()
ax1.pie(sizes, labels=labels, autopct='%1.1f%%',
shadow=False, startangle=90)
ax1.axis('equal') # Equal aspect ratio ensures that pie is drawn as a circle.
plt.show()
###Output
_____no_output_____
###Markdown
or as a bar chart:
###Code
labels = 'normal', 'deformed'
sizes = [n-a,a]
width = 0.4 #this is the width of the bar
fig, ax = plt.subplots()
plt.bar(labels,sizes)
plt.show()
###Output
_____no_output_____
###Markdown
or as a stacked bar chart:
###Code
labels = 'normal', 'deformed'
sizes = [n-a,a]
width = 0.4 #this is the width of the bar
fig, ax = plt.subplots()
ax.bar('cells', sizes[0], width)
ax.bar('cells', sizes[1], width, bottom=sizes[0])
ax.set_ylabel('cells')
ax.set_title('deformed cells')
ax.legend(labels)
plt.show()
###Output
_____no_output_____
###Markdown
Measure of confidence. To calculate a confidence interval from one measurement, we are making a few assumptions: - We are looking at a random (or representative) sample! - The observations are independent! - The data are accurate! The confidence intervals we are calculating are confidence intervals of a proportion, which means they go back to "binomial variables", represented as a proportion. There are several ways to calculate these intervals: the "exact method", the "standard Wald method", the "modified Wald method". The details will probably never become relevant for you, so we will take the default standard implementation in Python, the "asymptotic normal approximation". For this we need the observed count (a) and the total number (n). It will give us a lower and an upper limit. alpha is 1 minus the confidence level. It is set by default to 0.05, so a 95% confidence level is assumed if it is not specified explicitly. (This was a problem in my original script, because I had accidentally deleted the specification of alpha... sorry!)
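For reference, the formula behind this asymptotic normal approximation is, with $\hat{p}=a/n$ and $z$ the $1-\alpha/2$ quantile of the standard normal distribution ($z \approx 1.96$ for a 95% confidence level):

$$\hat{p} \pm z\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}$$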
###Code
CI= sm.stats.proportion_confint(a, n, alpha=1-confidence_level)
print(CI)
###Output
(0.050654391191455816, 0.1493456088085442)
###Markdown
Be careful about whether you are dealing with proportions or with percentages! Multiply by the total cell number for the confidence interval in terms of cell counts:
###Code
CI_tot = [n*i for i in CI]
print(CI_tot)
###Output
[4.120108046379837, 15.879891953620165]
###Markdown
Multiply by 100 for percentages:
###Code
CI_perc = [100*i for i in CI]
print(CI_perc)
###Output
[4.120108046379837, 15.879891953620165]
###Markdown
Plotting the confidence interval
###Code
labels = 'normal', 'deformed'
sizes = [(n-a),a]
width = 0.4 #this is the width of the bar
lower_error = n-CI_tot[1] # we are subtracting it from total, because we are plotting the deformed cells on top
upper_error = n-CI_tot[0]
asymmetric_error = [lower_error, upper_error]
fig, ax = plt.subplots()
ax.bar('cells', sizes[0], width)
ax.bar('cells', sizes[1], width, bottom=sizes[0])
ax.set_ylabel('proportion')
ax.set_title('deformed cells')
ax.legend(labels)
ax.vlines('cells', lower_error, upper_error,color='black')
plt.show()
###Output
_____no_output_____ |
OpenCV/2_Image_stats_and_image_processing.ipynb | ###Markdown
Image stats and image processing. This notebook follows on from the fundamentals notebook. It will introduce some simple stats, smoothing, and basic image processing. But first let us include what we need and load up our test image. Estimated time needed: 20 min
###Code
# Download the test image and utils files
!wget --no-check-certificate \
https://raw.githubusercontent.com/computationalcore/introduction-to-opencv/master/assets/noidea.jpg \
-O noidea.jpg
!wget --no-check-certificate \
https://raw.githubusercontent.com/computationalcore/introduction-to-opencv/master/utils/common.py \
-O common.py
# these imports let you use opencv
import cv2 #opencv itself
import common #some useful opencv functions
import numpy as np # matrix manipulations
#the following are to do with this interactive notebook code
%matplotlib inline
from matplotlib import pyplot as plt # this lets you draw inline pictures in the notebooks
import pylab # this allows you to control figure size
pylab.rcParams['figure.figsize'] = (10.0, 8.0) # this controls figure size in the notebook
input_image=cv2.imread('noidea.jpg')
###Output
_____no_output_____
###Markdown
Basic manipulations. Rotate, flip...
###Code
flipped_code_0=cv2.flip(input_image,0) # vertical flip
plt.imshow(flipped_code_0)
flipped_code_1=cv2.flip(input_image,1) # horizontal flip
plt.imshow(flipped_code_1)
transposed=cv2.transpose(input_image)
plt.imshow(transposed)
###Output
_____no_output_____
###Markdown
Minimum, maximum. To find the min or max of a matrix, you can use minMaxLoc. This takes a single channel image (it doesn't make much sense to take the max of a 3 channel image). So in the next code snippet you see a for loop, using python style image slicing, to look at each channel of the input image separately.
###Code
for i in range(0,3):
min_value, max_value, min_location, max_location=cv2.minMaxLoc(input_image[:,:,i])
print("min {} is at {}, and max {} is at {}".format(min_value, min_location, max_value, max_location))
###Output
_____no_output_____
###Markdown
Arithmetic operations on images. OpenCV has a lot of functions for doing mathematics on images. Some of these have "analogous" numpy alternatives, but it is nearly always better to use the OpenCV version. The reason for this is that OpenCV is designed to work on images and so handles overflow better (OpenCV add, for example, truncates to 255 if the datatype is image-like and 8 bit; Numpy's alternative wraps around). Useful arithmetic operations include add and addWeighted, which combine two images that are the same size.
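A tiny illustration of that saturation-versus-wraparound difference (added here as a sketch):

```python
a = np.uint8([250])
b = np.uint8([10])
print(cv2.add(a, b))  # [[255]] - OpenCV saturates at 255
print(a + b)          # [4]     - NumPy uint8 addition wraps around (modulo 256)
```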
###Code
#First create an image the same size as our input
blank_image = np.zeros((input_image.shape), np.uint8)
blank_image[100:200,100:200,1]=100; #give it a green square
new_image=cv2.add(blank_image,input_image) # add the two images together
plt.imshow(cv2.cvtColor(new_image, cv2.COLOR_BGR2RGB))
###Output
_____no_output_____
###Markdown
Noise reduction. Noise reduction usually involves blurring/smoothing an image using a Gaussian kernel. The width of the kernel determines the amount of smoothing.
###Code
d=3
img_blur3 = cv2.GaussianBlur(input_image, (2*d+1, 2*d+1), -1)[d:-d,d:-d]
plt.imshow(cv2.cvtColor(img_blur3, cv2.COLOR_BGR2RGB))
d=5
img_blur5 = cv2.GaussianBlur(input_image, (2*d+1, 2*d+1), -1)[d:-d,d:-d]
plt.imshow(cv2.cvtColor(img_blur5, cv2.COLOR_BGR2RGB))
d=15
img_blur15 = cv2.GaussianBlur(input_image, (2*d+1, 2*d+1), -1)[d:-d,d:-d]
plt.imshow(cv2.cvtColor(img_blur15, cv2.COLOR_BGR2RGB))
###Output
_____no_output_____
###Markdown
Edges. Edge detection is the final image processing technique we're going to look at in this tutorial. For a lot of what we think of as "modern" computer vision techniques, edge detection functions as a building block. Much edge detection actually works by **convolution**, and indeed **convolutional neural networks** are absolutely the flavour of the month in some parts of computer vision. Sobel's edge detector was one of the first truly successful edge detection (enhancement) techniques, and it involves convolution at its core. You can read more about the background to Sobel in the OpenCV docs [here](https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_gradients/py_gradients.html).
###Code
sobelimage=cv2.cvtColor(input_image,cv2.COLOR_BGR2GRAY)
sobelx = cv2.Sobel(sobelimage,cv2.CV_64F,1,0,ksize=9)
sobely = cv2.Sobel(sobelimage,cv2.CV_64F,0,1,ksize=9)
plt.imshow(sobelx,cmap = 'gray')
# Sobel works in x and in y, change sobelx to sobely in the olt line above to see the difference
###Output
_____no_output_____
###Markdown
Canny edge detection is another winning technique - it takes two thresholds. The first one determines how likely Canny is to find an edge, and the second determines how likely it is to follow that edge once it's found. Investigate the effect of these thresholds by altering the values below.
###Code
th1=30
th2=60 # Canny recommends threshold 2 is 3 times threshold 1 - you could try experimenting with this...
d=3 # gaussian blur
edgeresult=input_image.copy()
edgeresult = cv2.GaussianBlur(edgeresult, (2*d+1, 2*d+1), -1)[d:-d,d:-d]
gray = cv2.cvtColor(edgeresult, cv2.COLOR_BGR2GRAY)
edge = cv2.Canny(gray, th1, th2)
edgeresult[edge != 0] = (0, 255, 0) # this takes pixels in edgeresult where edge non-zero colours them bright green
plt.imshow(cv2.cvtColor(edgeresult, cv2.COLOR_BGR2RGB))
###Output
_____no_output_____ |
Task1_Narre/Dataset_2_model_narre_Lesser_dropout.ipynb | ###Markdown
###Code
###Output
_____no_output_____ |
source/notebooks/chapter6/Chapter 6 - Contiguity diagram.ipynb | ###Markdown
This notebook generates figure 6.8, the illustration of tessellation-based contiguity matrix.
###Code
import geopandas as gpd
import libpysal
from splot.libpysal import plot_spatial_weights
import matplotlib.pyplot as plt
import pandas as pd
path = (
"data/contiguity_diagram.gpkg"
)
blg_s = gpd.read_file(path, layer="blg_s")
tess_s = gpd.read_file(path, layer="tess_s")
blg_c = gpd.read_file(path, layer="blg_c")
tess_c = gpd.read_file(path, layer="tess_c")
blg = pd.concat([blg_s, blg_c])
tess = pd.concat([tess_s, tess_c])
blg = blg.sort_values("uID")
blg.reset_index(inplace=True)
tess = tess.loc[tess["uID"].isin(blg["uID"])]
tess = tess.sort_values("uID")
tess.reset_index(inplace=True)
weights = libpysal.weights.contiguity.Queen.from_dataframe(tess)
f, ax = plt.subplots(figsize=(20, 10))
tess.plot(ax=ax)
plot_spatial_weights(weights, blg, ax=ax)
#plt.savefig(
# "contiguity_diagram.svg",
# dpi=300,
# bbox_inches="tight",
#)
###Output
/Users/martin/anaconda3/envs/ceus/lib/python3.8/site-packages/libpysal/weights/weights.py:167: UserWarning: The weights matrix is not fully connected:
There are 2 disconnected components.
warnings.warn(message)
|
DataSet_Meteo_Original_EnviDat/M1/newnbdir/M1-D2-DV.ipynb | ###Markdown
Notebook 2, Module 1, Data Acquisition and Data Management, CAS Applied Data Science, 2019-08-22, S. Haug, University of Bern. 1. Visualisation of Data - Examples **Learning outcomes:** Participants will be able to make good data science plots, with practice on - plot line charts from series and dataframes - plot histograms - understand the effect of binning - plot scatter plots - plot box plots - plot error bars - formatting of plots **Introduction Slides** - https://docs.google.com/presentation/d/1HhRIIVq46DyVNm68WeTqr_vZvOgSMWBZa2XDwWNH8H4/edit?usp=sharing **Further sources** - Python: https://pandas.pydata.org/pandas-docs/stable/visualization.html - https://jakevdp.github.io/PythonDataScienceHandbook/04.00-introduction-to-matplotlib.html - Get inspired here: https://matplotlib.org/gallery/index.html Here you have examples of the plotting possibilities with pandas. They make data science plotting very easy and fast. However, you may have special needs that are not supported. Then you can use the underlying plotting module **matplotlib**. Plotting is an art and you can spend enormous amounts of time on it. There are many types of plots. You may invent your own type. We only show some examples and point out some important things. If you need to go further, you have to work independently. Some vocabulary and plots are only understandable with the corresponding statistics background. This is part of module 2. 0. Load the modules
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
1. Plot line charts (time series)First we use the data structure Series (one dimensional).
###Code
# Generate 1000 random numbers for 1000 days from the normal distribution
ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000))
ts = ts.cumsum()
ts.head()
#ts.plot()
#plt.show()
###Output
_____no_output_____
###Markdown
We can generate 4 time series, keep them in a dataframe and plot them all four.
###Code
df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index, columns=['All','Bin','C','D'])
df_cumsum = df.cumsum()
plt.figure()
df_cumsum.plot()
plt.show()
###Output
_____no_output_____
###Markdown
2. Plot histograms (frequency plots)For this we use our Iris dataset.
###Code
df = pd.read_csv('iris.csv',names=['slength','swidth','plength','pwidth','species']) # data type is a string (str), i.e. not converted into numbers
df.head() # print first five lines of data
###Output
_____no_output_____
###Markdown
Plot two histograms with a legend in the same graph.
###Code
df['slength'].plot(kind="hist",fill=True,histtype='step',label='slength')
df['swidth'].plot(kind="hist",fill=False,histtype='step',label='swidth')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The effect of binning. When data is binned (or sampled), the bin size affects the number of counts in each bin. Counts fluctuate like a normal distribution for counts above about 20. So depending on your bin size, the same data may look different. Hard binning (small bin size) may introduce purely statistical structures without any other meaning. This is then overfitting. Too large bin sizes may wipe out structures in the data (underfitting). If known, a bin size guided by the physical resolution of the sensor is close to optimal. Plot the same histograms with a different binning.
###Code
df['slength'].plot(bins=10,range=(2,8), kind="hist",fill=False,histtype='step')
plt.show()
###Output
_____no_output_____
###Markdown
Always label the axes (also with units)
###Code
ax = df['slength'].plot(kind="hist",fill=False,histtype='step',label='slength')
df['swidth'].plot(kind="hist",fill=False,histtype='step',label='swidth')
ax.set_xlabel('x / cm')
ax.set_ylabel('Count / 0.3 cm')
ax.set_title('Sepal Length and Width')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
**A figure with several plots** 3. Scatter plots. Scatter plots show how the data is distributed in two dimensions. They are good for finding (anti-)correlations between two variables. We plot several plots in one figure.
###Code
df.plot(x='slength',y='swidth',kind="scatter",c='c')
plt.show()
###Output
_____no_output_____
###Markdown
With the plotting module there are some nice tools. For example authomatic plotting of all scatter plots.
###Code
from pandas.plotting import scatter_matrix
scatter_matrix(df[df['species']=='Iris-setosa'], alpha=0.2, figsize=(8, 8), diagonal='hist')
plt.show()
###Output
_____no_output_____
###Markdown
Or plotting of Andrew curves. https://en.wikipedia.org/wiki/Andrews_plot
###Code
from pandas.plotting import andrews_curves
andrews_curves(df, 'species')
plt.show()
###Output
_____no_output_____
###Markdown
There are several other tools too. See https://pandas.pydata.org/pandas-docs/stable/visualization.html 4. Box plots Boxplot can be drawn calling Series.plot.box() and DataFrame.plot.box(), or DataFrame.boxplot() to visualize the distribution of values within each column.
###Code
color = dict(boxes='DarkGreen', whiskers='DarkOrange',
medians='DarkBlue', caps='Gray')
df.plot.box(color=color)
plt.show()
###Output
_____no_output_____
###Markdown
Box plots are non-parametric. The box shows the first, second and third quartiles. The whiskers may be standard deviations or other percentiles. 5. Plotting with error bars. There is no science without error bars, or better, uncertainties. The meaning of uncertainties is discussed in module 2. Here we only show by example how to plot uncertainties. Plotting with error bars is supported in DataFrame.plot() and Series.plot(). Horizontal and vertical error bars can be supplied to the xerr and yerr keyword arguments to plot(). The error values can be specified using a variety of formats: - As a DataFrame or dict of errors with column names matching the columns attribute of the plotting DataFrame or matching the name attribute of the Series. - As a str indicating which of the columns of the plotting DataFrame contain the error values. - As raw values (list, tuple, or np.ndarray). Must be the same length as the plotting DataFrame/Series. Asymmetrical error bars are also supported, however raw error values must be provided in this case. For an M-length Series, an Mx2 array should be provided indicating lower and upper (or left and right) errors. For an MxN DataFrame, asymmetrical errors should be in an Mx2xN array. Here is an example using an error dataframe (symmetric uncertainties); a short asymmetric sketch is appended at the end of the code cell below.
###Code
my_df = pd.DataFrame([6,15,4,20,16,13]) # Some random data
my_df_e = (my_df)**0.5 # The error dataframe
my_df.plot(yerr=my_df_e)
plt.show()
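# Added aside (not part of the original cell): asymmetric uncertainties can be
# drawn with matplotlib's errorbar, where yerr is a (2, N) array-like holding
# the lower and upper errors; the asymmetry below is invented for illustration.
y_vals = my_df[0].values
lower = 0.5 * my_df_e[0].values
upper = 1.5 * my_df_e[0].values
plt.errorbar(range(len(y_vals)), y_vals, yerr=[lower, upper], fmt='o', capsize=3)
plt.show()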
###Output
_____no_output_____ |
training-data-analyst/courses/machine_learning/deepdive2/introduction_to_tensorflow/labs/2_dataset_api.ipynb | ###Markdown
TensorFlow Dataset API**Learning Objectives**1. Learn how to use tf.data to read data from memory1. Learn how to use tf.data in a training loop1. Learn how to use tf.data to read data from disk1. Learn how to write production input pipelines with feature engineering (batching, shuffling, etc.)In this notebook, we will start by refactoring the linear regression we implemented in the previous lab so that it takes data from a`tf.data.Dataset`, and we will learn how to implement **stochastic gradient descent** with it. In this case, the original dataset will be synthetic and read by the `tf.data` API directly from memory.In a second part, we will learn how to load a dataset with the `tf.data` API when the dataset resides on disk.Each learning objective will correspond to a __TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/2_dataset_api.ipynb).
###Code
import json
import math
import os
from pprint import pprint
import numpy as np
import tensorflow as tf
print(tf.version.VERSION)
###Output
2.1.3
###Markdown
Loading data from memory Creating the dataset Let's consider the synthetic dataset of the previous section:
###Code
N_POINTS = 10
X = tf.constant(range(N_POINTS), dtype=tf.float32)
Y = 2 * X + 10
###Output
_____no_output_____
###Markdown
We begin by implementing a function that takes as input - our $X$ and $Y$ vectors of synthetic data generated by the linear function $y = 2x + 10$ - the number of passes over the dataset we want to train on (`epochs`) - the size of the batches of the dataset (`batch_size`) and returns a `tf.data.Dataset`: **Remark:** Note that the last batch may not contain the exact number of elements you specified because the dataset was exhausted. If you want batches with the exact same number of elements per batch, we will have to discard the last batch by setting: ```python dataset = dataset.batch(batch_size, drop_remainder=True) ``` We will do that here. **Lab Task 1:** Complete the code below to 1. instantiate a `tf.data` dataset using [tf.data.Dataset.from_tensor_slices](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_tensor_slices). 2. Set up the dataset to * repeat `epochs` times, * create a batch of size `batch_size`, ignoring extra elements when the batch size does not divide the number of input elements evenly.
###Code
# TODO 1
def create_dataset(X, Y, epochs, batch_size):
dataset = tf.data.Dataset.from_tensor_slices((X, Y))
dataset = dataset.repeat(epochs).batch(batch_size, drop_remainder=True)
return dataset
###Output
_____no_output_____
###Markdown
Let's test our function by iterating twice over our dataset in batches of 3 datapoints:
###Code
BATCH_SIZE = 3
EPOCH = 2
dataset = create_dataset(X, Y, epochs=EPOCH, batch_size=BATCH_SIZE)
for i, (x, y) in enumerate(dataset):
print("x:", x.numpy(), "y:", y.numpy())
assert len(x) == BATCH_SIZE
assert len(y) == BATCH_SIZE
assert EPOCH
###Output
x: [0. 1. 2.] y: [10. 12. 14.]
x: [3. 4. 5.] y: [16. 18. 20.]
x: [6. 7. 8.] y: [22. 24. 26.]
x: [9. 0. 1.] y: [28. 10. 12.]
x: [2. 3. 4.] y: [14. 16. 18.]
x: [5. 6. 7.] y: [20. 22. 24.]
###Markdown
Loss function and gradients The loss function and the function that computes the gradients are the same as before:
###Code
def loss_mse(X, Y, w0, w1):
Y_hat = w0 * X + w1
errors = (Y_hat - Y)**2
return tf.reduce_mean(errors)
def compute_gradients(X, Y, w0, w1):
with tf.GradientTape() as tape:
loss = loss_mse(X, Y, w0, w1)
return tape.gradient(loss, [w0, w1])
###Output
_____no_output_____
###Markdown
Training loop The main difference is that now, in the training loop, we will iterate directly on the `tf.data.Dataset` generated by our `create_dataset` function. We will configure the dataset so that it iterates 250 times over our synthetic dataset in batches of 2. **Lab Task 2:** Complete the code in the cell below to call your dataset above when training the model. Note that `step, (X_batch, Y_batch)` iterates over the `dataset`. The inside of the `for` loop should be exactly as in the previous lab.
###Code
# TODO 2
EPOCHS = 250
BATCH_SIZE = 2
LEARNING_RATE = .02
MSG = "STEP {step} - loss: {loss}, w0: {w0}, w1: {w1}\n"
w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)
dataset = create_dataset(X, Y, epochs=EPOCHS, batch_size=BATCH_SIZE)
for step, (X_batch, Y_batch) in enumerate(dataset):
dw0, dw1 = compute_gradients(X_batch, Y_batch, w0, w1)
w0.assign_sub(LEARNING_RATE*dw0)
w1.assign_sub(LEARNING_RATE*dw1)
if step % 100 == 0:
loss = loss_mse(X_batch, Y_batch, w0, w1)
print(MSG.format(step=step, loss=loss, w0=w0.numpy(), w1=w1.numpy()))
assert loss < 0.0001
assert abs(w0 - 2) < 0.001
assert abs(w1 - 10) < 0.001
###Output
STEP 0 - loss: 109.76800537109375, w0: 0.23999999463558197, w1: 0.4399999976158142
STEP 100 - loss: 9.363959312438965, w0: 2.55655837059021, w1: 6.674341678619385
STEP 200 - loss: 1.393267273902893, w0: 2.2146825790405273, w1: 8.717182159423828
STEP 300 - loss: 0.20730558037757874, w0: 2.082810878753662, w1: 9.505172729492188
STEP 400 - loss: 0.03084510937333107, w0: 2.03194260597229, w1: 9.809128761291504
STEP 500 - loss: 0.004589457996189594, w0: 2.012321710586548, w1: 9.926374435424805
STEP 600 - loss: 0.0006827632314525545, w0: 2.0047526359558105, w1: 9.971602439880371
STEP 700 - loss: 0.00010164897685172036, w0: 2.0018346309661865, w1: 9.989042282104492
STEP 800 - loss: 1.5142451957217418e-05, w0: 2.000706911087036, w1: 9.995771408081055
STEP 900 - loss: 2.256260358990403e-06, w0: 2.0002737045288086, w1: 9.998367309570312
STEP 1000 - loss: 3.3405058275093324e-07, w0: 2.000105381011963, w1: 9.999371528625488
STEP 1100 - loss: 4.977664502803236e-08, w0: 2.000040054321289, w1: 9.999757766723633
STEP 1200 - loss: 6.475602276623249e-09, w0: 2.0000154972076416, w1: 9.99991226196289
###Markdown
Loading data from disk Locating the CSV files We will start with the **taxifare dataset** CSV files that we wrote out in a previous lab. The taxifare dataset files have been saved into `../data`. Check that this is the case in the cell below and, if not, regenerate the taxifare files from the previous lab.
###Code
!ls -l ../data/taxi*.csv
###Output
-rw-r--r-- 1 jupyter jupyter 123590 May 20 02:13 ../data/taxi-test.csv
-rw-r--r-- 1 jupyter jupyter 579055 May 20 02:13 ../data/taxi-train.csv
-rw-r--r-- 1 jupyter jupyter 123114 May 20 02:13 ../data/taxi-valid.csv
###Markdown
Use tf.data to read the CSV files The `tf.data` API can easily read CSV files using the helper function tf.data.experimental.make_csv_dataset. If you have TFRecords (which is recommended), you may use tf.data.experimental.make_batched_features_dataset instead (a brief, hedged sketch of that variant appears after the next cell). The first step is to define - the feature names in a list `CSV_COLUMNS` - their default values in a list `DEFAULTS`
###Code
CSV_COLUMNS = [
'fare_amount',
'pickup_datetime',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'passenger_count',
'key'
]
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']]
###Output
_____no_output_____
###Markdown
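As an aside on the TFRecord path mentioned above, the call would look roughly like the sketch below. This is a hypothetical illustration only: the file pattern and the feature spec are assumptions, and no TFRecord files are shipped with this lab.
```python
import tensorflow as tf

# Hypothetical feature spec for TFRecord examples (feature names are assumptions).
feature_spec = {
    "fare_amount": tf.io.FixedLenFeature([], tf.float32),
    "pickup_longitude": tf.io.FixedLenFeature([], tf.float32),
    "pickup_latitude": tf.io.FixedLenFeature([], tf.float32),
    "dropoff_longitude": tf.io.FixedLenFeature([], tf.float32),
    "dropoff_latitude": tf.io.FixedLenFeature([], tf.float32),
    "passenger_count": tf.io.FixedLenFeature([], tf.float32),
}

tfrecord_ds = tf.data.experimental.make_batched_features_dataset(
    file_pattern="../data/taxi-train*.tfrecord",  # assumed path
    batch_size=32,
    features=feature_spec,
    label_key="fare_amount",
)
```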
Let's now wrap the call to `make_csv_dataset` into its own function that will take only the file pattern (i.e. glob) where the dataset files are to be located: **Lab Task 3:** Complete the code in the `create_dataset(...)` function below to return a `tf.data` dataset made from the `make_csv_dataset`. Have a look at the [documentation here](https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset). The `pattern` will be given as an argument of the function but you should set the `batch_size`, `column_names` and `column_defaults`.
###Code
# TODO 3
def create_dataset(pattern):
dataset = tf.data.experimental.make_csv_dataset(pattern,
batch_size=1,
column_names=CSV_COLUMNS, column_defaults=DEFAULTS)
return dataset
tempds = create_dataset('../data/taxi-train*')
print(tempds)
###Output
<PrefetchDataset shapes: OrderedDict([(fare_amount, (1,)), (pickup_datetime, (1,)), (pickup_longitude, (1,)), (pickup_latitude, (1,)), (dropoff_longitude, (1,)), (dropoff_latitude, (1,)), (passenger_count, (1,)), (key, (1,))]), types: OrderedDict([(fare_amount, tf.float32), (pickup_datetime, tf.string), (pickup_longitude, tf.float32), (pickup_latitude, tf.float32), (dropoff_longitude, tf.float32), (dropoff_latitude, tf.float32), (passenger_count, tf.float32), (key, tf.string)])>
###Markdown
Note that this is a prefetched dataset, where each element is an `OrderedDict` whose keys are the feature names and whose values are tensors of shape `(1,)` (i.e. vectors). Let's iterate over the first two elements of this dataset using `dataset.take(2)` and convert them to ordinary Python dictionaries with numpy arrays as values for more readability:
###Code
for data in tempds.take(2):
pprint({k: v.numpy() for k, v in data.items()})
print("\n")
###Output
{'dropoff_latitude': array([40.78385], dtype=float32),
'dropoff_longitude': array([-73.95221], dtype=float32),
'fare_amount': array([6.1], dtype=float32),
'key': array([b'6845'], dtype=object),
'passenger_count': array([1.], dtype=float32),
'pickup_datetime': array([b'2009-02-09 11:38:41 UTC'], dtype=object),
'pickup_latitude': array([40.765858], dtype=float32),
'pickup_longitude': array([-73.9678], dtype=float32)}
{'dropoff_latitude': array([40.70148], dtype=float32),
'dropoff_longitude': array([-74.01182], dtype=float32),
'fare_amount': array([19.5], dtype=float32),
'key': array([b'3904'], dtype=object),
'passenger_count': array([1.], dtype=float32),
'pickup_datetime': array([b'2013-04-18 08:48:00 UTC'], dtype=object),
'pickup_latitude': array([40.752457], dtype=float32),
'pickup_longitude': array([-73.97056], dtype=float32)}
###Markdown
Transforming the features What we really need is a dictionary of features plus a label. So, we have to do two things to the above dictionary: 1. Remove the unwanted column "key" 2. Keep the label separate from the features. Let's first implement a function that takes as input a row (represented as an `OrderedDict` in our `tf.data.Dataset` as above) and then returns a tuple with two elements: * The first element being the same `OrderedDict` with the label dropped * The second element being the label itself (`fare_amount`) Note that we will also need to remove the `key` and `pickup_datetime` columns, which we won't use. **Lab Task 4a:** Complete the code in the `features_and_labels(...)` function below. Your function should return a dictionary of features and a label. Keep in mind `row_data` is already a dictionary and you will need to remove the `pickup_datetime` and `key` from `row_data` as well.
###Code
UNWANTED_COLS = ['pickup_datetime', 'key']
# TODO 4a
def features_and_labels(row_data):
label = row_data.pop(LABEL_COLUMN)
for c in UNWANTED_COLS:
del row_data[c]
features = row_data
return features, label
###Output
_____no_output_____
###Markdown
Let's iterate over 2 examples from our `tempds` dataset and apply our `features_and_labels` function to each of the examples to make sure it's working:
###Code
for row_data in tempds.take(2):
features, label = features_and_labels(row_data)
pprint(features)
print(label, "\n")
assert UNWANTED_COLS[0] not in features.keys()
assert UNWANTED_COLS[1] not in features.keys()
assert label.shape == [1]
###Output
OrderedDict([('pickup_longitude',
<tf.Tensor: shape=(1,), dtype=float32, numpy=array([-73.975296], dtype=float32)>),
('pickup_latitude',
<tf.Tensor: shape=(1,), dtype=float32, numpy=array([40.761337], dtype=float32)>),
('dropoff_longitude',
<tf.Tensor: shape=(1,), dtype=float32, numpy=array([-73.96335], dtype=float32)>),
('dropoff_latitude',
<tf.Tensor: shape=(1,), dtype=float32, numpy=array([40.756054], dtype=float32)>),
('passenger_count',
<tf.Tensor: shape=(1,), dtype=float32, numpy=array([1.], dtype=float32)>)])
tf.Tensor([4.5], shape=(1,), dtype=float32)
OrderedDict([('pickup_longitude',
<tf.Tensor: shape=(1,), dtype=float32, numpy=array([-74.00528], dtype=float32)>),
('pickup_latitude',
<tf.Tensor: shape=(1,), dtype=float32, numpy=array([40.72889], dtype=float32)>),
('dropoff_longitude',
<tf.Tensor: shape=(1,), dtype=float32, numpy=array([-73.99198], dtype=float32)>),
('dropoff_latitude',
<tf.Tensor: shape=(1,), dtype=float32, numpy=array([40.666794], dtype=float32)>),
('passenger_count',
<tf.Tensor: shape=(1,), dtype=float32, numpy=array([1.], dtype=float32)>)])
tf.Tensor([19.], shape=(1,), dtype=float32)
###Markdown
Batching Let's now refactor our `create_dataset` function so that it takes an additional argument `batch_size` and batches the data accordingly. We will also use the `features_and_labels` function we implemented for our dataset to produce tuples of features and labels. **Lab Task 4b:** Complete the code in the `create_dataset(...)` function below to return a `tf.data` dataset made from the `make_csv_dataset`. Now, the `pattern` and `batch_size` will be given as arguments of the function, but you should set the `column_names` and `column_defaults` as before. You will also apply a `.map(...)` method to create features and labels from each example.
###Code
# TODO 4b
def create_dataset(pattern, batch_size):
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS)
dataset = dataset.map(features_and_labels)
return dataset
###Output
_____no_output_____
###Markdown
Let's test that our batches are of the right size:
###Code
BATCH_SIZE = 2
tempds = create_dataset('../data/taxi-train*', batch_size=2)
for X_batch, Y_batch in tempds.take(2):
pprint({k: v.numpy() for k, v in X_batch.items()})
print(Y_batch.numpy(), "\n")
assert len(Y_batch) == BATCH_SIZE
###Output
{'dropoff_latitude': array([40.75515 , 40.737858], dtype=float32),
'dropoff_longitude': array([-73.96512 , -74.000275], dtype=float32),
'passenger_count': array([1., 1.], dtype=float32),
'pickup_latitude': array([40.7464 , 40.727943], dtype=float32),
'pickup_longitude': array([-73.9719 , -73.994995], dtype=float32)}
[4.1 6. ]
{'dropoff_latitude': array([40.817875, 40.71632 ], dtype=float32),
'dropoff_longitude': array([-73.943474, -74.00955 ], dtype=float32),
'passenger_count': array([5., 2.], dtype=float32),
'pickup_latitude': array([40.76923 , 40.728394], dtype=float32),
'pickup_longitude': array([-73.98898 , -74.002594], dtype=float32)}
[14.9 5.3]
###Markdown
Shuffling When training a deep learning model in batches over multiple workers, it is helpful if we shuffle the data. That way, different workers will be working on different parts of the input file at the same time, and so averaging gradients across workers will help. Also, during training, we will need to read the data indefinitely. Let's refactor our `create_dataset` function so that it shuffles the data when the dataset is used for training. We will introduce an additional argument `mode` to our function to allow the function body to distinguish the case when it needs to shuffle the data (`mode == 'train'`) from when it shouldn't (`mode == 'eval'`). Also, before returning, we will want to prefetch 1 data point ahead of time (`dataset.prefetch(1)`) to speed up training: **Lab Task 4c:** The last step of our `tf.data` pipeline will specify shuffling and repeating of the dataset. Complete the code below to add these three steps to the Dataset pipeline: 1. follow the `.map(...)` operation, which extracts features and labels, with a call to `.cache()` the result. 2. during training, use `.shuffle(...)` and `.repeat()` to shuffle batches and repeat the dataset. 3. use `.prefetch(...)` to take advantage of multi-threading and speed up training.
###Code
# TODO 4c
def create_dataset(pattern, batch_size=1, mode='eval'):
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS)
dataset = dataset.map(features_and_labels).cache()
if mode == 'train':
dataset = dataset.shuffle(1000).repeat(3)
# prefetch 1 batch ahead so input preparation overlaps training
# (tf.data.experimental.AUTOTUNE can choose this buffer size automatically)
dataset = dataset.prefetch(buffer_size=1)
return dataset
###Output
_____no_output_____
###Markdown
Let's check that our function works well in both modes:
###Code
tempds = create_dataset('../data/taxi-train*', 2, 'train')
print(list(tempds.take(1)))
tempds = create_dataset('../data/taxi-valid*', 2, 'eval')
print(list(tempds.take(1)))
###Output
[(OrderedDict([('pickup_longitude', <tf.Tensor: shape=(2,), dtype=float32, numpy=array([-73.87321, -73.99341], dtype=float32)>), ('pickup_latitude', <tf.Tensor: shape=(2,), dtype=float32, numpy=array([40.77404 , 40.762447], dtype=float32)>), ('dropoff_longitude', <tf.Tensor: shape=(2,), dtype=float32, numpy=array([-73.97801, -73.98294], dtype=float32)>), ('dropoff_latitude', <tf.Tensor: shape=(2,), dtype=float32, numpy=array([40.7558 , 40.775913], dtype=float32)>), ('passenger_count', <tf.Tensor: shape=(2,), dtype=float32, numpy=array([5., 2.], dtype=float32)>)]), <tf.Tensor: shape=(2,), dtype=float32, numpy=array([35.83, 4.5 ], dtype=float32)>)]
|
Machine_Learning_Foundations_ A_Case_Study_Approach/Deep Features for Image Classification.ipynb | ###Markdown
Using deep features to build an image classifierFire up GraphLab Create
###Code
import graphlab
###Output
A newer version of GraphLab Create (v1.7.1) is available! Your current version is v1.6.1.
You can use pip to upgrade the graphlab-create package. For more information see https://dato.com/products/create/upgrade.
###Markdown
Load a common image analysis dataset We will use a popular benchmark dataset in computer vision called CIFAR-10. (We've reduced the data to just 4 categories = {'cat','bird','automobile','dog'}.) This dataset is already split into a training set and test set.
###Code
image_train = graphlab.SFrame('image_train_data/')
image_test = graphlab.SFrame('image_test_data/')
###Output
[INFO] This non-commercial license of GraphLab Create is assigned to [email protected] and will expire on October 14, 2016. For commercial licensing options, visit https://dato.com/buy/.
[INFO] Start server at: ipc:///tmp/graphlab_server-9321 - Server binary: /home/nitin/anaconda/lib/python2.7/site-packages/graphlab/unity_server - Server log: /tmp/graphlab_server_1448359078.log
[INFO] GraphLab Server Version: 1.6.1
###Markdown
Exploring the image data
###Code
graphlab.canvas.set_target('ipynb')
image_train['image'].show()
###Output
_____no_output_____
###Markdown
Train a classifier on the raw image pixels We start by training a classifier on just the raw pixels of the image.
###Code
raw_pixel_model = graphlab.logistic_classifier.create(image_train,target='label',
features=['image_array'])
###Output
PROGRESS: Creating a validation set from 5 percent of training data. This may take a while.
You can set ``validation_set=None`` to disable validation tracking.
PROGRESS: Logistic regression:
PROGRESS: --------------------------------------------------------
PROGRESS: Number of examples : 1904
PROGRESS: Number of classes : 4
PROGRESS: Number of feature columns : 1
PROGRESS: Number of unpacked features : 3072
PROGRESS: Number of coefficients : 9219
PROGRESS: Starting L-BFGS
PROGRESS: --------------------------------------------------------
PROGRESS: +-----------+----------+-----------+--------------+-------------------+---------------------+
PROGRESS: | Iteration | Passes | Step size | Elapsed Time | Training-accuracy | Validation-accuracy |
PROGRESS: +-----------+----------+-----------+--------------+-------------------+---------------------+
PROGRESS: | 1 | 6 | 0.000012 | 1.962574 | 0.298845 | 0.257426 |
PROGRESS: | 2 | 8 | 1.000000 | 2.399317 | 0.376050 | 0.386139 |
PROGRESS: | 3 | 9 | 1.000000 | 2.651643 | 0.388655 | 0.287129 |
PROGRESS: | 4 | 10 | 1.000000 | 2.910653 | 0.429097 | 0.415842 |
PROGRESS: | 5 | 11 | 1.000000 | 3.156336 | 0.449055 | 0.396040 |
PROGRESS: | 6 | 12 | 1.000000 | 3.413203 | 0.464286 | 0.415842 |
PROGRESS: | 10 | 16 | 1.000000 | 4.413906 | 0.512080 | 0.534653 |
PROGRESS: +-----------+----------+-----------+--------------+-------------------+---------------------+
###Markdown
Make a prediction with the simple model based on raw pixels
###Code
image_test[0:3]['image'].show()
image_test[0:3]['label']
raw_pixel_model.predict(image_test[0:3])
###Output
_____no_output_____
###Markdown
The model makes wrong predictions for all three images. Evaluating raw pixel model on test data
###Code
raw_pixel_model.evaluate(image_test)
###Output
_____no_output_____
###Markdown
The accuracy of this model is poor, only about 46%. Can we improve the model using deep features We only have 2005 data points, so it is not possible to train a deep neural network effectively with so little data. Instead, we will use transfer learning: using deep features trained on the full ImageNet dataset, we will train a simple model on this small dataset.
###Code
len(image_train)
###Output
_____no_output_____
###Markdown
Computing deep features for our imagesThe two lines below allow us to compute deep features. This computation takes a little while, so we have already computed them and saved the results as a column in the data you loaded. (Note that if you would like to compute such deep features and have a GPU on your machine, you should use the GPU enabled GraphLab Create, which will be significantly faster for this task.)
###Code
#deep_learning_model = graphlab.load_model('http://s3.amazonaws.com/GraphLab-Datasets/deeplearning/imagenet_model_iter45')
#image_train['deep_features'] = deep_learning_model.extract_features(image_train)
###Output
_____no_output_____
###Markdown
As we can see, the column deep_features already contains the pre-computed deep features for this data.
###Code
image_train.head()
###Output
_____no_output_____
###Markdown
Given the deep features, let's train a classifier
###Code
deep_features_model = graphlab.logistic_classifier.create(image_train,
features=['deep_features'],
target='label')
###Output
PROGRESS: Creating a validation set from 5 percent of training data. This may take a while.
You can set ``validation_set=None`` to disable validation tracking.
PROGRESS: WARNING: Detected extremely low variance for feature(s) 'deep_features' because all entries are nearly the same.
Proceeding with model training using all features. If the model does not provide results of adequate quality, exclude the above mentioned feature(s) from the input dataset.
PROGRESS: Logistic regression:
PROGRESS: --------------------------------------------------------
PROGRESS: Number of examples : 1899
PROGRESS: Number of classes : 4
PROGRESS: Number of feature columns : 1
PROGRESS: Number of unpacked features : 4096
PROGRESS: Number of coefficients : 12291
PROGRESS: Starting L-BFGS
PROGRESS: --------------------------------------------------------
PROGRESS: +-----------+----------+-----------+--------------+-------------------+---------------------+
PROGRESS: | Iteration | Passes | Step size | Elapsed Time | Training-accuracy | Validation-accuracy |
PROGRESS: +-----------+----------+-----------+--------------+-------------------+---------------------+
PROGRESS: | 1 | 5 | 0.000132 | 1.226900 | 0.730911 | 0.716981 |
PROGRESS: | 2 | 9 | 0.250000 | 2.381685 | 0.759874 | 0.716981 |
PROGRESS: | 3 | 10 | 0.250000 | 2.790865 | 0.767246 | 0.726415 |
PROGRESS: | 4 | 11 | 0.250000 | 3.202023 | 0.773565 | 0.735849 |
PROGRESS: | 5 | 12 | 0.250000 | 3.592414 | 0.783044 | 0.726415 |
PROGRESS: | 6 | 13 | 0.250000 | 3.990986 | 0.791469 | 0.754717 |
PROGRESS: | 10 | 17 | 0.250000 | 5.590519 | 0.862033 | 0.773585 |
PROGRESS: +-----------+----------+-----------+--------------+-------------------+---------------------+
###Markdown
Apply the deep features model to the first few images of the test set
###Code
image_test[0:3]['image'].show()
deep_features_model.predict(image_test[0:3])
###Output
_____no_output_____
###Markdown
The classifier with deep features gets all of these images right! Compute test_data accuracy of deep_features_model As we can see, deep features provide us with significantly better accuracy (about 78%).
###Code
deep_features_model.evaluate(image_test)
###Output
_____no_output_____ |
Reg-NoneLinearRegression-py-v1.ipynb | ###Markdown
Non Linear Regression Analysis If the data shows a curvy trend, then linear regression will not produce very accurate results compared to a non-linear regression because, as the name implies, linear regression presumes that the data is linear. Let's learn about non-linear regression and apply it to an example in Python. In this notebook, we fit a non-linear model to the datapoints corresponding to China's GDP from 1960 to 2014. Importing required libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Though linear regression is very good for solving many problems, it cannot be used for all datasets. First, recall how linear regression models a dataset: it models a linear relation between a dependent variable y and an independent variable x, using a simple equation of degree 1, for example y = $2x$ + 3.
###Code
x = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = 2*(x) + 3
y_noise = 2 * np.random.normal(size=x.size)
ydata = y + y_noise
#plt.figure(figsize=(8,6))
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Indepdendent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
Non-linear regression models a relationship between independent variables $x$ and a dependent variable $y$ using a non-linear function. Essentially, any relationship that is not linear can be termed non-linear, and it is often represented by a polynomial of degree $k$ (the maximum power of $x$): $$ y = a x^3 + b x^2 + c x + d $$ Non-linear functions can have elements like exponentials, logarithms, fractions, and others. For example: $$ y = \log(x)$$ Or even more complicated, such as: $$ y = \log(a x^3 + b x^2 + c x + d)$$ Let's take a look at a cubic function's graph.
###Code
x = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = 1*(x**3) + 1*(x**2) + 1*x + 3
y_noise = 20 * np.random.normal(size=x.size)
ydata = y + y_noise
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Indepdendent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
As you can see, this function includes $x^3$ and $x^2$ terms. Also, the graph of this function is not a straight line over the 2D plane, so this is a non-linear function. Some other types of non-linear functions are: Quadratic $$ Y = X^2 $$
###Code
x = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = np.power(x,2)
y_noise = 2 * np.random.normal(size=x.size)
ydata = y + y_noise
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Indepdendent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
Exponential An exponential function with base c is defined by $$ Y = a + b c^X$$ where b ≠0, c > 0 , c ≠1, and x is any real number. The base, c, is constant and the exponent, x, is a variable.
###Code
X = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
Y= np.exp(X)
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Indepdendent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
Logarithmic The response $y$ is the result of applying a logarithmic map from the input $x$'s to the output variable $y$. It is one of the simplest forms of __log()__: i.e. $$ y = \log(x)$$ Please consider that instead of $x$, we can use $X$, which can be a polynomial representation of the $x$'s. In general form it would be written as \begin{equation}y = \log(X)\end{equation}
###Code
X = np.arange(-5.0, 5.0, 0.1)
Y = np.log(X)
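# Note (added comment): np.log is undefined for X <= 0, so those entries become
# NaN and trigger the RuntimeWarning shown below; only the X > 0 part is drawn.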
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Indepdendent Variable')
plt.show()
###Output
/home/jupyterlab/conda/lib/python3.6/site-packages/ipykernel_launcher.py:3: RuntimeWarning: invalid value encountered in log
This is separate from the ipykernel package so we can avoid doing imports until
###Markdown
Sigmoidal/Logistic $$ Y = a + \frac{b}{1+ c^{(X-d)}}$$
###Code
X = np.arange(-5.0, 5.0, 0.1)
Y = 1-4/(1+np.power(3, X-2))
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Indepdendent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
Non-Linear Regression example For an example, we are going to try to fit a non-linear model to the datapoints corresponding to China's GDP from 1960 to 2014. We download a dataset with two columns: the first, a year between 1960 and 2014; the second, China's corresponding annual gross domestic product in US dollars for that year.
###Code
import numpy as np
import pandas as pd
#downloading dataset
!wget -nv -O china_gdp.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/china_gdp.csv
df = pd.read_csv("china_gdp.csv")
df.head(10)
###Output
2019-05-17 20:32:25 URL:https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/china_gdp.csv [1218/1218] -> "china_gdp.csv" [1]
###Markdown
__Did you know?__ When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 TB of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC) Plotting the Dataset This is what the datapoints look like. It looks like either a logistic or an exponential function. The growth starts off slowly, then from 2005 onward the growth is very significant, and finally it decelerates slightly in the 2010s.
###Code
plt.figure(figsize=(8,5))
x_data, y_data = (df["Year"].values, df["Value"].values)
plt.plot(x_data, y_data, 'ro')
plt.ylabel('GDP')
plt.xlabel('Year')
plt.show()
###Output
_____no_output_____
###Markdown
Choosing a model From an initial look at the plot, we determine that the logistic function could be a good approximation, since it has the property of starting with slow growth, increasing growth in the middle, and then decreasing again at the end, as illustrated below:
###Code
X = np.arange(-5.0, 5.0, 0.1)
Y = 1.0 / (1.0 + np.exp(-X))
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Indepdendent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
The formula for the logistic function (written to match the code below) is the following: $$ \hat{Y} = \frac{1}{1+e^{-\beta_1(X-\beta_2)}}$$ $\beta_1$: Controls the curve's steepness, $\beta_2$: Slides the curve on the x-axis. Building The Model Now, let's build our regression model and initialize its parameters.
###Code
def sigmoid(x, Beta_1, Beta_2):
y = 1 / (1 + np.exp(-Beta_1*(x-Beta_2)))
return y
###Output
_____no_output_____
###Markdown
Let's look at a sample sigmoid curve that might fit the data:
###Code
beta_1 = 0.10
beta_2 = 1990.0
#logistic function
Y_pred = sigmoid(x_data, beta_1 , beta_2)
#plot initial prediction against datapoints
plt.plot(x_data, Y_pred*15000000000000.)
plt.plot(x_data, y_data, 'ro')
###Output
_____no_output_____
###Markdown
Our task here is to find the best parameters for our model. Let's first normalize our x and y:
###Code
# Let's normalize our data
xdata =x_data/max(x_data)
ydata =y_data/max(y_data)
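# Note (added comment): scaling the years and GDP values to roughly [0, 1] keeps
# the sigmoid's exponential in a sensible numerical range; with raw years the
# sigmoid saturates and curve_fit struggles to converge.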
###Output
_____no_output_____
###Markdown
How do we find the best parameters for our fit line? We can use __curve_fit__, which uses non-linear least squares to fit our sigmoid function to the data: it finds optimal values for the parameters so that the sum of the squared residuals of sigmoid(xdata, *popt) - ydata is minimized. popt holds our optimized parameters.
###Code
from scipy.optimize import curve_fit
popt, pcov = curve_fit(sigmoid, xdata, ydata)
#print the final parameters
print(" beta_1 = %f, beta_2 = %f" % (popt[0], popt[1]))
###Output
beta_1 = 690.453017, beta_2 = 0.997207
###Markdown
Now we plot our resulting regression model.
###Code
x = np.linspace(1960, 2015, 55)
x = x/max(x)
plt.figure(figsize=(8,5))
y = sigmoid(x, *popt)
plt.plot(xdata, ydata, 'ro', label='data')
plt.plot(x,y, linewidth=3.0, label='fit')
plt.legend(loc='best')
plt.ylabel('GDP')
plt.xlabel('Year')
plt.show()
###Output
_____no_output_____
###Markdown
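As a quick usage sketch (an addition to this notebook; the year below is an arbitrary example), the fitted parameters can produce a prediction in the original units by undoing the normalization:
```python
# Predict normalized GDP for a chosen year, then rescale back to US dollars.
year = 2015  # arbitrary example year
x_norm = year / max(x_data)
gdp_pred = sigmoid(x_norm, *popt) * max(y_data)
print("Predicted GDP for %d: %.3e USD" % (year, gdp_pred))
```
Here popt comes from the curve_fit call on the full normalized dataset above.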
Practice Can you calculate the accuracy of our model?
###Code
# split data into train/test
msk = np.random.rand(len(df)) < 0.8
train_x = xdata[msk]
test_x = xdata[~msk]
train_y = ydata[msk]
test_y = ydata[~msk]
# build the model using train set
popt, pcov = curve_fit(sigmoid, train_x, train_y)
# predict using test set
y_hat = sigmoid(test_x, *popt)
# evaluation
print("Mean absolute error: %.2f" % np.mean(np.absolute(y_hat - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((y_hat - test_y) ** 2))
from sklearn.metrics import r2_score
print("R2-score: %.2f" % r2_score(y_hat , test_y) )
###Output
Mean absolute error: 0.04
Residual sum of squares (MSE): 0.00
R2-score: 0.95
|
VGG16_final.ipynb | ###Markdown
Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow. Running the following code will map your GDrive to ```/content/drive```.
###Code
try:
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
COLAB = True
print("Note: using Google CoLab")
%tensorflow_version 2.x
except:
print("Note: not using Google CoLab")
COLAB = False
import os
import string
import glob
from tensorflow.keras.applications import MobileNet
import tensorflow.keras.applications.mobilenet
from tensorflow.keras.applications.vgg16 import VGG16
from keras.applications.vgg16 import preprocess_input
from tqdm import tqdm
import tensorflow.keras.preprocessing.image
import pickle
from time import time
import numpy as np
from PIL import Image
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (LSTM, Embedding,
TimeDistributed, Dense, RepeatVector,
Activation, Flatten, Reshape, concatenate,
Dropout, BatchNormalization)
from tensorflow.keras.optimizers import Adam, RMSprop
from tensorflow.keras import Input, layers
from tensorflow.keras import optimizers
from tensorflow.keras.models import Model
from tensorflow.keras.layers import add
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.utils import to_categorical
import matplotlib.pyplot as plt
START = "startseq"
STOP = "endseq"
EPOCHS = 10
USE_VGG16 = True
###Output
_____no_output_____
###Markdown
We use the following function to nicely format elapsed times.
###Code
# Nicely formatted time string
def hms_string(sec_elapsed):
h = int(sec_elapsed / (60 * 60))
m = int((sec_elapsed % (60 * 60)) / 60)
s = sec_elapsed % 60
return f"{h}:{m:>02}:{s:>05.2f}"
if COLAB:
root_captioning = "/content/drive/My Drive/projects/captions"
else:
root_captioning = "./data/captions"
###Output
_____no_output_____
###Markdown
Clean/Build Dataset From Flickr8kWe must pull in the Flickr dataset captions and clean them of extra whitespace, punctuation, and other distractions.
###Code
null_punct = str.maketrans('', '', string.punctuation)
lookup = dict()
with open( os.path.join(root_captioning,'Flickr8k_text',\
'Flickr8k.token.txt'), 'r') as fp:
max_length = 0
for line in fp.read().split('\n'):
tok = line.split()
if len(line) >= 2:
id = tok[0].split('.')[0]
desc = tok[1:]
# Cleanup description
desc = [word.lower() for word in desc]
desc = [w.translate(null_punct) for w in desc]
desc = [word for word in desc if len(word)>1]
desc = [word for word in desc if word.isalpha()]
max_length = max(max_length,len(desc))
if id not in lookup:
lookup[id] = list()
lookup[id].append(' '.join(desc))
lex = set()
for key in lookup:
[lex.update(d.split()) for d in lookup[key]]
###Output
_____no_output_____
###Markdown
The following code displays stats on the data downloaded and processed.
###Code
print(len(lookup)) # Number of images that have captions
print(len(lex)) # Size of the vocabulary (unique words)
print(max_length) # Maximum length of a caption (in words)
###Output
8092
8763
32
###Markdown
Next, we collect the paths of all images in the Flicker8k_Dataset folder.
###Code
# Warning, running this too soon on GDrive can sometimes not work.
# Just rerun if len(img) = 0
img = glob.glob(os.path.join(root_captioning,'Flicker8k_Dataset/', '*.jpg'))
###Output
_____no_output_____
###Markdown
Display how many image files were found.
###Code
len(img)
###Output
_____no_output_____
###Markdown
Read all image names and use the predefined train/test sets.
###Code
train_images_path = os.path.join(root_captioning,\
'Flickr8k_text','Flickr_8k.trainImages.txt')
train_images = set(open(train_images_path, 'r').read().strip().split('\n'))
test_images_path = os.path.join(root_captioning,
'Flickr8k_text','Flickr_8k.testImages.txt')
test_images = set(open(test_images_path, 'r').read().strip().split('\n'))
train_img = []
test_img = []
for i in img:
f = os.path.split(i)[-1]
if f in train_images:
train_img.append(f)
elif f in test_images:
test_img.append(f)
###Output
_____no_output_____
###Markdown
Display the size of the train and test sets.
###Code
print(len(train_images))
print(len(test_images))
###Output
6000
1000
###Markdown
Build the sequences. We include a **start** and **stop** token at the beginning/end. We will later use the **start** token to begin the process of generating a caption. Encountering the **stop** token in the generated text will let us know the process is complete.
###Code
train_descriptions = {k:v for k,v in lookup.items() if f'{k}.jpg' \
in train_images}
for n,v in train_descriptions.items():
for d in range(len(v)):
v[d] = f'{START} {v[d]} {STOP}'
###Output
_____no_output_____
###Markdown
Check that a sample image id is present in the extracted training descriptions.
###Code
'260850192_fd03ea26f1' in train_descriptions
###Output
_____no_output_____
###Markdown
Choosing a Computer Vision Neural Network to Transfer This example provides two neural networks that we can use via transfer learning. In this notebook, we use Glove for the text embedding and VGG16 (selected via USE_VGG16) to extract features from the images. Both of these transfers serve to extract features from the raw text and the images. Without this prior knowledge transferred in, this example would take considerably more training. The code is written so that you can interchange the neural network used for the images: by setting the values **WIDTH**, **HEIGHT**, and **OUTPUT_DIM**, you can swap in a different image network. One characteristic that you are seeking in the image neural network is that it does not have too many outputs (once you strip the 1000-class ImageNet classifier, as is common in transfer learning). VGG16 has 4,096 features below the classifier, and MobileNet has over 50K. If the additional dimensions truly capture aspects of the images, then they are worthwhile. However, having 50K features increases the processing needed and the complexity of the neural network we are constructing.
###Code
if USE_VGG16:
encode_model = VGG16()
encode_model = Model(encode_model.input, encode_model.layers[-2].output)
WIDTH = 224
HEIGHT = 224
OUTPUT_DIM = 4096
preprocess_input = \
tensorflow.keras.applications.vgg16.preprocess_input
else:
encode_model = MobileNet(weights='imagenet',include_top=False)
WIDTH = 224
HEIGHT = 224
OUTPUT_DIM = 50176
preprocess_input = tensorflow.keras.applications.mobilenet.preprocess_input
###Output
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/vgg16/vgg16_weights_tf_dim_ordering_tf_kernels.h5
553467904/553467096 [==============================] - 24s 0us/step
###Markdown
The summary of the chosen image neural network to be transferred is displayed.
###Code
encode_model.summary()
###Output
Model: "functional_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 224, 224, 3)] 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 224, 224, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 224, 224, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 112, 112, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 112, 112, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 112, 112, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 56, 56, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 56, 56, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 28, 28, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 28, 28, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 14, 14, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 7, 7, 512) 0
_________________________________________________________________
flatten (Flatten) (None, 25088) 0
_________________________________________________________________
fc1 (Dense) (None, 4096) 102764544
_________________________________________________________________
fc2 (Dense) (None, 4096) 16781312
=================================================================
Total params: 134,260,544
Trainable params: 134,260,544
Non-trainable params: 0
_________________________________________________________________
###Markdown
Creating the Training SetWe need to encode the images to create the training set. Later we will encode new images to present them for captioning.
###Code
def encodeImage(img):
# Resize all images to a standard size (specified bythe image
# encoding network)
img = img.resize((WIDTH, HEIGHT), Image.ANTIALIAS)
# Convert a PIL image to a numpy array
x = tensorflow.keras.preprocessing.image.img_to_array(img)
# Expand to 2D array
x = x.reshape((1, x.shape[0], x.shape[1], x.shape[2]))
# Perform any preprocessing needed by VGG16 or others
x = preprocess_input(x)
# Call VGG16 (or other) to extract the smaller feature set for
# the image.
x = encode_model.predict(x, verbose = 0) # Get the encoding vector for the image
# Shape to correct form to be accepted by LSTM captioning network.
x = np.reshape(x, OUTPUT_DIM )
return x
###Output
_____no_output_____
###Markdown
We can now generate the training set, which involves looping over every JPG that we provided. Because this can take a while to perform, we save the result to a pickle file. This saved file avoids the considerable time needed to reprocess all of the images again. Because the images are processed differently by different transferred neural networks, the filename contains the output dimensions. We follow this naming convention because if you changed from VGG16 to MobileNet, the number of output dimensions would change, and you would have to reprocess the images.
###Code
train_path = os.path.join(root_captioning,"data",f'trainVGG16{OUTPUT_DIM}.pkl')
if not os.path.exists(train_path):
start = time()
encoding_train = {}
for id in tqdm(train_images):
image_path = os.path.join(root_captioning,'Flicker8k_Dataset', id)
img = tensorflow.keras.preprocessing.image.load_img(image_path, target_size=(224, 224))
encoding_train[id] = encodeImage(img)
with open(train_path, "wb") as fp:
pickle.dump(encoding_train, fp)
print(f"\nGenerating training set took: {hms_string(time()-start)}")
else:
with open(train_path, "rb") as fp:
encoding_train = pickle.load(fp)
print('Loaded')
###Output
Loaded
###Markdown
We must also perform a similar process for the test images.
###Code
test_path = os.path.join(root_captioning,"data",f'testVGG16{OUTPUT_DIM}.pkl')
if not os.path.exists(test_path):
start = time()
encoding_test = {}
for id in tqdm(test_img):
image_path = os.path.join(root_captioning,'Flicker8k_Dataset', id)
img = tensorflow.keras.preprocessing.image.load_img(image_path, \
target_size=(HEIGHT, WIDTH))
encoding_test[id] = encodeImage(img)
with open(test_path, "wb") as fp:
pickle.dump(encoding_test, fp)
print(f"\nGenerating testing set took: {hms_string(time()-start)}")
else:
with open(test_path, "rb") as fp:
encoding_test = pickle.load(fp)
print('Loaded')
###Output
Loaded
###Markdown
Next, we separate the captions that we will use for training. There are two sides to this training: the images and the captions.
###Code
all_train_captions = []
for key, val in train_descriptions.items():
for cap in val:
all_train_captions.append(cap)
len(all_train_captions)
###Output
_____no_output_____
###Markdown
Words that do not occur very often can be misleading to neural network training, so it is better to remove them. Here we remove any words that occur fewer than ten times. We then display the reduced size of the vocabulary.
###Code
word_count_threshold = 10
word_counts = {}
nsents = 0
for sent in all_train_captions:
nsents += 1
for w in sent.split(' '):
word_counts[w] = word_counts.get(w, 0) + 1
vocab = [w for w in word_counts if word_counts[w] >= word_count_threshold]
print('preprocessed words %d ==> %d' % (len(word_counts), len(vocab)))
###Output
preprocessed words 7578 ==> 1651
###Markdown
Next, we build two lookup tables for this vocabulary. The table **idxtoword** converts index numbers to actual words; the **wordtoidx** table performs the opposite mapping, from words to index numbers.
###Code
idxtoword = {}
wordtoidx = {}
ix = 1
for w in vocab:
wordtoidx[w] = ix
idxtoword[ix] = w
ix += 1
vocab_size = len(idxtoword) + 1
vocab_size
###Output
_____no_output_____
###Markdown
Previously we added a start and stop token to all sentences. We must account for this in the maximum length of captions.
###Code
max_length +=2
print(max_length)
###Output
34
###Markdown
Using a Data Generator Up to this point, we have always generated training data ahead of time and fit the neural network to it. It is not always practical to create all of the training data ahead of time; the memory demands can be considerable. If we generate the training data as the neural network needs it, we can use a Keras generator, which creates new data on demand. The generator provided here creates the training data for the caption neural network as it is needed. If we were to build all needed training data ahead of time, it would look like Figure 10.CAP-WORK. **Figure 10.CAP-WORK: Captioning Training Data** Here we are just training on two captions. However, we would have to duplicate the image for each of these partial captions. Additionally, the Flickr8k data set has five captions for each picture, which would all require duplication of data as well. It is much more efficient to generate the data as needed.
###Code
def data_generator(descriptions, photos, wordtoidx, \
max_length, num_photos_per_batch):
# x1 - Training data for photos
# x2 - The caption that goes with each photo
# y - The predicted rest of the caption
x1, x2, y = [], [], []
n=0
while True:
for key, desc_list in descriptions.items():
n+=1
photo = photos[key+'.jpg']
# Each photo has 5 descriptions
for desc in desc_list:
# Convert each word into a list of sequences.
seq = [wordtoidx[word] for word in desc.split(' ') \
if word in wordtoidx]
# Generate a training case for every possible sequence and outcome
for i in range(1, len(seq)):
in_seq, out_seq = seq[:i], seq[i]
in_seq = pad_sequences([in_seq], maxlen=max_length)[0]
out_seq = to_categorical([out_seq], num_classes=vocab_size)[0]
x1.append(photo)
x2.append(in_seq)
y.append(out_seq)
if n==num_photos_per_batch:
yield ([np.array(x1), np.array(x2)], np.array(y))
x1, x2, y = [], [], []
n=0
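# Added sanity check (assumption: a fresh generator with 2 photos per batch);
# the asserts raise if the generated shapes do not line up and are silent otherwise.
_check_gen = data_generator(train_descriptions, encoding_train, wordtoidx, max_length, 2)
_inputs, _targets = next(_check_gen)
assert _inputs[0].shape[1] == OUTPUT_DIM   # encoded photo features
assert _inputs[1].shape[1] == max_length   # padded partial captions
assert _targets.shape[1] == vocab_size     # one-hot next-word targets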
###Output
_____no_output_____
###Markdown
Loading Glove Embeddings
###Code
glove_dir = os.path.join(root_captioning,'glove.6B')
embeddings_index = {}
f = open(os.path.join(glove_dir, 'glove.6B.200d.txt'), encoding="utf-8")
for line in tqdm(f):
values = line.split()
word = values[0]
coefs = np.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
f.close()
print(f'Found {len(embeddings_index)} word vectors.')
###Output
400000it [00:25, 15812.56it/s]
###Markdown
Building the Neural NetworkWe build an embedding matrix from Glove. We will directly copy this matrix to the weight matrix of the neural network.
###Code
embedding_dim = 200
# Get 200-dim dense vector for each of the 10000 words in out vocabulary
embedding_matrix = np.zeros((vocab_size, embedding_dim))
for word, i in wordtoidx.items():
#if i < max_words:
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
# Words not found in the embedding index will be all zeros
embedding_matrix[i] = embedding_vector
###Output
_____no_output_____
###Markdown
The matrix dimensions make sense. It is 1652 (the size of the vocabulary) by 200 (the number of features Glove generates for each word).
###Code
embedding_matrix.shape
inputs1 = Input(shape=(OUTPUT_DIM,))
fe1 = Dropout(0.5)(inputs1)
fe2 = Dense(256, activation='relu')(fe1)
inputs2 = Input(shape=(max_length,))
se1 = Embedding(vocab_size, embedding_dim, mask_zero=True)(inputs2)
se2 = Dropout(0.5)(se1)
se3 = LSTM(256)(se2)
decoder1 = add([fe2, se3])
decoder2 = Dense(256, activation='relu')(decoder1)
outputs = Dense(vocab_size, activation='softmax')(decoder2)
caption_model = Model(inputs=[inputs1, inputs2], outputs=outputs)
embedding_dim
caption_model.summary()
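# Copy the pretrained GloVe vectors into the embedding layer (layers[2]) and
# freeze it so the word embeddings are not updated during caption training.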
caption_model.layers[2].set_weights([embedding_matrix])
caption_model.layers[2].trainable = False
caption_model.compile(loss='categorical_crossentropy', optimizer='adam')
###Output
_____no_output_____
###Markdown
Train the Neural Network
###Code
number_pics_per_bath = 3
steps = len(train_descriptions)//number_pics_per_bath
model_path = os.path.join(root_captioning,"data",f'caption-model_VGG16.hdf5')
model_path_full = os.path.join(root_captioning,"data",f'caption-model_VGG16(Full).hdf5')
if not os.path.exists(model_path):
for i in tqdm(range(EPOCHS*2)):
generator = data_generator(train_descriptions, encoding_train, wordtoidx, max_length, number_pics_per_bath)
caption_model.fit_generator(generator, epochs=1,steps_per_epoch=steps, verbose=1)
caption_model.optimizer.lr = 1e-4
number_pics_per_bath = 6
steps = len(train_descriptions)//number_pics_per_bath
for i in range(EPOCHS):
generator = data_generator(train_descriptions, encoding_train,wordtoidx, max_length, number_pics_per_bath)
caption_model.fit_generator(generator, epochs=1, steps_per_epoch=steps, verbose=1)
caption_model.save_weights(model_path)
caption_model.save(model_path_full)
else:
caption_model.load_weights(model_path)
caption_model.save(model_path_full)
###Output
_____no_output_____
###Markdown
Generating CaptionsIt is essential to understand that we do not generate a caption with one single call to the neural network's predict function. Neural networks output a fixed-length tensor. To get a variable-length output, such as free-form text, requires multiple calls to the neural network.The neural network accepts two objects (which we map to the input neurons). The first input is the photo, and the second input is an ever-growing caption. The caption begins with just the starting token. The neural network's output is the prediction of the next word in the caption. The caption continues to grow until the neural network predicts an end token, or we reach the maximum length of a caption.
###Code
def generateCaption(photo):
in_text = START
for i in range(max_length):
sequence = [wordtoidx[w] for w in in_text.split() if w in wordtoidx]
sequence = pad_sequences([sequence], maxlen=max_length)
yhat = caption_model.predict([photo,sequence], verbose=0)
yhat = np.argmax(yhat)
word = idxtoword[yhat]
in_text += ' ' + word
if word == STOP:
break
final = in_text.split()
final = final[1:-1]
final = ' '.join(final)
return final
###Output
_____no_output_____
###Markdown
Evaluate Performance on Test Data from Flicker8kThe caption model performs relatively well on images that are similar to the training set.
###Code
for z in range(5): # set higher to see more examples
pic = list(encoding_test.keys())[z]
image = encoding_test[pic].reshape((1,OUTPUT_DIM))
print(os.path.join(root_captioning,'Flicker8k_Dataset', pic))
x=plt.imread(os.path.join(root_captioning,'Flicker8k_Dataset', pic))
plt.imshow(x)
plt.show()
print("Caption:",generateCaption(image))
print("_____________________________________")
from numpy import argmax
from pickle import load
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import load_model
from nltk.translate.bleu_score import SmoothingFunction, corpus_bleu
from nltk.translate import bleu
import numpy as np
from itertools import chain
# load doc into memory
def load_doc(filename):
# open the file as read only
file = open(filename, 'r')
# read all text
text = file.read()
# close the file
file.close()
return text
# load a pre-defined list of photo identifiers
def load_set(filename):
doc = load_doc(filename)
dataset = list()
# process line by line
for line in doc.split('\n'):
# skip empty lines
if len(line) < 1:
continue
# get the image identifier
identifier = line
dataset.append(identifier)
return set(dataset)
# load clean descriptions into memory
def load_clean_descriptions(filename, dataset):
# load document
doc = load_doc(filename)
descriptions = dict()
for line in doc.split('\n'):
# split line by white space
tokens = line.split()
# split id from description
image_id, image_desc = tokens[0], tokens[1:]
# skip images not in the set
if image_id in dataset:
# create list
if image_id not in descriptions:
descriptions[image_id] = list()
# wrap description in tokens
desc = 'startseq ' + ' '.join(image_desc) + ' endseq'
# store
descriptions[image_id].append(desc)
return descriptions
# load photo features
def load_photo_features(filename, dataset):
# load all features
all_features = load(open(filename, 'rb'))
# filter features
features = {k: all_features[k] for k in dataset}
return features
# covert a dictionary of clean descriptions to a list of descriptions
def to_lines(descriptions):
all_desc = list()
for key in descriptions.keys():
[all_desc.append(d) for d in descriptions[key]]
return all_desc
# fit a tokenizer given caption descriptions
def create_tokenizer(descriptions):
lines = to_lines(descriptions)
tokenizer = Tokenizer()
tokenizer.fit_on_texts(lines)
return tokenizer
# map an integer to a word
def word_for_id(integer, tokenizer):
for word, index in tokenizer.word_index.items():
if index == integer:
return word
return None
# generate a description for an image
def generate_desc(model, tokenizer, photo, max_length):
# seed the generation process
in_text = 'startseq'
photo = np.array([photo])
# iterate over the whole length of the sequence
for i in range(max_length):
# integer encode input sequence
sequence = tokenizer.texts_to_sequences([in_text])[0]
# pad input
sequence = pad_sequences([sequence], maxlen=max_length)
# predict next word
yhat = model.predict([photo,sequence], verbose=0)
# convert probability to integer
yhat = argmax(yhat)
# map integer to word
word = word_for_id(yhat, tokenizer)
# stop if we cannot map the word
if word is None:
break
# append as input for generating the next word
in_text += ' ' + word
# stop if we predict the end of the sequence
if word == 'endseq':
break
return in_text
def dim(a):
if not type(a) == list:
return []
return [len(a)] + dim(a[0])
# evaluate the skill of the model
def evaluate_model(model, descriptions, photos, tokenizer, max_length):
actual, predicted = list(), list()
actual_final, predicted_final = list(), list()
# step over the whole set
for key, desc_list in descriptions.items():
# generate description
yhat = generate_desc(model, tokenizer, photos[key], max_length)
# store actual and predicted
references = [d.split() for d in desc_list]
actual.append(references)
predicted.append(yhat.split())
for temp in actual:
for elem in temp:
actual_final.append(elem)
predicted_final = list(chain.from_iterable(predicted))
# calculate BLEU score
act = actual
smooth = SmoothingFunction().method4
print('BLEU with smoothing function: %f'% bleu(actual_final, predicted_final, smoothing_function=smooth))
print('Simple BLEU value: %f'% bleu(actual_final, predicted_final))
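    # corpus_bleu weights choose which n-gram precisions are combined:
    # BLEU-1 uses unigrams only, while BLEU-4 averages 1- to 4-gram precisions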
print('BLEU-1: %f' % corpus_bleu(actual, predicted, weights=(1.0, 0, 0, 0)))
print('BLEU-2: %f' % corpus_bleu(actual, predicted, weights=(0.5, 0.5, 0, 0)))
print('BLEU-3: %f' % corpus_bleu(actual, predicted, weights=(0.3, 0.3, 0.3, 0)))
print('BLEU-4: %f' % corpus_bleu(actual, predicted, weights=(0.25, 0.25, 0.25, 0.25)))
# prepare tokenizer on train set
# load training dataset (6K)
filename = os.path.join(root_captioning, 'Flickr8k_text/Flickr_8k.trainImages.txt')
train = load_set(filename)
print('Dataset: %d' % len(train))
# descriptions
train_descriptions = load_clean_descriptions('/content/drive/My Drive/projects/captions/data/descriptions.txt', train)
print('Descriptions: train=%d' % len(train_descriptions))
# prepare tokenizer
tokenizer = create_tokenizer(train_descriptions)
vocab_size = len(tokenizer.word_index) + 1
print('Vocabulary Size: %d' % vocab_size)
# determine the maximum sequence length
print('Description Length: ', max_length)
# prepare test set
# load test set
filename = os.path.join(root_captioning, 'Flickr8k_text/Flickr_8k.testImages.txt')
test = load_set(filename)
print('Dataset: %d' % len(test))
# descriptions
test_descriptions = load_clean_descriptions('/content/drive/My Drive/projects/captions/data/descriptions.txt', test)
print('Descriptions: test=%d' % len(test_descriptions))
# photo features
test_features = load_photo_features('/content/drive/My Drive/projects/captions/data/testVGG164096.pkl', test)
print('Photos: test=%d' % len(test_features))
# load the model
filename = '/content/drive/My Drive/projects/captions/data/caption-model_VGG16(Full).hdf5'
model = load_model(filename)
# evaluate model
ind = evaluate_model(model, test_descriptions, test_features, tokenizer, max_length)
###Output
Dataset: 6000
Descriptions: train=6000
Vocabulary Size: 7579
Description Length: 34
Dataset: 1000
Descriptions: test=1000
Photos: test=1000
BLEU with smoothing function: 0.009513
|
p2_continuous-control/Continuous_Control-DDPG.ipynb | ###Markdown
Continuous Control---In this notebook, you will learn how to use the Unity ML-Agents environment for the second project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program. Unity EnvironmentWe begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
###Code
from unityagents import UnityEnvironment
import numpy as np
###Output
_____no_output_____
###Markdown
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.- **Mac**: `"path/to/Reacher.app"`- **Windows** (x86): `"path/to/Reacher_Windows_x86/Reacher.exe"`- **Windows** (x86_64): `"path/to/Reacher_Windows_x86_64/Reacher.exe"`- **Linux** (x86): `"path/to/Reacher_Linux/Reacher.x86"`- **Linux** (x86_64): `"path/to/Reacher_Linux/Reacher.x86_64"`- **Linux** (x86, headless): `"path/to/Reacher_Linux_NoVis/Reacher.x86"`- **Linux** (x86_64, headless): `"path/to/Reacher_Linux_NoVis/Reacher.x86_64"`For instance, if you are using a Mac, then you downloaded `Reacher.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:```env = UnityEnvironment(file_name="Reacher.app")```
###Code
unity_env_filename = r"C:\Users\jofan\rl\deep-reinforcement-learning\p2_continuous-control\Reacher_Windows_x86_64\Reacher.exe"
env = UnityEnvironment(file_name=unity_env_filename)
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
###Output
Number of agents: 20
Size of each action: 4
There are 20 agents. Each observes a state with length: 33
The state for the first agent looks like: [ 0.00000000e+00 -4.00000000e+00 0.00000000e+00 1.00000000e+00
-0.00000000e+00 -0.00000000e+00 -4.37113883e-08 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 -1.00000000e+01 0.00000000e+00
1.00000000e+00 -0.00000000e+00 -0.00000000e+00 -4.37113883e-08
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 5.75471878e+00 -1.00000000e+00
5.55726624e+00 0.00000000e+00 1.00000000e+00 0.00000000e+00
-1.68164849e-01]
###Markdown
DDPG AgentIntegrate DDPG Agent from Udacity DRLND pendulum environment
###Code
from ddpg_agent import Agent, ddpg
import model
from matplotlib.pylab import plt
###Output
_____no_output_____
###Markdown
Training
###Code
# Start agent
agent = Agent(state_size, action_size, 100)
# Run DDPG
scores = ddpg(agent, env, n_episodes=150, print_every=10)
fig = plt.figure(figsize=(10,6))
plt.plot(np.arange(len(scores)), scores)
plt.title('Reacher - DDPG')
plt.xlabel('episode')
plt.ylabel('Total score (averaged over agents)')
plt.grid()
plt.show()
fig.savefig('result-ddpg.png')
###Output
_____no_output_____
###Markdown
Load agent and run
###Code
import torch
agent.actor_local.load_state_dict(torch.load('checkpoint_actor.pth'))
agent.critic_local.load_state_dict(torch.load('checkpoint_critic.pth'))
# Run
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
while True:
actions = agent.act(states, add_noise=False)
env_info = env.step(actions)[brain_name]
next_states = env_info.vector_observations
rewards = env_info.rewards
dones = env_info.local_done
scores += rewards
states = next_states
if np.any(dones):
break
print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))
###Output
Total score (averaged over agents) this episode: 39.19249912397936
|
examples/kind-project/kind-example.ipynb | ###Markdown
Fogify + Kind Import FogifySDK, connect with controller, and import "fogified" `docker-compose` file
###Code
from FogifySDK import FogifySDK
fogify = FogifySDK("http://controller:5000", "kind-docker-compose.yaml")
###Output
_____no_output_____
###Markdown
Deploy the topologyWhile the system deploys the topology, a progress bar illustrates the current status.
###Code
fogify.deploy()
###Output
Deploy process: 100%|██████████| 4/4 [00:10<00:00, 2.53s/it]
###Markdown
Retrieve the monitoring metrics from running instancesFogify allows users to retrieve monitoring metrics in `pandas` dataframes.The latter helps in depicting: Tables of values
###Code
fogify.get_metrics_from('control-plane_1')
###Output
_____no_output_____
###Markdown
Differences in out-going network data size
###Code
fogify.get_metrics_from('worker-0_1').network_tx_internet.diff().plot()
fogify.get_metrics_from('worker-1_1').network_tx_internet.diff().plot()
fogify.get_metrics_from('worker-2_1').network_tx_internet.diff().plot()
###Output
_____no_output_____
###Markdown
Timelines
###Code
fogify.get_metrics_from('worker-0_1').cpu_util.plot()
###Output
_____no_output_____
###Markdown
Update link connectivity We can update the connectivity between two nodes at run-time through the `update_link` function. The following command injects a `50ms` delay on the uplink and a `50ms` delay on the downlink of the (`worker-1`, `control-plane`) link, applied on both nodes (`bidirectional=True`). As a result, the `rtt` between these nodes becomes `200ms`.
###Code
fogify.update_link('internet', 'worker-1', 'control-plane',{'uplink':{'latency':{'delay': '50ms'}}, 'downlink':{'latency':{'delay': '50ms'}}}, True)
###Output
_____no_output_____
###Markdown
UndeployFinally, the FogifySDK provides the `undeploy` method that destroys the emulated infrastructure.
###Code
fogify.undeploy()
###Output
Undeploy process: 100%|██████████| 4/4 [00:25<00:00, 6.36s/it]
|
notebooks/LogisticMap.ipynb | ###Markdown
Logistic mapMathematically, the [logistic map](https://en.wikipedia.org/wiki/Logistic_map) is written$$x_{n+1}=\mu x_n(1-x_n)$$where $x_n$ is a number between zero and one that represents the ratio of existing population to the maximum possible population. The values of interest for the parameter $\mu$ are those in the interval $[0,4]$, so that $x_n$ remains bounded on $[0,1]$. We will draw the system's bifurcation diagram, which shows the possible long-term behaviors (equilibria, fixed points, periodic orbits, and chaotic trajectories) as a function of the system's parameter. We will also compute an approximation of the system's Lyapunov exponent, characterizing the model's sensitivity to initial conditions.Let's import NumPy and matplotlib:
###Code
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Here is the implementation of this function in Python:
###Code
def logistic(mu, x):
return mu * x * (1 - x)
###Output
_____no_output_____
###Markdown
Here is a graphic representation of this function
###Code
x = np.linspace(0, 1)
mu = 4
fig, ax = plt.subplots(1, 1)
ax.plot(x, logistic(mu, x), 'k')
plt.title(f'$f(x)=\mu x (1-x)$, $\mu={mu:.1f}$')
plt.xlabel('x')
plt.ylabel('f(x)')
plt.grid(True)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Our discrete dynamical system is defined by the recursive application of the logistic function:$$x_{n+1}^{(\mu)}=f_\mu(x_n^{(\mu)})=\mu x_n^{(\mu)}(1-x_n^{(\mu)})$$Let's simulate a few iterations of this system with two different values of $\mu$:
###Code
def plot_system(mu, x0, n, ax=None):
# Plot the function and the
# y=x diagonal line.
t = np.linspace(0, 1)
ax.plot(t, logistic(mu, t), 'k', lw=2)
ax.plot([0, 1], [0, 1], 'k', lw=2)
# Recursively apply y=f(x) and plot two lines:
# (x, x) -> (x, y)
# (x, y) -> (y, y)
x = x0
for i in range(n):
y = logistic(mu, x)
# Plot the two lines.
ax.plot([x, x], [x, y], 'k', lw=1)
ax.plot([x, y], [y, y], 'k', lw=1)
# Plot the positions with increasing
# opacity.
ax.plot([x], [y], 'ok', ms=10,
alpha=(i + 1) / n)
x = y
ax.set_xlabel('$x_n$')
ax.set_ylabel('$x_{n+1}$')
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.set_title(f"$\mu={mu:.1f}, \, x_0={x0:.1f}$")
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 6),
sharey=True)
plot_system(2.5, .1, 10, ax=ax1)
plot_system(3.5, .1, 10, ax=ax2)
###Output
_____no_output_____
###Markdown
On the left panel, we can see that our system converges to the intersection point of the curve and the diagonal line (fixed point). On the right panel however, using a different value for $\mu$, we observe a seemingly chaotic behavior of the system. Now, we simulate this system for 10000 values of $\mu$ linearly spaced between 2.5 and 4, and vectorize the simulation with NumPy by considering a vector of independent systems (one dynamical system per parameter value):
###Code
n = 10000
mu = np.linspace(2.5, 4.0, n)
###Output
_____no_output_____
###Markdown
We use 1000 iterations of the logistic map and keep the last 100 iterations to display the bifurcation diagram:
###Code
iterations = 1000
last = 100
###Output
_____no_output_____
###Markdown
We initialize our system with the same initial condition $x_0=0.00001$:
###Code
x = 1e-5 * np.ones(n)
###Output
_____no_output_____
###Markdown
We also compute an approximation of the Lyapunov exponent for every value of $\mu$. The Lyapunov exponent is defined by:$$\lambda(\mu)=\lim_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}\log \left|\frac{df_\mu}{dx}(x_i^{(\mu)})\right|$$We first initialize the lyapunov vector:
###Code
lyapunov = np.zeros(n)
###Output
_____no_output_____
###Markdown
Now, we simulate the system and plot the bifurcation diagram. The simulation only involves the iterative evaluation of the `logistic()` function on our vector $x$. Then, to display the bifurcation diagram, we draw one pixel per point $x_n^{(\mu)}$ during the last 100 iterations:
###Code
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(8, 9),
sharex=True)
for i in range(iterations):
x = logistic(mu, x)
# We compute the partial sum of the
# Lyapunov exponent.
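    # The summand log|f'(x)| uses f'(x) = mu - 2*mu*x, the derivative of f(x) = mu*x*(1-x).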
lyapunov += np.log(abs(mu - 2 * mu * x))
# We display the bifurcation diagram.
if i >= (iterations - last):
ax1.plot(mu, x, ',k', alpha=.25)
ax1.set_xlabel('$\mu$')
ax1.set_ylabel('$x$')
ax1.set_xlim(2.5, 4)
ax1.set_title("Bifurcation diagram")
ax1.grid(True)
# We display the Lyapunov exponent.
# Horizontal line.
ax2.axhline(0, color='k', lw=.5, alpha=.5)
# Negative Lyapunov exponent.
ax2.plot(mu[lyapunov < 0],
lyapunov[lyapunov < 0] / iterations,
'.k', alpha=.5, ms=.5)
# Positive Lyapunov exponent.
ax2.plot(mu[lyapunov >= 0],
lyapunov[lyapunov >= 0] / iterations,
'.r', alpha=.5, ms=.5)
ax2.set_xlabel('$\mu$')
ax2.set_ylabel('$\lambda$')
ax2.set_xlim(2.5, 4)
ax2.set_ylim(-2, 1)
ax2.set_title("Lyapunov exponent")
ax2.grid(True)
###Output
_____no_output_____ |
nbs/dl2/11_workable_train_imagenette.ipynb | ###Markdown
Imagenet(te) training [Jump_to lesson 12 video](https://course.fast.ai/videos/?lesson=12&t=1681)
###Code
path = datasets.untar_data(datasets.URLs.IMAGENETTE_160)
size = 128
tfms = [make_rgb, RandomResizedCrop(size, scale=(0.35,1)), np_to_float, PilRandomFlip()]
bs = 64
il = ImageList.from_files(path, tfms=tfms)
sd = SplitData.split_by_func(il, partial(grandparent_splitter, valid_name='val'))
ll = label_by_func(sd, parent_labeler, proc_y=CategoryProcessor())
ll.valid.x.tfms = [make_rgb, CenterCrop(size), np_to_float]
data = ll.to_databunch(bs, c_in=3, c_out=10, num_workers=8)
###Output
_____no_output_____
###Markdown
XResNet [Jump_to lesson 12 video](https://course.fast.ai/videos/?lesson=12&t=1701)
###Code
#export
def noop(x): return x
class Flatten(nn.Module):
def forward(self, x): return x.view(x.size(0), -1)
def conv(ni, nf, ks=3, stride=1, bias=False):
return nn.Conv2d(ni, nf, kernel_size=ks, stride=stride, padding=ks//2, bias=bias)
#export
act_fn = nn.ReLU(inplace=True)
def init_cnn(m):
if getattr(m, 'bias', None) is not None: nn.init.constant_(m.bias, 0)
if isinstance(m, (nn.Conv2d,nn.Linear)): nn.init.kaiming_normal_(m.weight)
    for l in m.children(): init_cnn(l) # apply recursively to children of the module
def conv_layer(ni, nf, ks=3, stride=1, zero_bn=False, act=True):
bn = nn.BatchNorm2d(nf)
nn.init.constant_(bn.weight, 0. if zero_bn else 1.)
layers = [conv(ni, nf, ks, stride=stride), bn]
if act: layers.append(act_fn)
return nn.Sequential(*layers)
#export
class ResBlock(nn.Module):
def __init__(self, expansion, ni, nh, stride=1):
super().__init__()
nf,ni = nh*expansion,ni*expansion
layers = [conv_layer(ni, nh, 3, stride=stride),
conv_layer(nh, nf, 3, zero_bn=True, act=False)
] if expansion == 1 else [
conv_layer(ni, nh, 1),
conv_layer(nh, nh, 3, stride=stride),
conv_layer(nh, nf, 1, zero_bn=True, act=False)
]
self.convs = nn.Sequential(*layers)
self.idconv = noop if ni==nf else conv_layer(ni, nf, 1, act=False)
self.pool = noop if stride==1 else nn.AvgPool2d(2, ceil_mode=True)
def forward(self, x): return act_fn(self.convs(x) + self.idconv(self.pool(x)))
#export
class XResNet(nn.Sequential):
@classmethod
def create(cls, expansion, layers, c_in=3, c_out=1000):
nfs = [c_in, (c_in+1)*8, 64, 64]
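        # stem: three 3x3 convs mapping c_in -> (c_in+1)*8 -> 64 -> 64 channels; only the first conv downsamples (stride 2)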
stem = [conv_layer(ni=nfs[i],nf=nfs[i+1], stride=2 if i==0 else 1)
for i in range(3)]
nfs = [64//expansion,64,128,256,512]
# create a list of sequential blocks, where the i'th block is made of layers[i] ResBlocks
res_layers = [cls._make_layer(expansion=expansion, ni=nfs[i], nf=nfs[i+1],
n_blocks=l, stride=1 if i==0 else 2)
for i,l in enumerate(layers)]
res = cls(
*stem,
nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
*res_layers,
nn.AdaptiveAvgPool2d(1), Flatten(),
nn.Linear(nfs[-1]*expansion, c_out),
)
init_cnn(res)
return res
@staticmethod
def _make_layer(expansion, ni, nf, n_blocks, stride):
# creates n_blocks sequential ResBlock layers.
        # the first block in the sequence uses `stride`, while the following blocks use stride 1.
return nn.Sequential(
*[ResBlock(expansion, ni if i==0 else nf, nf, stride if i==0 else 1)
for i in range(n_blocks)])
#export
def xresnet18 (**kwargs): return XResNet.create(1, [2, 2, 2, 2], **kwargs)
def xresnet34 (**kwargs): return XResNet.create(1, [3, 4, 6, 3], **kwargs)
def xresnet50 (**kwargs): return XResNet.create(4, [3, 4, 6, 3], **kwargs)
def xresnet101(**kwargs): return XResNet.create(4, [3, 4, 23, 3], **kwargs)
def xresnet152(**kwargs): return XResNet.create(4, [3, 8, 36, 3], **kwargs)
###Output
_____no_output_____
###Markdown
Train [Jump_to lesson 12 video](https://course.fast.ai/videos/?lesson=12&t=2515)
###Code
cbfs = [partial(AvgStatsCallback,accuracy), ProgressCallback, CudaCallback,
partial(BatchTransformXCallback, norm_imagenette),
# partial(MixUp, alpha=0.2)
]
loss_func = LabelSmoothingCrossEntropy()
arch = partial(xresnet18, c_out=10)
opt_func = adam_opt(mom=0.9, mom_sqr=0.99, eps=1e-6, wd=1e-2)
#export
def get_batch(dl, learn):
learn.xb,learn.yb = next(iter(dl))
learn.do_begin_fit(0)
learn('begin_batch')
learn('after_fit')
return learn.xb,learn.yb
###Output
_____no_output_____
###Markdown
We need to replace the old `model_summary` since it used to take a `Runner`.
###Code
# export
def model_summary(model, data, find_all=False, print_mod=False):
xb,yb = get_batch(data.valid_dl, learn)
mods = find_modules(model, is_lin_layer) if find_all else model.children()
f = lambda hook,mod,inp,out: print(f"====\n{mod}\n" if print_mod else "", out.shape)
with Hooks(mods, f) as hooks: learn.model(xb)
learn = Learner(arch(), data, loss_func, lr=1, cb_funcs=cbfs, opt_func=opt_func)
learn.model = learn.model.cuda()
model_summary(learn.model, data, print_mod=False)
arch = partial(xresnet34, c_out=10)
learn = Learner(arch(), data, loss_func, lr=1, cb_funcs=cbfs, opt_func=opt_func)
learn.fit(1, cbs=[LR_Find(), Recorder()])
learn.recorder.plot(3)
#export
def create_phases(phases):
phases = listify(phases)
return phases + [1-sum(phases)]
print(create_phases(0.3))
print(create_phases([0.3,0.2]))
lr = 1e-2
pct_start = 0.5
phases = create_phases(pct_start)
sched_lr = combine_scheds(phases, cos_1cycle_anneal(lr/10., lr, lr/1e5))
sched_mom = combine_scheds(phases, cos_1cycle_anneal(0.95, 0.85, 0.95))
cbsched = [
ParamScheduler('lr', sched_lr),
ParamScheduler('mom', sched_mom)]
learn = Learner(arch(), data, loss_func, lr=lr, cb_funcs=cbfs, opt_func=opt_func)
learn.fit(5, cbs=cbsched)
###Output
_____no_output_____
###Markdown
cnn_learner [Jump_to lesson 12 video](https://course.fast.ai/videos/?lesson=12&t=2711)
###Code
#export
def cnn_learner(arch, data, loss_func, opt_func, c_in=None, c_out=None,
lr=1e-2, cuda=True, norm=None, progress=True, mixup=0, xtra_cb=None, **kwargs):
cbfs = [partial(AvgStatsCallback,accuracy)]+listify(xtra_cb)
if progress: cbfs.append(ProgressCallback)
if cuda: cbfs.append(CudaCallback)
if norm: cbfs.append(partial(BatchTransformXCallback, norm))
if mixup: cbfs.append(partial(MixUp, mixup))
arch_args = {}
if not c_in : c_in = data.c_in
if not c_out: c_out = data.c_out
if c_in: arch_args['c_in' ]=c_in
if c_out: arch_args['c_out']=c_out
return Learner(arch(**arch_args), data, loss_func, opt_func=opt_func, lr=lr, cb_funcs=cbfs, **kwargs)
learn = cnn_learner(xresnet34, data, loss_func, opt_func, norm=norm_imagenette)
learn.fit(5, cbsched)
###Output
_____no_output_____
###Markdown
Imagenet You can see all this put together in the fastai [imagenet training script](https://github.com/fastai/fastai/blob/master/examples/train_imagenet.py). It's the same as what we've seen so far, except it also handles multi-GPU training. So how well does this work?We trained for 60 epochs, and got an error of 5.9%, compared to the official PyTorch resnet which gets 7.5% error in 90 epochs! Our xresnet 50 training even surpasses standard resnet 152, which trains for 50% more epochs and has 3x as many layers. Export
###Code
!./notebook2script.py 11_train_imagenette.ipynb
###Output
Converted 11_train_imagenette.ipynb to exp/nb_11.py
|
aps/satskred/tyin/test_intersection.ipynb | ###Markdown
See http://geopandas.org/set_operations.html
###Code
polys1 = gpd.GeoSeries([Polygon([(0,0), (2,0), (2,2), (0,2)]),
Polygon([(2,2), (4,2), (4,4), (2,4)])])
polys2 = gpd.GeoSeries([Polygon([(1,1), (3,1), (3,3), (1,3)]),
Polygon([(3,3), (5,3), (5,5), (3,5)])])
df1 = gpd.GeoDataFrame({'geometry': polys1, 'df1':[1,2]})
df2 = gpd.GeoDataFrame({'geometry': polys2, 'df2':[1,2]})
ax = df1.plot(color='red');
df2.plot(ax=ax, color='green', alpha=0.5);
res_intersection = gpd.overlay(df1, df2, how='intersection')
ax = res_intersection.plot(cmap='tab10')
df1.plot(ax=ax, facecolor='none', edgecolor='k')
df2.plot(ax=ax, facecolor='none', edgecolor='k')
###Output
_____no_output_____ |
University Project .ipynb | ###Markdown
Cluster _ Hierarchical
###Code
from sklearn.cluster import AgglomerativeClustering
import scipy.cluster.hierarchy as sch
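# Ward-linkage dendrogram to eyeball a reasonable number of clusters
# (assumes `df` and `df_scaler` were prepared earlier in the notebook)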
dendrogram = sch.dendrogram(sch.linkage(df, method='ward'))
model = AgglomerativeClustering(n_clusters=40)
cluster = model.fit_predict(df_scaler)
cluster
df['Cluster'] = cluster
df
###Output
_____no_output_____ |
init_capstone.ipynb | ###Markdown
Capstone Project Data Science This notebook is solely made for the purpose of the capstone project in the Data Science course.
###Code
import pandas as pd
import numpy as np
print("Hello Capstone Project Course!")
###Output
Hello Capstone Project Course!
|
citibike_actigram.ipynb | ###Markdown
First pass, are there interesting daily patterns in biking behavior? Data not includedGrabbed data from citibike urls, have to add data retrieval steps, consult https://github.com/toddwschneider/nyc-citibike-data Cheated with speed and distance. I just compute distance and speed by adding the change in latitude to the change in longitude.
###Code
def readCSV(csv):
d = pd.read_csv(csv,
parse_dates = ['starttime','stoptime'],
infer_datetime_format = True)
d.columns = ['tripdur','st','et','ssid','ssname','sslat','sslon',
'esid','esname','eslat','eslon','bikeid','usertype','yob','gender']
d['age'] = d['st'].apply(lambda x: x.year) - d.yob
# convert from degrees to miles,
d['ns'] = (d.eslat - d.sslat)*69.172
    # the distance spanned by a degree of longitude varies with latitude, so apply a quick transformation.
d['ew'] = (d.eslon - d.sslon)*(math.cos(40.7648 *(math.pi/180)) * 69.172)
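    # 'pseudo' distance/speed: L1 (Manhattan) distance |ns| + |ew| in miles, and that distance divided by trip hours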
d['psuedospeed'] = (d.ns.abs()+d.ew.abs()) / (d.tripdur/3600)
d['psuedodistance'] = d.ns.abs()+d.ew.abs()
# break start and stop times into 5 minute intervals
ns5min=5*60*1000000000 # 5 minutes in nanoseconds
d['st5min'] = pd.to_datetime(((d.st.astype(np.int64) // ns5min + 1 ) * ns5min))
d['et5min'] = pd.to_datetime(((d.et.astype(np.int64) // ns5min + 1 ) * ns5min))
    ns10min=10*60*1000000000 # 10 minutes in nanoseconds
d['st10min'] = pd.to_datetime(((d.st.astype(np.int64) // ns10min + 1 ) * ns10min))
d['et10min'] = pd.to_datetime(((d.et.astype(np.int64) // ns10min + 1 ) * ns10min))
return d
# we have some garbage in here
# had used glob, but order was not sorted or fixed, so do this:
csvpaths = [os.path.join(os.pardir,'data',"2015%02d-citibike-tripdata.csv" % m) for m in np.arange(12)+1]
t = np.zeros((int(60*24/10),12))
spd = np.copy(t)
dist = np.copy(t)
dur = np.copy(t)
cnts = np.copy(t)
for i,csv in enumerate(csvpaths):
print(csv)
d = readCSV(csv)
grp = d.groupby(by=[d.st10min.map(lambda x : int(x.hour*60 + x.minute))])
x = pd.to_timedelta(grp.psuedodistance.mean().index.values/60, unit='m')
t[:,i] = x
cnts[:,i] = grp.tripdur.size()
spd[:,i] = grp.psuedospeed.mean()
dist[:,i] = grp.psuedodistance.mean()
dur[:,i] = grp.tripdur.mean()/60 # do trip duration in minutes.
import matplotlib.patches as patches
import matplotlib.transforms as transforms
import calendar
# we have some garbage in here
def timeTicks(x, pos):
d = datetime.timedelta(milliseconds=x/1000000)
return str(d)[2:4]
formatter = mpl.ticker.FuncFormatter(timeTicks)
f, axar = plt.subplots(2,len(csvpaths), sharey='row', sharex='col')
for i,csv in enumerate(csvpaths):
axar[0,i].plot(x,dist[:,i],'-k',linewidth=0.5)
axar[1,i].plot(x,spd[:,i],'-k',linewidth=0.5)
axar[0,i].set_title(calendar.month_name[i+1])
for ax in axar.flatten():
trans = transforms.blended_transform_factory(ax.transData, ax.transAxes)
amrush = 8*60*10**9
pmrush = 17*60*10**9
amrect = patches.Rectangle((amrush,0), width=60*10**9, height=1,
transform=trans, color='darkmagenta',
alpha=0.5)
pmrect = patches.Rectangle((pmrush,0), width=60*10**9, height=1,
transform=trans, color='darkcyan',
alpha=0.5)
[ax.add_patch(rect) for rect in [amrect,pmrect]]
axar[0,i]
ax.xaxis.set_major_formatter(formatter)
ax.xaxis.set_ticks([9*60*10**9,17*60*10**9])
ax.set_title
axar[0,0].set_ylabel("Estimated Distance (~miles)")
axar[1,0].set_ylabel("Estimated Speed (~mph)")
[ax.set_ylim(0.9,1.6) for ax in axar[0,:]]
[ax.set_ylim(6,9.5) for ax in axar[1,:]]
f.set_size_inches(20,6)
f.savefig('CartesianDistance.pdf')
###Output
_____no_output_____
###Markdown
You can see that there are peaks in the distance traveled around the beginning (magenta) and the end (cyan) of the work day (top row). Interestingly, there is not a corresponding pattern in the speed of travel over the work day. There is a clear peak in speed before the beginning of work, and a substantial slowing in the middle of the day. At the end of the work day, trips are faster than in the middle of the day, but not nearly as fast as in the morning. This pattern is observed throughout the year, but is more pronounced in the summer months, indicating an interaction with temperature (I am considering time of day as a proxy for temperature). If individuals are riding more slowly on their return commute, then trip duration should dilate at the end of the day. Trips in the system have a 45 minute time limit though, and it would be interesting to see if the tendency to go slower at the end of the day, especially in the summer, runs into a penalty for taking too long. We could compare the distributions of durations for trips starting during the AM rush hour to those starting during the PM rush hour, maybe there is an obvious difference? (A quick check follows the polar plots below.)
###Code
f, axar = plt.subplots(2,len(csvpaths), subplot_kw=dict(projection='polar'))
st,m,e = (0,int((24*60/5)//2), int((24*60/5)))
tm = np.linspace(2.5*np.pi,0.5*np.pi,int((24*60/5)//2))
labels={'darkmagenta':'am',
'darkcyan':'pm'}
for i,csv in enumerate(csvpaths):
for s, col in zip([slice(st,m),slice(m,e)],['darkmagenta','darkcyan']):
axar[0,i].plot(tm,dist[:,i][s],color = col,label = labels[col])
axar[1,i].plot(tm,spd[:,i][s],color = col)
axar[0,i].set_title(calendar.month_name[i+1])
[ax.set_xticks([0,np.pi/2,np.pi,np.pi*3/2]) for ax in axar.flatten()]
[ax.set_xticklabels([3,12,9,6]) for ax in axar.flatten()]
[ax.set_rticks([]) for ax in axar.flatten()]
[ax.set_rlim(0.8,1.8) for ax in axar[0,:]]
[ax.set_rlim(5,9) for ax in axar[1,:]]
axar[0,0].set_ylabel('Distance')
axar[1,0].set_ylabel('Speed')
h,l = axar[0,0].get_legend_handles_labels()
f.legend(h,l,'center')
#[ax.set_ylim(0.8,1.6) for ax in axar[0,:]]
#[ax.xaxis.set_ticks([9*60*10**9,17*60*10**9]) for ax in axar.flatten()]
f.set_size_inches(24,4)
f.savefig('PolarDistance.pdf')
###Output
_____no_output_____
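###Markdown
 Following up on the duration question above, here is a minimal sketch (not part of the original analysis) comparing trip-duration distributions for AM versus PM rush-hour starts against the 45 minute limit. It assumes `d` still holds the last month read by `readCSV` in the loop above.
###Code
# AM vs. PM rush-hour trip durations, in minutes (tripdur is recorded in seconds)
am_dur = d[(d.st.dt.hour >= 8) & (d.st.dt.hour < 9)].tripdur / 60
pm_dur = d[(d.st.dt.hour >= 17) & (d.st.dt.hour < 18)].tripdur / 60
f, ax = plt.subplots()
bins = np.arange(0, 61, 2)
ax.hist(am_dur, bins=bins, density=True, alpha=0.5, color='darkmagenta', label='AM rush start')
ax.hist(pm_dur, bins=bins, density=True, alpha=0.5, color='darkcyan', label='PM rush start')
ax.axvline(45, color='red', label='45 minute limit')
ax.set_xlabel('trip duration (minutes)')
ax.set_ylabel('density')
ax.legend()
f.set_size_inches(6, 4)
###Output
_____no_output_____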
###Markdown
Saw something interesting! Riders go fast in the AM commute, slow on the way home. New question: do riders respond to poor air quality? The AM commute and PM commute are much different, and it would be interesting to see how that difference relates to air quality. GOAL: an NLS regression model to see whether air quality impacts riding behavior.
###Code
air = pd.read_csv(os.path.join(os.pardir,'data','AQDM.txt'),
parse_dates = [['Date Local','24 Hour Local']],
usecols = [6, 7, 9, 10, 11,16,17],
dtype = {'Site Num':str},
infer_datetime_format=True,
skipfooter = 1)
air.rename(columns={'Date Local_24 Hour Local':'DateTime'},inplace = True)
###Output
C:\Users\Matthew Perkins\Anaconda3\lib\site-packages\ipykernel\__main__.py:6: ParserWarning: Falling back to the 'python' engine because the 'c' engine does not support skipfooter; you can avoid this warning by specifying engine='python'.
###Markdown
The existence of negative PM 2.5 densities is a bit troubling for PS19 and Division Street, may have to examine what measurement procedure produces these negative values?
###Code
f, axar = plt.subplots(1,4,sharey='row')
sites = tuple(zip(['0128','0115','0134','0135'],
['PS19','IS143','Division Street','CCNY']))
for i, (Site, Desc) in enumerate(sites):
tmpd = air[(air['Site Num']==Site) & (air['Parameter Code']==88502)]
axar[i].plot(tmpd.DateTime, tmpd['Sample Measurement'],'.k',markersize = 2)
axar[i].axhline(35,color = 'red')
axar[i].axhline(0,color = 'darkcyan')
axar[i].set_title(Desc)
axar[0].set_ylabel('Micrograms/cubic meter')
f.set_size_inches(15,4)
###Output
_____no_output_____
###Markdown
Intermediate School 143 is the monitoring site with the most observations exceeding the EPA PM 2.5 limit (35 micrograms/cubic meter); City College of New York, up in Harlem, is doing better. Would like to see to what degree the measurements from different stations concord.
###Code
PM25 = air[air['Parameter Code']==88502]
PM25 = PM25.pivot(index='DateTime', columns = 'Site Num')['Sample Measurement']
from pandas.plotting import scatter_matrix
axar = scatter_matrix(PM25,diagonal = 'kde',figsize = (10,10), alpha = 0.3)
for ax in axar.flatten():
ax.set_ylim(0,50)
ax.set_xlim(0,50)
for ax in axar.diagonal():
ax.set_ylim(0,0.09)
###Output
_____no_output_____
###Markdown
Sites PS 19 and IS 143 definitely look like they are dirtier than Division Street and CCNY.
###Code
air[air['Parameter Code']==88502].groupby(['Site Num'])['Sample Measurement'].apply(lambda x: np.sum(x>35))
plt.figure()
hrly = air.groupby(by = ['Site Num','Parameter Code',air.DateTime.map(lambda x: x.hour)])['Sample Measurement'].mean()
mnthly = air.groupby(by = ['Site Num','Parameter Code',air.DateTime.map(lambda x: x.month)])['Sample Measurement'].mean()
f, axar = plt.subplots(1,2,sharey='row')
for (Site, Descr) in sites:
axar[0].plot(hrly.xs([Site,88502]).index,
hrly.xs([Site,88502]).values,
label = Descr)
axar[1].plot(mnthly.xs([Site,88502]).index,
mnthly.xs([Site,88502]).values,
label = Descr)
axar[0].set_title('Hourly Avg.')
axar[1].set_title('Monthly Avg.')
f.set_size_inches(10,4)
plt.gca().legend()
###Output
_____no_output_____
###Markdown
It looks like PM2.5 concentrations are MUCH higher in the winter, and have a bump during rush hour. I blame low grade fuel oil for the winter bump. The CCNY monitoring site also has measures of Ozone and Carbon Monoxide; it would be good to compare these patterns to PM2.5. Ozone has an obviously different pattern throughout the day.
###Code
mnthly
hrly = air.groupby(by = ['Site Num','Parameter Code',air.DateTime.map(lambda x: x.hour)])['Sample Measurement'].mean()
f, axar = plt.subplots(3,2,gridspec_kw = {'wspace':0.1,'hspace':0.6},sharex='col',sharey='row')
Codes = zip(air['Parameter Code'].unique(),air['AQS Parameter Desc'].unique())
for i, (Code, Desc) in enumerate(Codes):
axar[i,0].plot(hrly.xs(['0135',Code]).index,
hrly.xs(['0135',Code]).values,
label = Desc)
axar[i,1].plot(mnthly.xs(['0135',Code]).index,
mnthly.xs(['0135',Code]).values,
label = Desc)
axar[i,0].set_ylabel(Desc)
axar[-1,0].set_xlabel('hour')
axar[-1,1].set_xlabel('month')
axar[0,0].set_title('daily rhythms')
axar[0,1].set_title('annual rhythms')
f.set_size_inches((8,8))
###Output
_____no_output_____
###Markdown
Ozone probably roughly tracks temperature, on a daily and annual rhythm. This could make it difficult to assess whether it has an impact on riding behavior independent of temperature.
###Code
f, axar = plt.subplots(1,len(csvpaths), sharey='row', sharex='col')
for i,csv in enumerate(csvpaths):
axar[i].plot(x,cnts[:,i],'-k',linewidth=0.5)
axar[i].set_title(calendar.month_name[i+1])
for ax in axar.flatten():
trans = transforms.blended_transform_factory(ax.transData, ax.transAxes)
amrush = 8*60*10**9
pmrush = 17*60*10**9
amrect = patches.Rectangle((amrush,0), width=60*10**9, height=1,
transform=trans, color='darkmagenta',
alpha=0.5)
pmrect = patches.Rectangle((pmrush,0), width=60*10**9, height=1,
transform=trans, color='darkcyan',
alpha=0.5)
#[ax.add_patch(rect) for rect in [amrect,pmrect]]
ax.xaxis.set_major_formatter(formatter)
ax.xaxis.set_ticks([9*60*10**9,17*60*10**9])
axar[0].set_ylabel("number of trips")
f.set_size_inches(20,6)
f.savefig('NumTrips.pdf')
###Output
_____no_output_____ |
.ipynb_checkpoints/05b - Convolutional Neural Networks (Tensorflow)-checkpoint.ipynb | ###Markdown
Convolutional Neural Networks with TensorFlow"Deep Learning" is a general term that usually refers to the use of neural networks with multiple layers that synthesize the way the human brain learns and makes decisions. A convolutional neural network is a kind of neural network that extracts *features* from matrices of numeric values (often images) by convolving multiple filters over the matrix values to apply weights and identify patterns, such as edges, corners, and so on in an image. The numeric representations of these patterns are then passed to a fully-connected neural network layer to map the features to specific classes.There are several commonly used frameworks for creating CNNs. In this notebook, we'll build a simple example CNN using TensorFlow. Install and import librariesFirst, let's install and import the TensorFlow libraries we'll need.
###Code
!pip install --upgrade tensorflow
import tensorflow
from tensorflow import keras
print('TensorFlow version:',tensorflow.__version__)
print('Keras version:',keras.__version__)
###Output
TensorFlow version: 2.3.1
Keras version: 2.4.0
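###Markdown
 Before loading the shape images, here is a tiny, self-contained sketch (not part of the original lab) of the convolution idea described in the introduction: slide a 3x3 vertical-edge filter over a small made-up patch and compute one feature-map value per window position. The patch and kernel values are invented purely for illustration.
###Code
import numpy as np

# A 4x5 "image" with a vertical edge between the dark left and bright right region
patch = np.array([[0, 0, 0, 9, 9],
                  [0, 0, 0, 9, 9],
                  [0, 0, 0, 9, 9],
                  [0, 0, 0, 9, 9]], dtype=float)

# A 3x3 vertical-edge filter: each response is (left column) minus (right column) of the window
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)

# "Valid" convolution (no padding): output is (rows-2) x (cols-2)
out_h, out_w = patch.shape[0] - 2, patch.shape[1] - 2
feature_map = np.zeros((out_h, out_w))
for i in range(out_h):
    for j in range(out_w):
        feature_map[i, j] = np.sum(patch[i:i+3, j:j+3] * kernel)

print(feature_map)  # strongest (most negative) responses where the window straddles the edge
###Output
_____no_output_____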
###Markdown
Explore the dataIn this exercise, you'll train a CNN-based classification model that can classify images of geometric shapes. Let's take a look at the classes of shape the model needs to identify.
###Code
import os
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
# The images are in the data/shapes folder
data_folder = 'data/shapes'
# Get the class names
classes = os.listdir(data_folder)
classes.sort()
print(len(classes), 'classes:')
print(classes)
# Show the first image in each folder
fig = plt.figure(figsize=(8, 12))
i = 0
for sub_dir in os.listdir(data_folder):
i+=1
img_file = os.listdir(os.path.join(data_folder,sub_dir))[0]
img_path = os.path.join(data_folder, sub_dir, img_file)
img = mpimg.imread(img_path)
a=fig.add_subplot(1, len(classes),i)
a.axis('off')
imgplot = plt.imshow(img)
a.set_title(img_file)
plt.show()
###Output
3 classes:
['circle', 'square', 'triangle']
###Markdown
Prepare the dataBefore we can train the model, we need to prepare the data. We'll divide the feature values by 255 to normalize them as floating point values between 0 and 1, and we'll split the data so that we can use 70% of it to train the model, and hold back 30% to validate it. When loading the data, the data generator will assign "hot-encoded" numeric labels to indicate which class each image belongs to based on the subfolders in which the data is stored. In this case, there are three subfolders - *circle*, *square*, and *triangle*, so the labels will consist of three *0* or *1* values indicating which of these classes is associated with the image - for example the label [0 1 0] indicates that the image belongs to the second class (*square*).
###Code
from tensorflow.keras.preprocessing.image import ImageDataGenerator
img_size = (128, 128)
batch_size = 30
print("Getting Data...")
datagen = ImageDataGenerator(rescale=1./255, # normalize pixel values
validation_split=0.3) # hold back 30% of the images for validation
print("Preparing training dataset...")
train_generator = datagen.flow_from_directory(
data_folder,
target_size=img_size,
batch_size=batch_size,
class_mode='categorical',
subset='training') # set as training data
print("Preparing validation dataset...")
validation_generator = datagen.flow_from_directory(
data_folder,
target_size=img_size,
batch_size=batch_size,
class_mode='categorical',
subset='validation') # set as validation data
classnames = list(train_generator.class_indices.keys())
print('Data generators ready')
###Output
Getting Data...
Preparing training dataset...
Found 840 images belonging to 3 classes.
Preparing validation dataset...
Found 360 images belonging to 3 classes.
Data generators ready
###Markdown
Define the CNNNow we're ready to create our model. This involves defining the layers for our CNN, and compiling them for multi-class classification.
###Code
# Define a CNN classifier network
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
# Define the model as a sequence of layers
model = Sequential()
# The input layer accepts an image and applies a convolution that uses 32 6x6 filters and a rectified linear unit activation function
model.add(Conv2D(32, (6, 6), input_shape=train_generator.image_shape, activation='relu'))
# Next we'll add a max pooling layer with a 2x2 patch
model.add(MaxPooling2D(pool_size=(2,2)))
# We can add as many layers as we think necessary - here we'll add another convolution and max pooling layer
model.add(Conv2D(32, (6, 6), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# And another set
model.add(Conv2D(32, (6, 6), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# A dropout layer randomly drops some nodes to reduce inter-dependencies (which can cause over-fitting)
model.add(Dropout(0.2))
# Flatten the feature maps
model.add(Flatten())
# Generate a fully-connected output layer with a predicted probability for each class
# (softmax ensures all probabilities sum to 1)
model.add(Dense(train_generator.num_classes, activation='softmax'))
# With the layers defined, we can now compile the model for categorical (multi-class) classification
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print(model.summary())
###Output
_____no_output_____
###Markdown
Train the modelWith the layers of the CNN defined, we're ready to train the model using our image data. In the example below, we use 5 iterations (*epochs*) to train the model in 30-image batches, holding back 30% of the data for validation. After each epoch, the loss function measures the error (*loss*) in the model and adjusts the weights (which were randomly generated for the first iteration) to try to improve accuracy. > **Note**: We're only using 5 epochs to minimize the training time for this simple example. A real-world CNN is usually trained over more epochs than this. CNN model training is processor-intensive, involving a lot of matrix and vector-based operations; so it's recommended to perform this on a system that can leverage GPUs, which are optimized for these kinds of calculation. This will take a while to complete on a CPU-based system - status will be displayed as the training progresses.
###Code
# Train the model over 5 epochs using 30-image batches and using the validation holdout dataset for validation
num_epochs = 5
history = model.fit(
train_generator,
steps_per_epoch = train_generator.samples // batch_size,
validation_data = validation_generator,
validation_steps = validation_generator.samples // batch_size,
epochs = num_epochs)
###Output
_____no_output_____
###Markdown
View the loss historyWe tracked average training and validation loss history for each epoch. We can plot these to verify that loss reduced as the model was trained, and to detect *overfitting* (which is indicated by a continued drop in training loss after validation loss has levelled out or started to increase).
###Code
%matplotlib inline
from matplotlib import pyplot as plt
epoch_nums = range(1,num_epochs+1)
training_loss = history.history["loss"]
validation_loss = history.history["val_loss"]
plt.plot(epoch_nums, training_loss)
plt.plot(epoch_nums, validation_loss)
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend(['training', 'validation'], loc='upper right')
plt.show()
###Output
_____no_output_____
###Markdown
Evaluate model performanceWe can see the final accuracy based on the test data, but typically we'll want to explore performance metrics in a little more depth. Let's plot a confusion matrix to see how well the model is predicting each class.
###Code
# Tensorflow doesn't have a built-in confusion matrix metric, so we'll use SciKit-Learn
import numpy as np
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
%matplotlib inline
print("Generating predictions from validation data...")
# Get the image and label arrays for the first batch of validation data
x_test = validation_generator[0][0]
y_test = validation_generator[0][1]
# Use the model to predict the class
class_probabilities = model.predict(x_test)
# The model returns a probability value for each class
# The one with the highest probability is the predicted class
predictions = np.argmax(class_probabilities, axis=1)
# The actual labels are hot encoded (e.g. [0 1 0]), so get the one with the value 1
true_labels = np.argmax(y_test, axis=1)
# Plot the confusion matrix
cm = confusion_matrix(true_labels, predictions)
plt.imshow(cm, interpolation="nearest", cmap=plt.cm.Blues)
plt.colorbar()
tick_marks = np.arange(len(classnames))
plt.xticks(tick_marks, classnames, rotation=85)
plt.yticks(tick_marks, classnames)
plt.xlabel("Actual Shape")
plt.ylabel("Predicted Shape")
plt.show()
###Output
_____no_output_____
###Markdown
Save the Trained modelNow that you've trained a working model, you can save it (including the trained weights) for use later.
###Code
# Save the trained model
modelFileName = 'models/shape_classifier.h5'
model.save(modelFileName)
del model # deletes the existing model variable
print('model saved as', modelFileName)
###Output
_____no_output_____
###Markdown
Use the trained modelWhen you have a new image, you can use the saved model to predict its class.
###Code
from tensorflow.keras import models
import numpy as np
from random import randint
import os
%matplotlib inline
# Function to predict the class of an image
def predict_image(classifier, image):
from tensorflow import convert_to_tensor
# The model expects a batch of images as input, so we'll create an array of 1 image
    imgfeatures = image.reshape(1, image.shape[0], image.shape[1], image.shape[2])
# We need to format the input to match the training data
# The generator loaded the values as floating point numbers
# and normalized the pixel values, so...
imgfeatures = imgfeatures.astype('float32')
imgfeatures /= 255
# Use the model to predict the image class
class_probabilities = classifier.predict(imgfeatures)
# Find the class predictions with the highest predicted probability
index = int(np.argmax(class_probabilities, axis=1)[0])
return index
# Function to create a random image (of a square, circle, or triangle)
def create_image (size, shape):
from random import randint
import numpy as np
from PIL import Image, ImageDraw
xy1 = randint(10,40)
xy2 = randint(60,100)
col = (randint(0,200), randint(0,200), randint(0,200))
img = Image.new("RGB", size, (255, 255, 255))
draw = ImageDraw.Draw(img)
if shape == 'circle':
draw.ellipse([(xy1,xy1), (xy2,xy2)], fill=col)
elif shape == 'triangle':
draw.polygon([(xy1,xy1), (xy2,xy2), (xy2,xy1)], fill=col)
else: # square
draw.rectangle([(xy1,xy1), (xy2,xy2)], fill=col)
del draw
return np.array(img)
# Create a random test image
classnames = os.listdir(os.path.join('data', 'shapes'))
classnames.sort()
img = create_image ((128,128), classnames[randint(0, len(classnames)-1)])
plt.axis('off')
plt.imshow(img)
# Use the classifier to predict the class
model = models.load_model(modelFileName) # loads the saved model
class_idx = predict_image(model, img)
print (classnames[class_idx])
###Output
_____no_output_____ |
cistenie_dat_IV/cistenie_dat_IV.ipynb | ###Markdown
0. Imports
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
1. Load csv
###Code
csvFile = pd.read_csv('/content/drive/MyDrive/Škola/WM/cistenie_dat_IV/log_jeden_den.csv', ';', usecols=range(1,10))
csvFile
###Output
_____no_output_____
###Markdown
2. Get robot IPs into an array
###Code
ipArray = csvFile[csvFile.url.str.contains('robots.txt')]['ip'].unique()
print("Size: " + str(ipArray.size))
print(ipArray)
###Output
Size: 108
['95.108.213.51' '46.229.168.163' '46.229.168.146' '66.240.192.138'
'199.47.87.143' '46.229.168.137' '103.131.71.128' '103.131.71.135'
'13.66.139.1' '77.75.77.95' '46.229.168.139' '66.249.76.10' '77.75.73.26'
'46.229.168.148' '46.229.168.129' '141.8.183.215' '77.75.78.167'
'46.229.168.136' '46.229.168.144' '77.75.77.62' '136.243.70.68'
'46.229.168.150' '35.239.58.193' '87.250.224.88' '46.229.168.162'
'46.229.168.161' '77.75.76.166' '46.229.168.152' '46.229.168.145'
'188.165.235.21' '77.75.79.109' '46.229.168.154' '54.36.148.19'
'46.229.168.131' '94.130.238.229' '37.9.169.7' '157.55.39.20'
'46.229.168.135' '157.55.39.86' '47.108.80.103' '17.58.102.1'
'157.55.39.223' '157.55.39.0' '85.208.96.67' '46.229.168.140'
'77.75.77.32' '54.36.149.106' '46.229.168.151' '199.58.86.209'
'46.229.168.141' '5.196.87.175' '46.229.168.153' '199.58.86.211'
'46.229.168.142' '212.150.211.166' '46.229.168.147' '185.25.35.14'
'46.229.168.149' '46.229.161.131' '13.52.221.38' '199.47.87.142'
'46.229.168.130' '162.210.196.129' '46.229.168.132' '216.244.66.247'
'162.210.196.100' '220.243.136.250' '46.229.168.133' '77.75.78.161'
'144.76.96.236' '103.131.71.40' '54.36.148.128' '46.229.168.134'
'54.80.78.185' '108.59.8.70' '144.76.38.40' '95.163.255.223'
'54.224.19.135' '77.75.76.165' '91.121.183.15' '46.229.168.138'
'95.163.255.244' '5.9.66.153' '5.9.156.30' '162.210.196.130'
'66.249.76.130' '54.36.148.214' '157.55.39.43' '77.75.76.171'
'95.163.255.204' '95.163.255.242' '162.210.196.98' '77.75.78.160'
'46.229.168.143' '62.210.151.70' '144.76.23.232' '220.243.136.26'
'79.137.130.151' '95.163.255.201' '207.46.13.236' '35.196.62.179'
'77.75.78.172' '199.58.86.206' '88.198.33.145' '178.151.245.174'
'77.75.76.160' '95.163.255.213' '192.99.13.228']
###Markdown
3. Remove robot IPs from the csv file
###Code
csvFile = csvFile[~csvFile.ip.str.contains('|'.join(ipArray))]
csvFile
###Output
_____no_output_____
###Markdown
4. Remove robots according to keywords like bot, crawl, spider
###Code
robotArray = ['bot', 'crawl', 'spider']
csvFile = csvFile[~csvFile.useragent.str.contains('|'.join(robotArray))]
csvFile
###Output
_____no_output_____
###Markdown
5. Save to csv
###Code
csvFile.to_csv('Laca_cistenie_dat_IV.csv', sep=';')
###Output
_____no_output_____ |
plot_data_by_bokeh.ipynb | ###Markdown
import lib
###Code
from bokeh.io import show, output_file
from bokeh.models import ColumnDataSource, FactorRange, HoverTool
from bokeh.plotting import figure
from bokeh.transform import factor_cmap
from bokeh.palettes import viridis
import matplotlib.pyplot as plt
import matplotlib.patches as patches
###Output
_____no_output_____
###Markdown
Setting Color
###Code
colors = viridis(47)
colors
###Output
_____no_output_____
###Markdown
Setting Data
###Code
output_file("bars.html")
data = {
"EVT_UDX_STA_PHBT_START_TIMER": [
0,
0,
0,
0,
0,
0,
0,
0,
0.0016125307692307692,
0.0016125307692307692,
0.0016125307692307692,
0.0016125307692307692,
0.0016125307692307692,
0.0016125307692307692,
0.0016125307692307692,
0.0016125307692307692,
0.0016125307692307692,
0.0016125307692307692,
0.0016125307692307692,
0.0016125307692307692,
0.0016125307692307692,
0.0016125307692307692,
0.0016125307692307692,
],
"EVT_TFU_RA_REQUEST_INDICATION": [
0,
0,
0,
0,
0,
0,
0,
0,
0.017306153846153846,
0.017306153846153846,
0.017306153846153846,
0.017306153846153846,
0.017306153846153846,
0.017306153846153846,
0.017306153846153846,
0.017306153846153846,
0.017306153846153846,
0.017306153846153846,
0.017306153846153846,
0.017306153846153846,
0.017306153846153846,
0.017306153846153846,
0.017306153846153846,
],
"EVT_TFU_CRC_INDICATION": [
0,
0,
0,
0,
0,
0,
0,
0,
0.0015209000000000002,
0.0015613076923076924,
0.0015601999999999999,
0.001557246153846154,
0.0015510307692307693,
0.0015499615384615385,
0.001548323076923077,
0.0015480769230769233,
0.0015480692307692308,
0.0015457538461538462,
0.001544,
0.0015427846153846152,
0.0015413692307692309,
0.0015412923076923077,
0.0015414384615384615,
],
"EVT_RGR_LVL2_CONFIG_REQUEST": [
0.016276923076923078,
0.016276923076923078,
0.016276923076923078,
0.016276923076923078,
0.016276923076923078,
0.016276923076923078,
0.016276923076923078,
0.016276923076923078,
0.016276923076923078,
0.016276923076923078,
0.016276923076923078,
0.016276923076923078,
0.016276923076923078,
0.016276923076923078,
0.016276923076923078,
0.016276923076923078,
0.016276923076923078,
0.016276923076923078,
0.016276923076923078,
0.016276923076923078,
0.016276923076923078,
0.016276923076923078,
0.016276923076923078,
],
"EVT_RGR_LVL1_SI_CONFIG_REQUEST": [
0.010367692307692309,
0.010367692307692309,
0.010367692307692309,
0.010367692307692309,
0.010367692307692309,
0.010367692307692309,
0.010367692307692309,
0.010367692307692309,
0.010367692307692309,
0.010367692307692309,
0.010367692307692309,
0.010367692307692309,
0.010367692307692309,
0.010367692307692309,
0.010367692307692309,
0.010367692307692309,
0.010367692307692309,
0.010367692307692309,
0.010367692307692309,
0.010367692307692309,
0.010367692307692309,
0.010367692307692309,
0.010367692307692309,
],
"EVT_RGR_LVL1_CONFIG_REQUEST": [
1.057976923076923,
1.057976923076923,
1.057976923076923,
1.057976923076923,
1.057976923076923,
1.057976923076923,
1.057976923076923,
1.057976923076923,
0.28357615384615387,
0.28357615384615387,
0.28357615384615387,
0.28357615384615387,
0.28357615384615387,
0.28357615384615387,
0.28357615384615387,
0.28357615384615387,
0.28357615384615387,
0.28357615384615387,
0.28357615384615387,
0.28357615384615387,
0.28357615384615387,
0.28357615384615387,
0.28357615384615387,
],
"EVT_RGR_LVL1_CCCH_DATA_REQUEST": [
0,
0,
0,
0,
0,
0,
0,
0,
0.0028692307692307693,
0.0028692307692307693,
0.0028692307692307693,
0.0028692307692307693,
0.0028692307692307693,
0.0028692307692307693,
0.0028692307692307693,
0.0028692307692307693,
0.0028692307692307693,
0.0028692307692307693,
0.0028692307692307693,
0.0028692307692307693,
0.0028692307692307693,
0.0028692307692307693,
0.0028692307692307693,
],
"EVT_L1_MSG_INDICATION": [
0.02379153846153846,
0.008093076923076922,
0.008096923076923076,
0.008126307692307693,
0.008150461538461539,
0.008166846153846155,
0.00817723076923077,
0.008184846153846154,
0.008825,
0.010299615384615384,
0.011344923076923077,
0.012135615384615384,
0.01274853846153846,
0.013241076923076925,
0.013642076923076923,
0.013979692307692306,
0.014266,
0.014505461538461537,
0.01471176923076923,
0.01489323076923077,
0.015052538461538461,
0.015195538461538462,
0.015323538461538461,
],
"EVT_KW_UMUL_REASSEMBLE_TIMER": [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0.007118461538461539,
0.006548207692307692,
0.006350769230769231,
0.0063193846153846165,
0.0063193846153846165,
0.0063193846153846165,
0.0063193846153846165,
0.0063193846153846165,
0.0063193846153846165,
0.006256407692307692,
0.006332115384615385,
0.00634876923076923,
],
"EVT_KW_AMUL_STA_PROH_TIMER": [
0,
0,
0,
0,
0,
0,
0,
0,
0.003874946153846154,
0.003874946153846154,
0.003874946153846154,
0.003874946153846154,
0.003874946153846154,
0.003874946153846154,
0.003874946153846154,
0.003874946153846154,
0.003874946153846154,
0.003874946153846154,
0.003874946153846154,
0.003874946153846154,
0.003874946153846154,
0.003874946153846154,
0.003874946153846154,
],
"EVT_CTF_CONFIG_REQUEST": [
0.36807384615384614,
0.36807384615384614,
0.36807384615384614,
0.36807384615384614,
0.36807384615384614,
0.36807384615384614,
0.36807384615384614,
0.36807384615384614,
0.36807384615384614,
0.36807384615384614,
0.36807384615384614,
0.36807384615384614,
0.36807384615384614,
0.36807384615384614,
0.36807384615384614,
0.36807384615384614,
0.36807384615384614,
0.36807384615384614,
0.36807384615384614,
0.36807384615384614,
0.36807384615384614,
0.36807384615384614,
0.36807384615384614,
],
"EVT_COMMON_UE_ACK_INDICATION": [
0,
0,
0,
0,
0,
0,
0,
0,
0.005253076923076924,
0.005253076923076924,
0.005253076923076924,
0.005253076923076924,
0.005253076923076924,
0.005253076923076924,
0.005253076923076924,
0.005253076923076924,
0.005253076923076924,
0.005253076923076924,
0.005253076923076924,
0.005253076923076924,
0.005253076923076924,
0.005253076923076924,
0.005253076923076924,
],
}
KEYS_NAME = list()
PERIODS = [f'Period {i + 1}' for i, item in enumerate(data[list(data.keys())[0]])]
data["PERIODS"] = PERIODS
EVENTS = list(data.keys())
EVENTS.remove("PERIODS")
palette = list(colors)
x = [(period, event) for period in PERIODS for event in EVENTS]
# interleave the per-event lists period-by-period so counts lines up with x
# (note the * unpacking: zip iterates across the event series, not over a single generator)
counts = sum(zip(*(value for key, value in data.items() if key != "PERIODS")), ())
counts
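# x and counts must be aligned element-for-element: one (period, event) factor per bar height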
source = ColumnDataSource(data=dict(x=x, counts=counts))
p = figure(x_range=FactorRange(*x), plot_height=600, plot_width=990, title="TEST",
tools="pan,wheel_zoom,box_zoom,reset, save")
p.xaxis.axis_label_text_font_size = "5pt"
p.xaxis.axis_label_text_font_style = 'bold'
p.vbar(x='x', top='counts', width=0.9, source=source, fill_color=factor_cmap('x', palette=palette, factors=EVENTS, start=1, end=22))
p.add_tools(HoverTool(tooltips=[("PERIOD", "@x"), ("SEC", "@counts")]))
p.y_range.start = 0
p.x_range.range_padding = 0.1
p.xaxis.major_label_orientation = 1
p.xgrid.grid_line_color = None
show(p)
a = {
"A": [1, 2],
"B": [3, 4]
}
tuple(sub_x for x in zip(*a.values()) for sub_x in x)
###Output
_____no_output_____ |
week09_RL_intro_and_overview/practice_Crossentropy_method.ipynb | ###Markdown
Crossentropy method_Reference: based on Practical RL_ [week01](https://github.com/yandexdataschool/Practical_RL/tree/master/week01_intro)This notebook will teach you to solve reinforcement learning problems with crossentropy method. We'll follow-up by scaling everything up and using neural network policy.
###Code
# # in google colab uncomment this
# import os
# os.system('apt-get install -y xvfb')
# os.system('wget https://raw.githubusercontent.com/yandexdataschool/Practical_DL/fall18/xvfb -O ../xvfb')
# os.system('apt-get install -y python-opengl ffmpeg')
# XVFB will be launched if you run on a server
import os
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
%env DISPLAY = : 1
import gym
import numpy as np
import pandas as pd
env = gym.make("Taxi-v3")
env.reset()
env.render()
n_states = env.observation_space.n
n_actions = env.action_space.n
print("n_states=%i, n_actions=%i" % (n_states, n_actions))
###Output
n_states=500, n_actions=6
###Markdown
Create stochastic policyThis time our policy should be a probability distribution.```policy[s,a] = P(take action a | in state s)```Since we still use integer state and action representations, you can use a 2-dimensional array to represent the policy.Please initialize policy __uniformly__, that is, probabililities of all actions should be equal.
###Code
# Create an array to store action probabilities; a uniform distribution over actions is the natural start
policy = np.full((n_states, n_actions), 1.0 / n_actions)
assert type(policy) in (np.ndarray, np.matrix)
assert np.allclose(policy, 1./n_actions)
assert np.allclose(np.sum(policy, axis=1), 1)
policy
###Output
_____no_output_____
###Markdown
Play the gameJust like before, but we also record all states and actions we took.
###Code
def generate_session(policy, t_max=int(10**4)):
"""
Play game until end or for t_max ticks.
:param policy: an array of shape [n_states,n_actions] with action probabilities
:returns: list of states, list of actions and sum of rewards
"""
states, actions = [], []
total_reward = 0.
s = env.reset()
for t in range(t_max):
a = np.random.choice(n_actions, p=policy[s])  # sample an action from the policy (np.random.choice, as hinted)
new_s, r, done, info = env.step(a)
# Record state, action and add up reward to states,actions and total_reward accordingly.
states.append(s)
actions.append(a)
total_reward += r
s = new_s
if done:
break
return states, actions, total_reward
s, a, r = generate_session(policy)
assert type(s) == type(a) == list
assert len(s) == len(a)
assert isinstance(r, (float, np.floating))
# let's see the initial reward distribution
import matplotlib.pyplot as plt
%matplotlib inline
sample_rewards = [generate_session(policy, t_max=1000)[-1] for _ in range(200)]
plt.hist(sample_rewards, bins=20)
plt.vlines([np.percentile(sample_rewards, 50)], [0], [100], label="50'th percentile", color='green')
plt.vlines([np.percentile(sample_rewards, 90)], [0], [100], label="90'th percentile", color='red')
plt.legend()
np.percentile(sample_rewards, 50)
###Output
_____no_output_____
###Markdown
Crossentropy method steps (2pts)
###Code
def select_elites(states_batch, actions_batch, rewards_batch, percentile=50):
"""
Select states and actions from games that have rewards >= percentile
:param states_batch: list of lists of states, states_batch[session_i][t]
:param actions_batch: list of lists of actions, actions_batch[session_i][t]
:param rewards_batch: list of rewards, rewards_batch[session_i]
:returns: elite_states,elite_actions, both 1D lists of states and respective actions from elite sessions
Please return elite states and actions in their original order
[i.e. sorted by session number and timestep within session]
If you are confused, see examples below. Please don't assume that states are integers
(they will become different later).
"""
reward_threshold = np.percentile(rewards_batch, percentile)  # minimum reward for elite sessions
elite_mask = [r >= reward_threshold for r in rewards_batch]
elite_states = [s for s, keep in zip(states_batch, elite_mask) if keep]
elite_actions = [a for a, keep in zip(actions_batch, elite_mask) if keep]
return np.concatenate(elite_states), np.concatenate(elite_actions)
states_batch = [
[1, 2, 3], # game1
[4, 2, 0, 2], # game2
[3, 1], # game3
]
actions_batch = [
[0, 2, 4], # game1
[3, 2, 0, 1], # game2
[3, 3], # game3
]
rewards_batch = [
3, # game1
4, # game2
5, # game3
]
test_result_0 = select_elites(
states_batch, actions_batch, rewards_batch, percentile=0)
test_result_40 = select_elites(
states_batch, actions_batch, rewards_batch, percentile=30)
test_result_90 = select_elites(
states_batch, actions_batch, rewards_batch, percentile=90)
test_result_100 = select_elites(
states_batch, actions_batch, rewards_batch, percentile=100)
assert np.all(test_result_0[0] == [1, 2, 3, 4, 2, 0, 2, 3, 1]) \
and np.all(test_result_0[1] == [0, 2, 4, 3, 2, 0, 1, 3, 3]),\
"For percentile 0 you should return all states and actions in chronological order"
assert np.all(test_result_40[0] == [4, 2, 0, 2, 3, 1]) and \
np.all(test_result_40[1] == [3, 2, 0, 1, 3, 3]),\
"For percentile 30 you should only select states/actions from two first"
assert np.all(test_result_90[0] == [3, 1]) and \
np.all(test_result_90[1] == [3, 3]),\
"For percentile 90 you should only select states/actions from one game"
assert np.all(test_result_100[0] == [3, 1]) and\
np.all(test_result_100[1] == [3, 3]),\
"Please make sure you use >=, not >. Also double-check how you compute percentile."
print("Ok!")
def update_policy(elite_states, elite_actions):
"""
Given old policy and a list of elite states/actions from select_elites,
return new updated policy where each action probability is proportional to
policy[s_i,a_i] ~ #[occurences of si and ai in elite states/actions]
Don't forget to normalize policy to get valid probabilities and handle 0/0 case.
In case you never visited a state, set probabilities for all actions to 1./n_actions
:param elite_states: 1D list of states from elite sessions
:param elite_actions: 1D list of actions from elite sessions
"""
new_policy = np.zeros([n_states, n_actions])
# Count how many times each action was taken in each elite state
for s, a in zip(elite_states, elite_actions): new_policy[s, a] += 1
# Normalize each row to probabilities; unvisited states get the uniform 1/n_actions
state_counts = new_policy.sum(axis=1, keepdims=True)
new_policy = np.where(state_counts > 0, new_policy / np.maximum(state_counts, 1), 1. / n_actions)
return new_policy
elite_states = [1, 2, 3, 4, 2, 0, 2, 3, 1]
elite_actions = [0, 2, 4, 3, 2, 0, 1, 3, 3]
new_policy = update_policy(elite_states, elite_actions)
assert np.isfinite(new_policy).all(
), "Your new policy contains NaNs or +-inf. Make sure you don't divide by zero."
assert np.all(
new_policy >= 0), "Your new policy can't have negative action probabilities"
assert np.allclose(new_policy.sum(
axis=-1), 1), "Your new policy should be a valid probability distribution over actions"
reference_answer = np.array([
[1., 0., 0., 0., 0.],
[0.5, 0., 0., 0.5, 0.],
[0., 0.33333333, 0.66666667, 0., 0.],
[0., 0., 0., 0.5, 0.5]])
assert np.allclose(new_policy[:4, :5], reference_answer)
print("Ok!")
###Output
Ok!
###Markdown
Training loopGenerate sessions, select N best and fit to those.
###Code
from IPython.display import clear_output
def show_progress(rewards_batch, log, percentile, reward_range=[-990, +10]):
"""
A convenience function that displays training progress.
No cool math here, just charts.
"""
mean_reward = np.mean(rewards_batch)
threshold = np.percentile(rewards_batch, percentile)
log.append([mean_reward, threshold])
clear_output(True)
print("mean reward = %.3f, threshold=%.3f" % (mean_reward, threshold))
plt.figure(figsize=[8, 4])
plt.subplot(1, 2, 1)
plt.plot(list(zip(*log))[0], label='Mean rewards')
plt.plot(list(zip(*log))[1], label='Reward thresholds')
plt.legend()
plt.grid()
plt.subplot(1, 2, 2)
plt.hist(rewards_batch, range=reward_range)
plt.vlines([np.percentile(rewards_batch, percentile)],
[0], [100], label="percentile", color='red')
plt.legend()
plt.grid()
plt.show()
# reset policy just in case
policy = np.ones([n_states, n_actions]) / n_actions
policy
n_sessions = 250 # sample this many sessions
percentile = 50 # take this percent of session with highest rewards
learning_rate = 0.5 # add this thing to all counts for stability
log = []
for i in range(100):
%time sessions = [generate_session(policy) for _ in range(n_sessions)]  # generate a list of n_sessions new sessions
states_batch, actions_batch, rewards_batch = zip(*sessions)
elite_states, elite_actions = select_elites(states_batch, actions_batch, rewards_batch, percentile)  # select elite states/actions
new_policy = update_policy(elite_states, elite_actions)  # compute new policy
policy = learning_rate*new_policy + (1-learning_rate)*policy
# display results on chart
show_progress(rewards_batch, log, percentile)
###Output
mean reward = -29.252, threshold=5.000
###Markdown
Digging deeper: approximate crossentropy with neural netsIn this section we will train a neural network policy for continuous state space game
###Code
# if you see "<classname> has no attribute .env", remove .env or update gym
env = gym.make("CartPole-v0").env
env.reset()
n_actions = env.action_space.n
plt.imshow(env.render("rgb_array"))
n_actions
# create agent
from sklearn.neural_network import MLPClassifier
agent = MLPClassifier(
hidden_layer_sizes=(20, 20),
activation='tanh',
warm_start=True, # keep progress between .fit(...) calls
max_iter=1, # make only 1 iteration on each .fit(...)
)
# initialize agent to the dimension of the state space and the number of actions
agent.fit([env.reset()]*n_actions, range(n_actions))
def generate_session(t_max=100):
states, actions = [], []
total_reward = 0
s = env.reset()
for t in range(t_max):
# predict array of action probabilities
probs = agent.predict_proba([s])[0]
a = np.random.choice(n_actions, p=probs)  # sample an action with these probabilities
new_s, r, done, info = env.step(a)
# record sessions like you did before
states.append(s)
actions.append(a)
total_reward += r
s = new_s
if done:
break
return states, actions, total_reward
n_sessions = 100
percentile = 70
log = []
for i in range(100):
# generate new sessions
sessions = [generate_session() for _ in range(n_sessions)]  # generate a list of n_sessions new sessions
states_batch, actions_batch, rewards_batch = map(np.array, zip(*sessions))
elite_states, elite_actions = select_elites(states_batch, actions_batch, rewards_batch, percentile)  # select elites just like before
agent.fit(elite_states, elite_actions)  # fit agent to predict elite_actions (y) from elite_states (X)
if max(rewards_batch) > min(rewards_batch):
show_progress(rewards_batch, log, percentile, reward_range=[0, np.max(rewards_batch)])
if np.mean(rewards_batch) > 190:
print("You Win! You may stop training now via KeyboardInterrupt.")
###Output
mean reward = 60.260, threshold=76.000
###Markdown
Results
###Code
# record sessions
import gym.wrappers
env = gym.wrappers.Monitor(gym.make("CartPole-v0"),
directory="videos", force=True)
sessions = [generate_session() for _ in range(100)]
env.close()
# show video
from IPython.display import HTML
import os
video_names = list(
filter(lambda s: s.endswith(".mp4"), os.listdir("./videos/")))
HTML("""
<video width="640" height="480" controls>
<source src="{}" type="video/mp4">
</video>
""".format("./videos/"+video_names[-3])) # this may or may not be _last_ video. Try other indices
###Output
_____no_output_____
###Markdown
Bonus area I Tabular crossentropy methodYou may have noticed that the taxi problem quickly converges from -100 to a near-optimal score and then descends back into -50/-100. This is in part because the environment has some innate randomness. Namely, the starting points of passenger/driver change from episode to episode. Tasks- __1.1__ (1 pts) Find out how the algorithm performance changes if you use a different `percentile` and/or `n_sessions`.- __1.2__ (2 pts) Tune the algorithm to end up with positive average score.It's okay to modify the existing code. `````` Bonus area II Deep crossentropy methodBy this moment you should have got enough score on [CartPole-v0](https://gym.openai.com/envs/CartPole-v0) to consider it solved (see the link). It's time to try something harder.* if you have any trouble with CartPole-v0 and feel stuck, feel free to ask us or your peers for help. Tasks* __2.1__ (3 pts) Pick one of environments: MountainCar-v0 or LunarLander-v2. * For MountainCar, get average reward of __at least -150__ * For LunarLander, get average reward of __at least +50__See the tips section below, it's kinda important.__Note:__ If your agent is below the target score, you'll still get most of the points depending on the result, so don't be afraid to submit it. * __2.2__ (bonus: 4++ pt) Devise a way to speed up training at least 2x against the default version * Obvious improvement: use [joblib](https://www.google.com/search?client=ubuntu&channel=fs&q=joblib&ie=utf-8&oe=utf-8) * Try re-using samples from 3-5 last iterations when computing threshold and training * Experiment with the number of training iterations and learning rate of the neural network (see params) * __Please list what you did in anytask submission form__ Tips* Gym page: [MountainCar](https://gym.openai.com/envs/MountainCar-v0), [LunarLander](https://gym.openai.com/envs/LunarLander-v2)* Sessions for MountainCar may last for 10k+ ticks. Make sure ```t_max``` param is at least 10k. * Also it may be a good idea to cut rewards via ">" and not ">=". If 90% of your sessions get reward of -10k and 10% are better, then if you use percentile 20% as threshold, R >= threshold __fails to cut off bad sessions__ while R > threshold works alright.* _issue with gym_: Some versions of gym limit game time by 200 ticks. This will prevent CEM training in most cases. Make sure your agent is able to play for the specified __t_max__, and if it isn't, try `env = gym.make("MountainCar-v0").env` or otherwise get rid of TimeLimit wrapper.* If you use old _swig_ lib for LunarLander-v2, you may get an error. See this [issue](https://github.com/openai/gym/issues/100) for solution.* If it won't train it's a good idea to plot reward distribution and record sessions: they may give you some clue. If they don't, call course staff :)* 20-neuron network is probably not enough, feel free to experiment.You may find the following snippet useful:
###Code
def visualize_mountain_car(env, agent):
xs = np.linspace(env.min_position, env.max_position, 100)
vs = np.linspace(-env.max_speed, env.max_speed, 100)
grid = np.dstack(np.meshgrid(xs, vs)).transpose(1, 0, 2)
grid_flat = grid.reshape(len(xs) * len(vs), 2)
probs = agent.predict_proba(grid_flat).reshape(len(xs), len(vs), 3)
return probs
plt.imshow(visualize_mountain_car(env, agent))
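# --- Optional sketch for bonus task 2.2 (speeding up training), not part of the original notebook ---
# joblib (the "obvious improvement" mentioned in the tips above) can generate independent sessions in
# parallel processes; the commented call below assumes generate_session and n_sessions are defined as
# in the cells above and is only a starting point, not a verified speed-up:
# from joblib import Parallel, delayed
# sessions = Parallel(n_jobs=-1)(delayed(generate_session)() for _ in range(n_sessions))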
###Output
_____no_output_____ |
notebook/Unit2-1-PyThermo-IF97.ipynb | ###Markdown
IAPWS-IF97 Libraries 1 Introduction to IAPWS-IF97http://www.iapws.org/relguide/IF97-Rev.htmlThis formulation is recommended for industrial use (primarily the steam power industry) for the calculation of thermodynamic properties of ordinary water in its fluid phases, including vapor-liquid equilibrium. The release also contains "backward" equations to allow calculations with certain common sets of independent variables to be made without iteration; these equations may also be used to provide good initial guesses for iterative solutions. Since the release was first issued, it has been supplemented by several additional "backward" equations that are available for use if desired; these are for p(h,s) in Regions 1 and 2, T(p,h), v(p,h), T(p,s), v(p,s) in Region 3, p(h,s) in Region 3 with auxiliary equations for independent variables h and s, and v(p,T) in Region 3.  2 Python library of IAPWS 2.1 IAPWS https://github.com/jjgomera/iapws **dependences:** Numpy,scipy: library with mathematic and scientific tools ```bashpython -m pip install iapws```
###Code
from iapws import IAPWS97
sat_steam=IAPWS97(P=1,x=1) # saturated steam with known P,x=1
sat_liquid=IAPWS97(T=370, x=0) #saturated liquid with known T,x=0
steam=IAPWS97(P=2.5, T=500) # steam with known P and T(K)
print(sat_steam.h, sat_liquid.h, steam.h) #calculated enthalpies
###Output
_____no_output_____
###Markdown
2.2 SEUIF97 https://github.com/PySEE/SEUIF97The high-speed shared library is provided for developers to calculate the properties of water and steam where the direct IAPWS-IF97 implementation may be unsuitable because of their computing time consumption, such as Computational Fluid Dynamics (CFD), heat cycle calculations, simulations of non-stationary processes, and real-time process optimizations.Through the high-speed library, the results of the IAPWS-IF97 are accurately produced at about 3 times computational speed than the repeated squaring method for fast computation of large positive integer powers.The library is written in ANSI C for faster, smaller binaries and better compatibility for accessing the DLL/SO from different C++ compilers.For Windows and Linux users, the convenient binary library and APIs are provided.* The shared library: Windows(32/64): `libseuif97.dll`; Linux(64): `libseuif97.so`* The binding API: Python, C/C++, Microsoft Excel VBA, MATLAB,Java, Fortran, C 2.2.1 API:seuif97.pyFunctions of `water and steam properties`, `exerg`y analysis and the `thermodynamic process of steam turbine` are provided in **SEUIF97** 2.2.1.1 Functions of water and steam propertiesUsing SEUIF97, you can set the state of steam using various pairs of know properties to get any output properties you wish to know, including in the `30 properties in libseuif97`The following input pairs are implemented: ```c(p,t) (p,h) (p,s) (p,v)(t,h) (t,s) (t,v) (h,s) (p,x) (t,x) ```The two type functions are provided in the seuif97 pacakge: * ??2?(in1,in2) , e.g: ```h=pt2h(p,t)``` * ??(in1,in2,propertyID), , e.g: ```h=pt(p,t,4)```, the propertyID h is 4Python API:seuif97.py```pythonfrom ctypes import *flib = windll.LoadLibrary('libseuif97.dll')prototype = WINFUNCTYPE(c_double, c_double, c_double, c_int) ---(p,t) ----------------def pt(p, t, pid): f = prototype(("seupt", flib),) result = f(p, t, pid) return resultdef pt2h(p, t): f = prototype(("seupt", flib),) result = f(p, t, 4) return result```
###Code
import seuif97
p, t = 16.10, 535.10
# ??2?(in1,in2)
h = seuif97.pt2h(p, t)
s = seuif97.pt2s(p, t)
v = seuif97.pt2v(p, t)
print("(p,t),h,s,v:",
"{:>.2f}\t {:>.2f}\t {:>.2f}\t {:>.3f}\t {:>.4f}".format(p, t, h, s, v))
# ??(in1,in2,propertyid)
t = seuif97.ph(p, h, 1)
s = seuif97.ph(p, h, 5)
v = seuif97.ph(p, h, 3)
print("(p,h),t,s,v:",
"{:>.2f}\t {:>.2f}\t {:>.2f}\t {:>.3f}\t {:>.4f}".format(p, h, t, s, v))
###Output
_____no_output_____
###Markdown
2.2.1.2 Functions of Thermodynamic Process of Steam Turbine* 1 Isentropic Enthalpy Drop:ishd(pi,ti,pe) pi - double, inlet pressure(MPa); ti - double, inlet temperature(°C) pe - double, outlet pressure(MPa)* 2 Isentropic Efficiency(`0~100`): ief(pi,ti,pe,te) pi - double, inlet pressure(MPa); ti - double, inlet temperature(°C) pe - double, outlet pressure(MPa); te - double, outlet temperature(°C)
###Code
from seuif97 import *
p1=16.1
t1=535.2
p2=3.56
t2=315.1
hdis=ishd(p1,t1,p2) # Isentropic Enthalpy Drop
ef=ief(p1,t1,p2,t2) # Isentropic Efficiency:0-100
print('Isentropic Enthalpy Drop =',hdis,'kJ/kg')
print('Isentropic Efficiency = %.2f%%'%ef)
###Output
_____no_output_____
###Markdown
2.2.2 Propertiey and Process Diagram**1 T-s Diagram**
###Code
%matplotlib inline
"""
T-s Diagram
1 isoenthalpic lines isoh(200, 3600)kJ/kg
2 isobar lines isop(611.657e-6,100)MPa
3 saturation lines x=0,x=1
4 isoquality lines x(0.1,0.9)
"""
from seuif97 import pt2h, ph2t, ph2s, tx2s
import numpy as np
import matplotlib.pyplot as plt
Pt=611.657e-6
Tc=647.096
xAxis = "s"
yAxis = "T"
title = {"T": "T, ºC", "s": "s, kJ/kgK"}
plt.title("%s-%s Diagram" % (yAxis, xAxis))
plt.xlabel(title[xAxis])
plt.ylabel(title[yAxis])
plt.xlim(0, 11.5)
plt.ylim(0, 800)
plt.grid()
isoh = np.linspace(200, 3600, 18)
isop = np.array([Pt,0.001,0.002,0.004,0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0,
2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
for h in isoh:
T = np.array([ph2t(p, h) for p in isop])
S = np.array([ph2s(p, h) for p in isop])
plt.plot(S, T, 'b', lw=0.5)
for p in isop:
T = np.array([ph2t(p, h) for h in isoh])
S = np.array([ph2s(p, h) for h in isoh])
plt.plot(S, T, 'b', lw=0.5)
tc = Tc - 273.15
T = np.linspace(0.01, tc, 100)
for x in np.array([0, 1.0]):
S = np.array([tx2s(t, x) for t in T])
plt.plot(S, T, 'r', lw=1.0)
for x in np.linspace(0.1, 0.9, 11):
S = np.array([tx2s(t, x) for t in T])
plt.plot(S, T, 'r--', lw=0.5)
plt.show()
###Output
_____no_output_____
###Markdown
**2 H-S Diagram**
###Code
%matplotlib inline
"""
h-s Diagram
1 Calculating Isotherm lines isot(0.0,800)ºC
2 Calculating Isobar lines isop(611.657e-6, 100)Mpa
3 Calculating saturation lines x=0,x=1
4 Calculating isoquality lines x(0.1,0.9)
"""
from seuif97 import pt2h,pt2s,tx2s,tx2h
import numpy as np
import matplotlib.pyplot as plt
xAxis = "s"
yAxis = "h"
title = { "h": "h, kJ/kg", "s": "s, kJ/kgK"}
plt.title("%s-%s Diagram" % (yAxis, xAxis))
plt.xlabel(title[xAxis])
plt.ylabel(title[yAxis])
plt.xlim(0, 12.2)
plt.ylim(0, 4300)
plt.grid()
Pt=611.657e-6
isot = np.array([0, 50, 100, 200, 300, 400, 500, 600, 700, 800])
isop = np.array([Pt,0.001, 0.01, 0.1, 1, 10, 20, 50, 100])
# Isotherm lines in ºC
for t in isot:
h = np.array([pt2h(p,t) for p in isop])
s = np.array([pt2s(p,t) for p in isop])
plt.plot(s,h,'g',lw=0.5)
# Isobar lines in Mpa
for p in isop:
h = np.array([pt2h(p,t) for t in isot])
s = np.array([pt2s(p,t) for t in isot])
plt.plot(s,h,'b',lw=0.5)
tc=647.096-273.15
T = np.linspace(0.1,tc,100)
# saturation lines
for x in np.array([0,1.0]):
h = np.array([tx2h(t,x) for t in T])
s = np.array([tx2s(t,x) for t in T])
plt.plot(s,h,'r',lw=1.0)
# Isoquality lines
isox=np.linspace(0.1,0.9,11)
for x in isox:
h = np.array([tx2h(t,x) for t in T])
s = np.array([tx2s(t,x) for t in T])
plt.plot(s,h,'r--',lw=0.5)
plt.show()
###Output
_____no_output_____
###Markdown
**4 H-S(Mollier) Diagram of Steam Turbine Expansion**
###Code
%matplotlib inline
"""
H-S(Mollier) Diagram of Steam Turbine Expansion
4 lines:
1 Isobar line:p inlet
2 Isobar line:p outlet
3 isentropic line: (p inlet ,t inlet h inlet,s inlet), (p outlet,s inlet)
4 Expansion line: inlet,outlet
License: this code is in the public domain
Author: Cheng Maohua
Email: [email protected]
Last modified: 2018.11.28
"""
import matplotlib.pyplot as plt
import numpy as np
from seuif97 import pt2h, pt2s, ps2h, ph2t, ief, ishd
class Turbine(object):
def __init__(self, pin, tin, pex, tex):
self.pin = pin
self.tin = tin
self.pex = pex
self.tex = tex
def analysis(self):
self.ef = ief(self.pin, self.tin, self.pex, self.tex)
self.his = ishd(self.pin, self.tin, self.pex)
self.hin = pt2h(self.pin, self.tin)
self.sin = pt2s(self.pin, self.tin)
self.hex = pt2h(self.pex, self.tex)
self.sex = pt2s(self.pex, self.tex)
def expansionline(self):
sdelta = 0.01
# 1 Isobar pin
s_isopin = np.array([self.sin - sdelta, self.sin + sdelta])
h_isopin = np.array([ps2h(self.pin, s_isopin[0]),
ps2h(self.pin, s_isopin[1])])
# 2 Isobar pex
s_isopex = np.array([s_isopin[0], self.sex + sdelta])
h_isopex = np.array([ps2h(self.pex, s_isopex[0]),
ps2h(self.pex, s_isopex[1])])
# 3 isentropic lines
h_isos = np.array([self.hin, ps2h(self.pex, self.sin)])
s_isos = np.array([self.sin, self.sin])
# 4 expansion Line
h_expL = np.array([self.hin, self.hex])
s_expL = np.array([self.sin, self.sex])
# plot lines
plt.figure(figsize=(6, 8))
plt.title("H-S(Mollier) Diagram of Steam Turbine Expansion")
plt.plot(s_isopin, h_isopin, 'b-') # Isobar line: pin
plt.plot(s_isopex, h_isopex, 'b-') # Isobar line: pex
plt.plot(s_isos, h_isos, 'ys-') # isoentropic line:
plt.plot(s_expL, h_expL, 'r-', label='Expansion Line')
plt.plot(s_expL, h_expL, 'rs')
_title = 'The isentropic efficiency = ' + \
r'$\frac{h_1-h_2}{h_1-h_{2s}}$' + '=' + \
'{:.2f}'.format(self.ef) + '%'
plt.legend(loc="center", bbox_to_anchor=[
0.6, 0.9], ncol=2, shadow=True, title=_title)
# annotate the inlet and exlet
txt = "h1(%.2f,%.2f)" % (self.pin, self.tin)
plt.annotate(txt,
xy=(self.sin, self.hin), xycoords='data',
xytext=(+1, +10), textcoords='offset points', fontsize=10,
arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2"))
txt = "h2(%.2f,%.2f)" % (self.pex, self.tex)
plt.annotate(txt,
xy=(self.sex, self.hex), xycoords='data',
xytext=(+1, +10), textcoords='offset points', fontsize=10,
arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2"))
# annotate h2s
txt = "h2s(%.2f,%.2f)" % (self.pex, ph2t(self.pex, h_isos[1]))
plt.annotate(txt,
xy=(self.sin, h_isos[1]), xycoords='data',
xytext=(+1, +10), textcoords='offset points', fontsize=10,
arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2"))
plt.xlabel('s(kJ/(kg.K))')
plt.ylabel('h(kJ/kg)')
plt.grid()
plt.show()
def __str__(self):
result = ('\n Inlet(p, t) {:>6.2f}MPa {:>6.2f}°C \n Exlet(p, t) {:>6.2f}MPa {:>6.2f}°C \nThe isentropic efficiency: {:>5.2f}%'
.format(self.pin, self.tin, self.pex, self.tex, self.ef))
return result
if __name__ == '__main__':
pin, tin = 16.0, 535.0
pex, tex = 3.56, 315.0
tb1 = Turbine(pin, tin, pex, tex)
tb1.analysis()
print(tb1)
tb1.expansionline()
###Output
_____no_output_____
###Markdown
IAPWS-IF97 Libraries 1 Introduction to IAPWS-IF97http://www.iapws.org/relguide/IF97-Rev.htmlThis formulation is recommended for industrial use (primarily the steam power industry) for the calculation of thermodynamic properties of ordinary water in its fluid phases, including vapor-liquid equilibrium. The release also contains "backward" equations to allow calculations with certain common sets of independent variables to be made without iteration; these equations may also be used to provide good initial guesses for iterative solutions. Since the release was first issued, it has been supplemented by several additional "backward" equations that are available for use if desired; these are for p(h,s) in Regions 1 and 2, T(p,h), v(p,h), T(p,s), v(p,s) in Region 3, p(h,s) in Region 3 with auxiliary equations for independent variables h and s, and v(p,T) in Region 3.  2 IAPWS-IF97 Libraries 2.1 IAPWS in Python https://github.com/jjgomera/iapws **dependences:** Numpy,scipy: library with mathematic and scientific tools ```bashpython -m pip install iapws```
###Code
from iapws import IAPWS97
sat_steam=IAPWS97(P=1,x=1) # saturated steam with known P,x=1
sat_liquid=IAPWS97(T=370, x=0) #saturated liquid with known T,x=0
steam=IAPWS97(P=2.5, T=500) # steam with known P and T(K)
print(sat_steam.h, sat_liquid.h, steam.h) #calculated enthalpies
###Output
_____no_output_____
###Markdown
2.2 SEUIF97 https://github.com/PySEE/SEUIF97The high-speed shared library is provided for developers to calculate the properties of water and steam where the direct IAPWS-IF97 implementation may be unsuitable because of their computing time consumption, such as Computational Fluid Dynamics (CFD), heat cycle calculations, simulations of non-stationary processes, and real-time process optimizations.Through the high-speed library, the results of the IAPWS-IF97 are accurately produced at about 3 times computational speed than the repeated squaring method for fast computation of large positive integer powers.The library is written in ANSI C for faster, smaller binaries and better compatibility for accessing the DLL/SO from different C++ compilers.For Windows and Linux users, the convenient binary library and APIs are provided.* The shared library: Windows(32/64): `libseuif97.dll`; Linux(64): `libseuif97.so`* The binding API: Python, C/C++, Microsoft Excel VBA, MATLAB,Java, Fortran, C 2.2.1 Python API:seuif97.pyFunctions of `water and steam properties`, `exerg`y analysis and the `thermodynamic process of steam turbine` are provided in **SEUIF97** 2.2.1.1 Functions of water and steam propertiesUsing SEUIF97, you can set the state of steam using various pairs of know properties to get any output properties you wish to know, including in the `30 properties in libseuif97`The following input pairs are implemented: ```c(p,t) (p,h) (p,s) (p,v)(t,h) (t,s) (t,v) (h,s) (p,x) (t,x) ```The two type functions are provided in the seuif97 pacakge: * ??2?(in1,in2) , e.g: ```h=pt2h(p,t)``` * ??(in1,in2,propertyID), , e.g: ```h=pt(p,t,4)```, the propertyID h is 4Python API:seuif97.py```pythonfrom ctypes import *flib = windll.LoadLibrary('libseuif97.dll')prototype = WINFUNCTYPE(c_double, c_double, c_double, c_int) ---(p,t) ----------------def pt(p, t, pid): f = prototype(("seupt", flib),) result = f(p, t, pid) return resultdef pt2h(p, t): f = prototype(("seupt", flib),) result = f(p, t, 4) return result```
###Code
import seuif97
p, t = 16.10, 535.10
# ??2?(in1,in2)
h = seuif97.pt2h(p, t)
s = seuif97.pt2s(p, t)
v = seuif97.pt2v(p, t)
print("(p,t),h,s,v:\n {:>.2f}\t {:>.2f}\t {:>.2f}\t {:>.3f}\t {:>.4f}".format(p, t, h, s, v))
# ??(in1,in2,propertyid)
t = seuif97.ph(p, h, 1)
s = seuif97.ph(p, h, 5)
v = seuif97.ph(p, h, 3)
print("(p,h),t,s,v:\n {:>.2f}\t {:>.2f}\t {:>.2f}\t {:>.3f}\t {:>.4f}".format(p, h, t, s, v))
###Output
_____no_output_____
###Markdown
2.2.1.2 Functions of Thermodynamic Process of Steam Turbine* 1 Isentropic Enthalpy Drop:ishd(pi,ti,pe) pi - double, inlet pressure(MPa); ti - double, inlet temperature(°C) pe - double, outlet pressure(MPa)* 2 Isentropic Efficiency(`0~100`): ief(pi,ti,pe,te) pi - double, inlet pressure(MPa); ti - double, inlet temperature(°C) pe - double, outlet pressure(MPa); te - double, outlet temperature(°C)
###Code
from seuif97 import *
p1=16.1
t1=535.2
p2=3.56
t2=315.1
hdis=ishd(p1,t1,p2) # Isentropic Enthalpy Drop
ef=ief(p1,t1,p2,t2) # Isentropic Efficiency:0-100
# There are a few ways of printing:
print('Isentropic Enthalpy Drop =',hdis,'kJ/kg')
print('Isentropic Enthalpy Drop = {:>6.2f} kJ/kg'.format(hdis))
print('Isentropic Efficiency = %.2f%%'%ef)
###Output
_____no_output_____
###Markdown
3 The `print()` built-in function`print()` built-in function ```pythonprint(*objects, sep=' ', end='\n', file=sys.stdout, flush=False) Print objects to the text stream file (default standard output sys.stdout), separated by sep (default space) and followed by end (default newline). *objects denotes variable number of positional arguments packed into a tuple```There are a few ways of printing:print() prints a `newline` at the end. You need to include argument `end=''` to suppress the newline.print(str.format()) https://docs.python.org/3/library/stdtypes.htmlstr.formatPython 3's new style for formatted string via `str` class member function `str.format()`. The string on which this method is called can contain `literal text` or `replacement fields` delimited by braces `{}`.Each `replacement field` contains either the numeric index of a `positional` argument, or the name of a `keyword` argument, with C-like **format specifiers** beginning with `:` (instead of % in C) such as* `:4d` for integer,* `:6.2f` for floating-point number * `:5s` for string flags such as:* `<` for left-align* `>` for right-align * `^` for center-align**print('formatting-string' % args)**Python 2's `old` style for formatted string using `%` operator. The formatting-string could contain C-like format-specifiers, such as * `%4d` for integer,* `%6.2f` for floating-point number, * `%8s` for string.
###Code
str1="the default print"
print(str1)
print(str1,end="")
print(str1,end="")
str2="is:"
int1=11
float1=12.6
# the automatic field numbering
print('The print(str.format()){:<8s} {:^4d} {:>6.2f}'.format(str2,int1,float1))
# a keyword argument and a positional argument
formatters_str='The print(str.format()){ps:<8s} {0:^4d} {1:>6.2f}'
print(formatters_str.format(int1,float1,ps=str2))
str3='isdemo'
print('The old style %8s,%4d,%6.2f' %(str3,int1,float1))
###Output
_____no_output_____
###Markdown
4 Property and Process Diagram* [matplotlib.pyplot.xlim](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.xlim.html) Get or set the x-limits of the current axes.* [matplotlib.pyplot.ylim](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.ylim.html) Get or set the y-limits of the current axes.* [matplotlib.pyplot.grid](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.grid.html) Configure the grid lines.* [matplotlib.pyplot.annotate](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.annotate.html) Annotate the point xy with text s. 4.1 T-s Diagram
###Code
%matplotlib inline
"""
T-s Diagram
1 isoenthalpic lines isoh(200, 3600)kJ/kg
2 isobar lines isop(611.657e-6,100)MPa
3 saturation lines x=0,x=1
4 isoquality lines x(0.1,0.9)
"""
from seuif97 import pt2h, ph2t, ph2s, tx2s
import numpy as np
import matplotlib.pyplot as plt
Pt=611.657e-6
Tc=647.096
plt.figure()
xAxis = "s"
yAxis = "T"
title = {"T": "T, ºC", "s": "s, kJ/kgK"}
plt.title("%s-%s Diagram" % (yAxis, xAxis))
plt.xlabel(title[xAxis])
plt.ylabel(title[yAxis])
plt.xlim(0, 11.5)
plt.ylim(0, 800)
plt.grid()
isoh = np.linspace(200, 3600, 18)
isop = np.array([Pt,0.001,0.002,0.004,0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0,
2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
for h in isoh:
T = np.array([ph2t(p, h) for p in isop])
S = np.array([ph2s(p, h) for p in isop])
plt.plot(S, T, 'b', lw=0.5)
for p in isop:
T = np.array([ph2t(p, h) for h in isoh])
S = np.array([ph2s(p, h) for h in isoh])
plt.plot(S, T, 'b', lw=0.5)
tc = Tc - 273.15
T = np.linspace(0.01, tc, 100)
for x in np.array([0, 1.0]):
S = np.array([tx2s(t, x) for t in T])
plt.plot(S, T, 'r', lw=1.0)
for x in np.linspace(0.1, 0.9, 11):
S = np.array([tx2s(t, x) for t in T])
plt.plot(S, T, 'r--', lw=0.5)
plt.show()
###Output
_____no_output_____
###Markdown
4.2 H-S Diagram
###Code
%matplotlib inline
"""
h-s Diagram
1 Calculating Isotherm lines isot(0.0,800)ºC
2 Calculating Isobar lines isop(611.657e-6, 100)Mpa
3 Calculating saturation lines x=0,x=1
4 Calculating isoquality lines x(0.1,0.9)
"""
from seuif97 import pt2h,pt2s,tx2s,tx2h
import numpy as np
import matplotlib.pyplot as plt
plt.figure()
xAxis = "s"
yAxis = "h"
title = { "h": "h, kJ/kg", "s": "s, kJ/kgK"}
plt.title("%s-%s Diagram" % (yAxis, xAxis))
plt.xlabel(title[xAxis])
plt.ylabel(title[yAxis])
plt.xlim(0, 12.2)
plt.ylim(0, 4300)
plt.grid()
Pt=611.657e-6
isot = np.array([0, 50, 100, 200, 300, 400, 500, 600, 700, 800])
isop = np.array([Pt,0.001, 0.01, 0.1, 1, 10, 20, 50, 100])
# Isotherm lines in ºC
for t in isot:
h = np.array([pt2h(p,t) for p in isop])
s = np.array([pt2s(p,t) for p in isop])
plt.plot(s,h,'g',lw=0.5)
# Isobar lines in Mpa
for p in isop:
h = np.array([pt2h(p,t) for t in isot])
s = np.array([pt2s(p,t) for t in isot])
plt.plot(s,h,'b',lw=0.5)
tc=647.096-273.15
T = np.linspace(0.1,tc,100)
# saturation lines
for x in np.array([0,1.0]):
h = np.array([tx2h(t,x) for t in T])
s = np.array([tx2s(t,x) for t in T])
plt.plot(s,h,'r',lw=1.0)
# Isoquality lines
isox=np.linspace(0.1,0.9,11)
for x in isox:
h = np.array([tx2h(t,x) for t in T])
s = np.array([tx2s(t,x) for t in T])
plt.plot(s,h,'r--',lw=0.5)
plt.show()
###Output
_____no_output_____
###Markdown
4.3 Steam Turbine Expansion: H-S(Mollier) Diagram
###Code
%matplotlib inline
"""
H-S(Mollier) Diagram of Steam Turbine Expansion
4 lines:
1 Isobar line:p inlet
2 Isobar line:p outlet
3 isentropic line: (p inlet ,t inlet h inlet,s inlet), (p outlet,s inlet)
4 Expansion line: inlet,outlet
"""
import matplotlib.pyplot as plt
import numpy as np
from seuif97 import pt2h, pt2s, ps2h, ph2t, ief, ishd
class Turbine:
def __init__(self, pin, tin, pex, tex):
self.pin = pin
self.tin = tin
self.pex = pex
self.tex = tex
def analysis(self):
self.ef = ief(self.pin, self.tin, self.pex, self.tex)
self.his = ishd(self.pin, self.tin, self.pex)
self.hin = pt2h(self.pin, self.tin)
self.sin = pt2s(self.pin, self.tin)
self.hex = pt2h(self.pex, self.tex)
self.sex = pt2s(self.pex, self.tex)
def expansionline(self):
sdelta = 0.01
# 1 Isobar pin
s_isopin = np.array([self.sin - sdelta, self.sin + sdelta])
h_isopin = np.array([ps2h(self.pin, s_isopin[0]),
ps2h(self.pin, s_isopin[1])])
# 2 Isobar pex
s_isopex = np.array([s_isopin[0], self.sex + sdelta])
h_isopex = np.array([ps2h(self.pex, s_isopex[0]),
ps2h(self.pex, s_isopex[1])])
# 3 isentropic lines
h_isos = np.array([self.hin, ps2h(self.pex, self.sin)])
s_isos = np.array([self.sin, self.sin])
# 4 expansion Line
h_expL = np.array([self.hin, self.hex])
s_expL = np.array([self.sin, self.sex])
# plot lines
plt.figure(figsize=(6, 8))
plt.title("H-S(Mollier) Diagram of Steam Turbine Expansion")
plt.plot(s_isopin, h_isopin, 'b-') # Isobar line: pin
plt.plot(s_isopex, h_isopex, 'b-') # Isobar line: pex
plt.plot(s_isos, h_isos, 'ys-') # isoentropic line:
plt.plot(s_expL, h_expL, 'r-', label='Expansion Line')
plt.plot(s_expL, h_expL, 'rs')
_title = 'The isentropic efficiency = ' + \
r'$\frac{h_1-h_2}{h_1-h_{2s}}$' + '=' + \
'{:.2f}'.format(self.ef) + '%'
plt.legend(loc="center", bbox_to_anchor=[
0.6, 0.9], ncol=2, shadow=True, title=_title)
# annotate the inlet and exlet
txt = "h1(%.2f,%.2f)" % (self.pin, self.tin)
plt.annotate(txt,
xy=(self.sin, self.hin), xycoords='data',
xytext=(+1, +10), textcoords='offset points', fontsize=10,
arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2"))
txt = "h2(%.2f,%.2f)" % (self.pex, self.tex)
plt.annotate(txt,
xy=(self.sex, self.hex), xycoords='data',
xytext=(+1, +10), textcoords='offset points', fontsize=10,
arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2"))
# annotate h2s
txt = "h2s(%.2f,%.2f)" % (self.pex, ph2t(self.pex, h_isos[1]))
plt.annotate(txt,
xy=(self.sin, h_isos[1]), xycoords='data',
xytext=(+1, +10), textcoords='offset points', fontsize=10,
arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2"))
plt.xlabel('s(kJ/(kg.K))')
plt.ylabel('h(kJ/kg)')
plt.grid()
plt.show()
def __str__(self):
result = ('\n Inlet(p, t) {:>6.2f}MPa {:>6.2f}°C \n Exlet(p, t) {:>6.2f}MPa {:>6.2f}°C \nThe isentropic efficiency: {:>5.2f}%'
.format(self.pin, self.tin, self.pex, self.tex, self.ef))
return result
if __name__ == '__main__':
pin, tin = 16.0, 535.0
pex, tex = 3.56, 315.0
tb1 = Turbine(pin, tin, pex, tex)
tb1.analysis()
print(tb1)
tb1.expansionline()
###Output
Inlet(p, t) 16.00MPa 535.00°C
Exlet(p, t) 3.56MPa 315.00°C
The isentropic efficiency: 89.92%
|
.ipynb_checkpoints/ML0101EN-Clas-SVM-cancer-py-v1-checkpoint.ipynb | ###Markdown
SVM (Support Vector Machines)Estimated time needed: **15** minutes ObjectivesAfter completing this lab you will be able to:* Use scikit-learn to Support Vector Machine to classify In this notebook, you will use SVM (Support Vector Machines) to build and train a model using human cell records, and classify cells to whether the samples are benign or malignant.SVM works by mapping data to a high-dimensional feature space so that data points can be categorized, even when the data are not otherwise linearly separable. A separator between the categories is found, then the data is transformed in such a way that the separator could be drawn as a hyperplane. Following this, characteristics of new data can be used to predict the group to which a new record should belong. Table of contents Load the Cancer data Modeling Evaluation Practice
###Code
!pip install scikit-learn==0.23.1
import pandas as pd
import pylab as pl
import numpy as np
import scipy.optimize as opt
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
%matplotlib inline
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Load the Cancer dataThe example is based on a dataset that is publicly available from the UCI Machine Learning Repository (Asuncion and Newman, 2007)[http://mlearn.ics.uci.edu/MLRepository.html]. The dataset consists of several hundred human cell sample records, each of which contains the values of a set of cell characteristics. The fields in each record are:| Field name | Description || ----------- | --------------------------- || ID | Clump thickness || Clump | Clump thickness || UnifSize | Uniformity of cell size || UnifShape | Uniformity of cell shape || MargAdh | Marginal adhesion || SingEpiSize | Single epithelial cell size || BareNuc | Bare nuclei || BlandChrom | Bland chromatin || NormNucl | Normal nucleoli || Mit | Mitoses || Class | Benign or malignant |For the purposes of this example, we're using a dataset that has a relatively small number of predictors in each record. To download the data, we will use `!wget` to download it from IBM Object Storage.**Did you know?** When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)
###Code
#Click here and press Shift+Enter
!wget -O cell_samples.csv https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%203/data/cell_samples.csv
###Output
_____no_output_____
###Markdown
Load Data From CSV File
###Code
cell_df = pd.read_csv("cell_samples.csv")
cell_df.head()
###Output
_____no_output_____
###Markdown
The ID field contains the patient identifiers. The characteristics of the cell samples from each patient are contained in fields Clump to Mit. The values are graded from 1 to 10, with 1 being the closest to benign.The Class field contains the diagnosis, as confirmed by separate medical procedures, as to whether the samples are benign (value = 2) or malignant (value = 4).Let's look at the distribution of the classes based on Clump thickness and Uniformity of cell size:
###Code
ax = cell_df[cell_df['Class'] == 4][0:50].plot(kind='scatter', x='Clump', y='UnifSize', color='DarkBlue', label='malignant');
cell_df[cell_df['Class'] == 2][0:50].plot(kind='scatter', x='Clump', y='UnifSize', color='Yellow', label='benign', ax=ax);
plt.show()
###Output
_____no_output_____
###Markdown
Data pre-processing and selection Let's first look at the columns' data types:
###Code
cell_df.dtypes
###Output
_____no_output_____
###Markdown
It looks like the **BareNuc** column includes some values that are not numerical. We can drop those rows:
###Code
cell_df = cell_df[pd.to_numeric(cell_df['BareNuc'], errors='coerce').notnull()]
cell_df['BareNuc'] = cell_df['BareNuc'].astype('int')
cell_df.dtypes
feature_df = cell_df[['Clump', 'UnifSize', 'UnifShape', 'MargAdh', 'SingEpiSize', 'BareNuc', 'BlandChrom', 'NormNucl', 'Mit']]
X = np.asarray(feature_df)
X[0:5]
###Output
_____no_output_____
###Markdown
We want the model to predict the value of Class (that is, benign (=2) or malignant (=4)). As this field can have one of only two possible values, we need to change its measurement level to reflect this.
###Code
cell_df['Class'] = cell_df['Class'].astype('int')
y = np.asarray(cell_df['Class'])
y [0:5]
###Output
_____no_output_____
###Markdown
Train/Test dataset We split our dataset into train and test set:
###Code
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=4)
print ('Train set:', X_train.shape, y_train.shape)
print ('Test set:', X_test.shape, y_test.shape)
###Output
_____no_output_____
###Markdown
Modeling (SVM with Scikit-learn) The SVM algorithm offers a choice of kernel functions for performing its processing. Basically, mapping data into a higher dimensional space is called kernelling. The mathematical function used for the transformation is known as the kernel function, and can be of different types, such as:```1.Linear2.Polynomial3.Radial basis function (RBF)4.Sigmoid```Each of these functions has its characteristics, its pros and cons, and its equation, but as there's no easy way of knowing which function performs best with any given dataset. We usually choose different functions in turn and compare the results. Let's just use the default, RBF (Radial Basis Function) for this lab.
###Code
from sklearn import svm
clf = svm.SVC(kernel='rbf')
clf.fit(X_train, y_train)
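# For context only (a sketch, not part of the original lab): the other kernels listed above can be
# compared the same way on the existing X_train/X_test split; scores will vary with the random split,
# and the lab itself continues with the RBF kernel.
for k in ('linear', 'poly', 'sigmoid'): print(k, svm.SVC(kernel=k).fit(X_train, y_train).score(X_test, y_test))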
###Output
_____no_output_____
###Markdown
After being fitted, the model can then be used to predict new values:
###Code
yhat = clf.predict(X_test)
yhat [0:5]
###Output
_____no_output_____
###Markdown
Evaluation
###Code
from sklearn.metrics import classification_report, confusion_matrix
import itertools
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# Compute confusion matrix
cnf_matrix = confusion_matrix(y_test, yhat, labels=[2,4])
np.set_printoptions(precision=2)
print (classification_report(y_test, yhat))
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['Benign(2)','Malignant(4)'],normalize= False, title='Confusion matrix')
###Output
_____no_output_____
###Markdown
You can also easily use the **f1\_score** from sklearn library:
###Code
from sklearn.metrics import f1_score
f1_score(y_test, yhat, average='weighted')
###Output
_____no_output_____
###Markdown
Let's try the jaccard index for accuracy:
###Code
from sklearn.metrics import jaccard_score
jaccard_score(y_test, yhat,pos_label=2)
###Output
_____no_output_____
###Markdown
PracticeCan you rebuild the model, but this time with a __linear__ kernel? You can use __kernel='linear'__ option, when you define the svm. How the accuracy changes with the new kernel function?
###Code
# write your code here
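# One possible solution sketch (an assumption: it reuses X_train/X_test, y_train/y_test from the
# earlier split and the same metrics as above; the exact scores depend on the data split):
from sklearn import svm
from sklearn.metrics import f1_score, jaccard_score
clf_linear = svm.SVC(kernel='linear')
clf_linear.fit(X_train, y_train)
yhat_linear = clf_linear.predict(X_test)
print("Linear kernel f1-score :", f1_score(y_test, yhat_linear, average='weighted'))
print("Linear kernel jaccard  :", jaccard_score(y_test, yhat_linear, pos_label=2))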
###Output
_____no_output_____ |
heart disease.ipynb | ###Markdown
IntroductionI was thinking for a while about my master thesis topic and I wanted it was related to data mining and artificial intelligence because I want to learn more about this field. I want to work with machine learning and for that, I have to study it more. Writing a thesis is a great way to get more experience with it. I thought I will use this dataset as a part of my thesis topic. Machine learning in medicineThis field of study takes a more and more important role in our life. AI helps not only in the IT section but also in medicine. It supports doctors, farceurs, helps with validating data about patients, and even helps with diagnosing disease. **In this kernel we will try to:*** Analise data of patients with heart problems. * Find what plays a key role in causing heart disease* Process data * And make a prediction model Dataset explenation* age: The person's age in years* sex: The person's sex (1 = male, 0 = female)* cp: The chest pain experienced (Value 1: typical angina, Value 2: atypical angina, Value 3: non-anginal pain, Value 4: * asymptomatic)* trestbps: The person's resting blood pressure (mm Hg on admission to the hospital)* chol: The person's cholesterol measurement in mg/dl* fbs: The person's fasting blood sugar (> 120 mg/dl, 1 = true; 0 = false)* restecg: Resting electrocardiographic measurement (0 = normal, 1 = having ST-T wave abnormality, 2 = showing probable or * definite left ventricular hypertrophy by Estes' criteria)* thalach: The person's maximum heart rate achieved* exang: Exercise induced angina (1 = yes; 0 = no)* oldpeak: ST depression induced by exercise relative to rest ('ST' relates to positions on the ECG plot. See more here)* slope: the slope of the peak exercise ST segment (Value 1: upsloping, Value 2: flat, Value 3: downsloping)* ca: The number of major vessels (0-3)* thal: A blood disorder called thalassemia (3 = normal; 6 = fixed defect; 7 = reversable defect)* target: Heart disease (0 = no, 1 = yes)
###Code
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import export_graphviz
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
from pdpbox import pdp, info_plots
import shap
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import math
dataset = pd.read_csv('Data/heart.csv')
dataset.head()
###Output
_____no_output_____
###Markdown
This is how our dataset looks. Analysing data We will start with small changes in our dataset so we can better understand what is going on in the plots.
###Code
dt = dataset.copy() # make copy of dataset
dt['sex'][dt['sex'] == 0] = 'female'
dt['sex'][dt['sex'] == 1] = 'male'
dt['cp'][dt['cp'] == 0] = 'typical angina'
dt['cp'][dt['cp'] == 1] = 'atypical angina'
dt['cp'][dt['cp'] == 2] = 'non-anginal pain'
dt['cp'][dt['cp'] == 3] = 'asymptomatic'
dt['slope'][dt['slope'] == 0] = 'upsloping'
dt['slope'][dt['slope'] == 1] = 'flat'
dt['slope'][dt['slope'] == 2] = 'downsloping'
dt['target'][dt['target'] == 0] = 'healthy'
dt['target'][dt['target'] == 1] = "sick"
dt.head()
countplot = sns.countplot(x='target', data=dt)
###Output
_____no_output_____
###Markdown
We can see our data is more or less balanced. Now let's check how many rows we have in our dataset.
###Code
dt['age'].size
###Output
_____no_output_____
###Markdown
303 rowsIt might be enough for studying machine learning and data visualization - which means it suits our needs. However, it's not enough to fully analyze heart disease and make a prediction model with at least 95% accuracy. We can start by checking whether gender has some impact on the disease in our dataset.
###Code
sns.countplot(x='target', hue='sex', data=dt)
###Output
_____no_output_____
###Markdown
Looks a bit odd. There are more males in both groups, which can mean there are many more males than females in our dataset. Let's check it out.
###Code
pie, ax = plt.subplots(figsize=[10,6])
data = dt.groupby("sex").size() # data for Pie chart
labels = ['Female', 'Male']
plt.pie(x=data, autopct="%.1f%%", explode=[0.025]*2,labels=labels, pctdistance=0.5)
plt.title("Gender", fontsize=14);
###Output
_____no_output_____
###Markdown
Yes, as I thought, the majority of people in this dataset are male. From what we can read on the internet, mostly men suffer from heart disease, which may explain why we have more men in the dataset. Next we will take a look at age.
###Code
dt["age"] = dt["age"].astype(float)
dt["age"].plot.hist()
dt["age"].mean() # the age mean
###Output
_____no_output_____
###Markdown
Clearly, most patients in the dataset are people older than 50 years. The mean is 54 years, which isn't surprising: mostly older people have problems with the heart. What is more interesting is the number of people aged above 65. However, it's just a small dataset, so we cannot be sure why there are fewer people who are very old (65 years old and more). Chest Pain Type Analysis
###Code
sns.countplot(dt['cp'])
plt.xlabel('Chest Type')
plt.ylabel('Count')
plt.title('Chest Type vs Count State')
plt.show()
sns.countplot(x="cp", hue="target", data=dt)
###Output
_____no_output_____
###Markdown
In this plot, I divide the data into 4 groups depending on the type of chest pain and compare it to the target (is the patient healthy or not)We can see that over 100 people with **typical angina** pain are healthy. On the other side, the majority of people with **non-anginal pain** have heart disease. To do that we will need to create a model. This time we will use **Random Forest** with depth 3. We don't have many cases, so we cannot use a higher depth. I'm still not sure whether there will be any **overfit**.
###Code
X = dataset.drop("target", axis=1) # X = all data apart of survived column
y = dataset["target"] # y = only column survived
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model_RFC = RandomForestClassifier(max_depth=3)
model_RFC.fit(X_train, y_train)
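# Sketch of a quick check on the "overfit" concern raised above (not part of the original notebook):
# 5-fold cross-validation gives a less split-dependent accuracy estimate than a single train/test score.
from sklearn.model_selection import cross_val_score
cv_scores = cross_val_score(model_RFC, X, y, cv=5)
print("5-fold CV accuracy: %.3f +/- %.3f" % (cv_scores.mean(), cv_scores.std()))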
###Output
_____no_output_____
###Markdown
Now I'm going to use **Partial Dependence Plot**. It's something I just learned so let's try it out.
###Code
base_features = dataset.columns.values.tolist()
base_features.remove('target')
nr_of_vessles = dataset.columns.values.tolist()
nr_of_vessles.remove('target')
# ca - number of major vessels
pdp_dist = pdp.pdp_isolate(model=model_RFC, dataset=X_test, model_features=base_features, feature='ca')
pdp.pdp_plot(pdp_dist, 'ca')
plt.show()
###Output
_____no_output_____
###Markdown
Result of Partial Dependence Plot on 'ca' value We see that the line drops down when the number of 'ca' increases, but what does it mean? It means that when the number of major blood vessels **increases**, the probability of heart disease **decreases**. Let's build a Logistic Regression model as well and check the **confusion matrix** and accuracy for both models
###Code
model_lr = LogisticRegression()
model_lr.fit(X_train,y_train)
# confusion matrix for random forest
prediction = model_RFC.predict(X_test)
print(confusion_matrix(y_test, prediction))  # display the confusion matrix for the random forest
acc = model_RFC.score(X_test,y_test)*100
print("Accuracy of Random Forest = ", acc);
prediction = model_lr.predict(X_test)
print(confusion_matrix(y_test, prediction))  # display the confusion matrix for logistic regression
acc = model_lr.score(X_test,y_test)*100
print("Accuracy of Logistic Regression= ", acc);
###Output
Accuracy of Logistic Regression= 85.24590163934425
|
Michigan - Text Mining/Assignment+1-3.ipynb | ###Markdown
---_You are currently looking at **version 1.1** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-text-mining/resources/d9pwm) course resource._--- Assignment 1In this assignment, you'll be working with messy medical data and using regex to extract relevant infromation from the data. Each line of the `dates.txt` file corresponds to a medical note. Each note has a date that needs to be extracted, but each date is encoded in one of many formats.The goal of this assignment is to correctly identify all of the different date variants encoded in this dataset and to properly normalize and sort the dates. Here is a list of some of the variants you might encounter in this dataset:* 04/20/2009; 04/20/09; 4/20/09; 4/3/09* Mar-20-2009; Mar 20, 2009; March 20, 2009; Mar. 20, 2009; Mar 20 2009;* 20 Mar 2009; 20 March 2009; 20 Mar. 2009; 20 March, 2009* Mar 20th, 2009; Mar 21st, 2009; Mar 22nd, 2009* Feb 2009; Sep 2009; Oct 2010* 6/2008; 12/2009* 2009; 2010Once you have extracted these date patterns from the text, the next step is to sort them in ascending chronological order accoring to the following rules:* Assume all dates in xx/xx/xx format are mm/dd/yy* Assume all dates where year is encoded in only two digits are years from the 1900's (e.g. 1/5/89 is January 5th, 1989)* If the day is missing (e.g. 9/2009), assume it is the first day of the month (e.g. September 1, 2009).* If the month is missing (e.g. 2010), assume it is the first of January of that year (e.g. January 1, 2010).* Watch out for potential typos as this is a raw, real-life derived dataset.With these rules in mind, find the correct date in each note and return a pandas Series in chronological order of the original Series' indices.For example if the original series was this: 0 1999 1 2010 2 1978 3 2015 4 1985Your function should return this: 0 2 1 4 2 0 3 1 4 3Your score will be calculated using [Kendall's tau](https://en.wikipedia.org/wiki/Kendall_rank_correlation_coefficient), a correlation measure for ordinal data.*This function should return a Series of length 500 and dtype int.*
###Code
import pandas as pd
import numpy as np
doc = []
with open('dates.txt') as file:
for line in file:
doc.append(line)
df = pd.Series(doc)
df.head(10)
# df.shape
def date_sorter():
# Extract dates
df_dates = df.str.replace(r'(\d+\.\d+)', '')
df_dates = df_dates.str.extractall(r'[\s\.,\-/]*?(?P<ddmonthyyyy>\d\d[\s\.,\-/]+(?:January|February|March|April|May|June|July|August|September|October|November|December|' + \
'Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[\s\.,\-/]+(?:19|20)\d\d)|' + \
r'[\s\.,\-/]*?(?P<monthddyyyy>(?:Jan.*\b|February|March|April|May|June|July|August|September|October|November|Dec.*\b|' + \
'Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[\s\.,\-/]+\d\d[\s\.,\-/]+?(?:19|20)?\d\d)|' + \
r'[\s\.,\-/]*?(?P<monthyyyy>(?:January|February|March|April|May|June|July|August|September|October|November|December|' + \
'Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[\s\.,\-/]+(?:19|20)\d\d)|' + \
r'(?P<mmddyyyy>[0-3]?\d[\-/]+[0-3]?\d[\-/]+(?:19|20)\d\d)|' + \
r'(?P<mmddyy>[0-3]?\d[\-/]+[0-3]?\d[\-/]+\d\d)|' + \
r'(?P<mmyyyy>[0-1]?\d[\-/]+(?:19|20)\d\d)|' + \
r'(?P<year>(?:19|20)\d\d)')
# Munge dates
df_dates = df_dates.fillna('')
df_dates = df_dates.sum(axis=1).apply(pd.to_datetime)
df_dates = df_dates.reset_index()
df_dates.columns = ['index', 'match', 'dates']
# Sort dates
df_dates.sort_values(by='dates', inplace=True)
result = df_dates.loc[:, 'index'].astype('int32')
# Unit test & Sanity check
assert result.shape[0] == 500
assert result[0].dtype == 'int32'
return result
date_sorter()
###Output
_____no_output_____ |
ExampleCode.ipynb | ###Markdown
**Create Session** with your API Key
###Code
et_sess = ExtractTable(api_key)
###Output
_____no_output_____
###Markdown
**Validate** the Key and check the plan usage
###Code
usage = et_sess.check_usage()
###Output
_____no_output_____
###Markdown
*If there is no error encountered in the above cell, it means we have a valid API key. Now, get started by checking the usage and trigger the file for processing*
###Code
print(usage)
###Output
{'credits': 1000, 'queued': 2, 'used': 533}
###Markdown
**credits**: Total number of credits attached to the API Key**queued** : Number of triggered jobs that were left "IN_PROGRESS", not yet retrieved**used** : Number of credits already used **Trigger** the process to extract tabular data from the file
###Code
filepath = r'testimages/chervolet.jpg'
table_data = et_sess.process_file(filepath)
table_data # Notice the default output is a pandas dataframe
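# The default output above is a pandas DataFrame, so it can be persisted like any other frame.
# (Sketch only -- the file name below is an arbitrary example, not part of the original notebook.)
# table_data.to_csv('extracted_table.csv', index=False)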
###Output
_____no_output_____
###Markdown
**What else** is in store.- check the latest actual ServerResponse attached to the session with `et_sess.ServerResponse.json()`- check out the list of available output formats with `ExtractTable._OUTPUT_FORMATS` Check the **latest ServerResponse** in the process
###Code
et_sess.ServerResponse.json()
###Output
_____no_output_____
###Markdown
Check out the list of all **available output formats**
###Code
ExtractTable._OUTPUT_FORMATS
###Output
_____no_output_____ |
sklearn/sklearn learning/demonstration/auto_examples_jupyter/linear_model/plot_logistic_l1_l2_sparsity.ipynb | ###Markdown
L1 Penalty and Sparsity in Logistic RegressionComparison of the sparsity (percentage of zero coefficients) of solutions whenL1, L2 and Elastic-Net penalty are used for different values of C. We can seethat large values of C give more freedom to the model. Conversely, smallervalues of C constrain the model more. In the L1 penalty case, this leads tosparser solutions. As expected, the Elastic-Net penalty sparsity is betweenthat of L1 and L2.We classify 8x8 images of digits into two classes: 0-4 against 5-9.The visualization shows coefficients of the models for varying C.
###Code
print(__doc__)
# Authors: Alexandre Gramfort <[email protected]>
# Mathieu Blondel <[email protected]>
# Andreas Mueller <[email protected]>
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
X, y = datasets.load_digits(return_X_y=True)
X = StandardScaler().fit_transform(X)
# classify small against large digits
y = (y > 4).astype(int)  # np.int is deprecated in recent NumPy releases; plain int behaves the same
l1_ratio = 0.5 # L1 weight in the Elastic-Net regularization
fig, axes = plt.subplots(3, 3)
# Set regularization parameter
for i, (C, axes_row) in enumerate(zip((1, 0.1, 0.01), axes)):
# turn down tolerance for short training time
clf_l1_LR = LogisticRegression(C=C, penalty='l1', tol=0.01, solver='saga')
clf_l2_LR = LogisticRegression(C=C, penalty='l2', tol=0.01, solver='saga')
clf_en_LR = LogisticRegression(C=C, penalty='elasticnet', solver='saga',
l1_ratio=l1_ratio, tol=0.01)
clf_l1_LR.fit(X, y)
clf_l2_LR.fit(X, y)
clf_en_LR.fit(X, y)
coef_l1_LR = clf_l1_LR.coef_.ravel()
coef_l2_LR = clf_l2_LR.coef_.ravel()
coef_en_LR = clf_en_LR.coef_.ravel()
# coef_l1_LR contains zeros due to the
# L1 sparsity inducing norm
sparsity_l1_LR = np.mean(coef_l1_LR == 0) * 100
sparsity_l2_LR = np.mean(coef_l2_LR == 0) * 100
sparsity_en_LR = np.mean(coef_en_LR == 0) * 100
print("C=%.2f" % C)
print("{:<40} {:.2f}%".format("Sparsity with L1 penalty:", sparsity_l1_LR))
print("{:<40} {:.2f}%".format("Sparsity with Elastic-Net penalty:",
sparsity_en_LR))
print("{:<40} {:.2f}%".format("Sparsity with L2 penalty:", sparsity_l2_LR))
print("{:<40} {:.2f}".format("Score with L1 penalty:",
clf_l1_LR.score(X, y)))
print("{:<40} {:.2f}".format("Score with Elastic-Net penalty:",
clf_en_LR.score(X, y)))
print("{:<40} {:.2f}".format("Score with L2 penalty:",
clf_l2_LR.score(X, y)))
if i == 0:
axes_row[0].set_title("L1 penalty")
axes_row[1].set_title("Elastic-Net\nl1_ratio = %s" % l1_ratio)
axes_row[2].set_title("L2 penalty")
for ax, coefs in zip(axes_row, [coef_l1_LR, coef_en_LR, coef_l2_LR]):
ax.imshow(np.abs(coefs.reshape(8, 8)), interpolation='nearest',
cmap='binary', vmax=1, vmin=0)
ax.set_xticks(())
ax.set_yticks(())
axes_row[0].set_ylabel('C = %s' % C)
plt.show()
###Output
_____no_output_____ |
Week 3-5 - Roots of Equations/NuMeth_Group_4_Act_3Roots_of_Linear_Equation.ipynb | ###Markdown
CONTRIBUTION
The group agreed that the same grade should be given to every member. Each group member participated and contributed ideas to finish this user manual before the due date, and every member would like to thank their fellow group members.
Inside the Module
###Code
### Brute force algorithm(f(x)=0)
def f_of_x(f,roots,tol,i, epochs=100):
x_roots=[] # list of roots
n_roots= roots # number of roots needed to find
incre = i #increments
h = tol #tolerance is the starting guess
for epoch in range(epochs): # the list of iteration that will be using
if np.isclose(f(h),0): # applying current h or the tolerance in the equation and the approximation on f(x) = 0
x_roots.insert(len(x_roots), h)
end_epochs = epoch
if len(x_roots) == n_roots:
break # once the root is found it will stop and print the root
h+=incre # the change of value in h wherein if the roots did not find it will going to loop
return x_roots, end_epochs # returning the value of the roots and the iteration or the epochs
### brute force algorithm (in terms of x)
def in_terms_of_x(eq,tol,epochs=100):
funcs = eq # equation to be solved
x_roots=[] # list of roots
n_roots = len(funcs) # How many roots needed to find according to the length of the equation
# epochs= begin_epochs # number of iteration
h = tol # tolerance or the guess to adjust
for func in funcs:
x = 0 # initial value or initial guess
for epoch in range(epochs): # the list of iteration that will be using
x_prime = func(x)
if np.allclose(x, x_prime,h):
x_roots.insert(len(x_roots),x_prime)
break # once the root is found it will stop and print the root
x = x_prime
return x_roots, epochs # returning the value of the roots and the iteration or the epochs
### newton-raphson method
def newt_raphson(func_eq,prime_eq, inits, epochs=100):
f = func_eq # first equation
f_prime = prime_eq # second equation
# epochs= max_iter # number of iteration
x_inits = inits # guess of the roots in range
roots = [] # list of roots
for x_init in x_inits:
x = x_init
for epoch in range(epochs):
x_prime = x - (f(x)/f_prime(x))
if np.allclose(x, x_prime):
roots.append(x)
break # once the root is found it will stop and print the root
x = x_prime
return roots, epochs # returning the value of the roots and the iteration or the epochs
###Output
_____no_output_____
###Markdown
The module also imports numpy and matplotlib. The module is given the name first_two_method for this activity to avoid confusion with the last three methods. The package is named numeth_yon: "numeth" stands for the course Numerical Methods, while "yon" is the group number. Explaining how to use your package and module with examples.
###Code
'''
Import the NumPy package (import numpy as np) and, from the numeth_yon package, import first_two_method.
Every call must be prefixed with first_two_method so that it reaches the functions defined inside the module:
f_of_x, in_terms_of_x and newt_raphson, as will be seen below:
'''
import numpy as np
from numeth_yon import first_two_method
'''
To use the f_of_x function, the user must provide the equation, the number of roots to find, the starting
guess (tolerance) and the increment; the number of iterations is already set to a default value of 100.
The function finds the roots of the given equation.
'''
sample1 = lambda x: x**2+x-2
roots, epochs = first_two_method.f_of_x(sample1,2,-10,1) # the first_two_method is the module that is next to the function inside of the module
print("The root is: {}, found at epoch {}".format(roots,epochs+1))
# Output: The root is: [-2, 1], found at epoch 12
'''
For the brute-force algorithm in terms of x, the user must provide the equations to be solved and a
tolerance used to adjust the guess; the number of iterations is already set to a default value of 100.
'''
sample2 = lambda x: 2-x**2
sample3 = lambda x: np.sqrt(2-x)
funcs = [sample2, sample3]
roots, epochs = first_two_method.in_terms_of_x(funcs,1e-05) # the first_two_method is the module that is next to the function inside of the module
print("The root is {} found after {} epochs".format(roots,epochs))
# Output: The root is [-2, 1.00000172977337] found after 100 epochs
'''
To use newt_raphson, the user must provide an equation, the derivative of that equation and a range of
starting guesses for the roots; the number of iterations is already set to a default value of 100.
The function finds the roots of the given equation.
'''
g = lambda x: 2*x**2 - 5*x + 3
g_prime = lambda x: 4*x-5
# the first_two_method is the module that is next to the function inside of the module
roots, epochs = first_two_method.newt_raphson(g,g_prime, np.arange(0,5))
x_roots = np.round(roots,3)
x_roots = np.unique(x_roots)
# Output: The root is [1. 1.5] found after 100 epochs
###Output
_____no_output_____
###Markdown
To further understand please refer to the PDF version. **Activity 2.1**
1. Identify **two more polynomials** preferably **orders higher than 2** and **two transcendental functions**. Write them in **LaTex**.
2. Plot their graphs you may choose your own set of pre-images.
3. Manually solve for their roots and plot them along the graph of the equation.
###Code
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
$$ f(x) = x^3+3x^2-4x $$
###Code
##Input the function on the define f(x)
def f(x):
return x**3+3*x**2-4*x
x0, x1,x2 = -4, 0, 1 ## Roots of the function f(x)
## Plotting the roots in a graph
X = np.linspace(-5,2,dtype=float)
Y = f(X)
## Creating the grid of the graph
plt.figure(figsize=(10,5))
plt.axhline(y=0,color='black')
plt.axvline(x=0,color='black')
plt.grid()
## Plotting the roots in the graph
plt.plot(X,Y,color='blue')
plt.scatter([x0,x1,x2],[0,0,0], c='red', label='roots')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
$$ f(x) = 2x^3+3x^2-11x-6 $$
###Code
def f(x):
return 2*x**3+3*x**2-11*x-6
x0, x1,x2 = -3, -0.5, 2 ## Roots of the function f(x)
## Plotting the roots in a graph
X = np.linspace(-5,4,dtype=float)
Y = f(X)
## Creating the grid of the graph
plt.figure(figsize=(10,5))
plt.axhline(y=0,color='black')
plt.axvline(x=0,color='black')
plt.grid()
## Plotting the roots in the graph
plt.plot(X,Y,color='blue')
plt.scatter([x0,x1,x2],[0,0,0], c='red', label='roots')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
$$ f(x) = log(x) $$
###Code
def f(x):
return np.log(x)
x0 = 1 ## Roots of the function f(x)
## Plotting the roots in a graph
X = np.linspace(0.2,4,dtype=float)
Y = f(X)
## Creating the grid of the graph
plt.figure(figsize=(10,5))
plt.axhline(y=0,color='black')
plt.axvline(x=0,color='black')
plt.grid()
## Plotting the roots in the graph
plt.plot(X,Y,color='blue')
plt.scatter([x0],[0], c='red', label='roots')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
$$ f(x) = \sqrt{9-x} $$
###Code
def f(x):
return np.sqrt(9-x)
x0 = 9 # Roots of the function f(x)
## Plotting the roots in a graph
X = np.linspace(1,9,dtype=float)
Y = f(X)
## Creating the grid of the graph
plt.figure(figsize=(10,5))
plt.axhline(y=0,color='black')
plt.axvline(x=0,color='black')
plt.grid()
## Plotting the roots in the graph
plt.plot(X,Y,color='blue')
plt.scatter([x0],[0], c='red', label='roots')
plt.legend()
plt.show()
###Output
_____no_output_____ |
FibrePrices.ipynb | ###Markdown
###Code
import requests
import json
import numpy as np
import matplotlib.pyplot as plt
import itertools
plt.style.use('seaborn-whitegrid')
# if using a Jupyter notebook, include:
%matplotlib inline
max_speed = 100
min_speed = 20
blacklist = ["telkom", "vodacom", ]#"easyweb", "bitco", "homeconnect", "supersonic"]
whitelist = ["vox", "axxess", "mweb", "homeconnect", "webafrica", "fibrestream", "rsaweb", "cellc"]
url = "https://www.fibretiger.co.za/api/server.php"
# sslug - ISP Name
# shaped
# price - Monthly fee in Rand
# speed_down - Download speed in Mb
# speed_up - Upload speed in Mb
speed_up = 1
speed_down = min_speed
minspeed_in_mb = speed_down
network = "vumatel"
uncapped = 1
params = {"minspeed": minspeed_in_mb,"slug":False,"nslug": network,"redirect":0,"uncapped": uncapped,"homeq":False}
data = {"id":1,"method":"package_get_all","params":[params]}
response = requests.post(url, json=data )
package_list = response.json()["result"]["packageList"]
# print(json.dumps(package_list, indent=4))
# for package in package_list:
# if package.get("sslug") == "axxess":
# # if (speed_down == package.get('speed_down')):
# if "shaped" in package:
# print(
# f"\n{package.get('sslug')}\n---\nPrice: R{package.get('price')}\nDownload: {package.get('speed_down')}Mb\nUpload: {package.get('speed_up')}Mb\nShaped: {package.get('shaped')}\n"
# )
isp_data = {}
speed_list = []
isp_list = []
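# Build a nested dict per ISP: download speed -> upload speed -> monthly price,
# keeping only whitelisted ISPs whose download speed falls between min_speed and max_speed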
for p in package_list:
isp_name = p.get("sslug")
down_speed = p.get("speed_down")
up_speed = p.get("speed_up")
shaped = p.get("shaped")
rating = p.get("rating")
price = p.get("price")
if min_speed <= down_speed <= max_speed and isp_name in whitelist:
if isp_name not in isp_data:
isp_data.update(
{
isp_name: {
"prices": {
down_speed: {
up_speed: price
}
},
"rating": rating,
"shaped": shaped
}
}
)
else:
if down_speed not in isp_data[isp_name]["prices"]:
isp_data[isp_name]["prices"].update(
{
down_speed: {
up_speed: price
}
}
)
else:
if up_speed not in isp_data[isp_name]["prices"][down_speed]:
isp_data[isp_name]["prices"][down_speed].update(
{
up_speed: price
}
)
# print(json.dumps(isp_data['vodacom'], indent=4))
marker = itertools.cycle(("v", "+", "P", ".", "o", ">", "*", "^", "d", "s", "X", "<"))
fig = plt.figure(figsize=(10, 10))
ax = plt.axes()
for name, isp in isp_data.items():
speed_list = [int(k) for k in isp["prices"].keys()]
price_values = []
for speed, prices in isp["prices"].items():
for up_speed, price in prices.items():
if up_speed != speed:
price_values.append(price)
# print(name)
# print(speed_list)
# print(price_values)
if speed_list and price_values and (len(speed_list) == len(price_values)):
plt.plot(speed_list, price_values, label=name, linestyle="", marker=next(marker), markersize=10)
for i,j in zip(speed_list, price_values):
ax.annotate(str(j),xy = (i,j), fontsize=16)
plt.title("ISP Fibre Speed vs Prices",fontsize=18, fontweight="bold")
plt.xlabel("Download Speed (Mb)", fontsize=16)
plt.xticks(fontsize=16)
plt.ylabel("Price (R)", fontsize=16);
plt.yticks(fontsize=16)
plt.legend(fontsize=14)
###Output
_____no_output_____ |
Recsys_Challenge_Trailmix/Recsys_Challenge_Trailmix_FULL/Combine_QQ_Xing/Combine_Smart_Way_QQ_Xing_W2V_N2V.ipynb | ###Markdown
For different Task (change from here)
###Code
SEED = json.load(open('../DATA_PROCESSING/PL_TRACKS_5_TEST_T' + Task + '.json'))
GT_Now = {}
for pl in SEED:
GT_Now[pl] = []
for t in GT[pl]:
if t not in SEED[pl]:
GT_Now[pl].append(t)
Xing_Raw = json.load(open('../MODEL_5_Counter_Artist_Album/Pure_Xing/Pure_Xing_T' + Task + '.json'))
Xing = {}
for pl in Xing_Raw:
Xing[pl] = []
for t in Xing_Raw[pl]:
Xing[pl].append(track_index_reversed[t])
QQ = json.load(open('../MODEL_5_Counter_Artist_Album/Pure_QQ/T' + Task + '_100_500_50.json'))
W2V_raw = json.load(open('/home/xing/xing/xing/T' + Task + '.json'))
W2V = {}
for pl in W2V_raw:
W2V[pl] = []
for t in W2V_raw[pl]:
if t not in SEED[pl]:
W2V[pl].append(t)
if len(W2V[pl]) == 500:
break
N2V = json.load(open('../MODEL_5_Counter_Artist_Album/Pure_Node2Vec/Pure_Node2Vec_T' + Task + '.json'))
###Output
_____no_output_____
###Markdown
Combine Method 1 (Used in current submission)
###Code
def combine_method_1(QQ_dict, Xing_dict):
R_combined = {}
for pl in Xing_dict:
if len(Xing_dict[pl]) == 500:
R_combined[pl] = Xing_dict[pl][:]
else:
rest = 500 - len(Xing_dict[pl])
R_combined[pl] = Xing_dict[pl][:]
for t in QQ_dict[pl]:
if t not in R_combined[pl]:
R_combined[pl].append(t)
if len(R_combined[pl]) == 500:
break
return R_combined
Combined_R_method_1 = combine_method_1(QQ, Xing)
R_precision_1 = {}
NDCG_1 = {}
Clicks_1 = {}
for pl in Xing:
R_precision_1[pl] = r_precision(GT_Now[pl], Combined_R_method_1[pl])
NDCG_1[pl] = ndcg(GT_Now[pl], Combined_R_method_1[pl])
Clicks_1[pl] = clicks(GT_Now[pl], Combined_R_method_1[pl])
print sum(R_precision_1.values()) / len(R_precision_1)
print sum(NDCG_1.values()) / len(NDCG_1)
print sum(Clicks_1.values()) / len(Clicks_1)
###Output
0.202342210441
0.3833401973331007
1.5
###Markdown
Combine Method 2
###Code
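# Helper below: append the elements of `new` that are not already in `current`,
# ordered by their rank (lower value first) in `sort_base`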
def rank(new, current, sort_base):
layer = list(set(new) - set(current))
layer_index = {}
for elem in layer:
layer_index[elem] = sort_base[elem]
layer_sorted = sorted(layer_index.items(), key=operator.itemgetter(1))
layer_final = [i[0] for i in layer_sorted]
return current + layer_final
def combine_method_2_sub_1(QQ, Xing, GT_len):
QQ_index = {}
for elem in QQ:
QQ_index[elem] = len(QQ_index)
Xing_index = {}
for elem in Xing:
Xing_index[elem] = len(Xing_index)
Ax = set(Xing[:GT_len])
Bx = set(Xing[GT_len:])
Aq = set(QQ[:GT_len])
Bq = set(QQ[GT_len:])
# Layer 1
up_to_layer1 = rank(list(Ax & Aq), [], Xing_index)
# Layer 2 part 1
up_to_layer2_1 = rank(list(Ax & Bq), up_to_layer1, Xing_index)
# Layer 2 part 2
up_to_layer2_2 = rank(list(Bx & Aq), up_to_layer2_1, Xing_index)
# Layer 3 part 1
up_to_layer3_1 = rank(list(Ax), up_to_layer2_2, Xing_index)
# Layer 3 part 2
up_to_layer3_2 = rank(list(Aq), up_to_layer3_1, QQ_index)
# Layer 3 part 3
up_to_layer3_3 = rank(list(Bx & Bq), up_to_layer3_2, Xing_index)
# Layer 4 part 1
up_to_layer4_1 = rank(list(Bx), up_to_layer3_3, Xing_index)
# Layer 4 part 2
up_to_layer4_2 = rank(list(Bq), up_to_layer4_1, QQ_index)
return up_to_layer4_2[:500]
def combine_method_2_sub_2(QQ, Xing, GT_len):
QQ_index = {}
for elem in QQ:
QQ_index[elem] = len(QQ_index)
Xing_index = {}
for elem in Xing:
Xing_index[elem] = len(Xing_index)
Ax = set(Xing[:GT_len])
Bx = set(Xing[GT_len:])
Aq = set(QQ[:GT_len])
Bq = set(QQ[GT_len:])
# Layer 1
up_to_layer1 = rank(list(Ax & Aq), [], Xing_index)
# Layer 2 part 1
up_to_layer2_1 = rank(list(Ax & Bq), up_to_layer1, Xing_index)
# Layer 2 part 2
up_to_layer2_2 = rank(list(Ax), up_to_layer2_1, Xing_index)
# Layer 3 part 1
up_to_layer3_1 = rank(list(Bx & Aq), up_to_layer2_2, Xing_index)
# Layer 3 part 2
up_to_layer3_2 = rank(list(Aq), up_to_layer3_1, QQ_index)
# Layer 3 part 3
up_to_layer3_3 = rank(list(Bx & Bq), up_to_layer3_2, Xing_index)
# Layer 4 part 1
up_to_layer4_1 = rank(list(Bx), up_to_layer3_3, Xing_index)
# Layer 4 part 2
up_to_layer4_2 = rank(list(Bq), up_to_layer4_1, QQ_index)
return up_to_layer4_2[:500]
def combine_method_2_sub_3(QQ, Xing, GT_len):
QQ_index = {}
for elem in QQ:
QQ_index[elem] = len(QQ_index)
Xing_index = {}
for elem in Xing:
Xing_index[elem] = len(Xing_index)
Ax = set(Xing[:GT_len])
Bx = set(Xing[GT_len:])
Aq = set(QQ[:GT_len])
Bq = set(QQ[GT_len:])
# Layer 1
up_to_layer1 = rank(list(Ax & Aq), [], Xing_index)
# Layer 2 part 2
up_to_layer2_1 = rank(list(Bx & Aq), up_to_layer1, Xing_index)
# Layer 2 part 1
up_to_layer2_2 = rank(list(Ax & Bq), up_to_layer2_1, Xing_index)
# Layer 3 part 1
up_to_layer3_1 = rank(list(Ax), up_to_layer2_2, Xing_index)
# Layer 3 part 2
up_to_layer3_2 = rank(list(Aq), up_to_layer3_1, QQ_index)
# Layer 3 part 3
up_to_layer3_3 = rank(list(Bx & Bq), up_to_layer3_2, Xing_index)
# Layer 4 part 1
up_to_layer4_1 = rank(list(Bx), up_to_layer3_3, Xing_index)
# Layer 4 part 2
up_to_layer4_2 = rank(list(Bq), up_to_layer4_1, QQ_index)
return up_to_layer4_2[:500]
def combine_method_2_sub_4(QQ, Xing, GT_len):
QQ_index = {}
for elem in QQ:
QQ_index[elem] = len(QQ_index) * 1.5
Xing_index = {}
for elem in Xing:
Xing_index[elem] = len(Xing_index) * 1.0
final_index = {}
for elem in list(set(Xing_index.keys()) | set(QQ_index.keys())):
if elem in Xing and elem not in QQ:
final_index[elem] = Xing_index[elem]
elif elem in QQ and elem not in Xing:
final_index[elem] = QQ_index[elem]
else:
final_index[elem] = Xing_index[elem] + QQ_index[elem]
Ax = set(Xing[:GT_len])
Bx = set(Xing[GT_len:])
Aq = set(QQ[:GT_len])
Bq = set(QQ[GT_len:])
# Layer 1
up_to_layer1 = rank(list(Ax & Aq), [], final_index)
# Layer 2 part 1
up_to_layer2_1 = rank(list(Ax & Bq), up_to_layer1, final_index)
# Layer 2 part 2
up_to_layer2_2 = rank(list(Bx & Aq), up_to_layer2_1, final_index)
# Layer 3 part 1
up_to_layer3_1 = rank(list(Ax), up_to_layer2_2, final_index)
# Layer 3 part 2
up_to_layer3_2 = rank(list(Aq), up_to_layer3_1, final_index)
# Layer 3 part 3
up_to_layer3_3 = rank(list(Bx & Bq), up_to_layer3_2, final_index)
# Layer 4 part 1
up_to_layer4_1 = rank(list(Bx), up_to_layer3_3, final_index)
# Layer 4 part 2
up_to_layer4_2 = rank(list(Bq), up_to_layer4_1, final_index)
return up_to_layer4_2[:500]
def combine_method_2_sub_5(QQ, Xing, GT_len):
QQ_index = {}
for elem in QQ:
QQ_index[elem] = (500 - len(QQ_index)) * 2.0
Xing_index = {}
for elem in Xing:
Xing_index[elem] = (500 - len(Xing_index)) * 3.0
final_index = {}
for elem in list(set(Xing_index.keys()) | set(QQ_index.keys())):
if elem in Xing and elem not in QQ:
final_index[elem] = Xing_index[elem]
elif elem in QQ and elem not in Xing:
final_index[elem] = QQ_index[elem]
else:
final_index[elem] = Xing_index[elem] + QQ_index[elem]
layer_sorted = sorted(final_index.items(), key=operator.itemgetter(1), reverse = True)
layer_final = [i[0] for i in layer_sorted]
return layer_final[:500]
def combine_method_2_sub_6(QQ, Xing, W2V, GT_len):
all_tracks = list(set(QQ + Xing + W2V))
QQ_index = {}
t = 0
for elem in QQ:
QQ_index[elem] = (501 - t) * 2.0
t += 1
Xing_index = {}
t = 0
for elem in Xing:
Xing_index[elem] = (501 - t) * 3.0
t += 1
W2V_index = {}
t = 0
for elem in W2V:
W2V_index[elem] = (501 - t) * 1.5
t += 1
for elem in all_tracks:
if elem not in QQ_index:
QQ_index[elem] = 0
if elem not in Xing_index:
Xing_index[elem] = 0
if elem not in W2V_index:
W2V_index[elem] = 0
final_index = {}
for elem in all_tracks:
final_index[elem] = Xing_index[elem] + QQ_index[elem] + W2V_index[elem]
layer_sorted = sorted(final_index.items(), key=operator.itemgetter(1), reverse = True)
layer_final = [i[0] for i in layer_sorted]
return layer_final[:500]
# Ax = set(Xing[:GT_len])
# Bx = set(Xing[GT_len:])
# Aq = set(QQ[:GT_len])
# Bq = set(QQ[GT_len:])
# # Layer 1
# up_to_layer1 = rank(list(Ax & Aq), [], final_index)
# # Layer 2 part 1
# up_to_layer2_1 = rank(list(Ax & Bq), up_to_layer1, final_index)
# # Layer 2 part 2
# up_to_layer2_2 = rank(list(Bx & Aq), up_to_layer2_1, final_index)
# # Layer 3 part 1
# up_to_layer3_1 = rank(list(Ax), up_to_layer2_2, final_index)
# # Layer 3 part 2
# up_to_layer3_2 = rank(list(Aq), up_to_layer3_1, final_index)
# # Layer 3 part 3
# up_to_layer3_3 = rank(list(Bx & Bq), up_to_layer3_2, final_index)
# # Layer 4 part 1
# up_to_layer4_1 = rank(list(Bx), up_to_layer3_3, final_index)
# # Layer 4 part 2
# up_to_layer4_2 = rank(list(Bq), up_to_layer4_1, final_index)
# return up_to_layer4_2[:500]
def combine_method_2(QQ_dict, Xing_dict, W2V_dict):
global GT
R_combined = {}
for pl in Xing_dict:
R_combined[pl] = combine_method_2_sub_6(QQ_dict[pl], Xing_dict[pl], W2V_dict[pl], len(GT_Now[pl]))
return R_combined
Combined_R_method_2 = combine_method_2(QQ, Xing, W2V)
R_precision_2 = {}
NDCG_2 = {}
Clicks_2 = {}
for pl in Xing:
R_precision_2[pl] = r_precision(GT_Now[pl], Combined_R_method_2[pl])
NDCG_2[pl] = ndcg(GT_Now[pl], Combined_R_method_2[pl])
Clicks_2[pl] = clicks(GT_Now[pl], Combined_R_method_2[pl])
print sum(R_precision_2.values()) / len(R_precision_2)
print sum(NDCG_2.values()) / len(NDCG_2)
print sum(Clicks_2.values()) / len(Clicks_2)
# Recorded output of the prints above:
# 0.217464854917
# 0.4193948692135327
# 0.722
R_precision_2 = {}
NDCG_2 = {}
Clicks_2 = {}
for pl in W2V:
R_precision_2[pl] = r_precision(GT_Now[pl], W2V[pl])
NDCG_2[pl] = ndcg(GT_Now[pl], W2V[pl])
Clicks_2[pl] = clicks(GT_Now[pl], W2V[pl])
print sum(R_precision_2.values()) / len(R_precision_2)
print sum(NDCG_2.values()) / len(NDCG_2)
print sum(Clicks_2.values()) / len(Clicks_2)
R_precision_2 = {}
NDCG_2 = {}
Clicks_2 = {}
for pl in N2V:
R_precision_2[pl] = r_precision(GT_Now[pl], N2V[pl])
NDCG_2[pl] = ndcg(GT_Now[pl], N2V[pl])
Clicks_2[pl] = clicks(GT_Now[pl], N2V[pl])
print sum(R_precision_2.values()) / len(R_precision_2)
print sum(NDCG_2.values()) / len(NDCG_2)
print sum(Clicks_2.values()) / len(Clicks_2)
R_precision_2
###Output
_____no_output_____ |
src/stat_analyze_allels_per_person_gender.ipynb | ###Markdown
Statistical Analysis for distributions of Alleles SEC population only per gender Author: Efthymios Tzinis
###Code
# read the data
agg_alleles_datapath = '../data/genotypes_person.xlsx'
result_filepath = '../results/sec_allels_per_gender'
import pandas as pd
from pprint import pprint
xls = pd.ExcelFile(agg_alleles_datapath)
df = xls.parse(xls.sheet_names[0])
polys = list(df.columns)[2:]
print polys
pairs_of_allels = [['A','G'],['C','G'],['G','A'],['C','G'],['T','C']]
print pairs_of_allels
# do the same procedure for each polymorphism
# but first group data per gender
df_grouped = df.groupby(['GENDER'])[polys].sum()
import numpy as np
# gather information about a specific polymorphism for all populations
alleles_agg_data = {}
for i in range(len(pairs_of_allels)):
alleles = pairs_of_allels[i]
poly = polys[i]
data = np.empty((2,2))
for j in range(len(alleles)):
data[j][0] = df_grouped[poly].loc['A'].count(alleles[j])
data[j][1] = df_grouped[poly].loc['F'].count(alleles[j])
data = data.astype(int)
alleles_agg_data[poly] = {}
alleles_agg_data[poly]['data'] = data
alleles_agg_data[poly]['labels'] = alleles
import math
import scipy.stats as stats
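# The 95% CI computed below follows the standard formula exp( ln(OR) +/- 1.96 * sqrt(1/a + 1/b + 1/c + 1/d) ),
# where a, b, c, d are the four cell counts of the 2x2 table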
def compute_odds_ratio_CI_95(data,odds_ratio):
val_in_list = list(data.flatten())
val_in_list = map(lambda x: 1/float(x),val_in_list)
sum_of_vals = sum(val_in_list)
error = 1.96 * math.sqrt(sum_of_vals)
ln_or = math.log(odds_ratio)
uci = math.exp(ln_or + error)
lci = math.exp(ln_or - error)
return lci, uci
def compute_odds_ratio(data):
if data[0][1] == 0 or data[1][0] == 0:
return 0
else:
return float(data[0][0] * data[1][1]) / (data[0][1] * data[1][0])
def mean_confidence_interval(data, confidence=0.95):
a = 1.0*np.array(data)
n = len(a)
m, se = np.mean(a), stats.sem(a) # scipy.stats is imported above as `stats`
h = se * stats.t.ppf((1+confidence)/2., n-1) # use the public ppf instead of the private _ppf
return m, m-h, m+h
import xlwt
def get_stats_from_matrix(data):
oddsratio, pval_f = stats.fisher_exact(data)
# print oddsratio, pval_f
# print compute_odds_ratio(obs)
lci, uci = compute_odds_ratio_CI_95(data,oddsratio)
if pval_f < 0.0001:
new_p_str = '< 0.0001'
else:
new_p_str = round(pval_f,4)
return "{},{},{}-{}".format(new_p_str,round(oddsratio,4),round(lci,4), round(uci,4))
# for each polymorphism just do the same distr. comparisons in between populations
def compare_between_populations_for_a_polymorphism(data,labels,poly,book):
ws = book.add_sheet(str(poly).split("/")[-1])
fout = open(result_filepath+'_'+str(poly).split("/")[-1]+'.txt','w')
header_str = 'Group_1,Group_2,p_value_fischer,odds_ratio,Confidence_Interval_95'
fout.write('Group_1,Group_2,p_value_fischer,odds_ratio,Confidence_Interval_95%\n')
for j in range(len(header_str.split(','))):
ws.write(0,j,header_str.split(',')[j])
i = 1
try:
matrix = data
except:
print "Polymorphism: {} does not contain comparison tuple {}".format(poly,comp_tuple)
stats = get_stats_from_matrix(data)
result_str = 'Males'+','+'Females'+','+stats
fout.write(result_str+ "\n")
print result_str
for j in range(len(result_str.split(','))):
ws.write(i,j,result_str.split(',')[j])
i += 2
ws.write(i,1,'M')
ws.write(i,2,'F')
i+=1
for z in range(2):
ws.write(i+z,0,labels[z])
for z in range(data.shape[0]):
for k in range(data.shape[1]):
ws.write(i+z,1+k,data[z][k])
fout.close()
return book
book = xlwt.Workbook()
for poly,v in alleles_agg_data.items():
book = compare_between_populations_for_a_polymorphism(v['data'],v['labels'],poly,book)
book.save(result_filepath+'.xls')
###Output
_____no_output_____ |
slides/slideM1_15_19.ipynb | ###Markdown
Summary Statistics and Quick Viz!
###Code
import pandas as pd
pd.options.display.max_rows = 30
###Output
_____no_output_____
###Markdown
START HERENow we've learned about how to get our dataframe how we want it, let's try and get some fun out of it!We have our data, now what? We usually like to learn from it. We want to find out about maybe some summary statistics about the features of the data. Let's load in our cereal dataset again.
###Code
df = pd.read_csv('../data/cereal.csv', index_col = 0)
df.head(15)
###Output
_____no_output_____
###Markdown
Pandas describe()Pandas has a lot up its sleeve, but one of the most useful functions is called `describe` and it does exactly that: it _describes_ your data. Let's try it out.
###Code
df.describe()
###Output
_____no_output_____
###Markdown
This table will tell you about:- `count`: The number of non-NA/null observations.- `mean`: The mean of a column- `std`: The standard deviation of a column- `min`: The min value for a column- `max`: The max value for a column- by default, the 25th, 50th and 75th percentiles of the observations You can make changes to either limit how much you show or extend it.
###Code
df.describe(include = "all")
###Output
_____no_output_____
###Markdown
Adding `include = "all"` within the brackets adds some additional statistics - `unique`: how many observations are unique- `top`: which observation value occurs most often- `freq`: the frequency of the most frequently occurring observation You can also get single statistics for each column using either `df.mean()`, `df.std()`, `df.count()`, `df.median()` or `df.sum()`. Some of these might produce some wild results, especially if the column is a qualitative observation.
###Code
df.sum()
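# The other single-statistic methods mentioned above work the same way; numeric_only=True
# (assuming a reasonably recent pandas version) skips text columns such as mfr and type:
df.mean(numeric_only=True)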
###Output
_____no_output_____
###Markdown
`pd.value_counts`If you want to get a frequency table of categorical columns, `pd.value_counts` is very useful. In the previous slides we talked about getting a single column from a dataframe using double brackets like `df[['column-name']]`. That's great, but to use `value_counts` we need a different structure, which you'll learn in the next module. Instead of getting a single column with double brackets, we only use single brackets like so:
###Code
manufacturer_column = df["mfr"]
manufacturer_column
###Output
_____no_output_____
###Markdown
We saved the object in a variable called `manufacturer_column` in the same way as we have saved dataframes before. Next we can use `value_counts()` on the column we saved.
###Code
manufacturer_freq = manufacturer_column.value_counts()
manufacturer_freq
###Output
_____no_output_____
###Markdown
We can then see the frequency of each qualitative value. _Careful here! Notice that instead of selecting from the dataframe with double brackets, we call `value_counts()` directly on the single column we saved!_ This looks a bit funny though, doesn't it? That's because this output isn't our usual dataframe type, so we need to make it so. We can make it prettier with the following
###Code
manufacturer_freq_df = pd.DataFrame(manufacturer_freq)
manufacturer_freq_df
###Output
_____no_output_____
###Markdown
Ah! That's what we are used to. The column name is specifying the counts of the manufacturers, but maybe we should rename that column to something that makes more sense. Let's rename it to `freq`. But how? We use something called `rename` of course! When we rename things it's especially important that we don't forget to assign the result to a variable or the column name won't stick! Let's assign it to `freq_mfr_df`.
###Code
freq_mfr_df = manufacturer_freq_df.rename(columns = {"mfr": "freq"})
freq_mfr_df
###Output
_____no_output_____
###Markdown
_Note: The code above uses something we've never seen before, `{}` curly brackets! These have a special meaning, but for now you need to know that the `columns` argument needs to be set equal to `"old-column-name" : "new-column-name"` in curly brackets for us to rename the column._ Quick Viz with Pandas If we want to visualize things using different plots we can do that too! Take the `manufacturer_freq` object. This would be great to express as a bar chart. But how do we do it?!
###Code
freq_mfr_df.plot.bar();
###Output
_____no_output_____
###Markdown
The important thing to notice here is that we want to `.plot` a `.bar()` graph. You may also have noticed the `;` after the code. This just suppresses an additional, unnecessary output line (the matplotlib `Axes` object reference) which we don't really need. What else can we plot from our original cereal dataframe? Maybe we want to see the relationship between `sugars` and `calories` in cereals? This would require a `scatter` plot! In the code we need to specify the x and y axes, which means we just need to put in the column names.
###Code
df.plot.scatter(x='sugars',y='calories');
###Output
_____no_output_____
###Markdown
Something you may have noticed is that there are 77 cereals but there don't seem to be 77 data points! That's because some of them are lying on top of each other with the same sugar or calorie values. It may be useful to set an opacity on the graph to differentiate those points. Opacity is set with the argument `alpha`, which accepts values between 0 and 1, 1 being full intensity.
###Code
df.plot.scatter(x='sugars',y='calories', alpha= .3)
###Output
_____no_output_____
###Markdown
Look at that! Now we can see there are multiple cereals that have 2.5g of sugar with 100 calories. What if we wanted to change the colour to purple? Enter the `color` parameter!
###Code
plota = df.plot.scatter(x="sugars",
y="calories",
alpha= .3,
color = "purple")
###Output
_____no_output_____
###Markdown
Those data points look pretty small. To enlarge them, the argument `s` should do the trick. Also, every good graph should have a title! Let's take this opportunity to set the argument `title` to something.
###Code
ploty = df.plot.scatter(x="sugars",
y="calories",
alpha= 0.3,
color="Darkblue",
s= 50,
title = "The relationship between sugar and calories in cereals")
###Output
_____no_output_____
###Markdown
Let's try this in the assignment now.
###Code
# In a notebook the chart above renders on its own; `ploty` is a matplotlib Axes, so calling
# .show() on it directly would fail. Use ploty.get_figure().show() outside a notebook if needed.
# The example below assumes `position_freq_df` is the player-position frequency dataframe
# built in the assignment (it is not defined in these slides):
position_bar = position_freq_df.plot.bar(color = "Teal",
alpha = .5,
title = "Canuck player positions")
###Output
_____no_output_____ |
03 - Working with NumPy/notebooks/08-Statistics.ipynb | ###Markdown
Statistics
###Code
import numpy as np
# 1D array
A1 = np.arange(20)
print(A1)
A1.ndim
# 2D array
A2 = np.array([[11, 12, 13], [21, 22, 23]])
print(A2)
np.sum(A2, axis=0)
np.sum(A2)
A2.ndim
###Output
_____no_output_____
###Markdown
Sum - Sum of array elements over a given axis. - **Syntax:** `np.sum(array); array-wise sum` - **Syntax:** `np.sum(array, axis=0); row-wise sum` - **Syntax:** `np.sum(array, axis=1); column-wise sum` Axis 0 is thus the first dimension (the "rows"), and axis 1 is the second dimension (the "columns")
###Code
# sum of 1D array
np.sum(A1)
# array-wise sum of 2D array
np.sum(A2)
A2
# sum of 2D array(axis=0, row-wise sum)
np.sum(A2, axis=0)
# sum of 2D array(axis=1, column-wise sum)
np.sum(A2, axis=1)
###Output
_____no_output_____
###Markdown
Mean - Compute the arithmetic mean along the specified axis.- Returns the average of the array elements. The average is taken over the flattened array by default, otherwise over the specified axis. `float64` intermediate and return values are used for integer inputs. - **Syntax:** `np.mean(array); array-wise mean` - **Syntax:** `np.mean(array, axis=0); row-wise mean` - **Syntax:** `np.mean(array, axis=1); column-wise mean`
###Code
A1
A2
# compute the average of array `A1`
np.mean(A1)
# mean of 2D array(axis=0, row-wise)
np.mean(A2, axis=0)
# mean of 2D array(axis=1, column-wise)
np.mean(A2, axis=1)
###Output
_____no_output_____
###Markdown
Median- Compute the median along the specified axis.- Returns the median of the array elements. - **Syntax:** `np.median(array); array-wise median` - **Syntax:** `np.median(array, axis=0); row-wise median` - **Syntax:** `np.median(array, axis=1); column-wise median`
###Code
# compute the meadian of `A1`
np.median(A1)
# median of 2D array(axis=0, row-wise)
np.median(A2, axis=0)
# median of 2D array(axis=1, column-wise)
np.median(A2, axis=1)
###Output
_____no_output_____
###Markdown
Minimum - Return the minimum of an array or minimum along an axis. - **Syntax:** `np.min(array); array-wise min` - **Syntax:** `np.min(array, axis=0); row-wise min` - **Syntax:** `np.min(array, axis=1); column-wise min`
###Code
# minimum value of `A1`
np.min(A1)
# minimum value of A2(axis=0, row-wise)
np.min(A2, axis=0)
# minimum value of A2(axis=1, column-wise)
np.min(A2, axis=1)
###Output
_____no_output_____
###Markdown
Maximum- Return the maximum of an array or minimum along an axis. - **Syntax:** `np.max(array); array-wise max` - **Syntax:** `np.max(array, axis=0); row-wise max` - **Syntax:** `np.max(array, axis=1); column-wise max`
###Code
# maxiumum value of `A1`
np.max(A1)
# maxiumum value of A2(axis=0, row-wise)
np.max(A2, axis=0)
# maxiumum value of A2(axis=1, column-wise)
np.max(A2, axis=1)
###Output
_____no_output_____
###Markdown
Range - **Syntax:** `np.max(array) - np.min(array)`
###Code
A1.max()
A1.min()
r = np.max(A1) - np.min(A1)
print(r)
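# Equivalent convenience function: np.ptp ("peak to peak") returns max - min directly
np.ptp(A1)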
###Output
_____no_output_____
###Markdown
Standard Deviation- Compute the standard deviation along the specified axis.- Returns the standard deviation, a measure of the spread of a distribution, of the array elements. The standard deviation is computed for theflattened array by default, otherwise over the specified axis. - **Syntax:** `np.std(array); array-wise std` - **Syntax:** `np.std(array, axis=0); row-wise std` - **Syntax:** `np.std(array, axis=1); column-wise std`
###Code
# compute the standard deviation of `A1`
np.std(A1)
# standard deviation of 2D array(axis=0, row-wise)
np.std(A2, axis=0)
# standard deviation of 2D array(axis=1, column-wise)
np.std(A2, axis=1)
###Output
_____no_output_____
###Markdown
Variance- Compute the variance along the specified axis.- Returns the variance of the array elements, a measure of the spread of a distribution. The variance is computed for the flattened array by default, otherwise over the specified axis. - **Syntax:** `np.var(array); array-wise var` - **Syntax:** `np.var(array, axis=0); row-wise var` - **Syntax:** `np.var(array, axis=1); column-wise var`
###Code
# compute the variance of `A1`
np.var(A1)
# variance of 2D array(axis=0, row-wise)
np.var(A2, axis=0)
# variance of 2D array(axis=1, column-wise)
np.var(A2, axis=1)
###Output
_____no_output_____
###Markdown
Quantile- Compute the q-th quantile of the data along the specified axis. - **Syntax:** `np.quantile(array); array-wise quantile` - **Syntax:** `np.quantile(array, axis=0); row-wise quantile` - **Syntax:** `np.quantile(array, axis=1); column-wise quantile`
###Code
# 25th percentile of `A1`
np.quantile(A1, 0.25)
# 50th percentile of `A2`(axis=0)
np.quantile(A2, 0.5, axis=0)
# 75th percentile of `A2`(axis=1)
np.quantile(A2, 0.75, axis=1)
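# np.percentile is the same computation expressed on a 0-100 scale, e.g. the 25th percentile of A1:
np.percentile(A1, 25)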
###Output
_____no_output_____
###Markdown
Correlation Coefficient
###Code
# documentation
np.info(np.corrcoef)
# compute Correlation Coefficient
np.corrcoef(A2)
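# A minimal pairwise sketch (x and y are made-up arrays, not from the data above):
x = np.arange(10)
y = 2 * x + 1
np.corrcoef(x, y) # off-diagonal entries are 1.0 for a perfectly linear relationship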
###Output
_____no_output_____ |
ames_housing_dataset/Ames Housing Dataset Investigation.ipynb | ###Markdown
Ames Housing Dataset price modelingWe investigate the data to remove unnecessary columns and max-scale labelThis could either happen in the private data lake or on the modeler's machine. In this case, we mimic a modeler requesting certain fields and a certain series of preprocessing steps.
###Code
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(rc={'figure.figsize':(11.7,8.27)})
sns.set_style("darkgrid")
df = pd.read_csv("data.csv")
print(f"Original size of dataframe {df.shape}")
###Output
Original size of dataframe (1460, 81)
###Markdown
Initial InvestigationWe first subset the data according to some pre-determined requirements like only numerical data, and data that is immediately relevant to the problem. Take a look at the data_description.txt file to get a better understanding of this.
###Code
residential_areas = {"RH", "RL", "RP", "RM"}
acceptable_housing_conditions = {10, 9, 8, 7, 6}
df = df[df["MSZoning"].isin(residential_areas)]
print(f"First subset iteration taking only residential areas. Size of dataframe {df.shape}\n")
df = df[df["OverallCond"].isin(acceptable_housing_conditions)]
print(f"Second subset iteration taking only homes above some quality. Size of dataframe {df.shape}")
df
df.columns
columns_to_keep = ["LotArea", 'YearBuilt', 'TotalBsmtSF',
'1stFlrSF', '2ndFlrSF', 'MiscVal',
"GarageCars", "Fireplaces", "BedroomAbvGr",
"SalePrice" # Our label
]
df = df[columns_to_keep]
df = df.reset_index()
df.head()
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 546 entries, 0 to 545
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 index 546 non-null int64
1 LotArea 546 non-null int64
2 YearBuilt 546 non-null int64
3 TotalBsmtSF 546 non-null int64
4 1stFlrSF 546 non-null int64
5 2ndFlrSF 546 non-null int64
6 MiscVal 546 non-null int64
7 GarageCars 546 non-null int64
8 Fireplaces 546 non-null int64
9 BedroomAbvGr 546 non-null int64
10 SalePrice 546 non-null int64
dtypes: int64(11)
memory usage: 47.0 KB
###Markdown
We note that there are 546 entries in the dataset and that all of the entries are non-null which is appropriate for our problem setup.
###Code
df.describe()
df.head()
###Output
_____no_output_____
###Markdown
Price Investigation In this section we conduct EDA to gain an understanding of the distribution of prices.
###Code
df.boxplot(column="SalePrice")
###Output
_____no_output_____
###Markdown
Boxplot discussion From the boxplot we see that there are many outliers in the data that would lead to issues in our linear regression. Next StepsWe investigate this further by sorting the points by value and then plotting them. We follow this by removing values above and below the "minimum" and "maximum" lines and visualizing both the boxplot and the sorted plot. Again, in this scenario we imagine a data modeler that has submitted a series of steps to the private data lake.
###Code
sorted_prices = df.SalePrice.sort_values()
sorted_prices = sorted_prices.reset_index()
sorted_prices.drop(columns="index", inplace=True)
sorted_prices
sorted_prices.plot(style='.')
###Output
_____no_output_____
###Markdown
Scatter Plot DiscussionFrom the plot it is apparent that the data is non-linear. We proceed to remove the outliers according to the boxplot from earlier
###Code
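# Keep only rows whose SalePrice lies within 1.5 * IQR of the quartiles (the standard boxplot outlier rule)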
Q1 = df['SalePrice'].quantile(0.25)
Q3 = df['SalePrice'].quantile(0.75)
IQR = Q3 - Q1 #IQR is interquartile range.
filter = (df['SalePrice'] >= Q1 - 1.5 * IQR) & (df['SalePrice'] <= Q3 + 1.5 *IQR)
df = df.loc[filter]
###Output
_____no_output_____
###Markdown
Post-outlier removal analysis
###Code
df.boxplot(column="SalePrice")
###Output
_____no_output_____
###Markdown
Boxplot Discussion 2We see that the outliers (based on the boxplot calculations) have been removed from the dataset and the range of values is acceptable. To "verify" this, we do a scatter plot of the data.
###Code
sorted_prices = df.SalePrice.sort_values()
sorted_prices = sorted_prices.reset_index()
sorted_prices.drop(columns="index", inplace=True)
sorted_prices
sorted_prices.plot(style='.')
###Output
_____no_output_____
###Markdown
NormalizeNormalization can be easily accomplished on the server
###Code
df
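# Min-max scale every column to the [0, 1] range: (value - column min) / (column max - column min)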
for col_to_scale in df.columns:
col_min = min(df[col_to_scale])
col_max = max(df[col_to_scale])
df[col_to_scale] = (df[col_to_scale] - col_min )/ (col_max - col_min)
label = df.SalePrice
df.drop(columns=["SalePrice", "index"], inplace=True)
df
label
###Output
_____no_output_____
###Markdown
Scatter Plot Discussion 2Although the data is still non-linear, this is acceptable and we can begin modeling.
###Code
df.to_csv("processed_X.csv", index=False, sep=",")
label.to_csv("processed_y.csv", index=False, sep=",")
###Output
_____no_output_____ |
workflow/Consult-IUCN-for-Conservation-Status.ipynb | ###Markdown
__Description__This notebook retrieves information on convservation status from IUCN for each of the scientific names of species in the WYNDD database that overlap the WLCI geospatial boundary. This notebook requires an API key. Also we need to be considerate of IUCN user agreements.__Source(s)__'../sources/wyndd_wlci_species_list.json' : A list of scientific names. This list is a subset of species from WYNDD's species list that have ranges overlapping with the WLCI geospatial boundary. This file was created in notebook _Consult-and-Explore-WYNDD-Species-Data.ipynb_ __Output(s)___'../cache/iucn.json'_ : Information from IUCN conservation status for species of interest (based on scientific names)
###Code
#Import needed packages
import json
import bispy
import requests
from IPython.display import display
from joblib import Parallel, delayed
iucn = bispy.iucn.Iucn()
bis_utils = bispy.bis.Utils()
# Open list of scientific names to process
with open("../sources/wyndd_wlci_species_list.json", "r") as f:
spp_list = json.loads(f.read())
iucn_results = Parallel(n_jobs=8)(delayed(iucn.search_species)(name)for name in spp_list)
iucn_success_results = [i for i in iucn_results if i['processing_metadata']['status']=='success']
#Print message to user
print (f'{len(iucn_success_results)} out of {len(spp_list)} species were successfully connected to IUCN Information')
# Cache the array of retrieved documents and return/display a random sample for verification
display(bis_utils.doc_cache("../cache/iucn.json", iucn_success_results))
###Output
_____no_output_____ |
examples/phenology-uc2/Timeseries batch compute.ipynb | ###Markdown
Batch computation of timeseries
###Code
import openeo
import geopandas as gpd
ll /data/users/Public/kristofvt/data/BELCAM/fields/tesfields.shp
fields_file = "/data/users/Public/kristofvt/data/BELCAM/fields/tesfields.shp"
fields_file = "/data/users/Public/driesj/flanders_2017.geojson"
fields = gpd.read_file(fields_file)
fields.crs
len(fields)
###Output
_____no_output_____
###Markdown
Filter out overlapping fieldsWhen parcels do not overlap (which is the most logical case), the geopyspark backend will use an optimized implementation to compute the timeseries, which is much faster. Therefore we explicitly filter out overlapping fields on the clients side.We can still retrieve the results for the overlapping fields in a separate call if needed.
###Code
from tqdm import tqdm
#from tqdm._tqdm_notebook import tqdm_notebook
import pandas as pd
#tqdm.pandas()
from joblib import Parallel, delayed
fields['id'] = fields.CODE_OBJ
data_temp=fields#.head(20000)
data_overlaps=gpd.GeoDataFrame(crs=data_temp.crs)
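# For one parcel, return its intersections with any other parcel whose overlap area exceeds
# a tiny threshold (9e-9, in the layer's CRS units)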
def get_overlap(index, row):
data_temp1=data_temp.loc[data_temp.id!=row.id,]
# check if intersection occured
overlaps=data_temp1[data_temp1.geometry.overlaps(row.geometry)]['id'].tolist()
if len(overlaps)>0:
temp_list=[]
#print(index)
# compare the area with threshold
areas = []
for y in overlaps:
temp_area=gpd.overlay(data_temp.loc[data_temp.id==y,],data_temp.loc[data_temp.id==row.id,],how='intersection')
temp_area=temp_area.loc[temp_area.geometry.area>=9e-9]
if temp_area.shape[0]>0:
#print("found overlap")
#
areas.append(temp_area)
return areas
return []
result = Parallel(n_jobs=20)(delayed(get_overlap)(index,row) for index,row in tqdm(data_temp.iterrows(), total=data_temp.shape[0]))
[r for r in result if len(r) !=0 ]
for r in result:
if(len(r)!=0):
for temp_area in r:
data_overlaps=gpd.GeoDataFrame(pd.concat([temp_area,data_overlaps],ignore_index=True),crs=data_temp.crs)
data_overlaps
# get unique of list id
data_overlaps['sorted']=data_overlaps.apply(lambda y: sorted([y['id_1'],y['id_2']]),axis=1)
data_overlaps['sorted']=data_overlaps.sorted.apply(lambda y: ''.join(y))
data_overlaps=data_overlaps.drop_duplicates('sorted')
data_overlaps=data_overlaps.reset_index()[['id_1','id_2','geometry']]
data_overlaps
data_overlaps['area'] = data_overlaps.to_crs({'init': 'epsg:32631'}).geometry.area
data_overlaps
non_overlap = fields
for id in data_overlaps.id_1:
#print(id)
non_overlap = non_overlap[non_overlap.CODE_OBJ != id]
non_overlap.head(25000).to_file("/data/users/Public/driesj/fields_flanders_non_overlap.shp")
###Output
/opt/rh/rh-python35/root/usr/lib/python3.5/site-packages/geopandas/io/file.py:108: FionaDeprecationWarning: Use fiona.Env() instead.
with fiona.drivers():
###Markdown
Create and submit batch jobFor timeseries computation on large files, it is more robust to use batch jobs as they do not run the risk of being interrupted by network timeouts.
###Code
import openeo
session = openeo.connect("http://openeo.vgt.vito.be/openeo/0.4.0")
asc = session.imagecollection('S1_GRD_SIGMA0_ASCENDING').date_range_filter('2019-01-01', '2019-01-31')\
.bbox_filter(west=3, east=6, north=52, south=50, crs='EPSG:4326')
timeseries_job = asc.polygonal_mean_timeseries("/data/users/Public/driesj/fields_flanders_non_overlap.shp").send_job(out_format="json")
timeseries_job.describe_job()
timeseries_job.start_job()
timeseries_job.describe_job()
timeseries_job.describe_job()
timeseries_job.download_results('timeseries.json')
import json
with open('timeseries.json','r') as f:
ts_json = json.load(f)
import pandas as pd
timeseries_dataframe = pd.DataFrame(ts_json)
timeseries_dataframe.head()
%matplotlib inline
timeseries_dataframe.T.dropna().plot()
###Output
_____no_output_____ |
lab/nyc-taxi-data-regression-model-building.ipynb | ###Markdown
 NYC Taxi Data Regression ModelThis is an [Azure Machine Learning Pipelines](https://aka.ms/aml-pipelines) version of two-part tutorial ([Part 1](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-data-prep), [Part 2](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-auto-train-models)) available for Azure Machine Learning.You can combine the two part tutorial into one using AzureML Pipelines as Pipelines provide a way to stitch together various steps involved (like data preparation and training in this case) in a machine learning workflow.In this notebook, you learn how to prepare data for regression modeling by using open source library [pandas](https://pandas.pydata.org/). You run various transformations to filter and combine two different NYC taxi datasets. Once you prepare the NYC taxi data for regression modeling, then you will use [AutoMLStep](https://docs.microsoft.com/python/api/azureml-train-automl-runtime/azureml.train.automl.runtime.automl_step.automlstep?view=azure-ml-py) available with [Azure Machine Learning Pipelines](https://aka.ms/aml-pipelines) to define your machine learning goals and constraints as well as to launch the automated machine learning process. The automated machine learning technique iterates over many combinations of algorithms and hyperparameters until it finds the best model based on your criterion.After you complete building the model, you can predict the cost of a taxi trip by training a model on data features. These features include the pickup day and time, the number of passengers, and the pickup location. PrerequisiteIf you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the configuration Notebook located at https://github.com/Azure/MachineLearningNotebooks first if you haven't. This sets you up with a working config file that has information on your workspace, subscription id, etc. Prepare data for regression modelingFirst, we will prepare data for regression modeling. We will leverage the convenience of Azure Open Datasets along with the power of Azure Machine Learning service to create a regression model to predict NYC taxi fare prices. Perform `pip install azureml-opendatasets` to get the open dataset package. The Open Datasets package contains a class representing each data source (NycTlcGreen and NycTlcYellow) to easily filter date parameters before downloading. Load dataBegin by creating a dataframe to hold the taxi data. When working in a non-Spark environment, Open Datasets only allows downloading one month of data at a time with certain classes to avoid MemoryError with large datasets. To download a year of taxi data, iteratively fetch one month at a time, and before appending it to green_df_raw, randomly sample 500 records from each month to avoid bloating the dataframe. Then preview the data. To keep this process short, we are sampling data of only 1 month.Note: Open Datasets has mirroring classes for working in Spark environments where data size and memory aren't a concern.
###Code
import azureml.core
# Check core SDK version number
print("SDK version:", azureml.core.VERSION)
from azureml.opendatasets import NycTlcGreen, NycTlcYellow
import pandas as pd
from datetime import datetime
from dateutil.relativedelta import relativedelta
green_df_raw = pd.DataFrame([])
start = datetime.strptime("1/1/2016","%m/%d/%Y")
end = datetime.strptime("1/31/2016","%m/%d/%Y")
number_of_months = 1
sample_size = 5000
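# Fetch one month of green taxi data at a time and keep a random sample of rows to keep the dataframe small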
for sample_month in range(number_of_months):
temp_df_green = NycTlcGreen(start + relativedelta(months=sample_month), end + relativedelta(months=sample_month)) \
.to_pandas_dataframe()
green_df_raw = green_df_raw.append(temp_df_green.sample(sample_size))
yellow_df_raw = pd.DataFrame([])
start = datetime.strptime("1/1/2016","%m/%d/%Y")
end = datetime.strptime("1/31/2016","%m/%d/%Y")
sample_size = 500
for sample_month in range(number_of_months):
temp_df_yellow = NycTlcYellow(start + relativedelta(months=sample_month), end + relativedelta(months=sample_month)) \
.to_pandas_dataframe()
yellow_df_raw = yellow_df_raw.append(temp_df_yellow.sample(sample_size))
###Output
_____no_output_____
###Markdown
See the data
###Code
from IPython.display import display
display(green_df_raw.head(5))
display(yellow_df_raw.head(5))
###Output
_____no_output_____
###Markdown
Download data locally and then upload to Azure BlobThis is a one-time process to save the data in the default datastore.
###Code
import os
dataDir = "data"
if not os.path.exists(dataDir):
os.mkdir(dataDir)
greenDir = dataDir + "/green"
yelloDir = dataDir + "/yellow"
if not os.path.exists(greenDir):
os.mkdir(greenDir)
if not os.path.exists(yelloDir):
os.mkdir(yelloDir)
greenTaxiData = greenDir + "/unprepared.parquet"
yellowTaxiData = yelloDir + "/unprepared.parquet"
green_df_raw.to_csv(greenTaxiData, index=False)
yellow_df_raw.to_csv(yellowTaxiData, index=False)
print("Data written to local folder.")
from azureml.core import Workspace
ws = Workspace.from_config()
print("Workspace: " + ws.name, "Region: " + ws.location, sep = '\n')
# Default datastore
default_store = ws.get_default_datastore()
default_store.upload_files([greenTaxiData],
target_path = 'green',
overwrite = True,
show_progress = True)
default_store.upload_files([yellowTaxiData],
target_path = 'yellow',
overwrite = True,
show_progress = True)
print("Upload calls completed.")
###Output
_____no_output_____
###Markdown
Create and register datasetsBy creating a dataset, you create a reference to the data source location. If you applied any subsetting transformations to the dataset, they will be stored in the dataset as well. You can learn more about what subsetting capabilities are supported by referring to [our documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabular_dataset.tabulardataset?view=azure-ml-py#remarks). The data remains in its existing location, so no extra storage cost is incurred.
###Code
from azureml.core import Dataset
green_taxi_data = Dataset.Tabular.from_delimited_files(default_store.path('green/unprepared.parquet'))
yellow_taxi_data = Dataset.Tabular.from_delimited_files(default_store.path('yellow/unprepared.parquet'))
###Output
_____no_output_____
###Markdown
Register the taxi datasets with the workspace so that you can reuse them in other experiments or share with your colleagues who have access to your workspace.
###Code
green_taxi_data = green_taxi_data.register(ws, 'green_taxi_data')
yellow_taxi_data = yellow_taxi_data.register(ws, 'yellow_taxi_data')
###Output
_____no_output_____
###Markdown
Setup Compute Create new or use an existing compute
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "cpu-cluster"
# Verify that cluster does not exist already
try:
aml_compute = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=4)
aml_compute = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
aml_compute.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Define RunConfig for the computeWe will also use `pandas`, `scikit-learn`, `automl` and `pyarrow` for the pipeline steps, so we define the `runconfig` accordingly.
###Code
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies
# Create a new runconfig object
aml_run_config = RunConfiguration()
# Use the aml_compute you created above.
aml_run_config.target = aml_compute
# Enable Docker
aml_run_config.environment.docker.enabled = True
# Set Docker base image to the default CPU-based image
aml_run_config.environment.docker.base_image = "mcr.microsoft.com/azureml/base:0.2.1"
# Use conda_dependencies.yml to create a conda environment in the Docker image for execution
aml_run_config.environment.python.user_managed_dependencies = False
# Specify CondaDependencies obj, add necessary packages
aml_run_config.environment.python.conda_dependencies = CondaDependencies.create(
conda_packages=['pandas','scikit-learn'],
pip_packages=['azureml-sdk[automl,explain]', 'pyarrow'])
print ("Run configuration created.")
###Output
_____no_output_____
###Markdown
Prepare dataNow we will prepare for regression modeling by using `pandas`. We run various transformations to filter and combine two different NYC taxi datasets.We achieve this by creating a separate step for each transformation as this allows us to reuse the steps and saves us from running all over again in case of any change. We will keep data preparation scripts in one subfolder and training scripts in another.> The best practice is to use separate folders for scripts and its dependent files for each step and specify that folder as the `source_directory` for the step. This helps reduce the size of the snapshot created for the step (only the specific folder is snapshotted). Since changes in any files in the `source_directory` would trigger a re-upload of the snapshot, this helps keep the reuse of the step when there are no changes in the `source_directory` of the step. Define Useful ColumnsHere we are defining a set of "useful" columns for both Green and Yellow taxi data.
###Code
display(green_df_raw.columns)
display(yellow_df_raw.columns)
# useful columns needed for the Azure Machine Learning NYC Taxi tutorial
useful_columns = str(["cost", "distance", "dropoff_datetime", "dropoff_latitude",
"dropoff_longitude", "passengers", "pickup_datetime",
"pickup_latitude", "pickup_longitude", "store_forward", "vendor"]).replace(",", ";")
print("Useful columns defined.")
###Output
_____no_output_____
###Markdown
Cleanse Green taxi data
###Code
from azureml.pipeline.core import PipelineData
from azureml.pipeline.steps import PythonScriptStep
# python scripts folder
prepare_data_folder = './scripts/prepdata'
# rename columns as per Azure Machine Learning NYC Taxi tutorial
green_columns = str({
"vendorID": "vendor",
"lpepPickupDatetime": "pickup_datetime",
"lpepDropoffDatetime": "dropoff_datetime",
"storeAndFwdFlag": "store_forward",
"pickupLongitude": "pickup_longitude",
"pickupLatitude": "pickup_latitude",
"dropoffLongitude": "dropoff_longitude",
"dropoffLatitude": "dropoff_latitude",
"passengerCount": "passengers",
"fareAmount": "cost",
"tripDistance": "distance"
}).replace(",", ";")
# Define output after cleansing step
cleansed_green_data = PipelineData("cleansed_green_data", datastore=default_store).as_dataset()
print('Cleanse script is in {}.'.format(os.path.realpath(prepare_data_folder)))
# cleansing step creation
# See the cleanse.py for details about input and output
cleansingStepGreen = PythonScriptStep(
name="Cleanse Green Taxi Data",
script_name="cleanse.py",
arguments=["--useful_columns", useful_columns,
"--columns", green_columns,
"--output_cleanse", cleansed_green_data],
inputs=[green_taxi_data.as_named_input('raw_data')],
outputs=[cleansed_green_data],
compute_target=aml_compute,
runconfig=aml_run_config,
source_directory=prepare_data_folder,
allow_reuse=True
)
print("cleansingStepGreen created.")
###Output
_____no_output_____
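###Markdown
The `cleanse.py` script referenced above is not included in this notebook. Below is a minimal, hypothetical sketch of what such a script might contain, assuming the raw data arrives as the named input `raw_data`, the output folder is passed via `--output_cleanse`, and the output file is named `processed.parquet` (the name read back later by `fetch_df()`); the parsing of the column-string arguments is also an assumption.
###Code
# Hypothetical sketch of scripts/prepdata/cleanse.py (the real script is not shown here).
import argparse
import ast
import os
import pandas as pd
from azureml.core import Run

parser = argparse.ArgumentParser()
parser.add_argument("--useful_columns", required=True)
parser.add_argument("--columns", required=True)
parser.add_argument("--output_cleanse", required=True)
args = parser.parse_args()

# The driver notebook encodes the list/dict arguments with ';' in place of ','
useful_columns = ast.literal_eval(args.useful_columns.replace(";", ","))
column_map = ast.literal_eval(args.columns.replace(";", ","))

run = Run.get_context()
raw_df = run.input_datasets["raw_data"].to_pandas_dataframe()

# Rename raw columns to the tutorial's names and keep only the useful ones
cleansed_df = raw_df.rename(columns=column_map)[useful_columns].dropna(how="all")

os.makedirs(args.output_cleanse, exist_ok=True)
cleansed_df.to_parquet(os.path.join(args.output_cleanse, "processed.parquet"))
###Output
_____no_output_____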
###Markdown
Cleanse Yellow taxi data
###Code
yellow_columns = str({
"vendorID": "vendor",
"tpepPickupDateTime": "pickup_datetime",
"tpepDropoffDateTime": "dropoff_datetime",
"storeAndFwdFlag": "store_forward",
"startLon": "pickup_longitude",
"startLat": "pickup_latitude",
"endLon": "dropoff_longitude",
"endLat": "dropoff_latitude",
"passengerCount": "passengers",
"fareAmount": "cost",
"tripDistance": "distance"
}).replace(",", ";")
# Define output after cleansing step
cleansed_yellow_data = PipelineData("cleansed_yellow_data", datastore=default_store).as_dataset()
print('Cleanse script is in {}.'.format(os.path.realpath(prepare_data_folder)))
# cleansing step creation
# See the cleanse.py for details about input and output
cleansingStepYellow = PythonScriptStep(
name="Cleanse Yellow Taxi Data",
script_name="cleanse.py",
arguments=["--useful_columns", useful_columns,
"--columns", yellow_columns,
"--output_cleanse", cleansed_yellow_data],
inputs=[yellow_taxi_data.as_named_input('raw_data')],
outputs=[cleansed_yellow_data],
compute_target=aml_compute,
runconfig=aml_run_config,
source_directory=prepare_data_folder,
allow_reuse=True
)
print("cleansingStepYellow created.")
###Output
_____no_output_____
###Markdown
Merge cleansed Green and Yellow datasetsWe are creating a single data source by merging the cleansed versions of Green and Yellow taxi data.
###Code
# Define output after merging step
merged_data = PipelineData("merged_data", datastore=default_store).as_dataset()
print('Merge script is in {}.'.format(os.path.realpath(prepare_data_folder)))
# merging step creation
# See the merge.py for details about input and output
mergingStep = PythonScriptStep(
name="Merge Taxi Data",
script_name="merge.py",
arguments=["--output_merge", merged_data],
inputs=[cleansed_green_data.parse_parquet_files(file_extension=None),
cleansed_yellow_data.parse_parquet_files(file_extension=None)],
outputs=[merged_data],
compute_target=aml_compute,
runconfig=aml_run_config,
source_directory=prepare_data_folder,
allow_reuse=True
)
print("mergingStep created.")
###Output
_____no_output_____
###Markdown
Filter dataThis step filters out coordinates for locations that are outside the city border. We use a TypeConverter object to change the latitude and longitude fields to decimal type.
###Code
# Define output after merging step
filtered_data = PipelineData("filtered_data", datastore=default_store).as_dataset()
print('Filter script is in {}.'.format(os.path.realpath(prepare_data_folder)))
# filter step creation
# See the filter.py for details about input and output
filterStep = PythonScriptStep(
name="Filter Taxi Data",
script_name="filter.py",
arguments=["--output_filter", filtered_data],
inputs=[merged_data.parse_parquet_files(file_extension=None)],
outputs=[filtered_data],
compute_target=aml_compute,
runconfig = aml_run_config,
source_directory=prepare_data_folder,
allow_reuse=True
)
print("FilterStep created.")
###Output
_____no_output_____
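###Markdown
As with `cleanse.py`, the `filter.py` script itself is not shown in this notebook. A rough sketch of its core filtering logic (applied to an already-loaded DataFrame) follows; the bounding-box coordinates are illustrative placeholders, not the exact NYC border values used by the tutorial.
###Code
# Hypothetical sketch of the core logic in scripts/prepdata/filter.py.
# The latitude/longitude bounds below are illustrative only.
import pandas as pd

def filter_city_border(df: pd.DataFrame) -> pd.DataFrame:
    coord_cols = ["pickup_longitude", "pickup_latitude",
                  "dropoff_longitude", "dropoff_latitude"]
    # Convert coordinate columns to numeric (the tutorial uses a TypeConverter for this)
    df[coord_cols] = df[coord_cols].apply(pd.to_numeric, errors="coerce")

    lat_min, lat_max = 40.5, 41.0      # illustrative bounds
    lon_min, lon_max = -74.3, -73.7    # illustrative bounds
    in_box = (
        df["pickup_latitude"].between(lat_min, lat_max)
        & df["dropoff_latitude"].between(lat_min, lat_max)
        & df["pickup_longitude"].between(lon_min, lon_max)
        & df["dropoff_longitude"].between(lon_min, lon_max)
    )
    return df[in_box]
###Output
_____no_output_____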
###Markdown
Normalize dataIn this step, we split the pickup and dropoff datetime values into the respective date and time columns and then we rename the columns to use meaningful names.
###Code
# Define output after normalize step
normalized_data = PipelineData("normalized_data", datastore=default_store).as_dataset()
print('Normalize script is in {}.'.format(os.path.realpath(prepare_data_folder)))
# normalize step creation
# See the normalize.py for details about input and output
normalizeStep = PythonScriptStep(
name="Normalize Taxi Data",
script_name="normalize.py",
arguments=["--output_normalize", normalized_data],
inputs=[filtered_data.parse_parquet_files(file_extension=None)],
outputs=[normalized_data],
compute_target=aml_compute,
runconfig = aml_run_config,
source_directory=prepare_data_folder,
allow_reuse=True
)
print("normalizeStep created.")
###Output
_____no_output_____
###Markdown
Transform dataTransform the normalized taxi data into the final required format. This step does the following:- Splits the pickup and dropoff dates further into day of the week, day of the month, and month values. - To get the day-of-week value, it uses the derive_column_by_example() function. The function takes an array parameter of example objects that define the input data and the preferred output, and it automatically determines the preferred transformation. For the pickup and dropoff time columns, it splits the time into hour, minute, and second by using the split_column_by_example() function with no example parameter.- After the new features are generated, it uses the drop_columns() function to delete the original fields, as the newly generated features are preferred. - Renames the rest of the fields to use meaningful descriptions.
###Code
# Define output after transform step
transformed_data = PipelineData("transformed_data", datastore=default_store).as_dataset()
print('Transform script is in {}.'.format(os.path.realpath(prepare_data_folder)))
# transform step creation
# See the transform.py for details about input and output
transformStep = PythonScriptStep(
name="Transform Taxi Data",
script_name="transform.py",
arguments=["--output_transform", transformed_data],
inputs=[normalized_data.parse_parquet_files(file_extension=None)],
outputs=[transformed_data],
compute_target=aml_compute,
runconfig = aml_run_config,
source_directory=prepare_data_folder,
allow_reuse=True
)
print("transformStep created.")
###Output
_____no_output_____
###Markdown
Split the data into train and test setsThis step splits the data into one dataset for model training and another dataset for testing.
###Code
train_model_folder = './scripts/trainmodel'
# train and test splits output
output_split_train = PipelineData("output_split_train", datastore=default_store).as_dataset()
output_split_test = PipelineData("output_split_test", datastore=default_store).as_dataset()
print('Data split script is in {}.'.format(os.path.realpath(train_model_folder)))
# test train split step creation
# See the train_test_split.py for details about input and output
testTrainSplitStep = PythonScriptStep(
name="Train Test Data Split",
script_name="train_test_split.py",
arguments=["--output_split_train", output_split_train,
"--output_split_test", output_split_test],
inputs=[transformed_data.parse_parquet_files(file_extension=None)],
outputs=[output_split_train, output_split_test],
compute_target=aml_compute,
runconfig = aml_run_config,
source_directory=train_model_folder,
allow_reuse=True
)
print("testTrainSplitStep created.")
###Output
_____no_output_____
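###Markdown
The `train_test_split.py` script is not reproduced here either. Its core logic is likely a simple random split of the transformed data; a hedged sketch is below, where the split ratio and random seed are assumptions rather than the tutorial's actual settings.
###Code
# Hypothetical sketch of the core logic in scripts/trainmodel/train_test_split.py.
# The test_size and random_state values are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split

def split_taxi_data(df: pd.DataFrame, test_size: float = 0.2, seed: int = 223):
    # Randomly split the transformed taxi data into training and test DataFrames.
    train_df, test_df = train_test_split(df, test_size=test_size, random_state=seed)
    return train_df, test_df
###Output
_____no_output_____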
###Markdown
Use automated machine learning to build regression modelNow we will use **automated machine learning** to build the regression model. We will use [AutoMLStep](https://docs.microsoft.com/python/api/azureml-train-automl-runtime/azureml.train.automl.runtime.automl_step.automlstep?view=azure-ml-py) in AML Pipelines for this part. Run `pip install azureml-sdk[automl]` to get the automated machine learning package. These steps use various features from the dataset and allow an automated model to build relationships between the features and the price of a taxi trip. Automatically train a model Create experiment
###Code
from azureml.core import Experiment
experiment = Experiment(ws, 'NYCTaxi_Tutorial_Pipelines')
print("Experiment created")
###Output
_____no_output_____
###Markdown
Define settings for autogeneration and tuningHere we define the experiment parameters and model settings for autogeneration and tuning. The `automl_settings` dictionary below is passed to `AutoMLConfig` as **kwargs.Use your defined training settings as a parameter to an `AutoMLConfig` object. Additionally, specify your training data and the type of model, which is `regression` in this case.Note: When using AmlCompute, we can't pass Numpy arrays directly to the fit method.
###Code
import logging
from azureml.train.automl import AutoMLConfig
# Change iterations to a reasonable number (50) to get better accuracy
automl_settings = {
"iteration_timeout_minutes" : 10,
"iterations" : 2,
"primary_metric" : 'spearman_correlation',
"n_cross_validations": 5
}
training_dataset = output_split_train.parse_parquet_files(file_extension=None).keep_columns(['pickup_weekday','pickup_hour', 'distance','passengers', 'vendor', 'cost'])
automl_config = AutoMLConfig(task = 'regression',
debug_log = 'automated_ml_errors.log',
path = train_model_folder,
compute_target = aml_compute,
featurization = 'auto',
training_data = training_dataset,
label_column_name = 'cost',
**automl_settings)
print("AutoML config created.")
###Output
_____no_output_____
###Markdown
Define AutoMLStep
###Code
from azureml.pipeline.steps import AutoMLStep
trainWithAutomlStep = AutoMLStep(name='AutoML_Regression',
automl_config=automl_config,
allow_reuse=True)
print("trainWithAutomlStep created.")
###Output
_____no_output_____
###Markdown
Build and run the pipeline
###Code
from azureml.pipeline.core import Pipeline
from azureml.widgets import RunDetails
pipeline_steps = [trainWithAutomlStep]
pipeline = Pipeline(workspace = ws, steps=pipeline_steps)
print("Pipeline is built.")
pipeline_run = experiment.submit(pipeline, regenerate_outputs=False)
print("Pipeline submitted for execution.")
RunDetails(pipeline_run).show()
###Output
_____no_output_____
###Markdown
Explore the results
###Code
# Before we proceed we need to wait for the run to complete.
pipeline_run.wait_for_completion()
# functions to download output to local and fetch as dataframe
def get_download_path(download_path, output_name):
output_folder = os.listdir(download_path + '/azureml')[0]
path = download_path + '/azureml/' + output_folder + '/' + output_name
return path
def fetch_df(step, output_name):
output_data = step.get_output_data(output_name)
download_path = './outputs/' + output_name
output_data.download(download_path, overwrite=True)
df_path = get_download_path(download_path, output_name) + '/processed.parquet'
return pd.read_parquet(df_path)
###Output
_____no_output_____
###Markdown
View cleansed taxi data
###Code
green_cleanse_step = pipeline_run.find_step_run(cleansingStepGreen.name)[0]
yellow_cleanse_step = pipeline_run.find_step_run(cleansingStepYellow.name)[0]
cleansed_green_df = fetch_df(green_cleanse_step, cleansed_green_data.name)
cleansed_yellow_df = fetch_df(yellow_cleanse_step, cleansed_yellow_data.name)
display(cleansed_green_df.head(5))
display(cleansed_yellow_df.head(5))
###Output
_____no_output_____
###Markdown
View the combined taxi data profile
###Code
merge_step = pipeline_run.find_step_run(mergingStep.name)[0]
combined_df = fetch_df(merge_step, merged_data.name)
display(combined_df.describe())
###Output
_____no_output_____
###Markdown
View the filtered taxi data profile
###Code
filter_step = pipeline_run.find_step_run(filterStep.name)[0]
filtered_df = fetch_df(filter_step, filtered_data.name)
display(filtered_df.describe())
###Output
_____no_output_____
###Markdown
View normalized taxi data
###Code
normalize_step = pipeline_run.find_step_run(normalizeStep.name)[0]
normalized_df = fetch_df(normalize_step, normalized_data.name)
display(normalized_df.head(5))
###Output
_____no_output_____
###Markdown
View transformed taxi data
###Code
transform_step = pipeline_run.find_step_run(transformStep.name)[0]
transformed_df = fetch_df(transform_step, transformed_data.name)
display(transformed_df.describe())
display(transformed_df.head(5))
###Output
_____no_output_____
###Markdown
View training data used by AutoML
###Code
split_step = pipeline_run.find_step_run(testTrainSplitStep.name)[0]
train_split = fetch_df(split_step, output_split_train.name)
display(train_split.describe())
display(train_split.head(5))
###Output
_____no_output_____
###Markdown
View the details of the AutoML run
###Code
from azureml.train.automl.run import AutoMLRun
#from azureml.widgets import RunDetails
# workaround to get the automl run, as it's the last step in the pipeline
# and get_steps() returns the steps from latest to first
for step in pipeline_run.get_steps():
automl_step_run_id = step.id
print(step.name)
print(automl_step_run_id)
break
automl_run = AutoMLRun(experiment = experiment, run_id=automl_step_run_id)
#RunDetails(automl_run).show()
###Output
_____no_output_____
###Markdown
Retrieve all Child runsWe use SDK methods to fetch all the child runs and see individual metrics that we log.
###Code
children = list(automl_run.get_children())
metricslist = {}
for run in children:
properties = run.get_properties()
metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}
metricslist[int(properties['iteration'])] = metrics
rundata = pd.DataFrame(metricslist).sort_index(axis=1)
rundata
###Output
_____no_output_____
###Markdown
Retrieve the best modelUncomment the cell below to retrieve the best model
###Code
# best_run, fitted_model = automl_run.get_output()
# print(best_run)
# print(fitted_model)
###Output
_____no_output_____
###Markdown
Test the model Get test dataUncomment the cell below to get the test data
###Code
# split_step = pipeline_run.find_step_run(testTrainSplitStep.name)[0]
# x_test = fetch_df(split_step, output_split_test.name)[['distance','passengers', 'vendor','pickup_weekday','pickup_hour']]
# y_test = fetch_df(split_step, output_split_test.name)[['cost']]
# display(x_test.head(5))
# display(y_test.head(5))
###Output
_____no_output_____
###Markdown
Test the best fitted modelUncomment the cell below to test the best fitted model
###Code
# y_predict = fitted_model.predict(x_test)
# y_actual = y_test.values.tolist()
# display(pd.DataFrame({'Actual':y_actual, 'Predicted':y_predict}).head(5))
# import matplotlib.pyplot as plt
# fig = plt.figure(figsize=(14, 10))
# ax1 = fig.add_subplot(111)
# distance_vals = [x[0] for x in x_test.values]
# ax1.scatter(distance_vals[:100], y_predict[:100], s=18, c='b', marker="s", label='Predicted')
# ax1.scatter(distance_vals[:100], y_actual[:100], s=18, c='r', marker="o", label='Actual')
# ax1.set_xlabel('distance (mi)')
# ax1.set_title('Predicted and Actual Cost/Distance')
# ax1.set_ylabel('Cost ($)')
# plt.legend(loc='upper left', prop={'size': 12})
# plt.rcParams.update({'font.size': 14})
# plt.show()
###Output
_____no_output_____ |
OOP_workshop/OOP_Workshop.ipynb | ###Markdown
Object-Oriented PythonObject-oriented programming (OOP) is a way of writing programs that represent real-world problem spaces (in terms of objects, functions, classes, attributes, methods, and inheritance). As Allen Downey explains in [__Think Python__](http://www.greenteapress.com/thinkpython/html/thinkpython018.html), in object-oriented programming, we shift away from framing the *function* as the active agent and toward seeing the *object* as the active agent.In this workshop, we are going to create a class that represents the rational numbers. This tutorial is adapted from content in Anand Chitipothu's [__Python Practice Book__](http://anandology.com/python-practice-book/index.html). It was created by [Rebecca Bilbro](https://github.com/rebeccabilbro/Tutorials/tree/master/OOP) Part 1: Classes, methods, modules, and packages. Pair Programming: Partner up with the person sitting next to youCopy the code below into a file called RatNum.py in your code editor. It may help to review [built-ins in Python](https://docs.python.org/3.5/library/functions.html) and the [Python data model](https://docs.python.org/3.5/reference/datamodel.html).
###Code
class RationalNumber:
"""Any number that can be expressed as the quotient or fraction p/q
of two integers, p and q, with the denominator q not equal to zero.
Since q may be equal to 1, every integer is a rational number.
"""
def __init__(self, numerator, denominator=1):
self.n = numerator
self.d = denominator
def __add__(self, other):
# Write a function that allows for the addition of two rational numbers.
# I did this one for you :D
if not isinstance(other, RationalNumber):
other = RationalNumber(other)
n = self.n * other.d + self.d * other.n
d = self.d * other.d
return RationalNumber(n, d)
def __sub__(self, other):
# Write a function that allows for the subtraction of two rational numbers.
pass
def __mul__(self, other):
# Write a function that allows for the multiplication of two rational numbers.
pass
def __truediv__(self, other):
# Write a function that allows for the division of two rational numbers.
pass
def __str__(self):
return "%s/%s" % (self.n, self.d)
__repr__ = __str__
if __name__ == "__main__":
x = RationalNumber(1,2)
y = RationalNumber(3,2)
print ("The first number is {!s}".format(x))
print ("The second number is {!s}".format(y))
print ("Their sum is {!s}".format(x+y))
print ("Their product is {!s}".format(x*y))
print ("Their difference is {!s}".format(x-y))
print ("Their quotient is {!s}".format(x/y))
###Output
The first number is 1/2
The second number is 3/2
Their sum is 8/4
Their product is None
Their difference is None
Their quotient is None
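###Markdown
If you get stuck on the exercise above, one possible set of implementations for the three missing methods is sketched below (it follows the hint table in the next cell). Try writing your own versions before peeking.
###Code
# One possible solution sketch for the exercise methods; paste these into the
# RationalNumber class (indented as methods) to replace the `pass` placeholders.
def __sub__(self, other):
    # (a/b) - (c/d) = (ad - bc)/bd
    if not isinstance(other, RationalNumber):
        other = RationalNumber(other)
    return RationalNumber(self.n * other.d - self.d * other.n, self.d * other.d)

def __mul__(self, other):
    # (a/b) x (c/d) = ac/bd
    if not isinstance(other, RationalNumber):
        other = RationalNumber(other)
    return RationalNumber(self.n * other.n, self.d * other.d)

def __truediv__(self, other):
    # (a/b) / (c/d) = ad/bc
    if not isinstance(other, RationalNumber):
        other = RationalNumber(other)
    return RationalNumber(self.n * other.d, self.d * other.n)
###Output
_____no_output_____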
###Markdown
(hint) |Operation |Method ||---------------|----------------------------||Addition |(a/b) + (c/d) = (ad + bc)/bd||Subtraction |(a/b) - (c/d) = (ad - bc)/bd||Multiplication |(a/b) x (c/d) = ac/bd ||Division |(a/b) / (c/d) = ad/bc | Modules Modules are reusable libraries of code. Many libraries come standard with Python. You can import them into a program using the *import* statement. For example:
###Code
import math
print ("The first few digits of pi are {:f}...".format(math.pi))
###Output
The first few digits of pi are 3.141593...
###Markdown
The math module implements many functions for complex mathematical operations using floating point values, including logarithms, trigonometric operations, and irrational numbers like π. As an exercise, we'll encapsulate your rational numbers script into a module and then import it.Save the RatNum.py file you've been working in. Open your terminal and navigate wherever you have the file saved. Type: python When you're inside the Python interpreter, enter: from RatNum import RationalNumber a = RationalNumber(1,3) b = RationalNumber(2,3) print(a*b) Success! You have just made a module. PackagesA package is a directory of modules. For example, we could make a big package by bundling together modules with classes for natural numbers, integers, irrational numbers, and real numbers. The Python Package Index, or "PyPI", is the official third-party software repository for the Python programming language. It is a comprehensive catalog of all open source Python packages and is maintained by the Python Software Foundation. You can download packages from PyPI with the *pip* command in your terminal.PyPI packages are uploaded by individual package maintainers. That means you can write and contribute your own Python packages! Now let's turn your module into a package called Mathy.1. Create a folder called Mathy, and add your RatNum.py file to the folder.2. Add an empty file to the folder called \_\_init\_\_.py.3. Create a third file in that folder called MathQuiz.py that imports RationalNumber from RatNum... 4. ...and uses the RationalNumber class from RatNum. For example: MathQuiz.py from RatNum import RationalNumber print("Pop quiz! Find the sum, product, difference, and quotient for the following rational numbers:") x = RationalNumber(1,3) y = RationalNumber(2,3) print ("The first number is {!s}".format(x)) print ("The second number is {!s}".format(y)) print ("Their sum is {!s}".format(x+y)) print ("Their product is {!s}".format(x*y)) print ("Their difference is {!s}".format(x-y)) print ("Their quotient is {!s}".format(x/y)) In the terminal, navigate to the Mathy folder. When you are inside the folder, type: python MathQuiz.pyCongrats! You have just made a Python package! Now type: python RatNum.py What did you get this time? Is it different from the answer you got for the previous command? Why??Once you've completed this exercise, move on to Part 2. Part 2: Inheritance Suppose we were to write another class for another set of numbers, say the integers. What are the rules for addition, subtraction, multiplication, and division? If we can identify shared properties between integers and rational numbers, we could use that information to write an integer class that 'inherits' properties from our rational number class. Let's add an integer class to our RatNum.py file that inherits all the properties of our RationalNumber class.
###Code
class Integer(RationalNumber):
#What should we add here?
pass
###Output
_____no_output_____
###Markdown
Now update your \_\_name\_\_ == "\_\_main\_\_" statement at the end of RatNum.py to read:
###Code
if __name__ == "__main__":
q = Integer(5)
r = Integer(6)
print ("{!s} is an integer expressed as a rational number".format(q))
print ("So is {!s}".format(r))
print ("When you add them you get {!s}".format(q+r))
print ("When you multiply them you get {!s}".format(q*r))
print ("When you subtract them you get {!s}".format(q-r))
print ("When you divide them you get {!s}".format(q/r))
###Output
_____no_output_____ |
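###Markdown
One possible completion of the `Integer` exercise: since every integer is a rational number with a denominator of 1, overriding `__init__` to fix the denominator is enough; all arithmetic is then inherited from `RationalNumber`.
###Code
# A possible completion of the Integer exercise (one of several reasonable answers).
class Integer(RationalNumber):
    def __init__(self, numerator, denominator=1):
        # An integer is a rational number whose denominator is always 1.
        super().__init__(numerator, 1)
###Output
_____no_output_____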
Climate Analysis(Step 1).ipynb | ###Markdown
Reflect Tables into SQLAlchemy ORM
###Code
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func
from sqlalchemy import inspect
# Data analysis and plotting libraries used in the cells below
import pandas as pd
import datetime as dt
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
conn = engine.connect()
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# We can view all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
measurement_df = pd.read_sql('select * FROM measurement', conn)
measurement_df.head()
station_df = pd.read_sql('select * FROM station', conn)
station_df.head()
###Output
_____no_output_____
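###Markdown
The `inspect` import in the cell above is not used there; one optional way to put it to work is to list the columns of a reflected table directly with SQLAlchemy's inspector (this is an extra schema check, not part of the original analysis).
###Code
# Optional: use the SQLAlchemy inspector to examine the measurement table's schema.
inspector = inspect(engine)
for column in inspector.get_columns('measurement'):
    print(column['name'], column['type'])
###Output
_____no_output_____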
###Markdown
Exploratory Climate Analysis* Design a query to retrieve the last 12 months of precipitation data and plot the results* Calculate the date 1 year ago from the last data point in the database* Perform a query to retrieve the data and precipitation scores* Save the query results as a Pandas DataFrame and set the index to the date column* Sort the dataframe by date* Use Pandas Plotting with Matplotlib to plot the data
###Code
# Design a query to retrieve the last 12 months of precipitation data and plot the results
##find latest date
mostRctDt = session.query(Measurement.date).order_by(Measurement.date.desc()).first()
print("Most recent date: ",mostRctDt)
# Calculate the date 1 year ago from the last data point in the database
yearAgo = dt.date(2017, 8, 23) - dt.timedelta(days=365)
print("1 year ago: ",yearAgo)
# Perform a query to retrieve the date and precipitation scores
# Save the query results as a Pandas DataFrame and set the index to the date column
# Sort the dataframe by date
precipData = session.query(Measurement.date, Measurement.prcp).filter(Measurement.date).\
filter(Measurement.date >= "2016-08-23").all()
precipData_df = pd.DataFrame(precipData, columns=['date', 'prcp'])
precipData_df.set_index('date', inplace=True)
precipData_df.head()
# Use Pandas Plotting with Matplotlib to plot the data
precipPlot = pd.DataFrame(precipData_df)
fig, ax = plt.subplots(figsize=(22,10))
precipPlot = precipPlot.sort_index(ascending=True)
precipPlot.plot(ax=ax)
plt.xticks(rotation=90)
plt.title("12 Months of Precipitation Data from Most Recent Recorded Date ")
plt.xlabel("Date")
plt.ylabel('Precipitation (in)' )
# Use Pandas to calculate the summary statistics for the precipitation data
precipData_df.describe()
# Design a query to show how many stations are available in this dataset?
stationCount = session.query(Station.id).count()
stationCount
# What are the most active stations? (i.e. what stations have the most rows)?
# List the stations and the counts in descending order.
activeStations = session.query(Measurement.station, func.count(Measurement.station)).\
group_by(Measurement.station).\
order_by(func.count(Measurement.station).desc()).all()
activeStations
# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature of the most active station?
tempMeasure = session.query(Measurement.station,func.min(Measurement.tobs),
func.max(Measurement.tobs),
func.avg(Measurement.tobs)).filter(Measurement.station == 'USC00519281').all()
tempMeasure
# Choose the station with the highest number of temperature observations.
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
activeStation = session.query(Measurement.tobs, Measurement.station).filter(Measurement.date).\
filter(Measurement.station == 'USC00519281').\
filter(Measurement.date >= "2016-08-23").all()
activeStation
tempData_df = pd.DataFrame(activeStation, columns=['Temperature', 'Station ID'])
tempData_df.set_index('Station ID', inplace=True)
tempData_df
tempData = pd.DataFrame(tempData_df)
tempData
tempData.plot(kind='hist', bins=12, figsize=(10,7))
plt.title("12 Months of Temperature Observations", fontsize='large', fontweight='bold')
plt.xlabel('Temperature (F)', fontsize='large', fontweight='bold')
plt.ylabel('Frequency', fontsize='large', fontweight='bold')
###Output
_____no_output_____ |
notebooks/Part_3_Model_Training_and_Evaluation.ipynb | ###Markdown
Part 3: Model Training and EvaluationIf you haven't completed **Part 1: Data Preparation** and **Part 2: Phrase Learning**, please complete them before moving forward with **Part 3: Model Training and Evaluation**.**NOTE**: The Python 3 kernel doesn't include Azure Machine Learning Workbench functionalities. Please switch the kernel to `local` before continuing further. This example is designed to score new questions against the pre-existing Q&A pairs by training text classification models where each pre-existing Q&A pair is a unique class and a subset of the duplicate questions for each Q&A pair is available as training material. In Part 3, the classification model uses an ensemble method to aggregate the following three base classifiers. In each base classifier, the `AnswerId` is used as the class label and the BOW representation is used as the features.1. Naive Bayes Classifier2. Support Vector Machine (TF-IDF as features)3. Random Forest (NB Scores as features)Two different evaluation metrics are used to assess performance.1. `Average Rank (AR)`: indicates the average position where the correct answer is found in the list of retrieved Q&A pairs (out of the full set of 103 answer classes). 2. `Top 3 Percentage`: indicates the percentage of test questions for which the correct answer is retrieved within the top three choices in the returned ranked list. On the test set they are computed as $AR = \frac{1}{N}\sum_{i=1}^{N} rank_i$ and $\text{Top 3 Percentage} = \frac{100}{N}\sum_{i=1}^{N} \mathbb{1}(rank_i \le 3)$, where $N$ is the number of test questions and $rank_i$ is the position of the correct answer in the ranked list returned for question $i$. Import Required Python Modules`modules.feature_extractor` contains a list of user-defined Python modules to extract effective features that are used in this example. You can find the source code of those modules in the directory `modules/feature_extractor.py`.
###Code
import pandas as pd
import numpy as np
import os, warnings
from sklearn import svm
from sklearn.ensemble import RandomForestClassifier
from modules.feature_extractor import (tokensToIds, countMatrix, priorProbabilityAnswer, posterioriProb,
feature_selection, featureWeights, wordProbabilityInAnswer,
wordProbabilityNotinAnswer, normalizeTF, getIDF, softmax)
from azureml.logging import get_azureml_logger
warnings.filterwarnings("ignore")
run_logger = get_azureml_logger()
run_logger.log('amlrealworld.QnA-matching.part3-model-training-eval','true')
###Output
_____no_output_____
###Markdown
Access trainQ and testQ from Part 2As we have prepared the _trainQ_ and _testQ_ with learned phrases and tokens from `Part 2: Phrase Learning`, we retrieve the datasets here for further processing._trainQ_ contains 5,153 training examples and _testQ_ contains 1,735 test examples. Also, there are 103 unique answer classes in both datasets.
###Code
workfolder = os.environ.get('AZUREML_NATIVE_SHARE_DIRECTORY')
# paths to trainQ and testQ.
trainQ_path = os.path.join(workfolder, 'trainQ_part2')
testQ_path = os.path.join(workfolder, 'testQ_part2')
# load the training and test data.
trainQ = pd.read_csv(trainQ_path, sep='\t', index_col='Id', encoding='latin1')
testQ = pd.read_csv(testQ_path, sep='\t', index_col='Id', encoding='latin1')
###Output
_____no_output_____
###Markdown
Extract FeaturesSelecting the right set of features is critical for model training. In this section, we show several feature extraction approaches that have proved to yield good performance in text classification use cases. Term Frequency and Inverse Document Frequency (TF-IDF) TF-IDF is a commonly used feature weighting approach for text classification. Each question `d` is typically represented by a feature vector `x` that represents the contents of `d`. Because different questions may have different lengths, it can be useful to apply L1 normalization on the feature vector `x`. Therefore, a normalized `Term Frequency` matrix can be obtained as $x_{w,d} = \frac{n_{w,d}}{\sum_{w'} n_{w',d}}$, where $n_{w,d}$ is the number of times token $w$ occurs in question $d$. Considering all tokens observed in the training questions, we compute the `Inverse Document Frequency` for each token in its standard form, $idf_w = \log\frac{|D|}{|\{d \in D : n_{w,d} > 0\}|}$, where $|D|$ is the total number of training questions. By knowing the `Term Frequency (TF)` matrix and `Inverse Document Frequency (IDF)` vector, we can simply compute the `TF-IDF` matrix by multiplying them together.
###Code
token2IdHashInit = tokensToIds(trainQ['Tokens'], featureHash=None)
# get unique answerId in ascending order
uniqueAnswerId = list(np.unique(trainQ['AnswerId']))
N_wQ = countMatrix(trainQ, token2IdHashInit)
idf = getIDF(N_wQ)
x_wTrain = normalizeTF(trainQ, token2IdHashInit)
x_wTest = normalizeTF(testQ, token2IdHashInit)
tfidfTrain = (x_wTrain.T * idf).T
tfidfTest = (x_wTest.T * idf).T
###Output
_____no_output_____
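###Markdown
For reference only: a similar TF-IDF matrix could also be built with scikit-learn's `TfidfVectorizer`. This is not the approach used by this example's `feature_extractor` module, and scikit-learn's defaults differ (smoothed IDF and L2 normalization instead of the L1-normalized TF used above), so the numbers will not match exactly.
###Code
# Alternative sketch using scikit-learn (for comparison only; numerically different
# from the L1-normalized TF * IDF matrices computed above).
from sklearn.feature_extraction.text import TfidfVectorizer

sk_vectorizer = TfidfVectorizer(tokenizer=str.split, lowercase=False)
tfidf_train_sk = sk_vectorizer.fit_transform(trainQ['Tokens'])
tfidf_test_sk = sk_vectorizer.transform(testQ['Tokens'])
print(tfidf_train_sk.shape, tfidf_test_sk.shape)
###Output
_____no_output_____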
###Markdown
Naive Bayes ScoresBesides using the IDF as the word weighting mechanism, a hypothesis-testing likelihood ratio approach is also implemented here. In this approach, the word weights are associated with the answer classes and are calculated as the log-likelihood ratio $weight_{w,A} = \log\frac{P(w \mid A)}{P(w \mid \bar{A})}$, i.e. the probability of observing token $w$ inside answer class $A$ versus outside of it.By knowing the `Term Frequency (TF)` matrix and `Weight` vector for each class, we can simply compute the `Naive Bayes Scores` matrix for each class by multiplying them together. Feature selectionText classification models often pre-select a set of features (i.e., tokens) which carry the most class-relevant information for further processing, while ignoring words that carry little to no value for identifying classes. A variety of feature selection methods have been previously explored for text processing. In this example, we have had the most success selecting features based on the estimated class posterior probability `P(A|w)`, where `A` is a specific answer class and `w` is a specific token. The maximum a posteriori probability (MAP) estimate of `P(A|w)` is expressed as $P(A \mid w) = \frac{P(w \mid A)\,P(A)}{\sum_{A'} P(w \mid A')\,P(A')}$, estimated from the training counts.Feature selection in this example is performed by selecting, for each answer class, the top `N` tokens which maximize `P(A|w)`. In order to determine the best value for the `TopN` parameter, you can simply run `scripts/naive_bayes.py` with `local` compute context in the Azure Machine Learning Workbench and enter different integer values as `Arguments`.Based on our experiments, `TopN = 19` yields the best result and is demonstrated in this notebook.
###Code
# calculate the count matrix of all training questions.
N_wAInit = countMatrix(trainQ, token2IdHashInit, 'AnswerId', uniqueAnswerId)
P_A = priorProbabilityAnswer(trainQ['AnswerId'], uniqueAnswerId)
P_Aw = posterioriProb(N_wAInit, P_A, uniqueAnswerId)
# select top N important tokens per answer class.
featureHash = feature_selection(P_Aw, token2IdHashInit, topN=19)
token2IdHash = tokensToIds(trainQ['Tokens'], featureHash=featureHash)
N_wA = countMatrix(trainQ, token2IdHash, 'AnswerId', uniqueAnswerId)
alpha = 0.0001
P_w = featureWeights(N_wA, alpha)
beta = 0.0001
P_wA = wordProbabilityInAnswer(N_wA, P_w, beta)
P_wNotA = wordProbabilityNotinAnswer(N_wA, P_w, beta)
NBWeights = np.log(P_wA / P_wNotA)
###Output
_____no_output_____
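###Markdown
The helper functions used above (`posterioriProb`, `feature_selection`, and so on) live in `modules/feature_extractor.py` and are not reproduced in this notebook. Purely as an illustration of the MAP estimate and top-N selection described earlier, a rough sketch is shown below; the exact smoothing and bookkeeping in the real module may differ.
###Code
# Rough, illustrative sketch of P(A|w) estimation and top-N token selection.
# The real implementations are in modules/feature_extractor.py and may differ.
import numpy as np

def posterior_prob_sketch(N_wA, P_A):
    # N_wA: (num_tokens x num_classes) count matrix; P_A: prior probability of each class.
    joint = N_wA / N_wA.sum(axis=0, keepdims=True) * P_A     # ~ P(w|A) * P(A)
    return joint / joint.sum(axis=1, keepdims=True)          # normalize over classes -> P(A|w)

def select_top_n_sketch(P_Aw, token2id, top_n=19):
    # Keep, for each answer class, the top_n tokens with the highest posterior P(A|w).
    id2token = {v: k for k, v in token2id.items()}
    selected = set()
    for col in range(P_Aw.shape[1]):
        for row in np.argsort(-P_Aw[:, col])[:top_n]:
            selected.add(id2token[row])
    return selected
###Output
_____no_output_____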
###Markdown
Train Classification Models and Predict on Test Data Naive Bayes ClassifierWe implement the _Naive Bayes Classifier_ as described in the paper entitled ["MCE Training Techniques for Topic Identification of Spoken Audio Documents"](http://ieeexplore.ieee.org/abstract/document/5742980/).
###Code
beta_A = 0
x_wTest = normalizeTF(testQ, token2IdHash)
Y_test_prob1 = softmax(-beta_A + np.dot(x_wTest.T, NBWeights))
###Output
_____no_output_____
###Markdown
Support Vector Machine (TF-IDF as features)Traditionally, a Support Vector Machine (SVM) finds a hyperplane that maximally separates positive and negative training examples in a vector space. In its standard form, an SVM is a two-class classifier. To create an SVM model for a problem with multiple classes, a one-versus-rest (OVR) SVM classifier is typically learned for each answer class.The `sklearn` Python package implements such a classifier and we use that implementation in this example. More information about this `LinearSVC` classifier can be found [here](http://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html).
###Code
X_train, Y_train = tfidfTrain.T, np.array(trainQ['AnswerId'])
clf = svm.LinearSVC(dual=True, multi_class='ovr', penalty='l2', C=1, loss="squared_hinge", random_state=1)
clf.fit(X_train, Y_train)
X_test = tfidfTest.T
Y_test_prob2 = softmax(clf.decision_function(X_test))
###Output
_____no_output_____
###Markdown
Random Forest (NB Scores as features)Similar to the above one-versus-rest SVM classifier, we also implement a one-versus-rest Random Forest classifier based on the two-class Random Forest classifier from `sklearn`. More information about the `RandomForestClassifier` can be found [here](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html).In each base classifier, we dynamically compute the Naive Bayes scores for the positive class and use them as the features. Since the number of negative examples is much larger than the number of positive examples, we keep all positive examples and randomly select negative examples based on a negative-to-positive ratio to obtain balanced training data. This is controlled by the `ratio` parameter in the `ovrClassifier` function below.In this classifier, we need to tune two hyper-parameters: `TopN` and `n_estimators`. `TopN` is the same parameter as we learned in the _Feature Selection_ step and `n_estimators` indicates the number of trees to be constructed in the Random Forest classifier. To identify the best values for the hyper-parameters, you can run `scripts/random_forest.py` with `local` compute context in the Azure Machine Learning Workbench and enter different integer values as `Arguments`. The value of `TopN` and the value of `n_estimators` should be space delimited.Based on our experiments, `TopN = 19` and `n_estimators = 250` yield the best results, and are demonstrated in this notebook.
###Code
# train one-vs-rest classifier using NB scores as features.
def ovrClassifier(trainLabels, x_wTrain, x_wTest, NBWeights, clf, ratio):
uniqueLabel = np.unique(trainLabels)
dummyLabels = pd.get_dummies(trainLabels)
numTest = x_wTest.shape[1]
Y_test_prob = np.zeros(shape=(numTest, len(uniqueLabel)))
for i in range(len(uniqueLabel)):
X_train_all, Y_train_all = x_wTrain.T * NBWeights[:, i], dummyLabels.iloc[:, i]
X_test = x_wTest.T * NBWeights[:, i]
# with sample selection.
if ratio is not None:
# ratio = # of Negative/# of Positive
posIdx = np.where(Y_train_all == 1)[0]
negIdx = np.random.choice(np.where(Y_train_all == 0)[0], ratio*len(posIdx))
allIdx = np.concatenate([posIdx, negIdx])
X_train, Y_train = X_train_all[allIdx], Y_train_all.iloc[allIdx]
else: # without sample selection.
X_train, Y_train = X_train_all, Y_train_all
clf.fit(X_train, Y_train)
if hasattr(clf, "decision_function"):
Y_test_prob[:, i] = clf.decision_function(X_test)
else:
Y_test_prob[:, i] = clf.predict_proba(X_test)[:, 1]
return softmax(Y_test_prob)
x_wTrain = normalizeTF(trainQ, token2IdHash)
x_wTest = normalizeTF(testQ, token2IdHash)
clf = RandomForestClassifier(n_estimators=250, criterion='entropy', random_state=1)
Y_test_prob3 = ovrClassifier(trainQ["AnswerId"], x_wTrain, x_wTest, NBWeights, clf, ratio=3)
###Output
_____no_output_____
###Markdown
Ensemble ModelWe build an ensemble model by aggregating the predicted probabilities from three previously trained classifiers. The base classifiers are equally weighted in this ensemble method.
###Code
Y_test_prob_aggr = np.mean([Y_test_prob1, Y_test_prob2, Y_test_prob3], axis=0)
###Output
_____no_output_____
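###Markdown
The mean above weights the three base classifiers equally. If one base model turns out to be stronger, a weighted average is a straightforward variation; the weights below are arbitrary illustrative values, not tuned ones.
###Code
# Illustrative variation: a weighted ensemble instead of the equal-weight mean above.
# The weights are example values only.
Y_test_prob_weighted = np.average(
    [Y_test_prob1, Y_test_prob2, Y_test_prob3],
    axis=0,
    weights=[0.4, 0.3, 0.3])
###Output
_____no_output_____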
###Markdown
Evaluate Model PerformanceTwo different evaluation metrics are used to assess performance. 1. `Average Rank (AR)`: indicates the average position where the correct answer is found in the list of retrieved Q&A pairs (out of the full set of 103 answer classes). 2. `Top 3 Percentage`: indicates the percentage of test questions for which the correct answer is retrieved within the top three choices of the returned ranked list.
###Code
# get the rank of answerIds for a given question.
def rank(frame, scores, uniqueAnswerId):
frame['SortedAnswers'] = list(np.array(uniqueAnswerId)[np.argsort(-scores, axis=1)])
rankList = []
for i in range(len(frame)):
rankList.append(np.where(frame['SortedAnswers'].iloc[i] == frame['AnswerId'].iloc[i])[0][0] + 1)
frame['Rank'] = rankList
return frame
testQ = rank(testQ, Y_test_prob_aggr, uniqueAnswerId)
AR = np.floor(testQ['Rank'].mean())
top3 = round(len(testQ.query('Rank <= 3'))/len(testQ), 3)
print('Average of rank: ' + str(AR))
print('Percentage of questions find answers in the first 3 choices: ' + str(top3))
###Output
Average of rank: 5.0
Percentage of questions find answers in the first 3 choices: 0.684
|
Recommendation_Engine.ipynb | ###Markdown
Recommendation EngineThis is a recommendation engine built on electronics company X's sales dataset. The dataset contains information about each sale that was made; the columns include the **Billing Date, item purchased (MaterialDesc), item code (MaterialNumber), division of items (fans, wires, etc.), Customer Name,** and other features. It has **91046** entries and **15** features per entry, and the data has already been pre-processed to remove any missing values.Our main goal is to recommend similar items to a customer who has already purchased one or more items. In order to use the algorithm, a binary **"rating"** column was added to signify each user-item interaction: a rating of **1** is recorded for every item a user has purchased. Users are small shops and firms who place orders for electronics products.
###Code
# libraries required to work with arrays and dataframes.
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics.pairwise import pairwise_distances
from sklearn.metrics.pairwise import cosine_similarity
from scipy import sparse
# having a peek at the dataset
location = './RI/data/Recommendation_data.csv'
data = pd.read_csv(location, encoding = 'latin1')
# removing column for non-disclosure of company name
data.drop("DivisionDesc", axis = 1, inplace = True)
data.head()
data.shape
###Output
_____no_output_____
###Markdown
For our recommendation engine we will only require three columns: the **user name, the material, and the rating** of their interaction. We also need to find out the number of unique users and items in the dataset. From now on we will continue in terms of **users** and **items**.
###Code
def create_data(dataframe, user_id = 'CustomerName', item_id = 'MaterialDesc', rating = 'rating'):
    # creating a new dataframe containing only the user, item and rating columns
    features = [user_id, item_id, rating]
    df = dataframe[features]
    df.to_json("./RI/data/Recommendation_data.json", orient = "index")
    return df
df = create_data(data)
df.head()
# finding the distinct number of users and items
n_users = len(df.CustomerName.unique())
n_items = len(df.MaterialDesc.unique())
print("Users: ", n_users, "Items: ", n_items)
###Output
Users: 10 Items: 4783
###Markdown
Recommender SystemsThere are multiple recommendation system algorithms, but for this type of problem **Item-Item Collaborative Filtering** is used.The two most ubiquitous types of recommender systems are **Content-Based** and **Collaborative Filtering (CF)**. Collaborative filtering produces recommendations based on the knowledge of users' attitudes toward items, that is, it uses the "wisdom of the crowd" to recommend items. In contrast, content-based recommender systems focus on the attributes of the items and give you recommendations based on the similarity between them. Let us look more closely at the Collaborative Filtering algorithm. Collaborative Filtering AlgorithmCollaborative Filtering approaches can be divided into two main sections: **user-item filtering** and **item-item filtering**. User-Item filteringUser-item filtering takes a particular user, finds users that are similar to that user based on similarity of ratings, and recommends items that those similar users liked. This algorithm is very effective but takes a lot of time and resources, since it requires computing a similarity score for every pair of customers. Therefore, for platforms with a large user base, this algorithm is hard to implement without a very strong parallelizable system. Item-Item filteringItem-item filtering takes an item, finds users who liked that item, and finds other items that those users or similar users also liked. It takes items and outputs other items as recommendations. This algorithm is far less resource consuming than user-user collaborative filtering. Hence, for a new customer the algorithm takes far less time than user-user collaborative filtering, as we don't need all similarity scores between customers. And with a fixed number of products, the product-product look-alike matrix is fixed over time. * *Item-Item Collaborative Filtering: "Users who liked this item also liked …"** *User-Item Collaborative Filtering: "Users who are similar to you also liked …"* In both cases, you create a user-item matrix which you build from the entire dataset. Since you have split the data into **testing** and **training**, you will need to create two **10 × 4783** matrices. The training matrix contains 75% of the ratings and the testing matrix contains 25% of the ratings.
###Code
# dividing our dataset into training and test sets
train_data, test_data = train_test_split(df, test_size = 0.25)
# creating an empty dataset having row names as users and column names as itmes
customers = df['CustomerName'].drop_duplicates()
materials = df['MaterialDesc'].drop_duplicates()
train_data_items = pd.DataFrame(0, index= customers, columns= materials)
test_data_items = pd.DataFrame(0, index= customers, columns= materials)
# filling the dataset with the rating to generate a sparse dataset
for row in train_data.itertuples():
train_data_items[row[2]][row[1]] = row[3]
for row in test_data.itertuples():
test_data_items[row[2]][row[1]] = row[3]
train_data_items.head(10)
###Output
_____no_output_____
###Markdown
Item-Item Based Collaborative filtering for the datasetIn our dataset we have set the ratings as **1** if a user has purchased that item and **0** if he has not. Now our next step is to:* Find similar items by using a similarity metric* For a user, recommend the items most similar to the items (s)he already likes Similarity MatrixA similarity matrix is a user-user or item-item matrix consisting of the similarity metric for each user or item pair. A **similarity metric** is a measure of similarity between two item or user vectors. The most commonly used similarity metrics are:**Jaccard Similarity**: * Similarity is based on the number of users which have rated items A and B divided by the number of users who have rated either A or B * It is typically used where we don't have a numeric rating but just a **boolean value**, like a product being bought or an ad being clicked. Jaccard similarity is given by $S_{ij} = \frac{p}{p+q+r}$, where p = number of attributes positive for both objects, q = number of attributes 1 for i and 0 for j, and r = number of attributes 0 for i and 1 for j. **Cosine Similarity:*** Similarity is the cosine of the angle between the two item vectors of A and B* The closer the vectors, the smaller the angle and the larger the cosine. Cosine similarity is given by $\text{cosine similarity} = \frac{A \cdot B}{\|A\|\,\|B\|}$, where A and B are object vectors.**Pearson Similarity*** Similarity is the Pearson coefficient between the two vectors.* The Pearson coefficient is the cosine similarity of mean-centered (normalized) user or item vectors. Jaccard SimilarityThe metric used for similarity here is Jaccard similarity, as we have boolean ratings.
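###Markdown
A tiny worked example of the Jaccard formula on two items, each represented as a binary purchase vector over four users (the items and users here are made up for illustration):
###Code
# Worked example: Jaccard similarity between two items.
# Item A was bought by users 1, 2, 3; item B was bought by users 2, 3, 4.
item_a = [1, 1, 1, 0]
item_b = [0, 1, 1, 1]
p = sum(a == 1 and b == 1 for a, b in zip(item_a, item_b))  # bought both: 2
q = sum(a == 1 and b == 0 for a, b in zip(item_a, item_b))  # only A: 1
r = sum(a == 0 and b == 1 for a, b in zip(item_a, item_b))  # only B: 1
print(p / (p + q + r))  # 2 / (2 + 1 + 1) = 0.5
###Output
_____no_output_____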
###Code
def calculate_similarity(data_items):
"""Calculate the column-wise cosine similarity for a sparse
matrix. Return a new dataframe matrix with similarities.
"""
    data_sparse = sparse.csr_matrix(data_items)
    # compute pairwise Jaccard similarity between the item columns of the matrix passed in
    similarities = 1 - pairwise_distances(data_items.transpose(), metric='jaccard')
sim = pd.DataFrame(data=similarities, index= data_items.columns, columns= data_items.columns)
return sim
# Build the similarity matrix
sim_matrix = calculate_similarity(train_data_items)
print(sim_matrix.shape)
# Let's get the top 10 similar items for 1200 mm FAN FUSION PEARL IVORY
print(sim_matrix.loc['1200 mm FAN FUSION PEARL IVORY'].nlargest(10))
###Output
C:\Users\hp\Anaconda3\lib\site-packages\sklearn\utils\validation.py:475: DataConversionWarning: Data with input dtype int64 was converted to bool by check_pairwise_arrays.
warnings.warn(msg, DataConversionWarning)
###Markdown
Predicting user-item ratings for similar itemsLet $R_x$ be the vector of user $x$'s ratings and $N$ be the set of the $k$ items most similar to item $i$ among the items purchased by $x$. The predicted rating of user $x$ for item $i$ can then be written as the similarity-weighted average $\hat{r}_{x,i} = \frac{\sum_{j \in N} s_{ij}\, r_{x,j}}{\sum_{j \in N} s_{ij}}$, where $s_{ij}$ is the similarity between items $i$ and $j$.A neighbourhood of 10 items is used for each item purchased by a user. We can tweak this value according to the number of similar items we want to include: for more similar items we can increase this number, and vice versa.
###Code
# function that determines the top 5 recommendations for a given user
def recommendations(user, data_matrix, similarity_matrix):
#------------------------
# USER-ITEM CALCULATIONS
#------------------------
# Construct a new dataframe with the 10 closest neighbours (most similar)
    # for each item.
data_neighbours = pd.DataFrame(index=similarity_matrix.columns, columns=range(1,11))
for i in range(0, len(similarity_matrix.columns)):
data_neighbours.iloc[i,:10] = similarity_matrix.iloc[0:,i].sort_values(ascending=False)[:10].index
# Get the items the user has purchased.
known_user_likes = data_matrix.loc[user]
known_user_likes = known_user_likes[known_user_likes >0].index.values
# Construct the neighbourhood from the most similar items to the
# ones our user has already purchased.
most_similar_to_likes = data_neighbours.loc[known_user_likes]
similar_list = most_similar_to_likes.values.tolist()
similar_list = list(set([item for sublist in similar_list for item in sublist]))
neighbourhood = similarity_matrix[similar_list].loc[similar_list]
# A user vector containing only the neighbourhood items and
# the known user purchases.
user_vector = data_matrix.loc[user].loc[similar_list]
# Calculate the score.
score = neighbourhood.dot(user_vector).div(neighbourhood.sum(axis=1))
# Drop the known purchases.
for i in known_user_likes:
if i in score.index:
score = score.drop(i)
return score.nlargest(5)
print(recommendations("K Raj and Co", train_data_items, sim_matrix))
###Output
MaterialDesc
6A 2 pin Socket Oro 0.830882
SENZO 5S 15LTR SM FP WHITE-SWH 0.826334
1200 mm FABIO PLATINUM BRUSHED NICKEL 0.822806
1320 mm FAN DEW VIKING TEAK BRUSH NICKEL 0.822806
230 mm BIRDIE PERSONAL YELLOW MR FAN 0.822806
dtype: float64
###Markdown
Bonus: Cosine SimilarityBefore using cosine similarity we need to normalize our ratings so that each user vector is a unit vector.The idea behind normalizing the user vectors is that a user with many ratings contributes less to any individual rating. This is to say that a like from a user who has only liked 10 items is more valuable to us than a like from someone who likes everything she comes across.
###Code
def normalize(matrix):
    # We might need to normalize the user vectors to unit vectors for some algorithms to work effectively.
# magnitude = sqrt(x2 + y2 + z2 + ...) ; here x,y,z.. are item vectors
magnitude = np.sqrt(np.square(matrix).sum(axis=1))
# unitvector = (x / magnitude, y / magnitude, z / magnitude, ...)
matrix = matrix.divide(magnitude, axis='index')
return matrix
#normalized training data
norm_data_items = normalize(train_data_items)
norm_data_items.head()
def calculate_similarity(data_items):
"""Calculate the column-wise cosine similarity for a sparse
matrix. Return a new dataframe matrix with similarities.
"""
    data_sparse = sparse.csr_matrix(data_items)
    # compute pairwise cosine similarity between the item columns of the matrix passed in
    similarities = 1 - pairwise_distances(data_items.transpose(), metric='cosine')
sim = pd.DataFrame(data=similarities, index= data_items.columns, columns= data_items.columns)
return sim
# Build the similarity matrix
cos_sim_matrix = calculate_similarity(norm_data_items)
# Let's get the top 10 similar items for 1200 mm FAN FUSION PEARL IVORY
print(cos_sim_matrix.loc['1200 mm FAN FUSION PEARL IVORY'].nlargest(10))
###Output
MaterialDesc
1200 mm FAN FUSION PEARL IVORY 1.000000
1200 mm FAN XP390 PLUS BROWN 1.000000
1200 mm FAN XP390 PLUS ELEGANT WHITE 1.000000
1200 mm FAN FUSION BEIGE BROWN 0.925820
1200 mm FAN FUSION SILVER BLUE 0.925820
1200 mm FAN NICOLA PEARL WHITE SILVER 0.912871
1200 mm FAN AREOLE PEARL BROWN 0.912871
900 mm FAN FUSION PEARL IVORY 0.912871
175 mm I-COOL PERSONAL FAN BLACK GREY 0.912871
230 mm FAN VENTIL AIR DB 0.912871
Name: 1200 mm FAN FUSION PEARL IVORY, dtype: float64
###Markdown
EvaluationThere are many evaluation metrics, but one of the most popular metrics used to evaluate the accuracy of predicted ratings is Root Mean Squared Error (RMSE), $RMSE = \sqrt{\frac{1}{N}\sum_{i}(\hat{r}_i - r_i)^2}$.You can use the mean_squared_error (MSE) function from sklearn, where the RMSE is just the square root of MSE.Since you only want to consider predicted ratings that are in the test dataset, you filter out all other elements in the prediction matrix with prediction[ground_truth.nonzero()].
###Code
### predicting the score for each user-item interaction ###
# training data matrix
train_data_matrix = np.array(train_data_items.values)
# jaccard similarity matrix
similarity_matrix = np.array(sim_matrix.values)
# cosine similarity matrix
cos_similarity_matrix = np.array(cos_sim_matrix.values)
def predict(ratings, similarity, type='item'):
if type == 'user':
mean_user_rating = ratings.mean(axis=1)
#You use np.newaxis so that mean_user_rating has same format as ratings
ratings_diff = (ratings - mean_user_rating[:, np.newaxis])
pred = mean_user_rating[:, np.newaxis] + similarity.dot(ratings_diff) / np.array([np.abs(similarity).sum(axis=1)]).T
elif type == 'item':
pred = ratings.dot(similarity) / np.array([np.abs(similarity).sum(axis=1)])
return pred
# predicted rating matrix
item_prediction = predict(train_data_matrix, similarity_matrix, type='item')
# replacing NA values with zeroes for jaccard similarity
np.nan_to_num(item_prediction, copy = False)
# predicted rating matrix using cosine similarity
item_prediction_cos = predict(train_data_matrix, cos_similarity_matrix, type='item')
from sklearn.metrics import mean_squared_error
from math import sqrt
def rmse(prediction, ground_truth):
prediction = prediction[ground_truth.nonzero()].flatten()
ground_truth = ground_truth[ground_truth.nonzero()].flatten()
return sqrt(mean_squared_error(prediction, ground_truth))
# evaluating the RMSE on the test data matrix
jaccard_error = rmse(item_prediction, np.array(test_data_items))
cosine_error = rmse(item_prediction_cos, np.array(test_data_items))
print('Item-based CF RMSE: ', str(jaccard_error))
print('Item-based CF RMSE(cos): ', str(cosine_error))
# plotting the rmse values for jaccard and cosine similarities
import matplotlib.pyplot as plt
fig = plt.figure(figsize = (8, 4))
ax1 = fig.add_subplot(121)
ax1.bar(['jaccard', 'cosine'], [jaccard_error, cosine_error])
ax1.set_ylabel('RMSE(Error)', size = 12)
plt.suptitle('Comparing Jaccard and Cosine Metrics', size = 14)
plt.show()
###Output
_____no_output_____
###Markdown
As a result, we can conclude that for **binary ratings** (where we only know whether a user has purchased an item or not), using **Jaccard similarity** is the better option.
###Code
# Saving the recommendations in json format
l = []
for customer in customers:
rec = recommendations(customer, train_data_items, sim_matrix)
for i,j in zip(rec, rec.index):
d = {'customer': customer, 'item': j, 'score': i}
l.append(d)
###Output
_____no_output_____
###Markdown
Persisting our recommendationsThe final recommendations for each user, along with their scores, are shown below.
###Code
import json
with open("./RI/data/Recommendations.json", "w") as json_data:
json.dump(l, json_data)
jd = pd.read_json('./RI/data/Recommendations.json', orient = 'records')
jd.head(10)
###Output
_____no_output_____ |
3 - Redes Neurais/MLP PrevEmprego/PrevEmprego.ipynb | ###Markdown
Hiring Prediction Importing libraries
###Code
import pandas as pd
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import metrics
import numpy as np
###Output
_____no_output_____
###Markdown
Loading the CSVIn this step, we will load ALL of the fields from the CSV.
###Code
dataframe = pd.read_csv("emprego.csv")
dataframe.describe()
###Output
_____no_output_____
###Markdown
As requested, we will select the fields: `ssc_p`, `hsc_p`, `degree_p`, `workex`, `etest_p`, and `status`.
###Code
target = dataframe['status']
features = ['ssc_p', 'hsc_p', 'degree_p', 'workex', 'etest_p']
dataframe = dataframe[features]
###Output
_____no_output_____
###Markdown
`target` holds our outputs. The desired fields were stored in `features` and then extracted from `dataframe`. Transformation and SplitThe `status` and `workex` fields need to be transformed. The remaining fields need to be normalized.
###Code
# Convert 'Placed' -> 1, 'Not Placed' -> 0
target = np.where(target == "Placed", 1, 0)
# Convert 'Yes' -> 1, 'No' -> 0
dataframe['workex'] = np.where(dataframe.workex == "Yes", 1, 0)
# Train/test split
X_train, X_test, y_train, y_test = train_test_split(dataframe, target, test_size=0.3)
# Normalize the training and test features
scaler = MinMaxScaler()
scaler = scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
print("Treinamento: ", len(X_train))
print("Teste: ", len(X_test))
###Output
Treinamento: 150
Teste: 65
###Markdown
Training
###Code
model = MLPClassifier(hidden_layer_sizes=(30,), verbose=True, max_iter=1000)
model.fit(X_train, y_train)
###Output
Iteration 1, loss = 0.62067912
Iteration 2, loss = 0.61786596
Iteration 3, loss = 0.61511217
Iteration 4, loss = 0.61242276
Iteration 5, loss = 0.60978927
Iteration 6, loss = 0.60721648
Iteration 7, loss = 0.60470062
Iteration 8, loss = 0.60224726
Iteration 9, loss = 0.59986094
Iteration 10, loss = 0.59754609
Iteration 11, loss = 0.59530939
Iteration 12, loss = 0.59314475
Iteration 13, loss = 0.59105058
Iteration 14, loss = 0.58901763
Iteration 15, loss = 0.58704509
Iteration 16, loss = 0.58513331
Iteration 17, loss = 0.58328362
Iteration 18, loss = 0.58150010
Iteration 19, loss = 0.57978229
Iteration 20, loss = 0.57812928
Iteration 21, loss = 0.57653436
Iteration 22, loss = 0.57500203
Iteration 23, loss = 0.57352845
Iteration 24, loss = 0.57211607
Iteration 25, loss = 0.57075652
Iteration 26, loss = 0.56944907
Iteration 27, loss = 0.56819150
Iteration 28, loss = 0.56698178
Iteration 29, loss = 0.56581651
Iteration 30, loss = 0.56470067
Iteration 31, loss = 0.56363051
Iteration 32, loss = 0.56260071
Iteration 33, loss = 0.56160797
Iteration 34, loss = 0.56064875
Iteration 35, loss = 0.55972341
Iteration 36, loss = 0.55883173
Iteration 37, loss = 0.55797089
Iteration 38, loss = 0.55714217
Iteration 39, loss = 0.55634460
Iteration 40, loss = 0.55557943
Iteration 41, loss = 0.55483762
Iteration 42, loss = 0.55411565
Iteration 43, loss = 0.55341456
Iteration 44, loss = 0.55273043
Iteration 45, loss = 0.55206568
Iteration 46, loss = 0.55142288
Iteration 47, loss = 0.55079720
Iteration 48, loss = 0.55018913
Iteration 49, loss = 0.54959006
Iteration 50, loss = 0.54900487
Iteration 51, loss = 0.54843487
Iteration 52, loss = 0.54787345
Iteration 53, loss = 0.54731744
Iteration 54, loss = 0.54676753
Iteration 55, loss = 0.54622548
Iteration 56, loss = 0.54568101
Iteration 57, loss = 0.54514009
Iteration 58, loss = 0.54459905
Iteration 59, loss = 0.54405616
Iteration 60, loss = 0.54351586
Iteration 61, loss = 0.54297519
Iteration 62, loss = 0.54243233
Iteration 63, loss = 0.54188940
Iteration 64, loss = 0.54134287
Iteration 65, loss = 0.54079460
Iteration 66, loss = 0.54024088
Iteration 67, loss = 0.53969146
Iteration 68, loss = 0.53914521
Iteration 69, loss = 0.53859646
Iteration 70, loss = 0.53804977
Iteration 71, loss = 0.53750485
Iteration 72, loss = 0.53696142
Iteration 73, loss = 0.53641775
Iteration 74, loss = 0.53586903
Iteration 75, loss = 0.53532413
Iteration 76, loss = 0.53478418
Iteration 77, loss = 0.53424225
Iteration 78, loss = 0.53369710
Iteration 79, loss = 0.53315144
Iteration 80, loss = 0.53260281
Iteration 81, loss = 0.53205364
Iteration 82, loss = 0.53150189
Iteration 83, loss = 0.53095052
Iteration 84, loss = 0.53040369
Iteration 85, loss = 0.52985900
Iteration 86, loss = 0.52931449
Iteration 87, loss = 0.52876854
Iteration 88, loss = 0.52822416
Iteration 89, loss = 0.52768224
Iteration 90, loss = 0.52714129
Iteration 91, loss = 0.52659981
Iteration 92, loss = 0.52605904
Iteration 93, loss = 0.52551768
Iteration 94, loss = 0.52497866
Iteration 95, loss = 0.52443920
Iteration 96, loss = 0.52389938
Iteration 97, loss = 0.52335856
Iteration 98, loss = 0.52281610
Iteration 99, loss = 0.52227258
Iteration 100, loss = 0.52172683
Iteration 101, loss = 0.52117985
Iteration 102, loss = 0.52063027
Iteration 103, loss = 0.52007914
Iteration 104, loss = 0.51953069
Iteration 105, loss = 0.51898437
Iteration 106, loss = 0.51843619
Iteration 107, loss = 0.51788659
Iteration 108, loss = 0.51733502
Iteration 109, loss = 0.51678107
Iteration 110, loss = 0.51622664
Iteration 111, loss = 0.51567146
Iteration 112, loss = 0.51511748
Iteration 113, loss = 0.51456147
Iteration 114, loss = 0.51400299
Iteration 115, loss = 0.51344115
Iteration 116, loss = 0.51287678
Iteration 117, loss = 0.51230843
Iteration 118, loss = 0.51173720
Iteration 119, loss = 0.51116338
Iteration 120, loss = 0.51058392
Iteration 121, loss = 0.51000409
Iteration 122, loss = 0.50942553
Iteration 123, loss = 0.50884698
Iteration 124, loss = 0.50826697
Iteration 125, loss = 0.50768402
Iteration 126, loss = 0.50709671
Iteration 127, loss = 0.50650974
Iteration 128, loss = 0.50591979
Iteration 129, loss = 0.50532911
Iteration 130, loss = 0.50473701
Iteration 131, loss = 0.50414631
Iteration 132, loss = 0.50355396
Iteration 133, loss = 0.50295876
Iteration 134, loss = 0.50235941
Iteration 135, loss = 0.50175828
Iteration 136, loss = 0.50115335
Iteration 137, loss = 0.50054253
Iteration 138, loss = 0.49992830
Iteration 139, loss = 0.49930875
Iteration 140, loss = 0.49868306
Iteration 141, loss = 0.49804963
Iteration 142, loss = 0.49741043
Iteration 143, loss = 0.49677011
Iteration 144, loss = 0.49612178
Iteration 145, loss = 0.49547291
Iteration 146, loss = 0.49482422
Iteration 147, loss = 0.49416420
Iteration 148, loss = 0.49350027
Iteration 149, loss = 0.49283054
Iteration 150, loss = 0.49215862
Iteration 151, loss = 0.49147693
Iteration 152, loss = 0.49079526
Iteration 153, loss = 0.49011585
Iteration 154, loss = 0.48942679
Iteration 155, loss = 0.48873179
Iteration 156, loss = 0.48803893
Iteration 157, loss = 0.48734092
Iteration 158, loss = 0.48664667
Iteration 159, loss = 0.48594066
Iteration 160, loss = 0.48523759
Iteration 161, loss = 0.48454240
Iteration 162, loss = 0.48384998
Iteration 163, loss = 0.48315392
Iteration 164, loss = 0.48245296
Iteration 165, loss = 0.48175069
Iteration 166, loss = 0.48104555
Iteration 167, loss = 0.48033434
Iteration 168, loss = 0.47962361
Iteration 169, loss = 0.47891340
Iteration 170, loss = 0.47820133
Iteration 171, loss = 0.47748761
Iteration 172, loss = 0.47677278
Iteration 173, loss = 0.47605373
Iteration 174, loss = 0.47533256
Iteration 175, loss = 0.47460767
Iteration 176, loss = 0.47388171
Iteration 177, loss = 0.47315515
Iteration 178, loss = 0.47243114
Iteration 179, loss = 0.47170821
Iteration 180, loss = 0.47098412
Iteration 181, loss = 0.47025871
Iteration 182, loss = 0.46953167
Iteration 183, loss = 0.46880673
Iteration 184, loss = 0.46808086
Iteration 185, loss = 0.46735449
Iteration 186, loss = 0.46662739
Iteration 187, loss = 0.46589939
Iteration 188, loss = 0.46517007
Iteration 189, loss = 0.46444160
Iteration 190, loss = 0.46371312
Iteration 191, loss = 0.46298444
Iteration 192, loss = 0.46225556
Iteration 193, loss = 0.46152606
Iteration 194, loss = 0.46079635
Iteration 195, loss = 0.46006671
Iteration 196, loss = 0.45933683
Iteration 197, loss = 0.45860675
Iteration 198, loss = 0.45787683
Iteration 199, loss = 0.45714579
Iteration 200, loss = 0.45641559
Iteration 201, loss = 0.45568459
Iteration 202, loss = 0.45495382
Iteration 203, loss = 0.45422919
Iteration 204, loss = 0.45350578
Iteration 205, loss = 0.45278324
Iteration 206, loss = 0.45206045
Iteration 207, loss = 0.45133732
Iteration 208, loss = 0.45061663
Iteration 209, loss = 0.44989918
Iteration 210, loss = 0.44918206
Iteration 211, loss = 0.44846608
Iteration 212, loss = 0.44775145
Iteration 213, loss = 0.44703941
Iteration 214, loss = 0.44632708
Iteration 215, loss = 0.44561427
Iteration 216, loss = 0.44490106
Iteration 217, loss = 0.44419108
Iteration 218, loss = 0.44348092
Iteration 219, loss = 0.44277128
Iteration 220, loss = 0.44206247
Iteration 221, loss = 0.44135502
Iteration 222, loss = 0.44064822
Iteration 223, loss = 0.43994994
Iteration 224, loss = 0.43925419
Iteration 225, loss = 0.43856165
Iteration 226, loss = 0.43787139
Iteration 227, loss = 0.43718129
Iteration 228, loss = 0.43649248
Iteration 229, loss = 0.43580806
Iteration 230, loss = 0.43512598
Iteration 231, loss = 0.43444488
Iteration 232, loss = 0.43376510
Iteration 233, loss = 0.43308695
Iteration 234, loss = 0.43241094
Iteration 235, loss = 0.43173705
Iteration 236, loss = 0.43106685
Iteration 237, loss = 0.43039886
Iteration 238, loss = 0.42973244
Iteration 239, loss = 0.42906723
Iteration 240, loss = 0.42840309
Iteration 241, loss = 0.42773995
Iteration 242, loss = 0.42707780
Iteration 243, loss = 0.42641688
Iteration 244, loss = 0.42575762
Iteration 245, loss = 0.42509962
Iteration 246, loss = 0.42444208
Iteration 247, loss = 0.42378577
Iteration 248, loss = 0.42313182
Iteration 249, loss = 0.42247940
Iteration 250, loss = 0.42182807
Iteration 251, loss = 0.42117776
Iteration 252, loss = 0.42052892
Iteration 253, loss = 0.41988166
Iteration 254, loss = 0.41923618
Iteration 255, loss = 0.41859082
Iteration 256, loss = 0.41794733
Iteration 257, loss = 0.41730523
Iteration 258, loss = 0.41666323
Iteration 259, loss = 0.41602256
Iteration 260, loss = 0.41538465
Iteration 261, loss = 0.41474846
Iteration 262, loss = 0.41411397
Iteration 263, loss = 0.41348096
Iteration 264, loss = 0.41284907
Iteration 265, loss = 0.41222374
Iteration 266, loss = 0.41159823
Iteration 267, loss = 0.41097308
Iteration 268, loss = 0.41034985
Iteration 269, loss = 0.40972762
Iteration 270, loss = 0.40910663
Iteration 271, loss = 0.40848672
Iteration 272, loss = 0.40786796
Iteration 273, loss = 0.40725043
Iteration 274, loss = 0.40663417
Iteration 275, loss = 0.40601974
Iteration 276, loss = 0.40540675
Iteration 277, loss = 0.40479494
Iteration 278, loss = 0.40418487
Iteration 279, loss = 0.40357895
Iteration 280, loss = 0.40297287
Iteration 281, loss = 0.40236797
Iteration 282, loss = 0.40176424
Iteration 283, loss = 0.40116304
Iteration 284, loss = 0.40056363
Iteration 285, loss = 0.39996530
Iteration 286, loss = 0.39936843
Iteration 287, loss = 0.39877545
Iteration 288, loss = 0.39818544
Iteration 289, loss = 0.39759519
Iteration 290, loss = 0.39700584
Iteration 291, loss = 0.39641933
Iteration 292, loss = 0.39583390
Iteration 293, loss = 0.39524962
Iteration 294, loss = 0.39466680
Iteration 295, loss = 0.39408755
Iteration 296, loss = 0.39351168
Iteration 297, loss = 0.39293853
Iteration 298, loss = 0.39236568
Iteration 299, loss = 0.39179470
Iteration 300, loss = 0.39122849
Iteration 301, loss = 0.39066378
Iteration 302, loss = 0.39009977
Iteration 303, loss = 0.38953882
Iteration 304, loss = 0.38897996
Iteration 305, loss = 0.38842294
Iteration 306, loss = 0.38786780
Iteration 307, loss = 0.38731424
Iteration 308, loss = 0.38676287
Iteration 309, loss = 0.38621348
Iteration 310, loss = 0.38566600
Iteration 311, loss = 0.38512058
Iteration 312, loss = 0.38457699
Iteration 313, loss = 0.38403525
Iteration 314, loss = 0.38349579
Iteration 315, loss = 0.38295848
Iteration 316, loss = 0.38242396
Iteration 317, loss = 0.38189075
Iteration 318, loss = 0.38135905
Iteration 319, loss = 0.38082928
Iteration 320, loss = 0.38030134
Iteration 321, loss = 0.37977531
Iteration 322, loss = 0.37925127
Iteration 323, loss = 0.37872930
Iteration 324, loss = 0.37820985
Iteration 325, loss = 0.37769271
Iteration 326, loss = 0.37717910
Iteration 327, loss = 0.37666570
Iteration 328, loss = 0.37615651
Iteration 329, loss = 0.37564876
Iteration 330, loss = 0.37514467
Iteration 331, loss = 0.37464206
Iteration 332, loss = 0.37414030
Iteration 333, loss = 0.37364219
Iteration 334, loss = 0.37314568
Iteration 335, loss = 0.37265016
Iteration 336, loss = 0.37215796
Iteration 337, loss = 0.37166827
Iteration 338, loss = 0.37118027
Iteration 339, loss = 0.37069492
Iteration 340, loss = 0.37021213
Iteration 341, loss = 0.36973101
Iteration 342, loss = 0.36925180
Iteration 343, loss = 0.36877507
Iteration 344, loss = 0.36830113
Iteration 345, loss = 0.36782962
Iteration 346, loss = 0.36736048
Iteration 347, loss = 0.36689379
Iteration 348, loss = 0.36642844
Iteration 349, loss = 0.36596651
Iteration 350, loss = 0.36550515
Iteration 351, loss = 0.36504701
Iteration 352, loss = 0.36459308
Iteration 353, loss = 0.36414580
Iteration 354, loss = 0.36369651
Iteration 355, loss = 0.36325096
Iteration 356, loss = 0.36281160
Iteration 357, loss = 0.36236854
Iteration 358, loss = 0.36192863
Iteration 359, loss = 0.36149795
Iteration 360, loss = 0.36106525
Iteration 361, loss = 0.36063276
Iteration 362, loss = 0.36020127
Iteration 363, loss = 0.35977881
Iteration 364, loss = 0.35935252
Iteration 365, loss = 0.35893238
Iteration 366, loss = 0.35851490
Iteration 367, loss = 0.35809765
Iteration 368, loss = 0.35768100
Iteration 369, loss = 0.35726592
Iteration 370, loss = 0.35685695
Iteration 371, loss = 0.35644839
Iteration 372, loss = 0.35604125
Iteration 373, loss = 0.35563841
Iteration 374, loss = 0.35523647
Iteration 375, loss = 0.35483821
Iteration 376, loss = 0.35444146
Iteration 377, loss = 0.35404588
Iteration 378, loss = 0.35365374
Iteration 379, loss = 0.35326314
Iteration 380, loss = 0.35287635
Iteration 381, loss = 0.35249039
Iteration 382, loss = 0.35210769
Iteration 383, loss = 0.35172755
Iteration 384, loss = 0.35134882
Iteration 385, loss = 0.35097490
Iteration 386, loss = 0.35060198
Iteration 387, loss = 0.35022954
Iteration 388, loss = 0.34985763
Iteration 389, loss = 0.34949559
Iteration 390, loss = 0.34913139
Iteration 391, loss = 0.34876503
Iteration 392, loss = 0.34840168
Iteration 393, loss = 0.34804448
Iteration 394, loss = 0.34768760
Iteration 395, loss = 0.34733106
Iteration 396, loss = 0.34697931
Iteration 397, loss = 0.34662849
Iteration 398, loss = 0.34627821
Iteration 399, loss = 0.34593287
Iteration 400, loss = 0.34558835
Iteration 401, loss = 0.34524367
Iteration 402, loss = 0.34490266
Iteration 403, loss = 0.34456489
Iteration 404, loss = 0.34422642
Iteration 405, loss = 0.34389426
Iteration 406, loss = 0.34356158
Iteration 407, loss = 0.34322883
Iteration 408, loss = 0.34290534
Iteration 409, loss = 0.34257987
Iteration 410, loss = 0.34225181
Iteration 411, loss = 0.34193054
Iteration 412, loss = 0.34161297
Iteration 413, loss = 0.34129519
Iteration 414, loss = 0.34097750
Iteration 415, loss = 0.34066070
Iteration 416, loss = 0.34035056
Iteration 417, loss = 0.34004115
Iteration 418, loss = 0.33973250
Iteration 419, loss = 0.33942447
Iteration 420, loss = 0.33911880
Iteration 421, loss = 0.33881685
Iteration 422, loss = 0.33851425
Iteration 423, loss = 0.33821750
Iteration 424, loss = 0.33792091
Iteration 425, loss = 0.33762427
Iteration 426, loss = 0.33733127
Iteration 427, loss = 0.33703982
Iteration 428, loss = 0.33674959
Iteration 429, loss = 0.33646137
Iteration 430, loss = 0.33617626
Iteration 431, loss = 0.33589357
Iteration 432, loss = 0.33561198
Iteration 433, loss = 0.33533099
Iteration 434, loss = 0.33505213
Iteration 435, loss = 0.33477540
Iteration 436, loss = 0.33450119
Iteration 437, loss = 0.33422885
Iteration 438, loss = 0.33395829
Iteration 439, loss = 0.33368969
Iteration 440, loss = 0.33342225
Iteration 441, loss = 0.33315816
Iteration 442, loss = 0.33289529
Iteration 443, loss = 0.33263489
Iteration 444, loss = 0.33237542
Iteration 445, loss = 0.33211801
Iteration 446, loss = 0.33186281
Iteration 447, loss = 0.33160855
Iteration 448, loss = 0.33135697
Iteration 449, loss = 0.33110649
Iteration 450, loss = 0.33086002
Iteration 451, loss = 0.33061234
Iteration 452, loss = 0.33036696
Iteration 453, loss = 0.33012467
Iteration 454, loss = 0.32988310
Iteration 455, loss = 0.32964407
Iteration 456, loss = 0.32940563
Iteration 457, loss = 0.32917137
Iteration 458, loss = 0.32893568
Iteration 459, loss = 0.32870312
Iteration 460, loss = 0.32847220
Iteration 461, loss = 0.32824145
Iteration 462, loss = 0.32801361
Iteration 463, loss = 0.32778747
Iteration 464, loss = 0.32756072
Iteration 465, loss = 0.32733637
Iteration 466, loss = 0.32711412
Iteration 467, loss = 0.32689259
Iteration 468, loss = 0.32667376
Iteration 469, loss = 0.32645513
Iteration 470, loss = 0.32623832
Iteration 471, loss = 0.32602280
Iteration 472, loss = 0.32580839
Iteration 473, loss = 0.32559480
Iteration 474, loss = 0.32538189
Iteration 475, loss = 0.32516942
Iteration 476, loss = 0.32496008
Iteration 477, loss = 0.32475163
Iteration 478, loss = 0.32454389
Iteration 479, loss = 0.32433694
Iteration 480, loss = 0.32413090
Iteration 481, loss = 0.32392719
Iteration 482, loss = 0.32372414
Iteration 483, loss = 0.32352224
Iteration 484, loss = 0.32332197
Iteration 485, loss = 0.32312263
Iteration 486, loss = 0.32292577
Iteration 487, loss = 0.32272886
Iteration 488, loss = 0.32253228
Iteration 489, loss = 0.32233625
Iteration 490, loss = 0.32214028
Iteration 491, loss = 0.32194532
Iteration 492, loss = 0.32175133
Iteration 493, loss = 0.32155773
Iteration 494, loss = 0.32136554
Iteration 495, loss = 0.32117465
Iteration 496, loss = 0.32098504
Iteration 497, loss = 0.32079661
Iteration 498, loss = 0.32060610
Iteration 499, loss = 0.32041175
Iteration 500, loss = 0.32021317
Iteration 501, loss = 0.32001200
Iteration 502, loss = 0.31980844
Iteration 503, loss = 0.31959431
Iteration 504, loss = 0.31937266
Iteration 505, loss = 0.31914268
Iteration 506, loss = 0.31890423
Iteration 507, loss = 0.31865223
Iteration 508, loss = 0.31842320
Iteration 509, loss = 0.31823818
Iteration 510, loss = 0.31805209
Iteration 511, loss = 0.31786713
Iteration 512, loss = 0.31769258
Iteration 513, loss = 0.31753581
Iteration 514, loss = 0.31739572
Iteration 515, loss = 0.31725797
Iteration 516, loss = 0.31709808
Iteration 517, loss = 0.31692446
Iteration 518, loss = 0.31674926
Iteration 519, loss = 0.31658124
Iteration 520, loss = 0.31641422
Iteration 521, loss = 0.31625243
Iteration 522, loss = 0.31609859
Iteration 523, loss = 0.31594509
Iteration 524, loss = 0.31579184
Iteration 525, loss = 0.31564002
Iteration 526, loss = 0.31548746
Iteration 527, loss = 0.31533404
Iteration 528, loss = 0.31518659
Iteration 529, loss = 0.31504190
Iteration 530, loss = 0.31489307
Iteration 531, loss = 0.31474084
Iteration 532, loss = 0.31459490
Iteration 533, loss = 0.31445151
Iteration 534, loss = 0.31431053
Iteration 535, loss = 0.31417060
Iteration 536, loss = 0.31403018
Iteration 537, loss = 0.31388929
Iteration 538, loss = 0.31375075
Iteration 539, loss = 0.31361328
Iteration 540, loss = 0.31347660
Iteration 541, loss = 0.31334102
Iteration 542, loss = 0.31320626
Iteration 543, loss = 0.31307473
Iteration 544, loss = 0.31294640
Iteration 545, loss = 0.31281845
Iteration 546, loss = 0.31269086
Iteration 547, loss = 0.31256520
Iteration 548, loss = 0.31244503
Iteration 549, loss = 0.31232611
Iteration 550, loss = 0.31220814
Iteration 551, loss = 0.31209032
Iteration 552, loss = 0.31197374
Iteration 553, loss = 0.31185839
Iteration 554, loss = 0.31174443
Iteration 555, loss = 0.31163111
Iteration 556, loss = 0.31151814
Iteration 557, loss = 0.31140551
Iteration 558, loss = 0.31129430
Iteration 559, loss = 0.31118364
Iteration 560, loss = 0.31107293
Iteration 561, loss = 0.31096418
Iteration 562, loss = 0.31085631
Iteration 563, loss = 0.31074907
Iteration 564, loss = 0.31064249
Iteration 565, loss = 0.31053683
Iteration 566, loss = 0.31043258
Iteration 567, loss = 0.31032866
Iteration 568, loss = 0.31022554
Iteration 569, loss = 0.31012385
Iteration 570, loss = 0.31002203
Iteration 571, loss = 0.30992164
Iteration 572, loss = 0.30982222
Iteration 573, loss = 0.30972317
Iteration 574, loss = 0.30962454
Iteration 575, loss = 0.30952688
Iteration 576, loss = 0.30943003
Iteration 577, loss = 0.30933431
Iteration 578, loss = 0.30923949
Iteration 579, loss = 0.30914523
Iteration 580, loss = 0.30905146
Iteration 581, loss = 0.30895878
Iteration 582, loss = 0.30886679
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
###Markdown
Evaluating the model
###Code
predict = model.predict(X_test)
print ("Acurácia:", metrics.accuracy_score(y_test, predict))
###Output
Acurácia: 0.8461538461538461
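###Markdown
As a complementary check (an illustrative sketch that only uses objects already defined above), the confusion matrix and per-class report can also be inspected:
```python
from sklearn.metrics import confusion_matrix, classification_report

print(confusion_matrix(y_test, predict))
print(classification_report(y_test, predict, target_names=['Not Placed', 'Placed']))
```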
|
CFPS_Research_Paper/keras_VGGFace_1FC.ipynb | ###Markdown
[View in Colaboratory](https://colab.research.google.com/github/ZER-0-NE/ML_problems/blob/master/keras_VGGFace_1FC.ipynb)
###Code
from google.colab import auth
auth.authenticate_user()
!pip install PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# Authenticate and create the PyDrive client.
# This only needs to be done once in a notebook.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
fileId = drive.CreateFile({'id': '1AoyPQ3HoBbRnLpjEBsEBVd-49CYZaZJD'}) #DRIVE_FILE_ID is file id example: 1iytA1n2z4go3uVCwE_vIKouTKyIDjEq
print(fileId['title']) # folder_data.zip
fileId.GetContentFile('test_img.zip') # Save Drive file as a local file
!unzip test_img.zip -d ./
fileId = drive.CreateFile({'id': '1OhPBMbSOG3ejP26-peRmDPYX7WfF2ixN'}) #DRIVE_FILE_ID is file id example: 1iytA1n2z4go3uVCwE_vIKouTKyIDjEq
print(fileId['title']) # folder_data.zip
fileId.GetContentFile('dataset_cfps.zip') # Save Drive file as a local file
!unzip dataset_cfps.zip -d ./
!ls
!rm -rf test_cfps.zip
fileId = drive.CreateFile({'id': '1OhPBMbSOG3ejP26-peRmDPYX7WfF2ixN'}) #DRIVE_FILE_ID is file id example: 1iytA1n2z4go3uVCwE_vIKouTKyIDjEq
print(fileId['title']) # folder_data.zip
fileId.GetContentFile('dataset_cfps.zip') # Save Drive file as a local file
!unzip dataset_cfps.zip -d ./
from keras import models
from keras import layers
from keras import optimizers
from keras.applications import VGG16
from keras.applications import InceptionResNetV2
import sys
import os
from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers
from keras.models import Sequential
from keras.layers import Dropout, Flatten, Dense, Activation, Input
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras import callbacks, regularizers
from keras.models import load_model
import matplotlib.pyplot as plt
from keras.layers.normalization import BatchNormalization
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K
from keras_vggface.vggface import VGGFace
from keras.engine import Model
from keras.models import load_model
from sklearn import metrics
import numpy
!pip install keras_vggface
train_data_path = 'dataset_cfps/train'
validation_data_path = 'dataset_cfps/validation'
test_data_path = 'test'
#Parametres
img_width, img_height = 224, 224
#Load the VGG model
#vgg_conv = VGG16(weights='imagenet', include_top=False, input_shape=(img_width, img_height, 3))
vggface = VGGFace(model='resnet50', include_top=False, input_shape=(img_width, img_height, 3))
#vgg_model = VGGFace(include_top=False, input_shape=(224, 224, 3))
last_layer = vggface.get_layer('avg_pool').output
x = Flatten(name='flatten')(last_layer)
xx = Dense(256, activation = 'softmax', kernel_regularizer=regularizers.l2(0.0001))(x)
x1 = BatchNormalization()(xx)
x2 = Dropout(0.7)(x1)
x3 = Dense(12, activation='softmax', name='classifier', kernel_regularizer=regularizers.l2(0.0001))(x2)
custom_vgg_model = Model(vggface.input, x3)
# Create the model
model = models.Sequential()
# Add the convolutional base model
model.add(custom_vgg_model)
# Add new layers
#model.add(layers.Flatten())
# model.add(layers.Dense(1024, activation='relu'))
# model.add(BatchNormalization())
#model.add(layers.Dropout(0.5))
# model.add(layers.Dense(12, activation='sigmoid'))
# Show a summary of the model. Check the number of trainable parameters
model.summary()
#model = load_model('keras_vggface.h5')
def f1(y_true, y_pred):
def recall(y_true, y_pred):
"""Recall metric.
Only computes a batch-wise average of recall.
Computes the recall, a metric for multi-label classification of
how many relevant items are selected.
"""
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
recall = true_positives / (possible_positives + K.epsilon())
return recall
def precision(y_true, y_pred):
"""Precision metric.
Only computes a batch-wise average of precision.
Computes the precision, a metric for multi-label classification of
how many selected items are relevant.
"""
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
precision = true_positives / (predicted_positives + K.epsilon())
return precision
precision = precision(y_true, y_pred)
recall = recall(y_true, y_pred)
return 2*((precision*recall)/(precision+recall))
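# Note (added comment, not from the original notebook): the custom f1 metric above is
# defined but never used below; if desired it could be passed to Keras as a callable, e.g.
#   model.compile(loss='categorical_crossentropy', optimizer=..., metrics=['acc', f1])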
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=20,
width_shift_range=0.2,
height_shift_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
validation_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=20,
width_shift_range=0.2,
height_shift_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
# Change the batchsize according to your system RAM
train_batchsize = 32
val_batchsize = 32
train_generator = train_datagen.flow_from_directory(
train_data_path,
target_size=(img_width, img_height),
batch_size=train_batchsize,
class_mode='categorical')
validation_generator = validation_datagen.flow_from_directory(
validation_data_path,
target_size=(img_width, img_height),
batch_size=val_batchsize,
class_mode='categorical')
# Compile the model
model.compile(loss='categorical_crossentropy',
optimizer=optimizers.SGD(lr=1e-3),
metrics=['acc'])
# Train the model
history = model.fit_generator(
train_generator,
steps_per_epoch=train_generator.samples/train_generator.batch_size ,
epochs=100,
validation_data=validation_generator,
validation_steps=validation_generator.samples/validation_generator.batch_size,
verbose=1)
# Save the model
model.save('keras_vggface_1FC.h5')
# loss and accuracy curves.
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'b', label='Training acc')
plt.plot(epochs, val_acc, 'r', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'b', label='Training loss')
plt.plot(epochs, val_loss, 'r', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
test_generator = ImageDataGenerator()
test_data_generator = test_generator.flow_from_directory(
test_data_path,
target_size=(img_width, img_height),
batch_size=32,
shuffle=False)
test_steps_per_epoch = numpy.math.ceil(test_data_generator.samples / test_data_generator.batch_size)
predictions = model.predict_generator(test_data_generator, steps=test_steps_per_epoch)
# Get most likely class
predicted_classes = numpy.argmax(predictions, axis=1)
true_classes = test_data_generator.classes
class_labels = list(test_data_generator.class_indices.keys())
report = metrics.classification_report(true_classes, predicted_classes, target_names=class_labels)
print(report)
from google.colab import files
files.download('keras_vggface_1FC.h5')
# memory footprint support libraries/code
!ln -sf /opt/bin/nvidia-smi /usr/bin/nvidia-smi
!pip install gputil
!pip install psutil
!pip install humanize
import psutil
import humanize
import os
import GPUtil as GPU
GPUs = GPU.getGPUs()
# XXX: only one GPU on Colab and isn’t guaranteed
gpu = GPUs[0]
def printm():
process = psutil.Process(os.getpid())
print("Gen RAM Free: " + humanize.naturalsize( psutil.virtual_memory().available ), " I Proc size: " + humanize.naturalsize( process.memory_info().rss))
print("GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil*100, gpu.memoryTotal))
printm()
###Output
Collecting gputil
Downloading https://files.pythonhosted.org/packages/45/99/837428d26b47ebd6b66d6e1b180e98ec4a557767a93a81a02ea9d6242611/GPUtil-1.3.0.tar.gz
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from gputil) (1.14.3)
Building wheels for collected packages: gputil
Running setup.py bdist_wheel for gputil ... [?25l- done
[?25h Stored in directory: /content/.cache/pip/wheels/17/0f/04/b79c006972335e35472c0b835ed52bfc0815258d409f560108
Successfully built gputil
Installing collected packages: gputil
Successfully installed gputil-1.3.0
Requirement already satisfied: psutil in /usr/local/lib/python3.6/dist-packages (5.4.5)
Collecting humanize
Downloading https://files.pythonhosted.org/packages/8c/e0/e512e4ac6d091fc990bbe13f9e0378f34cf6eecd1c6c268c9e598dcf5bb9/humanize-0.5.1.tar.gz
Building wheels for collected packages: humanize
Running setup.py bdist_wheel for humanize ... [?25l- done
[?25h Stored in directory: /content/.cache/pip/wheels/69/86/6c/f8b8593bc273ec4b0c653d3827f7482bb2001a2781a73b7f44
Successfully built humanize
Installing collected packages: humanize
Successfully installed humanize-0.5.1
Gen RAM Free: 9.7 GB I Proc size: 3.4 GB
GPU RAM Free: 503MB | Used: 10936MB | Util 96% | Total 11439MB
|
Features/Merge features.ipynb | ###Markdown
Merge features. The goal of this notebook is to gather the features produced by the different notebooks into a single dataframe (*df_features*).
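Each notebook executed below is assumed to add its own columns to a shared *df_features* dataframe in this notebook's namespace (that is how `%run` shares state). A hypothetical sketch of that convention, with made-up column names:
```python
import numpy as np
import pandas as pd

# Inside e.g. "Features Seba.ipynb" (hypothetical): reuse df_features if it
# already exists in the calling namespace, otherwise create it, then add columns.
if 'df_features' not in globals():
    df_features = pd.DataFrame(index=range(10))  # shared index assumed

df_features['feature_ejemplo'] = np.random.rand(len(df_features))
```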
###Code
import os
if '__file__' in locals():
current_folder = os.path.dirname(os.path.abspath(__file__))
else:
current_folder = os.getcwd()
features_seba = '"{}"'.format(os.path.join(current_folder, "Features Seba.ipynb"))
features_fran = '"{}"'.format(os.path.join(current_folder, "Feature Engineering Fran.ipynb"))
features_ale = '"{}"'.format(os.path.join(current_folder, "Features Ale.ipynb"))
features_fede = '"{}"'.format(os.path.join(current_folder, "Features_fede.ipynb"))
%run $features_seba
df_features.head()
%run $features_fran
df_features.head()
%run $features_ale
df_features.head()
%run $features_fede
df_features.head()
###Output
_____no_output_____ |
notebooks/0_prepare_your_data_from_anndata.ipynb | ###Markdown
Example 1, Step 0 - prepare your data. Prepare cellphoneDB inputs starting from an anndata object.
###Code
import numpy as np
import pandas as pd
import scanpy as sc
import anndata
import os
import sys
from scipy import sparse
sc.settings.verbosity = 1 # verbosity: errors (0), warnings (1), info (2), hints (3)
sys.executable
###Output
_____no_output_____
###Markdown
1. Load anndata. The anndata object contains counts that have been normalized (per cell) and log-transformed.
###Code
adata = sc.read('endometrium_example_counts.h5ad')
###Output
_____no_output_____
###Markdown
2. Generate your meta. In this example, our input is an anndata object containing the cluster/cell type information in anndata.obs['cell_type']. The object also has anndata.obs['lineage'] information, which will be used below for a hierarchical DEGs approach.
###Code
adata.obs['cell_type'].values.describe()
df_meta = pd.DataFrame(data={'Cell':list(adata.obs.index),
'cell_type':[ i for i in adata.obs['cell_type']]
})
df_meta.set_index('Cell', inplace=True)
df_meta.to_csv('endometrium_example_meta.tsv', sep = '\t')
###Output
_____no_output_____
###Markdown
3. Compute DEGs (optional). We will import our gene expression into Seurat using rpy2 so that we can estimate the differentially expressed genes using Seurat `FindAllMarkers`.
###Code
# Convert to dense matrix for Seurat
adata.X = adata.X.toarray()
import rpy2.rinterface_lib.callbacks
import logging
# Ignore R warning messages
#Note: this can be commented out to get more verbose R output
rpy2.rinterface_lib.callbacks.logger.setLevel(logging.ERROR)
import anndata2ri
anndata2ri.activate()
%load_ext rpy2.ipython
%%R -i adata
adata
###Output
class: SingleCellExperiment
dim: 20975 1949
metadata(0):
assays(1): X
rownames(20975): RP11-34P13.7 FO538757.2 ... AC004556.1 AC240274.1
rowData names(2): gene_ids n_cells
colnames(1949): 4861STDY7387181_AAACCTGAGGGCACTA
4861STDY7387181_AAACCTGTCAATAAGG ... GSM4577315_TTGTTCAAGCCACCGT
GSM4577315_TTTACGTTCGTAGGGA
colData names(20): sample_names log2p1_count ... cell_type n_counts
reducedDimNames(0):
altExpNames(0):
###Markdown
Use Seurat `FindAllMarkers` to compute differentially expressed genes and extract the corresponding data frame `DEGs`. There are two options you may be interested in:
1. Identify DEGs for each cell type (compare cell type vs rest, the most common option).
2. Identify DEGs for each cell type using a per-lineage hierarchical approach (compare cell type vs rest within its lineage, as in the endometrium paper, Garcia-Alonso et al. 2021).
In the endometrium paper (Garcia-Alonso et al. 2021) we are interested in the differences within the stromal and epithelial lineages rather than in the commonalities (for example, what is specific to the epithelial cells in the glands compared to the epithelial cells in the lumen). The reason is that epithelial and stromal subtypes vary in space and type, so we want to extract the subtle differences within each lineage to better understand their differential location and biological role.
###Code
%%R -o DEGs
library(Seurat)
so = as.Seurat(adata, counts = "X", data = "X")
Idents(so) = so$cell_type
## OPTION 1 - compute DEGs for all cell types
## Extract DEGs for each cell_type
# DEGs <- FindAllMarkers(so,
# test.use = 'LR',
# verbose = F,
# only.pos = T,
# random.seed = 1,
# logfc.threshold = 0.2,
# min.pct = 0.1,
# return.thresh = 0.05)
# OPTION 2 - optional - Re-compute hierarchical (per lineage) DEGs for Epithelial and Stromal lineages
DEGs = c()
for( lin in c('Epithelial', 'Stromal') ){
message('Computing DEGs within linage ', lin)
so_in_lineage = subset(so, cells = Cells(so)[ so$lineage == lin ] )
celltye_in_lineage = unique(so$cell_type[ so$lineage == lin ])
DEGs_lin = FindAllMarkers(so_in_lineage,
test.use = 'LR',
verbose = F,
only.pos = T,
random.seed = 1,
logfc.threshold = 0.2,
min.pct = 0.1,
return.thresh = 0.05)
DEGs = rbind(DEGs_lin, DEGs)
}
###Output
_____no_output_____
###Markdown
Filter significant genes. Here we keep genes with adjusted p-value < 0.05 and average log2 fold change > 0.1.
###Code
DEGs.head()
cond1 = DEGs['p_val_adj'] < 0.05
cond2 = DEGs['avg_log2FC'] > 0.1
mask = [all(tup) for tup in zip(cond1, cond2)]
fDEGs = DEGs[mask]
###Output
_____no_output_____
###Markdown
Save significant DEGs into a file. Important: the DEGs output file must contain: 1st column = cluster; 2nd column = gene; the remaining columns are ignored.
###Code
# 1st column = cluster; 2nd column = gene
fDEGs = fDEGs[['cluster', 'gene', 'p_val_adj', 'p_val', 'avg_log2FC', 'pct.1', 'pct.2']]
fDEGs.to_csv('endometrium_example_DEGs.tsv', index=False, sep='\t')
###Output
_____no_output_____ |
book/Untitled.ipynb | ###Markdown
Neural network: input layer -> hidden layer -> output layer. 784 (number of input features) -> 256 (neurons in the first hidden layer) -> 256 (neurons in the second hidden layer) -> 10 (output classes, the digits 0-9)
###Code
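# NOTE (added for context, not part of the original cell): this fragment assumes an
# earlier cell, not shown here, that defined the inputs. A minimal sketch of that
# assumed TF 1.x setup would look like:
#   import tensorflow as tf
#   import numpy as np
#   import matplotlib.pyplot as plt
#   from tensorflow.examples.tutorials.mnist import input_data
#   mnist = input_data.read_data_sets("./mnist/data/", one_hot=True)
#   X = tf.placeholder(tf.float32, [None, 784])
#   Y = tf.placeholder(tf.float32, [None, 10])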
# Dropout to prevent overfitting
keep_prob = tf.placeholder(tf.float32)
W1 = tf.Variable(tf.random_normal([784, 256], stddev=0.01))
L1 = tf.nn.relu(tf.matmul(X, W1))
L1 = tf.nn.dropout(L1, keep_prob)
# Batch Normalization: helps prevent overfitting and speeds up training
# L1 = tf.layers.batch_normalization(L1, training=True)
W2 = tf.Variable(tf.random_normal([256, 256], stddev=0.01))
L2 = tf.nn.relu(tf.matmul(L1, W2))
L2 = tf.nn.dropout(L2, keep_prob)
# L2 = tf.layers.batch_normalization(L2, training=True)
W3 = tf.Variable(tf.random_normal([256, 10], stddev=0.01))
model = tf.matmul(L2, W3, name='model')
# Initialize the neurons with random values drawn from a normal distribution with stddev 0.01
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=model, labels=Y))
optimizer = tf.train.AdamOptimizer(0.001).minimize(cost)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
batch_size = 100
total_batch = int(mnist.train.num_examples / batch_size)
for epoch in range(30):
total_cost = 0
for i in range(total_batch):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
_, cost_val = sess.run([optimizer, cost],
feed_dict={X: batch_xs, Y: batch_ys, keep_prob: 0.8})
total_cost += cost_val
print('Epoch:', '%04d' % (epoch + 1),
'Avg. cost =', '{:3f}'.format(total_cost / total_batch))
print('최적화 완료!')
is_correct = tf.equal(tf.argmax(model, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(is_correct, tf.float32))
print('정확도:', sess.run(accuracy,
feed_dict={X: mnist.test.images,
Y:mnist.test.labels, keep_prob: 1}))
labels = sess.run(model,
feed_dict = {X: mnist.test.images,
Y: mnist.test.labels,
keep_prob: 1})
fig = plt.figure()
for i in range(10):
# Create a 2x5 grid of subplots and draw the digit image at position i + 1.
subplot = fig.add_subplot(2, 5, i + 1)
# Hide the x and y axis ticks so the image is displayed cleanly.
subplot.set_xticks([])
subplot.set_yticks([])
# Print the predicted digit above the displayed image.
# np.argmax provides the same functionality as tf.argmax.
subplot.set_title('%d' % np.argmax(labels[i]))
subplot.imshow(mnist.test.images[i].reshape((28, 28)),
cmap=plt.cm.gray_r)
plt.show()
###Output
_____no_output_____ |
c01_experimentos_estruturados.ipynb | ###Markdown
Experiments for the structured data
###Code
import datetime
import re
import json
import yaml
import sys
import os
import logging
import logging.config
import time
import multiprocessing
from collections import OrderedDict
import requests
import sqlalchemy
import string
import unicodedata
import yaml
import warnings
warnings.filterwarnings('ignore')
from lightgbm import LGBMClassifier
import pandas as pd
import seaborn as sns
import matplotlib
from matplotlib.cm import ScalarMappable
from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plt
import tqdm
import numpy as np
from scipy.sparse import issparse
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.model_selection import train_test_split
from sklearn.metrics import (
make_scorer,
accuracy_score,
balanced_accuracy_score,
average_precision_score,
brier_score_loss,
f1_score,
log_loss,
precision_score,
recall_score,
jaccard_score,
roc_auc_score,
classification_report,
confusion_matrix,
roc_curve,
precision_recall_curve,
auc,
)
from sklearn.utils import resample
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, BaggingClassifier, GradientBoostingClassifier, ExtraTreesClassifier
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.naive_bayes import GaussianNB, BernoulliNB, MultinomialNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis
from sklearn.svm import SVC, LinearSVC, NuSVC
from sklearn.neural_network import MLPClassifier
from sklearn.feature_selection import SelectPercentile, VarianceThreshold, SelectFromModel
from sklearn.model_selection import GridSearchCV, cross_val_score, cross_validate, RepeatedStratifiedKFold
from sklearn.calibration import CalibratedClassifierCV
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import RobustScaler, StandardScaler, MinMaxScaler, Binarizer
from sklearn.decomposition import LatentDirichletAllocation, TruncatedSVD, PCA
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline, FeatureUnion
from lightgbm import LGBMClassifier
import xgboost as xgb
from xgboost import XGBClassifier
import joblib
from joblib import delayed, Parallel
#################################
# GLOBAL VARIABLES
#################################
N_JOBS = -1
BASE_DIR = './'
DEFAULT_RANDOM_STATE = 42
#################################
# LOGS
#################################
with open(os.path.join(BASE_DIR, 'log.conf.yaml'), 'r') as f:
config = yaml.safe_load(f.read())
logging.config.dictConfig(config)
#################################
# SETTINGS
#################################
pd.options.display.max_rows = 500
# # Leitura dos dados estruturados extraídos das denúncias.
# # Informações completamente anonimizadas.
# df = pd.read_parquet('datasets/df_treinamento_faro.parquet')
# df['LABEL'] = df['GrauAptidao'].apply(lambda x: 1 if x > 50 else 0)
# df.drop(columns=['IdManifestacao','GrauAptidao','TxtFatoManifestacao','TextoAnexo'], inplace=True)
# df.columns = [f'F{i:>03}' for i, c in enumerate(df.columns[:-1])] + ['LABEL']
# # Divisão dos dados em treino e teste
# X_train, X_test, y_train, y_test = train_test_split(df.drop(columns=['LABEL']), df['LABEL'], test_size=.2, random_state=DEFAULT_RANDOM_STATE, stratify=df['LABEL'])
# # altera a escala das features para um intervalo entre 0 e 1.
# X_train_original = X_train.copy()
# X_test_original = X_test.copy()
# scaler = MinMaxScaler()
# scaler.fit(X_train)
# X_train = scaler.transform(X_train)
# X_test = scaler.transform(X_test)
# NUMERO_FEATURES = X_train.shape[1]
# df_tmp = pd.DataFrame(X_train, columns=X_train_original.columns, index=X_train_original.index)
# df_tmp['LABEL'] = y_train
# df_tmp.to_parquet('datasets/df_train_de.parquet')
# df_tmp = pd.DataFrame(X_test, columns=X_test_original.columns, index=X_test_original.index)
# df_tmp['LABEL'] = y_test
# df_tmp.to_parquet('datasets/df_test_de.parquet')
df_train = pd.read_parquet('datasets/df_train_de.parquet')
X_train, y_train = df_train.drop(columns=['LABEL']), df_train['LABEL']
df_test = pd.read_parquet('datasets/df_test_de.parquet')
X_test, y_test = df_test.drop(columns=['LABEL']), df_test['LABEL']
NUMERO_FEATURES = X_train.shape[1]
###Output
_____no_output_____
###Markdown
Feature selection
###Code
# metrics used for evaluation
metrics = ['average_precision','balanced_accuracy','roc_auc']
k_vs_avg_prec_score = []
k_vs_bal_acc_score = []
k_vs_roc_auc_score = []
# weight for the positive class (used by algorithms that can handle the imbalanced dataset)
POS_WEIGHT = y_train.value_counts()[0]/y_train.value_counts()[1]
class_weight = {0: 1, 1: POS_WEIGHT}
for k in tqdm.tqdm_notebook(range(2, NUMERO_FEATURES + 1, 2)):
# select k features using SelectFromModel(RandomForestClassifier)
selector = SelectFromModel(
RandomForestClassifier(
n_estimators=500,
class_weight=class_weight,
random_state=DEFAULT_RANDOM_STATE,
n_jobs=N_JOBS),
max_features=k,
threshold=-np.inf)
selector.fit(X_train, y_train)
# transform the dataset to keep only the selected features
X_train_fs = selector.transform(X_train)
# run cross-validation with 10 stratified folds
rskfcv = RepeatedStratifiedKFold(
n_splits=10,
n_repeats=1,
random_state=DEFAULT_RANDOM_STATE)
valores = cross_validate(
RandomForestClassifier(
n_estimators=500,
class_weight=class_weight,
random_state=DEFAULT_RANDOM_STATE,
n_jobs=N_JOBS),
X_train_fs,
y_train,
scoring=metrics,
cv=rskfcv,
n_jobs=N_JOBS
)
cv_scores = {k[5:]: np.mean(v) for k, v in valores.items() if k not in ['fit_time', 'score_time']}
avg_prec, bal_acc, roc_auc = cv_scores['average_precision'],cv_scores['balanced_accuracy'],cv_scores['roc_auc']
logging.info("k = {} - average_precision = {} - balanced_accuracy = {} - roc_auc_score = {}".format(k, avg_prec, bal_acc, roc_auc))
k_vs_avg_prec_score.append(avg_prec)
k_vs_bal_acc_score.append(bal_acc)
k_vs_roc_auc_score.append(roc_auc)
# summary of the metrics
df_feature_selection = pd.DataFrame({
'Número de Features': range(2, NUMERO_FEATURES + 1, 2),
'Average Precision': k_vs_avg_prec_score,
'Balanced Accuracy': k_vs_bal_acc_score,
'ROC AUC': k_vs_roc_auc_score,
})
df_feature_selection[df_feature_selection['Número de Features']%10==0].reset_index(drop=True)
# plot of the metrics
tamanho_vetores = range(2, NUMERO_FEATURES + 1, 2)
import matplotlib
import matplotlib.pyplot as plt
matplotlib.rcParams.update({'font.size': 13})
fig, ax = plt.subplots(1,2, figsize=(16, 5), dpi=80)
t = list(range(0, NUMERO_FEATURES + 1, 10))
pd.Series(k_vs_avg_prec_score[0:],index=tamanho_vetores).plot(color='#45B39D', ax=ax[0])
ax[0].set_xlabel('Quantidade de features')
ax[0].set_ylabel('Average Precision')
ax[0].grid(which='major',linestyle='--', linewidth=0.5)
ax[0].set_xticks(t)
pd.Series(k_vs_roc_auc_score[0:],index=tamanho_vetores).plot(color='#45B39D', ax=ax[1])
ax[1].set_xlabel('Quantidade de features')
ax[1].set_ylabel('ROC AUC')
ax[1].grid(which='major',linestyle='--', linewidth=0.5)
ax[1].set_xticks(t)
plt.tight_layout()
plt.savefig('./fig_selecao_features.png')
plt.show()
# plot of the metrics
tamanho_vetores = range(2, NUMERO_FEATURES + 1, 2)
import matplotlib
import matplotlib.pyplot as plt
matplotlib.rcParams.update({'font.size': 13})
fig, ax = plt.subplots(1,3, figsize=(16, 5), dpi=80)
t = list(range(0, NUMERO_FEATURES + 1, 10))
pd.Series(k_vs_avg_prec_score[0:],index=tamanho_vetores).plot(color='#45B39D', ax=ax[0])
ax[0].set_xlabel('Quantidade de features')
ax[0].set_ylabel('Average Precision')
ax[0].grid(which='major',linestyle='--', linewidth=0.5)
ax[0].set_xticks(t)
pd.Series(k_vs_bal_acc_score[0:],index=tamanho_vetores).plot(color='#45B39D', ax=ax[1])
ax[1].set_xlabel('Quantidade de features')
ax[1].set_ylabel('Balanced Accuracy Score')
ax[1].grid(which='major',linestyle='--', linewidth=0.5)
ax[1].set_xticks(t)
pd.Series(k_vs_roc_auc_score[0:],index=tamanho_vetores).plot(color='#45B39D', ax=ax[2])
ax[2].set_xlabel('Quantidade de features')
ax[2].set_ylabel('ROC AUC')
ax[2].grid(which='major',linestyle='--', linewidth=0.5)
ax[2].set_xticks(t)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Choosing a model
###Code
# select k features using SelectFromModel(RandomForestClassifier)
k = 20
selector_model=RandomForestClassifier(
n_estimators=1000,
class_weight=class_weight,
random_state=DEFAULT_RANDOM_STATE,
n_jobs=N_JOBS)
selector = SelectFromModel(selector_model, max_features=k, threshold=-np.inf)
selector.fit(X_train, y_train)
X_train = selector.transform(X_train)
X_test = selector.transform(X_test)
X_train.shape
# run cross-validation tests to choose the model
metrics = ['roc_auc','balanced_accuracy', 'average_precision', 'recall', 'accuracy', 'f1_macro','f1_weighted']
results = [
]
model = [
RandomForestClassifier,
LogisticRegression,
XGBClassifier,
KNeighborsClassifier,
BaggingClassifier,
ExtraTreesClassifier,
SGDClassifier,
SVC,
NuSVC,
LinearSVC,
BernoulliNB,
LGBMClassifier,
MLPClassifier,
AdaBoostClassifier,
]
params = [
{
'n_estimators': [1000],
'max_depth': [5,7,9],
'min_samples_split': [2,3],
'min_samples_leaf': [1,2],
'class_weight': [class_weight],
'random_state': [DEFAULT_RANDOM_STATE],
'max_samples': [.8, 1],
},
{
'penalty' : ['l2'],
'C' : [1],
'solver' : ['liblinear'],
'random_state': [DEFAULT_RANDOM_STATE],
'class_weight': [class_weight],
},
{
'learning_rate': [0.01],
'n_estimators': [1000],
'subsample' : [.8,.45],
'min_child_weight': [1],
'max_depth': [3,4,7],
'random_state': [DEFAULT_RANDOM_STATE],
'reg_lambda': [2],
'scale_pos_weight': [POS_WEIGHT]
},
{
'n_neighbors' : [5,7,9,11],
},
{
'n_estimators': [1000],
'max_samples': [.8],
'random_state': [DEFAULT_RANDOM_STATE],
},
{
'n_estimators': [1000],
'max_samples' : [.8],
'max_depth': [6,7],
'random_state': [DEFAULT_RANDOM_STATE],
'class_weight': [class_weight],
},
{
'random_state': [DEFAULT_RANDOM_STATE],
'class_weight': [class_weight],
},
{
'gamma': ['auto'],
'C': [0.5],
'random_state': [DEFAULT_RANDOM_STATE],
'class_weight': [class_weight],
},
{
'gamma': ['auto'],
'random_state': [DEFAULT_RANDOM_STATE],
'class_weight': [class_weight],
},
{
'random_state': [DEFAULT_RANDOM_STATE],
'class_weight': [class_weight],
},
{
},
{
'n_estimators': [1000],
'subsample': [.6,.7,.8,1],
'random_state': [DEFAULT_RANDOM_STATE],
'class_weight': [class_weight],
},
{
'alpha': [1],
'max_iter': [1000],
},
{
}
]
import pdb
logging.info('Início')
# iterate over the list of models and their hyperparameters
lista_mh = list(zip(model, params))
for m, p in tqdm.tqdm_notebook(lista_mh):
logging.info('Modelo: {}'.format(m.__name__))
rskfcv = RepeatedStratifiedKFold(n_splits=10, n_repeats=1, random_state=DEFAULT_RANDOM_STATE)
# use GridSearchCV to find the best set of hyperparameters
# that maximizes the roc_auc score
cv = GridSearchCV(estimator=m(),param_grid=p, n_jobs=N_JOBS, error_score=0, refit=True, scoring='roc_auc', cv=rskfcv)
cv.fit(X_train, y_train)
model = cv.best_estimator_
best_params = cv.best_params_
# instantiate the model with the hyperparameters chosen above and use cross-validation
# to evaluate all the chosen metrics
valores = cross_validate(m(**best_params), X_train, y_train, scoring=metrics, cv=rskfcv)
cv_scores = {k[5:]: np.mean(v) for k, v in valores.items() if k not in ['fit_time', 'score_time']}
# build a record with all the collected information for comparison
linha = {
'Modelo': m.__name__,
'ScoreTreino': cv.score(X_train, y_train),
'BestParams': best_params,
'RawScores': {k[5:]: v for k, v in valores.items() if k not in ['fit_time', 'score_time']}
}
linha.update(cv_scores)
results.append(linha)
# comparison table with the results of each algorithm
df_results = pd.DataFrame(results)
df_results.sort_values('roc_auc', ascending=False)
metricas = ['roc_auc', 'average_precision', 'balanced_accuracy', 'f1_weighted' ]
matplotlib.rcParams.update({'font.size': 13})
fig, axis = plt.subplots(2,2, figsize=(14, 10), dpi=80)
axis = np.ravel(axis)
for i, m in enumerate(metricas):
df_score = pd.DataFrame({m: s for m, s in zip(df_results['Modelo'], df_results['RawScores'].apply(lambda x: x[m]))})
df_score = pd.melt(df_score, var_name='Modelo', value_name='Score')
sns.boxplot(x='Modelo', y='Score', data=df_score, color='#45B39D', linewidth=1, ax=axis[i])
axis[i].set_xlabel('Modelo')
axis[i].set_ylabel(f'Score ({m})')
axis[i].set_xticklabels(labels=df_score['Modelo'].drop_duplicates(), rotation=70, ha='right', fontsize=12)
axis[i].grid(which='major',linestyle='--', linewidth=0.5, )
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Hyperparameter tuning
###Code
from skopt import forest_minimize
def funcao_otimizacao(params):
"""
This function receives the hyperparameters selected by forest_minimize,
instantiates a RandomForestClassifier with them, and evaluates the
model on the training data using 10-fold cross-validation.
"""
logging.info(params)
n_estimators = params[0]
max_depth = params[1]
min_samples_split = params[2]
min_samples_leaf = params[3]
max_samples = params[4]
class_weight = {0: 1, 1: params[5]}
model = RandomForestClassifier(n_estimators=n_estimators,
max_depth=max_depth,
min_samples_split=min_samples_split,
min_samples_leaf=min_samples_leaf,
max_samples=max_samples,
class_weight=class_weight,
random_state=DEFAULT_RANDOM_STATE,
n_jobs=-1)
rskfcv = RepeatedStratifiedKFold(n_splits=10, n_repeats=1, random_state=DEFAULT_RANDOM_STATE)
score = cross_val_score(model, X_train, y_train, scoring='roc_auc', cv=rskfcv)
return -np.mean(score)
space = [
(200, 3000), # n_estimators
(1, 10), # max_depth
(2, 20), # min_samples_split
(1, 20), # min_samples_leaf
(0.4, 1.), # max_samples
(2, 10), # class_weight
]
N_CALLS = 50
pbar = tqdm.tqdm_notebook(total=N_CALLS)
def atualizar_progresso(res):
pbar.update(1)
res = forest_minimize(funcao_otimizacao, space, random_state=DEFAULT_RANDOM_STATE, n_random_starts=10, n_calls=N_CALLS, verbose=1, n_jobs=N_JOBS, callback=atualizar_progresso)
pbar.close()
res.x
X_train.shape
X_test.shape
params = [2412, 6, 14, 5, 0.5006957711150638, 2]
params = res.x
n_estimators = params[0]
max_depth = params[1]
min_samples_split = params[2]
min_samples_leaf = params[3]
max_samples = params[4]
class_weight = {0: 1, 1: params[5]}
model = RandomForestClassifier(n_estimators=n_estimators,
max_depth=max_depth,
min_samples_split=min_samples_split,
min_samples_leaf=min_samples_leaf,
max_samples=max_samples,
class_weight=class_weight,
random_state=DEFAULT_RANDOM_STATE,
n_jobs=-1)
model.fit(X_train, y_train)
p = model.predict(X_test)
balanced_accuracy_score(y_test, p)
accuracy_score(y_test, p)
f1_score(y_test, p)
recall_score(y_test, p)
precision_score(y_test, p)
accuracy_score(y_test, p)
roc_auc_score(y_test, model.predict_proba(X_test)[:,1])
pd.DataFrame(confusion_matrix(y_test, p), columns=['Predito como Falso','Predito como Verdadeiro'], index=['Falso', 'Verdadeiro'])
print(classification_report(y_test, p))
matplotlib.rcParams.update({'font.size': 12.5})
plt.figure(figsize=(14, 6), dpi=80)
# plt.title(' Curva Característica de Operação do Receptor (ROC)')
lr_fpr, lr_tpr, thresholds = roc_curve(y_test.values, model.predict_proba(X_test)[:,1], drop_intermediate=False, pos_label=1)
plt.plot(lr_fpr, lr_tpr, label='RandomForestClassifier',color='#45B39D')
plt.plot([0, 1], [0,1], linestyle='--', label='Aleatório/Chute')
plt.xlabel('Taxa de Falsos Positivos (FPR)')
plt.ylabel('Taxa de Verdadeiros Positivos (TPR ou Recall)')
plt.legend()
plt.grid(which='major',linestyle='--', linewidth=0.5)
plt.tight_layout()
plt.show()
df_histograma = pd.Series(model.predict_proba(X_test)[:,1]).to_frame().rename(columns={0:'Score'})
df_histograma['Bins'] = pd.cut(df_histograma['Score'], bins=np.arange(0,1.05,0.05))
df_histograma['Y'] = y_test.values
df_histograma['Acertos Thr 0.5'] = df_histograma.apply(lambda x: 1 if (1 if x['Score']>=.5 else 0)==x['Y'] else 0,axis=1)
df_barplot = df_histograma[['Bins','Acertos Thr 0.5']].groupby(['Bins']).apply(lambda x: x['Acertos Thr 0.5'].sum()/x.shape[0]).fillna(0).to_frame().rename(columns={0: 'Acertos (%)'})
df_barplot['Contagem'] = df_histograma[['Bins','Acertos Thr 0.5']].groupby(['Bins']).count()
df_barplot = df_barplot.reset_index()
df_barplot['left'] = df_barplot['Bins'].apply(lambda x: x.left+0.025)
N = 20
vals = np.ones((N, 4))
vals[:, 0] = np.linspace(.5,45/256, N)
vals[:, 1] = np.linspace(0, 179/256, N)
vals[:, 2] = np.linspace(0, 157/256, N)
newcmp = ListedColormap(vals)
matplotlib.rcParams.update({'font.size': 12.5})
plt.figure(figsize=(14, 6), dpi=80)
color='#45B39D'
scalarMappable = ScalarMappable(cmap=newcmp)
plt.bar(df_barplot['left'], df_barplot['Contagem'], width=0.05, color=scalarMappable.cmap(df_barplot['Acertos (%)']), alpha=1, linewidth=1, edgecolor='white')
colorbar = plt.colorbar(scalarMappable)
colorbar.set_label('Índice de Acertos na Faixa')
plt.xlim(0,1)
plt.grid(which='both',linestyle='--', linewidth=0.5)
plt.title('Histograma para os Scores dados pelo modelo')
plt.xlabel('Score')
plt.ylabel('Quantidade de Observações')
plt.tight_layout()
plt.xticks(ticks=np.arange(0,1.05, 0.05), rotation=90)
plt.show()
###Output
_____no_output_____ |
codes/labs_lecture13/seq2seq_transformers_demo.ipynb | ###Markdown
Lab 01 : Seq2Seq Transformers - demo
Annotated Transformer : https://nlp.seas.harvard.edu/2018/04/03/attention.html
Author : Alexander Rush
Modified by Xavier Bresson to run with PyTorch 1.1.0
Task : Memorize/copy-paste a sequence of arbitrary numbers
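A minimal sketch (not part of the original lab) of how synthetic batches for this copy task can be generated; the vocabulary size `V`, batch size and sequence length below are made-up values:
```python
import numpy as np
import torch

def copy_task_batches(V=11, batch=30, nbatches=20, seq_len=10):
    "Yield (src, tgt) pairs where the target is simply the source sequence."
    for _ in range(nbatches):
        data = torch.from_numpy(np.random.randint(1, V, size=(batch, seq_len)))
        data[:, 0] = 1            # token 1 acts as a start-of-sequence marker
        yield data, data.clone()  # the model has to learn to copy src -> tgt
```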
###Code
# For Google Colaboratory
import sys, os
if 'google.colab' in sys.modules:
from google.colab import drive
drive.mount('/content/gdrive')
file_name = 'seq2seq_transformers_demo.ipynb'
import subprocess
path_to_file = subprocess.check_output('find . -type f -name ' + str(file_name), shell=True).decode("utf-8")
print(path_to_file)
path_to_file = path_to_file.replace(file_name,"").replace('\n',"")
os.chdir(path_to_file)
!pwd
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import math, copy, time
from torch.autograd import Variable
import matplotlib.pyplot as plt
import seaborn
seaborn.set_context(context="talk")
%matplotlib inline
###Output
_____no_output_____
###Markdown
Classes Definition
###Code
class EncoderDecoder(nn.Module):
"""
A standard Encoder-Decoder architecture. Base for this and many
other models.
"""
def __init__(self, encoder, decoder, src_embed, tgt_embed, generator):
super(EncoderDecoder, self).__init__()
self.encoder = encoder
self.decoder = decoder
self.src_embed = src_embed
self.tgt_embed = tgt_embed
self.generator = generator
def forward(self, src, tgt, src_mask, tgt_mask):
"Take in and process masked src and target sequences."
return self.decode(self.encode(src, src_mask), src_mask,
tgt, tgt_mask)
def encode(self, src, src_mask):
return self.encoder(self.src_embed(src), src_mask)
def decode(self, memory, src_mask, tgt, tgt_mask):
return self.decoder(self.tgt_embed(tgt), memory, src_mask, tgt_mask)
class Generator(nn.Module):
"Define standard linear + softmax generation step."
def __init__(self, d_model, vocab):
super(Generator, self).__init__()
self.proj = nn.Linear(d_model, vocab)
def forward(self, x):
return F.log_softmax(self.proj(x), dim=-1)
def clones(module, N):
"Produce N identical layers."
return nn.ModuleList([copy.deepcopy(module) for _ in range(N)])
class Encoder(nn.Module):
"Core encoder is a stack of N layers"
def __init__(self, layer, N):
super(Encoder, self).__init__()
self.layers = clones(layer, N)
self.norm = LayerNorm(layer.size)
def forward(self, x, mask):
"Pass the input (and mask) through each layer in turn."
for layer in self.layers:
x = layer(x, mask)
return self.norm(x)
class LayerNorm(nn.Module):
"Construct a layernorm module (See citation for details)."
def __init__(self, features, eps=1e-6):
super(LayerNorm, self).__init__()
self.a_2 = nn.Parameter(torch.ones(features))
self.b_2 = nn.Parameter(torch.zeros(features))
self.eps = eps
def forward(self, x):
mean = x.mean(-1, keepdim=True)
std = x.std(-1, keepdim=True)
return self.a_2 * (x - mean) / (std + self.eps) + self.b_2
class SublayerConnection(nn.Module):
"""
A residual connection followed by a layer norm.
Note for code simplicity the norm is first as opposed to last.
"""
def __init__(self, size, dropout):
super(SublayerConnection, self).__init__()
self.norm = LayerNorm(size)
self.dropout = nn.Dropout(dropout)
def forward(self, x, sublayer):
"Apply residual connection to any sublayer with the same size."
return x + self.dropout(sublayer(self.norm(x)))
class EncoderLayer(nn.Module):
"Encoder is made up of self-attn and feed forward (defined below)"
def __init__(self, size, self_attn, feed_forward, dropout):
super(EncoderLayer, self).__init__()
self.self_attn = self_attn
self.feed_forward = feed_forward
self.sublayer = clones(SublayerConnection(size, dropout), 2)
self.size = size
def forward(self, x, mask):
"Follow Figure 1 (left) for connections."
x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, mask))
return self.sublayer[1](x, self.feed_forward)
class Decoder(nn.Module):
"Generic N layer decoder with masking."
def __init__(self, layer, N):
super(Decoder, self).__init__()
self.layers = clones(layer, N)
self.norm = LayerNorm(layer.size)
def forward(self, x, memory, src_mask, tgt_mask):
for layer in self.layers:
x = layer(x, memory, src_mask, tgt_mask)
return self.norm(x)
class DecoderLayer(nn.Module):
"Decoder is made of self-attn, src-attn, and feed forward (defined below)"
def __init__(self, size, self_attn, src_attn, feed_forward, dropout):
super(DecoderLayer, self).__init__()
self.size = size
self.self_attn = self_attn
self.src_attn = src_attn
self.feed_forward = feed_forward
self.sublayer = clones(SublayerConnection(size, dropout), 3)
def forward(self, x, memory, src_mask, tgt_mask):
"Follow Figure 1 (right) for connections."
m = memory
x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, tgt_mask))
x = self.sublayer[1](x, lambda x: self.src_attn(x, m, m, src_mask))
return self.sublayer[2](x, self.feed_forward)
def subsequent_mask(size):
"Mask out subsequent positions."
attn_shape = (1, size, size)
subsequent_mask = np.triu(np.ones(attn_shape), k=1).astype('uint8')
return torch.from_numpy(subsequent_mask) == 0
plt.figure(figsize=(5,5))
plt.imshow(subsequent_mask(20)[0])
None
def attention(query, key, value, mask=None, dropout=None):
"Compute 'Scaled Dot Product Attention'"
d_k = query.size(-1)
scores = torch.matmul(query, key.transpose(-2, -1)) \
/ math.sqrt(d_k)
if mask is not None:
scores = scores.masked_fill(mask == 0, -1e9)
p_attn = F.softmax(scores, dim = -1)
if dropout is not None:
p_attn = dropout(p_attn)
return torch.matmul(p_attn, value), p_attn
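# Quick shape sanity check for the attention helper above (illustrative only):
# with query/key/value of shape (batch=2, seq_len=5, d_k=8), the output keeps
# the value's shape and the attention weights are (2, 5, 5).
_q = torch.rand(2, 5, 8)
_out, _p_attn = attention(_q, _q, _q)
assert _out.shape == (2, 5, 8) and _p_attn.shape == (2, 5, 5)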
class MultiHeadedAttention(nn.Module):
def __init__(self, h, d_model, dropout=0.1):
"Take in model size and number of heads."
super(MultiHeadedAttention, self).__init__()
assert d_model % h == 0
# We assume d_v always equals d_k
self.d_k = d_model // h
self.h = h
self.linears = clones(nn.Linear(d_model, d_model), 4)
self.attn = None
self.dropout = nn.Dropout(p=dropout)
def forward(self, query, key, value, mask=None):
"Implements Figure 2"
if mask is not None:
# Same mask applied to all h heads.
mask = mask.unsqueeze(1)
nbatches = query.size(0)
# 1) Do all the linear projections in batch from d_model => h x d_k
query, key, value = \
[l(x).view(nbatches, -1, self.h, self.d_k).transpose(1, 2)
for l, x in zip(self.linears, (query, key, value))]
# 2) Apply attention on all the projected vectors in batch.
x, self.attn = attention(query, key, value, mask=mask,
dropout=self.dropout)
# 3) "Concat" using a view and apply a final linear.
x = x.transpose(1, 2).contiguous() \
.view(nbatches, -1, self.h * self.d_k)
return self.linears[-1](x)
class PositionwiseFeedForward(nn.Module):
"Implements FFN equation."
def __init__(self, d_model, d_ff, dropout=0.1):
super(PositionwiseFeedForward, self).__init__()
self.w_1 = nn.Linear(d_model, d_ff)
self.w_2 = nn.Linear(d_ff, d_model)
self.dropout = nn.Dropout(dropout)
def forward(self, x):
return self.w_2(self.dropout(F.relu(self.w_1(x))))
class Embeddings(nn.Module):
def __init__(self, d_model, vocab):
super(Embeddings, self).__init__()
self.lut = nn.Embedding(vocab, d_model)
self.d_model = d_model
def forward(self, x):
return self.lut(x) * math.sqrt(self.d_model)
class PositionalEncoding(nn.Module):
"Implement the PE function."
def __init__(self, d_model, dropout, max_len=5000):
super(PositionalEncoding, self).__init__()
self.dropout = nn.Dropout(p=dropout)
# Compute the positional encodings once in log space.
pe = torch.zeros(max_len, d_model)
position = torch.arange(0, max_len).unsqueeze(1).float()
div_term = torch.exp(torch.arange(0, d_model, 2).float() * -(math.log(10000.0) / d_model))
#div_term = 1 / (10000 ** (torch.arange(0., d_model, 2) / d_model))
pe[:, 0::2] = torch.sin(position * div_term)
pe[:, 1::2] = torch.cos(position * div_term)
pe = pe.unsqueeze(0)
self.register_buffer('pe', pe)
def forward(self, x):
x = x + Variable(self.pe[:, :x.size(1)],
requires_grad=False)
return self.dropout(x)
plt.figure(figsize=(15, 5))
pe = PositionalEncoding(20, 0)
y = pe.forward(Variable(torch.zeros(1, 100, 20)))
plt.plot(np.arange(100), y[0, :, 4:8].data.numpy())
plt.legend(["dim %d"%p for p in [4,5,6,7]])
None
###Output
_____no_output_____
###Markdown
Instantiate sequence-to-sequence Transformers
###Code
def make_model(src_vocab, tgt_vocab, N=6,
d_model=512, d_ff=2048, h=8, dropout=0.1):
"Helper: Construct a model from hyperparameters."
c = copy.deepcopy
attn = MultiHeadedAttention(h, d_model)
ff = PositionwiseFeedForward(d_model, d_ff, dropout)
position = PositionalEncoding(d_model, dropout)
model = EncoderDecoder(
Encoder(EncoderLayer(d_model, c(attn), c(ff), dropout), N),
Decoder(DecoderLayer(d_model, c(attn), c(attn),
c(ff), dropout), N),
nn.Sequential(Embeddings(d_model, src_vocab), c(position)),
nn.Sequential(Embeddings(d_model, tgt_vocab), c(position)),
Generator(d_model, tgt_vocab))
# This was important from their code.
# Initialize parameters with Glorot / fan_avg.
for p in model.parameters():
if p.dim() > 1:
nn.init.xavier_uniform_(p)
return model
# Small example model.
tmp_model = make_model(10, 10, 2)
print(tmp_model)
None
###Output
EncoderDecoder(
(encoder): Encoder(
(layers): ModuleList(
(0): EncoderLayer(
(self_attn): MultiHeadedAttention(
(linears): ModuleList(
(0): Linear(in_features=512, out_features=512, bias=True)
(1): Linear(in_features=512, out_features=512, bias=True)
(2): Linear(in_features=512, out_features=512, bias=True)
(3): Linear(in_features=512, out_features=512, bias=True)
)
(dropout): Dropout(p=0.1, inplace=False)
)
(feed_forward): PositionwiseFeedForward(
(w_1): Linear(in_features=512, out_features=2048, bias=True)
(w_2): Linear(in_features=2048, out_features=512, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(sublayer): ModuleList(
(0): SublayerConnection(
(norm): LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(1): SublayerConnection(
(norm): LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
(1): EncoderLayer(
(self_attn): MultiHeadedAttention(
(linears): ModuleList(
(0): Linear(in_features=512, out_features=512, bias=True)
(1): Linear(in_features=512, out_features=512, bias=True)
(2): Linear(in_features=512, out_features=512, bias=True)
(3): Linear(in_features=512, out_features=512, bias=True)
)
(dropout): Dropout(p=0.1, inplace=False)
)
(feed_forward): PositionwiseFeedForward(
(w_1): Linear(in_features=512, out_features=2048, bias=True)
(w_2): Linear(in_features=2048, out_features=512, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(sublayer): ModuleList(
(0): SublayerConnection(
(norm): LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(1): SublayerConnection(
(norm): LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(norm): LayerNorm()
)
(decoder): Decoder(
(layers): ModuleList(
(0): DecoderLayer(
(self_attn): MultiHeadedAttention(
(linears): ModuleList(
(0): Linear(in_features=512, out_features=512, bias=True)
(1): Linear(in_features=512, out_features=512, bias=True)
(2): Linear(in_features=512, out_features=512, bias=True)
(3): Linear(in_features=512, out_features=512, bias=True)
)
(dropout): Dropout(p=0.1, inplace=False)
)
(src_attn): MultiHeadedAttention(
(linears): ModuleList(
(0): Linear(in_features=512, out_features=512, bias=True)
(1): Linear(in_features=512, out_features=512, bias=True)
(2): Linear(in_features=512, out_features=512, bias=True)
(3): Linear(in_features=512, out_features=512, bias=True)
)
(dropout): Dropout(p=0.1, inplace=False)
)
(feed_forward): PositionwiseFeedForward(
(w_1): Linear(in_features=512, out_features=2048, bias=True)
(w_2): Linear(in_features=2048, out_features=512, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(sublayer): ModuleList(
(0): SublayerConnection(
(norm): LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(1): SublayerConnection(
(norm): LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(2): SublayerConnection(
(norm): LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
(1): DecoderLayer(
(self_attn): MultiHeadedAttention(
(linears): ModuleList(
(0): Linear(in_features=512, out_features=512, bias=True)
(1): Linear(in_features=512, out_features=512, bias=True)
(2): Linear(in_features=512, out_features=512, bias=True)
(3): Linear(in_features=512, out_features=512, bias=True)
)
(dropout): Dropout(p=0.1, inplace=False)
)
(src_attn): MultiHeadedAttention(
(linears): ModuleList(
(0): Linear(in_features=512, out_features=512, bias=True)
(1): Linear(in_features=512, out_features=512, bias=True)
(2): Linear(in_features=512, out_features=512, bias=True)
(3): Linear(in_features=512, out_features=512, bias=True)
)
(dropout): Dropout(p=0.1, inplace=False)
)
(feed_forward): PositionwiseFeedForward(
(w_1): Linear(in_features=512, out_features=2048, bias=True)
(w_2): Linear(in_features=2048, out_features=512, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(sublayer): ModuleList(
(0): SublayerConnection(
(norm): LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(1): SublayerConnection(
(norm): LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(2): SublayerConnection(
(norm): LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(norm): LayerNorm()
)
(src_embed): Sequential(
(0): Embeddings(
(lut): Embedding(10, 512)
)
(1): PositionalEncoding(
(dropout): Dropout(p=0.1, inplace=False)
)
)
(tgt_embed): Sequential(
(0): Embeddings(
(lut): Embedding(10, 512)
)
(1): PositionalEncoding(
(dropout): Dropout(p=0.1, inplace=False)
)
)
(generator): Generator(
(proj): Linear(in_features=512, out_features=10, bias=True)
)
)
###Markdown
Create a mini-batch
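Note how the `Batch` class below sets up teacher forcing: the decoder input `trg` drops the last token of the target, while `trg_y` drops the first one, so at every position the model is trained to predict the next token. A tiny made-up example:

```python
trg_full = [1, 7, 4, 2]
trg   = trg_full[:-1]   # [1, 7, 4]  fed to the decoder
trg_y = trg_full[1:]    # [7, 4, 2]  tokens the decoder must predict
```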
###Code
class Batch:
"Object for holding a batch of data with mask during training."
def __init__(self, src, trg=None, pad=0):
self.src = src
self.src_mask = (src != pad).unsqueeze(-2)
if trg is not None:
self.trg = trg[:, :-1]
self.trg_y = trg[:, 1:]
self.trg_mask = \
self.make_std_mask(self.trg, pad)
self.ntokens = (self.trg_y != pad).data.sum()
@staticmethod
def make_std_mask(tgt, pad):
"Create a mask to hide padding and future words."
tgt_mask = (tgt != pad).unsqueeze(-2)
tgt_mask = tgt_mask & Variable(
subsequent_mask(tgt.size(-1)).type_as(tgt_mask.data))
return tgt_mask
###Output
_____no_output_____
###Markdown
Train
###Code
def run_epoch(data_iter, model, loss_compute):
"Standard Training and Logging Function"
start = time.time()
total_tokens = 0
total_loss = 0
tokens = 0
for i, batch in enumerate(data_iter):
out = model.forward(batch.src, batch.trg, batch.src_mask, batch.trg_mask)
loss = loss_compute(out, batch.trg_y, batch.ntokens)
total_loss += loss.detach().numpy()
total_tokens += batch.ntokens.numpy()
tokens += batch.ntokens.numpy()
if i % 50 == 1:
elapsed = time.time() - start
print('Epoch Step: {} Loss: {:.4f} Tokens per Sec: {:.4f}'.format( i, loss.detach().float().numpy() / batch.ntokens.float().numpy(), float(tokens) / float(elapsed) ) )
print(' t1: {:.4f} t2: {:.4f}'.format( loss.detach().float().numpy() , batch.ntokens.float().numpy() ) )
start = time.time()
tokens = 0
return total_loss / total_tokens
global max_src_in_batch, max_tgt_in_batch
def batch_size_fn(new, count, sofar):
"Keep augmenting batch and calculate total number of tokens + padding."
global max_src_in_batch, max_tgt_in_batch
if count == 1:
max_src_in_batch = 0
max_tgt_in_batch = 0
max_src_in_batch = max(max_src_in_batch, len(new.src))
max_tgt_in_batch = max(max_tgt_in_batch, len(new.trg) + 2)
src_elements = count * max_src_in_batch
tgt_elements = count * max_tgt_in_batch
return max(src_elements, tgt_elements)
###Output
_____no_output_____
###Markdown
Optimization
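The `NoamOpt` wrapper below raises the learning rate linearly for the first `warmup` steps and then decays it proportionally to the inverse square root of the step number; the rate it implements is

$$lrate = factor \cdot d_{model}^{-0.5} \cdot \min\left(step^{-0.5},\; step \cdot warmup^{-1.5}\right)$$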
###Code
class NoamOpt:
"Optim wrapper that implements rate."
def __init__(self, model_size, factor, warmup, optimizer):
self.optimizer = optimizer
self._step = 0
self.warmup = warmup
self.factor = factor
self.model_size = model_size
self._rate = 0
def step(self):
"Update parameters and rate"
self._step += 1
rate = self.rate()
for p in self.optimizer.param_groups:
p['lr'] = rate
self._rate = rate
self.optimizer.step()
def rate(self, step = None):
"Implement `lrate` above"
if step is None:
step = self._step
return self.factor * \
(self.model_size ** (-0.5) *
min(step ** (-0.5), step * self.warmup ** (-1.5)))
def get_std_opt(model):
return NoamOpt(model.src_embed[0].d_model, 2, 4000,
torch.optim.Adam(model.parameters(), lr=0, betas=(0.9, 0.98), eps=1e-9))
# Three settings of the lrate hyperparameters.
opts = [NoamOpt(512, 1, 4000, None),
NoamOpt(512, 1, 8000, None),
NoamOpt(256, 1, 4000, None)]
plt.plot(np.arange(1, 20000), [[opt.rate(i) for opt in opts] for i in range(1, 20000)])
plt.legend(["512:4000", "512:8000", "256:4000"])
None
###Output
_____no_output_____
###Markdown
Label smoothing
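`LabelSmoothing` below replaces the one-hot target with a smoothed distribution: the true class gets probability $1 - \epsilon$ (the `confidence`), the padding index gets $0$, and each of the remaining $size - 2$ classes gets $\frac{\epsilon}{size - 2}$. The loss is then the KL divergence between the model's log-probabilities and this smoothed target.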
###Code
class LabelSmoothing(nn.Module):
"Implement label smoothing."
def __init__(self, size, padding_idx, smoothing=0.0):
super(LabelSmoothing, self).__init__()
#self.criterion = nn.KLDivLoss(size_average=False)
self.criterion = nn.KLDivLoss(reduction='sum')
self.padding_idx = padding_idx
self.confidence = 1.0 - smoothing
self.smoothing = smoothing
self.size = size
self.true_dist = None
def forward(self, x, target):
assert x.size(1) == self.size
true_dist = x.data.clone()
true_dist.fill_(self.smoothing / (self.size - 2))
true_dist.scatter_(1, target.data.unsqueeze(1), self.confidence)
true_dist[:, self.padding_idx] = 0
mask = torch.nonzero(target.data == self.padding_idx)
if mask.dim() > 0:
true_dist.index_fill_(0, mask.squeeze(), 0.0)
self.true_dist = true_dist
return self.criterion(x, Variable(true_dist, requires_grad=False))
#Example of label smoothing.
crit = LabelSmoothing(5, 0, 0.4)
predict = torch.FloatTensor([[0, 0.2, 0.7, 0.1, 0],
[0, 0.2, 0.7, 0.1, 0],
[0, 0.2, 0.7, 0.1, 0]])
v = crit(Variable(predict.log()),
Variable(torch.LongTensor([2, 1, 0])))
# Show the target distributions expected by the system.
plt.imshow(crit.true_dist)
None
crit = LabelSmoothing(5, 0, 0.1)
def loss(x):
d = x + 3 * 1
predict = torch.FloatTensor([[0, x / d, 1 / d, 1 / d, 1 / d],
])
#print(predict)
return crit(Variable(predict.log()),
#Variable(torch.LongTensor([1]))).data[0]
Variable(torch.LongTensor([1]))).item()
plt.plot(np.arange(1, 100), [loss(x) for x in range(1, 100)])
None
###Output
_____no_output_____
###Markdown
Task : Memorize/copy-paste a sequence of arbitrary numbers
###Code
def data_gen(V, batch, nbatches):
"Generate random data for a src-tgt copy task."
for i in range(nbatches):
data = torch.from_numpy(np.random.randint(1, V, size=(batch, 10)))
data[:, 0] = 1
src = Variable(data, requires_grad=False)
tgt = Variable(data, requires_grad=False)
yield Batch(src, tgt, 0)
class SimpleLossCompute:
"A simple loss compute and train function."
def __init__(self, generator, criterion, opt=None):
self.generator = generator
self.criterion = criterion
self.opt = opt
def __call__(self, x, y, norm):
x = self.generator(x)
loss = self.criterion(x.contiguous().view(-1, x.size(-1)),
y.contiguous().view(-1)) / norm
loss.backward()
if self.opt is not None:
self.opt.step()
self.opt.optimizer.zero_grad()
return loss.item() * norm
# Train the simple copy task
# Training time : 100sec
V = 11
criterion = LabelSmoothing(size=V, padding_idx=0, smoothing=0.0)
model = make_model(V, V, N=2)
model_opt = NoamOpt(model.src_embed[0].d_model, 1, 400,
torch.optim.Adam(model.parameters(), lr=0, betas=(0.9, 0.98), eps=1e-9))
for epoch in range(10):
model.train()
run_epoch(data_gen(V, 30, 20), model,
SimpleLossCompute(model.generator, criterion, model_opt))
model.eval()
print(run_epoch(data_gen(V, 30, 5), model,
SimpleLossCompute(model.generator, criterion, None)))
###Output
Epoch Step: 1 Loss: 3.0000 Tokens per Sec: 674.8524
t1: 810.0000 t2: 270.0000
Epoch Step: 1 Loss: 1.0000 Tokens per Sec: 975.9554
t1: 270.0000 t2: 270.0000
1.4
Epoch Step: 1 Loss: 1.0000 Tokens per Sec: 697.1771
t1: 270.0000 t2: 270.0000
Epoch Step: 1 Loss: 1.0000 Tokens per Sec: 1037.1729
t1: 270.0000 t2: 270.0000
1.0
Epoch Step: 1 Loss: 1.0000 Tokens per Sec: 690.7649
t1: 270.0000 t2: 270.0000
Epoch Step: 1 Loss: 1.0000 Tokens per Sec: 1023.9162
t1: 270.0000 t2: 270.0000
1.0
Epoch Step: 1 Loss: 1.0000 Tokens per Sec: 648.6243
t1: 270.0000 t2: 270.0000
Epoch Step: 1 Loss: 1.0000 Tokens per Sec: 1076.5442
t1: 270.0000 t2: 270.0000
1.0
Epoch Step: 1 Loss: 1.0000 Tokens per Sec: 669.2552
t1: 270.0000 t2: 270.0000
Epoch Step: 1 Loss: 0.0000 Tokens per Sec: 1001.2830
t1: 0.0000 t2: 270.0000
0.2
Epoch Step: 1 Loss: 1.0000 Tokens per Sec: 642.4679
t1: 270.0000 t2: 270.0000
Epoch Step: 1 Loss: 0.0000 Tokens per Sec: 1026.3759
t1: 0.0000 t2: 270.0000
0.0
Epoch Step: 1 Loss: 0.0000 Tokens per Sec: 615.0579
t1: 0.0000 t2: 270.0000
Epoch Step: 1 Loss: 0.0000 Tokens per Sec: 977.2604
t1: 0.0000 t2: 270.0000
0.0
Epoch Step: 1 Loss: 0.0000 Tokens per Sec: 678.2351
t1: 0.0000 t2: 270.0000
Epoch Step: 1 Loss: 0.0000 Tokens per Sec: 919.1223
t1: 0.0000 t2: 270.0000
0.0
Epoch Step: 1 Loss: 0.0000 Tokens per Sec: 624.0509
t1: 0.0000 t2: 270.0000
Epoch Step: 1 Loss: 0.0000 Tokens per Sec: 898.2978
t1: 0.0000 t2: 270.0000
0.0
Epoch Step: 1 Loss: 0.0000 Tokens per Sec: 630.7748
t1: 0.0000 t2: 270.0000
Epoch Step: 1 Loss: 0.0000 Tokens per Sec: 1024.5554
t1: 0.0000 t2: 270.0000
0.0
###Markdown
Test network
###Code
def greedy_decode(model, src, src_mask, max_len, start_symbol):
memory = model.encode(src, src_mask)
ys = torch.ones(1, 1).fill_(start_symbol).type_as(src.data)
for i in range(max_len-1):
out = model.decode(memory, src_mask,
Variable(ys),
Variable(subsequent_mask(ys.size(1))
.type_as(src.data)))
prob = model.generator(out[:, -1])
_, next_word = torch.max(prob, dim = 1)
next_word = next_word.data[0]
ys = torch.cat([ys,
torch.ones(1, 1).type_as(src.data).fill_(next_word)], dim=1)
return ys
model.eval()
src = Variable(torch.LongTensor([[1,2,3,4,5,6,7,8,9,10]]) )
n = src.size(1)
src_mask = Variable(torch.ones(1, 1, n) )
print(greedy_decode(model, src, src_mask, max_len=n, start_symbol=1))
src = Variable(torch.LongTensor([[1,3,5,7,9]]) )
n = src.size(1)
src_mask = Variable(torch.ones(1, 1, n) )
print(greedy_decode(model, src, src_mask, max_len=n, start_symbol=1))
src = Variable(torch.LongTensor([[1,2,4,5,7,8]]) )
n = src.size(1)
src_mask = Variable(torch.ones(1, 1, n) )
print(greedy_decode(model, src, src_mask, max_len=n, start_symbol=1))
###Output
tensor([[ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]])
tensor([[1, 3, 5, 7, 9]])
tensor([[1, 2, 4, 5, 7, 8]])
|
Model backlog/Train/3-melanoma-5fold-resnet18-basic-augment.ipynb | ###Markdown
Dependencies
###Code
# !pip install --quiet efficientnet
!pip install --quiet image-classifiers
import warnings, json, re
from scripts_step_lr_schedulers import *
from melanoma_utility_scripts import *
from kaggle_datasets import KaggleDatasets
from sklearn.model_selection import KFold
import tensorflow.keras.layers as L
import tensorflow.keras.backend as K
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from tensorflow.keras import optimizers, layers, metrics, losses, Model
# import efficientnet.tfkeras as efn
from classification_models.tfkeras import Classifiers
SEED = 0
seed_everything(SEED)
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
TPU configuration
###Code
strategy, tpu = set_up_strategy()
print("REPLICAS: ", strategy.num_replicas_in_sync)
AUTO = tf.data.experimental.AUTOTUNE
###Output
REPLICAS: 1
###Markdown
Model parameters
###Code
base_model_path = '/kaggle/input/efficientnet/'
dataset_path = 'melanoma-256x256'
config = {
"HEIGHT": 256,
"WIDTH": 256,
"CHANNELS": 3,
"BATCH_SIZE": 64,
"EPOCHS": 15,
"LEARNING_RATE": 3e-4,
"ES_PATIENCE": 5,
"N_FOLDS": 5,
"BASE_MODEL_PATH": base_model_path + 'efficientnet-b3_weights_tf_dim_ordering_tf_kernels_autoaugment_notop.h5',
"DATASET_PATH": dataset_path
}
with open('config.json', 'w') as json_file:
json.dump(json.loads(json.dumps(config)), json_file)
config
###Output
_____no_output_____
###Markdown
Load data
###Code
database_base_path = '/kaggle/input/siim-isic-melanoma-classification/'
k_fold = pd.read_csv(database_base_path + 'train.csv')
print('Train samples: %d' % len(k_fold))
display(k_fold.head())
GCS_PATH = KaggleDatasets().get_gcs_path(dataset_path)
TRAINING_FILENAMES = tf.io.gfile.glob(GCS_PATH + '/train*.tfrec')
###Output
Train samples: 33126
###Markdown
Auxiliary functions
###Code
# Datasets utility functions
LABELED_TFREC_FORMAT = {
"image": tf.io.FixedLenFeature([], tf.string), # tf.string means bytestring
"target": tf.io.FixedLenFeature([], tf.int64), # shape [] means single element
"image_name": tf.io.FixedLenFeature([], tf.string),
# meta features
"patient_id": tf.io.FixedLenFeature([], tf.int64),
"sex": tf.io.FixedLenFeature([], tf.int64),
"age_approx": tf.io.FixedLenFeature([], tf.int64),
"anatom_site_general_challenge": tf.io.FixedLenFeature([], tf.int64),
"diagnosis": tf.io.FixedLenFeature([], tf.int64)
}
def decode_image(image_data, height, width, channels):
image = tf.image.decode_jpeg(image_data, channels=channels)
image = tf.cast(image, tf.float32) / 255.0
image = tf.reshape(image, [height, width, channels])
return image
def read_labeled_tfrecord(example, height=config['HEIGHT'], width=config['WIDTH'], channels=config['CHANNELS']):
example = tf.io.parse_single_example(example, LABELED_TFREC_FORMAT)
image = decode_image(example['image'], height, width, channels)
label = tf.cast(example['target'], tf.float32)
# meta features
data = {}
data['patient_id'] = tf.cast(example['patient_id'], tf.int32)
data['sex'] = tf.cast(example['sex'], tf.int32)
data['age_approx'] = tf.cast(example['age_approx'], tf.int32)
data['anatom_site_general_challenge'] = tf.cast(tf.one_hot(example['anatom_site_general_challenge'], 7), tf.int32)
data['diagnosis'] = tf.cast(tf.one_hot(example['diagnosis'], 10), tf.int32)
return {'input_image': image, 'input_meta': data}, label # returns a dataset of (image, data, label)
def read_labeled_tfrecord_eval(example, height=config['HEIGHT'], width=config['WIDTH'], channels=config['CHANNELS']):
example = tf.io.parse_single_example(example, LABELED_TFREC_FORMAT)
image = decode_image(example['image'], height, width, channels)
label = tf.cast(example['target'], tf.float32)
image_name = example['image_name']
# meta features
data = {}
data['patient_id'] = tf.cast(example['patient_id'], tf.int32)
data['sex'] = tf.cast(example['sex'], tf.int32)
data['age_approx'] = tf.cast(example['age_approx'], tf.int32)
data['anatom_site_general_challenge'] = tf.cast(tf.one_hot(example['anatom_site_general_challenge'], 7), tf.int32)
data['diagnosis'] = tf.cast(tf.one_hot(example['diagnosis'], 10), tf.int32)
return {'input_image': image, 'input_meta': data}, label, image_name # returns a dataset of (image, data, label, image_name)
def data_augment(image, label):
p_spatial = tf.random.uniform([1], minval=0, maxval=1, dtype='float32', seed=SEED)
# p_pixel = tf.random.uniform([1], minval=0, maxval=1, dtype='float32', seed=SEED)
# p_crop = tf.random.uniform([1], minval=0, maxval=1, dtype='float32', seed=SEED)
### Spatial-level transforms
if p_spatial >= .2: # flips
image['input_image'] = tf.image.random_flip_left_right(image['input_image'], seed=SEED)
image['input_image'] = tf.image.random_flip_up_down(image['input_image'], seed=SEED)
if p_spatial >= .7:
image['input_image'] = tf.image.transpose(image['input_image'])
if p_spatial >= .5: # rotate
image['input_image'] = tf.image.rot90(image['input_image'])
if p_spatial >= .75: # double rotate
image['input_image'] = tf.image.rot90(image['input_image'])
# if p_crop >= .7: # crops
# if p_crop >= .95:
# image['input_image'] = tf.image.random_crop(image['input_image'], size=[int(config['HEIGHT']*.6), int(config['WIDTH']*.6), config['CHANNELS']], seed=SEED)
# elif p_crop >= .85:
# image['input_image'] = tf.image.random_crop(image['input_image'], size=[int(config['HEIGHT']*.7), int(config['WIDTH']*.7), config['CHANNELS']], seed=SEED)
# elif p_crop >= .8:
# image['input_image'] = tf.image.random_crop(image['input_image'], size=[int(config['HEIGHT']*.8), int(config['WIDTH']*.8), config['CHANNELS']], seed=SEED)
# else:
# image['input_image'] = tf.image.random_crop(image['input_image'], size=[int(config['HEIGHT']*.9), int(config['WIDTH']*.9), config['CHANNELS']], seed=SEED)
# image['input_image'] = tf.image.resize(image['input_image'], size=[config['HEIGHT'], config['WIDTH']])
# ## Pixel-level transforms
# if p_pixel >= .4: # pixel transformations
# if p_pixel >= .85:
# image['input_image'] = tf.image.random_saturation(image['input_image'], lower=0, upper=2, seed=SEED)
# elif p_pixel >= .65:
# image['input_image'] = tf.image.random_contrast(image['input_image'], lower=.8, upper=2, seed=SEED)
# elif p_pixel >= .5:
# image['input_image'] = tf.image.random_brightness(image['input_image'], max_delta=.2, seed=SEED)
# else:
# image['input_image'] = tf.image.adjust_gamma(image['input_image'], gamma=.5)
return image, label
def load_dataset(filenames, ordered=False, buffer_size=-1):
ignore_order = tf.data.Options()
if not ordered:
ignore_order.experimental_deterministic = False # disable order, increase speed
dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads=buffer_size) # automatically interleaves reads from multiple files
dataset = dataset.with_options(ignore_order) # uses data as soon as it streams in, rather than in its original order
dataset = dataset.map(read_labeled_tfrecord, num_parallel_calls=buffer_size)
return dataset # returns a dataset of (image, data, label)
def load_dataset_eval(filenames, buffer_size=-1):
dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads=buffer_size) # automatically interleaves reads from multiple files
dataset = dataset.map(read_labeled_tfrecord_eval, num_parallel_calls=buffer_size)
return dataset # returns a dataset of (image, data, label, image_name)
def get_training_dataset(filenames, batch_size, buffer_size=-1):
dataset = load_dataset(filenames, ordered=False, buffer_size=buffer_size)
dataset = dataset.map(data_augment, num_parallel_calls=AUTO)
dataset = dataset.repeat() # the training dataset must repeat for several epochs
dataset = dataset.shuffle(2048)
dataset = dataset.batch(batch_size, drop_remainder=True) # slighly faster with fixed tensor sizes
dataset = dataset.prefetch(buffer_size) # prefetch next batch while training (autotune prefetch buffer size)
return dataset
def get_validation_dataset(filenames, ordered=True, repeated=False, batch_size=32, buffer_size=-1):
dataset = load_dataset(filenames, ordered=ordered, buffer_size=buffer_size)
if repeated:
dataset = dataset.repeat()
dataset = dataset.shuffle(2048)
dataset = dataset.batch(batch_size, drop_remainder=repeated)
dataset = dataset.prefetch(buffer_size)
return dataset
def get_eval_dataset(filenames, batch_size=32, buffer_size=-1):
dataset = load_dataset_eval(filenames, buffer_size=buffer_size)
dataset = dataset.batch(batch_size, drop_remainder=False)
dataset = dataset.prefetch(buffer_size)
return dataset
def count_data_items(filenames):
n = [int(re.compile(r"-([0-9]*)\.").search(filename).group(1)) for filename in filenames]
return np.sum(n)
###Output
_____no_output_____
###Markdown
Learning rate scheduler
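`linear_schedule_with_warmup` is imported from `scripts_step_lr_schedulers`, so its exact implementation is not shown in this notebook. A minimal sketch consistent with how it is called below (linear warm-up from `lr_start` to `lr_max`, an optional hold phase, then linear decay towards `lr_min`) might look like this:

```python
# Illustrative sketch only; the real helper lives in scripts_step_lr_schedulers.
def linear_schedule_with_warmup(step, total_steps, warmup_steps, hold_max_steps,
                                lr_start, lr_max, lr_min):
    if step < warmup_steps:                      # linear warm-up
        return lr_start + (lr_max - lr_start) * (step / warmup_steps)
    if step < warmup_steps + hold_max_steps:     # hold at lr_max
        return lr_max
    decay_steps = total_steps - warmup_steps - hold_max_steps
    progress = (step - warmup_steps - hold_max_steps) / decay_steps
    return max(lr_min, lr_max + (lr_min - lr_max) * progress)  # linear decay
```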
###Code
lr_min = 1e-6
lr_start = 0
lr_max = config['LEARNING_RATE']
step_size = 26880 // config['BATCH_SIZE'] #(len(k_fold[k_fold[f'fold_{fold_n}'] == 'train']) * 2) // config['BATCH_SIZE']
total_steps = config['EPOCHS'] * step_size
hold_max_steps = 0
warmup_steps = step_size * 5
rng = [i for i in range(0, total_steps, step_size)]
y = [linear_schedule_with_warmup(tf.cast(x, tf.float32), total_steps=total_steps,
warmup_steps=warmup_steps, hold_max_steps=hold_max_steps,
lr_start=lr_start, lr_max=lr_max, lr_min=lr_min) for x in rng]
sns.set(style="whitegrid")
fig, ax = plt.subplots(figsize=(20, 6))
plt.plot(rng, y)
print("Learning rate schedule: {:.3g} to {:.3g} to {:.3g}".format(y[0], max(y), y[-1]))
###Output
Learning rate schedule: 0 to 0.0003 to 3e-05
###Markdown
Model
###Code
def model_fn(input_shape):
input_image = L.Input(shape=input_shape, name='input_image')
ResNet18, preprocess_input = Classifiers.get('resnet18')
base_model = ResNet18(input_shape=input_shape,
weights='imagenet',
include_top=False)
x = base_model(input_image)
x = L.GlobalAveragePooling2D()(x)
output = L.Dense(1, activation='sigmoid')(x)
model = Model(inputs=input_image, outputs=output)
return model
###Output
_____no_output_____
###Markdown
Training
###Code
eval_dataset = get_eval_dataset(TRAINING_FILENAMES, batch_size=config['BATCH_SIZE'], buffer_size=AUTO)
image_names = next(iter(eval_dataset.unbatch().map(lambda data, label, image_name: image_name).batch(len(k_fold)))).numpy().astype('U')
image_data = eval_dataset.map(lambda data, label, image_name: data)
history_list = []
kfold = KFold(config['N_FOLDS'], shuffle=True, random_state=SEED)
for n_fold, (trn_idx, val_idx) in enumerate(kfold.split(TRAINING_FILENAMES)):
n_fold +=1
print('\nFOLD: %d' % (n_fold))
# tf.tpu.experimental.initialize_tpu_system(tpu)
K.clear_session()
### Data
train_filenames = np.array(TRAINING_FILENAMES)[trn_idx]
valid_filenames = np.array(TRAINING_FILENAMES)[val_idx]
train_size = count_data_items(train_filenames)
step_size = train_size // config['BATCH_SIZE']
# Train model
model_path = 'model_fold_%d.h5' % (n_fold)
es = EarlyStopping(monitor='val_loss', mode='min', patience=config['ES_PATIENCE'],
restore_best_weights=True, verbose=1)
checkpoint = ModelCheckpoint(model_path, monitor='val_loss', mode='min', save_best_only=True, save_weights_only=True)
with strategy.scope():
model = model_fn((config['HEIGHT'], config['WIDTH'], config['CHANNELS']))
lr = lambda: linear_schedule_with_warmup(tf.cast(optimizer.iterations, tf.float32),
total_steps=total_steps, warmup_steps=warmup_steps,
hold_max_steps=hold_max_steps, lr_start=lr_start,
lr_max=lr_max, lr_min=lr_min)
optimizer = optimizers.Adam(learning_rate=lr)
model.compile(optimizer, loss=losses.BinaryCrossentropy(),
metrics=[metrics.AUC()])
history = model.fit(get_training_dataset(train_filenames, batch_size=config['BATCH_SIZE'], buffer_size=AUTO),
validation_data=get_validation_dataset(valid_filenames, ordered=True, repeated=False,
batch_size=config['BATCH_SIZE'], buffer_size=AUTO),
epochs=config['EPOCHS'],
steps_per_epoch=step_size,
callbacks=[checkpoint, es],
verbose=2).history
history_list.append(history)
# Make predictions
preds = model.predict(image_data)
name_preds = dict(zip(image_names, preds.reshape(len(preds))))
k_fold[f'pred_fold_{n_fold}'] = k_fold.apply(lambda x: name_preds[x['image_name']], axis=1)
valid_filenames = np.array(TRAINING_FILENAMES)[val_idx]
valid_dataset = get_eval_dataset(valid_filenames, batch_size=config['BATCH_SIZE'], buffer_size=AUTO)
valid_image_names = next(iter(valid_dataset.unbatch().map(lambda data, label, image_name: image_name).batch(count_data_items(valid_filenames)))).numpy().astype('U')
k_fold[f'fold_{n_fold}'] = k_fold.apply(lambda x: 'validation' if x['image_name'] in valid_image_names else 'train', axis=1)
###Output
FOLD: 1
Downloading data from https://github.com/qubvel/classification_models/releases/download/0.0.1/resnet18_imagenet_1000_no_top.h5
44924928/44920640 [==============================] - 1s 0us/step
Epoch 1/15
388/388 - 110s - loss: 0.2895 - auc: 0.5799 - val_loss: 0.2685 - val_auc: 0.3793
Epoch 2/15
388/388 - 110s - loss: 0.0737 - auc: 0.8386 - val_loss: 0.0969 - val_auc: 0.5908
Epoch 3/15
388/388 - 109s - loss: 0.0705 - auc: 0.8682 - val_loss: 0.0723 - val_auc: 0.8444
Epoch 4/15
388/388 - 109s - loss: 0.0701 - auc: 0.8616 - val_loss: 0.0758 - val_auc: 0.8066
Epoch 5/15
388/388 - 108s - loss: 0.0705 - auc: 0.8672 - val_loss: 0.0825 - val_auc: 0.7724
Epoch 6/15
388/388 - 109s - loss: 0.0690 - auc: 0.8735 - val_loss: 0.0774 - val_auc: 0.8366
Epoch 7/15
388/388 - 109s - loss: 0.0667 - auc: 0.8835 - val_loss: 0.0713 - val_auc: 0.8542
Epoch 8/15
388/388 - 110s - loss: 0.0620 - auc: 0.9090 - val_loss: 0.0687 - val_auc: 0.8805
Epoch 9/15
388/388 - 109s - loss: 0.0593 - auc: 0.9191 - val_loss: 0.0696 - val_auc: 0.8583
Epoch 10/15
388/388 - 109s - loss: 0.0573 - auc: 0.9245 - val_loss: 0.0795 - val_auc: 0.7964
Epoch 11/15
388/388 - 110s - loss: 0.0525 - auc: 0.9372 - val_loss: 0.1081 - val_auc: 0.6715
Epoch 12/15
388/388 - 110s - loss: 0.0476 - auc: 0.9489 - val_loss: 0.0774 - val_auc: 0.8188
Epoch 13/15
Restoring model weights from the end of the best epoch.
388/388 - 110s - loss: 0.0417 - auc: 0.9605 - val_loss: 0.0709 - val_auc: 0.8723
Epoch 00013: early stopping
FOLD: 2
Epoch 1/15
420/420 - 116s - loss: 0.4728 - auc: 0.5980 - val_loss: 0.2281 - val_auc: 0.3935
Epoch 2/15
420/420 - 115s - loss: 0.0740 - auc: 0.8439 - val_loss: 0.0908 - val_auc: 0.6639
Epoch 3/15
420/420 - 115s - loss: 0.0711 - auc: 0.8625 - val_loss: 0.0837 - val_auc: 0.7722
Epoch 4/15
420/420 - 115s - loss: 0.0710 - auc: 0.8542 - val_loss: 0.0779 - val_auc: 0.8267
Epoch 5/15
420/420 - 114s - loss: 0.0708 - auc: 0.8669 - val_loss: 0.0780 - val_auc: 0.8274
Epoch 6/15
420/420 - 115s - loss: 0.0665 - auc: 0.8836 - val_loss: 0.0824 - val_auc: 0.7724
Epoch 7/15
420/420 - 115s - loss: 0.0670 - auc: 0.8821 - val_loss: 0.0697 - val_auc: 0.8680
Epoch 8/15
420/420 - 115s - loss: 0.0645 - auc: 0.8931 - val_loss: 0.0823 - val_auc: 0.8494
Epoch 9/15
420/420 - 114s - loss: 0.0596 - auc: 0.9189 - val_loss: 0.0765 - val_auc: 0.8321
Epoch 10/15
420/420 - 115s - loss: 0.0566 - auc: 0.9251 - val_loss: 0.0772 - val_auc: 0.8383
Epoch 11/15
420/420 - 115s - loss: 0.0545 - auc: 0.9348 - val_loss: 0.0835 - val_auc: 0.8144
Epoch 12/15
Restoring model weights from the end of the best epoch.
420/420 - 114s - loss: 0.0485 - auc: 0.9494 - val_loss: 0.0834 - val_auc: 0.8751
Epoch 00012: early stopping
FOLD: 3
Epoch 1/15
420/420 - 115s - loss: 0.3104 - auc: 0.6038 - val_loss: 0.2810 - val_auc: 0.4007
Epoch 2/15
420/420 - 115s - loss: 0.0730 - auc: 0.8413 - val_loss: 0.1030 - val_auc: 0.5298
Epoch 3/15
420/420 - 115s - loss: 0.0706 - auc: 0.8557 - val_loss: 0.0742 - val_auc: 0.8557
Epoch 4/15
420/420 - 114s - loss: 0.0707 - auc: 0.8622 - val_loss: 0.0961 - val_auc: 0.7199
Epoch 5/15
420/420 - 115s - loss: 0.0694 - auc: 0.8636 - val_loss: 0.0851 - val_auc: 0.8039
Epoch 6/15
420/420 - 115s - loss: 0.0680 - auc: 0.8805 - val_loss: 0.0792 - val_auc: 0.8151
Epoch 7/15
420/420 - 115s - loss: 0.0651 - auc: 0.8849 - val_loss: 0.0964 - val_auc: 0.8202
Epoch 8/15
420/420 - 115s - loss: 0.0634 - auc: 0.9001 - val_loss: 0.0736 - val_auc: 0.8653
Epoch 9/15
420/420 - 114s - loss: 0.0599 - auc: 0.9181 - val_loss: 0.0740 - val_auc: 0.8414
Epoch 10/15
420/420 - 114s - loss: 0.0584 - auc: 0.9165 - val_loss: 0.0785 - val_auc: 0.8211
Epoch 11/15
420/420 - 115s - loss: 0.0514 - auc: 0.9341 - val_loss: 0.0748 - val_auc: 0.8455
Epoch 12/15
420/420 - 115s - loss: 0.0476 - auc: 0.9535 - val_loss: 0.0728 - val_auc: 0.8652
Epoch 13/15
420/420 - 115s - loss: 0.0415 - auc: 0.9623 - val_loss: 0.0736 - val_auc: 0.8467
Epoch 14/15
420/420 - 115s - loss: 0.0341 - auc: 0.9718 - val_loss: 0.0778 - val_auc: 0.8572
Epoch 15/15
420/420 - 115s - loss: 0.0239 - auc: 0.9869 - val_loss: 0.0771 - val_auc: 0.8310
FOLD: 4
Epoch 1/15
420/420 - 116s - loss: 0.3954 - auc: 0.5925 - val_loss: 0.1135 - val_auc: 0.5157
Epoch 2/15
420/420 - 115s - loss: 0.0720 - auc: 0.8366 - val_loss: 0.0908 - val_auc: 0.6873
Epoch 3/15
420/420 - 115s - loss: 0.0716 - auc: 0.8517 - val_loss: 0.0795 - val_auc: 0.8057
Epoch 4/15
420/420 - 118s - loss: 0.0702 - auc: 0.8603 - val_loss: 0.0731 - val_auc: 0.8529
Epoch 5/15
420/420 - 115s - loss: 0.0700 - auc: 0.8598 - val_loss: 0.0799 - val_auc: 0.7986
Epoch 6/15
420/420 - 114s - loss: 0.0690 - auc: 0.8693 - val_loss: 0.0735 - val_auc: 0.8638
Epoch 7/15
420/420 - 114s - loss: 0.0654 - auc: 0.8880 - val_loss: 0.0765 - val_auc: 0.8532
Epoch 8/15
420/420 - 114s - loss: 0.0617 - auc: 0.9026 - val_loss: 0.0764 - val_auc: 0.8397
Epoch 9/15
420/420 - 115s - loss: 0.0610 - auc: 0.9074 - val_loss: 0.0709 - val_auc: 0.8664
Epoch 10/15
420/420 - 114s - loss: 0.0563 - auc: 0.9307 - val_loss: 0.0819 - val_auc: 0.7790
Epoch 11/15
420/420 - 114s - loss: 0.0508 - auc: 0.9387 - val_loss: 0.0841 - val_auc: 0.7978
Epoch 12/15
420/420 - 115s - loss: 0.0464 - auc: 0.9509 - val_loss: 0.0845 - val_auc: 0.7800
Epoch 13/15
420/420 - 115s - loss: 0.0379 - auc: 0.9697 - val_loss: 0.0759 - val_auc: 0.8405
Epoch 14/15
Restoring model weights from the end of the best epoch.
420/420 - 115s - loss: 0.0315 - auc: 0.9736 - val_loss: 0.0804 - val_auc: 0.8363
Epoch 00014: early stopping
FOLD: 5
Epoch 1/15
420/420 - 115s - loss: 0.1697 - auc: 0.6165 - val_loss: 0.2399 - val_auc: 0.6274
Epoch 2/15
420/420 - 115s - loss: 0.0717 - auc: 0.8411 - val_loss: 0.0832 - val_auc: 0.7518
Epoch 3/15
420/420 - 114s - loss: 0.0732 - auc: 0.8550 - val_loss: 0.0723 - val_auc: 0.8768
Epoch 4/15
420/420 - 114s - loss: 0.0697 - auc: 0.8625 - val_loss: 0.0814 - val_auc: 0.8474
Epoch 5/15
420/420 - 114s - loss: 0.0718 - auc: 0.8513 - val_loss: 0.0746 - val_auc: 0.8159
Epoch 6/15
420/420 - 114s - loss: 0.0681 - auc: 0.8681 - val_loss: 0.0807 - val_auc: 0.7871
Epoch 7/15
420/420 - 114s - loss: 0.0686 - auc: 0.8764 - val_loss: 0.0694 - val_auc: 0.8700
Epoch 8/15
420/420 - 114s - loss: 0.0633 - auc: 0.9003 - val_loss: 0.0691 - val_auc: 0.8585
Epoch 9/15
420/420 - 114s - loss: 0.0628 - auc: 0.9067 - val_loss: 0.0684 - val_auc: 0.8637
Epoch 10/15
420/420 - 114s - loss: 0.0593 - auc: 0.9134 - val_loss: 0.0691 - val_auc: 0.8658
Epoch 11/15
420/420 - 115s - loss: 0.0551 - auc: 0.9320 - val_loss: 0.0700 - val_auc: 0.8610
Epoch 12/15
420/420 - 114s - loss: 0.0488 - auc: 0.9479 - val_loss: 0.0706 - val_auc: 0.8500
Epoch 13/15
420/420 - 115s - loss: 0.0419 - auc: 0.9602 - val_loss: 0.0744 - val_auc: 0.8249
Epoch 14/15
Restoring model weights from the end of the best epoch.
420/420 - 115s - loss: 0.0350 - auc: 0.9715 - val_loss: 0.0763 - val_auc: 0.8155
Epoch 00014: early stopping
###Markdown
Model loss graph
###Code
for n_fold in range(config['N_FOLDS']):
print(f'Fold: {n_fold}')
plot_metrics(history_list[n_fold])
###Output
Fold: 0
###Markdown
Model evaluation
###Code
display(evaluate_model(k_fold, config['N_FOLDS']).style.applymap(color_map))
###Output
_____no_output_____
###Markdown
Model evaluation by Subset
###Code
display(evaluate_model_Subset(k_fold, config['N_FOLDS']).style.applymap(color_map))
###Output
_____no_output_____
###Markdown
Confusion matrix
###Code
for n_fold in range(config['N_FOLDS']):
n_fold += 1
pred_col = f'pred_fold_{n_fold}'
train_set = k_fold[k_fold[f'fold_{n_fold}'] == 'train']
validation_set = k_fold[k_fold[f'fold_{n_fold}'] == 'validation']
print(f'Fold: {n_fold}')
plot_confusion_matrix(train_set['target'], np.round(train_set[pred_col]),
validation_set['target'], np.round(validation_set[pred_col]))
###Output
Fold: 1
###Markdown
Visualize predictions
###Code
k_fold['pred'] = 0
for n_fold in range(config['N_FOLDS']):
n_fold +=1
k_fold['pred'] += k_fold[f'pred_fold_{n_fold}'] / config['N_FOLDS']
print('Top 10 samples')
display(k_fold[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'diagnosis',
'target', 'pred'] + [c for c in k_fold.columns if (c.startswith('pred_fold'))]].head(10))
print('Top 10 positive samples')
display(k_fold[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'diagnosis',
'target', 'pred'] + [c for c in k_fold.columns if (c.startswith('pred_fold'))]].query('target == 1').head(10))
print('Top 10 predicted positive samples')
display(k_fold[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'diagnosis',
'target', 'pred'] + [c for c in k_fold.columns if (c.startswith('pred_fold'))]].query('pred >= .5').head(10))
###Output
Top 10 samples
|
MachineLearning/Linear Regression.ipynb | ###Markdown
Linear Regression
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import pylab as pl
import tensorflow as tf
import matplotlib.patches as mpatches
plt.rcParams['figure.figsize'] = (10,6)
X = np.arange(0.0,5.0,0.1)
X
#slope and intercept
a =1
b =0
y = a*X + b
plt.plot(X,y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
!wget -O FuelConsumption.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/FuelConsumptionCo2.csv
df = pd.read_csv("FuelConsumption.csv")
df.head()
###Output
_____no_output_____
###Markdown
Predict CO2 emissions of cars based on their engine size.
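The model below is a straight line $\hat{y} = a x + b$, and gradient descent adjusts $a$ and $b$ to minimise the mean squared error

$$loss = \frac{1}{N}\sum_{i=1}^{N}\left(a x_i + b - y_i\right)^2$$

which is exactly what `tf.reduce_mean(tf.square(y - train_y))` computes below.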
###Code
train_x = np.asanyarray(df[['ENGINE SIZE']])
train_y = np.asanyarray(df[['CO2 EMISSIONS']])
train_x
a = tf.Variable(20.0)
b = tf.Variable(30.2)
y = a * train_x + b
y
loss = tf.reduce_mean(tf.square(y - train_y))
loss
optimizer = tf.train.GradientDescentOptimizer(0.05)
###Output
_____no_output_____
###Markdown
Training model
###Code
train = optimizer.minimize(loss)
init = tf.global_variables_initializer()
sess = tf.Session()
result = sess.run(init)
loss_values = []
train_data = []
for step in range (100):
_,loss_val, a_val, b_val = sess.run([train,loss,a,b])
loss_values.append(loss_val)
if step % 5 ==0:
print(step,loss_val,a_val,b_val)
train_data.append([a_val,b_val])
plt.plot(loss_values,'b+')
cr, cg, cb = (1.0, 1.0, 0.0)
for f in train_data:
cb += 1.0 / len(train_data)
cg -= 1.0 / len(train_data)
if cb > 1.0: cb = 1.0
if cg < 0.0: cg = 0.0
[a, b] = f
f_y = np.vectorize(lambda x: a*x + b)(train_x)
line = plt.plot(train_x, f_y)
plt.setp(line, color=(cr,cg,cb))
plt.plot(train_x, train_y, 'bo')
green_line = mpatches.Patch(color='yellow', label='Data Points')
plt.legend(handles=[green_line])
plt.show()
###Output
_____no_output_____ |
Analysis/Merging_U2M_&_V2M .ipynb | ###Markdown
Merging U2M and V2M data and calculating Wind Speed
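`U2M` and `V2M` are the 2-metre eastward and northward wind components, so the quantities computed below are

$$speed = \sqrt{U2M^{2} + V2M^{2}}, \qquad direction = \operatorname{atan2}(V2M,\, U2M)$$

where the direction is in radians, measured counter-clockwise from east (the convention of `np.arctan2`).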
###Code
import pandas as pd
import numpy as np
# Load Wind data
df1 = pd.read_csv("../merradownload/2M Eastward Wind/North Carolina_monthly.csv")
df2 = pd.read_csv("../merradownload/2M Northward Wind/North Carolina_monthly.csv")
df1['month'] = [str(x).split('-')[-1] for x in df1['date']]
df1['year'] = [str(x).split('-')[0] for x in df1['date']]
df2['month'] = [str(x).split('-')[-1] for x in df2['date']]
df2['year'] = [str(x).split('-')[0] for x in df2['date']]
df1 = df1.drop(columns=['date'])
df2 = df2.drop(columns=['date'])
# Merging both wind data
df = pd.merge(df1, df2, on= ['lat','lon','year','month'])
df = df[['year','month','lat','lon','U2M','V2M']]
df['wind_speed'] = np.sqrt(df['U2M']**2 + df['V2M']**2)
df['wind_direction'] = np.arctan2(df['V2M'],df['U2M'])
df['state'] = 'North Carolina'
df
df.to_csv('North_carolina.csv', index=False )
###Output
_____no_output_____ |
IntroductionNN/Exercise4-CustomImages.ipynb | ###Markdown
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Below is code with a link to a happy or sad dataset which contains 80 images, 40 happy and 40 sad. Create a convolutional neural network that trains to 100% accuracy on these images, which cancels training upon hitting training accuracy of >.999.
Hint -- it will work best with 3 convolutional layers.
###Code
import tensorflow as tf
import os
import zipfile
DESIRED_ACCURACY = 0.999
!wget --no-check-certificate \
"https://storage.googleapis.com/laurencemoroney-blog.appspot.com/happy-or-sad.zip" \
-O "/tmp/happy-or-sad.zip"
zip_ref = zipfile.ZipFile("/tmp/happy-or-sad.zip", 'r')
zip_ref.extractall("/tmp/h-or-s")
zip_ref.close()
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if(logs.get('accuracy')>DESIRED_ACCURACY):
print("\nReached 99.9% accuracy so cancelling training!")
self.model.stop_training = True
callbacks = myCallback()
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(150, 150, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
from tensorflow.keras.optimizers import RMSprop
model.compile(loss='binary_crossentropy',
optimizer=RMSprop(lr=0.001),
metrics=['accuracy'])
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale=1/255)
train_generator = train_datagen.flow_from_directory(
"/tmp/h-or-s",
target_size=(150, 150),
batch_size=10,
class_mode='binary')
history = model.fit_generator(
train_generator,
steps_per_epoch=2,
epochs=15,
verbose=1,
callbacks=[callbacks])
import matplotlib.pyplot as plt
acc = history.history['accuracy']
loss = history.history['loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, loss, 'b', label='Training Loss')
plt.title('Training and validation accuracy')
plt.figure()
plt.show()
###Output
_____no_output_____ |
notebooks/ai/exercise3/exercise3.ipynb | ###Markdown
###Code
from tensorflow.keras.datasets import mnist #Load the mnist dataset
from tensorflow.keras.datasets import cifar10 #Load the cifar10 dataset
from tensorflow.keras.datasets import cifar100 #Load the cifar100 dataset
from tensorflow.keras.models import Sequential #Feed-forward network
#Basic layers for convolutional networks
from tensorflow.keras.layers import Dense, Conv2D, MaxPooling2D, Flatten, Dropout, BatchNormalization
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator # image handling
from tensorflow.keras.optimizers import Adam, Adadelta # optimizers
from tensorflow.keras import utils #Used for to_categorical
from tensorflow.keras.preprocessing import image #For displaying images
from google.colab import files #For uploading your own image
import numpy as np #Library for working with arrays
import matplotlib.pyplot as plt #For plotting graphs
from PIL import Image #For displaying images
import random #For generating random numbers
import math # For rounding
import os #For working with files
###Output
_____no_output_____
###Markdown
LIGHT Variant 2
###Code
#Load MNIST
(x_train, y_train), (x_test, y_test) = mnist.load_data()
#Convert y_train and y_test to one-hot encoding
y_train = utils.to_categorical(y_train, 10)
y_test = utils.to_categorical(y_test, 10)
#Change the shape of the MNIST data
#We need to add a trailing dimension of 1
#so the convolutional network knows this is grayscale data
x_train = x_train.reshape(x_train.shape[0], 28, 28, 1)
x_test = x_test.reshape(x_test.shape[0], 28, 28, 1)
print(x_test.shape)
def modelCreate(neuro_num=32,act_func='relu'):
model = Sequential()
model.add(BatchNormalization(input_shape=(28, 28, 1)))
model.add(Conv2D(neuro_num, (3, 3), padding='same', activation=act_func))
model.add(Conv2D(neuro_num, (3, 3), padding='same', activation=act_func))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.25))
model.add(Dense(10, activation='softmax'))
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
return model
def modelPlot(history):
    plt.plot(history.history['accuracy'],
             label='Accuracy on the training set')
    plt.plot(history.history['val_accuracy'],
             label='Accuracy on the validation set')
    plt.xlabel('Training epoch')
    plt.ylabel('Accuracy')
    plt.legend()
    plt.show()
# Results table
results = {'neiro_num': {2:[], 4:[], 16:[], 32:[]}, 'act_func': {'relu':[], 'linear': []}, 'batch': {10:[], 100:[], 128:[], 48000:[]}}
for n in results['neiro_num'].keys():
model = modelCreate(neuro_num=n)
batch_size = 128
history = model.fit(x_train, y_train, batch_size=batch_size, epochs=15, validation_data=(x_test, y_test), verbose=1)
modelPlot(history)
ac = round(float(str(history.history['accuracy'][-1])),4)
val = round(float(str(history.history['val_accuracy'][-1])),4)
results['neiro_num'][n].append((n,'relu',batch_size))
results['neiro_num'][n].append((ac,val))
for n in results['act_func'].keys():
model = modelCreate(act_func=n)
batch_size = 128
history = model.fit(x_train, y_train, batch_size=batch_size, epochs=15, validation_data=(x_test, y_test), verbose=1)
modelPlot(history)
ac = round(float(str(history.history['accuracy'][-1])),4)
val = round(float(str(history.history['val_accuracy'][-1])),4)
results['act_func'][n].append((32,n,batch_size))
results['act_func'][n].append((ac,val))
for n in results['batch'].keys():
model = modelCreate()
batch_size = n
try:
history = model.fit(x_train, y_train, batch_size=batch_size, epochs=15, validation_data=(x_test, y_test), verbose=1)
modelPlot(history)
ac = round(float(str(history.history['accuracy'][-1])),4)
val = round(float(str(history.history['val_accuracy'][-1])),4)
except:
ac = 0
val = 0
results['batch'][n].append((32,'relu',batch_size))
results['batch'][n].append((ac,val))
print(results)
###Output
{'neiro_num': {2: [(2, 'relu', 128), (0.9655, 0.9781)], 4: [(4, 'relu', 128), (0.9888, 0.9889)], 16: [(16, 'relu', 128), (0.9963, 0.9913)], 32: [(32, 'relu', 128), (0.9976, 0.9913)]}, 'act_func': {'relu': [(32, 'relu', 128), (0.9972, 0.9918)], 'linear': [(32, 'linear', 128), (0.9952, 0.9871)]}, 'batch': {10: [(32, 'relu', 10), (0.9972, 0.9931)], 100: [(32, 'relu', 100), (0.9973, 0.9912)], 128: [(32, 'relu', 128), (0.997, 0.9892)], 48000: []}}
###Markdown
PRO Variant 3
###Code
train_path = './cars' #Folder containing image folders sorted by category
batch_size = 20 #Batch size
img_width = 96 #Image width
img_height = 54 #Image height
#Image data generator
datagen = ImageDataGenerator(
    rescale=1. / 255, #Rescale colour values to fractions
    # rotation_range=10, #Rotate images when generating the sample
    width_shift_range=0.1, #Shift images horizontally when generating the sample
    height_shift_range=0.1, #Shift images vertically when generating the sample
    # zoom_range=0.1, #Zoom images when generating the sample
    # horizontal_flip=True, #Image mirroring disabled
    fill_mode='nearest', #How to fill pixels outside the input boundaries
    validation_split=0.2 #Split the images into training and validation subsets
)
# training subset
train_generator = datagen.flow_from_directory(
    train_path, #Path to the full dataset
    target_size=(img_height, img_width), #Image size
    batch_size=batch_size, #Batch size
    class_mode='categorical', #Categorical labels: the images are split into classes by car make
    shuffle=True, #Shuffle the data
    subset='training' # use as the training set
)
# validation subset
validation_generator = datagen.flow_from_directory(
    train_path, #Path to the full dataset
    target_size=(img_height, img_width), #Image size
    batch_size=batch_size, #Batch size
    class_mode='categorical', #Categorical labels: the images are split into classes by car make
    shuffle=True, #Shuffle the data
    subset='validation' # use as the validation set
)
#Create a sequential model
model = Sequential()
model.add(BatchNormalization(input_shape=(img_height, img_width, 3)))
#First convolutional layer
model.add(Conv2D(1024, (3, 3), padding='same', activation='relu'))
model.add(Dropout(0.2))
model.add(Conv2D(1024, (3, 3), padding='same', activation='relu'))
model.add(Dropout(0.2))
model.add(MaxPooling2D(pool_size=(3, 3)))
#Third convolutional layer
model.add(Conv2D(512, (3, 3), padding='same', activation='relu'))
model.add(Dropout(0.2))
model.add(Conv2D(512, (3, 3), padding='same', activation='relu'))
model.add(Dropout(0.2))
model.add(MaxPooling2D(pool_size=(3, 3)))
#Fifth convolutional layer
model.add(Conv2D(256, (3, 3), padding='same', activation='relu'))
model.add(Dropout(0.2))
model.add(Conv2D(256, (3, 3), padding='same', activation='relu'))
model.add(Dropout(0.2))
model.add(MaxPooling2D(pool_size=(3, 3)))
#Layer that flattens 2D data into 1D
model.add(Flatten())
#Fully connected layer
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.2))
#Fully connected layer
model.add(Dense(2048, activation='relu'))
model.add(Dropout(0.2))
#Output fully connected layer
model.add(Dense(len(train_generator.class_indices), activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=0.00001), metrics=['accuracy'])
history = model.fit_generator(
train_generator,
steps_per_epoch = train_generator.samples // batch_size,
validation_data = validation_generator,
validation_steps = validation_generator.samples // batch_size,
epochs=25,
verbose=1
)
#Display the training accuracy graph
plt.plot(history.history['accuracy'],
         label='Accuracy on the training set')
plt.plot(history.history['val_accuracy'],
         label='Accuracy on the validation set')
plt.xlabel('Training epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____ |
09_NLP/02_Sentiment_Analyisis_Series/.ipynb_checkpoints/01_Simple_Sentiment_Analysis-checkpoint.ipynb | ###Markdown
1. Simple Sentiment Analysis using the `IMDB` dataset.
* In this notebook we are going to predict whether a movie review is positive or negative using the `IMDB` dataset.
Preparing Data
We are going to use `TorchText`'s ``Field``, which defines how your data should be processed.
We use the ``TEXT`` field to define how the review should be processed, and the ``LABEL`` field to process the sentiment.
Our ``TEXT`` field has ``tokenize='spacy'`` as an argument. This specifies that the "tokenization" (the act of splitting the string into discrete "tokens") should be done using the ``spaCy`` tokenizer. If no tokenize argument is passed, the default is simply splitting the string on spaces. We also need to specify a ``tokenizer_language``, which tells torchtext which spaCy model to use. We use ``en_core_web_sm``.
**Downloading `en_core_web_sm`:**
```
python -m spacy download en_core_web_sm
```
``LABEL`` is defined by a ``LabelField``, a special subset of the ``Field`` class specifically used for handling labels.
###Code
import en_core_web_sm
import torch
from torchtext.legacy import data
SEED = 1234
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
device = torch.device("cuda" if torch.cuda.is_available() else 'cpu')
device
TEXT = data.Field(tokenize = 'spacy',
tokenizer_language = 'en_core_web_sm')
LABEL = data.LabelField(dtype = torch.float)
TEXT, LABEL
###Output
_____no_output_____
###Markdown
Downloading the `IMDB` dataset.
Another handy feature of ``TorchText`` is that it has support for common datasets used in natural language processing (NLP).
###Code
from torchtext.legacy import datasets
train_data, test_data = datasets.IMDB.splits(TEXT, LABEL)
###Output
aclImdb_v1.tar.gz: 0%| | 0.00/84.1M [00:00<?, ?B/s]
###Markdown
Checking the data structure.
###Code
print(f"TRAINING EXAMPLES: \t {len(train_data)}\nTEST EXAMPLES: \t {len(test_data)}\nTOTAL EXAMPLES: \t {len(train_data) + len(test_data)}")
###Output
TRAINING EXAMPLES: 25000
TEST EXAMPLES: 25000
TOTAL EXAMPLES: 50000
###Markdown
Checking one example.
###Code
vars(train_data.examples[0])
###Output
_____no_output_____
###Markdown
Creating the validation data.By default the `IMDB` dataset only has two splits, the training and test sets, but we also need a validation set. We are going to use the `.split()` method on the train data.1. `.split()` method.This method splits the dataset into a ratio of ``70%`` training and ``30%`` validation.* We can change this by specifying the keyword arg `split_ratio = 0.8`, which means ``80%`` of the data will belong to the training set and the rest to the validation set (sketched below).
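For example, a sketch (not executed in this notebook) of the non-default split:
```python
# keep 80% of the examples for training and 20% for validation
train_data, val_data = train_data.split(split_ratio=0.8)
```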
###Code
from random import seed
train_data, val_data = train_data.split(random_state=seed(SEED))
###Output
_____no_output_____
###Markdown
Let's check how many examples we have now.
###Code
print(f"TRAINING EXAMPLES: \t {len(train_data)}\nVALIDATION EXAMPLES: \t {len(val_data)}\nTEST EXAMPLES: \t {len(test_data)}\nTOTAL EXAMPLES: \t {len(train_data) + len(test_data) + len(val_data)}")
###Output
TRAINING EXAMPLES: 17500
VALIDATION EXAMPLES: 7500
TEST EXAMPLES: 25000
TOTAL EXAMPLES: 50000
###Markdown
$B$uilding a $V$ocabulary.A vocabulary is effectively a lookup table where every unique word in your data set has a corresponding index (an integer).The reason we create a vocabulary is that our machine learning models cannot operate on string data.Each index is used to construct a one-hot vector for each word. A one-hot vector is a vector where all of the elements are 0, except one, which is 1, and the dimensionality is the total number of unique words in your vocabulary, commonly denoted by $V$.The number of unique words in our training set is over 100,000, which means that our one-hot vectors will have over 100,000 dimensions, which will make training slower.There are two ways to effectively cut down our vocabulary: we can either only take the top $n$ most common words or ignore words that appear less than $m$ times. We'll do the former, only keeping the top 25,000 words.What do we do with words that appear in examples but have been cut from the vocabulary? We replace them with a special unknown or ``<unk>`` token. For example, if the sentence was "This film is great and I love it" but the word "love" was not in the vocabulary, it would become "This film is great and I ``<unk>`` it".Let's build a Vocabulary.
###Code
MAX_VOCAB_SIZE = 25_000
TEXT.build_vocab(train_data, max_size = MAX_VOCAB_SIZE)
LABEL.build_vocab(train_data)
###Output
_____no_output_____
###Markdown
**Why build the vocabulary on the ``train set`` only?*** A machine learning system must not look at the ``test data`` in any way.* We want the ``validation data`` to represent the test data as much as possible.
###Code
print(f"Unique words in the: {len(TEXT.vocab)}")
print(f"Unique labels in the: {len(LABEL.vocab)}")
###Output
Unique words in the: 25002
Unique labels in the: 2
###Markdown
But wait, didn't we say our vocabulary size is $25,000$? What is going on with the extra $2$ tokens? Where did they come from?Calm down, the two additions to our vocabulary are ``<unk>`` for unknown words and ``<pad>`` for padding sequences. But why?When we feed sentences into our model, we feed a batch of them at a time, i.e. more than one at a time, and all sentences in the batch need to be the same size. Shorter sentences are therefore padded with ``<pad>`` tokens up to the length of the longest sentence in the batch (for example, ["I", "love", "it", "<pad>", "<pad>"] batched alongside ["This", "film", "is", "great", "."]). Most common words.The most common 10 words and their frequencies.
###Code
TEXT.vocab.freqs.most_common(10)
###Output
_____no_output_____
###Markdown
The vocabulary.We can also see the vocabulary directly by using either the stoi (**s**tring **t**o **i**nt) or itos (**i**nt **t**o **s**tring) method, for both the text and the labels
###Code
print(TEXT.vocab.itos)
print(TEXT.vocab.stoi)
print(LABEL.vocab.stoi)
print(LABEL.vocab.itos)
###Output
defaultdict(None, {'neg': 0, 'pos': 1})
['neg', 'pos']
###Markdown
Creating Iterators - `BucketIterator`The final step of preparing the data is creating the iterators. We iterate over these in the training/evaluation loop, and they return a batch of examples (indexed and converted into tensors) at each iteration.We'll use a BucketIterator which is a special type of iterator that will return a batch of examples where each example is of a similar length, minimizing the amount of padding per example.
###Code
BATCH_SIZE = 64
train_iterator, test_iterator, validation_iterator = data.BucketIterator.splits(
(train_data, test_data, val_data),
batch_size = BATCH_SIZE,
device=device
)
for X in train_iterator:
break
print(X)
###Output
[torchtext.legacy.data.batch.Batch of size 64]
[.text]:[torch.cuda.LongTensor of size 1205x64 (GPU 0)]
[.label]:[torch.cuda.FloatTensor of size 64 (GPU 0)]
###Markdown
Creating a model.The next stage is building the model that we'll eventually train and evaluate.1. **The embedding layer.**The embedding layer is used to transform our sparse ``one-hot vector`` (sparse as most of the elements are 0) into a dense embedding vector (dense as the dimensionality is a lot smaller and all the elements are real numbers). This embedding layer is simply a single fully connected layer. As well as reducing the dimensionality of the input to the ``RNN``, there is the theory that words which have similar impact on the sentiment of the review are mapped close together in this dense vector space.The RNN layer is our RNN which takes in our dense vector and the previous hidden state $h_{t-1}$, which it uses to calculate the next hidden state, $h_t$.Finally, the linear layer takes the final hidden state and feeds it through a fully connected layer, $f(h_T)$, transforming it to the correct output dimension.The forward method is called when we feed examples into our model.Each batch, text, is a tensor of size **[sentence length, batch size]**. That is a batch of sentences, each having each word converted into a one-hot vector.You may notice that this tensor should have another dimension due to the one-hot vectors, however PyTorch conveniently stores a one-hot vector as its index value, i.e. the tensor representing a sentence is just a tensor of the indexes for each token in that sentence. The act of converting a list of tokens into a list of indexes is commonly called **numericalizing**.The input batch is then passed through the embedding layer to get embedded, which gives us a dense vector representation of our sentences. embedded is a tensor of size **[sentence length, batch size, embedding dim]**.embedded is then fed into the **RNN**. In some frameworks you must feed the initial hidden state, $h_0$, into the RNN, however in PyTorch, if no initial hidden state is passed as an argument it defaults to a tensor of all zeros.The **RNN** returns 2 tensors, output of size **[sentence length, batch size, hidden dim]** and hidden of size **[1, batch size, hidden dim]**. output is the concatenation of the hidden state from every time step, whereas hidden is simply the final hidden state. We verify this using the assert statement. Note the squeeze method, which is used to remove a dimension of size 1.Finally, we feed the last hidden state, hidden, through the linear layer, fc, to produce a prediction.
###Code
from torch import nn
from torch.nn import functional as F
class RNN(nn.Module):
def __init__(self,input_size, hidden_size, embedding_size, num_layers, output_size):
super().__init__()
self.emb = nn.Embedding(input_size, embedding_dim=embedding_size)
self.rnn = nn.RNN(embedding_size, hidden_size=hidden_size, num_layers=num_layers)
self.fc = nn.Linear(hidden_size, out_features=output_size)
def forward(self, x):
# x = [sent len, batch size]
embedded = self.emb(x)
#embedded = [sent len, batch size, emb dim]
output, hidden = self.rnn(embedded)
#output = [sent len, batch size, hid dim]
#hidden = [1, batch size, hid dim]
assert torch.equal(output[-1,:,:], hidden.squeeze(0))
return self.fc(output[-1,:,:])
###Output
_____no_output_____
###Markdown
We now create an instance of our RNN class.The input size is the dimension of the one-hot vectors, which is equal to the vocabulary size.The embedding size is the size of the dense word vectors. This is usually around ``50-250`` dimensions, but depends on the size of the vocabulary.The ``hidden size`` is the size of the hidden states. This is usually around ``100-500`` dimensions, but also depends on factors such as on the vocabulary size, the size of the dense vectors and the complexity of the task.The ``output size`` is usually the number of classes, however in the case of only 2 classes the output value is between 0 and 1 and thus can be 1-dimensional, i.e. a single scalar real number.
###Code
INPUT_SIZE = len(TEXT.vocab)
EMBEDDING_SIZE = 100
HIDDEN_SIZE = 256
OUTPUT_SIZE = 1
NUM_LAYERS = 1
model = RNN(INPUT_SIZE, HIDDEN_SIZE, EMBEDDING_SIZE, NUM_LAYERS, OUTPUT_SIZE)
model
###Output
_____no_output_____
###Markdown
A function that tells us how many trainable parameters we have in the model.
###Code
def count_trainable_params(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad == True)
print(f'The model has {count_trainable_params(model):,} trainable parameters')
###Output
The model has 2,592,105 trainable parameters
###Markdown
Training the model.We are going to use the `SGD` as our optimizer and `BCEWithLogitsLoss` as our loss.* The reason we are using this loss is that we don't have an activation function on our last layer [more](https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html).* This loss combines a Sigmoid layer and the BCELoss in one single class. This version is more numerically stable than using a plain Sigmoid followed by a BCELoss as, by combining the operations into one layer, we take advantage of the log-sum-exp trick for numerical stability.
###Code
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
criterion = nn.BCEWithLogitsLoss()
###Output
_____no_output_____
###Markdown
Pushing the model and loss function to the device
###Code
model = model.to(device)
criterion = criterion.to(device)
###Output
_____no_output_____
###Markdown
$L$oss and $A$ccuracy.Our criterion function calculates the loss, however we have to write our function to calculate the accuracy.This function first feeds the predictions through a sigmoid layer, squashing the values between 0 and 1, we then round them to the nearest integer. This rounds any value greater than 0.5 to 1 (a positive sentiment) and the rest to 0 (a negative sentiment).We then calculate how many rounded predictions equal the actual labels and average it across the batch.
###Code
def accuracy(y_preds, y_true):
#round predictions to the closest integer
rounded_preds = torch.round(torch.sigmoid(y_preds))
correct = (rounded_preds == y_true).float() #convert into float for division
acc = correct.sum() / len(correct)
return acc
###Output
_____no_output_____
###Markdown
The train function iterates over all examples, one batch at a time.**model.train()** is used to put the model in "training mode", which turns on ``dropout`` and ``batch normalization``. Although we aren't using them in this model, it's good practice to include it.For each batch, we first ``zero the gradients``. Each parameter in a model has a grad attribute which stores the gradient calculated by the criterion. PyTorch does not automatically remove (or "zero") the gradients calculated from the last gradient calculation, so they must be manually zeroed.We then feed the batch of sentences, ``batch.text``, into the model. Note, you do not need to do ``model.forward(batch.text)``, simply calling the model works. The squeeze is needed as the predictions are initially size ``[batch size, 1]``, and we need to remove the dimension of size ``1`` as PyTorch expects the predictions input to our criterion function to be of size ``[batch size]``.The loss and accuracy are then calculated using our predictions and the labels, batch.label, with the loss being averaged over all examples in the batch.We calculate the gradient of each parameter with loss.``backward()``, and then update the parameters using the gradients and optimizer algorithm with ``optimizer.step().``The loss and accuracy are accumulated across the epoch, and the ``.item()`` method is used to extract a scalar from a tensor which only contains a single value.Finally, we return the loss and accuracy, averaged across the epoch. The len of an iterator is the number of batches in the iterator.You may recall when initializing the LABEL field, we set ``dtype=torch.float``. This is because TorchText sets tensors to be LongTensors by default, however our criterion expects both inputs to be ``FloatTensors``. Setting the dtype to be torch.float did this for us. The alternative method of doing this would be to do the conversion inside the train function by passing ``batch.label.float()`` instead of ``batch.label`` to the criterion.
###Code
def train(model, iterator, optimizer, criterion):
epoch_loss = 0
epoch_acc = 0
model.train()
for batch in iterator:
optimizer.zero_grad()
predictions = model(batch.text).squeeze(1)
loss = criterion(predictions, batch.label)
acc = accuracy(predictions, batch.label)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
###Output
_____no_output_____
###Markdown
``evaluate`` is similar to train, with a few modifications as you don't want to update the parameters when evaluating.``model.eval()`` puts the model in "evaluation mode", this turns off ``dropout`` and ``batch normalization``. Again, we are not using them in this model, but it is good practice to include them.No gradients are calculated on PyTorch operations inside the ``with no_grad()`` block. This causes less memory to be used and speeds up computation.The rest of the function is the same as train, with the removal of ``optimizer.zero_grad(),`` ``loss.backward()`` and ``optimizer.step()``, as we do not update the model's parameters when evaluating.
###Code
def evaluate(model, iterator, criterion):
epoch_loss = 0
epoch_acc = 0
model.eval()
with torch.no_grad():
for batch in iterator:
predictions = model(batch.text).squeeze(1)
loss = criterion(predictions, batch.label)
acc = accuracy(predictions, batch.label)
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
###Output
_____no_output_____
###Markdown
We'll also create a function to tell us how long an epoch takes to compare training times between models.
###Code
import time
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
###Output
_____no_output_____
###Markdown
We then train the model through multiple epochs, an epoch being a complete pass through all examples in the training and validation sets.At each epoch, if the validation loss is the best we have seen so far, we'll save the parameters of the model and then after training has finished we'll use that model on the test set.
###Code
N_EPOCHS = 5
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
valid_loss, valid_acc = evaluate(model, validation_iterator, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'tut1-model.pt')
print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}%')
###Output
Epoch: 01 | Epoch Time: 0m 15s
Train Loss: 0.693 | Train Acc: 50.22%
Val. Loss: 0.694 | Val. Acc: 50.42%
Epoch: 02 | Epoch Time: 0m 15s
Train Loss: 0.693 | Train Acc: 50.37%
Val. Loss: 0.695 | Val. Acc: 49.83%
Epoch: 03 | Epoch Time: 0m 15s
Train Loss: 0.693 | Train Acc: 50.03%
Val. Loss: 0.694 | Val. Acc: 50.46%
Epoch: 04 | Epoch Time: 0m 15s
Train Loss: 0.693 | Train Acc: 50.29%
Val. Loss: 0.695 | Val. Acc: 50.24%
Epoch: 05 | Epoch Time: 0m 15s
Train Loss: 0.693 | Train Acc: 50.19%
Val. Loss: 0.694 | Val. Acc: 50.75%
###Markdown
You may have noticed the loss is not really decreasing and the accuracy is poor. This is due to several issues with the model which we'll improve in the next notebook.Finally, the metric we actually care about, the test loss and accuracy, which we get from our parameters that gave us the best validation loss.
###Code
model.load_state_dict(torch.load('tut1-model.pt'))
test_loss, test_acc = evaluate(model, test_iterator, criterion)
print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%')
###Output
Test Loss: 0.687 | Test Acc: 58.45%
|
exercises/day6/homework_06.ipynb | ###Markdown
Read the arabica data set into a dataframeYou will notice an "Unnamed: 0" column. Where does it come from, how can one get "rid" of it?
###Code
import numpy as np
import pandas as pd
import plotly.express as px
from pathlib import Path

arabica = pd.read_csv(Path("../../data/arabica_data_cleaned.csv"))
arabica.head()
arabica.keys()[0]
arabica = arabica.set_index('Unnamed: 0')
arabica.index.name = None
arabica.head()
###Output
_____no_output_____
###Markdown
Drop some categorical columnsThe dataset is rich in meta data, however we just want to keep "Country.of.Origin", "Producer", "Processing.Method". How to drop the rest?While you're at it rename the three columns we want to keep so they do not have dots but spaces in their names.
###Code
arabica = arabica.convert_dtypes()
mask_string_columns = arabica.dtypes == "string"
mask_string_columns["Country.of.Origin", "Producer", "Processing.Method"] = False
columns_to_drop = mask_string_columns.index[mask_string_columns==True]
arabica.drop(columns=columns_to_drop, inplace=True)
# arabica.rename(columns={"Country.of.Origin":"Country of Origin",
# "Processing.Method": "Processing Method",
# "Number.of.Bags":"Number of Bags"},
# inplace=True)
arabica.rename(columns = {col:col.replace('.', ' ') for col in arabica.columns}, inplace=True)
arabica.rename(columns = {col:col.replace('_', ' ') for col in arabica.columns}, inplace=True)
arabica.head()
###Output
_____no_output_____
###Markdown
Clean the data setThe dataset does not seem to be as clean as the filename might suggest. How can you get a quick overview over the data and identify which columns have extreme outliers? If you cannot do it using pd commands, try to plot (see next question)!
###Code
arabica.describe()
arabica.info()
arabica.dropna(axis=0, inplace=True)
arabica.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 924 entries, 1 to 1310
Data columns (total 22 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Country of Origin 924 non-null string
1 Producer 924 non-null string
2 Number of Bags 924 non-null Int64
3 Processing Method 924 non-null string
4 Aroma 924 non-null Float64
5 Flavor 924 non-null Float64
6 Aftertaste 924 non-null Float64
7 Acidity 924 non-null Float64
8 Body 924 non-null Float64
9 Balance 924 non-null Float64
10 Uniformity 924 non-null Float64
11 Clean Cup 924 non-null Float64
12 Sweetness 924 non-null Float64
13 Cupper Points 924 non-null Float64
14 Total Cup Points 924 non-null Float64
15 Moisture 924 non-null Float64
16 Category One Defects 924 non-null Int64
17 Quakers 924 non-null Int64
18 Category Two Defects 924 non-null Int64
19 altitude low meters 924 non-null Float64
20 altitude high meters 924 non-null Float64
21 altitude mean meters 924 non-null Float64
dtypes: Float64(15), Int64(4), string(3)
memory usage: 183.2 KB
###Markdown
Plot Plot a plotly histogram for each of the remaining columns. Can you write a loop?
###Code
for column_name in arabica.columns:
fig = px.histogram(arabica, x=column_name, title='Arabica: {}'.format(column_name))
fig.show()
###Output
_____no_output_____
###Markdown
cleaning outliers* define heuristics that classify outliers for each column* set outliers to median
###Code
quantile_5 = arabica.quantile(q=0.05, axis='index')
quantile_95 = arabica.quantile(q=0.95, axis='index')
for column_name in quantile_5.index:
index_mask_5 = arabica[column_name] < quantile_5[column_name]
index_mask_95 = arabica[column_name] > quantile_95[column_name]
index_mask = index_mask_5 + index_mask_95
arabica.loc[index_mask, column_name] = np.median(arabica[column_name])
fig = px.histogram(arabica, x='Category Two Defects', title='Arabica: {}'.format('Category Two Defects'))
fig.show()
###Output
_____no_output_____
###Markdown
Identify * Which countries have more than 10 and less than 30 entries? * Which is the producer with the most entries? * What is the most common and least common "Processing Method"
###Code
countries_more_than_10 = arabica["Country of Origin"].value_counts() > 10
countries_less_than_30 = arabica["Country of Origin"].value_counts() < 30
wanted_countries = countries_less_than_30 * countries_more_than_10
wanted_countries.index[wanted_countries].to_list()
arabica["Producer"].value_counts().idxmax()
most_common_processing_meth = arabica["Processing Method"].value_counts().idxmax()
least_common_processing_meth = arabica["Processing Method"].value_counts().idxmin()
most_common_processing_meth, least_common_processing_meth
###Output
_____no_output_____ |
02_Measure_morphometric_characters.ipynb | ###Markdown
Measure morphometric charactersComputational notebook 02 for Climate adaptation plans in the context of coastal settlements: the case of Portugal.Date: 27/06/2020---This notebook generates additional morphometric elements (morphological tessellation and tessellation-based blocks) and measures primary morphometric characters using `momepy v0.1.1`.The network data obtained using `01_Retrieve_network_data.ipynb` were manually cleaned in the meantime to represent topologically correct morphological network. Moreover, the layer `name_case` containing a single polygon representing case study area for each case was manually created based on street network (captures blocks with any buildings). Structure of GeoPackages:```./data/ atlantic.gpkg name_blg - Polygon layers name_str - LineString layers name_case - Polygon layers ... preatl.gpkg name_blg name_str name_case ... premed.gpkg name_blg name_str name_case ... med.gpkg name_blg name_str name_case ...```CRS of the original data is EPSG:3763.```Name: ETRS89 / Portugal TM06Axis Info [cartesian]:- X[east]: Easting (metre)- Y[north]: Northing (metre)Area of Use:- name: Portugal - mainland - onshore- bounds: (-9.56, 36.95, -6.19, 42.16)Coordinate Operation:- name: Portugual TM06- method: Transverse MercatorDatum: European Terrestrial Reference System 1989- Ellipsoid: GRS 1980- Prime Meridian: Greenwich```
###Code
import fiona
import geopandas as gpd
import momepy as mm
import libpysal
import numpy as np
fiona.__version__, gpd.__version__, mm.__version__, libpysal.__version__, np.__version__
folder = 'data/'
parts = ['atlantic', 'preatl', 'premed', 'med']
# Iterate through parts and layers
for part in parts:
path = folder + part + '.gpkg'
layers = [x for x in fiona.listlayers(path) if 'blg' in x]
for l in layers:
print(l)
buildings = gpd.read_file(path, layer=l)
buildings = buildings.explode().reset_index(drop=True) # avoid MultiPolygons
buildings['uID'] = mm.unique_id(buildings)
try:
buildings = buildings.drop(columns=['Buildings', 'id'])
except:
buildings = buildings[['uID', 'geometry']]
# Generate morphological tessellation
limit = gpd.read_file(path, layer=l[:-3] + 'case').geometry[0]
tess = mm.Tessellation(buildings, 'uID', limit=limit)
tessellation = tess.tessellation
# Measure individual characters
buildings['sdbAre'] = mm.Area(buildings).series
buildings['sdbPer'] = mm.Perimeter(buildings).series
buildings['ssbCCo'] = mm.CircularCompactness(buildings).series
buildings['ssbCor'] = mm.Corners(buildings).series
buildings['ssbSqu'] = mm.Squareness(buildings).series
buildings['ssbERI'] = mm.EquivalentRectangularIndex(buildings).series
buildings['ssbElo'] = mm.Elongation(buildings).series
buildings['ssbCCD'] = mm.CentroidCorners(buildings).mean
buildings['stbCeA'] = mm.CellAlignment(buildings, tessellation,
mm.Orientation(buildings).series,
mm.Orientation(tessellation).series, 'uID', 'uID').series
buildings['mtbSWR'] = mm.SharedWallsRatio(buildings, 'uID').series
blg_sw1 = mm.sw_high(k=1, gdf=tessellation, ids='uID')
buildings['mtbAli'] = mm.Alignment(buildings, blg_sw1, 'uID', mm.Orientation(buildings).series).series
buildings['mtbNDi'] = mm.NeighborDistance(buildings, blg_sw1, 'uID').series
tessellation['sdcLAL'] = mm.LongestAxisLength(tessellation).series
tessellation['sdcAre'] = mm.Area(tessellation).series
tessellation['sscERI'] = mm.EquivalentRectangularIndex(tessellation).series
tessellation['sicCAR'] = mm.AreaRatio(tessellation, buildings, 'sdcAre', 'sdbAre', 'uID').series
buildings['ldbPWL'] = mm.PerimeterWall(buildings).series
edges = gpd.read_file(path, layer=l[:-3] + 'str')
edges = edges.loc[~(edges.geom_type != "LineString")].explode().reset_index(drop=True)
edges = mm.network_false_nodes(edges)
edges['nID'] = mm.unique_id(edges)
buildings['nID'] = mm.get_network_id(buildings, edges, 'nID', min_size=100)
# merge and drop unlinked
tessellation = tessellation.drop(columns='nID').merge(buildings[['uID', 'nID']], on='uID')
tessellation = tessellation[~tessellation.isna().any(axis=1)]
buildings = buildings[~buildings.isna().any(axis=1)]
buildings['stbSAl'] = mm.StreetAlignment(buildings, edges, mm.Orientation(buildings).series, network_id='nID').series
tessellation['stcSAl'] = mm.StreetAlignment(tessellation, edges, mm.Orientation(tessellation).series, network_id='nID').series
edges['sdsLen'] = mm.Perimeter(edges).series
edges['sssLin'] = mm.Linearity(edges).series
profile = mm.StreetProfile(edges, buildings, distance=3)
edges['sdsSPW'] = profile.w
edges['stsOpe'] = profile.o
edges['svsSDe'] = profile.wd
edges['sdsAre'] = mm.Reached(edges, tessellation, 'nID', 'nID', mode='sum').series
edges['sdsBAr'] = mm.Reached(edges, buildings, 'nID', 'nID', mode='sum').series
edges['sisBpM'] = mm.Count(edges, buildings, 'nID', 'nID', weighted=True).series
regimes = np.ones(len(buildings))
block_w = libpysal.weights.block_weights(regimes, ids=buildings.uID.values)
buildings['ltcBuA'] = mm.BuildingAdjacency(buildings, block_w, 'uID').series
G = mm.gdf_to_nx(edges)
G = mm.meshedness(G, radius=5, name='meshedness')
mm.mean_nodes(G, 'meshedness')
edges = mm.nx_to_gdf(G, points=False)
if 'bID' in buildings.columns:
buildings = buildings.drop(columns='bID')
# Generate blocks
gen_blocks = mm.Blocks(tessellation, edges, buildings, 'bID', 'uID')
blocks = gen_blocks.blocks
buildings['bID'] = gen_blocks.buildings_id
tessellation['bID'] = gen_blocks.tessellation_id
blocks['ldkAre'] = mm.Area(blocks).series
blocks['lskElo'] = mm.Elongation(blocks).series
blocks['likGra'] = mm.Count(blocks, buildings, 'bID', 'bID', weighted=True).series
# Save to file
buildings.to_file(path, layer=l, driver='GPKG')
tessellation.to_file(path, layer=l[:-3] + 'tess', driver='GPKG')
edges.to_file(path, layer=l[:-3] + 'str', driver='GPKG')
blocks.to_file(path, layer=l[:-3] + 'blocks', driver='GPKG')
###Output
_____no_output_____ |
notebooks/ch1_pca_relative_value.ipynb | ###Markdown
Yield Curve PCAThere are three basic movements in a yield curve: 1. level or a parallel shift; 2. slope, i.e., a flattening or steepening; and 3. curvature, i.e., hump or butterfly.PCA formalizes this viewpoint.PCA can be applied to:1. trade screening and construction;2. risk assessment and return attribution;3. scenarios analysis;4. curve-neutral hedge.* Accompanying notebook for [Chapter One](https://letianzj.gitbook.io/systematic-investing/products_and_methodologies/fixed_income)* comments are placed below the cell. 1. Data preparation
###Code
%matplotlib inline
import os
import io
import time
from datetime import date, datetime, timedelta
import pandas as pd
import numpy as np
import scipy
import pandas_datareader.data as pdr
from pandas_datareader.fred import FredReader
import matplotlib.pyplot as plt
import seaborn as sns
# download CMT treasury curves from Fred
codes = ['DGS1MO', 'DGS3MO', 'DGS6MO', 'DGS1', 'DGS2', 'DGS3', 'DGS5', 'DGS7', 'DGS10', 'DGS20', 'DGS30']
start_date = datetime(2000, 1, 1)
# end_date = datetime.today()
end_date = datetime(2020,12,31)
df = pd.DataFrame()
for code in codes:
reader = FredReader(code, start_date, end_date)
df0 = reader.read()
df = df.merge(df0, how='outer', left_index=True, right_index=True, sort=False)
reader.close()
df.dropna(axis = 0, inplace = True)
df = df['2006':]
df.tail(5)
# view the yield curve
plt.figure(figsize=(15,8))
plt.plot(df)
plt.show()
# correlation among tenors
# sns.pairplot(df)
df_weekly = df.resample("W").last()
df_weekly.tail()
df_weekly_centered = df_weekly.sub(df_weekly.mean())
df_weekly_diff = df_weekly.diff()
df_weekly_diff.dropna(inplace=True)
df_weekly_diff_centered = df_weekly_diff.sub(df_weekly_diff.mean())
df_weekly.shape, df_weekly_diff.shape
# covariance
df_weekly_diff.cov()
# correlation
df_weekly_diff.corr()
###Output
_____no_output_____
###Markdown
Correlation looks reasonable. The further apart two tenors are, the lower their correlation. 2. Fit PCA
###Code
# PCA fit
from sklearn.decomposition import PCA
pca_level = PCA().fit(df_weekly) # call fit or fit_transform
pca_change = PCA().fit(df_weekly_diff)
###Output
_____no_output_____
###Markdown
Level is used to find the trading signals; change is used to find weights (hedge ratios).
###Code
print(pca_change.explained_variance_) # eigenvalues
print(pca_change.explained_variance_ratio_) # normalized eigenvalues (sum to 1)
print(np.cumsum(pca_change.explained_variance_ratio_))
plt.plot(pca_change.explained_variance_ratio_.cumsum())
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance')
###Output
_____no_output_____
###Markdown
The first three PCs explain 93.59% of the total variance. This is slightly lower than some published papers where the number is above 95%.
###Code
df_pca_level = pca_level.transform(df_weekly) # T or PCs
df_pca_level = pd.DataFrame(df_pca_level, columns=[f'PCA_{x+1}' for x in range(df_pca_level.shape[1])]) # np.array to dataframe
df_pca_level.index = df_weekly.index
plt.figure(figsize=(15,8))
plt.plot(df_pca_level['PCA_1'], label='first component')
plt.plot(df_pca_level['PCA_2'], label='second component')
plt.plot(df_pca_level['PCA_3'], label='third component')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The first PC is at its lower bound; second PC is bouncing back; third PC is trending towards its upper bound.
###Code
df_pca_change = pca_change.transform(df_weekly_diff) # T or PCs
df_pca_change = pd.DataFrame(df_pca_change, columns=[f'PCA_{x+1}' for x in range(df_pca_change.shape[1])]) # np.array to dataframe
df_pca_change.index = df_weekly_diff.index
plt.figure(figsize=(15,8))
plt.plot(df_pca_change['PCA_1'], label='first component')
plt.plot(df_pca_change['PCA_2'], label='second component')
plt.plot(df_pca_change['PCA_3'], label='third component')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
On average, the first PC has the largest weekly changes; the second PC has the largest spike in late 2007. The third PC changes are relatively smaller. This is in line with the fact that first PC explains the highest variation.
###Code
print(pca_change.singular_values_.shape) # SVD singular values of sigma
print(pca_change.get_covariance().shape) # covariance
print(pca_change.components_.shape) # p*p, W^T
###Output
(11,)
(11, 11)
(11, 11)
###Markdown
SVD has p singular values; the covariance matrix is pxp. $W^T$ is pca.components_, which is pxp
###Code
print(pca_level.components_.T[:5, :5])
print(pca_change.components_.T[:5, :5])
###Output
[[ 0.36023252 -0.25655937 0.39913189 -0.56723613 0.01482805]
[ 0.36796015 -0.24666686 0.29225917 -0.13972673 -0.01300978]
[ 0.374633 -0.22942481 0.16101634 0.26181778 0.13081354]
[ 0.36507482 -0.19841834 -0.02053929 0.45741015 0.04709652]
[ 0.33890817 -0.10331612 -0.29148257 0.31022718 -0.21836308]]
[[-0.10250026 -0.75712007 -0.37420159 -0.4478502 0.20599242]
[-0.11200385 -0.47010396 -0.00867281 0.34676659 -0.4662093 ]
[-0.13449477 -0.28349481 0.20771456 0.51845356 -0.11134675]
[-0.17040677 -0.21223415 0.29967192 0.3449688 0.28172845]
[-0.26827879 -0.06440793 0.43679404 -0.12325569 0.40230782]]
###Markdown
Usually PCA on level and PCA on change give different results/weights.
###Code
print(df_pca_change.iloc[:5,:5]) # df_pca: T = centered(X) * W
print(np.matmul(df_weekly_diff_centered, pca_change.components_.T).iloc[:5, :5]) # XW
###Output
PCA_1 PCA_2 PCA_3 PCA_4 PCA_5
DATE
2006-01-15 0.037659 -0.145471 -0.003422 0.042814 -0.042772
2006-01-22 -0.071417 0.107283 0.099630 0.122137 -0.022446
2006-01-29 -0.453613 -0.196927 -0.054567 -0.039250 0.059868
2006-02-05 -0.106102 -0.164901 0.114008 -0.047069 0.007816
2006-02-12 -0.196248 -0.097897 0.140442 -0.034231 -0.024348
DGS1MO DGS3MO DGS6MO DGS1 DGS2
DATE
2006-01-15 0.037659 -0.145471 -0.003422 0.042814 -0.042772
2006-01-22 -0.071417 0.107283 0.099630 0.122137 -0.022446
2006-01-29 -0.453613 -0.196927 -0.054567 -0.039250 0.059868
2006-02-05 -0.106102 -0.164901 0.114008 -0.047069 0.007816
2006-02-12 -0.196248 -0.097897 0.140442 -0.034231 -0.024348
###Markdown
The transform() output is T, or the first dataframe. Each volume is an eigenvector of covariance matrix $X^TX$.The second dataframe should match the first, or $T=XW$. Here the input data X is centered but not scaled before applying SVD. W is pca.components_.T
###Code
np.matmul(pca_change.components_, pca_change.components_.T)[1,1], np.matmul(pca_change.components_.T, pca_change.components_)[1,1]
###Output
_____no_output_____
###Markdown
Eigenvector W^T is unitary (wi and wj are orthogonal)
###Code
print(pca_change.explained_variance_[0]) # eigenvalue
print(np.dot(np.dot(pca_change.components_[0,:].reshape(1, -1), df_weekly_diff.cov()), pca_change.components_[0,:].reshape(-1, 1))) # W^T X^TX W = lambda
print(np.dot(pca_change.components_[0,:].reshape(1, -1), df_weekly_diff.cov())) # Ax
print(pca_change.components_[0,:]*pca_change.explained_variance_[0]) # lambda x
###Output
0.07872586495891083
[[0.07872586]]
[[-0.00806942 -0.0088176 -0.01058822 -0.01341542 -0.02112048 -0.02541622
-0.03097223 -0.03273537 -0.03145325 -0.02949365 -0.0279408 ]]
[-0.00806942 -0.0088176 -0.01058822 -0.01341542 -0.02112048 -0.02541622
-0.03097223 -0.03273537 -0.03145325 -0.02949365 -0.0279408 ]
###Markdown
It shows that the eigenvalues of $X^TX$ are explained variance. They represent the variance in the direction of the eigenvector. The second line is the calculated eigenvalue $\lambda$.The third line calculates $AX$, and the last line calculates $\lambda x$, where $A=X^TX$. By definition, they should match.
###Code
df_pca_change_123 = PCA(n_components=3).fit_transform(df_weekly_diff)
df_pca_change_123 = pd.DataFrame(data = df_pca_change_123, columns = ['first component', 'second component', 'third component'])
print(df_pca_change_123.head(5))
print(df_pca_change.iloc[:5, :3])
###Output
first component second component third component
0 0.037659 -0.145471 -0.003422
1 -0.071417 0.107283 0.099630
2 -0.453613 -0.196927 -0.054567
3 -0.106102 -0.164901 0.114008
4 -0.196248 -0.097897 0.140442
PCA_1 PCA_2 PCA_3
DATE
2006-01-15 0.037659 -0.145471 -0.003422
2006-01-22 -0.071417 0.107283 0.099630
2006-01-29 -0.453613 -0.196927 -0.054567
2006-02-05 -0.106102 -0.164901 0.114008
2006-02-12 -0.196248 -0.097897 0.140442
###Markdown
Alternatively, we can do fit_transform in one call. It should match the two-step fit and transform. 3. Curve Analysis
###Code
tenors_label = ['1M', '3M', '6M', '1Y', '2Y', '3Y', '5Y', '7Y', '10Y', '20Y', '30Y']
plt.figure(figsize=(15,4))
plt.subplot(131)
plt.plot(tenors_label, pca_change.components_[0, :])
plt.subplot(132)
plt.plot(tenors_label, pca_change.components_[1, :])
plt.subplot(133)
plt.plot(tenors_label, pca_change.components_[2, :])
###Output
_____no_output_____
###Markdown
**The first eigenvector (first column of W) is the exposure (factor loading) of X to the first rotated rates (first PCA factor, as the first column of T).**Note that it takes the first row of pca.components_ because of the W transpose.First PC is level. All tenors shift down (negative) but long tenors move more than short tenors. The peak is at 7s. If the first PC moves up 1bps, all tenors move down. 1M moves down 0.10bps, 7Y moves down 0.40bps, 30Y moves down 0.35bps.Second PC is spread. It suggests that short tenors move downward while long tenors move upward, or steepening. Third PC is butterfly or curvature. The belly rises 40bps while the wings fall 40bps.
###Code
T = np.matmul(df_weekly_diff_centered, pca_change.components_.T) # T = XW
bump_up = np.zeros(T.shape[1]).reshape(1,-1)
bump_up[0,0] = 1 # first PC moves 1bps
bump_up = np.repeat(bump_up, T.shape[0], axis=0)
T_new = T+bump_up
df_weekly_diff_new = np.matmul(T_new, pca_change.components_) # X_new = T_new * W^T
print((df_weekly_diff_new-df_weekly_diff_centered).head()) # X - X_new
print(pca_change.components_[0, :])
###Output
DGS1MO DGS3MO DGS6MO ... DGS10 DGS20 DGS30
DATE ...
2006-01-15 -0.1025 -0.112004 -0.134495 ... -0.399529 -0.374637 -0.354913
2006-01-22 -0.1025 -0.112004 -0.134495 ... -0.399529 -0.374637 -0.354913
2006-01-29 -0.1025 -0.112004 -0.134495 ... -0.399529 -0.374637 -0.354913
2006-02-05 -0.1025 -0.112004 -0.134495 ... -0.399529 -0.374637 -0.354913
2006-02-12 -0.1025 -0.112004 -0.134495 ... -0.399529 -0.374637 -0.354913
[5 rows x 11 columns]
[-0.10250026 -0.11200385 -0.13449477 -0.17040677 -0.26827879 -0.32284454
-0.39341869 -0.41581462 -0.39952882 -0.37463736 -0.35491257]
###Markdown
To see why each column of W is the exposure, parallel shift first PC up by 1bps. Then for each tenor, the move is according to the factor exposure (two prints match). 4. Mean-reversion
###Code
plt.figure(figsize=(15,8))
plt.plot(df_pca_level['PCA_3']*100, label='third component')
def mle(x):
start = np.array([0.5, np.mean(x), np.std(x)]) # starting guess
def error_fuc(params):
theta = params[0]
mu = params[1]
sigma = params[2]
muc = x[:-1]*np.exp(-theta) + mu*(1.0-np.exp(-theta)) # conditional mean
sigmac = sigma*np.sqrt((1-np.exp(-2.0*theta))/(2*theta)) # conditional vol
return -np.sum(scipy.stats.norm.logpdf(x[1:], loc=muc, scale=sigmac))
result = scipy.optimize.minimize(error_fuc, start, method='L-BFGS-B',
bounds=[(1e-6, None), (None, None), (1e-8, None)],
options={'maxiter': 500, 'disp': False})
return result.x
theta, mu, sigma = mle(df_pca_level['PCA_3'])
print(theta, mu, sigma)
print(f'fly mean is {mu*100} bps')
print(f'half-life in week {np.log(2)/theta}')
print(f'annual standard deviation is {sigma/np.sqrt(2*theta)*100} bps, weekly {sigma/np.sqrt(2*theta)*100*np.sqrt(1/52)} bps')
print(np.mean(df_pca_change)[:3]*100, np.std(df_pca_change)[:3]*100) # stats
print(df_pca_level['PCA_3'].tail(1)*100) # current pca_3
###Output
0.04192451154834535 0.006565036501929492 0.11472812649582306
fly mean is 0.6565036501929492 bps
half-life in week 16.533220184584405
annual standard deviation is 39.62058631800509 bps, weekly 5.494386751289013 bps
PCA_1 -3.162210e-16
PCA_2 6.396516e-16
PCA_3 -6.544034e-17
dtype: float64 PCA_1 28.040184
PCA_2 16.729748
PCA_3 9.621329
dtype: float64
DATE
2021-01-03 16.783769
Freq: W-SUN, Name: PCA_3, dtype: float64
###Markdown
See Chapter Mean-reversion equation (A8) for the MLE expression.The fly mean is 0.657bps, the weekly mean-reversion is 4.19bps, or half-life is 16 weeks. Weekly standard deviation is 5.5 bps. In comparison, the statistics show PCA_3 mean is 0 and std is 9.62bps. 5. Butterfly
###Code
fly5050 = df_weekly_diff['DGS5'] - (df_weekly_diff['DGS2']+df_weekly_diff['DGS10'])/2
plt.figure(figsize=(20,6))
plt.subplot(131)
sns.regplot(x=df_pca_change['PCA_1'], y=fly5050)
plt.subplot(132)
sns.regplot(x=df_pca_change['PCA_2'], y=fly5050)
plt.subplot(133)
sns.regplot(x=df_pca_change['PCA_3'], y=fly5050)
###Output
_____no_output_____
###Markdown
This is 50-50 DV01 neutral fly. It is not market value neutral.It has negative exposure to PC1 and positive exposure to PC2 (the linear regression coefficient is not zero).
###Code
flymkt = df_weekly_diff['DGS5'] - (0.25*df_weekly_diff['DGS2']+0.75*df_weekly_diff['DGS10'])
plt.figure(figsize=(20,6))
plt.subplot(131)
sns.regplot(x=df_pca_change['PCA_1'], y=flymkt)
plt.subplot(132)
sns.regplot(x=df_pca_change['PCA_2'], y=flymkt)
plt.subplot(133)
sns.regplot(x=df_pca_change['PCA_3'], y=flymkt)
###Output
_____no_output_____
###Markdown
Assume 2s, 5s, 10s durations are 1.8, 4.5, and 9.0, respectively.* The 50-50 DV01 neutral has DV01 weights 0.5-1.0-0.5, and market value 1.25mm-1mm-250k. It buys more 2s than 10s because of shorter duration. Buying fly pays 0.5mm upfront.* The market neutral has market value 625k-1mm-375k; DV01 weights 0.25-1-0.75. In order to have zero upfront payment and zero DV01, it underweights (overweights) 2s (10s).
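A small sketch (not from the original notebook) verifying the market-value arithmetic above; notionals are proportional to the DV01 weight divided by duration, scaled so the 5s leg is 1mm:
```python
dur = {'2s': 1.8, '5s': 4.5, '10s': 9.0}

def notionals(dv01, dur, base='5s'):
    # notional (market value) ~ DV01 weight / duration, normalized so the base leg is 1mm
    raw = {k: w / dur[k] for k, w in dv01.items()}
    return {k: round(v / raw[base], 3) for k, v in raw.items()}

print(notionals({'2s': 0.5, '5s': 1.0, '10s': 0.5}, dur))    # ~{'2s': 1.25, '5s': 1.0, '10s': 0.25} (mm)
print(notionals({'2s': 0.25, '5s': 1.0, '10s': 0.75}, dur))  # ~{'2s': 0.625, '5s': 1.0, '10s': 0.375} (mm)
```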
###Code
W = pd.DataFrame(pca_change.components_.T)
W.columns = [f'PCA_{i+1}' for i in range(W.shape[1])]
W.index = codes
w21 = W.loc['DGS2', 'PCA_1']
w22 = W.loc['DGS2', 'PCA_2']
w23 = W.loc['DGS2', 'PCA_3']
w51 = W.loc['DGS5', 'PCA_1']
w52 = W.loc['DGS5', 'PCA_2']
w53 = W.loc['DGS5', 'PCA_3']
w101 = W.loc['DGS10', 'PCA_1']
w102 = W.loc['DGS10', 'PCA_2']
w103 = W.loc['DGS10', 'PCA_3']
w551 = w51 - (w21+w101)/2.0
w552 = w52 - (w22+w102)/2.0
print(w551, w552)
###Output
-0.059514884305378046 0.021767823095657196
###Markdown
50-50 duration has non-zero exposures on PC1 and PC2
###Code
A = np.array([[w21, w101],[w22,w102]])
b_ = np.array([w51, w52])
a, b = np.dot(np.linalg.inv(A), b_)
a, b
###Output
_____no_output_____
###Markdown
To immunize against first and second PCA, we solve DV01 a and b from the following$$w21*a - w51*1 + w101*b = 0 \\w22*a - w52*1 + w102*b = 0$$By solving a and b, it gives DV01 0.486-1-0.658, or market value 1.215mm-1mm-329k.
###Code
flypca = df_weekly_diff['DGS5']*1 - (a*df_weekly_diff['DGS2']+b*df_weekly_diff['DGS10'])
plt.figure(figsize=(20,6))
plt.subplot(131)
sns.regplot(x=df_pca_change['PCA_1'], y=flypca, ci=None)
plt.subplot(132)
sns.regplot(x=df_pca_change['PCA_2'], y=flypca, ci=None)
plt.subplot(133)
sns.regplot(x=df_pca_change['PCA_3'], y=flypca, ci=None)
###Output
_____no_output_____
###Markdown
PCA weighted fly has zero exposure to PC1 and PC2 (the line is horizontal).
###Code
plt.figure(figsize=(20,6))
plt.subplot(131)
plt.plot(df_pca_change['PCA_1'], flypca, 'o')
m1, b1 = np.polyfit(df_pca_change['PCA_1'], flypca, 1)
plt.plot(df_pca_change['PCA_1'], m1*df_pca_change['PCA_1']+b1)
plt.subplot(132)
plt.plot(df_pca_change['PCA_2'], flypca, 'o')
m2, b2 = np.polyfit(df_pca_change['PCA_2'], flypca, 1)
plt.plot(df_pca_change['PCA_2'], m2*df_pca_change['PCA_2']+b2)
plt.subplot(133)
plt.plot(df_pca_change['PCA_3'], flypca, 'o')
m3, b3 = np.polyfit(df_pca_change['PCA_3'], flypca, 1)
plt.plot(df_pca_change['PCA_3'], m3*df_pca_change['PCA_3']+b3)
print(f'slope 1: {m1}, 2: {m2}, 3: {m3}')
###Output
slope 1: 1.1415520546405518e-16, 2: -1.3348731104241916e-16, 3: 0.10950726425130013
###Markdown
This is an alternative plot via matplotlib, equivalent to the sns plot above. The print shows slopes are zero to PC1 and PC2.
###Code
###Output
_____no_output_____ |
test_for_colab.ipynb | ###Markdown
This notebook is a test for linking Google Colab with GitHub.
###Code
1+1
###Output
_____no_output_____ |
Notebook/.ipynb_checkpoints/Morris_ranking_verification_plot_Total Tertiary Education Graduates Indicator-checkpoint.ipynb | ###Markdown
Loading experiments
###Code
# Imports assumed by the cells below (module paths for the SALib/EMA Workbench helpers may
# vary by version); outcome_var, sc, t and top_factor are assumed to be set in an earlier cell.
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from SALib.analyze import morris
from Model_init import vensimModel
from ema_workbench import (TimeSeriesOutcome,
                           perform_experiments,
                           RealParameter,
                           CategoricalParameter,
                           ema_logging,
                           save_results,
                           load_results)
from ema_workbench.em_framework.salib_samplers import get_SALib_problem

ema_logging.log_to_stderr(ema_logging.INFO)
directory = 'C:/Users/moallemie/EM_analysis/Model/'
df_unc = pd.read_excel(directory+'ScenarioFramework.xlsx', sheet_name='Uncertainties')
df_unc['Min'] = df_unc['Min'] + df_unc['Reference'] * 0.75
df_unc['Max'] = df_unc['Max'] + df_unc['Reference'] * 1.25
# From the Scenario Framework (all uncertainties), filter only those top 20 sensitive uncertainties under each outcome
sa_dir='C:/Users/moallemie/EM_analysis/Data/'
mu_df = pd.read_csv(sa_dir+"MorrisIndices_{}_sc5000_t{}.csv".format(outcome_var, t))
mu_df.rename(columns={'Unnamed: 0': 'Uncertainty'}, inplace=True)
mu_df.sort_values(by=['mu_star'], ascending=False, inplace=True)
mu_df = mu_df.head(20)
mu_unc = mu_df['Uncertainty']
mu_unc_df = mu_unc.to_frame()
# Remove the rest of insensitive uncertainties from the Scenario Framework and update df_unc
keys = list(mu_unc_df.columns.values)
i1 = df_unc.set_index(keys).index
i2 = mu_unc_df.set_index(keys).index
df_unc2 = df_unc[i1.isin(i2)]
vensimModel.uncertainties = [RealParameter(row['Uncertainty'], row['Min'], row['Max']) for index, row in df_unc2.iterrows()]
df_out = pd.read_excel(directory+'ScenarioFramework.xlsx', sheet_name='Outcomes')
vensimModel.outcomes = [TimeSeriesOutcome(out) for out in df_out['Outcome']]
r_dir = 'D:/moallemie/EM_analysis/Data/'
results = load_results(r_dir+'SDG_experiments_ranking_verification_{}_sc{}.tar.gz'.format(outcome_var, sc))
experiments, outcomes = results
###Output
[MainProcess/INFO] results loaded succesfully from D:\moallemie\EM_analysis\Data\SDG_experiments_ranking_verification_Total Tertiary Education Graduates Indicator_sc500.tar.gz
###Markdown
Calculating SA (Morris) metrics
###Code
#Sobol indice calculation as a function of number of scenarios and time
def make_morris_df(scores, problem, outcome_var, sc, t):
scores_filtered = {k:scores[k] for k in ['mu_star','mu_star_conf','mu','sigma']}
Si_df = pd.DataFrame(scores_filtered, index=problem['names'])
sa_dir='C:/Users/moallemie/EM_analysis/Data/'
Si_df.to_csv(sa_dir+"MorrisIndices_verification_{}_sc{}_t{}.csv".format(outcome_var, sc, t))
Si_df.sort_values(by=['mu_star'], ascending=False, inplace=True)
Si_df = Si_df.head(20)
Si_df = Si_df.iloc[::-1]
indices = Si_df[['mu_star','mu']]
errors = Si_df[['mu_star_conf','sigma']]
return indices, errors
# Set the outcome variable, number of scenarios generated, and the timeslice you're interested in for SA
problem = get_SALib_problem(vensimModel.uncertainties)
X = experiments.iloc[:, :-3].values
Y = outcomes[outcome_var][:,-1]
scores = morris.analyze(problem, X, Y, print_to_console=False)
inds, errs = make_morris_df(scores, problem, outcome_var, sc, t)
###Output
_____no_output_____
###Markdown
Plotting SA results
###Code
# define the plotting function
def plot_scores(inds, errs, outcome_var, sc):
sns.set_style('white')
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(3, 6))
ind = inds.iloc[:,0]
err = errs.iloc[:,0]
ind.plot.barh(xerr=err.values.T,ax=ax, color = ['#FCF6F5']*(20-top_factor)+['#C6B5ED']*top_factor,
ecolor='dimgray', capsize=2, width=.9)
ax.set_ylabel('')
ax.legend().set_visible(False)
ax.set_xlabel('mu_star index', fontsize=12)
ylabels = ax.get_yticklabels()
ylabels = [item.get_text()[:-10] for item in ylabels]
ax.set_yticklabels(ylabels, fontsize=12)
ax.set_title("Number of experiments (N): "+str(sc*21), fontsize=12)
plt.suptitle("{} in 2100".format(outcome_var), y=0.94, fontsize=12)
plt.rcParams["figure.figsize"] = [7.08,7.3]
plt.savefig('{}/Morris_verification_set_{}_sc{}.png'.format(r'C:/Users/moallemie/EM_analysis/Fig/sa_ranking', outcome_var, sc), dpi=600, bbox_inches='tight')
return fig
plot_scores(inds, errs, outcome_var, sc)
plt.close()
###Output
_____no_output_____ |
python-tuts/0-beginner/5 - First Class Functions/07 - Map, Filter, Zip and List Comprehensions.ipynb | ###Markdown
Higher-Order Functions: Map and Filter **Definition**: A function that takes a function as an argument, and/or returns a function as its return value For example, the **sorted** function is a higher-order function as we saw in an earlier video. Map The **map** built-in function is a higher-order function that applies a function to an iterable type object:
###Code
help(map)
def fact(n):
return 1 if n < 2 else n * fact(n-1)
fact(3)
fact(4)
map(fact, [1, 2, 3, 4, 5])
###Output
_____no_output_____
###Markdown
The **map** function returns a **map** object, which is an **iterable** - we can either convert that to a list or enumerate it:
###Code
l = list(map(fact, [1, 2, 3, 4, 5]))
print(l)
###Output
[1, 2, 6, 24, 120]
###Markdown
We can also use it this way:
###Code
l1 = [1, 2, 3, 4, 5]
l2 = [10, 20, 30, 40, 50]
f = lambda x, y: x+y
m = map(f, l1, l2)
list(m)
###Output
_____no_output_____
###Markdown
Filter
###Code
help(filter)
###Output
Help on class filter in module builtins:
class filter(object)
| filter(function or None, iterable) --> filter object
|
| Return an iterator yielding those items of iterable for which function(item)
| is true. If function is None, return the items that are true.
|
| Methods defined here:
|
| __getattribute__(self, name, /)
| Return getattr(self, name).
|
| __iter__(self, /)
| Implement iter(self).
|
| __next__(self, /)
| Implement next(self).
|
| __reduce__(...)
| Return state information for pickling.
|
| ----------------------------------------------------------------------
| Static methods defined here:
|
| __new__(*args, **kwargs) from builtins.type
| Create and return a new object. See help(type) for accurate signature.
###Markdown
The **filter** function is a function that filters an iterable based on the truthyness of the elements, or the truthyness of the elements after applying a function to them. Like the **map** function, the **filter** function returns an iterable that we can view by generating a list from it, or simply enumerating in a for loop.
###Code
l = [0, 1, 2, 3, 4, 5, 6]
for e in filter(None, l):
print(e)
###Output
1
2
3
4
5
6
###Markdown
Notice how **0** was eliminated from the list, since **0** is **falsy**. We can use a function for this filtering.Suppose we want to filter out all odd values, only retaining even values: We could first define a function to return True if the value is even, and False otherwise:
###Code
def is_even(n):
return n % 2 == 0
l = [1, 2, 3, 4, 5, 6, 7, 8, 9]
result = filter(is_even, l)
print(list(result))
###Output
[2, 4, 6, 8]
###Markdown
Of course, we could just use a lambda expression instead:
###Code
l = [1, 2, 3, 4, 5, 6, 7, 8, 9]
result = filter(lambda x: x % 2 == 0, l)
print(list(result))
###Output
[2, 4, 6, 8]
###Markdown
Alternatives to **map** and **filter** using Comprehensions We'll cover comprehensions in much more detail later, but, for now, just be aware that we can use comprehensions instead of the **map** and **filter** functions - you decide which one you find more readable and enjoyable to write. Map using a list comprehension: * factorial example
###Code
l = [1, 2, 3, 4, 5]
result = [fact(i) for i in l]
print(result)
###Output
[1, 2, 6, 24, 120]
###Markdown
* two iterables example Before we do this example we need to know about the **zip** function.The **zip** built-in function will take one or more iterables, and generate an iterable of tuples where each tuple contains one element from each iterable:
###Code
l1 = 1, 2, 3
l2 = 'a', 'b', 'c'
list(zip(l1, l2))
l1 = 1, 2, 3
l2 = [10, 20, 30]
l3 = ('a', 'b', 'c')
list(zip(l1, l2, l3))
l1 = [1, 2, 3]
l2 = (10, 20, 30)
l3 = 'abc'
list(zip(l1, l2, l3))
l1 = range(100)
l2 = 'python'
list(zip(l1, l2))
###Output
_____no_output_____
###Markdown
Using the **zip** function we can now add our two lists element by element as follows:
###Code
l1 = [1, 2, 3, 4, 5]
l2 = [10, 20, 30, 40, 50]
result = [i + j for i,j in zip(l1,l2)]
print(result)
###Output
[11, 22, 33, 44, 55]
###Markdown
Filtering using a comprehension We can very easily filter an iterable using a comprehension as follows:
###Code
l = [1, 2, 3, 4, 5, 6, 7, 8, 9]
result = [i for i in l if i % 2 == 0]
print(result)
###Output
[2, 4, 6, 8]
###Markdown
As you can see, we did not even need a lambda expression! Combining **map** and **filter**
###Code
list(filter(lambda y: y < 25, map(lambda x: x**2, range(10))))
###Output
_____no_output_____
###Markdown
Alternatively, we can use a list comprehension to do the same thing:
###Code
[x**2 for x in range(10) if x**2 < 25]
###Output
_____no_output_____ |
.ipynb_checkpoints/ml11-checkpoint.ipynb | ###Markdown
Documentation for the two itertools helpers used below: https://docs.python.org/3.8/library/itertools.html#itertools.cycle and https://docs.python.org/3.8/library/itertools.html#itertools.islice
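A quick standalone illustration of what these two helpers do:
```python
from itertools import cycle, islice

# cycle repeats a sequence indefinitely; islice takes just the first n items
palette = ['#377eb8', '#ff7f00', '#4daf4a']
print(list(islice(cycle(palette), 7)))
# ['#377eb8', '#ff7f00', '#4daf4a', '#377eb8', '#ff7f00', '#4daf4a', '#377eb8']
```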
###Code
# Imports needed below; the sample datasets (noisy_circles, varied, aniso) are
# assumed to be generated in earlier cells of the notebook.
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle, islice
from sklearn import cluster
from sklearn.preprocessing import StandardScaler

# Set clustering and plotting parameters
plt.figure(figsize=(10, 8))
plot_num = 1
datasets = [(noisy_circles, {'n_clusters': 2}),
(varied, {'n_clusters': 3}),
(aniso, {'n_clusters': 3})]
# Loop over the sample datasets and draw the plots
for i_dataset, (dataset, algo_params) in enumerate(datasets):
X, y = dataset
X = StandardScaler().fit_transform(X)
# Set up KMeans with three different random seeds (starting points)
random_1 = cluster.KMeans(n_clusters=algo_params['n_clusters'], random_state=1)
random_50 = cluster.KMeans(n_clusters=algo_params['n_clusters'], random_state=50)
random_90 = cluster.KMeans(n_clusters=algo_params['n_clusters'], random_state=90)
clustering_algorithms = (
('seed1', random_1),
('seed50', random_50),
('seed90', random_90))
# Draw the three plots
for name, algorithm in clustering_algorithms:
y_pred = algorithm.fit_predict(X)
plt.subplot(len(datasets), len(clustering_algorithms), plot_num)
if i_dataset == 0:
plt.title(name, size=18)
colors = np.array(list(islice(cycle(['#377eb8', '#ff7f00', '#4daf4a',
'#f781bf', '#a65628', '#984ea3',
'#999999', '#e41a1c', '#dede00']),
int(max(y_pred) + 1))))
plt.scatter(X[:, 0], X[:, 1], s=10, color=colors[y_pred])
plt.xlim(-2.5, 2.5)
plt.ylim(-2.5, 2.5)
plt.xticks(())
plt.yticks(())
plot_num += 1
plt.show()
###Output
_____no_output_____ |
quick_tests.ipynb | ###Markdown
Implementation of exponentially and logarithmically growing bin size for ord-vector.Parameters can be optimized in outer-loop with Optuna, for example.Eqn. 1, exponential growth: $N(t) = \lfloor N_0 \exp(at) + 1/2 \rfloor$Eqn. 2, logarithmic growth: $N(t) = \lfloor N_0 + a \ln(t/b + 1) + 1/2 \rfloor$where $N_0$ is the initial bin size and $a$ and $b$ are parameters affecting the growth rates.Note: adding 1/2 and flooring to round to the nearest integer.**Sketch of algorithm**1. Use eqn. 1 or 2 to define the bin size and continue taking bins until out of data.2. If final bin bleeds over, merge it with the penultimate bin.Hard to implement efficiently in Python/NumPy, but this might be ok since we shouldn't be making that many bins. Potential alternative: use pandas exponential weighted functions?
###Code
import numpy as np
import seaborn as sns

# N(t) = n0 * exp(a*t)
t = np.linspace(0, 365, num=int(5e2))
n0 = 5
a = 1/150
n1=5
b=20
c=20
exp_grow = lambda t,n0,a: n0*np.exp(a*t)
log_grow = lambda t,n0,a,b: n0 + a*np.log(t/b + 1)
weight = exp_grow(t,n0,a)
sns.lineplot(x=t, y=weight.round(), label='Exp Growth')
sns.lineplot(x=t, y=log_grow(t,n1,b,c).round(), label='Log Growth');
sizes = weight.round()
sns.lineplot(x=t, y=sizes)
def get_exp_grow_bins(t, n0, a):
    assert (n0 - 1) > 1, "Need a minimum binwidth (n0 - 1) > 1."
    sizes = exp_grow(t, n0, a).round().astype(int)  # integer bin size at each start index
    starts = []
    ends = []
    i = 0
    while i < len(t):
        starts.append(i)
        ends.append(min(i + sizes[i], len(t)))
        i += sizes[i]
    # if the final bin bleeds over the end of the data, merge it with the penultimate bin
    if len(starts) > 1 and i > len(t):
        starts.pop()
        ends.pop()
        ends[-1] = len(t)
    # an aggregation over each bin could then be applied, e.g. [agg(x[s:e]) for s, e in zip(starts, ends)]
    return starts, ends
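# Quick illustration of the function above (assumes t, n0, a from the earlier cell);
# prints the first few (start, end) index pairs of the exponentially growing bins.
starts, ends = get_exp_grow_bins(t, n0, a)
print(list(zip(starts, ends))[:5])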
###Output
_____no_output_____ |
NATURAL LANGUAGE PROCESSING.ipynb | ###Markdown
converting the data into a proper csv file to perform analysis
###Code
import pandas as pd

messages = pd.read_csv("smsspamcollection/SMSSpamCollection", sep='\t', names=['lable', 'message'])
messages
messages.describe()
messages.groupby('lable').describe()
messages['length']= messages['message'].apply(len)
messages
###Output
_____no_output_____
###Markdown
ploting the data on basis of length
###Code
messages['length'].hist(bins=200)
###Output
_____no_output_____
###Markdown
checking the factors of data on the basis of length
###Code
messages['length'].describe()
kk=messages[messages['length']==910]
kk
df=kk['message'].iloc[0]
df
###Output
_____no_output_____
###Markdown
distributing the data on the basis of spam and ham according to their length
###Code
messages.hist(column='length',by='lable',bins=60,figsize=(12,4))
import string
###Output
_____no_output_____
###Markdown
creating a sample function to remove the punctuation from the data
###Code
nopunc=[c for c in df if c not in string.punctuation]
from nltk.corpus import stopwords as stp
stp.words('english')
nopunc=''.join(nopunc)
nopunc
nopunc.split()
###Output
_____no_output_____
###Markdown
remove English stop words such as 'a', 'the', 'if', 'there', etc. by using the NLTK corpus to clean the data
###Code
clean_df=[words for words in nopunc.split() if words not in stp.words('english')]
clean_df
###Output
_____no_output_____
###Markdown
creating a proper function with the help of the sample function to clean the complete data
###Code
def text_process(df):
"""
1. remove punctuation
2. remove stopwords
3. return clean text
"""
nopunc=[char for char in df if char not in string.punctuation]
nopunc=''.join(nopunc)
return[word for word in nopunc.split() if word.lower() not in stp.words('english')]
messages['message'].apply(text_process)
###Output
_____no_output_____
###Markdown
Building a predictive machine learning model
###Code
from sklearn.feature_extraction.text import CountVectorizer
bow_transformer=CountVectorizer(analyzer=text_process)
bow_transformer.fit(messages['message'])
print(len(bow_transformer.vocabulary_))
###Output
11425
###Markdown
BREAKDOWN OF WHAT EXACTLY THE FUNCTION bow_transformer DID...
###Code
mess4=messages['message'][3]
mess4
bow4 = bow_transformer.transform([mess4])
print(bow4)
###Output
(0, 4068) 2
(0, 4629) 1
(0, 5261) 1
(0, 6204) 1
(0, 6222) 1
(0, 7186) 1
(0, 9554) 2
###Markdown
bow_transformer counts how many times each word is repeated and maps each word to a numeric index in the vocabulary
###Code
bow_transformer.get_feature_names()[4068] # to check what word got repeated
bow_transformer.get_feature_names()[9554]
messages_bow = bow_transformer.transform(messages['message'])
print('the shape of sparse metrics : ',messages_bow.shape)
messages_bow.nnz #non zero occurance values that non zero in sparse matrix (nnz)
###Output
_____no_output_____
###Markdown
TF: Term Frequency, which measures how frequently a term occurs in a document. Since every document is different in length, it is possible that a term would appear many more times in long documents than in shorter ones. Thus, the term frequency is often divided by the document length (i.e., the total number of terms in the document) as a way of normalization:

TF(t) = (Number of times term t appears in a document) / (Total number of terms in the document)

IDF: Inverse Document Frequency, which measures how important a term is. While computing TF, all terms are considered equally important. However, it is known that certain terms, such as "is", "of", and "that", may appear many times but have little importance. Thus we need to weigh down the frequent terms while scaling up the rare ones, by computing the following:

IDF(t) = log_e(Total number of documents / Number of documents with term t in it)

See below for a simple example.

Example: Consider a document containing 100 words wherein the word "cat" appears 3 times. The term frequency (i.e., tf) for "cat" is then (3 / 100) = 0.03. Now, assume we have 10 million documents and the word "cat" appears in one thousand of these. Then, the inverse document frequency (i.e., idf) is calculated as log(10,000,000 / 1,000) = 4. Thus, the tf-idf weight is the product of these quantities: 0.03 * 4 = 0.12.
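As a quick sanity check (added for illustration; the numbers come straight from the example above), the "cat" computation can be reproduced in a few lines. Note that the worked example implicitly uses a base-10 logarithm (log10(10^4) = 4); scikit-learn's `TfidfTransformer` uses the natural logarithm with smoothing, so the weights it produces will differ slightly.

```python
import math

tf = 3 / 100                          # "cat" appears 3 times in a 100-word document
idf = math.log10(10_000_000 / 1_000)  # log10(10^4) = 4
print(tf * idf)                       # 0.03 * 4 = 0.12, matching the example
```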
###Code
from sklearn.feature_extraction.text import TfidfTransformer
tfidf_tramsformer = TfidfTransformer().fit(messages_bow)
# what exactly we did
tfidf4=tfidf_tramsformer.transform(bow4)
print(tfidf4)
lo = tfidf_tramsformer.idf_[bow_transformer.vocabulary_['love']]
print('tfidf weight for love: ',lo)
messages_tfidf = tfidf_tramsformer.transform(messages_bow)
messages_tfidf.shape
###Output
_____no_output_____
###Markdown
Creating an ML model. We are using the Naive Bayes method.
###Code
from sklearn.naive_bayes import MultinomialNB
model=MultinomialNB()
spam_detection = model.fit(messages_tfidf,messages['lable'])
predictions= spam_detection.predict(messages_tfidf)
predictions
#Doing the same thing but on train test split
from sklearn.model_selection import train_test_split
x= messages['lable']
y= messages['message']
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.3)
###Output
_____no_output_____
###Markdown
Creating a Data Pipeline

Let's run our model again and then predict off the test set. We will use scikit-learn's pipeline capabilities to store a workflow of transformations. This will allow us to set up all the transformations that we will do to the data for future use. Let's see an example of how it works: basically, we are going to store the above process of CountVectorizer(), TfidfTransformer(), and MultinomialNB() in a Pipeline provided by sklearn in the form of a workflow.
###Code
from sklearn.pipeline import Pipeline
###Output
_____no_output_____
###Markdown
FINAL MODEL
###Code
process_pipeline = Pipeline([
('bow',CountVectorizer(analyzer=text_process)),
('tfidf_transformation',TfidfTransformer()),
('predictive_model',MultinomialNB())
])
process_pipeline.fit(y_train,x_train)
y_pred = process_pipeline.predict(y_test)
y_pred
from sklearn.metrics import classification_report
print(classification_report(y_pred,x_test))
###Output
precision recall f1-score support
ham 1.00 0.96 0.98 1510
spam 0.72 1.00 0.84 162
accuracy 0.96 1672
macro avg 0.86 0.98 0.91 1672
weighted avg 0.97 0.96 0.96 1672
|
88.kaggle/titanic/titanic.ipynb | ###Markdown
Load dataset
###Code
test_df = pd.read_csv("./test.csv")
train_df = pd.read_csv("./train.csv")
train_df.head(1)
test_df.head()
train_df.set_index('PassengerId', inplace=True)
test_df.set_index('PassengerId', inplace=True)
train_df.head()
test_df.head()
train_index = train_df.index
test_index = test_df.index
y_train_df = train_df.pop("Survived")
y_train_df.head()
###Output
_____no_output_____
###Markdown
Data preprocessing
###Code
pd.set_option('display.float_format', lambda x: '%.2f' % x)
test_df.head()
test_df.isnull().sum() / len(test_df)
###Output
_____no_output_____
###Markdown
Decision 1 - Drop Cabin
###Code
del test_df["Cabin"]
del train_df["Cabin"]
all_df = train_df.append(test_df)
all_df.head()
(all_df.isnull().sum() / len(all_df)).plot(kind='bar')
plt.show()
len(all_df)
del all_df["Name"]
del all_df["Ticket"]
all_df.head()
all_df["Sex"] = all_df["Sex"].replace({"male": 0, "female": 1})
all_df.head()
all_df["Embarked"].unique()
all_df["Embarked"] = all_df["Embarked"].replace({"S":0, "C":1, "Q":2, np.nan:99})
all_df["Embarked"].unique()
all_df.head()
pd.get_dummies(all_df["Embarked"], prefix="embarked")
matrix_df = pd.merge(all_df, pd.get_dummies(all_df["Embarked"], prefix="embarked"), left_index=True, right_index=True)
matrix_df.head()
matrix_df.corr()
all_df.groupby("Pclass")["Age"].mean()
#all_df.loc[]
# Inspect the Age values that are missing for first-class passengers
# (note: each comparison must be parenthesized, otherwise `&` binds before `==`)
all_df.loc[
    (all_df["Pclass"] == 1) & (all_df["Age"].isnull()), "Age"
]
# Forcibly fill in values where the Age measurements are missing
all_df.loc[(all_df["Pclass"] == 1) & (all_df["Age"].isnull()), "Age"] = 39.16
all_df.loc[(all_df["Pclass"] == 2) & all_df["Age"].isnull(), "Age"] = 29.51
all_df.loc[(all_df["Pclass"] == 3) & all_df["Age"].isnull(), "Age"] = 24.82
all_df.isnull().sum()
all_df.groupby("Pclass")["Fare"].mean()
all_df[all_df["Fare"].isnull()]
all_df.loc[all_df["Fare"].isnull(), "Fare"] = 13.30
del all_df["Embarked"]
all_df["Pclass"] = all_df["Pclass"].replace({1:"A", 2:"B", 3:"C"})
all_df = pd.get_dummies(all_df)
all_df.head()
all_df = pd.merge(
all_df, matrix_df[["embarked_0", "embarked_1", "embarked_2", "embarked_99"]],
left_index= True, right_index=True)
train_df = all_df[all_df.index.isin(train_index)]
test_df = all_df[all_df.index.isin(test_index)]
train_df.head(3)
test_df.head(3)
###Output
_____no_output_____
###Markdown
Build Model
###Code
x_data = train_df.values
y_data = y_train_df.values
y_data
from sklearn.linear_model import LogisticRegression
cls = LogisticRegression()
cls.fit(x_data, y_data)
cls.intercept_
cls.coef_
cls.predict(test_df.values)
test_df.index
x_test = test_df.values  # .as_matrix() was removed in newer pandas; .values is the equivalent
y_test = cls.predict(x_test)
y_test
result = np.concatenate((test_index.values.reshape(-1,1), cls.predict(x_test).reshape(-1,1)), axis=1)
result[:5]
df_submission = pd.DataFrame(result, columns=["PassengerId", "Survived"])
df_submission
df_submission.to_csv("submission_result.csv", index=False)
###Output
_____no_output_____ |
_projects/project0/.ipynb_checkpoints/Jupyter-Getting-Started-checkpoint.ipynb | ###Markdown
Welcome to Python!There are many excellent Python and Jupyter/IPython tutorials out there. This Notebook contains a few snippets of code from here and there, but we suggest you go over some in-depth tutorials, especially if you are not familiar with Python. Here we borrow some material from:- [A Crash Course in Python for Scientists](http://nbviewer.ipython.org/gist/rpmuller/5920182) (which itself contains some nice links to other tutorials), - [matplotlib examples](http://matplotlib.org/gallery.html),- [Chapter 1 from Pandas Cookbook](http://nbviewer.ipython.org/github/jvns/pandas-cookbook/tree/master/cookbook/)This short introduction is itself written in Jupyter Notebook. See the Project 0 setup instructions to start a Jupyter server and open this notebook there.As a starting point, you can simply type in expressions into the python shell in the browser.
###Code
8+8
###Output
_____no_output_____
###Markdown
Enter will continue the **cell**. If you want to execute the commands, you can either press the **play** button, or use Shift+Enter
###Code
days_of_the_week = ["Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"]
for day in days_of_the_week:
statement = "Today is " + day
print(statement)
###Output
Today is Sunday
Today is Monday
Today is Tuesday
Today is Wednesday
Today is Thursday
Today is Friday
Today is Saturday
###Markdown
The above code uses a List. In case you haven't realized this yet, Python uses "indentation" to decide the scope, so there is no need to enclose code within {} or similar constructs. The other data structures in Python include Tuples and Dictionaries. Tuples are similar to Lists, but are immutable so we can't modify it (say by appending). Dictionaries are similar to Maps.
###Code
tuple1 = (1,2,'hi',9.0)
tuple1
# The following code will give an error since we are trying to change an immutable object
tuple1.append(7)
ages_dictionary = {"Rick": 46, "Bob": 86, "Fred": 21}
print("Rick's age is ",ages_dictionary["Rick"])
###Output
Rick's age is 46
###Markdown
FunctionsHere we write a quick function to compute the Fibonacci sequence (remember this from Discrete Math?)
###Code
def fibonacci(sequence_length):
"Return the Fibonacci sequence of length *sequence_length*"
sequence = [0,1]
if sequence_length < 1:
print("Fibonacci sequence only defined for length 1 or greater")
return
if 0 < sequence_length < 3:
return sequence[:sequence_length]
for i in range(2,sequence_length):
sequence.append(sequence[i-1]+sequence[i-2])
return sequence
help(fibonacci)
fibonacci(10)
###Output
_____no_output_____
###Markdown
The following function shows several interesting features, including the ability to return multiple values as a tuple, and the idea of "tuple assignment", where objects are unpacked into variables (the first line after for).
###Code
positions = [
('Bob',0.0,21.0),
('Cat',2.5,13.1),
('Dog',33.0,1.2)
]
def minmax(objects):
minx = 1e20 # These are set to really big numbers
miny = 1e20
for obj in objects:
name,x,y = obj
if x < minx:
minx = x
if y < miny:
miny = y
return minx,miny
x,y = minmax(positions)
print(x,y)
import bs4
import requests
bs4
requests
from bs4 import BeautifulSoup
###Output
_____no_output_____ |
notebooks/reddit_data_cleaning_2_jcraft.ipynb | ###Markdown
Reddit Cleaning 2 Import dependencies
###Code
import pandas as pd
import re
import string
import pickle # just in case
###Output
_____no_output_____
###Markdown
Set file locations
###Code
# File for cleaned comment data (input file)
cleaned_reddit_comments = '../data/reddit/reddit_comments_cleaned.csv'
# File for preprocessed comment text (output file)
cleaned_reddit_comment_text = '../data/reddit/reddit_comment_text_cleaned.csv'
###Output
_____no_output_____
###Markdown
Read in the data
###Code
df = pd.read_csv(cleaned_reddit_comments)
df.head()
df.info(verbose=True)
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1911 entries, 0 to 1910
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 post_id 1911 non-null object
1 comment_id 1911 non-null object
2 author 1911 non-null object
3 comment 1911 non-null object
4 created_utc 1911 non-null float64
5 downs 1911 non-null float64
6 ups 1911 non-null float64
7 reply 1911 non-null object
8 comment_replied_id 214 non-null object
9 comment_date 1911 non-null object
dtypes: float64(3), object(7)
memory usage: 149.4+ KB
###Markdown
Preprocess comment text for analysis
###Code
# Create a new dataframe. Use 'comment_id' for any future merges.
text_df = df[['comment_id', 'comment']].copy()
text_df.set_index('comment_id')
# Define a little cleaner function
# I would really like to get some review on the regex here.
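# The pattern '[%s]' % re.escape(string.punctuation) builds a character class of every
# ASCII punctuation character; re.escape stops characters like ']' or '^' from acting
# as regex metacharacters, so re.sub simply deletes all punctuation.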
def clean_text_round1(text):
'''Make text lowercase, remove punctuation, excess whitespace (in that order).'''
# make text lowercase
text = text.lower()
# remove punctuation
text = re.sub('[%s]' % re.escape(string.punctuation), '', text)
# remove multiple whitespace, and convert all whitespace to space (' ').
text = " ".join(text.split())
return text
text_df['clean_text'] = text_df['comment'].apply(lambda x: clean_text_round1(x))
text_df
text_df.info(verbose=True)
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1911 entries, 0 to 1910
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 comment_id 1911 non-null object
1 comment 1911 non-null object
2 clean_text 1911 non-null object
dtypes: object(3)
memory usage: 44.9+ KB
###Markdown
Export cleaned data
###Code
text_df.to_csv(cleaned_reddit_comment_text, index = False)
###Output
_____no_output_____ |
Scripts/chapter11_DataCollectionAPI.ipynb | ###Markdown
Data collection: Using social media APIs
###Code
# Libraries
import tweepy
import sys
# set path
sys.path.insert(1,"C:/Users/askes/OneDrive/Skrivebord/SDS/BaseCamp")
# Import API keys from private file.
from AppCred import CONSUMER_KEY, CONSUMER_SECRET
from AppCred import ACCESS_TOKEN, ACCESS_TOKEN_SECRET
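# Minimal usage sketch (an assumption added for illustration, not from the original
# notebook): the imported credentials are typically wired into tweepy's OAuth 1a flow.
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
api = tweepy.API(auth)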
###Output
_____no_output_____ |
tutorials/fMRI - 1 - Graph Analysis (Group).ipynb | ###Markdown
In this tutorial we are going to estimate the connectivity and subsequently filter the connectivity matrices.

- - -

Load data
###Code
import sys
import tqdm
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
import numpy as np
np.set_printoptions(threshold=sys.maxsize)
fmri = np.load('data/fmri_autism_ts.npy', allow_pickle=True)
labels = np.load('data/fmri_autism_labels.npy')
num_subjects = len(fmri)
num_samples, num_rois = np.shape(fmri[0])
###Output
_____no_output_____
###Markdown
Compute the connectivity
###Code
conn_mtx = np.zeros((num_subjects, num_rois, num_rois))
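# np.corrcoef treats each row as a variable, so transposing the (samples x ROIs)
# time series below yields an ROI-by-ROI correlation (connectivity) matrix.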
for subj in tqdm.tqdm(range(num_subjects)):
fmri_ts = fmri[subj]
conn_mtx[subj, ...] = np.corrcoef(fmri_ts.T)
np.save('data/fmri_autism_conn_mtx.npy', conn_mtx)
###Output
_____no_output_____
###Markdown
Filter connectivity matrices
###Code
thres_conn_mtx = np.zeros_like(conn_mtx)
from dyconnmap.graphs import threshold_eco
for subj in tqdm.tqdm(range(num_subjects)):
subj_conn_mtx = np.abs(conn_mtx[subj])
_, CIJtree, _ = threshold_eco(subj_conn_mtx)
thres_conn_mtx[subj] = CIJtree
np.save('data/fmri_autism_thres_conn_mtx.npy', thres_conn_mtx)
###Output
_____no_output_____ |
hw1_lovro.ipynb | ###Markdown
Classifying Fashion-MNIST

Now it's your turn to build and train a neural network. You'll be using the [Fashion-MNIST dataset](https://github.com/zalandoresearch/fashion-mnist), a drop-in replacement for the MNIST dataset. MNIST is actually quite trivial with neural networks where you can easily achieve better than 97% accuracy. Fashion-MNIST is a set of 28x28 greyscale images of clothes. It's more complex than MNIST, so it's a better representation of the actual performance of your network, and a better representation of datasets you'll use in the real world.

In this notebook, you'll build your own neural network. For the most part, you could just copy and paste the code from Part 3, but you wouldn't be learning. It's important for you to write the code yourself and get it to work. Feel free to consult the previous notebooks though as you work through this.

First off, let's load the dataset through torchvision.
###Code
import matplotlib.pyplot as plt
import numpy as np
import torch
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))])
# Download and load the training data
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# Download and load the test data
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
###Output
_____no_output_____
###Markdown
Here we can see one of the images.
###Code
def imshow(image, ax=None, title=None, normalize=True):
"""Imshow for Tensor."""
if ax is None:
fig, ax = plt.subplots()
image = image.numpy().transpose((1, 2, 0))
if normalize:
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
image = std * image + mean
image = np.clip(image, 0, 1)
ax.imshow(image)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.tick_params(axis='both', length=0)
ax.set_xticklabels('')
ax.set_yticklabels('')
return ax
image, label = next(iter(trainloader))
imshow(image[0,:]);
###Output
_____no_output_____
###Markdown
Building the network

Here you should define your network. As with MNIST, each image is 28x28 which is a total of 784 pixels, and there are 10 classes. You should include at least one hidden layer. We suggest you use ReLU activations for the layers and to return the logits or log-softmax from the forward pass. It's up to you how many layers you add and the size of those layers.
###Code
from torch import nn, optim
import torch.nn.functional as F
# TODO: Define your network architecture here
# You should create a Convolutional Neural Network
# (you can also add some fully connected layers if you wish)
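# Added note: the final nn.Linear uses in_features=320 because the 28x28 input becomes
# 20 feature maps of size 4x4 after the conv/pool stack:
# 28 -> 24 (conv1, k=5) -> 20 (conv2, k=5) -> 10 (pool) -> 8 (conv3, k=3) -> 4 (pool),
# and 20 * 4 * 4 = 320.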
class Classifier(nn.Module):
def __init__(self):
super(Classifier, self).__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.conv3 = nn.Conv2d(20, 20, kernel_size=3)
self.mp = nn.MaxPool2d(2)
self.fc = nn.Linear(320, 10)
def forward(self, x):
in_size = x.size(0)
x = F.relu(self.conv1(x))
x = F.relu(self.mp(self.conv2(x)))
x = F.relu(self.mp(self.conv3(x)))
x = x.view(in_size, -1) # flatten the tensor
x = self.fc(x)
return F.log_softmax(x)
###Output
_____no_output_____
###Markdown
Train the network

Now you should create your network and train it. First you'll want to define [the criterion](http://pytorch.org/docs/master/nn.html#loss-functions) (something like `nn.CrossEntropyLoss` or `nn.NLLLoss`) and [the optimizer](http://pytorch.org/docs/master/optim.html) (typically `optim.SGD` or `optim.Adam`).

Then write the training code. Remember the training pass is a fairly straightforward process:

* Make a forward pass through the network to get the logits
* Use the logits to calculate the loss
* Perform a backward pass through the network with `loss.backward()` to calculate the gradients
* Take a step with the optimizer to update the weights

By adjusting the hyperparameters (hidden units, learning rate, etc), you should be able to get the training loss below 0.4.
###Code
# TODO: Create the network, define the criterion and optimizer
model = Classifier()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr = 0.002)
n_epochs = 3
# TODO: Train the network here
for epoch in range(n_epochs):
for imgs, labels in trainloader:
optimizer.zero_grad()
batch_size = imgs.shape[0]
output = model(imgs)
loss = criterion(output, labels)
loss.backward()
optimizer.step()
print("Epoch: %d, Loss: %f" % (epoch, float(loss)))
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
def view_classify(img, ps, version="MNIST"):
''' Function for viewing an image and it's predicted classes.
'''
ps = ps.data.numpy().squeeze()
fig, (ax1, ax2) = plt.subplots(figsize=(6,9), ncols=2)
ax1.imshow(img.resize_(1, 28, 28).numpy().squeeze())
ax1.axis('off')
ax2.barh(np.arange(10), ps)
ax2.set_aspect(0.1)
ax2.set_yticks(np.arange(10))
if version == "MNIST":
ax2.set_yticklabels(np.arange(10))
elif version == "Fashion":
ax2.set_yticklabels(['T-shirt/top',
'Trouser',
'Pullover',
'Dress',
'Coat',
'Sandal',
'Shirt',
'Sneaker',
'Bag',
'Ankle Boot'], size='small');
ax2.set_title('Class Probability')
ax2.set_xlim(0, 1.1)
plt.tight_layout()
# Test out your network!
dataiter = iter(testloader)
images, labels = dataiter.next()
img = images[1]
img = img.unsqueeze(1)
# TODO: Calculate the class probabilities (softmax) for img
prob = model(img)
prob = torch.exp(model(img))
# Plot the image and probabilities
view_classify(img, prob, version='Fashion')
###Output
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:20: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.
###Markdown
Ex2

As you can see, the neural network recognizes digits with a high accuracy of 99%. Try the network using the Gradio library. Draw a few digits and see the results. If you get unexpected results, write down possible reasons why the network did not recognize the digits you wrote.
###Code
!pip install gradio
import tensorflow as tf
import gradio as gr
from matplotlib import pyplot as plt
import numpy as np
objects = tf.keras.datasets.mnist
(training_images, training_labels), (test_images, test_labels) = objects.load_data()
training_images = training_images / 255.0
test_images = test_images / 255.0
from tensorflow.keras.layers import Flatten, Dense
model = tf.keras.models.Sequential([Flatten(input_shape=(28,28)),
Dense(256, activation='relu'),
Dense(256, activation='relu'),
Dense(128, activation='relu'),
Dense(10, activation=tf.nn.softmax)])
model.compile(optimizer = 'adam',
loss = 'sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(training_images, training_labels, epochs=10)
def predict_image(img):
img_3d=img.reshape(-1,28,28)
im_resize=img_3d/255.0
prediction=model.predict(im_resize)
pred=np.argmax(prediction)
return pred
iface = gr.Interface(predict_image, inputs="sketchpad", outputs="label")
iface.launch(debug='True')
###Output
Colab notebook detected. This cell will run indefinitely so that you can see errors and logs. To turn off, set debug=False in launch().
Running on public URL: https://45827.gradio.app
This share link expires in 72 hours. For free permanent hosting, check out Spaces (https://huggingface.co/spaces)
|
Advanced_Interval_Plotting.ipynb | ###Markdown
Advanced Interval Plotting

Author(s): Paul Miles | Date Created: July 18, 2019

For the purpose of this example we will consider the Monod model demonstrated [here](Monod.ipynb).
###Code
import numpy as np
import scipy.optimize
from pymcmcstat.MCMC import MCMC
import matplotlib.pyplot as plt
import pymcmcstat
print(pymcmcstat.__version__)
def model(q, data):
x = data.xdata[0]
a, b = q
y = a*x/(b + x)
return y.reshape(y.size,)
def ssfun(q, data):
yd = data.ydata[0]
ym = model(q, data).reshape(yd.shape)
return ((yd - ym)**2).sum()
from pymcmcstat.MCMC import DataStructure
data = DataStructure()
# data structure
x = np.array([28, 55, 83, 110, 138, 225, 375]) # (mg / L COD)
y = np.array([0.053, 0.060, 0.112, 0.105, 0.099, 0.122, 0.125]) # (1 / h)
data.add_data_set(x, y)
# Calculate initial covariance matrix
def residuals(q):
yd = data.ydata[0]
ym = model(q, data)
res = yd - ym.reshape(yd.shape)
return res.reshape(res.size, )
ls0 = scipy.optimize.least_squares(residuals, [0.15, 100],
verbose=2, max_nfev=100)
theta0 = ls0.x
n = data.n[0] # number of data points in model
p = len(theta0) # number of model parameters (dof)
ssmin = ssfun(theta0, data) # calculate the sum-of-squares error
mse = ssmin/(n-p) # estimate for the error variance
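# Analytical Jacobian of the Monod model y = a*x/(b + x) with respect to (a, b),
# evaluated at theta0: dy/da = x/(b + x), dy/db = -a*x/(b + x)**2.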
J = np.array([[x/(theta0[1]+x)], [-theta0[0]*x/(theta0[1]+x)**2]])
J = J.transpose()
J = J.reshape(n, p)
tcov = np.linalg.inv(np.dot(J.transpose(), J))*mse
# Initialize MCMC object
mcstat = MCMC()
mcstat.data = data
# Define model parameters, simulation options, and model settings.
mcstat.parameters.add_model_parameter(
name='$\mu_{max}$',
theta0=theta0[0],
minimum=0)
mcstat.parameters.add_model_parameter(
name='$K_x$',
theta0=theta0[1],
minimum=0)
mcstat.simulation_options.define_simulation_options(
nsimu=int(5.0e3),
updatesigma=True,
qcov=tcov)
mcstat.model_settings.define_model_settings(
sos_function=ssfun,
sigma2=0.01**2)
# Run simulation
mcstat.run_simulation()
# Extract results and print statistics
results = mcstat.simulation_results.results
names = results['names']
chain = results['chain']
s2chain = results['s2chain']
names = results['names'] # parameter names
mcstat.chainstats(chain, results)
###Output
Sampling these parameters:
name start [ min, max] N( mu, sigma^2)
$\mu_{max}$: 0.15 [ 0.00e+00, inf] N( 0.00e+00, inf)
$K_x$: 49.05 [ 0.00e+00, inf] N( 0.00e+00, inf)
[-----------------100%-----------------] 5000 of 5000 complete in 1.0 sec
------------------------------
name : mean std MC_err tau geweke
$\mu_{max}$: 0.1544 0.0220 0.0008 10.8093 0.9298
$K_x$ : 61.7267 29.0816 1.2178 15.6448 0.7330
------------------------------
==============================
Acceptance rate information
---------------
Results dictionary:
Stage 1: 32.62%
Stage 2: 49.36%
Net : 81.98% -> 4099/5000
---------------
Chain provided:
Net : 81.98% -> 4099/5000
---------------
Note, the net acceptance rate from the results dictionary
may be different if you only provided a subset of the chain,
e.g., removed the first part for burnin-in.
------------------------------
###Markdown
Plot Credible/Prediction Intervals

Define the function for generating intervals, set up the calculations, and generate the intervals.
###Code
from pymcmcstat.propagation import calculate_intervals
intervals = calculate_intervals(chain, results, data, model,
s2chain=s2chain, nsample=500, waitbar=True)
def format_plot():
plt.xlabel('x (mg/L COD)', fontsize=20)
plt.xticks(fontsize=20)
plt.ylabel('y (1/h)', fontsize=20)
plt.yticks(fontsize=20)
plt.title('Predictive envelopes of the model', fontsize=20);
###Output
[-----------------100%-----------------] 500 of 500 complete in 0.0 sec
###Markdown
Plotting

Required inputs:
- `intervals`: Output from `calculate_intervals`
- `time`: Independent x-axis values

Available inputs (defaults in parentheses):
- `ydata`: Observations, expect 1-D array if defined. (`None`)
- `xdata`: Independent values corresponding to observations. This is required if the observations do not align with your times of generating the model response. (`None`)
- `limits`: Quantile limits that correspond to percentage size of desired intervals. Note, this is the default limits, but specific limits can be defined using the ciset and piset dictionaries.
- `adddata`: Flag to include data. (`False`; if `ydata` is not `None`, then `True`)
- `addmodel`: Flag to include median model response. (`True`)
- `addlegend`: Flag to include legend. (`True`)
- `addcredible`: Flag to include credible intervals. (`True`)
- `addprediction`: Flag to include prediction intervals. (`True`)
- `fig`: Handle of previously created figure object. (`None`)
- `figsize`: (width, height) in inches. (`None`)
- `legloc`: Legend location - matplotlib help for details. (`'upper left'`)
- `ciset`: Settings for credible intervals. (`None` - see below)
- `piset`: Settings for prediction intervals. (`None` - see below)
- `return_settings`: Flag to return ciset and piset along with fig and ax. (`False`)
- `model_display`: Model display settings. (See below)
- `data_display`: Data display settings. (See below)
- `interval_display`: Interval display settings. (See below)

Default general display options:
- `interval_display = {'linestyle': ':', 'linewidth': 1, 'alpha': 0.5, 'edgecolor': 'k'}`
- `model_display = {'linestyle': '-', 'marker': '', 'color': 'r', 'linewidth': 2, 'markersize': 5, 'label': 'model', 'alpha': 1.0}`
- `data_display = {'linestyle': '', 'marker': '.', 'color': 'b', 'linewidth': 1, 'markersize': 5, 'label': 'data', 'alpha': 1.0}`

Display options specific to credible and prediction intervals:
- `limits`: This should be a list of numbers between 0 and 100, e.g., limits=[50, 90] will result in 50% and 90% intervals.
- `cmap`: The program is designed to “try” to choose colors that are visually distinct. The user can specify the colormap to choose from.
- `colors`: The user can specify the color they would like for each interval in a list, e.g., [‘r’, ‘g’, ‘b’]. This list should have the same number of elements as limits or the code will revert back to its default behavior.

Case 1: Use default settings
###Code
from pymcmcstat.propagation import plot_intervals
plot_intervals(intervals, data.xdata[0])
format_plot()
###Output
_____no_output_____
###Markdown
Case 2: Include Data and Adjust Appearance
###Code
data_display = dict(
marker='o',
color='k',
markersize=10)
plot_intervals(intervals, data.xdata[0], data.ydata[0],
data_display=data_display, adddata=True)
format_plot()
###Output
_____no_output_____
###Markdown
Case 3: Adjust Appearance of Model
###Code
model_display = dict(
linestyle='-.',
linewidth=3,
color='r',
marker='o',
markersize=10)
plot_intervals(intervals, data.xdata[0], data.ydata[0],
model_display=model_display, adddata=True)
format_plot()
###Output
_____no_output_____
###Markdown
Case 4: Adjust Appearance of Intervals
###Code
interval_display = dict(
linestyle='-',
linewidth=3,
alpha=0.75,
edgecolor='k')
plot_intervals(intervals, data.xdata[0], data.ydata[0],
interval_display=interval_display, adddata=True)
format_plot()
###Output
_____no_output_____
###Markdown
Case 5: Specify Credible Interval Size and Colors

- Turn off prediction intervals
- Specify colors using a colormap or directly
- Adjust legend location
###Code
from matplotlib import cm
ciset = dict(
limits=[50, 90, 95, 99],
cmap=cm.Blues)
f, ax = plot_intervals(intervals, data.xdata[0], data.ydata[0], addprediction=False,
adddata=True, ciset=ciset)
format_plot()
f.tight_layout()
ciset = dict(
limits=[50, 90, 95, 99],
cmap=cm.Blues,
colors=['r', 'g', 'b', 'y'])
f, ax = plot_intervals(intervals, data.xdata[0], data.ydata[0], addprediction=False,
adddata=True, ciset=ciset)
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5),
fancybox=True, shadow=True, ncol=1)
format_plot()
f.tight_layout()
###Output
_____no_output_____ |
workflow/feature_engineering/active_site_angles/AS_angles.ipynb | ###Markdown
Calculating the octahedral volume and other geometric quantities

---

Import Modules
###Code
import os
print(os.getcwd())
import sys
import time; ti = time.time()
import copy
import numpy as np
import pandas as pd
import math
# #########################################################
from misc_modules.pandas_methods import reorder_df_columns
# #########################################################
from proj_data import metal_atom_symbol
metal_atom_symbol_i = metal_atom_symbol
from methods import (
get_df_jobs_anal,
get_df_atoms_sorted_ind,
get_df_active_sites,
get_df_coord,
)
from methods import unit_vector, angle_between
from methods import get_df_coord, get_df_coord_wrap
# #########################################################
from local_methods import get_angle_between_surf_normal_and_O_Ir
from methods import isnotebook
isnotebook_i = isnotebook()
if isnotebook_i:
from tqdm.notebook import tqdm
verbose = True
else:
from tqdm import tqdm
verbose = False
###Output
_____no_output_____
###Markdown
Read Data
###Code
df_jobs_anal = get_df_jobs_anal()
df_jobs_anal_i = df_jobs_anal
df_atoms_sorted_ind = get_df_atoms_sorted_ind()
df_active_sites = get_df_active_sites()
###Output
_____no_output_____
###Markdown
Filtering down to `oer_adsorbate` jobs
###Code
df_ind = df_jobs_anal.index.to_frame()
df_jobs_anal = df_jobs_anal.loc[
df_ind[df_ind.job_type == "oer_adsorbate"].index
]
df_jobs_anal = df_jobs_anal.droplevel(level=0)
df_ind = df_atoms_sorted_ind.index.to_frame()
df_atoms_sorted_ind = df_atoms_sorted_ind.loc[
df_ind[df_ind.job_type == "oer_adsorbate"].index
]
df_atoms_sorted_ind = df_atoms_sorted_ind.droplevel(level=0)
sys.path.insert(0,
os.path.join(
os.environ["PROJ_irox_oer"],
"workflow/feature_engineering"))
from feature_engineering_methods import get_df_feat_rows
df_feat_rows = get_df_feat_rows(
df_jobs_anal=df_jobs_anal,
df_atoms_sorted_ind=df_atoms_sorted_ind,
df_active_sites=df_active_sites,
)
###Output
_____no_output_____
###Markdown
###Code
# #########################################################
data_dict_list = []
# #########################################################
iterator = tqdm(df_feat_rows.index, desc="1st loop")
for i_cnt, index_i in enumerate(iterator):
# #####################################################
row_i = df_feat_rows.loc[index_i]
# #####################################################
compenv_i = row_i.compenv
slab_id_i = row_i.slab_id
ads_i = row_i.ads
active_site_orig_i = row_i.active_site_orig
att_num_i = row_i.att_num
job_id_max_i = row_i.job_id_max
active_site_i = row_i.active_site
# #####################################################
if active_site_orig_i == "NaN":
from_oh_i = False
else:
from_oh_i = True
name_i = (
row_i.compenv, row_i.slab_id, row_i.ads,
row_i.active_site_orig, row_i.att_num,
)
# #####################################################
row_atoms_i = df_atoms_sorted_ind.loc[name_i]
# #####################################################
atoms_i = row_atoms_i.atoms_sorted_good
# #####################################################
# atoms_i.write("out_data/atoms.traj")
df_coord_i = get_df_coord_wrap(name_i, active_site_i)
angle_i = get_angle_between_surf_normal_and_O_Ir(
atoms=atoms_i,
df_coord=df_coord_i,
active_site=active_site_i,
)
# #####################################################
data_dict_i = dict()
# #####################################################
data_dict_i["job_id_max"] = job_id_max_i
data_dict_i["from_oh"] = from_oh_i
data_dict_i["active_site"] = active_site_i
data_dict_i["compenv"] = compenv_i
data_dict_i["slab_id"] = slab_id_i
data_dict_i["ads"] = ads_i
data_dict_i["active_site_orig"] = active_site_orig_i
data_dict_i["att_num"] = att_num_i
data_dict_i["angle_O_Ir_surf_norm"] = angle_i
# #####################################################
data_dict_list.append(data_dict_i)
# #####################################################
# #########################################################
df_angles = pd.DataFrame(data_dict_list)
col_order_list = ["compenv", "slab_id", "ads", "active_site", "att_num"]
df_angles = reorder_df_columns(col_order_list, df_angles)
# #########################################################
df_angles = df_angles.set_index(
# ["compenv", "slab_id", "ads", "active_site", "att_num", ],
["compenv", "slab_id", "ads", "active_site", "att_num", "from_oh"],
drop=False)
df = df_angles
multi_columns_dict = {
"features": ["angle_O_Ir_surf_norm", ],
"data": ["from_oh", "compenv", "slab_id", "ads", "att_num", "active_site", "job_id_max", ],
# "features": ["eff_oxid_state", ],
# "data": ["job_id_max", "from_oh", "compenv", "slab_id", "ads", "att_num", ]
}
nested_columns = dict()
for col_header, cols in multi_columns_dict.items():
for col_j in cols:
nested_columns[col_j] = (col_header, col_j)
df = df.rename(columns=nested_columns)
df.columns = [c if isinstance(c, tuple) else ("", c) for c in df.columns]
df.columns = pd.MultiIndex.from_tuples(df.columns)
df_angles = df
df_angles = df_angles.reindex(columns = ["data", "features", ], level=0)
###Output
_____no_output_____
###Markdown
###Code
root_path_i = os.path.join(
os.environ["PROJ_irox_oer"],
"workflow/feature_engineering/active_site_angles")
# Pickling data ###########################################
import os; import pickle
directory = os.path.join(root_path_i, "out_data")
if not os.path.exists(directory): os.makedirs(directory)
path_i = os.path.join(root_path_i, "out_data/df_AS_angles.pickle")
with open(path_i, "wb") as fle:
pickle.dump(df_angles, fle)
# #########################################################
from methods import get_df_angles
df_angles_tmp = get_df_angles()
df_angles_tmp
# #########################################################
print(20 * "# # ")
print("All done!")
print("Run time:", np.round((time.time() - ti) / 60, 3), "min")
print("AS_angles.ipynb")
print(20 * "# # ")
# #########################################################
###Output
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
All done!
Run time: 0.43 min
AS_angles.ipynb
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
###Markdown
###Code
# df_feat_rows = df_feat_rows.sample(n=10)
# df = df_feat_rows
# df = df[
# (df["compenv"] == "sherlock") &
# (df["slab_id"] == "lufinanu_76") &
# (df["ads"] == "oh") &
# (df["active_site"] == 46.) &
# (df["att_num"] == 0) &
# [True for i in range(len(df))]
# ]
# df_feat_rows = df
# df_feat_rows = df_feat_rows.loc[[574]]
# False 46.0 sherlock lufinanu_76 o 1 bobatudi_54
# df_feat_rows
# df_angles.head()
# df_angles
###Output
_____no_output_____ |
code/model-4.ipynb | ###Markdown
Data Pre-Processing
###Code
from tensorflow.keras.utils import to_categorical
from sklearn.preprocessing import LabelEncoder
# Step 1: Label-encode data set
label_encoder = LabelEncoder()
label_encoder.fit(y_train)
encoded_y_train = label_encoder.transform(y_train)
encoded_y_test = label_encoder.transform(y_test)
# Step 2: Convert encoded labels to one-hot-encoding
y_train_categorical = to_categorical(encoded_y_train)
y_test_categorical = to_categorical(encoded_y_test)
###Output
_____no_output_____
###Markdown
Create a Deep Learning Model
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
# Create a Neural Network model and add layers
model_nn = Sequential()
number_inputs = 20
number_hidden_nodes = 3
model_nn.add(Dense(units=number_hidden_nodes,
activation='relu', input_dim=number_inputs))
number_classes = 3
model_nn.add(Dense(units=number_classes, activation='softmax'))
model_nn.summary()
# This callback will stop the training when there is no improvement in
# the monitored training loss for five consecutive epochs
# (monitor='loss', patience=5).
callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=5)
# Compile and fit the model
model_nn.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
# Fit (train) the model
model_nn.fit(
X_train_scaled,
y_train_categorical,
epochs=1000,
shuffle=True,
callbacks=[callback], # EarlyStopping
verbose=1
)
###Output
Epoch 1/1000
175/175 [==============================] - 0s 674us/step - loss: 0.9067 - accuracy: 0.5426
Epoch 2/1000
175/175 [==============================] - 0s 715us/step - loss: 0.7125 - accuracy: 0.7010
Epoch 3/1000
175/175 [==============================] - 0s 691us/step - loss: 0.6079 - accuracy: 0.7378
Epoch 4/1000
175/175 [==============================] - 0s 691us/step - loss: 0.5286 - accuracy: 0.7611
Epoch 5/1000
175/175 [==============================] - 0s 657us/step - loss: 0.4752 - accuracy: 0.7777
Epoch 6/1000
175/175 [==============================] - 0s 708us/step - loss: 0.4405 - accuracy: 0.7825
Epoch 7/1000
175/175 [==============================] - 0s 720us/step - loss: 0.4188 - accuracy: 0.7870
Epoch 8/1000
175/175 [==============================] - 0s 668us/step - loss: 0.4055 - accuracy: 0.7897
Epoch 9/1000
175/175 [==============================] - 0s 674us/step - loss: 0.3974 - accuracy: 0.7933
Epoch 10/1000
175/175 [==============================] - 0s 668us/step - loss: 0.3917 - accuracy: 0.7949
Epoch 11/1000
175/175 [==============================] - 0s 657us/step - loss: 0.3875 - accuracy: 0.7974
Epoch 12/1000
175/175 [==============================] - 0s 662us/step - loss: 0.3842 - accuracy: 0.7976
Epoch 13/1000
175/175 [==============================] - 0s 685us/step - loss: 0.3812 - accuracy: 0.7992
Epoch 14/1000
175/175 [==============================] - 0s 668us/step - loss: 0.3788 - accuracy: 0.7976
Epoch 15/1000
175/175 [==============================] - 0s 691us/step - loss: 0.3770 - accuracy: 0.8001
Epoch 16/1000
175/175 [==============================] - 0s 662us/step - loss: 0.3752 - accuracy: 0.8010
Epoch 17/1000
175/175 [==============================] - 0s 668us/step - loss: 0.3737 - accuracy: 0.8017
Epoch 18/1000
175/175 [==============================] - 0s 674us/step - loss: 0.3724 - accuracy: 0.8024
Epoch 19/1000
175/175 [==============================] - 0s 680us/step - loss: 0.3715 - accuracy: 0.8019
Epoch 20/1000
175/175 [==============================] - 0s 668us/step - loss: 0.3703 - accuracy: 0.8033
Epoch 21/1000
175/175 [==============================] - 0s 680us/step - loss: 0.3692 - accuracy: 0.8069
Epoch 22/1000
175/175 [==============================] - 0s 708us/step - loss: 0.3680 - accuracy: 0.8078
Epoch 23/1000
175/175 [==============================] - 0s 651us/step - loss: 0.3671 - accuracy: 0.8070
Epoch 24/1000
175/175 [==============================] - 0s 680us/step - loss: 0.3659 - accuracy: 0.8137
Epoch 25/1000
175/175 [==============================] - 0s 668us/step - loss: 0.3644 - accuracy: 0.8144
Epoch 26/1000
175/175 [==============================] - 0s 662us/step - loss: 0.3635 - accuracy: 0.8160
Epoch 27/1000
175/175 [==============================] - 0s 657us/step - loss: 0.3623 - accuracy: 0.8156
Epoch 28/1000
175/175 [==============================] - 0s 691us/step - loss: 0.3615 - accuracy: 0.8196
Epoch 29/1000
175/175 [==============================] - 0s 680us/step - loss: 0.3606 - accuracy: 0.8178
Epoch 30/1000
175/175 [==============================] - 0s 668us/step - loss: 0.3599 - accuracy: 0.8192
Epoch 31/1000
175/175 [==============================] - 0s 668us/step - loss: 0.3593 - accuracy: 0.8190
Epoch 32/1000
175/175 [==============================] - 0s 685us/step - loss: 0.3584 - accuracy: 0.8201
Epoch 33/1000
175/175 [==============================] - 0s 674us/step - loss: 0.3579 - accuracy: 0.8215
Epoch 34/1000
175/175 [==============================] - 0s 668us/step - loss: 0.3572 - accuracy: 0.8197
Epoch 35/1000
175/175 [==============================] - 0s 674us/step - loss: 0.3567 - accuracy: 0.8214
Epoch 36/1000
175/175 [==============================] - 0s 662us/step - loss: 0.3564 - accuracy: 0.8212
Epoch 37/1000
175/175 [==============================] - 0s 702us/step - loss: 0.3557 - accuracy: 0.8237
Epoch 38/1000
175/175 [==============================] - 0s 691us/step - loss: 0.3551 - accuracy: 0.8235
Epoch 39/1000
175/175 [==============================] - 0s 668us/step - loss: 0.3547 - accuracy: 0.8235
Epoch 40/1000
175/175 [==============================] - 0s 680us/step - loss: 0.3543 - accuracy: 0.8235
Epoch 41/1000
175/175 [==============================] - 0s 662us/step - loss: 0.3539 - accuracy: 0.8262
Epoch 42/1000
175/175 [==============================] - 0s 651us/step - loss: 0.3534 - accuracy: 0.8240
Epoch 43/1000
175/175 [==============================] - 0s 668us/step - loss: 0.3531 - accuracy: 0.8240
Epoch 44/1000
175/175 [==============================] - 0s 628us/step - loss: 0.3526 - accuracy: 0.8253
Epoch 45/1000
175/175 [==============================] - 0s 674us/step - loss: 0.3523 - accuracy: 0.8233
Epoch 46/1000
175/175 [==============================] - 0s 668us/step - loss: 0.3519 - accuracy: 0.8237
Epoch 47/1000
175/175 [==============================] - 0s 674us/step - loss: 0.3516 - accuracy: 0.8274
Epoch 48/1000
175/175 [==============================] - 0s 685us/step - loss: 0.3513 - accuracy: 0.8273
Epoch 49/1000
175/175 [==============================] - 0s 685us/step - loss: 0.3513 - accuracy: 0.8244
Epoch 50/1000
175/175 [==============================] - 0s 680us/step - loss: 0.3508 - accuracy: 0.8253
Epoch 51/1000
175/175 [==============================] - 0s 674us/step - loss: 0.3504 - accuracy: 0.8264
Epoch 52/1000
175/175 [==============================] - 0s 668us/step - loss: 0.3502 - accuracy: 0.8264
Epoch 53/1000
175/175 [==============================] - 0s 697us/step - loss: 0.3500 - accuracy: 0.8246
Epoch 54/1000
175/175 [==============================] - 0s 651us/step - loss: 0.3500 - accuracy: 0.8265
Epoch 55/1000
175/175 [==============================] - 0s 634us/step - loss: 0.3494 - accuracy: 0.8267
Epoch 56/1000
175/175 [==============================] - 0s 668us/step - loss: 0.3493 - accuracy: 0.8269
Epoch 57/1000
175/175 [==============================] - 0s 674us/step - loss: 0.3492 - accuracy: 0.8280
Epoch 58/1000
175/175 [==============================] - 0s 668us/step - loss: 0.3488 - accuracy: 0.8292
Epoch 59/1000
175/175 [==============================] - 0s 663us/step - loss: 0.3484 - accuracy: 0.8307
Epoch 60/1000
175/175 [==============================] - 0s 657us/step - loss: 0.3483 - accuracy: 0.8285
Epoch 61/1000
175/175 [==============================] - 0s 662us/step - loss: 0.3481 - accuracy: 0.8310
Epoch 62/1000
175/175 [==============================] - 0s 662us/step - loss: 0.3480 - accuracy: 0.8301
Epoch 63/1000
175/175 [==============================] - 0s 657us/step - loss: 0.3475 - accuracy: 0.8310
Epoch 64/1000
175/175 [==============================] - 0s 668us/step - loss: 0.3472 - accuracy: 0.8319
Epoch 65/1000
175/175 [==============================] - 0s 720us/step - loss: 0.3473 - accuracy: 0.8299
Epoch 66/1000
175/175 [==============================] - 0s 657us/step - loss: 0.3469 - accuracy: 0.8287
Epoch 67/1000
175/175 [==============================] - 0s 662us/step - loss: 0.3469 - accuracy: 0.8308
Epoch 68/1000
175/175 [==============================] - 0s 651us/step - loss: 0.3465 - accuracy: 0.8310
Epoch 69/1000
175/175 [==============================] - 0s 617us/step - loss: 0.3462 - accuracy: 0.8314
Epoch 70/1000
175/175 [==============================] - 0s 640us/step - loss: 0.3462 - accuracy: 0.8314
Epoch 71/1000
175/175 [==============================] - 0s 662us/step - loss: 0.3460 - accuracy: 0.8308
Epoch 72/1000
175/175 [==============================] - 0s 674us/step - loss: 0.3458 - accuracy: 0.8321
Epoch 73/1000
175/175 [==============================] - 0s 634us/step - loss: 0.3457 - accuracy: 0.8296
Epoch 74/1000
175/175 [==============================] - 0s 640us/step - loss: 0.3454 - accuracy: 0.8321
Epoch 75/1000
175/175 [==============================] - 0s 628us/step - loss: 0.3453 - accuracy: 0.8314
Epoch 76/1000
175/175 [==============================] - 0s 628us/step - loss: 0.3452 - accuracy: 0.8323
Epoch 77/1000
175/175 [==============================] - 0s 611us/step - loss: 0.3447 - accuracy: 0.8323
Epoch 78/1000
175/175 [==============================] - 0s 640us/step - loss: 0.3450 - accuracy: 0.8328
Epoch 79/1000
###Markdown
Quantify the Trained Model
###Code
model_loss, model_accuracy = model_nn.evaluate(X_test_scaled, y_test_categorical, verbose=2)
print(f"Normal Neural Network - Loss: {model_loss}, Accuracy: {model_accuracy}")
###Output
44/44 - 0s - loss: 0.3698 - accuracy: 0.8192
Normal Neural Network - Loss: 0.3698229193687439, Accuracy: 0.8191565275192261
###Markdown
Based on the above, I can conclude that Random Forest, which gives 89% accuracy, is the best model for solving the problem of Exoplanet classification.

Save the Model
###Code
# save trained model using the HDF5 binary format
model_nn.save("model_deep.h5")
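# Usage sketch (an assumption added for illustration, not in the original notebook):
# the saved HDF5 file can later be reloaded for inference.
reloaded_model = tf.keras.models.load_model("model_deep.h5")
reloaded_model.predict(X_test_scaled[:5])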
###Output
_____no_output_____ |
02 - Memory layout.ipynb | ###Markdown
Memory layout

`shape` and `strides`

A NumPy array is just a memory block with extra information on how to interpret its contents. Since memory has only a linear address space, NumPy arrays need extra information on how to lay out this block into multiple dimensions. This is done by means of the `shape` and `strides` attributes:
###Code
import numpy as np
a = np.arange(8, dtype=np.uint8)
a
a.shape, a.strides
b = a.reshape(2, 4)
b
b.shape, b.strides
###Output
_____no_output_____
###Markdown
 Note that strides are in bytes:
###Code
a_long = np.arange(8, dtype=np.int32).reshape(2,4)
a_long.strides
###Output
_____no_output_____
###Markdown
To obtain the number of bytes taken by a single element, use the `itemsize` property:
###Code
a_long.itemsize
###Output
_____no_output_____
###Markdown
Exercise: Transpose

Create a 3x4 random array. Compare the `shape` and `strides` properties of `x` and `x.T`. How can you explain the new strides?

Exercise: Broadcasting revisited

Explain how broadcasting works internally using the example below. What will be the `shape` and `strides` of `x` and `y` after broadcasting? Test it using `np.broadcast_arrays` in the following example and look at the `strides` and `shape` properties of both arrays.

```python
x = np.random.rand(5, 10)
y = np.random.rand(10)
z = x + y
xb, yb = np.broadcast_arrays(x, y)
```

Manipulating strides

We may use the `as_strided` function from the NumPy library module to manipulate the `shape` and `strides` properties (**Warning**: `as_strided` does not check if you remain in the memory bounds of the original array, so use it with care!) For example, we may define overlapping strides:
###Code
a = np.arange(8, dtype=np.uint8)
a2 = np.lib.stride_tricks.as_strided(a, strides=(2, 1), shape=(3,4))
a2
###Output
_____no_output_____
###Markdown
Note that in this example some values appear twice, but they do not consume extra memory -- both arrays share the same memory block:
###Code
a[2] = 100
a
a2
###Output
_____no_output_____
###Markdown
Exercise: Sliding window

*Modified Exercise by Stéfan van der Walt and Juan Nunez-Iglesias*

Use `as_strided` to produce a sliding-window view of a 1D array.

```python
def sliding_window(arr, size=2):
    """Produce an array of sliding window views of `arr`

    Parameters
    ----------
    arr : 1D array, shape (N,)
        The input array.
    size : int, optional
        The size of the sliding window.

    Returns
    -------
    arr_slide : 2D array, shape (N - size + 1, size)
        The sliding windows of size `size` of `arr`.

    Examples
    --------
    >>> a = np.array([0, 1, 2, 3])
    >>> sliding_window(a, 2)
    array([[0, 1],
           [1, 2],
           [2, 3]])
    """
    return arr  # fix this
```

Fortran and C-ordering
###Code
c_order = np.arange(6).reshape(2, 3)
c_order
c_order.strides
###Output
_____no_output_____
###Markdown
Let's see how the array is stored in memory.
###Code
np.ravel(c_order, order='A')
###Output
_____no_output_____
###Markdown
In C, the last index changes most rapidly as one moves through the array as stored in memory.
###Code
a[0]
fortran_order = np.array(c_order, order='F')
fortran_order
fortran_order.strides
np.ravel(fortran_order, order='A')
###Output
_____no_output_____ |
projects/customer_seg/customer_segments.ipynb | ###Markdown
Machine Learning Engineer Nanodegree Unsupervised Learning Project: Creating Customer Segments Welcome to the third project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with **'Implementation'** in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `'TODO'` statement. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. >**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode. Getting StartedIn this project, you will analyze a dataset containing data on various customers' annual spending amounts (reported in *monetary units*) of diverse product categories for internal structure. One goal of this project is to best describe the variation in the different types of customers that a wholesale distributor interacts with. Doing so would equip the distributor with insight into how to best structure their delivery service to meet the needs of each customer.The dataset for this project can be found on the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Wholesale+customers). For the purposes of this project, the features `'Channel'` and `'Region'` will be excluded in the analysis — with focus instead on the six product categories recorded for customers.Run the code block below to load the wholesale customers dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.
###Code
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the wholesale customers dataset
try:
data = pd.read_csv("customers.csv")
data.drop(['Region', 'Channel'], axis = 1, inplace = True)
print("Wholesale customers dataset has {} samples with {} features each.".format(*data.shape))
except:
print("Dataset could not be loaded. Is the dataset missing?")
###Output
Wholesale customers dataset has 440 samples with 6 features each.
###Markdown
Data Exploration

In this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project.

Run the code block below to observe a statistical description of the dataset. Note that the dataset is composed of six important product categories: **'Fresh'**, **'Milk'**, **'Grocery'**, **'Frozen'**, **'Detergents_Paper'**, and **'Delicatessen'**. Consider what each category represents in terms of products you could purchase.
###Code
# Display a description of the dataset
display(data.describe())
###Output
_____no_output_____
###Markdown
Implementation: Selecting Samples

To get a better understanding of the customers and how their data will transform through the analysis, it would be best to select a few sample data points and explore them in more detail. In the code block below, add **three** indices of your choice to the `indices` list which will represent the customers to track. It is suggested to try different sets of samples until you obtain customers that vary significantly from one another.
###Code
# TODO: Select three indices of your choice you wish to sample from the dataset
indices = [4,103,301]
# Create a DataFrame of the chosen samples
samples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True)
print("Chosen samples of wholesale customers dataset:")
display(samples)
###Output
Chosen samples of wholesale customers dataset:
###Markdown
Question 1

Consider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers.

* What kind of establishment (customer) could each of the three samples you've chosen represent?

**Hint:** Examples of establishments include places like markets, cafes, delis, wholesale retailers, among many others. Avoid using names for establishments, such as saying *"McDonalds"* when describing a sample customer as a restaurant. You can use the mean values for reference to compare your samples with. The mean values are as follows:

* Fresh: 12000.2977
* Milk: 5796.2
* Grocery: 3071.9
* Detergents_paper: 2881.4
* Delicatessen: 1524.8

Knowing this, how do your samples compare? Does that help in driving your insight into what kind of establishments they might be?

**Answer:**

Customer 4: my first choice is customer 4, which has a very high Fresh consumption (greater than the 75th percentile); Delicatessen is also higher than the 75th percentile, and Detergents_Paper is less than the mean. This really looks like a deli, with a lot of fresh products and delicatessen items.

Customer 103: this customer buys a lot of fresh products but also more than the 75th percentile of frozen products, yet not much Milk or Grocery. I think this is a restaurant or bar.

Customer 301: a lot of Grocery and Milk; this looks like a grocery store, someone who resells products, so a retailer of some kind.

Implementation: Feature Relevance

One interesting thought to consider is if one (or more) of the six product categories is actually relevant for understanding customer purchasing. That is to say, is it possible to determine whether customers purchasing some amount of one category of products will necessarily purchase some proportional amount of another category of products? We can make this determination quite easily by training a supervised regression learner on a subset of the data with one feature removed, and then scoring how well that model can predict the removed feature.

In the code block below, you will need to implement the following:
- Assign `new_data` a copy of the data by removing a feature of your choice using the `DataFrame.drop` function.
- Use `sklearn.model_selection.train_test_split` to split the dataset into training and testing sets.
  - Use the removed feature as your target label. Set a `test_size` of `0.25` and set a `random_state`.
- Import a decision tree regressor, set a `random_state`, and fit the learner to the training data.
- Report the prediction score of the testing set using the regressor's `score` function.
###Code
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
# TODO: Make a copy of the DataFrame, using the 'drop' function to drop the given feature
column = 'Detergents_Paper'
new_data = data.drop([column], axis=1)
# TODO: Split the data into training and testing sets(0.25) using the given feature as the target
# Set a random state.
X_train, X_test, y_train, y_test = train_test_split(new_data, data[column], test_size=0.25, random_state=44)
# TODO: Create a decision tree regressor and fit it to the training set
regressor = DecisionTreeRegressor(random_state=3)
fit = regressor.fit(X_train, y_train)
# TODO: Report the score of the prediction using the testing set
score = fit.score(X_test, y_test)
score
###Output
_____no_output_____
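###Markdown
Question 2 below asks whether the removed feature is necessary. A quick sketch that repeats the same experiment for every feature (reusing the split settings from the cell above) makes it easier to compare how predictable each one is:
###Code
# R^2 for predicting each feature from the remaining five
# (low or negative scores suggest the feature carries unique information)
for col in data.columns:
    features = data.drop([col], axis=1)
    X_tr, X_te, y_tr, y_te = train_test_split(features, data[col],
                                               test_size=0.25, random_state=44)
    reg = DecisionTreeRegressor(random_state=3).fit(X_tr, y_tr)
    print("{:<18s} R^2 = {:+.3f}".format(col, reg.score(X_te, y_te)))
###Output
_____no_output_____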
###Markdown
Question 2* Which feature did you attempt to predict? * What was the reported prediction score? * Is this feature necessary for identifying customers' spending habits?**Hint:** The coefficient of determination, `R^2`, is scored between 0 and 1, with 1 being a perfect fit. A negative `R^2` implies the model fails to fit the data. If you get a low score for a particular feature, that leads us to believe that that feature is hard to predict using the other features, thereby making it an important feature to consider when considering relevance. **Answer:** I attempted to predict the feature 'Detergents_Paper'. The reported score was 0.70970466569757373, which seems like a good fit. This means that this feature can be predicted rather well from the remaining features, and therefore it could be dropped without hurting the identification of spending habits. I also tested other features such as Fresh and Delicatessen. The score for Fresh was -0.7193469288693235, which means that this feature cannot be predicted from the others and is thus important. Grocery also has a high R^2, which makes me believe that Grocery and Detergents_Paper are correlated and that one is a good predictor for the other. **Note:** In my experiments I found that the random_state has a rather big influence; for instance, the Detergents_Paper R^2 drops to around 0.51 for some values of random_state. Visualize Feature DistributionsTo get a better understanding of the dataset, we can construct a scatter matrix of each of the six product features present in the data. If you found that the feature you attempted to predict above is relevant for identifying a specific customer, then the scatter matrix below may not show any correlation between that feature and the others. Conversely, if you believe that feature is not relevant for identifying a specific customer, the scatter matrix might show a correlation between that feature and another feature in the data. Run the code block below to produce a scatter matrix.
###Code
# Produce a scatter matrix for each pair of features in the data
pd.plotting.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
###Output
_____no_output_____
###Markdown
Question 3* Using the scatter matrix as a reference, discuss the distribution of the dataset, specifically talk about the normality, outliers, large number of data points near 0 among others. If you need to sepearate out some of the plots individually to further accentuate your point, you may do so as well.* Are there any pairs of features which exhibit some degree of correlation? * Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? * How is the data for those features distributed?**Hint:** Is the data normally distributed? Where do most of the data points lie? You can use [corr()](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.corr.html) to get the feature correlations and then visualize them using a [heatmap](http://seaborn.pydata.org/generated/seaborn.heatmap.html) (the data that would be fed into the heatmap would be the correlation values, for eg: `data.corr()`) to gain further insight.
###Code
import seaborn as sns
sns.heatmap(data.corr())
###Output
_____no_output_____
###Markdown
**Answer:** What we can see from the heatmap, and also from the scatter plots, is that there seems to be a strong correlation between Grocery and Detergents_Paper; there is also some correlation between Milk and Detergents_Paper and between Milk and Grocery, so the three of them appear to be correlated, although not as strongly as Grocery and Detergents_Paper. This confirms our intuition that, for instance, Detergents_Paper can be dropped, because Grocery and Milk are still good predictors for this feature. What I also see from the scatter plot is that Delicatessen has a lot of points around 0, so this will need some fixing, because right now all the data points are squashed into a small region. Another thing I find interesting in the scatter plots is that the data for the correlated features seems to be skewed towards the left. All of the features appear to have outliers. Data PreprocessingIn this section, you will preprocess the data to create a better representation of customers by performing a scaling on the data and detecting (and optionally removing) outliers. Preprocessing data is often times a critical step in assuring that results you obtain from your analysis are significant and meaningful. Implementation: Feature ScalingIf data is not normally distributed, especially if the mean and median vary significantly (indicating a large skew), it is most [often appropriate](http://econbrowser.com/archives/2014/02/use-of-logarithms-in-economics) to apply a non-linear scaling, particularly for financial data. One way to achieve this scaling is by using a [Box-Cox test](http://scipy.github.io/devdocs/generated/scipy.stats.boxcox.html), which calculates the best power transformation of the data that reduces skewness. A simpler approach which can work in most cases would be applying the natural logarithm.In the code block below, you will need to implement the following: - Assign a copy of the data to `log_data` after applying logarithmic scaling. Use the `np.log` function for this. - Assign a copy of the sample data to `log_samples` after applying logarithmic scaling. Again, use `np.log`.
###Code
# TODO: Scale the data using the natural logarithm
log_data = np.log(data)
# TODO: Scale the sample data using the natural logarithm
log_samples = np.log(samples)
# Produce a scatter matrix for each pair of newly-transformed features
pd.plotting.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
###Output
_____no_output_____
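###Markdown
The Feature Scaling discussion above also mentions the Box-Cox transformation as an alternative to the natural logarithm. A minimal sketch of that idea (an aside, not required by the project; it assumes all values are strictly positive, which holds for this dataset):
###Code
from scipy import stats
# Box-Cox estimates a power transform per feature; a lambda near 0 behaves
# much like the log transform used above
for feature in data.columns:
    transformed, lmbda = stats.boxcox(data[feature])
    print("{:<18s} estimated lambda = {:+.3f}".format(feature, lmbda))
###Output
_____no_output_____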
###Markdown
ObservationAfter applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before).Run the code below to see how the sample data has changed after having the natural logarithm applied to it.
###Code
# Display the log-transformed sample data
display(log_samples)
###Output
_____no_output_____
###Markdown
Implementation: Outlier DetectionDetecting outliers in the data is extremely important in the data preprocessing step of any analysis. The presence of outliers can often skew results which take into consideration these data points. There are many "rules of thumb" for what constitutes an outlier in a dataset. Here, we will use [Tukey's Method for identfying outliers](https://www.kdnuggets.com/2017/01/3-methods-deal-outliers.html): An *outlier step* is calculated as 1.5 times the interquartile range (IQR). A data point with a feature that is beyond an outlier step outside of the IQR for that feature is considered abnormal.In the code block below, you will need to implement the following: - Assign the value of the 25th percentile for the given feature to `Q1`. Use `np.percentile` for this. - Assign the value of the 75th percentile for the given feature to `Q3`. Again, use `np.percentile`. - Assign the calculation of an outlier step for the given feature to `step`. - Optionally remove data points from the dataset by adding indices to the `outliers` list.**NOTE:** If you choose to remove any outliers, ensure that the sample data does not contain any of these points! Once you have performed this implementation, the dataset will be stored in the variable `good_data`.
###Code
from collections import Counter
# For each feature find the data points with extreme high or low values
outliers_multi = []
for feature in log_data.keys():
# TODO: Calculate Q1 (25th percentile of the data) for the given feature
Q1 = np.percentile(log_data[feature],25)
# TODO: Calculate Q3 (75th percentile of the data) for the given feature
Q3 = np.percentile(log_data[feature],75)
# TODO: Use the interquartile range to calculate an outlier step (1.5 times the interquartile range)
step = 1.5*(Q3-Q1)
# Display the outliers
print("Data points considered outliers for the feature '{}':".format(feature))
outliers = log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))]
outliers_multi.extend(outliers.index.values)
display(outliers)
# OPTIONAL: Select the indices for data points you wish to remove
outliers = [outlier for outlier, count in Counter(outliers_multi).items() if count > 1]
print(outliers)
# Remove the outliers, if any were specified
good_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True)
###Output
Data points considered outliers for the feature 'Fresh':
###Markdown
Question 4* Are there any data points considered outliers for more than one feature based on the definition above? * Should these data points be removed from the dataset? * If any data points were added to the `outliers` list to be removed, explain why.** Hint: ** If you have datapoints that are outliers in multiple categories think about why that may be and if they warrant removal. Also note how k-means is affected by outliers and whether or not this plays a factor in your analysis of whether or not to remove them. **Answer:** Yes: as the code shows, we remove the points that are outliers in more than one feature, [65, 66, 128, 154, 75]. With a sample of this size, removing all the univariate outliers would remove too many data points, but we do need to remove the points that are outliers in multiple features because they would influence the cluster finding. So yes, I added every point that appears as an outlier in more than one feature set. Feature TransformationIn this section you will use principal component analysis (PCA) to draw conclusions about the underlying structure of the wholesale customer data. Since using PCA on a dataset calculates the dimensions which best maximize variance, we will find which compound combinations of features best describe customers. Implementation: PCANow that the data has been scaled to a more normal distribution and has had any necessary outliers removed, we can now apply PCA to the `good_data` to discover which dimensions about the data best maximize the variance of features involved. In addition to finding these dimensions, PCA will also report the *explained variance ratio* of each dimension, that is, how much variance within the data is explained by that dimension alone. Note that a component (dimension) from PCA can be considered a new "feature" of the space, however it is a composition of the original features present in the data.In the code block below, you will need to implement the following: - Import `sklearn.decomposition.PCA` and assign the results of fitting PCA in six dimensions with `good_data` to `pca`. - Apply a PCA transformation of `log_samples` using `pca.transform`, and assign the results to `pca_samples`.
###Code
from sklearn.decomposition import PCA
# TODO: Apply PCA by fitting the good data with the same number of dimensions as features
pca = PCA(n_components=6).fit(good_data)
# TODO: Transform log_samples using the PCA fit above
pca_samples = pca.transform(log_samples)
# Generate PCA results plot
pca_results = vs.pca_results(good_data, pca)
###Output
_____no_output_____
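###Markdown
To support the discussion in Question 5 below, a quick sketch that prints the cumulative explained variance of the fitted `pca` object (the same numbers that can be read off the plot above):
###Code
# Cumulative explained variance ratio of the six PCA dimensions
cum_var = np.cumsum(pca.explained_variance_ratio_)
for i, v in enumerate(cum_var, start=1):
    print("First {} dimension(s) explain {:.4f} of the variance".format(i, v))
###Output
_____no_output_____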
###Markdown
Question 5* How much variance in the data is explained* **in total** *by the first and second principal component? * How much variance in the data is explained by the first four principal components? * Using the visualization provided above, talk about each dimension and the cumulative variance explained by each, stressing upon which features are well represented by each dimension(both in terms of positive and negative variance explained). Discuss what the first four dimensions best represent in terms of customer spending.**Hint:** A positive increase in a specific dimension corresponds with an *increase* of the *positive-weighted* features and a *decrease* of the *negative-weighted* features. The rate of increase or decrease is based on the individual feature weights.
###Code
0.4430+0.2638+0.1231+0.1012
###Output
_____no_output_____
###Markdown
**Answer:** The total variance explained by component 1 and component 2 = 0.4430 + 0.2638 = 0.7068, so about 70% of the variance is explained by the first two components. Components 1, 2, 3 and 4 together account for 0.931 of the variance, which is more than 93%. As a note, I think components 1 and 2 separate the type of establishment: are you an establishment like a restaurant or bar, or a retailer like a grocery shop? Component 1: this component has the largest associations with Milk, Detergents_Paper and Grocery; these are products that are needed together in some types of establishments (I will call them resellers/retailers). Component 2: large associations with Fresh, Frozen and Delicatessen; this is effectively the opposite of Component 1 and groups together products bought by different types of establishments, restaurants for instance. Component 3: large positive and negative associations for Fresh, Delicatessen and Frozen, so in my opinion this captures the separation between establishments that need fresh goods and luxury products versus frozen stock; it is about the freshness of their ingredients. Component 4: again, like Component 3, Delicatessen and Frozen; I think this can also say something about the quality of the ingredients used. ObservationRun the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it in six dimensions. Observe the numerical value for the first four dimensions of the sample points. Consider if this is consistent with your initial interpretation of the sample points.
###Code
# Display sample log-data after having a PCA transformation applied
display(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))
###Output
_____no_output_____
###Markdown
Implementation: Dimensionality ReductionWhen using principal component analysis, one of the main goals is to reduce the dimensionality of the data — in effect, reducing the complexity of the problem. Dimensionality reduction comes at a cost: Fewer dimensions used implies less of the total variance in the data is being explained. Because of this, the *cumulative explained variance ratio* is extremely important for knowing how many dimensions are necessary for the problem. Additionally, if a signifiant amount of variance is explained by only two or three dimensions, the reduced data can be visualized afterwards.In the code block below, you will need to implement the following: - Assign the results of fitting PCA in two dimensions with `good_data` to `pca`. - Apply a PCA transformation of `good_data` using `pca.transform`, and assign the results to `reduced_data`. - Apply a PCA transformation of `log_samples` using `pca.transform`, and assign the results to `pca_samples`.
###Code
# TODO: Apply PCA by fitting the good data with only two dimensions
pca = PCA(n_components=2).fit(good_data)
# TODO: Transform the good data using the PCA fit above
reduced_data = pca.transform(good_data)
# TODO: Transform log_samples using the PCA fit above
pca_samples = pca.transform(log_samples)
# Create a DataFrame for the reduced data
reduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2'])
###Output
_____no_output_____
###Markdown
ObservationRun the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remains unchanged when compared to a PCA transformation in six dimensions.
###Code
# Display sample log-data after applying PCA transformation in two dimensions
display(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2']))
###Output
_____no_output_____
###Markdown
Visualizing a BiplotA biplot is a scatterplot where each data point is represented by its scores along the principal components. The axes are the principal components (in this case `Dimension 1` and `Dimension 2`). In addition, the biplot shows the projection of the original features along the components. A biplot can help us interpret the reduced dimensions of the data, and discover relationships between the principal components and original features.Run the code cell below to produce a biplot of the reduced-dimension data.
###Code
# Create a biplot
vs.biplot(good_data, reduced_data, pca)
###Output
_____no_output_____
###Markdown
ObservationOnce we have the original feature projections (in red), it is easier to interpret the relative position of each data point in the scatterplot. For instance, a point the lower right corner of the figure will likely correspond to a customer that spends a lot on `'Milk'`, `'Grocery'` and `'Detergents_Paper'`, but not so much on the other product categories. From the biplot, which of the original features are most strongly correlated with the first component? What about those that are associated with the second component? Do these observations agree with the pca_results plot you obtained earlier? ClusteringIn this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale. Question 6* What are the advantages to using a K-Means clustering algorithm? * What are the advantages to using a Gaussian Mixture Model clustering algorithm? * Given your observations about the wholesale customer data so far, which of the two algorithms will you use and why?** Hint: ** Think about the differences between hard clustering and soft clustering and which would be appropriate for our dataset. **Answer:** K-means**Pro**- K-means is fast even when alot of features (dimensions) are used- K-means produces Tight clusters (If the data is roughly globular)Some negative attributes of k-means could be for instance that the number of cluster needs to be known before hand.Also k-means performance good on globular data, this is not always the case GMM**Pro**- GMM can do soft-clustering, so a point is not part of one or the other cluster but has a factor associated to it.- GMM is good in processing clusters where the underlying data is generated by (a combination) of gaussian distributions.A problem for GMM could be that it falls into a local maximum. It can also fail when the dimensionality is High.Of the two I would use GMM, the dimensionality is not high (and we reduced it with PCA) and I suspect that the underlying proces is normal distributed. Implementation: Creating ClustersDepending on the problem, the number of clusters that you expect to be in the data may already be known. When the number of clusters is not known *a priori*, there is no guarantee that a given number of clusters best segments the data, since it is unclear what structure exists in the data — if any. However, we can quantify the "goodness" of a clustering by calculating each data point's *silhouette coefficient*. The [silhouette coefficient](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.silhouette_score.html) for a data point measures how similar it is to its assigned cluster from -1 (dissimilar) to 1 (similar). Calculating the *mean* silhouette coefficient provides for a simple scoring method of a given clustering.In the code block below, you will need to implement the following: - Fit a clustering algorithm to the `reduced_data` and assign it to `clusterer`. - Predict the cluster for each data point in `reduced_data` using `clusterer.predict` and assign them to `preds`. - Find the cluster centers using the algorithm's respective attribute and assign them to `centers`. - Predict the cluster for each sample data point in `pca_samples` and assign them `sample_preds`. 
- Import `sklearn.metrics.silhouette_score` and calculate the silhouette score of `reduced_data` against `preds`. - Assign the silhouette score to `score` and print the result.
###Code
# TODO: Apply your clustering algorithm of choice to the reduced data
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score
clusterer = GaussianMixture(n_components=2, random_state=43)
gmm = clusterer.fit(reduced_data)
# TODO: Predict the cluster for each data point
preds = gmm.predict(reduced_data)
# TODO: Find the cluster centers
centers = gmm.means_
# TODO: Predict the cluster for each transformed sample data point
sample_preds = gmm.predict(pca_samples)
# TODO: Calculate the mean silhouette coefficient for the number of clusters chosen
score = silhouette_score(reduced_data, preds)
print("The score is: {}".format(score))
###Output
The score is: 0.4223246826459388
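###Markdown
Question 7 below reports scores for several cluster counts. A small sketch that loops over the number of components (reusing the imports from the cell above) reproduces that comparison in one place:
###Code
# Mean silhouette coefficient for 2 to 6 GMM components
for n in range(2, 7):
    gmm_n = GaussianMixture(n_components=n, random_state=43).fit(reduced_data)
    preds_n = gmm_n.predict(reduced_data)
    print("n_components = {}: silhouette = {:.4f}".format(
        n, silhouette_score(reduced_data, preds_n)))
###Output
_____no_output_____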
###Markdown
Question 7* Report the silhouette score for several cluster numbers you tried. * Of these, which number of clusters has the best silhouette score? **Answer:**n = 2, 0.4223n = 3, 0.3778n = 4, 0.3449n = 5, 0.2878n = 6, 0.28632 clusters has the best silhouette score Cluster VisualizationOnce you've chosen the optimal number of clusters for your clustering algorithm using the scoring metric above, you can now visualize the results by executing the code block below. Note that, for experimentation purposes, you are welcome to adjust the number of clusters for your clustering algorithm to see various visualizations. The final visualization provided should, however, correspond with the optimal number of clusters.
###Code
# Display the results of the clustering from implementation
vs.cluster_results(reduced_data, preds, centers, pca_samples)
###Output
_____no_output_____
###Markdown
Implementation: Data RecoveryEach cluster present in the visualization above has a central point. These centers (or means) are not specifically data points from the data, but rather the *averages* of all the data points predicted in the respective clusters. For the problem of creating customer segments, a cluster's center point corresponds to *the average customer of that segment*. Since the data is currently reduced in dimension and scaled by a logarithm, we can recover the representative customer spending from these data points by applying the inverse transformations.In the code block below, you will need to implement the following: - Apply the inverse transform to `centers` using `pca.inverse_transform` and assign the new centers to `log_centers`. - Apply the inverse function of `np.log` to `log_centers` using `np.exp` and assign the true centers to `true_centers`.
###Code
# TODO: Inverse transform the centers
log_centers = pca.inverse_transform(centers)
# TODO: Exponentiate the centers
true_centers = np.exp(log_centers)
# Display the true centers
segments = ['Segment {}'.format(i) for i in range(0,len(centers))]
true_centers = pd.DataFrame(np.round(true_centers), columns = data.keys())
true_centers.index = segments
display(true_centers)
###Output
_____no_output_____
###Markdown
Question 8* Consider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project(specifically looking at the mean values for the various feature points). What set of establishments could each of the customer segments represent?**Hint:** A customer who is assigned to `'Cluster X'` should best identify with the establishments represented by the feature set of `'Segment X'`. Think about what each segment represents in terms their values for the feature points chosen. Reference these values with the mean values to get some perspective into what kind of establishment they represent.
###Code
# Display the true centers compare to the mean
segments = ['Segment {}'.format(i) for i in range(0,len(centers))]
true_centers = pd.DataFrame(np.round(true_centers), columns = data.keys())
true_centers.index = segments
display(true_centers)
display(true_centers-data.mean())
###Output
_____no_output_____
###Markdown
**Answer:** Customers in Segment 1 have above-average volumes in Milk, Grocery and Detergents_Paper. Earlier in this project we discussed that extra spending in these categories could indicate resellers/retailers such as grocery stores. Customers in Segment 0 show the opposite pattern: in general they have lower volume, but they are higher in Frozen and Fresh goods. The overall lower volume is logical because this segment represents restaurants/bars. Question 9* For each sample point, which customer segment from* **Question 8** *best represents it? * Are the predictions for each sample point consistent with this?*Run the code block below to find which cluster each sample point is predicted to be in.
###Code
display(samples)
# Display the predictions
for i, pred in enumerate(sample_preds):
print("Sample point", i, "predicted to be in Cluster", pred)
###Output
_____no_output_____
###Markdown
**Answer:** - Sample 1: has a high volume in Fresh and low volumes in Milk, Grocery and Detergents_Paper, so it is in the correct cluster. - Sample 2: high volume in Fresh and Frozen; this clearly belongs to cluster 0, the retail cluster. - Sample 3: Milk, Grocery and Detergents_Paper are high, so it belongs to Cluster 1, that of the cafes/restaurants ;) Conclusion In this final section, you will investigate ways that you can make use of the clustered data. First, you will consider how the different groups of customers, the ***customer segments***, may be affected differently by a specific delivery scheme. Next, you will consider how giving a label to each customer (which *segment* that customer belongs to) can provide for additional features about the customer data. Finally, you will compare the ***customer segments*** to a hidden variable present in the data, to see whether the clustering identified certain relationships. Question 10Companies will often run [A/B tests](https://en.wikipedia.org/wiki/A/B_testing) when making small changes to their products or services to determine whether making that change will affect its customers positively or negatively. The wholesale distributor is considering changing its delivery service from currently 5 days a week to 3 days a week. However, the distributor will only make this change in delivery service for customers that react positively. * How can the wholesale distributor use the customer segments to determine which customers, if any, would react positively to the change in delivery service?***Hint:** Can we assume the change affects all customers equally? How can we determine which group of customers it affects the most? **Answer:** Cafes, restaurants and similar establishments need daily replenishment of fresh goods; retailers, on the other hand, can better predict the stock they need and can maintain some inventory. So I think this schedule can best be tried on the customers that fall in Cluster 0, the retailers. To decide how the groups of customers are affected, we can add some extra data, for instance how much they order on average every week; then we pick the customer group with the lowest average value, which is thus impacted the least. Question 11Additional structure is derived from originally unlabeled data when using clustering techniques. Since each customer has a ***customer segment*** it best identifies with (depending on the clustering algorithm applied), we can consider *'customer segment'* as an **engineered feature** for the data. Assume the wholesale distributor recently acquired ten new customers and each provided estimates for anticipated annual spending of each product category. Knowing these estimates, the wholesale distributor wants to classify each new customer to a ***customer segment*** to determine the most appropriate delivery service. * How can the wholesale distributor label the new customers using only their estimated product spending and the **customer segment** data?**Hint:** A supervised learner could be used to train on the original customers. What would be the target variable? **Answer:** What we can do is train a supervised learner on the existing data, using the customer segment as the target variable. Then, after we have trained this learner, we can use the estimated annual spending to predict the correct customer segment. Visualizing Underlying DistributionsAt the beginning of this project, it was discussed that the `'Channel'` and `'Region'` features would be excluded from the dataset so that the customer product categories were emphasized in the analysis. 
By reintroducing the `'Channel'` feature to the dataset, an interesting structure emerges when considering the same PCA dimensionality reduction applied earlier to the original dataset.Run the code block below to see how each data point is labeled either `'HoReCa'` (Hotel/Restaurant/Cafe) or `'Retail'` the reduced space. In addition, you will find the sample points are circled in the plot, which will identify their labeling.
###Code
# Display the clustering results based on 'Channel' data
vs.channel_results(reduced_data, outliers, pca_samples)
###Output
_____no_output_____
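###Markdown
Question 11 above suggests treating the engineered *customer segment* as the target for a supervised learner. A minimal sketch of that idea (the classifier choice here is an illustrative assumption, not part of the project): train on the reduced spending data with the cluster assignments as labels; new customers' estimated spending would then be log-scaled, PCA-transformed and passed to `predict`.
###Code
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
# Use the cluster assignments from above as the target variable
clf = RandomForestClassifier(n_estimators=100, random_state=0)
cv_acc = cross_val_score(clf, reduced_data, preds, cv=5)
print("Cross-validated accuracy when predicting the segment: {:.3f}".format(cv_acc.mean()))
###Output
_____no_output_____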
###Markdown
Question 12* How well does the clustering algorithm and number of clusters you've chosen compare to this underlying distribution of Hotel/Restaurant/Cafe customers to Retailer customers? * Are there customer segments that would be classified as purely 'Retailers' or 'Hotels/Restaurants/Cafes' by this distribution? * Would you consider these classifications as consistent with your previous definition of the customer segments? **Answer:** The clusters we found look really similar to the underlying Channel data. Of the three samples, samples 1 and 2 are classified correctly by our model. Only sample 0 falls in a different cluster, but as you can see in the graph it lies on the border between the two clusters. The number of clusters is consistent with the clusters found. You can also see the customer segments here, Hotels/Restaurants/Cafes and retailers, although there are still some Hotels/Restaurants on the left which end up in the retailer cluster. PS: I am really impressed by what can be achieved with PCA! > **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
###Code
!!jupyter nbconvert *.ipynb
###Output
_____no_output_____ |
newcomer/O2O-Coupon-Usage-Forecast/plot.ipynb | ###Markdown
Data Analysis and Visualization Importing the packages
###Code
import pandas as pd
from pyecharts import Bar, Pie, Line
###Output
_____no_output_____
###Markdown
Read in the data and make a copy of it for preprocessing, to make plotting easier
###Code
data = pd.read_csv('data/ccf_offline_stage1_train.csv')
offline = data.copy()
offline.head(5)
offline['Distance'].fillna(-1,inplace=True)
offline['date_received'] = pd.to_datetime(offline['Date_received'], format='%Y%m%d')
offline['date'] = pd.to_datetime(offline['Date'], format='%Y%m%d')
offline['discount_rate'] = offline['Discount_rate'].map(lambda x:float(x) if ':' not in str(x) else (float(str(x).split(':')[0])-float(str(x).split(':')[1])) / float(str(x).split(':')[0]))
offline['isManjian'] = offline['Discount_rate'].map(lambda x: 1 if ':' in str(x) else 0)
offline['weekday_Receive'] = offline['date_received'].apply(lambda x: x.isoweekday())
offline['label'] = list(map(lambda x, y: 1 if (x-y).total_seconds()/(60*60*24) <= 15 else 0, offline['date'], offline['date_received']))
offline.head(5)
###Output
_____no_output_____
###Markdown
1. Number of coupons received per day
###Code
df_1 = offline[offline['Date_received'].notna()]
tmp = df_1.groupby('Date_received', as_index=False)['Coupon_id'].count()
tmp.columns = ['Date_received','count']
tmp.head(5)
bar_1 = Bar("每天被领券的数量",width=1500,height=600)
bar_1.add("",list(tmp['Date_received']),list(tmp['count']),xaxis_interval=1,xaxis_rotate=60,mark_line=['max'])
bar_1.render('imgs/bar_1.html')
###Output
_____no_output_____
###Markdown
2. Line chart of each consumption type per month
###Code
offline['received_month'] = offline['date_received'].apply(lambda x:x.month)
consume_coupon = offline[offline['label'] == 1]['received_month'].value_counts(sort=False)
received = offline['received_month'].value_counts(sort=False)
offline['date_month'] = offline['date'].apply(lambda x:x.month)
consume = offline['date_month'].value_counts(sort=False)
consume_coupon.sort_index(inplace=True)
consume.sort_index(inplace=True)
received.sort_index(inplace=True)
line_1 = Line("每月各类消费折线图")
line_1.add("核销",list(range(1,7)),list(consume_coupon.values))
line_1.add("领取",list(range(1,7)),list(received.values))
line_1.add("消费",list(range(1,7)),list(consume.values))
line_1.render('imgs/line_1.html')
###Output
_____no_output_____
###Markdown
3. Bar chart of consumption distance
###Code
offline['Distance'].fillna(-1,inplace=True)
dis = offline[offline['Distance'] != -1]['Distance'].value_counts()
dis.sort_index(inplace=True)
bar_2 = Bar("消费距离柱状图")
bar_2.add('',list(dis.index),list(dis.values))
bar_2.render('imgs/bar_2.html')
###Output
_____no_output_____
###Markdown
4. Bar chart of consumption distance vs. coupon redemption rate
###Code
rate = [offline[offline['Distance'] == i]['label'].value_counts()[1]*1.0 /
offline[offline['Distance'] == i]['label'].value_counts().sum() for i in range(11)]
bar_3 = Bar("消费距离与核销率柱状图")
bar_3.add('核销率',list(range(11)),list(rate))
bar_3.render('imgs/bar_3.html')
###Output
_____no_output_____
###Markdown
5. Pie chart of the share of each coupon type
###Code
pie_1 = Pie("各类消费券数量占比饼图")
pie_1.add('',['折扣','满减'],list(offline[offline['Date_received'].notna()]['isManjian'].value_counts(sort=False).values),is_label_show=True)
pie_1.render('imgs/pie_1.html')
###Output
_____no_output_____
###Markdown
6. Pie chart of the share of redeemed coupons by type
###Code
pie_2 = Pie("核销优惠券数量占比饼图")
pie_2.add('',['折扣','满减'],list(offline[offline['label']==1]['isManjian'].value_counts(sort=False).values),is_label_show=True)
pie_2.render('imgs/pie_2.html')
###Output
_____no_output_____
###Markdown
7. Bar chart of coupons received and redeemed by discount rate
###Code
bar_4 = Bar("各种折扣率的优惠券领取与核销柱状图")
received = offline['discount_rate'].value_counts(sort=False)
consume_coupon = offline[offline['label'] == 1]['discount_rate'].value_counts(sort=False)
consume_coupon[0.975000] = 0
consume_coupon.sort_index(inplace=True)
received.sort_index(inplace=True)
bar_4.add('领取',[float('%.4f' % x) for x in received.index],list(received.values),xaxis_rotate=50)
bar_4.add('核销',[float('%.4f' % x) for x in consume_coupon.index], list(consume_coupon.values),xaxis_rotate=50)
bar_4.render('imgs/bar_4.html')
###Output
_____no_output_____
###Markdown
8. Line chart of coupons received and redeemed by day of the week
###Code
consume_coupon = offline[offline['label'] == 1]['weekday_Receive'].value_counts()
consume_coupon.sort_index(inplace=True)
received = offline['weekday_Receive'].value_counts()
received.sort_index(inplace=True)
line_2 = Line("每周领券数与核销数折线图")
line_2.add('领券',list(range(1,8)),list(received.values),is_label_show=True)
line_2.add('核销',list(range(1,8)),list(consume_coupon.values),is_label_show=True)
line_2.render('imgs/line_2.html')
###Output
_____no_output_____
###Markdown
9. Pie chart of the positive/negative sample ratio
###Code
pie_3 = Pie("正-负比例饼图")
pie_3.add('',['负','正'],list(offline['label'].value_counts().values),is_label_show=True)
pie_3.render('imgs/pie_3.html')
###Output
_____no_output_____ |
LDA and PCA demo with Tensorflow Nets/LDA_PCA_TF.ipynb | ###Markdown
Demonstrating LDA and PCA Importing the necessary libraries
###Code
import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt
from matplotlib import image
import numpy as np
%matplotlib inline
import seaborn as sns; sns.set()
from PIL import Image
import os, sys
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.metrics import plot_confusion_matrix
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from mpl_toolkits.mplot3d import Axes3D
###Output
D:\anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
D:\anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
D:\anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
D:\anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
D:\anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
D:\anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
D:\anaconda3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
D:\anaconda3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
D:\anaconda3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
D:\anaconda3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
D:\anaconda3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
D:\anaconda3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
###Markdown
Class map. Dict containing values of each class
###Code
class_map = {0:"T-shirt/top",
1:"Trouser/pants",
2:"Pullover shirt",
3:"Dress",
4:"Coat",
5:"Sandal",
6:"Shirt",
7:"Sneaker",
8:"Bag",
9:"Ankle boot"}
###Output
_____no_output_____
###Markdown
Using the Fashion-MNIST dataset It has about 60000 training and 10000 test images spanning 10 classes
###Code
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
###Output
_____no_output_____
###Markdown
Use the index to browse the training set Valid indices are in the range [0, 60000)
###Code
index = 2300
np.set_printoptions(linewidth=200)
plt.imshow(training_images[index])
print(class_map[training_labels[index]])
#print(training_images[index])
###Output
T-shirt/top
###Markdown
Class distribution of training set. Uniform
###Code
plt.hist(training_labels,bins = 10)
plt.title("Class Distribution train data")
plt.show()
###Output
_____no_output_____
###Markdown
Class distribution of the test set. Also uniform. Class distributions help in debugging classifier issues and in making correct assumptions about the data
###Code
plt.hist(test_labels,bins = 10)
plt.title("Class Distribution test data")
plt.show()
###Output
_____no_output_____
###Markdown
Normalizing the data and converting it to float, since TensorFlow needs float values
###Code
training_images = training_images/255.0
test_images = test_images/255.0
###Output
_____no_output_____
###Markdown
Applying PCA() on the dataset PCA maps the input feature space into another feature space based on eigenvectors, so it works purely from the variance of the data. It is unsupervised and helps remove correlated attributes; it can also be used for denoising. It has disadvantages, which are showcased later. Displaying the correlation between a subset of the input features
###Code
tot = list()
for i in range(0,3):
for j in range(0,3):
if i !=j:
cor = np.corrcoef(training_images.reshape(training_images.shape[0],-1)[:][i],training_images.reshape(training_images.shape[0],-1)[:][j])[0,1]
print("Correlation : ",i,j,"---",cor)
tot.append(cor)
print("Average correlation: ",sum(tot)/len(tot))
###Output
Correlation : 0 1 --- 0.1353670765955539
Correlation : 0 2 --- 0.16735204377956073
Correlation : 1 0 --- 0.13536707659555386
Correlation : 1 2 --- 0.5990603230873573
Correlation : 2 0 --- 0.16735204377956073
Correlation : 2 1 --- 0.5990603230873573
Average correlation: 0.30059314782082397
###Markdown
Converting to PCA feature space
###Code
train_vector = training_images.reshape(training_images.shape[0],-1)
test_vector = test_images.reshape(test_images.shape[0],-1)
pca = PCA(784)
projected = pca.fit_transform(train_vector)
# Note: fit_transform is called again on the test set, so the test projection
# is based on components fitted to the test data; pca.transform(test_vector)
# would reuse the training-set components instead
projected_test = pca.fit_transform(test_vector)
###Output
_____no_output_____
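###Markdown
A quick check of how much variance the leading components keep (using the `pca` object fitted above; note that it was last fitted on the test vectors):
###Code
# Cumulative explained variance of the principal components
cum_var = np.cumsum(pca.explained_variance_ratio_)
for k in [2, 10, 50, 100, 300, 784]:
    print("First {:>3d} components explain {:.1%} of the variance".format(k, cum_var[k - 1]))
###Output
_____no_output_____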
###Markdown
Displaying the correlation in the PCA feature space. It is clearly reduced.
###Code
tot = list()
for i in range(0,3):
for j in range(0,3):
if i !=j:
cor = np.corrcoef(projected[:][i],projected[:][j])[0,1]
print("Correlation : ",i,j,"---",cor)
tot.append(cor)
print("Average correlation: ",sum(tot)/len(tot))
###Output
Correlation : 0 1 --- -0.19663072654592254
Correlation : 0 2 --- -0.46938335387138147
Correlation : 1 0 --- -0.19663072654592254
Correlation : 1 2 --- -0.158200412967003
Correlation : 2 0 --- -0.46938335387138147
Correlation : 2 1 --- -0.15820041296700302
Average correlation: -0.2747381644614357
###Markdown
Visualizing PCA vs LDA The PCA feature space in a 2D plot. It is evident that PCA preserves the directions of maximum variance
###Code
plt.scatter(projected[:, 0], projected[:, 1],
c=training_labels, edgecolor='none', alpha=0.5,
cmap=plt.cm.get_cmap('brg', 10))
plt.xlabel('component 1')
plt.ylabel('component 2')
plt.colorbar();
###Output
_____no_output_____
###Markdown
Converting the input space to the LDA feature space It is important to note that LDA is not only a dimensionality-reduction technique; it is also a classification technique. It essentially transforms the feature space to find the axes that 1. reduce the scatter within each class, and 2. increase the distance/linear separability between class means
###Code
lda = LinearDiscriminantAnalysis(n_components=9)
projected_lda = lda.fit(train_vector, training_labels).transform(train_vector)
# As with PCA above, the test set is re-fitted here rather than reusing the
# training-set fit (lda.transform(test_vector) would reuse it)
projected_lda_test = lda.fit(test_vector, test_labels).transform(test_vector)
###Output
_____no_output_____
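###Markdown
To make the "within-class scatter vs. between-class separation" idea above concrete, a rough sketch that compares the first PCA and LDA components with a simple between-class / within-class variance ratio (an illustrative measure, not Fisher's exact criterion):
###Code
def separation_ratio(component, labels):
    # Ratio of the spread of class means to the average within-class variance
    overall_mean = component.mean()
    classes = np.unique(labels)
    between = np.mean([(component[labels == c].mean() - overall_mean) ** 2 for c in classes])
    within = np.mean([component[labels == c].var() for c in classes])
    return between / within
print("PCA component 1 ratio:", separation_ratio(projected[:, 0], training_labels))
print("LDA component 1 ratio:", separation_ratio(projected_lda[:, 0], training_labels))
###Output
_____no_output_____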
###Markdown
2D plot of the LDA feature space. The difference between PCA and LDA is evident here: the LDA plot shows more separability
###Code
plt.scatter(projected_lda[:, 0], projected_lda[:, 1],
c=training_labels, edgecolor='none', alpha=0.5,
cmap=plt.cm.get_cmap('brg', 10))
plt.xlabel('component 1')
plt.ylabel('component 2')
plt.colorbar();
###Output
_____no_output_____
###Markdown
3D plot of the LDA feature space to inspect separability further
###Code
ax = Axes3D(plt.figure())
p = ax.scatter(projected_lda[:, 0], projected_lda[:, 1],projected_lda[:,2],
c=training_labels, alpha=0.5,
cmap=plt.cm.get_cmap('brg', 10))
plt.xlabel('component 1')
plt.ylabel('component 2')
ax.set_zlabel('component 3')
plt.colorbar(p);
###Output
_____no_output_____
###Markdown
LDA (also as a classifier) tends to overfit the data, as it is a supervised technique. It aims to maximise the distance between class means using the class labels, and this behavior tends to overfit the data if one is not careful. Here we choose class 0 (T-shirt/top) and split that single class into 3 pseudo-classes. There is no difference (at least not much) between the images in class 0, but we are telling the algorithm that there is.
###Code
pseudo_target = np.random.randint(3,size = (6000,))
plt.hist(pseudo_target,bins = 3)
plt.title("Class Distribution pseudo")
plt.show()
###Output
_____no_output_____
###Markdown
Visualizing class 0. It is the same class for us.
###Code
idc = np.where(training_labels == 0)[0]
print(idc)
temp_train = np.array([training_images[i] for i in idc])
plt.imshow(temp_train[4])
###Output
[ 1 2 4 ... 59974 59985 59998]
###Markdown
Plotting the LDA feature space of the constructed pseudo-classes. It is clear that the model tries to separate the classes even though there is no real difference between them
###Code
lda = LinearDiscriminantAnalysis(n_components=2)
ps_lda = lda.fit(temp_train.reshape(temp_train.shape[0],-1),pseudo_target).transform(temp_train.reshape(temp_train.shape[0],-1))
plt.scatter(ps_lda[:, 0], ps_lda[:, 1],
c=pseudo_target, edgecolor='none', alpha=0.5,
cmap=plt.cm.get_cmap('brg', 3))
plt.xlabel('component 1')
plt.ylabel('component 2')
plt.colorbar();
###Output
_____no_output_____
###Markdown
Exploring how classification fares on the different feature spaces Network trained on the original input space (best performing model). It can be optimized further, but for now this is good enough
###Code
class mycallbc(tf.keras.callbacks.Callback):
def on_epoch_end(self,epoch,log = {}):
if log["acc"]>=0.985:
print("\n\nReached enough accuracy")
self.model.stop_training = True
callbc = mycallbc()
init_model = tf.keras.models.Sequential([tf.keras.layers.Flatten(),
tf.keras.layers.Dense(units = 1024, activation = tf.nn.relu,kernel_regularizer = tf.keras.regularizers.l2(5e-4)),
tf.keras.layers.Dense(units = 1024, activation = tf.nn.relu,kernel_regularizer = tf.keras.regularizers.l2(5e-4)),
tf.keras.layers.Dense(units = 2048, activation = tf.nn.relu,kernel_regularizer = tf.keras.regularizers.l2(5e-4)),
tf.keras.layers.Dense(units = 2048, activation = tf.nn.relu,kernel_regularizer = tf.keras.regularizers.l2(5e-4)),
tf.keras.layers.Dense(units = 1024, activation = tf.nn.relu,kernel_regularizer = tf.keras.regularizers.l2(5e-4)),
tf.keras.layers.Dense(units = 10,activation = tf.nn.softmax)])
init_model.compile(optimizer = "adam",loss = "sparse_categorical_crossentropy",metrics = ["accuracy"])
init_model.fit(training_images,training_labels,batch_size = 2048,epochs = 80,callbacks = [callbc])
print("Train accuracy:",init_model.evaluate(training_images,training_labels)[1]*100)
print("Test accuracy:",init_model.evaluate(test_images,test_labels)[1]*100)
###Output
60000/60000 [==============================] - 9s 142us/sample - loss: 0.2195 - acc: 0.9616
Train accuracy: 96.16000056266785
10000/10000 [==============================] - 1s 140us/sample - loss: 0.4984 - acc: 0.8877
Test accuracy: 88.77000212669373
###Markdown
Network trained on the PCA input space. The accuracies are much lower, showing that PCA is not a mandatory step in the machine learning pipeline. Here we have lost valuable information (features) by projecting into the PCA feature space, and this stands as a disadvantage of PCA
###Code
feat = 512
class mycallbc(tf.keras.callbacks.Callback):
def on_epoch_end(self,epoch,log = {}):
if log["acc"]>=0.985:
print("\n\nReached enough accuracy")
self.model.stop_training = True
callbc = mycallbc()
pca_model = tf.keras.models.Sequential([tf.keras.layers.Input(shape = [feat]),
tf.keras.layers.Dense(units = 1024, activation = tf.nn.relu,kernel_regularizer = tf.keras.regularizers.l2(1)),
tf.keras.layers.Dense(units = 1024, activation = tf.nn.relu,kernel_regularizer = tf.keras.regularizers.l2(1)),
tf.keras.layers.Dense(units = 1024, activation = tf.nn.relu,kernel_regularizer = tf.keras.regularizers.l2(1)),
tf.keras.layers.Dense(units = 1024, activation = tf.nn.relu),
tf.keras.layers.Dense(units = 1024, activation = tf.nn.relu),
tf.keras.layers.Dense(units = 10,activation = tf.nn.softmax)])
pca_model.compile(optimizer = "adam",loss = "sparse_categorical_crossentropy",metrics = ["accuracy"])
pca_model.fit(projected[:,:feat],training_labels,batch_size = 2048,epochs = 30,callbacks = [callbc])
print("Train accuracy:",pca_model.evaluate(projected[:,:feat],training_labels)[1]*100)
print("Test accuracy:",pca_model.evaluate(projected_test[:,:feat],test_labels)[1]*100)
###Output
60000/60000 [==============================] - 5s 90us/sample - loss: 0.9656 - acc: 0.7551
Train accuracy: 75.51166415214539
10000/10000 [==============================] - 1s 81us/sample - loss: 2.9745 - acc: 0.4270
Test accuracy: 42.69999861717224
###Markdown
Network trained on the LDA feature space Similar to PCA, we seem to have lost a good deal of information, and hence the gap between training and test errors keeps increasing (overfitting)
###Code
feat = 6
lda_model = tf.keras.models.Sequential([tf.keras.layers.Input(shape = [feat]),
tf.keras.layers.Dense(units = 512, activation = tf.nn.relu,
kernel_regularizer = tf.keras.regularizers.l2(1)),
tf.keras.layers.Dense(units = 10,activation = tf.nn.softmax)])
lda_model.compile(optimizer = "adam",loss = "sparse_categorical_crossentropy",metrics = ["accuracy"])
lda_model.fit(projected_lda[:,:feat],training_labels,batch_size = 512,epochs = 50)
print("Train accuracy:",lda_model.evaluate(projected_lda[:,:feat],training_labels)[1]*100)
print("Test accuracy:",lda_model.evaluate(projected_lda_test[:,:feat],test_labels)[1]*100)
###Output
60000/60000 [==============================] - 3s 50us/sample - loss: 0.6410 - acc: 0.7758
Train accuracy: 77.57999897003174
10000/10000 [==============================] - 0s 49us/sample - loss: 0.6113 - acc: 0.7654
Test accuracy: 76.53999924659729
###Markdown
Using the LDA projection of the PCA feature space to train the network Applying PCA before LDA acts as a regularizer for LDA and gives better performance than either one alone (in this case). This is definitely not an argument that PCA followed by LDA is a good regularization method; it is merely an observation, since vanilla regularization (l1, l2, dropout) is much more effective
###Code
lda = LinearDiscriminantAnalysis(n_components=9)
pca_lda = lda.fit(projected,training_labels).transform(projected)
pca_lda_test = lda.fit(projected_test,test_labels).transform(projected_test)
feat = 6
lda_pca_model = tf.keras.models.Sequential([tf.keras.layers.Input(shape = [feat]),
tf.keras.layers.Dense(units = 512, activation = tf.nn.relu,
kernel_regularizer = tf.keras.regularizers.l2(0.001)),
tf.keras.layers.Dense(units = 10,activation = tf.nn.softmax)])
lda_pca_model.compile(optimizer = "adam",loss = "sparse_categorical_crossentropy",metrics = ["accuracy"])
lda_pca_model.fit(pca_lda[:,:feat],training_labels,batch_size = 1024,epochs = 50)
print("Train accuracy:",lda_pca_model.evaluate(pca_lda[:,:feat],training_labels)[1]*100)
print("Test accuracy:",lda_pca_model.evaluate(pca_lda_test[:,:feat],test_labels)[1]*100)
###Output
60000/60000 [==============================] - 4s 59us/sample - loss: 0.5184 - acc: 0.8063
Train accuracy: 80.62833547592163
10000/10000 [==============================] - 1s 53us/sample - loss: 0.5304 - acc: 0.7869
Test accuracy: 78.6899983882904
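###Markdown
As a recap, a small sketch that evaluates all four models side by side on their respective test inputs (reusing the models and projections defined above):
###Code
# Test accuracy of each feature space / model combination
results = {
    "Original pixels     ": init_model.evaluate(test_images, test_labels, verbose=0)[1],
    "PCA (512 components)": pca_model.evaluate(projected_test[:, :512], test_labels, verbose=0)[1],
    "LDA (6 components)  ": lda_model.evaluate(projected_lda_test[:, :6], test_labels, verbose=0)[1],
    "PCA then LDA (6)    ": lda_pca_model.evaluate(pca_lda_test[:, :6], test_labels, verbose=0)[1],
}
for name, acc in results.items():
    print("{} test accuracy: {:.2%}".format(name, acc))
###Output
_____no_output_____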
|
docs/lectures/lecture14/notebook/GEC_Occlusion.ipynb | ###Markdown
Title :Image Occlusion Description :The aim of this exercise is to understand occlusion. Each pixel in an image has varying importance for the classification of the image. Occlusion involves running a patch over the entire image to see which pixels affect the classification the most. Instructions:- Define a convolutional neural network based on the architecture mentioned in the scaffold.- Load the trained model weights given in the `occlusion_model_weights.h5` file.- Take a quick look at the model architecture using `model.summary()`.- Use the helper function `occlusion` to visualize the delta loss as a mask moves across the image. The output will look similar to the one shown below. Hints: MaxPooling2D()Max pooling operation for 2D spatial data.compile()Configures the model for training.Conv2D()2D convolution layer (e.g. spatial convolution over images).flatten()Flattens the input. Dense()A regular densely-connected NN layer.**NOTE** - To run the test, comment the cells that call the occlusion function and then click on **Mark**.
###Code
# Import necessary libraries
import os
import random
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras import backend as K
from sklearn.metrics import accuracy_score
from helper import occlusion, load_dataset
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.applications import MobileNet
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.applications.mobilenet import preprocess_input
from tensorflow.keras.layers import Dense, Dropout, Flatten, Activation, Input, Conv2D, MaxPooling2D, InputLayer, ReLU
%matplotlib inline
# Initialize a sequential model
model = Sequential(name="Occlusion")
# First convolution layer
model.add(Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same', input_shape=(32, 32, 3)))
# Second convolution layer
model.add(Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
# First max-pooling layer
model.add(MaxPooling2D((2, 2)))
# Third convolution layer
model.add(Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
# Fourth convolution layer
model.add(Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
# Second max-pooling layer
model.add(MaxPooling2D((2, 2)))
# Fifth convolution layer
model.add(Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
# Sixth convolution layer
model.add(Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
# Third max-pooling layer
model.add(MaxPooling2D((2, 2)))
# Flatten layer
model.add(Flatten())
# Fully connected dense layer
model.add(Dense(128, activation='relu', kernel_initializer='he_uniform'))
# Output layer
model.add(Dense(10, activation='softmax'))
# Compiling the model
model.compile(optimizer=SGD(lr=0.001, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])
# Take a quick look at the model summary
model.summary()
# Load the weights of the pre-trained model
model.load_weights("occlusion_model_weights.h5")
###Output
_____no_output_____
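###Markdown
The `occlusion` helper used below comes from the course's `helper.py` and is not shown here. Conceptually, it slides a small patch across the image and records how the prediction for the true class changes; regions whose occlusion hurts the prediction most are the ones the network relies on. A rough sketch of that idea (a simplified illustration assuming images scaled to [0, 1]; it is not the helper's actual implementation):
###Code
def occlusion_heatmap_sketch(model, image, true_label, patch_size=4):
    # Slide a grey patch over the image and record the predicted probability
    # of the true class at every patch position (lower = more important region)
    h, w, _ = image.shape
    heatmap = np.zeros((h - patch_size + 1, w - patch_size + 1))
    for i in range(h - patch_size + 1):
        batch = []
        for j in range(w - patch_size + 1):
            occluded = image.copy()
            occluded[i:i + patch_size, j:j + patch_size, :] = 0.5
            batch.append(occluded)
        probs = model.predict(np.array(batch), verbose=0)
        heatmap[i, :] = probs[:, true_label]
    return heatmap
# Example usage (hypothetical image array of shape (32, 32, 3)):
# heatmap = occlusion_heatmap_sketch(model, some_image, some_label, patch_size=4)
# plt.imshow(heatmap, cmap='viridis'); plt.colorbar();
###Output
_____no_output_____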
###Markdown
⏸ Call the function `occlusion` (below) with image numbers 10, 12 and 35. What do you observe based on the occlusion map plotted for each image? A. The images are blurred more as compared to other images in the set. B. The images are incorrectly predicted because the model weights the wrong parts of the image to make the prediction. C. The images are correctly predicted as the network is giving high importance to the most telling features of the images.
###Code
###edTest(test_chow1) ###
# Submit an answer choice as a string below (eg. if you choose option C, put 'C')
answer1 = '___'
# Call the helper function occlusion with
# the trained model, a valid image number within 50, occlusion
# patch size
img_num = ___
patch_size = ___
occlusion(model, img_num , patch_size)
###Output
_____no_output_____
###Markdown
⏸ Call the `occlusion` function (below) with images 1, 15 and 30. What do you observe based on the plots?
###Code
###edTest(test_chow2) ###
# Type your answer here
answer2 = '___'
# Call the helper function occlusion with
# the trained model, a valid image number within 50, occlusion
# patch size
img_num = ___
patch_size = ___
occlusion(model, img_num , patch_size)
###Output
_____no_output_____ |
the_archive/archived_rapids_event_notebooks/SCIPY_2019/cuml/02-LogisticRegression.ipynb | ###Markdown
Exercise: Logistic Regression This notebook shows another class comparison between cuML and Scikit-learn: `LogisticRegression`. The basic form of logistic regression is used to model the probability of a certain class or event happening based on a set of variables. We also use this as an example of how cuML can adapt to other GPU-centric workflows, this time based on CuPy, a GPU-centric NumPy-like library for array manipulation: [CuPy](https://cupy.chainer.org). Thanks to the [CUDA Array Interface](https://numba.pydata.org/numba-doc/dev/cuda/cuda_array_interface.html), cuML is compatible with multiple GPU memory libraries that conform to the spec, and therefore can use objects from libraries such as CuPy or PyTorch without additional memory copies! Let's begin by importing the libraries we need:
###Code
import pandas as pd
# Lets use cupy in a similar fashion to how we use numpy
import cupy as cp
from sklearn import metrics, datasets
from sklearn.linear_model import LogisticRegression as skLogistic
from sklearn.preprocessing import binarize
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GridSearchCV
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
cm_bright = ListedColormap(['#FF0000', '#0000FF'])
###Output
_____no_output_____
###Markdown
Once again, let's use Scikit-learn to create a dataset:
###Code
a = datasets.make_classification(10000, n_features=2, n_informative=2, n_redundant=0,
n_clusters_per_class=1, class_sep=0.5, random_state=1485)
###Output
_____no_output_____
###Markdown
Now let's create our `X` and `y` arrays in CuPy:
###Code
X = cp.array(a[0], order='F') # the API of CuPy is almost identical to NumPy
y = cp.array(a[1], order='F')
###Output
_____no_output_____
###Markdown
Let's take a look at the dataset:
###Code
plt.scatter(cp.asnumpy(X[:,0]), cp.asnumpy(X[:,1]), c=[cm_bright.colors[i] for i in cp.asnumpy(y)],
alpha=0.1);
###Output
_____no_output_____
###Markdown
Now let's divide our dataset into training and testing sets in a simple manner:
###Code
# Split the data into a training and test set using NumPy like syntax
X_train = X[:8000, :].copy(order='F')
X_test = X[-2000:, :].copy(order='F')
y_train = y[:8000]
y_test = y[8000:10000]
###Output
_____no_output_____
###Markdown
Note that the resulting objects are still CuPy arrays in GPU:
###Code
X_train.__class__
###Output
_____no_output_____
###Markdown
Exercise: Fit the cuML and Scikit-learn `LogisticRegression` objects and compare them when they use as similar parameters as possible* Hint 1: the **default values** of parameters in cuML are **the same** as the default values for Scikit-learn most of the time, so we recommend to leave all parameters except for `solver` as the default * Hint 2: Remember the **solver can differ significantly between the libraries**, so look into the solvers offered by both libraries to make them match * Hint 3: Even though Scikit-learn expects Numpy objects, it **cannot** accept CuPy objects for many of its methods since it expects the memory to be on CPU (host), not on GPU (device)For convenience, the notebook offers a few cells to organize your work. 1. Fit Scikit-learn LogisticRegression and show its accuracy
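A possible way to fill in the scikit-learn cell below (a sketch, not the official solution; it reuses the `X_train`, `y_train`, `X_test`, `y_test` CuPy arrays and the imports made above, and the explicit `lbfgs` solver is an assumption chosen to roughly match cuML's quasi-Newton solver):

```python
# Sketch: fit scikit-learn's LogisticRegression on host copies of the CuPy arrays
sk_model = skLogistic(solver='lbfgs')
sk_model.fit(cp.asnumpy(X_train), cp.asnumpy(y_train))   # sklearn needs NumPy (host) arrays
sk_preds = sk_model.predict(cp.asnumpy(X_test))
print('sklearn accuracy:', accuracy_score(cp.asnumpy(y_test), sk_preds))
```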
###Code
from sklearn.metrics import accuracy_score
# useful methods: cp.asnumpy(cupy_array) converts cupy to numpy,
###Output
_____no_output_____
###Markdown
2. Fit cuML Regression and show its accuracy* Hint 1: Look at the data types expected by cuML methods: https://rapidsai.github.io/projects/cuml/en/stable/api.html#cuml.LogisticRegression.fit (one of the input vectors might not be of the expected data type!)* Hint 2: as mentioned above, cuML has native support for CuPy objects
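A possible way to fill in the cuML cell below (a sketch that uses the imports from the scaffold cell; the label-dtype cast and the host-side conversion are assumptions, and the exact container returned by `predict` varies between cuML versions):

```python
# Sketch: fit cuML's LogisticRegression directly on the CuPy arrays
cu_model = cuLogistic()                              # default solver is the quasi-Newton 'qn'
cu_model.fit(X_train, y_train.astype(np.float32))    # cast the integer labels to float
cu_preds = cu_model.predict(X_test)
# move the predictions to host memory, whatever container this cuML version returns
cu_preds_host = cu_preds.to_array() if hasattr(cu_preds, 'to_array') else cp.asnumpy(cu_preds)
print('cuML accuracy:', accuracy_score(cp.asnumpy(y_test), cu_preds_host))
```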
###Code
from cuml import LogisticRegression as cuLogistic
import numpy as np
# useful methods: cupy_array.astype(np_dtype) converts an array from one datatype to np_datatype, where np_datatype can be something like np.float32, np.float64, etc.
# useful methods: cudf_seris.to_array() converts a cuDF Series to a numpy array
# useful methods: cp.asnumpy(cupy_array) converts cupy to numpy,
###Output
_____no_output_____ |
docs/notebooks/Demo2+ OpenDSS to CYME with post-processing example.ipynb | ###Markdown
Demo2+ OpenDSS to CYME with post-processing example
###Code
%pylab inline
import pandas as pd
import matplotlib.pyplot as plt
import networkx as nx
#Set the feeder
_feeder='demo_2_plus_latest'
###Output
_____no_output_____
###Markdown
Read the OpenDSS model into DiTTo The OpenDSS model for Demo2+ is read exactly as before.
###Code
#Build the DiTTo model first...
from ditto.store import Store
from ditto.readers.opendss.read import reader
m=Store()
inputs={'master_file':'../inputs/opendss/{feeder}/master.dss'.format(feeder=_feeder),
'buscoordinates_file': '../inputs/opendss/{feeder}/buscoords.dss'.format(feeder=_feeder)}
#Instantiate the reader
opendss_reader=reader(**inputs)
#Parse...
opendss_reader.parse(m)
###Output
/Users/ngensoll/anaconda2/lib/python2.7/site-packages/fuzzywuzzy-0.15.1-py2.7.egg/fuzzywuzzy/fuzz.py:35: UserWarning: Using slow pure-python SequenceMatcher. Install python-Levenshtein to remove this warning
###Markdown
Modify the DiTTo model Here begins the interesting part. Load the system_structure_modifier module, which will handle the modifications to the DiTTo model we just built.
###Code
from ditto.modify.system_structure import system_structure_modifier
###Output
_____no_output_____
###Markdown
Instantiate it with the model and the name of the source node ('st_mat' for Demo2+).
###Code
modifier=system_structure_modifier(m,'st_mat')
###Output
_____no_output_____
###Markdown
Center-tap load processing First, we make sure that everything downstream of center-tap transformers has coherent phases. (Because of the way these are modelled in OpenDSS, we have everything as AB...) This is done by calling the center_tap_load_preprocessing method:
###Code
modifier.center_tap_load_preprocessing()
###Output
_____no_output_____
###Markdown
Feeder mapping Then, we wish to have a clean network structure with feeders and sub-transmission networks which is not included in the OpenDSS model.This can be done by calling the feeder_preprocessing method:
###Code
modifier.feeder_preprocessing()
###Output
Info: substation ctrafo(cramaee():d2_nssee5_12.47->d2_nssee5_69)d2d3 found downstream of substation ctrafo(cramaee():d2_nst1_69->st_mat)
Info: substation ctrafo(cramaee():d2_nssee3_12.47->d2_nssee3_69)d2d3 found downstream of substation ctrafo(cramaee():d2_nst0_69->st_mat)
###Markdown
The test_feeder_cut method gives some statistics about the feeders created. If some elements are in multiple feeders, the intersections will also be printed (nothing is printed here because the feeders are disjoint).
###Code
modifier.test_feeder_cut()
###Output
Number of feeders defined = 8
Sizes of the feeders = [37528, 38227, 29279, 15999, 10194, 59112, 2958, 49050]
###Markdown
Write the modified model to CYME Finally, write the modified model to CYME in the same way as before:
###Code
from ditto.writers.cyme.write import writer
cyme_writer=writer(output_path='./',log_path='./log_tt.log')
cyme_writer.write(modifier.model)
###Output
_____no_output_____ |
car_damage_classification/car_damage_analysis.ipynb | ###Markdown
Model evaluation using Sidekick. In this notebook you will learn how to use the Deployment API of the Peltarion platform via Sidekick to get predictions on samples and evaluate the performance of the deployed model in more detail. Note: This notebook requires installation of Sidekick. To install the package within the notebook, run `import sys` followed by `!{sys.executable} -m pip install git+https://github.com/Peltarion/sidekick#egg=sidekick`. For more information about Sidekick, see: https://github.com/Peltarion/sidekick
###Code
import os
import operator
import itertools
import resource
import zipfile
from IPython.display import display, Image
import pandas as pd
from PIL import Image
import sidekick
###Output
_____no_output_____
###Markdown
Setup Path to preprocessed dataset
###Code
zip_path = '/Users/joakim/Downloads/preprocessed.zip'
extract_path = '/Users/joakim/Downloads'
dataset_path = os.path.join(extract_path, 'preprocessed')
###Output
_____no_output_____
###Markdown
Extract zip file
###Code
zip_ref = zipfile.ZipFile(zip_path, 'r')
zip_ref.extractall(extract_path)
zip_ref.close()
###Output
_____no_output_____
###Markdown
Platform deployment
###Code
deploy_url = 'https:...'
deploy_token = '...'
###Output
_____no_output_____
###Markdown
Helper functions
###Code
def get_max_score(pred):
    # Return the (class, score) pair with the highest predicted score
    max_key = 'None'
    max_score = 0
    for key, score in pred['class'].items():  # iterate directly; avoids shadowing the built-in `dict`
        if score >= max_score:
            max_key = key
            max_score = score
    return (max_key, max_score)
def get_image(path):
im = Image.open(os.path.join(dataset_path, path))
new_im = im.copy()
new_im.format = 'jpeg'
im.close()
return new_im
###Output
_____no_output_____
###Markdown
Getting single predictions Deployment
###Code
client = sidekick.Deployment(
# Enter URL and token
url=deploy_url,
token=deploy_token
)
###Output
_____no_output_____
###Markdown
Ground truth
###Code
index_path = os.path.join(dataset_path, 'index.csv')
df = pd.read_csv(index_path)
df = df.sample(frac=1, random_state=2323)
df.head()
###Output
_____no_output_____
###Markdown
Predict damage for one image
###Code
im_path_list = iter(list(df['image']))
im_path = next(im_path_list)
im = Image.open(os.path.join(dataset_path, im_path))
display(im)
pred = client.predict(image=im)
print(get_max_score(pred))
###Output
_____no_output_____
###Markdown
Predict damage for multiple images
###Code
first_rows = df.head()
for i, row in first_rows.iterrows():
img = Image.open(os.path.join(dataset_path, row['image']))
display(img)
pred = client.predict(image=img)
print('Ground truth: {}\nPrediction: {}'.format(row['class'], pred['class']))
###Output
_____no_output_____
###Markdown
Getting predictions (batch) Filter out training dataThe predictions on the evaluation subset will be used in the analysis of the deployed model.
###Code
# Validation data
eval_df = df[df['subset']=='V'].copy()
###Output
_____no_output_____
###Markdown
Batch request
###Code
eval_df['image_url'] = eval_df['image']
eval_df['image'] = eval_df['image'].apply(lambda path: get_image(path))
predictions = client.predict_lazy(eval_df.to_dict('record'))
eval_df.head(1)
#This may take several minutes...
preds = [p for p in predictions]
eval_df['pred'] = [p['class'] for p in preds]
eval_df.head(5)
###Output
_____no_output_____
###Markdown
Evaluation
###Code
dicts = eval_df['pred']
max_keys = []
max_scores = []
for i in dicts:
max_val = max(i.items(), key=lambda k: k[1])
max_keys.append(max_val[0])
max_scores.append(max_val[1])
eval_df['pred_class'] = max_keys
eval_df['pred_score'] = max_scores
eval_df.head(5)
###Output
_____no_output_____
###Markdown
Worst misclassified examples
###Code
wrong_df = eval_df.loc[eval_df['class'] != eval_df['pred_class']]
wrong_df = wrong_df.sort_values(by=['pred_score'], ascending=False)
first_rows = wrong_df.head(10)
for i, row in first_rows.iterrows():
display(row['image'])
print('Ground truth: {}, Prediction: {}, Score: {}'.format(row['class'], row['pred_class'], row['pred_score']))
###Output
_____no_output_____ |
Ugwu Lilian WT-21-138/Health Expendure.ipynb | ###Markdown
GDP and Health Expenditure. Richer countries can afford to invest more on healthcare, on work and road safety, and other measures that reduce mortality. On the other hand, richer countries may have less healthy lifestyles. Is there any relation between the wealth of a country and how much it spends on health? The following analysis checks whether there is any correlation between the gross domestic product (GDP) of a country in 2018 and its health expenditure in 2018. Getting the data. Two datasets of the World Bank are considered. One dataset, available at http://data.worldbank.org/indicator/NY.GDP.MKTP.CD, lists the GDP of the world's countries in current US dollars, for various years. The use of a common currency allows us to compare GDP values across countries. The other dataset, available at http://data.worldbank.org/indicator/SP.DYN.LE00.IN, lists the health expenditure of the world's countries.
###Code
import warnings
warnings.simplefilter('ignore', FutureWarning)
import pandas as pd
YEAR = 2018
GDP_INDICATOR = 'NY.GDP.PCAP.CD'
gdpReset = pd.read_csv('WB 2018 PC.csv')
LIFE_INDICATOR = 'SH.XPD.CHEX.PP.CD'
lifeReset = pd.read_csv('WB HE 2018.csv')
lifeReset.head()
###Output
_____no_output_____
###Markdown
Cleaning the data. Countries with empty cells are dropped.
###Code
gdpCountries = gdpReset.dropna()
lifeCountries = lifeReset.dropna()
COUNTRY = 'Country Name'
headings = [COUNTRY, GDP_INDICATOR]
gdpClean = gdpCountries[headings]
gdpClean.head()
###Output
_____no_output_____
###Markdown
The unnecessary columns can be dropped
###Code
LIFE = 'Health Expenditure'
lifeCountries[LIFE] = lifeCountries[LIFE_INDICATOR].apply(round)
headings = [COUNTRY, LIFE]
lifeClean = lifeCountries[headings]
lifeClean.head()
###Output
<ipython-input-6-196b4aafe5ac>:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
lifeCountries[LIFE] = lifeCountries[LIFE_INDICATOR].apply(round)
###Markdown
Combining the dataThe tables are combined through an inner join on the common 'country' column.
###Code
gdpVsLife = pd.merge(gdpClean, lifeClean, on=COUNTRY, how='inner')
gdpVsLife.head()
###Output
_____no_output_____
###Markdown
Calculating the correlation. To measure whether health expenditure and GDP grow together, the Spearman rank correlation coefficient is used. It is a number from -1 (perfect inverse rank correlation: if one indicator increases, the other decreases) to 1 (perfect direct rank correlation: if one indicator increases, so does the other), with 0 meaning there is no rank correlation. A perfect correlation doesn't imply any cause-effect relation between the two indicators. A p-value below 0.05 means the correlation is statistically significant.
###Code
from scipy.stats import spearmanr
gdpColumn = gdpVsLife[GDP_INDICATOR]
lifeColumn = gdpVsLife[LIFE]
(correlation, pValue) = spearmanr(gdpColumn, lifeColumn)
print('The correlation is', correlation)
if pValue < 0.05:
print('It is statistically significant.')
else:
print('It is not statistically significant.')
###Output
The correlation is 0.08069238917304565
It is not statistically significant.
###Markdown
Showing the data. Measures of correlation can be misleading, so it is best to see the overall picture with a scatterplot. The GDP axis uses a logarithmic scale to better display the vast range of GDP values.
###Code
%matplotlib inline
gdpVsLife.plot(x=GDP_INDICATOR, y=LIFE, kind='scatter', grid=True, logx=True, figsize=(10, 4))
###Output
_____no_output_____
###Markdown
The plot and the coefficient above show there is no clear correlation between GDP and health expenditure in this dataset: the Spearman coefficient is close to zero and not statistically significant, and countries with similar GDP values show a wide range of expenditure. Comparing the 10 countries with the lowest GDP and the 10 countries with the lowest health expenditure also shows that a single indicator is a rather crude measure: the population size should be taken into account for a more precise definition of what 'poor' and 'rich' means, and, looking at the countries below, droughts and internal conflicts may also play a role.
###Code
# the 10 countries with lowest GDP
gdpVsLife.sort_values(GDP_INDICATOR).head(10)
# the 10 countries with the lowest health expenditure
gdpVsLife.sort_values(LIFE).head(10)
###Output
_____no_output_____ |
Zad4/zad4-batchgcd.ipynb | ###Markdown
Example RSA public key (DER encoding, field by field):
- `30`: the value 30 is used to signify 'sequence'; this is a container that carries a list of DER-encoded objects.
- `82 01 0a`: how many bytes are in the DER encoding of the object, not counting the object type and the length field.
- `02`: the value 02 is used to signify 'integer'.
- `82 01 01`: this is the length of the integer.
- `0a 02 82 01 01 00 c0 9f fe 42 c2 78 0e ac 01 c7 cf dd ce ef d4 73 f1 de 9c 00 5c 40 51 c0 55 e2 0b 9b 22 55 ea bf 22 1c 28 3a d4 fc 8f c8 47 92 89 4f 1c 48 ab 9c 4f 1b 56 71 d2 17 4f 62 04 b4 c3 69 2f 96 cf 5c 76 ba 21 b6 b0 0d 94 d1 50 59 3e c1 a4 b4 8a c2 71 c7 b4 c9 ad 2a 52 96 22 0b e2 65 34 d7 37 d2 4c 11 2f dd 64 91 31 c2 30 33 56 c5 c7 65 61 8e 5a cd 43 01 2e 9f 96 57 92 78 f9 70 e6 e4 bc 9e 27 59 70 f1 71 c9 14 7d e3 7e 34 16 0d 52 ee 7e 36 cf d1 de df e7 2b a1 30 64 92 da 98 bf 33 8e 2f 55 11 38 ba 86 08 50 fa e3 59 c4 1b 03 1c f7 68 3b dd a9 20 38 d7 63 b6 ed 44 42 68 bb 10 a0 5c fa 8d c9 3a 9a aa bd 82 56 24 a2 13 dd 27 5e e5 9b f2 aa bd 17 4e f8 f6 67 b1 f9 b8 71 63 63 f8 9d 90 ed be 71 5e 75 b8 b8 b8 6c 99 83 06 06 3b 2e d4 0c 09 1c 46 5d 03 99 72 c8 35 a1 ba 93`: these 257 bytes are the actual integer, in big-endian format. This is the modulus (n).
- `02`: this signifies that the second element in the sequence encodes an integer.
- `03`: this signifies that this integer is encoded in 3 bytes.
- `01 00 01`: this signifies that the encoded integer is 0x10001 == 65537. This is the public exponent (e).
###Code
pubkey = rsa.PublicKey.load_pkcs1(df_certs['rsa_PEM'][0])
pubkey.n
pubkey_list = []
for site in df_certs['rsa_PEM']:
pubkey = rsa.PublicKey.load_pkcs1(site)
pubkey_list.append(pubkey)
###Output
_____no_output_____
###Markdown
e is always 65537
###Code
numbers = []
for pubkey in pubkey_list:
numbers.append(pubkey.n)
import math
import numpy as np

def producttree(X):
    # Build a product tree: level 0 is the input list, each following level holds the
    # pairwise products, until a single root (the product of all moduli) remains.
    result = [X]
    while len(X) > 1:
        X = [np.prod(X[int(i*2):int((i+1)*2)]) for i in range(int((len(X)+1)/2))]
        result.append(X)
    return result

def batchgcd_faster(X):
    # Batch GCD: for every modulus n_i compute gcd(n_i, product of all the other moduli)
    # by descending the product tree with a remainder tree.
    prods = producttree(X)
    R = prods.pop()
    while prods:
        X = prods.pop()
        R = [R[math.floor(i/2)] % X[i]**2 for i in range(len(X))]
    return [math.gcd(r//n, n) for r, n in zip(R, X)]
len(numbers)
%%time
result = batchgcd_faster(numbers[:10])
%%time
result = batchgcd_faster(numbers[:100])
%%time
result = batchgcd_faster(numbers[:5000])
[(i, numbers[i]) for i, e in enumerate(result) if e != 1]
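# Hedged sketch (not part of the original notebook): any non-trivial gcd reported by batch GCD
# is a factor shared with another key, so the corresponding modulus can be fully factored.
for i, g in enumerate(result):
    if g != 1:
        p, q = g, numbers[i] // g
        assert p * q == numbers[i]
        print("modulus {} factored into factors of {} and {} bits".format(i, p.bit_length(), q.bit_length()))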
%%time
tmp = producttree(numbers[:2000])
len(tmp[0])
###Output
_____no_output_____ |
Conv2d_np.ipynb | ###Markdown
Performance Test
###Code
import numpy as np
import matplotlib.pyplot as plt
import mlearn.functional as F
import mlearn as mlearn
import time
import torch
from time import time
import sys
np.set_printoptions(4)
torch.set_printoptions(4)
print(np.__version__)
EPOCHS = 100
_inputs = np.random.randn(EPOCHS,32,3,28,28)
_weights = np.random.randn(8,3,3,3)
_bias = np.random.randn((8))
padding = 3
stride = 2
# 3.15
# 4.29
inputs = mlearn.tensor(_inputs)
weights = mlearn.tensor(_weights,requires_grad=True)
bias = mlearn.tensor(_bias,requires_grad=True)
start = time()
for e,batch in enumerate(inputs):
c_out = F.conv_2d_experiment(batch,weights,bias,stride,padding)
mlearn_conv = (time() - start)
c_y = c_out.sum()
c_y.backward()
inputs = torch.tensor(_inputs)
w = torch.tensor(_weights,requires_grad=True)
b = torch.tensor(_bias,requires_grad=True)
start = time()
for e,batch in enumerate(inputs):
out = torch.nn.functional.conv2d(batch,w,b,stride,padding)
torch_conv = (time() - start)
y = out.sum()
y.backward()
print("mlearn -> %.5f"%mlearn_conv)
print("torch -> %.5f"%torch_conv)
print("比Pytorch慢 %.5f倍"%(mlearn_conv/torch_conv))
# check if result is the same
np.testing.assert_almost_equal(c_out.data,out.detach().numpy(),10)
np.testing.assert_almost_equal(weights.grad.numpy(), w.grad.numpy(),10)
np.testing.assert_almost_equal(bias.grad.numpy(), b.grad.numpy(),10)
###Output
_____no_output_____ |
google_colab_statistics.ipynb | ###Markdown
Descriptive Statistics
###Code
data(mtcars)
head(mtcars)
# Arithmetic mean
mean(mtcars$mpg)
# Median
median(mtcars$mpg)
# Variance
var(mtcars$mpg)
# Standard deviation
sd(mtcars$mpg)
# Minimum value
min(mtcars$mpg)
# Maximum value
max(mtcars$mpg)
# R has no dedicated function for the standard error of the mean, but the existing
# functions are enough: the standard error of the mean is the standard deviation
# divided by the square root of the sample size.
SEmpg = sd(mtcars$mpg)/sqrt(length(mtcars$mpg))
SEmpg
# Quartiles. With the default settings this command returns the minimum and maximum
# values together with the three quartiles, i.e. the values that split the data
# into four equal parts.
quantile(mtcars$mpg)
# The difference between the first and third quartiles is called the interquartile range (IQR).
# The IQR is a robust analogue of the variance and can be computed in R with:
IQR(mtcars$mpg)
# Deciles
quantile(mtcars$mpg, p = seq(0,1,0.1))
# Vector length (both present and missing values)
length(mtcars$mpg)
# Number of non-missing values
sum(!is.na(mtcars$mpg))
# Positions of the minimum and maximum values. Looking up elements.
which.min(mtcars$mpg)
which.max(mtcars$mpg)
rownames(mtcars)[which.min(mtcars$mpg)]
rownames(mtcars)[which.max(mtcars$mpg)]
# Mean engine displacement for models with automatic vs. manual transmission.
tapply(X=mtcars$disp, INDEX = mtcars$am, FUN = mean)
# Overall summary
summary(mtcars)
###Output
_____no_output_____ |
LEGALST-123/lab5/lab5_Bootstrap and Confidence Intervals.ipynb | ###Markdown
Data For this lab, we'll be using the American National Election Studies (ANES) data from the 2016 election. The codebook is available here: http://www.electionstudies.org/studypages/anes_pilot_2016/anes_pilot_2016_CodebookUserGuide.pdf
###Code
anes = pd.read_csv('../data/anes/anes_pilot_2016.csv')
anes.head()
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis Refer back to lab 1 for help plotting histograms. Write code that plots a histogram of the "Feeling Thermometer - Barack Obama" variable.
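One possible way to fill in the histogram cell below (a sketch; it assumes pandas plotting is available and that out-of-range codes should be dropped, since valid thermometer scores lie in [0, 100]):

```python
obama = anes['ftobama']
obama = obama[(obama >= 0) & (obama <= 100)]   # keep only valid thermometer scores
obama.plot.hist(bins=20)
```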
###Code
# Histogram code
###Output
_____no_output_____
###Markdown
What is the shape of the plot? Report the 25th, 50th, and 75th percentiles. Keep in mind that valid answers have domain [0,100].
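A possible sketch for the percentile cell below (reusing the filtered `obama` series from the histogram step):

```python
obama = anes['ftobama']
obama = obama[(obama >= 0) & (obama <= 100)]
print('25th:', obama.quantile(0.25))
print('50th:', obama.quantile(0.50))
print('75th:', obama.quantile(0.75))
```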
###Code
# Save column into an object called 'obama'
# Find 25th percentile
# Find 50th percentile
# Find 75th percentile
###Output
_____no_output_____
###Markdown
What does this distribution tell you about the American people's thoughts on Obama? Question 1 Now do the same for "Feeling Thermometer - Donald Trump."
###Code
# Histogram
# Save an object called 'trump'
# Find 25th percentile
# Find 50th percentile
# Find 75th percentile
###Output
_____no_output_____
###Markdown
How do the two distributions compare? Both distributions have a significant amount of their points at the two extremes (0 or 100). What does this tell you about the standard deviation of the data? Do the American people have strong opinions regarding these two candidates? Bootstrap Write code that resamples the "ftobama" distribution, then plot a histogram. Be sure to resample the number of rows that exist in the dataset, with replacement.
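A hedged sketch for the resampling cell below (one bootstrap resample of the same number of rows, drawn with replacement):

```python
n_rows = len(anes)
resampled = anes['ftobama'].sample(n=n_rows, replace=True)
resampled.plot.hist(bins=20)
print('resampled median:', resampled.median())
```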
###Code
# Find number of rows
# Resample the data
# Histogram
# 50th percentile/median
###Output
_____no_output_____
###Markdown
Question 2 How does the resampled median compare to the original median? Does this result make sense? Now, define a function titled "bootstrap_median" that takes the original sample, the column name we're concerned with, and the number of resamples as arguments. The function should calculate simulated medians and return them in an array.
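One way to complete the `bootstrap_median` scaffold below (a sketch; the use of pandas' `sample` with replacement is an assumption about the intended approach):

```python
def bootstrap_median(original_sample, label, replications):
    """Returns an array of bootstrapped sample medians."""
    just_one_column = original_sample.loc[:, label]
    medians = []
    for i in np.arange(replications):
        resample = just_one_column.sample(n=len(just_one_column), replace=True)
        medians.append(resample.median())
    return np.array(medians)
```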
###Code
# Define a function "bootstrap_median" with arguments "original_sample", "label", and "replications"
# that returns an array with the medians found in replications
def bootstrap_median(original_sample, label, replications):
"""Returns an array of bootstrapped sample medians:
original_sample: table containing the original sample
label: label of column containing the variable
replications: number of bootstrap samples
"""
just_one_column = original_sample.loc[:, label]
medians = []
for i in np.arange(replications):
...
return ...
###Output
_____no_output_____
###Markdown
Replicate the bootstrap 10,000 times, then save the results.
###Code
# Resample 10,000 times
###Output
_____no_output_____
###Markdown
Plot a histogram of the resampled medians, and plot the 95% confidence interval. (hint: to plot the confidence interval, try using the 2.5 percentile and 97.5 percentile values in a numpy array)
###Code
# Plot medians
...
plots.plot(np.array([pd.Series(medians).quantile(q=.025), pd.Series(medians).quantile(q=.975)]), np.array([0, 0]), color='yellow', lw=10, zorder=1)
###Output
_____no_output_____
###Markdown
Question 3 What can you infer about the likely population median given the above distribution? Finally, write a simulation that constructs 100 confidence intervals. (Remember to use the 2.5 and 97.5 percentiles!)
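A possible way to complete the loop in the cell below (a sketch; using 1,000 bootstrap replications per interval is an assumption made to keep the run time manageable):

```python
left_ends = []
right_ends = []
for i in np.arange(100):
    medians = bootstrap_median(anes, 'ftobama', 1000)
    left_ends.append(pd.Series(medians).quantile(q=.025))
    right_ends.append(pd.Series(medians).quantile(q=.975))
intervals = pd.DataFrame(data={"Left": left_ends, "Right": right_ends})
```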
###Code
# Construct 100 confidence intervals
left_ends = []
right_ends = []
for i in np.arange(100):
...
intervals = pd.DataFrame(data={"Left": left_ends, "Right": right_ends})
###Output
_____no_output_____
###Markdown
Question 4 Finally, plot 100 confidence intervals (stacked on top of each other). What can you conclude about the median?
###Code
# Plot the confidence intervals
plots.figure(figsize=(8,8))
for i in np.arange(100):
ends = ...
plots.plot(ends, np.array([i + 1, i + 1]), color='gold')
plots.xlabel('Median')
plots.ylabel('Replication')
plots.title('Population Median and Intervals of Estimates');
###Output
_____no_output_____ |
tutorials/Magic Commands.ipynb | ###Markdown
Magic CommandsPlease see: http://ipython.readthedocs.io/en/stable/interactive/magics.html for further documentation of Jupyter Notebook/iPython Cell magic commands
###Code
%env
%history
%lsmagic
lsmagic
%man git
%pwd
%system
%time
%timeit
print('Hello')
%profile
%%bash
ls
%%javascript
alert( 'Hello, world!' );
%%ruby
puts 'Hello World!'
%%html
<button type="button">Click Me!</button>
###Output
_____no_output_____ |
Hexatonic Collections.ipynb | ###Markdown
Hexatonic Collections Included Collections- HEX0,1 (Pitch Classes C, E, or G) in 1-3 ordering- HEX1,2 (PCs C, F, A) in 1-3 ordering- HEX2,3 (PCs D, F Bb) in 1-3 ordering- HEX3,4 (PCs Eb, G, B) in 1-3 ordering- HEX0,1 (PCs C, F, A) in 3-1 ordering ("Augmented Scale")- HEX1,2 (PCs D, F, Bb) in 3-1 ordering ("Augmented Scale")- HEX2,3 (PCs Eb, G, B) in 3-1 ordering ("Augmented Scale")- HEX3,4 (PCs E, Ab C) in 3-1 ordering ("Augmented Scale") Remaining Collections- Prometheus Scale- Blues Scale- Tritone Scale Hexatonic Collections with their own Notebook- Wholetone Scale Importing Necessary Libraries
###Code
import music21, copy
from IPython.display import Markdown, display
###Output
_____no_output_____
###Markdown
Functions Printing to Markdown
###Code
# Printing to Markdown
# ------------------------
def print_md(string):
"""
Pretty print function to print to markdown.
"""
display(Markdown(string))
###Output
_____no_output_____
###Markdown
Time Signature Fitting of Collection
###Code
# Finding a time signature based upon a pitch class collection's length
# ----------------------------------------------------------------------
def find_time_signature(scale):
"""
Finds the best fitting time signature of a given scale in a music21
stream.
"""
scale_stream = scale.flat.notes
m = music21.stream.Measure()
[m.insert(i.offset, i) for i in scale_stream]
return music21.meter.bestTimeSignature(m)
###Output
_____no_output_____
###Markdown
Analyzing a Hexatonic Collection from a music21 Stream
###Code
# Finding the mode and ordering of a hexatonic collection
# -------------------------------------------------------
def analyze_hexatonic_collection(collection):
"""
Finding the hexatonic mode and ordering for chosen hexatonic collection
through analysis, meaning that a lookup table is not being used.
If you were to use it in a project consulting a lookup table would
benefit computing time.
"""
# convert the hexatonic collection into a note name sequence
mode_notes = [n.name for n in collection]
# converting notes in the hexatonic collection to pitch classes
pitch_classes = [music21.pitch.Pitch(n).pitchClass for n in mode_notes]
# interval between the first and second note
order = music21.interval.notesToChromatic(collection[0],collection[1])
# the range of possible hexatonic collections
the_range = 5
if(order.directed == 1):
# ordering of hexatonic collection with interval from first to second note
ordering = "1-3"
# checking for discrete pitch classes in hexatonic collection
for x in range(the_range):
if(all(n in pitch_classes for n in [x, x + 1])):
hex_mode = x, x+1
break
else:
# ordering of hexatonic collection with interval from first to second note
ordering = "3-1"
# checking for discrete pitch classes in hexatonic collection
for x in range(the_range):
if(all(n in pitch_classes for n in [x, x + 3])):
hex_mode = (x + 3) % 4, ((x + 3) % 4) + 1
break
return hex_mode, mode_notes[0], pitch_classes[0], ordering
###Output
_____no_output_____
###Markdown
Creating a Hexatonic Collection Using music21's AbstractScale.buildNetworkFromPitches()
###Code
# Hexatonic Collections
# ----------------------------------------------------------------------
# Since there is no pre-cooked hexatonic collection à la Straus in
# music21, we have to create a new scale network to create a hexatonic
# collection. HEX3,4 (starting on pitch class C) in 3-1 ordering is also
# known as the "Augmented scale."
def get_hexatonic_collection(pc="C4", ordering="1-3"):
"""
Builds a hexatonic collection based on a pitch center and either 1-3
or 3-1 ordering. The default pitch class for the pitch center in C4,
while the default ordering is 3-1.
"""
# container for custom scale based on a pitch sequence model
new_scale = music21.scale.AbstractScale()
# deciding what ordering the new scale will have: 1-3 (default)
# or 3-1
if(ordering == '3-1'):
new_scale.buildNetworkFromPitches(
["C#4", "E4", "F4", "G#4", "A4", "C5"])
else:
new_scale.buildNetworkFromPitches(
["C4", "C#4", "E4", "F4", "G#4", "A4"])
# creating a hexatonic collection based on a given pitch center
hex_col = new_scale._net.realizePitch(pc)
# opening a stream to hold a hexatonic collection
hexatonic_collection = music21.stream.Stream()
# filling the stream with pitch class
[hexatonic_collection.append(music21.note.Note(p)) for p in hex_col]
# pitch center
pitch_center = music21.note.Note(pc).name
# return the collection
return hexatonic_collection, pitch_center, ordering
###Output
_____no_output_____
###Markdown
Example Creating a Hexatonic Collection
###Code
# Creating a hexatonic collection, providing the pitch center, and ordering
hexatonic_collection_1 = get_hexatonic_collection('f#4', '3-1')
# Creating a (backup) copy of the music21 stream
hexatonic_collection_2 = copy.deepcopy(hexatonic_collection_1[0])
# Finding a time signature that can hold the collection
hexatonic_collection_1[0].insert(0, find_time_signature(hexatonic_collection_1[0]))
# Print a header
print_md("### " + hexatonic_collection_1[1] + " Hexatonic Collection in " +
hexatonic_collection_1[2] + " Ordering")
# Printing the collection in Common Music Notation (CMN) format
hexatonic_collection_1[0].show()
###Output
_____no_output_____
###Markdown
Example Analyzing a Hexatonic Collection
###Code
# analyzing mode, pitch center, and ordering of a given hexatonic collection
analysis_results = analyze_hexatonic_collection(hexatonic_collection_2)
analysis_results
###Output
_____no_output_____
###Markdown
Formatting Analysis Results
###Code
# pretty printing results
print_md("*Mode*: HEX<sub>{},".format(analysis_results[0][0]) +
"{}</sub><br /> ".format(analysis_results[0][1]) +
"*Pitch Center*: {} ".format(analysis_results[1]) +
"({})<br /> ".format(analysis_results[2]) +
"*Ordering*: {}".format(analysis_results[3]))
###Output
_____no_output_____ |
others/a-comprehensive-ml-workflow-with-python.ipynb | ###Markdown
A Comprehensive Machine Learning Workflow with Python There are plenty of courses and tutorials that can help you learn machine learning from scratch but here in Kaggle, I want to solve Titanic competition a popular machine learning Dataset as a comprehensive workflow with python packages. After reading, you can use this workflow to solve other real problems and use it as a template to deal with machine learning problems.last update: 30/01/2019> You may be interested have a look at 10 Steps to Become a Data Scientist: 1. [Leren Python](https://www.kaggle.com/mjbahmani/the-data-scientist-s-toolbox-tutorial-1)2. [Python Packages](https://www.kaggle.com/mjbahmani/the-data-scientist-s-toolbox-tutorial-2)3. [Mathematics and Linear Algebra](https://www.kaggle.com/mjbahmani/linear-algebra-for-data-scientists)4. [Programming & Analysis Tools](https://www.kaggle.com/mjbahmani/20-ml-algorithms-15-plot-for-beginners)5. [Big Data](https://www.kaggle.com/mjbahmani/a-data-science-framework-for-quora)6. [Data visualization](https://www.kaggle.com/mjbahmani/top-5-data-visualization-libraries-tutorial)7. [Data Cleaning](https://www.kaggle.com/mjbahmani/machine-learning-workflow-for-house-prices)8. [How to solve a Problem?](https://www.kaggle.com/mjbahmani/the-data-scientist-s-toolbox-tutorial-2)9. You are in the ninth step10. [Deep Learning](https://www.kaggle.com/mjbahmani/top-5-deep-learning-frameworks-tutorial)---------------------------------------------------------------------you can Fork and Run this kernel on Github:> [ GitHub](https://github.com/mjbahmani/10-steps-to-become-a-data-scientist)------------------------------------------------------------------------------------------------------------- **I hope you find this kernel helpful and some UPVOTES would be very much appreciated** ----------- Notebook Content1. [Introduction](1) 1. [Courses](11) 1. [Kaggle kernels](12) 1. [Ebooks](13) 1. [CheatSheet](14)1. [Machine learning](2) 1. [Machine learning workflow](21) 1. [Real world Application Vs Competitions](22)1. [Problem Definition](3) 1. [Problem feature](31) 1. [Why am I using Titanic dataset](331) 1. [Aim](32) 1. [Variables](33) 1. [Types of Features](331) 1. [Categorical](3311) 1. [Ordinal](3312) 1. [Continous](3313)1. [ Inputs & Outputs](4) 1. [Inputs ](41) 1. [Outputs](42)1. [Installation](5) 1. [ jupyter notebook](51) 1. [What browsers are supported?](511) 1. [ kaggle kernel](52) 1. [Colab notebook](53) 1. [install python & packages](54) 1. [Loading Packages](55)1. [Exploratory data analysis](6) 1. [Data Collection](61) 1. [Visualization](62) 1. [Scatter plot](621) 1. [Box](622) 1. [Histogram](623) 1. [Multivariate Plots](624) 1. [Violinplots](625) 1. [Pair plot](626) 1. [Kde plot](25) 1. [Joint plot](26) 1. [Andrews curves](27) 1. [Heatmap](28) 1. [Radviz](29) 1. [Data Preprocessing](30) 1. [Data Cleaning](31)1. [Model Deployment](7) 1. [Families of ML algorithms](71) 1. [Prepare Features & Targets](72) 1. [Accuracy and precision](73) 1. [RandomForestClassifier](74) 1. [prediction](741) 1. [XGBoost](75) 1. [prediction](751) 1. [Logistic Regression](76) 1. [prediction](761) 1. [DecisionTreeRegressor ](77) 1. [HuberRegressor](78) 1. [ExtraTreeRegressor](79)1. [Conclusion](8)1. 
[References](9) 1- IntroductionThis is a **comprehensive ML techniques with python** , that I have spent for more than two months to complete it.It is clear that everyone in this community is familiar with Titanic dataset but if you need to review your information about the dataset please visit this [link](https://www.kaggle.com/c/titanic/data).I have tried to help **beginners** in Kaggle how to face machine learning problems. and I think it is a great opportunity for who want to learn machine learning workflow with python completely.I have covered most of the methods that are implemented for **Titanic** until **2018**, you can start to learn and review your knowledge about ML with a perfect dataset and try to learn and memorize the workflow for your journey in Data science world. 1-1 CoursesThere are a lot of online courses that can help you develop your knowledge, here I have just listed some of them:1. [Machine Learning Certification by Stanford University (Coursera)](https://www.coursera.org/learn/machine-learning/)2. [Machine Learning A-Z™: Hands-On Python & R In Data Science (Udemy)](https://www.udemy.com/machinelearning/)3. [Deep Learning Certification by Andrew Ng from deeplearning.ai (Coursera)](https://www.coursera.org/specializations/deep-learning)4. [Python for Data Science and Machine Learning Bootcamp (Udemy)](Python for Data Science and Machine Learning Bootcamp (Udemy))5. [Mathematics for Machine Learning by Imperial College London](https://www.coursera.org/specializations/mathematics-machine-learning)6. [Deep Learning A-Z™: Hands-On Artificial Neural Networks](https://www.udemy.com/deeplearning/)7. [Complete Guide to TensorFlow for Deep Learning Tutorial with Python](https://www.udemy.com/complete-guide-to-tensorflow-for-deep-learning-with-python/)8. [Data Science and Machine Learning Tutorial with Python – Hands On](https://www.udemy.com/data-science-and-machine-learning-with-python-hands-on/)9. [Machine Learning Certification by University of Washington](https://www.coursera.org/specializations/machine-learning)10. [Data Science and Machine Learning Bootcamp with R](https://www.udemy.com/data-science-and-machine-learning-bootcamp-with-r/)11. [Creative Applications of Deep Learning with TensorFlow](https://www.class-central.com/course/kadenze-creative-applications-of-deep-learning-with-tensorflow-6679)12. [Neural Networks for Machine Learning](https://www.class-central.com/mooc/398/coursera-neural-networks-for-machine-learning)13. [Practical Deep Learning For Coders, Part 1](https://www.class-central.com/mooc/7887/practical-deep-learning-for-coders-part-1)14. [Machine Learning](https://www.cs.ox.ac.uk/teaching/courses/2014-2015/ml/index.html) 1-2 Kaggle kernelsI want to thanks **Kaggle team** and all of the **kernel's authors** who develop this huge resources for Data scientists. I have learned from The work of others and I have just listed some more important kernels that inspired my work and I've used them in this kernel:1. [https://www.kaggle.com/ash316/eda-to-prediction-dietanic](https://www.kaggle.com/ash316/eda-to-prediction-dietanic)2. [https://www.kaggle.com/mrisdal/exploring-survival-on-the-titanic](https://www.kaggle.com/mrisdal/exploring-survival-on-the-titanic)3. [https://www.kaggle.com/yassineghouzam/titanic-top-4-with-ensemble-modeling](https://www.kaggle.com/yassineghouzam/titanic-top-4-with-ensemble-modeling)4. 
[https://www.kaggle.com/ldfreeman3/a-data-science-framework-to-achieve-99-accuracy](https://www.kaggle.com/ldfreeman3/a-data-science-framework-to-achieve-99-accuracy)5. [https://www.kaggle.com/startupsci/titanic-data-science-solutions](https://www.kaggle.com/startupsci/titanic-data-science-solutions)6. [scikit-learn-ml-from-start-to-finish](https://www.kaggle.com/jeffd23/scikit-learn-ml-from-start-to-finish)[go to top](top) 1-3 EbooksSo you love reading , here is **10 free machine learning books**1. [Probability and Statistics for Programmers](http://www.greenteapress.com/thinkstats/)2. [Bayesian Reasoning and Machine Learning](http://web4.cs.ucl.ac.uk/staff/D.Barber/textbook/091117.pdf)2. [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/)2. [Understanding Machine Learning](http://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning/index.html)2. [A Programmer’s Guide to Data Mining](http://guidetodatamining.com/)2. [Mining of Massive Datasets](http://infolab.stanford.edu/~ullman/mmds/book.pdf)2. [A Brief Introduction to Neural Networks](http://www.dkriesel.com/_media/science/neuronalenetze-en-zeta2-2col-dkrieselcom.pdf)2. [Deep Learning](http://www.deeplearningbook.org/)2. [Natural Language Processing with Python](https://www.researchgate.net/publication/220691633_Natural_Language_Processing_with_Python)2. [Machine Learning Yearning](http://www.mlyearning.org/) 1-4 Cheat SheetsData Science is an ever-growing field, there are numerous tools & techniques to remember. It is not possible for anyone to remember all the functions, operations and formulas of each concept. That’s why we have cheat sheets. But there are a plethora of cheat sheets available out there, choosing the right cheat sheet is a tough task.[Top 28 Cheat Sheets for Machine Learning](https://www.analyticsvidhya.com/blog/2017/02/top-28-cheat-sheets-for-machine-learning-data-science-probability-sql-big-data/) [Go to top](top) 2- Machine LearningMachine Learning is a field of study that gives computers the ability to learn without being explicitly programmed.**Arthur Samuel, 1959** 2-1 Machine Learning WorkflowIf you have already read some [machine learning books](https://github.com/mjbahmani/10-steps-to-become-a-data-scientist/tree/master/Ebooks). You have noticed that there are different ways to stream data into machine learning.Most of these books share the following steps:1. Define Problem1. Specify Inputs & Outputs1. Exploratory data analysis1. Data Collection1. Data Preprocessing1. Data Cleaning1. Visualization1. Model Design, Training, and Offline Evaluation1. Model Deployment, Online Evaluation, and Monitoring1. Model Maintenance, Diagnosis, and RetrainingOf course, the same solution can not be provided for all problems, so the best way is to create a **general framework** and adapt it to new problem.**You can see my workflow in the below image** : **Data Science has so many techniques and procedures that can confuse anyone.** 2-1 Real world Application Vs CompetitionsWe all know that there are differences between real world problem and competition problem. The following figure that is taken from one of the courses in coursera, has partly made this comparison As you can see, there are a lot more steps to solve in real problems. [Go to top](top) 3- Problem DefinitionI think one of the important things when you start a new machine learning project is Defining your problem. 
that means you should understand business problem.( **Problem Formalization**)Problem Definition has four steps that have illustrated in the picture below: 3-1 Problem FeatureThe sinking of the Titanic is one of the most infamous shipwrecks in history. **On April 15, 1912**, during her maiden voyage, the Titanic sank after colliding with an iceberg, killing **1502 out of 2224** passengers and crew. That's why the name DieTanic. This is a very unforgetable disaster that no one in the world can forget.It took about $7.5 million to build the Titanic and it sunk under the ocean due to collision. The Titanic Dataset is a very good dataset for begineers to start a journey in data science and participate in competitions in Kaggle.ٌWe will use the classic titanic data set. This dataset contains information about **11 different variables**:1. Survival1. Pclass1. Name1. Sex1. Age1. SibSp1. Parch1. Ticket1. Fare1. Cabin1. Embarked> Note :You must answer the following question:How does your company expact to use and benfit from your model. 3-3-1 Why am I using Titanic dataset1. This is a good project because it is so well understood.1. Attributes are numeric and categorical so you have to figure out how to load and handle data.1. It is a ML problem, allowing you to practice with perhaps an easier type of supervised learning algorithm.1. We can define problem as clustering(unsupervised algorithm) project too.1. Because we love **Kaggle** :-) . 3-2 AimIt is your job to predict if a **passenger** survived the sinking of the Titanic or not. For each PassengerId in the test set, you must predict a 0 or 1 value for the Survived variable. 3-3 Variables1. **Age** : 1. Age is fractional if less than 1. If the age is estimated, is it in the form of xx.51. **Sibsp** : 1. The dataset defines family relations in this way... a. Sibling = brother, sister, stepbrother, stepsister b. Spouse = husband, wife (mistresses and fiancés were ignored)1. **Parch**: 1. The dataset defines family relations in this way... a. Parent = mother, father b. Child = daughter, son, stepdaughter, stepson c. Some children travelled only with a nanny, therefore parch=0 for them.1. **Pclass** : * A proxy for socio-economic status (SES) * 1st = Upper * 2nd = Middle * 3rd = Lower1. **Embarked** : * nominal datatype 1. **Name**: * nominal datatype . It could be used in feature engineering to derive the gender from title1. **Sex**: * nominal datatype 1. **Ticket**: * that have no impact on the outcome variable. Thus, they will be excluded from analysis1. **Cabin**: * is a nominal datatype that can be used in feature engineering1. **Fare**: * Indicating the fare1. **PassengerID**: * have no impact on the outcome variable. Thus, it will be excluded from analysis1. **Survival**: * **[dependent variable](http://www.dailysmarty.com/posts/difference-between-independent-and-dependent-variables-in-machine-learning)** , 0 or 1 3-3-1 Types of Features 3-3-1-1 CategoricalA categorical variable is one that has two or more categories and each value in that feature can be categorised by them. for example, gender is a categorical variable having two categories (male and female). Now we cannot sort or give any ordering to such variables. They are also known as Nominal Variables.1. **Categorical Features in the dataset: Sex,Embarked.** 3-3-1-2 OrdinalAn ordinal variable is similar to categorical values, but the difference between them is that we can have relative ordering or sorting between the values. 
For eg: If we have a feature like Height with values Tall, Medium, Short, then Height is a ordinal variable. Here we can have a relative sort in the variable.1. **Ordinal Features in the dataset: PClass** 3-3-1-3 Continous:A feature is said to be continous if it can take values between any two points or between the minimum or maximum values in the features column.1. **Continous Features in the dataset: Age** [Go to top](top) 4- Inputs & Outputs 4-1 InputsWhat's our input for this problem: 1. train.csv 1. test.csv 4-2 Outputs1. Your score is the percentage of passengers you correctly predict. This is known simply as "**accuracy**”.The Outputs should have exactly **2 columns**: 1. PassengerId (sorted in any order) 1. Survived (contains your binary predictions: 1 for survived, 0 for deceased) 5-Installation Windows:1. Anaconda (from https://www.continuum.io) is a free Python distribution for SciPy stack. It is also available for Linux and Mac.1. Canopy (https://www.enthought.com/products/canopy/) is available as free as well as commercial distribution with full SciPy stack for Windows, Linux and Mac.1. Python (x,y) is a free Python distribution with SciPy stack and Spyder IDE for Windows OS. (Downloadable from http://python-xy.github.io/) Linux:1. Package managers of respective Linux distributions are used to install one or more packages in SciPy stack.1. For Ubuntu Users:sudo apt-get install python-numpy python-scipy python-matplotlibipythonipythonnotebookpython-pandas python-sympy python-nose 5-1 Jupyter notebookI strongly recommend installing **Python** and **Jupyter** using the **[Anaconda Distribution](https://www.anaconda.com/download/)**, which includes Python, the Jupyter Notebook, and other commonly used packages for scientific computing and data science.1. First, download Anaconda. We recommend downloading Anaconda’s latest Python 3 version.2. Second, install the version of Anaconda which you downloaded, following the instructions on the download page.3. Congratulations, you have installed Jupyter Notebook! To run the notebook, run the following command at the Terminal (Mac/Linux) or Command Prompt (Windows): > jupyter notebook> 5-2 Kaggle KernelKaggle kernel is an environment just like you use jupyter notebook, it's an **extension** of the where in you are able to carry out all the functions of jupyter notebooks plus it has some added tools like forking et al. 5-3 Colab notebook**Colaboratory** is a research tool for machine learning education and research. It’s a Jupyter notebook environment that requires no setup to use. 5-3-1 What browsers are supported?Colaboratory works with most major browsers, and is most thoroughly tested with desktop versions of Chrome and Firefox. 5-3-2 Is it free to use?Yes. Colaboratory is a research project that is free to use. 5-3-3 What is the difference between Jupyter and Colaboratory?Jupyter is the open source project on which Colaboratory is based. Colaboratory allows you to use and share Jupyter notebooks with others without having to download, install, or run anything on your own computer other than a browser. [Go to top](top) 5-5 Loading PackagesIn this kernel we are using the following packages: 5-5-1 Import
###Code
from sklearn.metrics import make_scorer, accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn import preprocessing
import matplotlib.pylab as pylab
import matplotlib.pyplot as plt
from pandas import get_dummies
import matplotlib as mpl
import xgboost as xgb
import seaborn as sns
import pandas as pd
import numpy as np
import matplotlib
import warnings
import sklearn
import scipy
import numpy
import json
import sys
import csv
import os
###Output
_____no_output_____
###Markdown
5-5-2 Version
###Code
print('matplotlib: {}'.format(matplotlib.__version__))
print('sklearn: {}'.format(sklearn.__version__))
print('scipy: {}'.format(scipy.__version__))
print('seaborn: {}'.format(sns.__version__))
print('pandas: {}'.format(pd.__version__))
print('numpy: {}'.format(np.__version__))
print('Python: {}'.format(sys.version))
###Output
_____no_output_____
###Markdown
5-5-3 Setup. A few tiny adjustments for better **code readability**
###Code
sns.set(style='white', context='notebook', palette='deep')
pylab.rcParams['figure.figsize'] = 12,8
warnings.filterwarnings('ignore')
mpl.style.use('ggplot')
sns.set_style('white')
%matplotlib inline
###Output
_____no_output_____
###Markdown
6- Exploratory Data Analysis (EDA) In this section, you'll learn how to use graphical and numerical techniques to begin uncovering the structure of your data. * Which variables suggest interesting relationships?* Which observations are unusual?* Analysis of the features!By the end of the section, you'll be able to answer these questions and more, while generating graphics that are both **insightful** and **beautiful**. Then we will review the analytical and statistical operations:* 6-1 Data Collection* 6-2 Visualization* 6-3 Data Preprocessing* 6-4 Data Cleaning >Note: You can change the order of the above steps. 6-1 Data Collection**Data collection** is the process of gathering and measuring data, information or any variables of interest in a standardized and established manner that enables the collector to answer or test hypotheses and evaluate outcomes of the particular collection.[techopedia] I start data collection by loading the training and testing datasets into Pandas DataFrames. [Go to top](top)
###Code
# import train and test to play with it
df_train = pd.read_csv('../input/train.csv')
df_test = pd.read_csv('../input/test.csv')
###Output
_____no_output_____
###Markdown
>Note: * Each **row** is an observation (also known as: sample, example, instance, record)* Each **column** is a feature (also known as: predictor, attribute, independent variable, input, regressor, covariate) After loading the data via **pandas**, we should check out what the content looks like, starting with its Python type:
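Besides checking the Python types in the next cell, a quick look at the first rows and the summary statistics is usually helpful (an optional sketch):

```python
df_train.head()      # first five observations
df_train.describe()  # summary statistics for the numeric features
```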
###Code
type(df_train)
type(df_test)
###Output
_____no_output_____
###Markdown
6-2 Visualization**Data visualization** is the presentation of data in a pictorial or graphical format. It enables decision makers to see analytics presented visually, so they can grasp difficult concepts or identify new patterns.With interactive visualization, you can take the concept a step further by using technology to drill down into charts and graphs for more detail, interactively changing what data you see and how it’s processed.[SAS] In this section I show you **11 plots** with **matplotlib** and **seaborn**, as listed in the picture below: [Go to top](top) 6-2-1 Scatter plot. Purpose: to identify the type of relationship (if any) between two quantitative variables
###Code
# Modify the graph above by assigning each species an individual color.
g = sns.FacetGrid(df_train, hue="Survived", col="Pclass", margin_titles=True,
palette={1:"seagreen", 0:"gray"})
g=g.map(plt.scatter, "Fare", "Age",edgecolor="w").add_legend();
###Output
_____no_output_____
###Markdown
6-2-2 BoxIn descriptive statistics, a **box plot** or boxplot is a method for graphically depicting groups of numerical data through their quartiles. Box plots may also have lines extending vertically from the boxes (whiskers) indicating variability outside the upper and lower quartiles, hence the terms box-and-whisker plot and box-and-whisker diagram.[wikipedia]
###Code
ax= sns.boxplot(x="Pclass", y="Age", data=df_train)
ax= sns.stripplot(x="Pclass", y="Age", data=df_train, jitter=True, edgecolor="gray")
plt.show()
###Output
_____no_output_____
###Markdown
6-2-3 HistogramWe can also create a **histogram** of each input variable to get an idea of the distribution.
###Code
# histograms
df_train.hist(figsize=(15,20));
plt.figure();
###Output
_____no_output_____
###Markdown
It looks like perhaps two of the input variables have a Gaussian distribution. This is useful to note as we can use algorithms that can exploit this assumption.
###Code
df_train["Age"].hist();
f,ax=plt.subplots(1,2,figsize=(20,10))
df_train[df_train['Survived']==0].Age.plot.hist(ax=ax[0],bins=20,edgecolor='black',color='red')
ax[0].set_title('Survived= 0')
x1=list(range(0,85,5))
ax[0].set_xticks(x1)
df_train[df_train['Survived']==1].Age.plot.hist(ax=ax[1],color='green',bins=20,edgecolor='black')
ax[1].set_title('Survived= 1')
x2=list(range(0,85,5))
ax[1].set_xticks(x2)
plt.show()
f,ax=plt.subplots(1,2,figsize=(18,8))
df_train['Survived'].value_counts().plot.pie(explode=[0,0.1],autopct='%1.1f%%',ax=ax[0],shadow=True)
ax[0].set_title('Survived')
ax[0].set_ylabel('')
sns.countplot('Survived',data=df_train,ax=ax[1])
ax[1].set_title('Survived')
plt.show()
f,ax=plt.subplots(1,2,figsize=(18,8))
df_train[['Sex','Survived']].groupby(['Sex']).mean().plot.bar(ax=ax[0])
ax[0].set_title('Survived vs Sex')
sns.countplot('Sex',hue='Survived',data=df_train,ax=ax[1])
ax[1].set_title('Sex:Survived vs Dead')
plt.show()
sns.countplot('Pclass', hue='Survived', data=df_train)
plt.title('Pclass: Sruvived vs Dead')
plt.show()
###Output
_____no_output_____
###Markdown
6-2-4 Multivariate PlotsNow we can look at the interactions between the variables.First, let’s look at scatterplots of all pairs of attributes. This can be helpful to spot structured relationships between input variables.
###Code
# scatter plot matrix
pd.plotting.scatter_matrix(df_train,figsize=(10,10))
plt.figure();
###Output
_____no_output_____
###Markdown
Note the diagonal grouping of some pairs of attributes. This suggests a high correlation and a predictable relationship. 6-2-5 violinplots
###Code
# violinplots on petal-length for each species
sns.violinplot(data=df_train,x="Sex", y="Age")
f,ax=plt.subplots(1,2,figsize=(18,8))
sns.violinplot("Pclass","Age", hue="Survived", data=df_train,split=True,ax=ax[0])
ax[0].set_title('Pclass and Age vs Survived')
ax[0].set_yticks(range(0,110,10))
sns.violinplot("Sex","Age", hue="Survived", data=df_train,split=True,ax=ax[1])
ax[1].set_title('Sex and Age vs Survived')
ax[1].set_yticks(range(0,110,10))
plt.show()
###Output
_____no_output_____
###Markdown
6-2-6 pairplot
###Code
# Using seaborn pairplot to see the bivariate relation between each pair of features
sns.pairplot(df_train, hue="Sex");
###Output
_____no_output_____
###Markdown
6-2-7 kdeplot We can also replace the histograms shown in the diagonal of the pairplot by kde.
###Code
sns.FacetGrid(df_train, hue="Survived", size=5).map(sns.kdeplot, "Fare").add_legend()
plt.show();
###Output
_____no_output_____
###Markdown
6-2-8 jointplot
###Code
sns.jointplot(x='Fare',y='Age',data=df_train);
sns.jointplot(x='Fare',y='Age' ,data=df_train, kind='reg');
###Output
_____no_output_____
###Markdown
6-2-9 Swarm plot
###Code
sns.swarmplot(x='Pclass',y='Age',data=df_train);
###Output
_____no_output_____
###Markdown
6-2-10 Heatmap
###Code
plt.figure(figsize=(7,4))
sns.heatmap(df_train.corr(),annot=True,cmap='cubehelix_r') # draws a heatmap of the correlation matrix calculated by df_train.corr()
plt.show();
###Output
_____no_output_____
###Markdown
6-2-11 Bar Plot
###Code
df_train['Pclass'].value_counts().plot(kind="bar");
###Output
_____no_output_____
###Markdown
6-2-12 Factorplot
###Code
sns.factorplot('Pclass','Survived',hue='Sex',data=df_train)
plt.show();
sns.factorplot('SibSp','Survived',hue='Pclass',data=df_train)
plt.show()
#let's see some others factorplot
f,ax=plt.subplots(1,2,figsize=(20,8))
sns.barplot('SibSp','Survived', data=df_train,ax=ax[0])
ax[0].set_title('SipSp vs Survived in BarPlot')
sns.factorplot('SibSp','Survived', data=df_train,ax=ax[1])
ax[1].set_title('SibSp vs Survived in FactorPlot')
plt.close(2)
plt.show();
###Output
_____no_output_____
###Markdown
6-2-13 distplot
###Code
f,ax=plt.subplots(1,3,figsize=(20,8))
sns.distplot(df_train[df_train['Pclass']==1].Fare,ax=ax[0])
ax[0].set_title('Fares in Pclass 1')
sns.distplot(df_train[df_train['Pclass']==2].Fare,ax=ax[1])
ax[1].set_title('Fares in Pclass 2')
sns.distplot(df_train[df_train['Pclass']==3].Fare,ax=ax[2])
ax[2].set_title('Fares in Pclass 3')
plt.show()
###Output
_____no_output_____
###Markdown
6-2-14 ConclusionWe have used Python to apply data visualization tools to the Titanic dataset. 6-3 Data Preprocessing**Data preprocessing** refers to the transformations applied to our data before feeding it to the algorithm. Data preprocessing is a technique that is used to convert the raw data into a clean data set. In other words, whenever the data is gathered from different sources it is collected in a raw format which is not feasible for the analysis. There are plenty of steps for data preprocessing and **we just list some of them**:* removing Target column (id)* Sampling (without replacement)* Dealing with Imbalanced Data* Introducing missing values and treating them (replacing by average values)* Noise filtering* Data discretization* Normalization and standardization* PCA analysis* Feature selection (filter, embedded, wrapper) [Go to top](top) 6-3-1 FeaturesFeatures:* numeric* categorical* ordinal* datetime* coordinates Find the type of features in the Titanic dataset: 6-3-2 Explore the Dataset1- Dimensions of the dataset.2- Peek at the data itself.3- Statistical summary of all attributes.4- Breakdown of the data by the class variable.[7]Don’t worry, each look at the data is **one command**. These are useful commands that you can use again and again on future projects. [Go to top](top)
###Code
# shape
print(df_train.shape)
#columns*rows
df_train.size
###Output
_____no_output_____
###Markdown
> Note: count how many NA elements there are in every column
###Code
df_train.isnull().sum()
###Output
_____no_output_____
###Markdown
If you want to remove all rows with null values, you can uncomment the line below
###Code
# remove rows that have NA's
#train = train.dropna()
###Output
_____no_output_____
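###Markdown
As a small illustration of one of the preprocessing steps listed above (replacing missing values by an average), the sketch below fills the missing `Age` values with the column mean and standardizes `Fare` on copies of the columns. This is for illustration only; later in this notebook `Age` is binned into categories instead.
###Code
# Illustration only: mean imputation and standardization on copies of two columns.
age_filled = df_train['Age'].fillna(df_train['Age'].mean())
fare_standardized = (df_train['Fare'] - df_train['Fare'].mean()) / df_train['Fare'].std()
print(age_filled.isnull().sum())      # no missing values left after imputation
print(fare_standardized.describe())   # mean ~0, std ~1
###Output
_____no_output_____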
###Markdown
We can get a quick idea of how many instances (rows) and how many attributes (columns) the data contains with the shape property.You should see 891 instances and 12 attributes:
###Code
print(df_train.shape)
###Output
_____no_output_____
###Markdown
> Note: to get some information about the dataset you can use the **info()** command
###Code
print(df_train.info())
###Output
_____no_output_____
###Markdown
> Note: you can see the unique values of **Age** and the value counts of **Pclass** with the commands below:
###Code
df_train['Age'].unique()
df_train["Pclass"].value_counts()
###Output
_____no_output_____
###Markdown
To check the first 5 rows of the data set, we can use head(5).
###Code
df_train.head(5)
###Output
_____no_output_____
###Markdown
To check out the last 5 rows of the data set, we use the tail() function
###Code
df_train.tail()
###Output
_____no_output_____
###Markdown
To pull up 5 random rows from the data set, we can use the **sample(5)** function
###Code
df_train.sample(5)
###Output
_____no_output_____
###Markdown
To get a statistical summary of the dataset, we can use **describe()**
###Code
df_train.describe()
###Output
_____no_output_____
###Markdown
To check how many null values there are in the dataset, we can use **isnull().sum()**
###Code
df_train.isnull().sum()
df_train.groupby('Pclass').count()
###Output
_____no_output_____
###Markdown
To print the dataset **columns**, we can use the columns attribute
###Code
df_train.columns
###Output
_____no_output_____
###Markdown
> Note: on a pandas DataFrame you can perform queries such as "where"
###Code
df_train.where(df_train ['Age']==30).head(2)
###Output
_____no_output_____
###Markdown
As you can see below, in Python it is easy to perform queries on the DataFrame:
###Code
df_train[df_train['Age']<7.2].head(2)
###Output
_____no_output_____
###Markdown
Separating the data into dependent and independent variables
###Code
X = df_train.iloc[:, :-1].values
y = df_train.iloc[:, -1].values
###Output
_____no_output_____
###Markdown
> Note:Preprocessing and generation pipelines depend on a model type 6-4 Data Cleaning 1. When dealing with real-world data,** dirty data** is the norm rather than the exception. 1. We continuously need to predict correct values, impute missing ones, and find links between various data artefacts such as schemas and records. 1. We need to stop treating data cleaning as a piecemeal exercise (resolving different types of errors in isolation), and instead leverage all signals and resources (such as constraints, available statistics, and dictionaries) to accurately predict corrective actions.1. The primary goal of data cleaning is to detect and remove errors and **anomalies** to increase the value of data in analytics and decision making.[8] [Go to top](top) 6-4-1 Transforming FeaturesData transformation is the process of converting data from one format or structure into another format or structure[[wiki](https://en.wikipedia.org/wiki/Data_transformation)] 1. Age1. Cabin1. Fare1. Name
###Code
def simplify_ages(df):
df.Age = df.Age.fillna(-0.5)
bins = (-1, 0, 5, 12, 18, 25, 35, 60, 120)
group_names = ['Unknown', 'Baby', 'Child', 'Teenager', 'Student', 'Young Adult', 'Adult', 'Senior']
categories = pd.cut(df.Age, bins, labels=group_names)
df.Age = categories
return df
def simplify_cabins(df):
df.Cabin = df.Cabin.fillna('N')
df.Cabin = df.Cabin.apply(lambda x: x[0])
return df
def simplify_fares(df):
df.Fare = df.Fare.fillna(-0.5)
bins = (-1, 0, 8, 15, 31, 1000)
group_names = ['Unknown', '1_quartile', '2_quartile', '3_quartile', '4_quartile']
categories = pd.cut(df.Fare, bins, labels=group_names)
df.Fare = categories
return df
def format_name(df):
df['Lname'] = df.Name.apply(lambda x: x.split(' ')[0])
df['NamePrefix'] = df.Name.apply(lambda x: x.split(' ')[1])
return df
def drop_features(df):
return df.drop(['Ticket', 'Name', 'Embarked'], axis=1)
def transform_features(df):
df = simplify_ages(df)
df = simplify_cabins(df)
df = simplify_fares(df)
df = format_name(df)
df = drop_features(df)
return df
df_train = transform_features(df_train)
df_test = transform_features(df_test)
df_train.head()
###Output
_____no_output_____
###Markdown
6-4-2 Feature EncodingIn machine learning projects, one important part is feature engineering. It is very common to see categorical features in a dataset. However, our machine learning algorithm can only read numerical values. It is essential to encode categorical features into numerical values[28]1. Encode labels with values between 0 and n_classes-11. LabelEncoder can be used to normalize labels.1. It can also be used to transform non-numerical labels (as long as they are hashable and comparable) to numerical labels.
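A tiny illustration of `LabelEncoder` on made-up values:
###Code
# Toy LabelEncoder example (the category values here are made up for illustration).
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
le.fit(['male', 'female', 'female', 'male'])
print(le.classes_)                       # ['female' 'male']
print(le.transform(['male', 'female']))  # [1 0]
print(le.inverse_transform([0, 1]))      # ['female' 'male']
###Output
_____no_output_____
###Markdown
The `encode_features` helper below applies the same idea to several columns at once, fitting each encoder on the combined train and test frames so that both splits share the same integer mapping.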
###Code
def encode_features(df_train, df_test):
features = ['Fare', 'Cabin', 'Age', 'Sex', 'Lname', 'NamePrefix']
df_combined = pd.concat([df_train[features], df_test[features]])
for feature in features:
le = preprocessing.LabelEncoder()
le = le.fit(df_combined[feature])
df_train[feature] = le.transform(df_train[feature])
df_test[feature] = le.transform(df_test[feature])
return df_train, df_test
###Output
_____no_output_____
###Markdown
7- Model DeploymentIn this section plenty of **learning algorithms** have been applied; they play an important role in building your experience and improving your knowledge of ML techniques.> Note:The results shown here may be slightly different for your analysis because, for example, the neural network algorithms use random number generators for fixing the initial value of the weights (starting points) of the neural networks, which often result in obtaining slightly different (local minima) solutions each time you run the analysis. Also note that changing the seed for the random number generator used to create the train, test, and validation samples can change your results. 7-1 Families of ML algorithmsThere are several categories for machine learning algorithms, below are some of these categories:* Linear * Linear Regression * Logistic Regression * Support Vector Machines* Tree-Based * Decision Tree * Random Forest * GBDT* KNN* Neural Networks-----------------------------And if we want to categorize ML algorithms by the type of learning, there are the types below:* Classification * k-Nearest Neighbors * LinearRegression * SVM * DT * NN * clustering * K-means * HCA * Expectation Maximization * Visualization and dimensionality reduction: * Principal Component Analysis(PCA) * Kernel PCA * Locally -Linear Embedding (LLE) * t-distributed Stochastic NeighborEmbedding (t-SNE) * Association rule learning * Apriori * Eclat* Semisupervised learning* Reinforcement Learning * Q-learning* Batch learning & Online learning* Ensemble Learning> Note:There is no method which outperforms all others for all tasks. [Go to top](top) 7-2 Prepare Features & TargetsFirst of all, separate the data into independent (Feature) and dependent (Target) variables.> Note:* X==>> Feature - independent* y==>> Target - dependent
###Code
#Encode Dataset
df_train, df_test = encode_features(df_train, df_test)
df_train.head()
df_test.head()
###Output
_____no_output_____
###Markdown
7-3 How to prevent overfitting & underfitting?1. graph on the left side: 1. the line does not cover all the points shown in the graph. Such models tend to underfit the data; this is also called high bias.1. graph on the right side: 1. the predicted line covers all the points in the graph. You might think this is a good fit because it covers every point, but that is not actually true: the line also covers points which are noise and outliers. Such models tend to predict poorly on unseen data due to their complexity; this is also called high variance.1. middle graph: 1. it shows a pretty good predicted line. It covers the majority of the points in the graph and also maintains the balance between bias and variance.[30] Prepare X(features), y(target)
###Code
x_all = df_train.drop(['Survived', 'PassengerId'], axis=1)
y_all = df_train['Survived']
num_test = 0.3
X_train, X_test, y_train, y_test = train_test_split(x_all, y_all, test_size=num_test, random_state=100)
###Output
_____no_output_____
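###Markdown
To make the under/overfitting discussion above concrete, the sketch below uses the train/test split just created and compares decision trees of increasing depth; the gap between train and test accuracy is one simple way to see where overfitting starts.
###Code
# Quick illustration of underfitting vs overfitting on the split created above.
from sklearn.tree import DecisionTreeClassifier
for depth in [1, 3, None]:  # None lets the tree grow until its leaves are pure
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
    tree.fit(X_train, y_train)
    print('max_depth={}: train accuracy={:.3f}, test accuracy={:.3f}'.format(
        depth, tree.score(X_train, y_train), tree.score(X_test, y_test)))
###Output
_____no_output_____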
###Markdown
7-4 Accuracy and precisionWe know that the Titanic problem is a binary classification problem, and to evaluate our models we just need to calculate the accuracy.1. **accuracy** 1. Your score is the percentage of passengers you correctly predict. This is known simply as "accuracy”.1. **precision** : 1. In pattern recognition, information retrieval and binary classification, precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances. 1. **recall** : 1. recall is the fraction of relevant instances that have been retrieved over the total amount of relevant instances. 1. **F-score** : 1. the F1 score is a measure of a test's accuracy. It considers both the precision p and the recall r of the test to compute the score: p is the number of correct positive results divided by the number of all positive results returned by the classifier, and r is the number of correct positive results divided by the number of all relevant samples (all samples that should have been identified as positive). The F1 score is the harmonic average of the precision and recall, where an F1 score reaches its best value at 1 (perfect precision and recall) and worst at 0.1. **What is the difference between accuracy and precision?** 1. "Accuracy" and "precision" are general terms throughout science. A good way to internalize the difference is the common "bullseye diagram". In machine learning/statistics as a whole, accuracy vs. precision is analogous to bias vs. variance.
###Code
result=None
###Output
_____no_output_____
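###Markdown
As a minimal sketch of how the metrics described above can be computed with scikit-learn (the labels below are made up purely for illustration):
###Code
# Made-up labels, just to illustrate the metric functions described above.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]
print('accuracy :', accuracy_score(y_true, y_pred))
print('precision:', precision_score(y_true, y_pred))
print('recall   :', recall_score(y_true, y_pred))
print('f1-score :', f1_score(y_true, y_pred))
###Output
_____no_output_____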
###Markdown
7-5 RandomForestClassifierA random forest is a meta estimator that fits a number of decision tree classifiers on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting. The sub-sample size is always the same as the original input sample size but the samples are drawn with replacement if bootstrap=True (default).
###Code
# Choose the type of classifier.
rfc = RandomForestClassifier()
# Choose some parameter combinations to try
parameters = {'n_estimators': [4, 6, 9],
'max_features': ['log2', 'sqrt','auto'],
'criterion': ['entropy', 'gini'],
'max_depth': [2, 3, 5, 10],
'min_samples_split': [2, 3, 5],
'min_samples_leaf': [1,5,8]
}
# Type of scoring used to compare parameter combinations
acc_scorer = make_scorer(accuracy_score)
# Run the grid search
grid_obj = GridSearchCV(rfc, parameters, scoring=acc_scorer)
grid_obj = grid_obj.fit(X_train, y_train)
# Set the clf to the best combination of parameters
rfc = grid_obj.best_estimator_
# Fit the best algorithm to the data.
rfc.fit(X_train, y_train)
###Output
_____no_output_____
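###Markdown
A small follow-up sketch: the hyper-parameter combination selected by the grid search above can be inspected directly (assuming `grid_obj` from the previous cell is still in memory).
###Code
# Inspect the grid search result from the cell above.
print(grid_obj.best_params_)
print(grid_obj.best_score_)
###Output
_____no_output_____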
###Markdown
7-5-1 prediction
###Code
rfc_prediction = rfc.predict(X_test)
rfc_score=accuracy_score(y_test, rfc_prediction)
print(rfc_score)
###Output
_____no_output_____
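###Markdown
A complementary sketch to the accuracy score above: a confusion matrix and per-class report for the same random-forest predictions (assumes `y_test` and `rfc_prediction` from the previous cell).
###Code
# Confusion matrix and per-class precision/recall/F1 for the random forest predictions.
from sklearn.metrics import confusion_matrix, classification_report
print(confusion_matrix(y_test, rfc_prediction))
print(classification_report(y_test, rfc_prediction))
###Output
_____no_output_____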
###Markdown
7-6 XGBoost[XGBoost](https://en.wikipedia.org/wiki/XGBoost) is an open-source software library which provides a gradient boosting framework for C++, Java, Python, R, and Julia. It aims to provide a "Scalable, Portable and Distributed Gradient Boosting (GBM, GBRT, GBDT) Library".
###Code
xgboost = xgb.XGBClassifier(max_depth=3, n_estimators=300, learning_rate=0.05).fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
7-6-1 prediction
###Code
xgb_prediction = xgboost.predict(X_test)
xgb_score=accuracy_score(y_test, xgb_prediction)
print(xgb_score)
###Output
_____no_output_____
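###Markdown
A small optional sketch: the fitted gradient-boosting model above also exposes feature importances, which can be plotted with xgboost's built-in helper (assumes `xgb` is the xgboost module imported earlier and `xgboost` is the fitted model from the cells above).
###Code
# Plot feature importances of the fitted XGBoost model.
import matplotlib.pyplot as plt
xgb.plot_importance(xgboost)
plt.show()
###Output
_____no_output_____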
###Markdown
7-7 Logistic RegressionThe logistic model is a widely used statistical model that, in its basic form, uses a logistic function to model a binary dependent variable; many more complex extensions exist. In regression analysis, logistic regression (or logit regression) estimates the parameters of a logistic model.
###Code
logreg = LogisticRegression()
logreg.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
7-7-1 prediction
###Code
logreg_prediction = logreg.predict(X_test)
logreg_score=accuracy_score(y_test, logreg_prediction)
print(logreg_score)
###Output
_____no_output_____
###Markdown
7-8 DecisionTreeRegressorThe `criterion` argument is the function used to measure the quality of a split. Supported criteria are “mse” for the mean squared error, which is equal to variance reduction as feature selection criterion and minimizes the L2 loss using the mean of each terminal node, “friedman_mse”, which uses mean squared error with Friedman’s improvement score for potential splits, and “mae” for the mean absolute error, which minimizes the L1 loss using the median of each terminal node.
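For illustration, the criterion can also be passed explicitly when constructing the regressor (sketch only; note that newer scikit-learn releases renamed "mse"/"mae" to "squared_error"/"absolute_error", while "friedman_mse" is accepted by both old and new versions):
###Code
# Sketch: choosing the split criterion explicitly.
from sklearn.tree import DecisionTreeRegressor
dt_friedman = DecisionTreeRegressor(criterion='friedman_mse', random_state=1)
print(dt_friedman.get_params()['criterion'])
###Output
_____no_output_____
###Markdown
The cell below keeps the default criterion and simply fits the regressor on the training split.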
###Code
from sklearn.tree import DecisionTreeRegressor
# Define model. Specify a number for random_state to ensure same results each run
dt = DecisionTreeRegressor(random_state=1)
# Fit model
dt.fit(X_train, y_train)
dt_prediction = dt.predict(X_test)
dt_score=accuracy_score(y_test, dt_prediction)
print(dt_score)
###Output
_____no_output_____
###Markdown
7-9 ExtraTreeRegressorThe Extra Tree Regressor differs from classic decision trees in the way the trees are built. When looking for the best split to separate the samples of a node into two groups, random splits are drawn for each of the max_features randomly selected features and the best split among those is chosen. When max_features is set to 1, this amounts to building a totally random decision tree.
###Code
from sklearn.tree import ExtraTreeRegressor
# Define model. Specify a number for random_state to ensure same results each run
etr = ExtraTreeRegressor()
# Fit model
etr.fit(X_train, y_train)
etr_prediction = etr.predict(X_test)
etr_score=accuracy_score(y_test, etr_prediction)
print(etr_score)
###Output
_____no_output_____
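###Markdown
As a small wrap-up sketch, the accuracy scores computed above can be collected into one table for comparison (assumes the `*_score` variables from the previous cells are still in memory).
###Code
# Collect the accuracy scores computed above into a single comparison table.
import pandas as pd
scores = pd.DataFrame({
    'model': ['RandomForest', 'XGBoost', 'LogisticRegression',
              'DecisionTreeRegressor', 'ExtraTreeRegressor'],
    'accuracy': [rfc_score, xgb_score, logreg_score, dt_score, etr_score]
})
scores.sort_values('accuracy', ascending=False)
###Output
_____no_output_____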
###Markdown
How do I submit?1. Fork and Commit this Kernel.1. Then navigate to the Output tab of the Kernel and "Submit to Competition".
###Code
X_train = df_train.drop("Survived",axis=1)
y_train = df_train["Survived"]
X_train = X_train.drop("PassengerId",axis=1)
X_test = df_test.drop("PassengerId",axis=1)
xgboost = xgb.XGBClassifier(max_depth=3, n_estimators=300, learning_rate=0.05).fit(X_train, y_train)
Y_pred = xgboost.predict(X_test)
###Output
_____no_output_____
###Markdown
You can change your model and submit the results of other models
###Code
submission = pd.DataFrame({
"PassengerId": df_test["PassengerId"],
"Survived": Y_pred
})
submission.to_csv('submission.csv', index=False)
###Output
_____no_output_____ |
2. Arrays/OddOccurances.ipynb | ###Markdown
-----Write a function: def solution(A)that, given an array A consisting of N integers fulfilling the above conditions, returns the value of the unpaired element.For example, given array A such that: A[0] = 9 A[1] = 3 A[2] = 9 A[3] = 3 A[4] = 9 A[5] = 7 A[6] = 9the function should return 7, as explained in the example above.Assume that: N is an odd integer within the range [1..1,000,000]; each element of array A is an integer within the range [1..1,000,000,000]; all but one of the values in A occur an even number of times.Complexity: expected worst-case time complexity is O(N); expected worst-case space complexity is O(1), beyond input storage (not counting the storage required for input arguments).Elements of input arrays can be modified.------- Creating sample Array
###Code
A = [9,3,9,3,9,7,9]
print len(A)
print sorted(A)
###Output
[3, 3, 7, 9, 9, 9, 9]
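###Markdown
As a minimal sketch before exploring the sorting-based idea below: XOR-ing all elements meets the stated O(N) time / O(1) space bounds, because every paired value cancels out and only the unpaired one remains.
###Code
# XOR-based sketch: paired values cancel, the unpaired value survives.
def solution_xor(A):
    result = 0
    for num in A:
        result ^= num
    return result
print(solution_xor(A)) # expected 7 for the sample array above
###Output
_____no_output_____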
###Markdown
designing algorithm
###Code
single_num = sorted(A)[0]
count = 0
if len(A) == 0:
print A
for num in sorted(A):
print "num: ", num
if num % single_num:
print "single num: ", single_num
print "count: ", count
if count > 1:
single_num = num
count = 0
else:
print single_num
break
count += 1
#print single_num
def solution_1(A):
if len(A) == 0:
return A
elif len(A) == 1:
return A[0]
elif type(A) is int:
return A
elif len(A) % 2:
single_num = sorted(A)[0]
count = 0
for i,num in enumerate(sorted(A)):
if num != single_num:
print "num: ", num
print "single num: ", single_num
print "Count: ", count
if count > 1:
single_num = num
count = 1
elif count == 1:
return single_num
else:
count += 1
if i >= len(A)-1:
return single_num
print "end single num: ", single_num
print "end count: ", count
print "end num: ", num
print sorted(A), "\n"
print solution_1(A)
B = [2, 2, 3, 3, 4]
print sorted(B), "\n"
print solution_1(B)
###Output
[2, 2, 3, 3, 4]
end single num: 2
end count: 1
end num: 2
end single num: 2
end count: 2
end num: 2
num: 3
single num: 2
Count: 2
end single num: 3
end count: 1
end num: 3
end single num: 3
end count: 2
end num: 3
num: 4
single num: 3
Count: 2
4
###Markdown
version 2-
###Code
def solution_2(A):
if len(A) == 0:
return A
elif len(A) == 1:
return A[0]
elif type(A) is int:
return A
elif len(A) % 2:
single_num = sorted(A)[0]
count = 0
for i,num in enumerate(sorted(A)):
if num != single_num:
print "num: ", num
print "Count: ", count
print "single num: ", single_num
if count > 1:
single_num = num
count = 1
elif count == 1:
return str(single_num) +" " + str(sorted(A[i-5:i+5]))
else:
count += 1
if i >= len(A)-1:
return single_num
print sorted(B)
print solution_2(B)
print sorted(A)
print solution_2(A)
###Output
[3, 3, 7, 9, 9, 9, 9]
num: 7
Count: 2
single num: 3
num: 9
Count: 1
single num: 7
7 [7, 9]
###Markdown
V3
###Code
def solution_3(A):
if len(A) == 0:
return A
elif len(A) == 1:
return A[0]
elif type(A) is int:
return A
elif len(A) % 2:
single_num = sorted(A)[0]
count = 1
for i,num in enumerate(sorted(A)):
if num != single_num:
if count > 1:
single_num = num
count = 1
elif count == 1:
return single_num
else:
count += 1
if i >= len(A)-1:
return single_num
###Output
_____no_output_____
###Markdown
V4
###Code
def solution_4(A):
if len(A) == 0:
return A
elif len(A) == 1:
return A[0]
elif type(A) is int:
return A
elif len(A) % 2:
single_num = sorted(A)[0]
count = 1
for i,num in enumerate(sorted(A)):
if num != single_num:
if count > 1:
single_num = num
count = 1
elif count == 1:
return single_num
elif single_num == num:
count += 1
if i >= len(A)-1:
return single_num
sorted(A)
print A
print solution_4(A)
###Output
[3, 3, 7, 9, 9, 9, 9]
7
###Markdown
V 5 - Somewhere along the line I forgot about the condition "all but one value occurs an even number of times". Instead I solved it for "all but one int does not have any pair, i.e. is only a single element". Fixing the solutions now.
###Code
def solution_5(A):
if len(A) == 0:
return A
elif len(A) == 1:
return A[0]
elif type(A) is int:
return A
elif len(A) % 2:
single_num = sorted(A)[0]
count = 0
for i,num in enumerate(sorted(A)):
if num != single_num:
if count > 1:
if count % 2:
print "count", count
print "num", num
print "single num", single_num
return single_num
single_num = num
count = 1
elif count == 1:
print "count", count
print "num", num
print "single num", single_num
return single_num
elif single_num == num:
count += 1
if i >= len(A)-1:
return single_num
print solution_5(A)
1 % 2
###Output
_____no_output_____
###Markdown
V5 .1
###Code
def solution_5_1(A):
if len(A) == 0:
return A
elif len(A) == 1:
return A[0]
elif type(A) is int:
return A
elif len(A) % 2:
single_num = sorted(A)[0]
count = 0
for i,num in enumerate(sorted(A)):
if num != single_num:
if count % 2:
print "count", count
print "num", num
print "single num", single_num
return single_num
single_num = num
count = 1
elif single_num == num:
count += 1
if i >= len(A)-1:
return single_num
print solution_5_1(A)
###Output
count 1
num 9
single num 7
7
###Markdown
V 5_2
###Code
def solution_5_2(A):
if len(A) == 0:
return A
elif len(A) == 1:
return A[0]
elif type(A) is int:
return A
elif len(A) % 2:
single_num = sorted(A)[0]
count = 0
for i,num in enumerate(sorted(A)):
if num != single_num:
if count % 2:
return single_num
else:
single_num = num
count = 1
elif single_num == num:
count += 1
if i >= len(A)-1:
if count % 2:
return single_num
                else:
#just in case-
return None
print solution_5_2(A)
###Output
7
###Markdown
[Result: 100%](https://codility.com/demo/results/trainingGK4AED-56F/)
###Code
###Output
_____no_output_____ |
Google IT Automation with Python/Google - Crash Course on Python/Week 3/Practice Quiz While Loops.ipynb | ###Markdown
Practice Quiz: While Loops
###Code
"""
2.Question 2
Fill in the blanks to make the print_prime_factors function print all the prime factors of a number.
A prime factor is a number that is prime and divides another without a remainder.
"""
def print_prime_factors(number):
# Start with two, which is the first prime
factor = 2
# Keep going until the factor is larger than the number
while factor <= number:
# Check if factor is a divisor of number
if number % factor == 0:
# If it is, print it and divide the original number
print(factor)
number = number / factor
else:
# If it's not, increment the factor by one
factor +=1
return "Done"
print_prime_factors(100)
# Should print 2,2,5,5
# DO NOT DELETE THIS COMMENT
"""
4.Question 4
Fill in the empty function so that it returns the sum of all the divisors of a number, without including it.
A divisor is a number that divides into another without a remainder.
"""
import math
# Function to calculate sum of all proper
# divisors num --> given natural number
def sum_divisors(num) :
# Final result of summation of divisors
if num == 0:
return 0
else:
result = 0
# find all divisors which divides 'num'
i = 2
while i<= (math.sqrt(num)) :
# if 'i' is divisor of 'num'
if (num % i == 0) :
# if both divisors are same then
# add it only once else add both
if (i == (num / i)) :
result = result + i;
else :
result = result + (i + num/i);
i = i + 1
# Add 1 to the result as 1 is also
# a divisor
hell = int(result + 1)
return (hell)
print(sum_divisors(0))
# 0
print(sum_divisors(3)) # Should sum of 1
# 1
print(sum_divisors(36)) # Should sum of 1+2+3+4+6+9+12+18
# 55
print(sum_divisors(102)) # Should be sum of 2+3+6+17+34+51
# 114
"""
5.Question 5
The multiplication_table function prints the results of a number passed to it multiplied by 1 through 5.
An additional requirement is that the result is not to exceed 25, which is done with the break statement.
Fill in the blanks to complete the function to satisfy these conditions.
"""
def multiplication_table(number):
# Initialize the starting point of the multiplication table
multiplier = 1
# Only want to loop through 5
while multiplier <= 5:
result = number*multiplier
# What is the additional condition to exit out of the loop?
if result>25 :
break
print(str(number) + "x" + str(multiplier) + "=" + str(result))
# Increment the variable for the loop
multiplier += 1
multiplication_table(3)
# Should print: 3x1=3 3x2=6 3x3=9 3x4=12 3x5=15
multiplication_table(5)
# Should print: 5x1=5 5x2=10 5x3=15 5x4=20 5x5=25
multiplication_table(8)
# Should print: 8x1=8 8x2=16 8x3=24
###Output
False
True
|
evaluate/data/simlex/preprocessing/preprocessing_simlex.ipynb | ###Markdown
Downloading and preprocessing the SimLex datasetThis notebook downloads and preprocesses the SimLex dataset.
###Code
%env URL=https://www.cl.cam.ac.uk/~fh295/SimLex-999.zip
! wget $URL
!unzip SimLex-999.zip
!ls
!ls SimLex-999
import pandas as pd
df = pd.read_csv('SimLex-999/SimLex-999.txt', sep='\t')
df.head()
###Output
_____no_output_____
###Markdown
Important notes from the README:- POS is for both words, meaning that SimLex only has word pairs with same POS.- Similarity ratings were collected on a 0-6 scale, but have been linearly mapped to 0-10.- Concreteness ratings are from the Nelson norms, and are on a 1-7 scale.- Concreteness quartile also from Nelson norms (unsure how it is calculated from two numbers)- Assoc(USF) = Nelson norms- Binary variable indicating whether the word pair is in the top 333 (third) of association ratings, as per the Nelson norm column
###Code
df.columns = ['word1', 'word2', 'POS', 'similarity', 'word1_concreteness', 'word2_concreteness',
'concreteness_quartile', 'nelson_norms', 'top_333_in_nelson', 'similarity_sd']
df.head()
outfile = '../simlex.csv'
df.to_csv(outfile, index=False)
###Output
_____no_output_____
###Markdown
Remove everything except this file to save space.
###Code
!find . -not -name 'preprocessing_simlex.ipynb' -print0 | xargs -0 rm --
! rm -rf SimLex-999
###Output
rm: "." and ".." may not be removed
rm: ./.ipynb_checkpoints: is a directory
rm: ./SimLex-999: is a directory
|
particle_detection_multi_image.ipynb | ###Markdown
Segmentation of nanoparticles imaged by EM Guillaume Witz Science IT Support, Bern University
###Code
import glob
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import skimage
import ipywidgets as ipw
import selectfile
import nanoparticles as nano
f = selectfile.FileBrowser()
###Output
_____no_output_____
###Markdown
Set image scale
###Code
nm_per_pixel = 0.0001589*1000
###Output
_____no_output_____
###Markdown
Select a folder with images
###Code
f.widget()
radii = nano.analyze_particles(f.path, 20,50, nm_per_pixel, match_quality = 0.15)
radii.to_csv(f.path+'/radii.csv', sep=',',index = False)
###Output
_____no_output_____ |
TabCorr/TabCorr_tabulate_MDPL2.ipynb | ###Markdown
Tabulating and saving correlation functions with TabCorr. Particle displacement is done in a separate script: `baryonic_effects/baryonification/displace_MDPL2_particles.py`
###Code
import numpy as np
from matplotlib import cm
from matplotlib import colors
import matplotlib.pyplot as plt
from halotools.sim_manager import CachedHaloCatalog
from halotools.mock_observables import wp, delta_sigma
from halotools.empirical_models import PrebuiltHodModelFactory
from tabcorr import TabCorr
import pandas as pd
from halotools.sim_manager import UserSuppliedHaloCatalog, UserSuppliedPtclCatalog,FakeSim
from halotools.mock_observables import delta_sigma, wp, return_xyz_formatted_array
from halotools.empirical_models import NFWProfile
from halotools.empirical_models import PrebuiltHodModelFactory, HodModelFactory
from halotools.empirical_models import AssembiasZheng07Cens, AssembiasZheng07Sats, TrivialPhaseSpace, NFWPhaseSpace
from halotools.utils import add_halo_hostid
import time
def create_halo_and_particle_catalogs_for_halotools(halo_catalog_path, particle_catalog_path):
halo_df = pd.read_csv(halo_catalog_path)
particles_df = pd.read_csv(particle_catalog_path, delimiter =' +', names=['x','y','z'], engine='python')
print('Files read.')
ptcl_x = particles_df['x'].values
ptcl_y = particles_df['y'].values
ptcl_z = particles_df['z'].values
particle_mass = 1.51e9
num_ptcl_per_dim = 3840
x = halo_df['x'].values
y = halo_df['y'].values
z = halo_df['z'].values
vx = halo_df['vx'].values
vy = halo_df['vy'].values
vz = halo_df['vz'].values
mass = halo_df['Mvir'].values
radius = halo_df['Rvir'].values/1e3 #convert to Mpc
ids = np.arange(0, len(halo_df))
upid = halo_df['upId'].values
simname = 'MDPL2'
#get concentrations
# nfw = NFWProfile(redshift=redshift, cosmology = Planck15, mdef = 'vir', conc_mass_model = 'dutton_maccio14')
# model_conc = nfw.conc_NFWmodel(prim_haloprop = mass)
concentrations = halo_df['Rvir'].values / halo_df['Rs'].values
print('Creating catalogs...')
particle_catalog = UserSuppliedPtclCatalog(x = ptcl_x, y = ptcl_y, z = ptcl_z, Lbox = Lbox, particle_mass = particle_mass,
redshift = redshift)
halo_catalog = UserSuppliedHaloCatalog(user_supplied_ptclcat = particle_catalog, redshift = redshift, simname = simname,
Lbox = Lbox, particle_mass = particle_mass, num_ptcl_per_dim =num_ptcl_per_dim,
halo_x = x, halo_y = y, halo_z = z,
halo_vx = vx, halo_vy = vy, halo_vz = vz,
halo_id = ids, halo_mvir = mass, halo_rvir = radius,
halo_nfw_conc = concentrations, halo_upid = upid )
#add hostid
add_halo_hostid(halo_catalog.halo_table)
return halo_catalog, particle_catalog
data_directory = '/Users/fardila/Documents/Data/baryonic_effects/CMASS/'
queried_halo_cat_file = 'halo_catalogs/mdpl2_hlist_0.65650_Mvir11.2.csv'
test_halo_cat_file = 'halo_catalogs/test.csv'
### "row_id","Mvir","Rvir","M200c","M500c","x","y","z","scale"
# full_halo_cat_file = 'halo_catalogs/cut_halo_df.pkl'
particle_cat_file = 'particle_catalogs/mdpl2_particles_0.6565_10m.dat'
test_particle_cat_file = 'particle_catalogs/test.dat'
### "x","y","z"
displacedA_particle_cat_file = 'particle_catalogs/MDPL2_bfc_particles_A.out'
displacedB_particle_cat_file = 'particle_catalogs/MDPL2_bfc_particles_B.out'
displacedC_particle_cat_file = 'particle_catalogs/MDPL2_bfc_particles_C.out'
redshift = (1./0.65650)-1. #z=0.523
Lbox = 1000. #Mpc/h
halo_catalog, particle_catalog = create_halo_and_particle_catalogs_for_halotools(data_directory+queried_halo_cat_file,
data_directory+particle_cat_file)
halo_catalog, displaced_particle_catalog = create_halo_and_particle_catalogs_for_halotools(data_directory+queried_halo_cat_file,
data_directory+displacedA_particle_cat_file)
###Output
Files read.
Creating catalogs...
###Markdown
Tabulate DS for MDPL2 (before baryonification)
###Code
particle_masses = halo_catalog.particle_mass
period=halo_catalog.Lbox
downsampling_factor = (halo_catalog.num_ptcl_per_dim**3)/float(len(particle_catalog.ptcl_table))
x = particle_catalog.ptcl_table['x']
y = particle_catalog.ptcl_table['y']
z = particle_catalog.ptcl_table['z']
particle_positions = return_xyz_formatted_array(x, y, z, period=period)
time1 = time.time()
# First, we tabulate the correlation functions in the halo catalog.
rp_bins = np.logspace(-1, 1, 20)
halotab = TabCorr.tabulate(halo_catalog, delta_sigma, particle_positions, rp_bins = rp_bins,
mode ='cross', period = period, particle_masses = particle_masses,
downsampling_factor = downsampling_factor )
# We can save the result for later use.
halotab.write('mdpl2_tabCorr_DS.hdf5')
print('{0} seconds'.format(time.time() - time1))
###Output
/Users/fardila/anaconda2/envs/baryonic_effects/lib/python3.7/site-packages/halotools/empirical_models/phase_space_models/analytic_models/monte_carlo_helpers.py:205: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
self.rad_prof_func_table_indices[digitized_param_list]
/Users/fardila/anaconda2/envs/baryonic_effects/lib/python3.7/site-packages/halotools/empirical_models/phase_space_models/analytic_models/monte_carlo_helpers.py:522: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
self.rad_prof_func_table_indices[digitized_param_list]
###Markdown
Tabulate DS for MDPL2 (after baryonification)
###Code
particle_masses = halo_catalog.particle_mass
period=halo_catalog.Lbox
downsampling_factor = (halo_catalog.num_ptcl_per_dim**3)/float(len(displaced_particle_catalog.ptcl_table))
displaced_x = displaced_particle_catalog.ptcl_table['x']
displaced_y = displaced_particle_catalog.ptcl_table['y']
displaced_z = displaced_particle_catalog.ptcl_table['z']
displaced_particle_positions = return_xyz_formatted_array(displaced_x, displaced_y, displaced_z, period=period)
time1 = time.time()
# First, we tabulate the correlation functions in the halo catalog.
rp_bins = np.logspace(-1, 1, 20)
halotab = TabCorr.tabulate(halo_catalog, delta_sigma, displaced_particle_positions, rp_bins = rp_bins,
mode ='cross', period = period, particle_masses = particle_masses,
downsampling_factor = downsampling_factor )
# We can save the result for later use.
halotab.write('mdpl2+baryonification_tabCorr_DS.hdf5')
print('{0} seconds'.format(time.time() - time1))
###Output
/Users/fardila/anaconda2/envs/baryonic_effects/lib/python3.7/site-packages/halotools/empirical_models/phase_space_models/analytic_models/monte_carlo_helpers.py:205: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
self.rad_prof_func_table_indices[digitized_param_list]
/Users/fardila/anaconda2/envs/baryonic_effects/lib/python3.7/site-packages/halotools/empirical_models/phase_space_models/analytic_models/monte_carlo_helpers.py:522: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
self.rad_prof_func_table_indices[digitized_param_list]
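###Markdown
A sketch of how the tables written above would typically be used afterwards: read the saved file back and combine it with an HOD occupation model to predict delta sigma. The 'zheng07' model and its threshold below are placeholder choices, not values used elsewhere in this notebook.
###Code
# Sketch only: read a tabulated file back and predict delta sigma for an HOD model.
halotab_read = TabCorr.read('mdpl2_tabCorr_DS.hdf5')
model = PrebuiltHodModelFactory('zheng07', threshold=-20, redshift=redshift)
ngal, ds = halotab_read.predict(model)
rp_centers = 0.5 * (rp_bins[1:] + rp_bins[:-1])
plt.loglog(rp_centers, ds)
plt.xlabel(r'$r_p \ [h^{-1}\,\mathrm{Mpc}]$')
plt.ylabel(r'$\Delta\Sigma$')
plt.show()
###Output
_____no_output_____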
|
yolov5s/tutorial.ipynb | ###Markdown
This notebook was written by Ultralytics LLC, and is freely available for redistribution under the [GPL-3.0 license](https://choosealicense.com/licenses/gpl-3.0/). For more information please visit https://github.com/ultralytics/yolov5 and https://www.ultralytics.com. SetupClone repo, install dependencies and check PyTorch and GPU.
###Code
!git clone https://github.com/ultralytics/yolov5 # clone repo
%cd yolov5
%pip install -qr requirements.txt # install dependencies
import torch
from IPython.display import Image, clear_output # to display images
clear_output()
print('Setup complete. Using torch %s %s' % (torch.__version__, torch.cuda.get_device_properties(0) if torch.cuda.is_available() else 'CPU'))
###Output
Setup complete. Using torch 1.8.0+cu101 _CudaDeviceProperties(name='Tesla V100-SXM2-16GB', major=7, minor=0, total_memory=16160MB, multi_processor_count=80)
###Markdown
1. Inference`detect.py` runs inference on a variety of sources, downloading models automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases).
###Code
!python detect.py --weights yolov5s.pt --img 640 --conf 0.25 --source data/images/
Image(filename='runs/detect/exp/zidane.jpg', width=600)
###Output
Namespace(agnostic_nms=False, augment=False, classes=None, conf_thres=0.25, device='', exist_ok=False, img_size=640, iou_thres=0.45, name='exp', project='runs/detect', save_conf=False, save_txt=False, source='data/images/', update=False, view_img=False, weights=['yolov5s.pt'])
YOLOv5 🚀 v4.0-137-g9b11f0c torch 1.8.0+cu101 CUDA:0 (Tesla V100-SXM2-16GB, 16160.5MB)
Fusing layers...
Model Summary: 224 layers, 7266973 parameters, 0 gradients, 17.0 GFLOPS
image 1/2 /content/yolov5/data/images/bus.jpg: 640x480 4 persons, 1 bus, Done. (0.008s)
image 2/2 /content/yolov5/data/images/zidane.jpg: 384x640 2 persons, 1 tie, Done. (0.008s)
Results saved to runs/detect/exp
Done. (0.087)
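###Markdown
A minimal sketch of an alternative to detect.py: the same pretrained weights can be loaded through PyTorch Hub and run directly from Python (the image URL below is just an example).
###Code
import torch
# Load a pretrained YOLOv5s model via PyTorch Hub and run inference on one image.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
results = model('https://ultralytics.com/images/zidane.jpg')
results.print() # results.save() or results.xyxy[0] are also available
###Output
_____no_output_____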
###Markdown
Results are saved to `runs/detect`. A full list of available inference sources: 2. TestTest a model on [COCO](https://cocodataset.org/home) val or test-dev dataset to evaluate trained accuracy. Models are downloaded automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases). To show results by class use the `--verbose` flag. Note that `pycocotools` metrics may be 1-2% better than the equivalent repo metrics, as is visible below, due to slight differences in mAP computation. COCO val2017Download [COCO val 2017](https://github.com/ultralytics/yolov5/blob/74b34872fdf41941cddcf243951cdb090fbac17b/data/coco.yamlL14) dataset (1GB - 5000 images), and test model accuracy.
###Code
# Download COCO val2017
torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/coco2017val.zip', 'tmp.zip')
!unzip -q tmp.zip -d ../ && rm tmp.zip
# Run YOLOv5x on COCO val2017
!python test.py --weights yolov5x.pt --data coco.yaml --img 640 --iou 0.65
###Output
Namespace(augment=False, batch_size=32, conf_thres=0.001, data='./data/coco.yaml', device='', exist_ok=False, img_size=640, iou_thres=0.65, name='exp', project='runs/test', save_conf=False, save_hybrid=False, save_json=True, save_txt=False, single_cls=False, task='val', verbose=False, weights=['yolov5x.pt'])
YOLOv5 🚀 v4.0-137-g9b11f0c torch 1.8.0+cu101 CUDA:0 (Tesla V100-SXM2-16GB, 16160.5MB)
Downloading https://github.com/ultralytics/yolov5/releases/download/v4.0/yolov5x.pt to yolov5x.pt...
100% 168M/168M [00:02<00:00, 59.1MB/s]
Fusing layers...
Model Summary: 476 layers, 87730285 parameters, 0 gradients, 218.8 GFLOPS
[34m[1mval: [0mScanning '../coco/val2017' for images and labels... 4952 found, 48 missing, 0 empty, 0 corrupted: 100% 5000/5000 [00:01<00:00, 3236.68it/s]
[34m[1mval: [0mNew cache created: ../coco/val2017.cache
Class Images Labels P R [email protected] [email protected]:.95: 100% 157/157 [01:20<00:00, 1.95it/s]
all 5000 36335 0.749 0.619 0.68 0.486
Speed: 5.3/1.7/6.9 ms inference/NMS/total per 640x640 image at batch-size 32
Evaluating pycocotools mAP... saving runs/test/exp/yolov5x_predictions.json...
loading annotations into memory...
Done (t=0.43s)
creating index...
index created!
Loading and preparing results...
DONE (t=5.10s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=88.52s).
Accumulating evaluation results...
DONE (t=17.17s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.501
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.687
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.544
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.338
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.548
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.637
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.378
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.628
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.680
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.520
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.729
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.826
Results saved to runs/test/exp
###Markdown
COCO test-dev2017Download [COCO test2017](https://github.com/ultralytics/yolov5/blob/74b34872fdf41941cddcf243951cdb090fbac17b/data/coco.yamlL15) dataset (7GB - 40,000 images), to test model accuracy on test-dev set (20,000 images). Results are saved to a `*.json` file which can be submitted to the evaluation server at https://competitions.codalab.org/competitions/20794.
###Code
# Download COCO test-dev2017
torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/coco2017labels.zip', 'tmp.zip')
!unzip -q tmp.zip -d ../ && rm tmp.zip # unzip labels
!f="test2017.zip" && curl http://images.cocodataset.org/zips/$f -o $f && unzip -q $f && rm $f # 7GB, 41k images
%mv ./test2017 ./coco/images && mv ./coco ../ # move images to /coco and move /coco next to /yolov5
# Run YOLOv5s on COCO test-dev2017 using --task test
!python test.py --weights yolov5s.pt --data coco.yaml --task test
###Output
_____no_output_____
###Markdown
3. TrainDownload [COCO128](https://www.kaggle.com/ultralytics/coco128), a small 128-image tutorial dataset, start tensorboard and train YOLOv5s from a pretrained checkpoint for 3 epochs (note actual training is typically much longer, around **300-1000 epochs**, depending on your dataset).
###Code
# Download COCO128
torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/coco128.zip', 'tmp.zip')
!unzip -q tmp.zip -d ../ && rm tmp.zip
###Output
_____no_output_____
###Markdown
Train a YOLOv5s model on [COCO128](https://www.kaggle.com/ultralytics/coco128) with `--data coco128.yaml`, starting from pretrained `--weights yolov5s.pt`, or from randomly initialized `--weights '' --cfg yolov5s.yaml`. Models are downloaded automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases), and **COCO, COCO128, and VOC datasets are downloaded automatically** on first use.All training results are saved to `runs/train/` with incrementing run directories, i.e. `runs/train/exp2`, `runs/train/exp3` etc.
###Code
# Tensorboard (optional)
%load_ext tensorboard
%tensorboard --logdir runs/train
# Weights & Biases (optional)
%pip install -q wandb
!wandb login # use 'wandb disabled' or 'wandb enabled' to disable or enable
# Train YOLOv5s on COCO128 for 3 epochs
!python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt --nosave --cache
###Output
[34m[1mgithub: [0mup to date with https://github.com/ultralytics/yolov5 ✅
YOLOv5 🚀 v4.0-137-g9b11f0c torch 1.8.0+cu101 CUDA:0 (Tesla V100-SXM2-16GB, 16160.5MB)
Namespace(adam=False, batch_size=16, bucket='', cache_images=True, cfg='', data='./data/coco128.yaml', device='', entity=None, epochs=3, evolve=False, exist_ok=False, global_rank=-1, hyp='data/hyp.scratch.yaml', image_weights=False, img_size=[640, 640], linear_lr=False, local_rank=-1, log_artifacts=False, log_imgs=16, multi_scale=False, name='exp', noautoanchor=False, nosave=True, notest=False, project='runs/train', quad=False, rect=False, resume=False, save_dir='runs/train/exp', single_cls=False, sync_bn=False, total_batch_size=16, weights='yolov5s.pt', workers=8, world_size=1)
[34m[1mwandb: [0mInstall Weights & Biases for YOLOv5 logging with 'pip install wandb' (recommended)
Start Tensorboard with "tensorboard --logdir runs/train", view at http://localhost:6006/
2021-03-14 04:18:58.124672: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[34m[1mhyperparameters: [0mlr0=0.01, lrf=0.2, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0
Downloading https://github.com/ultralytics/yolov5/releases/download/v4.0/yolov5s.pt to yolov5s.pt...
100% 14.1M/14.1M [00:00<00:00, 63.1MB/s]
from n params module arguments
0 -1 1 3520 models.common.Focus [3, 32, 3]
1 -1 1 18560 models.common.Conv [32, 64, 3, 2]
2 -1 1 18816 models.common.C3 [64, 64, 1]
3 -1 1 73984 models.common.Conv [64, 128, 3, 2]
4 -1 1 156928 models.common.C3 [128, 128, 3]
5 -1 1 295424 models.common.Conv [128, 256, 3, 2]
6 -1 1 625152 models.common.C3 [256, 256, 3]
7 -1 1 1180672 models.common.Conv [256, 512, 3, 2]
8 -1 1 656896 models.common.SPP [512, 512, [5, 9, 13]]
9 -1 1 1182720 models.common.C3 [512, 512, 1, False]
10 -1 1 131584 models.common.Conv [512, 256, 1, 1]
11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
12 [-1, 6] 1 0 models.common.Concat [1]
13 -1 1 361984 models.common.C3 [512, 256, 1, False]
14 -1 1 33024 models.common.Conv [256, 128, 1, 1]
15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
16 [-1, 4] 1 0 models.common.Concat [1]
17 -1 1 90880 models.common.C3 [256, 128, 1, False]
18 -1 1 147712 models.common.Conv [128, 128, 3, 2]
19 [-1, 14] 1 0 models.common.Concat [1]
20 -1 1 296448 models.common.C3 [256, 256, 1, False]
21 -1 1 590336 models.common.Conv [256, 256, 3, 2]
22 [-1, 10] 1 0 models.common.Concat [1]
23 -1 1 1182720 models.common.C3 [512, 512, 1, False]
24 [17, 20, 23] 1 229245 models.yolo.Detect [80, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]
Model Summary: 283 layers, 7276605 parameters, 7276605 gradients, 17.1 GFLOPS
Transferred 362/362 items from yolov5s.pt
Scaled weight_decay = 0.0005
Optimizer groups: 62 .bias, 62 conv.weight, 59 other
[34m[1mtrain: [0mScanning '../coco128/labels/train2017' for images and labels... 128 found, 0 missing, 2 empty, 0 corrupted: 100% 128/128 [00:00<00:00, 2956.76it/s]
[34m[1mtrain: [0mNew cache created: ../coco128/labels/train2017.cache
[34m[1mtrain: [0mCaching images (0.1GB): 100% 128/128 [00:00<00:00, 205.30it/s]
[34m[1mval: [0mScanning '../coco128/labels/train2017.cache' for images and labels... 128 found, 0 missing, 2 empty, 0 corrupted: 100% 128/128 [00:00<00:00, 604584.36it/s]
[34m[1mval: [0mCaching images (0.1GB): 100% 128/128 [00:00<00:00, 144.17it/s]
Plotting labels...
[34m[1mautoanchor: [0mAnalyzing anchors... anchors/target = 4.26, Best Possible Recall (BPR) = 0.9946
Image sizes 640 train, 640 test
Using 2 dataloader workers
Logging results to runs/train/exp
Starting training for 3 epochs...
Epoch gpu_mem box obj cls total labels img_size
0/2 3.29G 0.04237 0.06417 0.02121 0.1277 183 640: 100% 8/8 [00:03<00:00, 2.41it/s]
Class Images Labels P R [email protected] [email protected]:.95: 100% 4/4 [00:04<00:00, 1.04s/it]
all 128 929 0.642 0.637 0.661 0.432
Epoch gpu_mem box obj cls total labels img_size
1/2 6.65G 0.04431 0.06403 0.019 0.1273 166 640: 100% 8/8 [00:01<00:00, 5.73it/s]
Class Images Labels P R [email protected] [email protected]:.95: 100% 4/4 [00:01<00:00, 3.21it/s]
all 128 929 0.662 0.626 0.658 0.433
Epoch gpu_mem box obj cls total labels img_size
2/2 6.65G 0.04506 0.06836 0.01913 0.1325 182 640: 100% 8/8 [00:01<00:00, 5.51it/s]
Class Images Labels P R [email protected] [email protected]:.95: 100% 4/4 [00:02<00:00, 1.35it/s]
all 128 929 0.658 0.625 0.661 0.433
Optimizer stripped from runs/train/exp/weights/last.pt, 14.8MB
Optimizer stripped from runs/train/exp/weights/best.pt, 14.8MB
3 epochs completed in 0.007 hours.
###Markdown
4. Visualize Weights & Biases Logging 🌟 NEW[Weights & Biases](https://www.wandb.com/) (W&B) is now integrated with YOLOv5 for real-time visualization and cloud logging of training runs. This allows for better run comparison and introspection, as well improved visibility and collaboration for teams. To enable W&B `pip install wandb`, and then train normally (you will be guided through setup on first use). During training you will see live updates at [https://wandb.ai/home](https://wandb.ai/home), and you can create and share detailed [Reports](https://wandb.ai/glenn-jocher/yolov5_tutorial/reports/YOLOv5-COCO128-Tutorial-Results--VmlldzozMDI5OTY) of your results. For more information see the [YOLOv5 Weights & Biases Tutorial](https://github.com/ultralytics/yolov5/issues/1289). Local LoggingAll results are logged by default to `runs/train`, with a new experiment directory created for each new training as `runs/train/exp2`, `runs/train/exp3`, etc. View train and test jpgs to see mosaics, labels, predictions and augmentation effects. Note a **Mosaic Dataloader** is used for training (shown below), a new concept developed by Ultralytics and first featured in [YOLOv4](https://arxiv.org/abs/2004.10934).
###Code
Image(filename='runs/train/exp/train_batch0.jpg', width=800) # train batch 0 mosaics and labels
Image(filename='runs/train/exp/test_batch0_labels.jpg', width=800) # test batch 0 labels
Image(filename='runs/train/exp/test_batch0_pred.jpg', width=800) # test batch 0 predictions
###Output
_____no_output_____
###Markdown
> `train_batch0.jpg` shows train batch 0 mosaics and labels> `test_batch0_labels.jpg` shows test batch 0 labels> `test_batch0_pred.jpg` shows test batch 0 _predictions_ Training losses and performance metrics are also logged to [Tensorboard](https://www.tensorflow.org/tensorboard) and a custom `results.txt` logfile which is plotted as `results.png` (below) after training completes. Here we show YOLOv5s trained on COCO128 to 300 epochs, starting from scratch (blue), and from pretrained `--weights yolov5s.pt` (orange).
###Code
from utils.plots import plot_results
plot_results(save_dir='runs/train/exp') # plot all results*.txt as results.png
Image(filename='runs/train/exp/results.png', width=800)
###Output
_____no_output_____
###Markdown
EnvironmentsYOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):- **Google Colab and Kaggle** notebooks with free GPU: - **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/GCP-Quickstart)- **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/AWS-Quickstart)- **Docker Image**. See [Docker Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/Docker-Quickstart) StatusIf this badge is green, all [YOLOv5 GitHub Actions](https://github.com/ultralytics/yolov5/actions) Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training ([train.py](https://github.com/ultralytics/yolov5/blob/master/train.py)), testing ([test.py](https://github.com/ultralytics/yolov5/blob/master/test.py)), inference ([detect.py](https://github.com/ultralytics/yolov5/blob/master/detect.py)) and export ([export.py](https://github.com/ultralytics/yolov5/blob/master/models/export.py)) on MacOS, Windows, and Ubuntu every 24 hours and on every commit. AppendixOptional extras below. Unit tests validate repo functionality and should be run on any PRs submitted.
###Code
# Re-clone repo
%cd ..
%rm -rf yolov5 && git clone https://github.com/ultralytics/yolov5
%cd yolov5
# Reproduce
for x in 'yolov5s', 'yolov5m', 'yolov5l', 'yolov5x':
!python test.py --weights {x}.pt --data coco.yaml --img 640 --conf 0.25 --iou 0.45 # speed
!python test.py --weights {x}.pt --data coco.yaml --img 640 --conf 0.001 --iou 0.65 # mAP
# Unit tests
%%shell
export PYTHONPATH="$PWD" # to run *.py. files in subdirectories
rm -rf runs # remove runs/
for m in yolov5s; do # models
python train.py --weights $m.pt --epochs 3 --img 320 --device 0 # train pretrained
python train.py --weights '' --cfg $m.yaml --epochs 3 --img 320 --device 0 # train scratch
for d in 0 cpu; do # devices
python detect.py --weights $m.pt --device $d # detect official
python detect.py --weights runs/train/exp/weights/best.pt --device $d # detect custom
python test.py --weights $m.pt --device $d # test official
python test.py --weights runs/train/exp/weights/best.pt --device $d # test custom
done
python hubconf.py # hub
python models/yolo.py --cfg $m.yaml # inspect
python models/export.py --weights $m.pt --img 640 --batch 1 # export
done
# Profile
from utils.torch_utils import profile
m1 = lambda x: x * torch.sigmoid(x)
m2 = torch.nn.SiLU()
profile(x=torch.randn(16, 3, 640, 640), ops=[m1, m2], n=100)
# Evolve
!python train.py --img 640 --batch 64 --epochs 100 --data coco128.yaml --weights yolov5s.pt --cache --noautoanchor --evolve
!d=runs/train/evolve && cp evolve.* $d && zip -r evolve.zip $d && gsutil mv evolve.zip gs://bucket # upload results (optional)
# VOC
for b, m in zip([64, 48, 32, 16], ['yolov5s', 'yolov5m', 'yolov5l', 'yolov5x']): # zip(batch_size, model)
!python train.py --batch {b} --weights {m}.pt --data voc.yaml --epochs 50 --cache --img 512 --nosave --hyp hyp.finetune.yaml --project VOC --name {m}
###Output
_____no_output_____ |
notebooks/InformationModel.ipynb | ###Markdown
Example 1: estimate a pollution model with a gaussian process
###Code
# create an environment to observe
env = PollutionModelEnvironment("water", 100, 100)
env.evolve_speed = 1
env.p_pollution = 0.1
for t in range(120):
env.proceed()
plt.imshow(env.value, vmin=0, vmax=1.0)
# create an observation model
#im = ScalarFieldInformationModel_stored_observation("sample", env.width, env.height, \
# estimation_type="disk-fixed",
# estimation_radius=10)
im = ScalarFieldInformationModel_stored_observation("sample", env.width, env.height, \
estimation_type="gaussian-process"
)
# generate a series random observations
for i in range(20):
x = random.randint(0, env.width-1)
y = random.randint(0, env.height-1)
value = env.value[x,y]
obs = {"x": x, "y": y, "value": value}
im.add_observation(obs)
im.proceed(1)
#plt.imshow(im.value, vmin=0, vmax=1.0)
plt.imshow(im.uncertainty, vmin=0, vmax=1.0)
plt.imshow(im.uncertainty, vmin=0, vmax=1.0)
kernel = RationalQuadratic(length_scale = [2.0, 2.0], length_scale_bounds = [1, 100], \
alpha=0.1) +\
WhiteKernel(noise_level=0.2)
#kernel = RationalQuadratic(length_scale = [2.0, 2.0], length_scale_bounds = [1, 100], alpha=1000) +\
# WhiteKernel(noise_level=0.2)
#kernel = RBF(length_scale = [2.0, 2.0], length_scale_bounds = [1, 100]) +\
# WhiteKernel(noise_level=0.5)
im2 = ScalarFieldInformationModel_stored_observation("sample", env.width, env.height, \
estimation_type="gaussian-process",
gp_kernel = kernel
)
im2.observations.extend(im.observations)
im2.proceed(1.0)
plt.imshow(im2.value, vmin=0, vmax=1.0)
###Output
_____no_output_____ |
Chapter05/Exercise 5.09/.ipynb_checkpoints/Exercise 5.09-checkpoint.ipynb | ###Markdown
Execute the following commands to install the necessary libraries (if not installed already)!pip install tabula-py xlrd lxmlYou should have JAVA 8+ and Python 3.7+ installed
###Code
from tabula import read_pdf
import pandas as pd
df18_1 = read_pdf('../datasets/Housing_data.pdf',pages=[1],pandas_options={'header':None})
df18_1
df18_2 = read_pdf('../datasets/Housing_data.pdf',pages=[2],pandas_options={'header':None})
df18_2
import pandas as pd
df1 = pd.DataFrame(df18_1)
df2 = pd.DataFrame(df18_2)
df18=pd.concat([df1,df2],axis=1)
df18.values.tolist()
names=['CRIM','ZN','INDUS','CHAS','NOX','RM',
'AGE','DIS','RAD','TAX','PTRATIO','B','LSTAT','PRICE']
df18_1 = read_pdf('../datasets/Housing_data.pdf',pages=[1],pandas_options={'header':None,'names':names[:10]})
df18_2 = read_pdf('../datasets/Housing_data.pdf',pages=[2],pandas_options={'header':None,'names':names[10:]})
df1 = pd.DataFrame(df18_1)
df2 = pd.DataFrame(df18_2)
df18=pd.concat([df1,df2],axis=1)
df18.values.tolist()
###Output
_____no_output_____ |
src/probing-tasks/replicate/code_improvements.ipynb | ###Markdown
**Introduction:** In this notebook we compare the speed of edge probing across the following three implementations:
* jiant 1.3.2
* jiant 2.2.0
* our own implementation (https://github.com/SwiftPredator/How-Does-Bert-Answer-QA-DLP2021/blob/main/src/probing-tasks/replicate/probing_tasks.ipynb)

Before running, make sure to set the runtime type to GPU (neither jiant version supports TPUs).

**jiant 1.3.2:** First we install the dependencies for jiant 1.3.2 and clone the repository.
###Code
!pip install allennlp==0.8.4
!pip install overrides==3.1.0
!pip install jsondiff
!pip install sacremoses
!pip install pyhocon==0.3.35
!pip install transformers==2.6.0
!pip install python-Levenshtein==0.12.0
!pip install tensorflow==1.15.0
!python -m nltk.downloader perluniprops nonbreaking_prefixes punkt
!pip uninstall overrides
!pip install overrides==3.1.0
!pip install tensorflow==1.15
###Output
_____no_output_____
###Markdown
Restart the runtime now. Clone the jiant and OntoNotes repository and set some environment variables.
###Code
!git clone --branch v1.3.2 --recursive https://github.com/nyu-mll/jiant.git jiant
!git clone https://github.com/yuchenlin/OntoNotes-5.0-NER-BIO.git
import os
os.environ['JIANT_PROJECT_PREFIX'] = "/content/output"
os.environ['JIANT_DATA_DIR'] = "/content/data"
os.environ['WORD_EMBS_FILE'] = "/content/embs"
###Output
_____no_output_____
###Markdown
Copy the OntoNotes path (/content/OntoNotes-5.0-NER-BIO/conll-formatted-ontonotes-5.0) into /content/jiant/probing/get_and_process_all_data.sh. Comment out the SPR data and the tokenization steps for OpenAI, Moses and bert-large in the same script, and set JIANT_DATA_DIR to "/content/data". Then run the preprocessing cell below.
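To help with those manual edits, the next cell simply prints the lines of the script that mention the relevant pieces (a convenience sketch added here, not part of the original workflow):
###Code
# Locate the lines to edit by hand: the OntoNotes path, the SPR datasets, and the
# OpenAI/Moses/bert-large tokenization steps.
!grep -n -i -E "ontonotes|spr|openai|moses|bert-large" /content/jiant/probing/get_and_process_all_data.sh
###Output
_____no_output_____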
###Code
%cd /content/
!./jiant/probing/get_and_process_all_data.sh
###Output
_____no_output_____
###Markdown
Afterwards save the data to Google Drive.
###Code
from google.colab import drive
drive.mount('/content/drive')
!cp -r "/content/data" "/content/drive/MyDrive/data"
###Output
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
###Markdown
If you have already preprocessed and saved the data, you can just load it from Google Drive.
###Code
from google.colab import drive
drive.mount('/content/drive')
!cp -r "/content/drive/MyDrive/data" "/content/data"
###Output
_____no_output_____
###Markdown
Before running the next cell, go to /content/jiant/jiant/config/defaults.conf and set max_vals in edges-tmpl-large and edges-tmpl-small to 5, and val_interval in edges-tmpl and edges-tmpl-small to 1000; restart the runtime afterwards. To sidestep any difficulties with adding new tasks, we only time tasks that are already implemented in jiant. For some reason the runtime has to be factory-reset after each task.
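To verify those config edits before training, the relevant sections of defaults.conf can be printed (a small helper sketch; adjust the amount of context if the sections are longer):
###Code
# Show the edges-tmpl* sections with a few lines of context so max_vals and val_interval can be checked.
!grep -n -A 8 "edges-tmpl" /content/jiant/jiant/config/defaults.conf
###Output
_____no_output_____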
###Code
tasks = [
#"edges-ner-ontonotes",
#"edges-rel-semeval",
"edges-coref-ontonotes",
]
models = [
"bert-base-uncased",
]
%cd /content/jiant/
import jiant.__main__ as main
from timeit import default_timer as timer
import os
import json
os.environ["JIANT_PROJECT_PREFIX"] = "/content/output/"
os.environ["JIANT_DATA_DIR"] = "/content/data/"
os.environ["WORD_EMBS_FILE"] = "/content/embs/"
if os.path.isfile("/content/results.json"):
    with open("/content/results.json", "r") as f:
        results = json.load(f)
else:
    results = {}
implementation_results = results.setdefault("jiant 1.3.2", {})
for model in models:
implementation_results[model] = {}
for task in tasks:
start = timer()
main.main([
"--config_file",
"/content/jiant/jiant/config/edgeprobe/edgeprobe_bert.conf",
"-o",
f"target_tasks={task},exp_name=timeit,input_module={model},max_seq_len=384"
])
end = timer()
implementation_results[model][task] = end - start
print(results)
with open("/content/results.json", "w") as f:
json.dump(results, f)
###Output
_____no_output_____
###Markdown
**jiant 2.2.0:** We follow the same steps as in the reproduced notebook.
###Code
!git clone https://github.com/SwiftPredator/How-Does-Bert-Answer-QA-DLP2021.git
# copy the modified jiant lib to the /content/
!mv "/content/How-Does-Bert-Answer-QA-DLP2021/src/probing-tasks/jiant" "/content/"
%cd jiant
!pip install -r requirements-no-torch.txt
!pip install --no-deps -e ./
!pip install gdown # lib to download file from googlde drive link
###Output
_____no_output_____
###Markdown
Restart the runtime now. After any restart, run the notebook from this point on.
###Code
%cd /content/jiant
import jiant.utils.python.io as py_io
import jiant.utils.display as display
import os
def init_task_config(task_name, size):
jiant_task = task_name
if(task_name == "sup-squad" or task_name == "sup-babi"):
jiant_task = "coref" # use coref task to probe supporting facts task because of the analog structure of jiant EP json format
os.makedirs("/content/tasks/configs/", exist_ok=True)
os.makedirs(f"/content/tasks/data/{task_name}", exist_ok=True)
py_io.write_json({
"task": jiant_task,
"paths": {
"train": f"/content/tasks/data/{task_name}/{size}/train.jsonl",
"val": f"/content/tasks/data/{task_name}/{size}/val.jsonl",
},
"name": jiant_task
}, f"/content/tasks/configs/{task_name}_config.json")
task_names = [
"ner",
"semeval",
"coref",
#"ques"
#"sup-squad",
#"sup-babi",
#"sup-hotpot",
]
size = "timing"
for task_name in task_names:
init_task_config(task_name, size)
# copy the task data to the tasks folder created above
!cp -r "/content/How-Does-Bert-Answer-QA-DLP2021/src/probing-tasks/data" "/content/tasks/"
import jiant.proj.main.export_model as export_model
models = [
"bert-base-uncased",
]
for model in models:
export_model.export_model(
hf_pretrained_model_name_or_path=model,
output_base_path=f"/content/models/{model}",
)
import jiant.shared.caching as caching
import jiant.proj.main.tokenize_and_cache as tokenize_and_cache
seq_length_options = {
"ner": 384,
"semeval": 384,
"coref": 384,
"ques": 128,
"sup-squad": 384,
"sup-babi": 384,
"sup-hotpot": 384,
}
# Tokenize and cache each task
def tokenize(task_name, model):
tokenize_and_cache.main(tokenize_and_cache.RunConfiguration(
task_config_path=f"/content/tasks/configs/{task_name}_config.json",
hf_pretrained_model_name_or_path=model,
output_dir=f"/content/cache/{task_name}",
phases=["train", "val"],
max_seq_length=seq_length_options[task_name],
))
for task_name in task_names:
for model in models:
tokenize(task_name, model)
import jiant.proj.main.scripts.configurator as configurator
def create_jiant_task_config(task_name):
jiant_run_config = configurator.SimpleAPIMultiTaskConfigurator(
task_config_base_path="/content/tasks/configs",
task_cache_base_path="/content/cache",
train_task_name_list=[task_name],
val_task_name_list=[task_name],
train_batch_size=32,
eval_batch_size=32,
epochs=50,
num_gpus=1,
).create_config()
os.makedirs("/content/tasks/run_configs/", exist_ok=True)
py_io.write_json(jiant_run_config, f"/content/tasks/run_configs/{task_name}_run_config.json")
#display.show_json(jiant_run_config)
import jiant.proj.main.runscript as main_runscript
def run_probing_task(task_name, model_name="bert-base-uncased", num_layers=1, bin_model_path=""):
hf_model_name = model_name
if(model_name == "bert-babi"):
hf_model_name = "bert-base-uncased"
run_args = main_runscript.RunConfiguration(
jiant_task_container_config_path=f"/content/tasks/run_configs/{task_name}_run_config.json",
output_dir=f"/content/tasks/runs/{task_name}",
hf_pretrained_model_name_or_path=hf_model_name,
model_path=f"/content/models/{model_name}/model/model.p",
model_config_path=f"/content/models/{model_name}/model/config.json",
learning_rate=1e-2,
eval_every_steps=100,
do_train=True,
do_val=True,
do_save=True,
force_overwrite=True,
num_hidden_layers=num_layers,
bin_model_path=bin_model_path,
)
return main_runscript.run_loop(run_args)
from timeit import default_timer as timer
def probe(model, task_name, n_layers, dataset_size):
init_task_config(task_name, dataset_size)
#tokenize(task_name, model)
create_jiant_task_config(task_name)
start = timer()
run_probing_task(task_name, model, n_layers)
end = timer()
return end - start
###Output
_____no_output_____
###Markdown
To avoid lengthy tokenization and caching, we run for 50 epochs instead of 5, use train datasets of size 320 (10 batches), and evaluate every 100 steps. The measured times therefore have to be multiplied by 10 to be comparable to the other implementations.
###Code
import json
import os
if os.path.isfile("/content/results.json"):
    with open("/content/results.json", "r") as f:
results = json.load(f)
else:
results = {}
implementation_results = results.setdefault("jiant 2.2.0", {})
for model in models:
implementation_results[model] = {}
for task in task_names:
implementation_results[model][task] = probe(model, task, 1, "test") * 10
print(results)
with open("/content/results.json", "w") as f:
json.dump(results, f)
###Output
_____no_output_____
###Markdown
**Our own implementation:** Change the runtime to TPU now and run the following cell to install our code. Restart the runtime afterwards.
###Code
!git clone https://github.com/SwiftPredator/How-Does-Bert-Answer-QA-DLP2021/
!mv "/content/How-Does-Bert-Answer-QA-DLP2021/src/probing-tasks/data" "/content/"
!mv "/content/How-Does-Bert-Answer-QA-DLP2021/src/probing-tasks/replicate" "/content/"
%cd /content/replicate
!pip install -r requirements.txt
!pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.9-cp37-cp37m-linux_x86_64.whl
%cd /content/replicate
import torch
import torch.nn as nn
import torch.utils.data as data
from transformers import AutoModel, AutoTokenizer
from edge_probing_utils import (
JiantDatasetSingleSpan,
JiantDatasetTwoSpan
)
import edge_probing as ep
import edge_probing_tpu as ep_tpu
tasks = [
"ner",
"semeval",
"coref",
]
task_types = {
"ner": "single_span",
"semeval": "single_span",
"coref": "two_span",
}
models = [
"bert-base-uncased",
]
task_labels_to_ids = {
"ner": {'ORDINAL': 0, 'DATE': 1, 'PERSON': 2, 'LOC': 3, 'GPE': 4, 'QUANTITY': 5, 'ORG': 6, 'WORK_OF_ART': 7, 'CARDINAL': 8, 'TIME': 9, 'MONEY': 10, 'LANGUAGE': 11, 'NORP': 12, 'PERCENT': 13, 'EVENT': 14, 'LAW': 15, 'FAC': 16, 'PRODUCT': 17},
"coref": {"0": 0, "1": 1},
"semeval": {'Component-Whole(e2,e1)': 0, 'Other': 1, 'Instrument-Agency(e2,e1)': 2, 'Member-Collection(e1,e2)': 3, 'Entity-Destination(e1,e2)': 4, 'Content-Container(e1,e2)': 5, 'Message-Topic(e1,e2)': 6, 'Cause-Effect(e2,e1)': 7, 'Product-Producer(e2,e1)': 8, 'Member-Collection(e2,e1)': 9, 'Entity-Origin(e1,e2)': 10, 'Cause-Effect(e1,e2)': 11, 'Component-Whole(e1,e2)': 12, 'Message-Topic(e2,e1)': 13, 'Product-Producer(e1,e2)': 14, 'Entity-Origin(e2,e1)': 15, 'Content-Container(e2,e1)': 16, 'Instrument-Agency(e1,e2)': 17, 'Entity-Destination(e2,e1)': 18},
}
import torch_xla
import torch_xla.core.xla_model as xm
import os
import json
import matplotlib.pyplot as plt
from google.colab import drive
from timeit import default_timer as timer
loss_function = nn.BCELoss()
batch_size = 32
num_layers = [12]
num_workers = 0
device = xm.xla_device()
# Disable warnings.
os.environ["TOKENIZERS_PARALLELISM"] = "false"
def probe(model, task, size):
tokenizer = AutoTokenizer.from_pretrained(model)
train_data = ep.tokenize_jiant_dataset(
tokenizer,
*(ep.read_jiant_dataset(f"/content/data/{task}/{size}/train.jsonl")),
task_labels_to_ids[task],
)
val_data = ep.tokenize_jiant_dataset(
tokenizer,
*(ep.read_jiant_dataset(f"/content/data/{task}/{size}/val.jsonl")),
task_labels_to_ids[task],
)
test_data = ep.tokenize_jiant_dataset(
tokenizer,
*(ep.read_jiant_dataset(f"/content/data/{task}/{size}/test.jsonl")),
task_labels_to_ids[task],
)
if task_types[task] == "single_span":
train_data = JiantDatasetSingleSpan(train_data)
val_data = JiantDatasetSingleSpan(val_data)
test_data = JiantDatasetSingleSpan(test_data)
elif task_types[task] == "two_span":
train_data = JiantDatasetTwoSpan(train_data)
val_data = JiantDatasetTwoSpan(val_data)
test_data = JiantDatasetTwoSpan(test_data)
train_loader = data.DataLoader(train_data, batch_size=batch_size, shuffle=True, pin_memory=True, num_workers=num_workers)
val_loader = data.DataLoader(val_data, batch_size=batch_size, pin_memory=True, num_workers=num_workers)
test_loader = data.DataLoader(test_data, batch_size=batch_size, pin_memory=True, num_workers=num_workers)
start = timer()
ep_tpu.probing(ep.ProbeConfig(
train_loader,
val_loader,
test_loader,
model,
num_layers,
loss_function,
task_labels_to_ids[task],
task_types[task],
lr=0.0001,
max_evals=5,
eval_interval=1000,
dev=device,
))
end = timer()
return end - start
if os.path.isfile("/content/results.json"):
    with open("/content/results.json", "r") as f:
        results = json.load(f)
else:
    results = {}

# Store these timings under their own key: this section times our own implementation,
# not jiant 1.3.2 (the key the original cell reused here).
implementation_results = results.setdefault("own implementation", {})
for model in models:
implementation_results[model] = {}
for task in tasks:
implementation_results[model][task] = probe(model, task, "big")
print(results)
with open("/content/results.json", "w") as f:
json.dump(results, f)
###Output
_____no_output_____
###Markdown
**Visualization:** Change the task names in /content/results.json so they are the same for all implementations.
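The renaming can also be scripted. Here is a hedged sketch, assuming the jiant 1.3.2 results were stored under the edges-* task names used earlier in this notebook:
###Code
import json

# Map the jiant 1.3.2 task names onto the names used by the other two implementations.
name_map = {
    "edges-ner-ontonotes": "ner",
    "edges-rel-semeval": "semeval",
    "edges-coref-ontonotes": "coref",
}
with open("/content/results.json", "r") as f:
    results = json.load(f)
for implementation, model_results in results.items():
    for model, task_times in model_results.items():
        model_results[model] = {name_map.get(task, task): t for task, t in task_times.items()}
with open("/content/results.json", "w") as f:
    json.dump(results, f)
###Output
_____no_output_____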
###Code
import matplotlib.pyplot as plt
import json
with open("/content/results.json", "r") as f:
results = json.load(f)
task_to_title = {
"coref": "coref",
"ques": "ques",
"sup-squad": "sup SQuAD",
"sup-babi": "sup bAbI",
"ner": "ner",
"semeval": "rel",
"adversarialqa": "adversarialQA"
}
for implementation in results.keys():
plt.scatter(
[task_to_title[key] for key in results[implementation]["bert-base-uncased"].keys()],
list(results[implementation]["bert-base-uncased"].values()),
)
plt.legend(list(results.keys()))
plt.show()
###Output
_____no_output_____ |
mongo-db-course/m220p/mflix-python/notebooks/error_handling.ipynb | ###Markdown
Writes with Error Handling

At this point we've gotten pretty comfortable writing data to Mongo, creating and updating with different durabilities. We've even configured the driver to change the way our writes are perceived. But there are still times when the writes we send to the server will result in an error, and we've briefly discussed the way our application can deal with these errors.

In this lesson we're gonna encounter some of the basic errors in the PyMongo driver, and how to handle these errors in a way that makes our application more consistent and reliable.
###Code
from pymongo import MongoClient, errors
uri = "mongodb+srv://m220-student:[email protected]/admin"
mc = MongoClient(uri)
lessons = mc.lessons
shipments = lessons.shipments
###Output
_____no_output_____
###Markdown
So here's a URI string connecting to our Atlas cluster, and I've initialized a client with that string.

We're using a new collection called `shipments`, and the scenario for this lesson is that our application is a clothing manufacturer that also handles the shipping for their clothing items.
###Code
import time
import random
from pprint import pprint
shipments.drop()
cities = [ "Atlanta", "New York", "Miami", "Chicago", "Los Angeles", "Seattle", "Dallas" ]
products = [ "shoes", "pants", "shirts", "hats", "socks" ]
quantities = [ 10, 20, 40, 80, 160, 320, 640, 1280, 2560 ]
docs = []
for truck_id in range(30):
source = random.choice(cities)
destination = random.choice([c for c in cities if c != source])
product = random.choice(products)
quantity = random.choice(quantities)
doc = {
"truck_id": truck_id,
"source": source,
"destination": destination,
"product": product,
"quantity": quantity
}
docs.append(doc)
insert_response = shipments.insert_many(docs)
shipments.count_documents({})
###Output
_____no_output_____
###Markdown
This is a short script that's gonna create some test data for the clothing manufacturer. This is included in this notebook so you can test it out yourself.

You can see the documents we're producing have 5 fields each, with this `truck_id` determined by the iteration of our loop (point to `truck_id`). The (point) source and destination are both derived from this `cities` array, and this (point to `destination`) part will make sure that the destination city is different from the source.

Each shipment also has a product and a quantity, but the part we're gonna focus on is this (point) `truck_id` field. This is gonna record the truck currently allocated for this shipment, so that truck can be considered unavailable for any other shipments. This way when a new shipment comes in, we can make sure the truck that gets assigned to that shipment isn't already doing another one.
###Code
shipments.find_one()
###Output
_____no_output_____
###Markdown
Right now this loop only has 30 iterations (point to the loop) so we have exactly 30 documents in the `shipments` collection. If we take a look at one of them... (run command)

Then we can see that they each have these five fields. The assumption I'm making for this data is that, while this (point) document exists in the collection, the shipment is still ongoing. When this shipment is complete, we would delete this document, or maybe add a flag to the document like `{ completed: True }`.

This means that 30 documents in the `shipments` collection means 30 shipments that are happening right now. And if we tried to insert a new shipment, it has to have a unique `truck_id` (point). This way each truck is only assigned to one shipment at a time.
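As a small aside (not part of the original lesson, and left commented out because the later cells rely on truck 4 staying busy), completing a shipment under that assumption could look like either of these:
###Code
# Option 1: delete the shipment document once it's done; under the unique truck_id index
# created in the next cell, this also frees the truck for a new shipment.
# shipments.delete_one({"truck_id": 4})
# Option 2: keep the document for history and flag it as completed; note that the unique index
# would still block reusing the truck until the old document is actually removed.
# shipments.update_one({"truck_id": 4}, {"$set": {"completed": True}})
###Output
_____no_output_____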
###Code
shipments.create_index("truck_id", unique=True)
###Output
_____no_output_____
###Markdown
So this is the way we're gonna enforce uniqueness among these `truck_id`s. This is called a unique index, which will create an index on the `truck_id` field, and also make sure that there are no duplicate `truck_id`s. (enter command)

And it created this index called `truck_id_1`, the 1 meaning that the index is sorted in ascending order.
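As a quick sanity check (a small aside, not in the original lesson), PyMongo can list the collection's indexes so we can see the new unique index:
###Code
# The unique index created above should appear here as "truck_id_1".
shipments.index_information()
###Output
_____no_output_____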
###Code
doc = {
"source": "New York",
"destination": "Atlanta",
"truck_id": 4,
"product": "socks",
"quantity": 40
}
try:
res = shipments.insert_one(doc)
print(res.inserted_id)
except errors.DuplicateKeyError:
truck_id = doc["truck_id"]
print(f"Truck #{truck_id} is currently performing a shipment. Please select another truck.")
###Output
Truck #4 is currently performing a shipment. Please select another truck.
###Markdown
Here's an example of a shipment being added to our collection. We want to ship 40 socks from New York to Atlanta, and we've chosen to assign truck 4 to perform this shipment. This (point) truck number could have been user input, or something determined by our application, but either way this is going to cause a `DuplicateKeyError` because we already have a shipment assigned to truck (point) 4. (enter command)

So using the try-except block, our program prints out a message when a `DuplicateKeyError` is thrown. The message tells us that the truck we wanted to use has already been sent out. So the application allows the insert to fail, and then sends an error message up to the user to choose another truck.
###Code
import string
trucks = lessons.trucks
trucks.drop()
trucks.insert_many([
{ "_id": i, "license": "".join(random.choice(string.ascii_uppercase + string.digits) for _ in range(7)) } for i in range(50)
])
trucks.count_documents({})
###Output
_____no_output_____
###Markdown
But we can actually be a little more proactive in handling this error, if we know about the other trucks that are available for this job.

Here's a new collection called `trucks`, which we're gonna use to find another available truck. This should insert exactly 50 documents into this collection... (enter command)

And it looks like it worked.
###Code
trucks.find_one()
###Output
_____no_output_____
###Markdown
The documents each only have two fields: an `_id` from 0 to 49 (point), which will relate to the `truck_id` from the `shipments` collection. And I've assigned a random string of 7 uppercase letters and numbers to be the license plate number, although actually some US states only allow 6 characters.
###Code
doc = {
"source": "New York",
"destination": "Atlanta",
"truck_id": 4,
"product": "socks",
"quantity": 40
}
try:
res = shipments.insert_one(doc)
print(res.inserted_id)
except errors.DuplicateKeyError:
busy_trucks = set(shipments.distinct("truck_id"))
all_trucks = set(trucks.distinct("_id"))
available_trucks = all_trucks.difference(busy_trucks)
old_truck_id = doc["truck_id"]
if available_trucks:
chosen_truck = random.choice(list(available_trucks))
new_truck_id = doc["truck_id"] = chosen_truck
res = shipments.insert_one(doc)
print(f"Truck #{old_truck_id} is currently performing a shipment. Truck #{new_truck_id} has been sent out instead.")
else:
print(f"Truck #{old_truck_id} is currently performing a shipment. Could not find another truck.")
###Output
Truck #4 is currently performing a shipment. Truck #33 has been sent out instead.
|