18_Python_Finance.ipynb | ###Markdown
Web Scraping the B3 Website. Site: www.b3.com.br
###Code
import pandas as pd
pd.set_option('display.min_rows', 50)
pd.set_option('display.max_rows', 200)
url = 'http://bvmf.bmfbovespa.com.br/indices/ResumoCarteiraTeorica.aspx?Indice=IBOV&idioma=pt-br'
pd.read_html(url, decimal=',', thousands='.', index_col='Código')[0][:-1]
def busca_carteira_teorica(indice):
url = 'http://bvmf.bmfbovespa.com.br/indices/ResumoCarteiraTeorica.aspx?Indice={}&idioma=pt-br'.format(indice.upper())
return pd.read_html(url, decimal=',', thousands='.', index_col='Código')[0][:-1]
ibov = busca_carteira_teorica('ibov')
ibov.sort_values('Part. (%)', ascending=False)
###Output
_____no_output_____
###Markdown
Segment and Sector Indices: BM&FBOVESPA Financial Index (IFNC B3)
###Code
ifnc = busca_carteira_teorica('ifnc')
ifnc
###Output
_____no_output_____
###Markdown
Unsponsored BDR Index - GLOBAL (BDRX B3)
###Code
bdrx = busca_carteira_teorica('bdrx')
bdrx
###Output
_____no_output_____
###Markdown
Consumption Index (ICON B3)
###Code
icon = busca_carteira_teorica('icon')
icon
###Output
_____no_output_____
###Markdown
Electric Power Index (IEE B3)
###Code
iee = busca_carteira_teorica('iee')
iee
###Output
_____no_output_____
###Markdown
Real Estate Investment Funds Index (IFIX B3)
###Code
ifix = busca_carteira_teorica('ifix')
ifix
###Output
_____no_output_____
###Markdown
BM&FBOVESPA Basic Materials Index (IMAT B3)
###Code
imat = busca_carteira_teorica('imat')
imat
###Output
_____no_output_____
###Markdown
BM&FBOVESPA Dividend Index (IDIV B3)
###Code
idiv = busca_carteira_teorica('idiv')
idiv
###Output
_____no_output_____
###Markdown
Industrial Sector Index (INDX B3)
###Code
indx = busca_carteira_teorica('indx')
indx
###Output
_____no_output_____
###Markdown
Real Estate Index (IMOB B3)
###Code
imob = busca_carteira_teorica('imob')
imob
###Output
_____no_output_____
###Markdown
MidLarge Cap Index (MLCX B3)
###Code
mlcx = busca_carteira_teorica('mlcx')
mlcx
###Output
_____no_output_____
###Markdown
Small Cap Index (SMLL B3)
###Code
smll = busca_carteira_teorica('smll')
smll
###Output
_____no_output_____
###Markdown
BM&FBOVESPA Public Utilities Index (UTIL B3)
###Code
util = busca_carteira_teorica('util')
util
###Output
_____no_output_____
###Markdown
Valor BM&FBOVESPA Index (IVBX 2 B3)
###Code
ivbx = busca_carteira_teorica('ivbx')
ivbx
###Output
_____no_output_____
###Markdown
Broad indices: Bovespa Index (Ibovespa B3)
###Code
ibov = busca_carteira_teorica('ibov')
ibov
###Output
_____no_output_____
###Markdown
Brazil 100 Index (IBrX 100 B3)
###Code
ibrx = busca_carteira_teorica('ibrx')
ibrx
###Output
_____no_output_____
###Markdown
Brazil 50 Index (IBrX 50 B3)
###Code
ibxl = busca_carteira_teorica('ibxl')
ibxl
###Output
_____no_output_____
###Markdown
Brazil Broad-Based Index (IBrA B3)
###Code
ibra = busca_carteira_teorica('ibra')
ibra
###Output
_____no_output_____
###Markdown
Governance indices: Special Corporate Governance Stock Index (IGC B3)
###Code
igc = busca_carteira_teorica('igc')
igc
###Output
_____no_output_____
###Markdown
Special Tag Along Stock Index (ITAG B3)
###Code
itag = busca_carteira_teorica('itag')
itag
###Output
_____no_output_____
###Markdown
Corporate Governance Trade Index (IGCT B3)
###Code
igct = busca_carteira_teorica('igct')
igct
###Output
_____no_output_____
###Markdown
Corporate Governance Index - Novo Mercado (IGC-NM B3)
###Code
ignm = busca_carteira_teorica('ignm')
ignm
###Output
_____no_output_____
###Markdown
Sustainability indices: Carbon Efficient Index (ICO2 B3)
###Code
ico2 = busca_carteira_teorica('ico2')
ico2
###Output
_____no_output_____
###Markdown
Corporate Sustainability Index (ISE B3)
###Code
ise = busca_carteira_teorica('ise')
ise
###Output
_____no_output_____
###Markdown
Composition of the broad indices
###Code
pd.concat([ibov, ibrx, ibxl, ibra], keys=['IBOV', 'IBRX', 'IBXL', 'IBRA'], axis=1)
###Output
_____no_output_____
###Markdown
Composition of the governance indices
###Code
pd.concat([ibov, igc, itag, igct, ignm], keys=['IBOV', 'IGC', 'ITAG', 'IGCT', 'IGNM'], axis=1)
###Output
_____no_output_____
###Markdown
Composition of the sustainability indices
###Code
pd.concat([ibov, ico2, ise], keys=['IBOV', 'ICO2', 'ISE'], axis=1)
###Output
_____no_output_____
###Markdown
Composition of the segment and sector indices
###Code
pd.concat([ibov, ifnc, bdrx, icon, iee, ifix, imat, idiv, indx, imob, mlcx, smll, util, ivbx], keys=['IBOV', 'IFNC', 'BDRX', 'ICON', 'IEE', 'IFIX', 'IMAT', 'IDIV', 'INDX', 'IMOB', 'MLCX', 'SMLL', 'UTIL', 'IVBX'], axis=1)
###Output
_____no_output_____ |
cheatsheets/cuDF/cuDF_Properties.ipynb | ###Markdown
cuDF Cheat Sheets sample code. (c) 2020 NVIDIA, BlazingSQL. Distributed under Apache License 2.0. Imports
###Code
import cudf
import numpy as np
###Output
_____no_output_____
###Markdown
Sample DataFrame
###Code
df = cudf.DataFrame(
[
(39, 6.88, np.datetime64('2020-10-08T12:12:01'), np.timedelta64(14378,'s'), 'C', 'D', 'data'
, 'RAPIDS.ai is a suite of open-source libraries that allow you to run your end to end data science and analytics pipelines on GPUs.')
, (11, 4.21, None, None , 'A', 'D', 'cuDF'
, 'cuDF is a Python GPU DataFrame (built on the Apache Arrow columnar memory format)')
, (31, 4.71, np.datetime64('2020-10-10T09:26:43'), np.timedelta64(12909,'s'), 'U', 'D', 'memory'
, 'cuDF allows for loading, joining, aggregating, filtering, and otherwise manipulating tabular data using a DataFrame style API.')
, (40, 0.93, np.datetime64('2020-10-11T17:10:00'), np.timedelta64(10466,'s'), 'P', 'B', 'tabular'
, '''If your workflow is fast enough on a single GPU or your data comfortably fits in memory on
a single GPU, you would want to use cuDF.''')
, (33, 9.26, np.datetime64('2020-10-15T10:58:02'), np.timedelta64(35558,'s'), 'O', 'D', 'parallel'
, '''If you want to distribute your workflow across multiple GPUs or have more data than you can fit
in memory on a single GPU you would want to use Dask-cuDF''')
, (42, 4.21, np.datetime64('2020-10-01T10:02:23'), np.timedelta64(20480,'s'), 'U', 'C', 'GPUs'
, 'BlazingSQL provides a high-performance distributed SQL engine in Python')
, (36, 3.01, np.datetime64('2020-09-30T14:36:26'), np.timedelta64(24409,'s'), 'T', 'D', None
, 'BlazingSQL is built on the RAPIDS GPU data science ecosystem')
, (38, 6.44, np.datetime64('2020-10-10T08:34:36'), np.timedelta64(90171,'s'), 'X', 'B', 'csv'
, 'BlazingSQL lets you ETL raw data directly into GPU memory as a GPU DataFrame (GDF)')
, (17, 5.28, np.datetime64('2020-10-09T08:34:40'), np.timedelta64(30532,'s'), 'P', 'D', 'dataframes'
, 'Dask is a flexible library for parallel computing in Python')
, (10, 8.28, np.datetime64('2020-10-03T03:31:21'), np.timedelta64(23552,'s'), 'W', 'B', 'python'
, None)
]
, columns = ['num', 'float', 'datetime', 'timedelta', 'char', 'category', 'word', 'string']
)
df['category'] = df['category'].astype('category')
###Output
_____no_output_____
###Markdown
--- Properties --- DataFrame cudf.core.dataframe.DataFrame.at()
###Code
df.at[3]
df.at[3:7]
df.at[2, 'string']
df.at[2:5, 'string']
df.at[2:5, ['string', 'float']]
###Output
_____no_output_____
###Markdown
cudf.core.dataframe.DataFrame.columns()
###Code
df.columns
###Output
_____no_output_____
###Markdown
cudf.core.dataframe.DataFrame.dtypes()
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
cudf.core.dataframe.DataFrame.iat()
###Code
df.iat[3]
df.iat[3:7]
df.iat[2, 7]
df.iat[2:5, 7]
df.iat[2:5, 6:8]
df.iat[2:5, [1,3,5]]
###Output
_____no_output_____
###Markdown
cudf.core.dataframe.DataFrame.iloc()
###Code
df.iloc[3]
df.iloc[3:5]
df.iloc[2, 7]
df.iloc[2:5, 7]
df.iloc[2:5, [4,5,7]]
df.iloc[[1,2,7], [4,5,6]]
###Output
_____no_output_____
###Markdown
cudf.core.dataframe.DataFrame.index()
###Code
df.index
###Output
_____no_output_____
###Markdown
cudf.core.dataframe.DataFrame.loc()
###Code
df.loc[3]
df.loc[3:6]
df.loc[2, 'string']
df.loc[3:6, ['string', 'float']]
###Output
_____no_output_____
###Markdown
cudf.core.dataframe.DataFrame.ndim()
###Code
df.ndim
###Output
_____no_output_____
###Markdown
cudf.core.dataframe.DataFrame.shape()
###Code
df.shape
###Output
_____no_output_____
###Markdown
cudf.core.dataframe.DataFrame.size()
###Code
df.size
###Output
_____no_output_____
###Markdown
cudf.core.dataframe.DataFrame.T()
###Code
df[['num']].T
###Output
_____no_output_____
###Markdown
cudf.core.dataframe.DataFrame.values()
###Code
df[['num', 'float']].values
###Output
_____no_output_____
###Markdown
Series cudf.core.series.Series.cat()
###Code
df['category'].cat
###Output
_____no_output_____
###Markdown
cudf.core.series.Series.data()
###Code
df['num'].data
###Output
_____no_output_____
###Markdown
cudf.core.series.Series.dt()
###Code
df['datetime'].dt
df['timedelta'].dt
###Output
_____no_output_____
###Markdown
cudf.core.series.Series.dtype()
###Code
df['num'].dtype
###Output
_____no_output_____
###Markdown
cudf.core.series.Series.has_nulls()
###Code
df['num'].has_nulls
df['string'].has_nulls
###Output
_____no_output_____
###Markdown
cudf.core.series.Series.iloc()
###Code
df['num'].iloc[1]
df['num'].iloc[1:4]
###Output
_____no_output_____
###Markdown
cudf.core.series.Series.index()
###Code
df['num'].index
###Output
_____no_output_____
###Markdown
cudf.core.series.Series.is_monotonic_decreasing()
###Code
df['num'].is_monotonic_decreasing
###Output
_____no_output_____
###Markdown
cudf.core.series.Series.is_monotonic_increasing()
###Code
df['num'].is_monotonic_decreasing
###Output
_____no_output_____
###Markdown
cudf.core.series.Series.is_monotonic()
###Code
df['num'].is_monotonic
###Output
_____no_output_____
###Markdown
cudf.core.series.Series.is_unique()
###Code
df['num'].is_unique
###Output
_____no_output_____
###Markdown
cudf.core.series.Series.loc()
###Code
df['num'].loc[3]
df['num'].loc[3:6]
###Output
_____no_output_____
###Markdown
cudf.core.series.Series.name()
###Code
df['float'].name
###Output
_____no_output_____
###Markdown
cudf.core.series.Series.ndim()
###Code
df['float'].ndim
###Output
_____no_output_____
###Markdown
cudf.core.series.Series.null_count()
###Code
df['float'].null_count
df['string'].null_count
###Output
_____no_output_____
###Markdown
cudf.core.series.Series.nullable()
###Code
df['num'].nullable
df['string'].nullable
###Output
_____no_output_____
###Markdown
cudf.core.series.Series.nullmask()
###Code
df['datetime'].nullmask
df['word'].nullmask
###Output
_____no_output_____
###Markdown
cudf.core.series.Series.shape()
###Code
df['num'].shape
###Output
_____no_output_____
###Markdown
cudf.core.series.Series.size()
###Code
df['float'].size
df['word'].size
###Output
_____no_output_____
###Markdown
cudf.core.series.Series.str()
###Code
df['word'].str
###Output
_____no_output_____
###Markdown
cudf.core.series.Series.valid_count()
###Code
df['float'].valid_count
df['word'].valid_count
###Output
_____no_output_____
###Markdown
cudf.core.series.Series.values()
###Code
df['num'].values
###Output
_____no_output_____ |
03_classification_annotated.ipynb | ###Markdown
**Chapter 3 – Classification** _This notebook contains all the sample code and solutions to the exercises in chapter 3._ Setup First, let's import a few common modules, ensure Matplotlib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "classification"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
MNIST
###Code
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version=1)
mnist.keys()
X, y = mnist["data"], mnist["target"]
X.shape
## 70000 images, 28x28 pixels
mnist
# import json
# def default(obj):
# if type(obj).__module__ == np.__name__:
# if isinstance(obj, np.ndarray):
# return obj.tolist()
# else:
# return obj.item()
# raise TypeError('Unknown type:', type(obj))
# js = json.dumps(mnist, default=default)
# print(js)
print(y.shape)
print(np.unique(y))
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
some_digit = X[0]
some_digit_image = some_digit.reshape(28, 28)
plt.imshow(some_digit_image, cmap=mpl.cm.binary_r)
plt.axis("off")
save_fig("some_digit_plot")
plt.show()
y[0]
y = y.astype(np.uint8)
def plot_digit(data):
image = data.reshape(28, 28)
plt.imshow(image, cmap = mpl.cm.binary,
interpolation="nearest")
plt.axis("off")
# EXTRA
def plot_digits(instances, images_per_row=10, **options):
size = 28
images_per_row = min(len(instances), images_per_row)
images = [instance.reshape(size,size) for instance in instances]
n_rows = (len(instances) - 1) // images_per_row + 1
row_images = []
n_empty = n_rows * images_per_row - len(instances)
images.append(np.zeros((size, size * n_empty)))
for row in range(n_rows):
rimages = images[row * images_per_row : (row + 1) * images_per_row]
row_images.append(np.concatenate(rimages, axis=1))
image = np.concatenate(row_images, axis=0)
plt.imshow(image, cmap = mpl.cm.binary, **options)
plt.axis("off")
plt.figure(figsize=(9,9))
example_images = X[:100]
plot_digits(example_images, images_per_row=10)
save_fig("more_digits_plot")
plt.show()
y[0]
X_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:]
###Output
_____no_output_____
###Markdown
Binary classifier
###Code
y_train_5 = (y_train == 5)
y_test_5 = (y_test == 5)
###Output
_____no_output_____
###Markdown
**Note**: some hyperparameters will have a different default value in future versions of Scikit-Learn, such as `max_iter` and `tol`. To be future-proof, we explicitly set these hyperparameters to their future default values. For simplicity, this is not shown in the book. * [hinge loss](https://en.wikipedia.org/wiki/Hinge_loss) * [SGD](https://en.wikipedia.org/wiki/Stochastic_gradient_descent)
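For intuition about what `SGDClassifier` with its default hinge loss is doing, here is a minimal, illustrative sketch of one epoch of hinge-loss stochastic gradient descent on a binary problem with labels in {-1, +1}; the learning rate, regularization strength, and toy data are arbitrary choices for the example, not Scikit-Learn's actual internals.
```
import numpy as np

def sgd_hinge_epoch(X, y, w, b, lr=0.01, alpha=1e-4):
    # One pass of SGD on the L2-regularized hinge loss:
    # per-sample loss = max(0, 1 - y*(w.x + b)) + alpha/2 * ||w||^2, with y in {-1, +1}
    for i in np.random.permutation(len(X)):
        margin = y[i] * (X[i] @ w + b)
        if margin < 1:                     # margin violated -> hinge subgradient is non-zero
            w -= lr * (alpha * w - y[i] * X[i])
            b -= lr * (-y[i])
        else:                              # only the L2 penalty contributes
            w -= lr * alpha * w
    return w, b

# toy usage on linearly separable random data
rng = np.random.RandomState(42)
X_toy = rng.randn(200, 2)
y_toy = np.where(X_toy[:, 0] + X_toy[:, 1] > 0, 1, -1)
w, b = np.zeros(2), 0.0
for epoch in range(10):
    w, b = sgd_hinge_epoch(X_toy, y_toy, w, b)
print(w, b)   # the learned separator roughly aligns with x0 + x1 = 0
```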
###Code
from sklearn.linear_model import SGDClassifier
sgd_clf = SGDClassifier(max_iter=1000, tol=1e-3, random_state=42)
sgd_clf.fit(X_train, y_train_5)
sgd_clf.predict([some_digit])
from sklearn.model_selection import cross_val_score
cross_val_score(sgd_clf, X_train, y_train_5, cv=3, scoring="accuracy")
from sklearn.model_selection import StratifiedKFold
from sklearn.base import clone
skfolds = StratifiedKFold(n_splits=3, shuffle=True, random_state=42)
for train_index, test_index in skfolds.split(X_train, y_train_5):
clone_clf = clone(sgd_clf)
X_train_folds = X_train[train_index]
y_train_folds = y_train_5[train_index]
X_test_fold = X_train[test_index]
y_test_fold = y_train_5[test_index]
clone_clf.fit(X_train_folds, y_train_folds)
y_pred = clone_clf.predict(X_test_fold)
n_correct = sum(y_pred == y_test_fold)
print(n_correct / len(y_pred))
###Output
0.9669
0.91625
0.96785
###Markdown
**Note**: `shuffle=True` was omitted by mistake in previous releases of the book.
###Code
from sklearn.base import BaseEstimator
class Never5Classifier(BaseEstimator):
def fit(self, X, y=None):
pass
def predict(self, X):
return np.zeros((len(X), 1), dtype=bool)
never_5_clf = Never5Classifier()
cross_val_score(never_5_clf, X_train, y_train_5, cv=3, scoring="accuracy")
###Output
_____no_output_____
###Markdown
**Warning**: this output (and many others in this notebook and other notebooks) may differ slightly from those in the book. Don't worry, that's okay! There are several reasons for this:* first, Scikit-Learn and other libraries evolve, and algorithms get tweaked a bit, which may change the exact result you get. If you use the latest Scikit-Learn version (and in general, you really should), you probably won't be using the exact same version I used when I wrote the book or this notebook, hence the difference. I try to keep this notebook reasonably up to date, but I can't change the numbers on the pages in your copy of the book.* second, many training algorithms are stochastic, meaning they rely on randomness. In principle, it's possible to get consistent outputs from a random number generator by setting the seed from which it generates the pseudo-random numbers (which is why you will see `random_state=42` or `np.random.seed(42)` pretty often). However, sometimes this does not suffice due to the other factors listed here.* third, if the training algorithm runs across multiple threads (as do some algorithms implemented in C) or across multiple processes (e.g., when using the `n_jobs` argument), then the precise order in which operations will run is not always guaranteed, and thus the exact result may vary slightly.* lastly, other things may prevent perfect reproducibility, such as Python maps and sets whose order is not guaranteed to be stable across sessions, or the order of files in a directory which is also not guaranteed.
###Code
from sklearn.model_selection import cross_val_predict
y_train_pred = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3)
sgd_clf.predict(X_train)
y_train_pred
from sklearn.metrics import confusion_matrix
confusion_matrix(y_train_5, y_train_pred)
###Output
_____no_output_____
###Markdown
| predicted (-) | predicted (+) | |
|----|----|----|
| TN | FP | **actual -** |
| FN | TP | **actual +** |

* precision: the accuracy of the positive predictions, $\equiv TP/(TP+FP)$
  * when the threshold moves down, both TP and FP may go up, so precision is not guaranteed to be monotonic
* recall / sensitivity / TPR (true positive rate): the ratio of TP among all actual positives, $\equiv TP/(TP + FN)$
  * the total number of TP + FN is fixed, so recall can only decrease as the threshold moves up.
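To see the monotonicity point concretely, here is a small self-contained sketch (the scores and labels are toy values invented for the example): recall can only fall as the threshold rises, while precision can move in either direction.
```
import numpy as np

# toy decision scores and true labels (1 = positive)
scores = np.array([0.1, 0.3, 0.35, 0.5, 0.55, 0.7, 0.8, 0.9])
labels = np.array([  0,   0,    1,   0,    1,   1,   0,   1])

for threshold in [0.2, 0.4, 0.6, 0.75]:
    pred = scores >= threshold
    tp = np.sum(pred & (labels == 1))
    fp = np.sum(pred & (labels == 0))
    fn = np.sum(~pred & (labels == 1))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    print(f"threshold={threshold:.2f}  precision={precision:.2f}  recall={recall:.2f}")
# recall falls monotonically (1.00 -> 0.25) while precision rises and then drops
```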
###Code
y_train_perfect_predictions = y_train_5 # pretend we reached perfection
cm = confusion_matrix(y_train_5, y_train_perfect_predictions)
print(cm)
print(cm[0,0],cm[0,1],cm[1,0],cm[1,1])
from sklearn.metrics import precision_score, recall_score
precision_score(y_train_5, y_train_pred)
cm = confusion_matrix(y_train_5, y_train_pred)
cm[1, 1] / (cm[0, 1] + cm[1, 1])
recall_score(y_train_5, y_train_pred)
cm[1, 1] / (cm[1, 0] + cm[1, 1])
###Output
_____no_output_____
###Markdown
* the harmonic mean of precision and recall gives the F1 score: $F_1 \equiv 2/(\mathrm{precision}^{-1} + \mathrm{recall}^{-1})$
###Code
from sklearn.metrics import f1_score
f1_score(y_train_5, y_train_pred)
cm[1, 1] / (cm[1, 1] + (cm[1, 0] + cm[0, 1]) / 2)
y_scores = sgd_clf.decision_function([some_digit])
y_scores
threshold = 0
y_some_digit_pred = (y_scores > threshold)
y_some_digit_pred
threshold = 8000
y_some_digit_pred = (y_scores > threshold)
y_some_digit_pred
y_scores = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3,
method="decision_function")
y_scores.shape
from sklearn.metrics import precision_recall_curve
precisions, recalls, thresholds = precision_recall_curve(y_train_5, y_scores)
plt.plot(thresholds,'o')
print(thresholds.shape)
print(precisions.shape)
print(recalls.shape)
def plot_precision_recall_vs_threshold(precisions, recalls, thresholds):
plt.plot(thresholds, precisions[:-1], "b--", label="Precision", linewidth=2)
plt.plot(thresholds, recalls[:-1], "g-", label="Recall", linewidth=2)
plt.legend(loc="center right", fontsize=16) # Not shown in the book
plt.xlabel("Threshold", fontsize=16) # Not shown
plt.grid(True) # Not shown
plt.axis([-50000, 50000, 0, 1]) # Not shown
recall_90_precision = recalls[np.argmax(precisions >= 0.90)]
threshold_90_precision = thresholds[np.argmax(precisions >= 0.90)]
plt.figure(figsize=(8, 4)) # Not shown
plot_precision_recall_vs_threshold(precisions, recalls, thresholds)
plt.plot([threshold_90_precision, threshold_90_precision], [0., 0.9], "r:") # Not shown
plt.plot([-50000, threshold_90_precision], [0.9, 0.9], "r:") # Not shown
plt.plot([-50000, threshold_90_precision], [recall_90_precision, recall_90_precision], "r:")# Not shown
plt.plot([threshold_90_precision], [0.9], "ro") # Not shown
plt.plot([threshold_90_precision], [recall_90_precision], "ro") # Not shown
save_fig("precision_recall_vs_threshold_plot") # Not shown
plt.show()
(y_train_pred == (y_scores > 0)).all()
def plot_precision_vs_recall(precisions, recalls):
plt.plot(recalls, precisions, "b-", linewidth=2)
plt.xlabel("Recall", fontsize=16)
plt.ylabel("Precision", fontsize=16)
plt.axis([0, 1, 0, 1])
plt.grid(True)
plt.figure(figsize=(8, 6))
plot_precision_vs_recall(precisions, recalls)
plt.plot([recall_90_precision, recall_90_precision], [0., 0.9], "r:")
plt.plot([0.0, recall_90_precision], [0.9, 0.9], "r:")
plt.plot([recall_90_precision], [0.9], "ro")
save_fig("precision_vs_recall_plot")
plt.show()
threshold_90_precision = thresholds[np.argmax(precisions >= 0.90)]
threshold_90_precision
y_train_pred_90 = (y_scores >= threshold_90_precision)
y_train_pred_90
precision_score(y_train_5, y_train_pred_90)
recall_score(y_train_5, y_train_pred_90)
###Output
_____no_output_____
###Markdown
ROC (receiver operating characteristic) curves plot recall (TPR) vs. FPR. * FPR: the ratio of false positives among all actual negatives, $\equiv FP/(FP+TN)$; in other words, the ratio of negative instances that are incorrectly classified as positive. * When to use which: as a rule of thumb, you should prefer the PR curve whenever the positive class is rare or when you care more about the FPs than the FNs, and the ROC curve otherwise. * For example, looking at the previous ROC curve (and the ROC AUC score), you may think that the classifier is really good, but this is mostly because there are few positives (5s) compared to the negatives (non-5s). In contrast, the PR curve makes it clear that the classifier has room for improvement (the curve could be closer to the top-right corner).
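As a quick, self-contained illustration of why a ROC curve can look flattering on heavily skewed data (all numbers below are toy values invented for the example), compare ROC AUC with average precision, the area under the PR curve, for a mediocre scorer on a roughly 1%-positive dataset:
```
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.RandomState(42)
n = 100_000
y = (rng.rand(n) < 0.01).astype(int)     # ~1% positives
scores = rng.rand(n) + 0.3 * y           # mediocre scorer: positives get a small bump over noise

print("ROC AUC:          ", roc_auc_score(y, scores))
print("Average precision:", average_precision_score(y, scores))
# ROC AUC comes out around 0.75 here, while the average precision is far lower, because the
# PR curve is dragged down by the flood of false positives among the 99% negatives.
```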
###Code
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_train_5, y_scores)
def plot_roc_curve(fpr, tpr, label=None):
plt.plot(fpr, tpr, linewidth=2, label=label)
plt.plot([0, 1], [0, 1], 'k--') # dashed diagonal
plt.axis([0, 1, 0, 1]) # Not shown in the book
plt.xlabel('False Positive Rate (Fall-Out)', fontsize=16) # Not shown
plt.ylabel('True Positive Rate (Recall)', fontsize=16) # Not shown
plt.grid(True) # Not shown
plt.figure(figsize=(8, 6)) # Not shown
plot_roc_curve(fpr, tpr)
fpr_90 = fpr[np.argmax(tpr >= recall_90_precision)] # Not shown
plt.plot([fpr_90, fpr_90], [0., recall_90_precision], "r:") # Not shown
plt.plot([0.0, fpr_90], [recall_90_precision, recall_90_precision], "r:") # Not shown
plt.plot([fpr_90], [recall_90_precision], "ro") # Not shown
save_fig("roc_curve_plot") # Not shown
plt.show()
from sklearn.metrics import roc_auc_score
roc_auc_score(y_train_5, y_scores)
###Output
_____no_output_____
###Markdown
**Note**: we set `n_estimators=100` to be future-proof since this will be the default value in Scikit-Learn 0.22.
###Code
from sklearn.ensemble import RandomForestClassifier
forest_clf = RandomForestClassifier(n_estimators=100, random_state=42)
y_probas_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3,
method="predict_proba")
y_probas_forest.shape
y_scores_forest = y_probas_forest[:, 1] # score = proba of positive class
fpr_forest, tpr_forest, thresholds_forest = roc_curve(y_train_5,y_scores_forest)
recall_for_forest = tpr_forest[np.argmax(fpr_forest >= fpr_90)]
plt.figure(figsize=(8, 6))
plt.plot(fpr, tpr, "b:", linewidth=2, label="SGD")
plot_roc_curve(fpr_forest, tpr_forest, "Random Forest")
plt.plot([fpr_90, fpr_90], [0., recall_90_precision], "r:")
plt.plot([0.0, fpr_90], [recall_90_precision, recall_90_precision], "r:")
plt.plot([fpr_90], [recall_90_precision], "ro")
plt.plot([fpr_90, fpr_90], [0., recall_for_forest], "r:")
plt.plot([fpr_90], [recall_for_forest], "ro")
plt.grid(True)
plt.legend(loc="lower right", fontsize=16)
save_fig("roc_curve_comparison_plot")
plt.show()
roc_auc_score(y_train_5, y_scores_forest)
y_train_pred_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3)
precision_score(y_train_5, y_train_pred_forest)
recall_score(y_train_5, y_train_pred_forest)
###Output
_____no_output_____
###Markdown
Multiclass (multinomial) classification * OvA (one vs. all): one binary classifier per class; run all of them for a prediction and take the class with the highest score. * OvO (one vs. one): one binary classifier per pair of classes, i.e. N(N-1)/2 classifiers for an N-class problem. OvO favors SVMs, since they scale poorly with the training-set size; note, though, that scikit-learn's `SVC` defaults `decision_function_shape` to `'ovr'`.
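As a quick check of the classifier counts on a 1,000-image subset (a sketch mirroring the cells below; `OneVsOneClassifier` is scikit-learn's explicit OvO wrapper), OvO should train 10 × 9 / 2 = 45 binary SVMs:
```
from sklearn.multiclass import OneVsOneClassifier
from sklearn.svm import SVC

ovo_clf = OneVsOneClassifier(SVC(gamma="auto", random_state=42))
ovo_clf.fit(X_train[:1000], y_train[:1000])
len(ovo_clf.estimators_)   # 45 = 10 * 9 / 2 pairwise classifiers (the OvR version further below trains 10)
```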
###Code
from sklearn.svm import SVC
import time
start_time = time.time()
svm_clf = SVC(gamma="auto", random_state=42, decision_function_shape='ovo')
svm_clf.fit(X_train[:1000], y_train[:1000]) # y_train, not y_train_5
print(time.time() - start_time)
svm_clf.predict([some_digit])
some_digit_scores = svm_clf.decision_function([some_digit])
print(some_digit_scores)
print(some_digit_scores.shape)
import itertools
x = [0,1,2,3,4,5,6,7,8,9]
y = np.zeros([10,10])
indices = list(itertools.combinations(x, 2))
for i, index in enumerate(indices):
y[index[0],index[1]] = some_digit_scores[0,i]
plt.imshow(y, cmap='hot', interpolation='nearest')
plt.ylabel('positive')
plt.xlabel('negative')
plt.colorbar()
plt.show()
print(np.argmax(some_digit_scores))
print(some_digit_scores[0,37])
svm_clf.classes_
svm_clf.classes_[5]
###Output
_____no_output_____
###Markdown
* using OvR (the default `decision_function_shape`)
###Code
start_time = time.time()
svm_clf = SVC(gamma="auto", random_state=42, decision_function_shape='ovr')
svm_clf.fit(X_train[:1000], y_train[:1000]) # y_train, not y_train_5
print(time.time() - start_time)
svm_clf.predict([some_digit])
some_digit_scores = svm_clf.decision_function([some_digit])
print(some_digit_scores)
print(some_digit_scores.shape)
from sklearn.multiclass import OneVsRestClassifier
start_time = time.time()
ovr_clf = OneVsRestClassifier(SVC(gamma="auto", random_state=42))
ovr_clf.fit(X_train[:1000], y_train[:1000])
print(time.time() - start_time)
ovr_clf.predict([some_digit])
len(ovr_clf.estimators_)
###Output
_____no_output_____
###Markdown
* using a linear SGD classifier (it takes a really long time to run on the full dataset)
###Code
from sklearn.linear_model import SGDClassifier
start_time = time.time()
sgd_clf = SGDClassifier(max_iter=1000, tol=1e-3, random_state=42)
sgd_clf.fit(X_train, y_train)
print(time.time() - start_time)
sgd_clf.predict([some_digit])
sgd_clf.decision_function([some_digit])
from sklearn.model_selection import cross_val_score
cross_val_score(sgd_clf, X_train, y_train, cv=3, scoring="accuracy")
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train.astype(np.float64))
cross_val_score(sgd_clf, X_train_scaled, y_train, cv=3, scoring="accuracy")
X_train_scaled.shape
print(np.min(X_train_scaled), np.max(X_train_scaled))
###Output
-1.2742078920823614 244.94693302836035
###Markdown
* Question: what scaler did they use here?
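For what it's worth, the cell above used `StandardScaler`, which standardizes each pixel column independently as (x - mean) / std. A minimal sketch (toy single-column example) of why a pixel that is almost always 0 can end up with a very large maximum after scaling:
```
import numpy as np
from sklearn.preprocessing import StandardScaler

# a pixel that is 0 on every image except one, where it is 255
col = np.zeros((60000, 1))
col[0, 0] = 255.0
scaled = StandardScaler().fit_transform(col)
print(scaled.min(), scaled.max())   # the lone 255 becomes a z-score of about 245 (sqrt(59999))
```
This is essentially the ~244.95 maximum printed above: a column that is non-zero in exactly one of the 60,000 training images yields a z-score of sqrt(59999), regardless of the pixel value.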
###Code
plt.imshow(X_train_scaled[0,:].reshape([28,28]))
plt.colorbar()
plt.imshow(15/254.*X_train[0,:].reshape([28,28]))
plt.colorbar()
print(X_train[0,:])
print(X_train_scaled[10,:])
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix
start_time = time.time()
y_train_pred = cross_val_predict(sgd_clf, X_train_scaled, y_train, cv=3)
print(time.time() - start_time)
conf_mx = confusion_matrix(y_train, y_train_pred)
conf_mx
# since sklearn 0.22, you can use sklearn.metrics.plot_confusion_matrix()
def plot_confusion_matrix(matrix):
"""If you prefer color and a colorbar"""
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(111)
cax = ax.matshow(matrix)
fig.colorbar(cax)
plt.matshow(conf_mx, cmap=plt.cm.gray)
save_fig("confusion_matrix_plot", tight_layout=False)
plt.show()
row_sums = conf_mx.sum(axis=1, keepdims=True)
norm_conf_mx = conf_mx / row_sums
np.fill_diagonal(norm_conf_mx, 0)
plt.matshow(norm_conf_mx, cmap=plt.cm.gray)
save_fig("confusion_matrix_errors_plot", tight_layout=False)
plt.show()
cl_a, cl_b = 3, 5
X_aa = X_train[(y_train == cl_a) & (y_train_pred == cl_a)]
X_ab = X_train[(y_train == cl_a) & (y_train_pred == cl_b)]
X_ba = X_train[(y_train == cl_b) & (y_train_pred == cl_a)]
X_bb = X_train[(y_train == cl_b) & (y_train_pred == cl_b)]
plt.figure(figsize=(8,8))
plt.subplot(221); plot_digits(X_aa[:25], images_per_row=5)
plt.subplot(222); plot_digits(X_ab[:25], images_per_row=5)
plt.subplot(223); plot_digits(X_ba[:25], images_per_row=5)
plt.subplot(224); plot_digits(X_bb[:25], images_per_row=5)
save_fig("error_analysis_digits_plot")
plt.show()
###Output
Saving figure error_analysis_digits_plot
###Markdown
Error analysis
* https://medium.com/apprentice-journal/evaluating-multi-class-classifiers-12b2946e755b
* Kappa = (observed accuracy - expected accuracy) / (1 - expected accuracy)
```
        Cats  Dogs
Cats  |  22  |  9  |
Dogs  |   7  | 13  |

Ground truth: Cats (29), Dogs (22)
Machine Learning Classifier: Cats (31), Dogs (20)
Total: (51)
Observed Accuracy: ((22 + 13) / 51) = 0.69
Expected Accuracy: ((29 * 31 / 51) + (22 * 20 / 51)) / 51 = 0.51
  (29 = 22 + 7 from the ground truth and 31 = 22 + 9 from the predictions, for the Cats class; similarly for the Dogs class)
Kappa: (0.69 - 0.51) / (1 - 0.51) = 0.37
```
* Note that this table is laid out slightly differently than the confusion matrix shown above: here the columns are the actual truth, while the rows are the predictions.
* micro-averaging vs. macro-averaging
  * A macro-average computes the metric independently for each class and then averages the per-class values.
  * A micro-average computes the metric from the aggregate contributions of all classes. Micro-averaging is used on unbalanced datasets, as it takes the frequency of each class into consideration.

Multilabel classification
###Code
from sklearn.neighbors import KNeighborsClassifier
y_train_large = (y_train >= 7)
y_train_odd = (y_train % 2 == 1)
y_multilabel = np.c_[y_train_large, y_train_odd]
knn_clf = KNeighborsClassifier()
knn_clf.fit(X_train, y_multilabel)
y_multilabel.shape
knn_clf.predict([some_digit])
# np.c_[np.array([[1,2,3]]), np.array([[4,5,6]]), np.array([[0,0,0]])]
# a = np.array([[1,2,3,4]])
# a = np.array([1,2,3,4])
# print(a.shape)
###Output
(4,)
###Markdown
**Warning**: the following cell may take a very long time (possibly hours depending on your hardware).
###Code
y_train_knn_pred = cross_val_predict(knn_clf, X_train, y_multilabel, cv=3)
f1_score(y_multilabel, y_train_knn_pred, average="macro")
###Output
_____no_output_____
###Markdown
Multioutput classification
###Code
noise = np.random.randint(0, 100, (len(X_train), 784))
X_train_mod = X_train + noise
noise = np.random.randint(0, 100, (len(X_test), 784))
X_test_mod = X_test + noise
y_train_mod = X_train
y_test_mod = X_test
some_index = 0
plt.subplot(121); plot_digit(X_test_mod[some_index])
plt.subplot(122); plot_digit(y_test_mod[some_index])
save_fig("noisy_digit_example_plot")
plt.show()
knn_clf.fit(X_train_mod, y_train_mod)
clean_digit = knn_clf.predict([X_test_mod[some_index]])
plot_digit(clean_digit)
save_fig("cleaned_digit_example_plot")
###Output
Saving figure cleaned_digit_example_plot
###Markdown
Extra material Dummy (i.e. random) classifier
###Code
from sklearn.dummy import DummyClassifier
dmy_clf = DummyClassifier(strategy="prior")
y_probas_dmy = cross_val_predict(dmy_clf, X_train, y_train_5, cv=3, method="predict_proba")
y_scores_dmy = y_probas_dmy[:, 1]
fprr, tprr, thresholdsr = roc_curve(y_train_5, y_scores_dmy)
plot_roc_curve(fprr, tprr)
###Output
_____no_output_____
###Markdown
KNN classifier
###Code
from sklearn.neighbors import KNeighborsClassifier
knn_clf = KNeighborsClassifier(weights='distance', n_neighbors=4)
knn_clf.fit(X_train, y_train)
y_knn_pred = knn_clf.predict(X_test)
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_knn_pred)
from scipy.ndimage.interpolation import shift
def shift_digit(digit_array, dx, dy, new=0):
return shift(digit_array.reshape(28, 28), [dy, dx], cval=new).reshape(784)
plot_digit(shift_digit(some_digit, 5, 1, new=100))
X_train_expanded = [X_train]
y_train_expanded = [y_train]
for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
shifted_images = np.apply_along_axis(shift_digit, axis=1, arr=X_train, dx=dx, dy=dy)
X_train_expanded.append(shifted_images)
y_train_expanded.append(y_train)
X_train_expanded = np.concatenate(X_train_expanded)
y_train_expanded = np.concatenate(y_train_expanded)
X_train_expanded.shape, y_train_expanded.shape
knn_clf.fit(X_train_expanded, y_train_expanded)
y_knn_expanded_pred = knn_clf.predict(X_test)
accuracy_score(y_test, y_knn_expanded_pred)
ambiguous_digit = X_test[2589]
knn_clf.predict_proba([ambiguous_digit])
plot_digit(ambiguous_digit)
###Output
_____no_output_____
###Markdown
Exercise solutions 1. An MNIST Classifier With Over 97% Accuracy **Warning**: the next cell may take hours to run, depending on your hardware.
###Code
from sklearn.model_selection import GridSearchCV
param_grid = [{'weights': ["uniform", "distance"], 'n_neighbors': [3, 4, 5]}]
knn_clf = KNeighborsClassifier()
grid_search = GridSearchCV(knn_clf, param_grid, cv=5, verbose=3)
grid_search.fit(X_train, y_train)
grid_search.best_params_
grid_search.best_score_
from sklearn.metrics import accuracy_score
y_pred = grid_search.predict(X_test)
accuracy_score(y_test, y_pred)
###Output
_____no_output_____
###Markdown
2. Data Augmentation
###Code
from scipy.ndimage.interpolation import shift
def shift_image(image, dx, dy):
image = image.reshape((28, 28))
shifted_image = shift(image, [dy, dx], cval=0, mode="constant")
return shifted_image.reshape([-1])
image = X_train[1000]
shifted_image_down = shift_image(image, 0, 5)
shifted_image_left = shift_image(image, -5, 0)
plt.figure(figsize=(12,3))
plt.subplot(131)
plt.title("Original", fontsize=14)
plt.imshow(image.reshape(28, 28), interpolation="nearest", cmap="Greys")
plt.subplot(132)
plt.title("Shifted down", fontsize=14)
plt.imshow(shifted_image_down.reshape(28, 28), interpolation="nearest", cmap="Greys")
plt.subplot(133)
plt.title("Shifted left", fontsize=14)
plt.imshow(shifted_image_left.reshape(28, 28), interpolation="nearest", cmap="Greys")
plt.show()
X_train_augmented = [image for image in X_train]
y_train_augmented = [label for label in y_train]
for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
for image, label in zip(X_train, y_train):
X_train_augmented.append(shift_image(image, dx, dy))
y_train_augmented.append(label)
X_train_augmented = np.array(X_train_augmented)
y_train_augmented = np.array(y_train_augmented)
shuffle_idx = np.random.permutation(len(X_train_augmented))
X_train_augmented = X_train_augmented[shuffle_idx]
y_train_augmented = y_train_augmented[shuffle_idx]
knn_clf = KNeighborsClassifier(**grid_search.best_params_)
knn_clf.fit(X_train_augmented, y_train_augmented)
y_pred = knn_clf.predict(X_test)
accuracy_score(y_test, y_pred)
###Output
_____no_output_____
###Markdown
By simply augmenting the data, we got a 0.5% accuracy boost. :) 3. Tackle the Titanic dataset The goal is to predict whether or not a passenger survived based on attributes such as their age, sex, passenger class, where they embarked and so on. First, login to [Kaggle](https://www.kaggle.com/) and go to the [Titanic challenge](https://www.kaggle.com/c/titanic) to download `train.csv` and `test.csv`. Save them to the `datasets/titanic` directory. Next, let's load the data:
###Code
import os
TITANIC_PATH = os.path.join("datasets", "titanic")
import pandas as pd
def load_titanic_data(filename, titanic_path=TITANIC_PATH):
csv_path = os.path.join(titanic_path, filename)
return pd.read_csv(csv_path)
train_data = load_titanic_data("train.csv")
test_data = load_titanic_data("test.csv")
###Output
_____no_output_____
###Markdown
The data is already split into a training set and a test set. However, the test data does *not* contain the labels: your goal is to train the best model you can using the training data, then make your predictions on the test data and upload them to Kaggle to see your final score. Let's take a peek at the top few rows of the training set:
###Code
train_data.head()
###Output
_____no_output_____
###Markdown
The attributes have the following meaning:* **Survived**: that's the target, 0 means the passenger did not survive, while 1 means he/she survived.* **Pclass**: passenger class.* **Name**, **Sex**, **Age**: self-explanatory* **SibSp**: how many siblings & spouses of the passenger aboard the Titanic.* **Parch**: how many children & parents of the passenger aboard the Titanic.* **Ticket**: ticket id* **Fare**: price paid (in pounds)* **Cabin**: passenger's cabin number* **Embarked**: where the passenger embarked the Titanic Let's get more info to see how much data is missing:
###Code
train_data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 12 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 PassengerId 891 non-null int64
1 Survived 891 non-null int64
2 Pclass 891 non-null int64
3 Name 891 non-null object
4 Sex 891 non-null object
5 Age 714 non-null float64
6 SibSp 891 non-null int64
7 Parch 891 non-null int64
8 Ticket 891 non-null object
9 Fare 891 non-null float64
10 Cabin 204 non-null object
11 Embarked 889 non-null object
dtypes: float64(2), int64(5), object(5)
memory usage: 83.7+ KB
###Markdown
Okay, the **Age**, **Cabin** and **Embarked** attributes are sometimes null (less than 891 non-null), especially the **Cabin** (77% are null). We will ignore the **Cabin** for now and focus on the rest. The **Age** attribute has about 19% null values, so we will need to decide what to do with them. Replacing null values with the median age seems reasonable. The **Name** and **Ticket** attributes may have some value, but they will be a bit tricky to convert into useful numbers that a model can consume. So for now, we will ignore them. Let's take a look at the numerical attributes:
###Code
train_data.describe()
###Output
_____no_output_____
###Markdown
* Yikes, only 38% **Survived**. :( That's close enough to 40%, so accuracy will be a reasonable metric to evaluate our model.* The mean **Fare** was £32.20, which does not seem so expensive (but it was probably a lot of money back then).* The mean **Age** was less than 30 years old. Let's check that the target is indeed 0 or 1:
###Code
train_data["Survived"].value_counts()
###Output
_____no_output_____
###Markdown
Now let's take a quick look at all the categorical attributes:
###Code
train_data["Pclass"].value_counts()
train_data["Sex"].value_counts()
train_data["Embarked"].value_counts()
###Output
_____no_output_____
###Markdown
The Embarked attribute tells us where the passenger embarked: C=Cherbourg, Q=Queenstown, S=Southampton. **Note**: the code below uses a mix of `Pipeline`, `FeatureUnion` and a custom `DataFrameSelector` to preprocess some columns differently. Since Scikit-Learn 0.20, it is preferable to use a `ColumnTransformer`, like in the previous chapter (a short sketch of that alternative follows the assembled pipeline below). Now let's build our preprocessing pipelines. We will reuse the `DataFrameSelector` we built in the previous chapter to select specific attributes from the `DataFrame`:
###Code
from sklearn.base import BaseEstimator, TransformerMixin
class DataFrameSelector(BaseEstimator, TransformerMixin):
def __init__(self, attribute_names):
self.attribute_names = attribute_names
def fit(self, X, y=None):
return self
def transform(self, X):
return X[self.attribute_names]
###Output
_____no_output_____
###Markdown
Let's build the pipeline for the numerical attributes:
###Code
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
num_pipeline = Pipeline([
("select_numeric", DataFrameSelector(["Age", "SibSp", "Parch", "Fare"])),
("imputer", SimpleImputer(strategy="median")),
])
num_pipeline.fit_transform(train_data)
###Output
_____no_output_____
###Markdown
We will also need an imputer for the string categorical columns (the regular `SimpleImputer` does not work on those):
###Code
# Inspired from stackoverflow.com/questions/25239958
class MostFrequentImputer(BaseEstimator, TransformerMixin):
def fit(self, X, y=None):
self.most_frequent_ = pd.Series([X[c].value_counts().index[0] for c in X],
index=X.columns)
return self
def transform(self, X, y=None):
return X.fillna(self.most_frequent_)
from sklearn.preprocessing import OneHotEncoder
###Output
_____no_output_____
###Markdown
Now we can build the pipeline for the categorical attributes:
###Code
cat_pipeline = Pipeline([
("select_cat", DataFrameSelector(["Pclass", "Sex", "Embarked"])),
("imputer", MostFrequentImputer()),
("cat_encoder", OneHotEncoder(sparse=False)),
])
cat_pipeline.fit_transform(train_data)
###Output
_____no_output_____
###Markdown
Finally, let's join the numerical and categorical pipelines:
###Code
from sklearn.pipeline import FeatureUnion
preprocess_pipeline = FeatureUnion(transformer_list=[
("num_pipeline", num_pipeline),
("cat_pipeline", cat_pipeline),
])
###Output
_____no_output_____
###Markdown
Cool! Now we have a nice preprocessing pipeline that takes the raw data and outputs numerical input features that we can feed to any Machine Learning model we want.
###Code
X_train = preprocess_pipeline.fit_transform(train_data)
X_train
###Output
_____no_output_____
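As a side note on the `ColumnTransformer` alternative mentioned earlier, here is a minimal sketch (the names `preprocess_ct` and `X_train_ct` are just for this example) that applies the same imputation and encoding steps keyed by column name, without the `DataFrameSelector`:
```
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder

preprocess_ct = ColumnTransformer([
    ("num", SimpleImputer(strategy="median"), ["Age", "SibSp", "Parch", "Fare"]),
    ("cat", Pipeline([
        ("imputer", MostFrequentImputer()),           # reuse the custom imputer defined above
        ("cat_encoder", OneHotEncoder(sparse=False)),
    ]), ["Pclass", "Sex", "Embarked"]),
])

X_train_ct = preprocess_ct.fit_transform(train_data)
X_train_ct.shape   # 4 numerical columns + 8 one-hot columns, matching X_train above
```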
###Markdown
Let's not forget to get the labels:
###Code
y_train = train_data["Survived"]
###Output
_____no_output_____
###Markdown
We are now ready to train a classifier. Let's start with an `SVC`:
###Code
from sklearn.svm import SVC
svm_clf = SVC(gamma="auto")
svm_clf.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Great, our model is trained, let's use it to make predictions on the test set:
###Code
X_test = preprocess_pipeline.transform(test_data)
y_pred = svm_clf.predict(X_test)
###Output
_____no_output_____
###Markdown
And now we could just build a CSV file with these predictions (respecting the format expected by Kaggle), then upload it and hope for the best. But wait! We can do better than hope. Why don't we use cross-validation to have an idea of how good our model is?
###Code
from sklearn.model_selection import cross_val_score
svm_scores = cross_val_score(svm_clf, X_train, y_train, cv=10)
svm_scores.mean()
###Output
_____no_output_____
###Markdown
Okay, over 73% accuracy, clearly better than random chance, but it's not a great score. Looking at the [leaderboard](https://www.kaggle.com/c/titanic/leaderboard) for the Titanic competition on Kaggle, you can see that you need to reach above 80% accuracy to be within the top 10% Kagglers. Some reached 100%, but since you can easily find the [list of victims](https://www.encyclopedia-titanica.org/titanic-victims/) of the Titanic, it seems likely that there was little Machine Learning involved in their performance! ;-) So let's try to build a model that reaches 80% accuracy. Let's try a `RandomForestClassifier`:
###Code
from sklearn.ensemble import RandomForestClassifier
forest_clf = RandomForestClassifier(n_estimators=100, random_state=42)
forest_scores = cross_val_score(forest_clf, X_train, y_train, cv=10)
forest_scores.mean()
###Output
_____no_output_____
###Markdown
That's much better! Instead of just looking at the mean accuracy across the 10 cross-validation folds, let's plot all 10 scores for each model, along with a box plot highlighting the lower and upper quartiles, and "whiskers" showing the extent of the scores (thanks to Nevin Yilmaz for suggesting this visualization). Note that the `boxplot()` function detects outliers (called "fliers") and does not include them within the whiskers. Specifically, if the lower quartile is $Q_1$ and the upper quartile is $Q_3$, then the interquartile range $IQR = Q_3 - Q_1$ (this is the box's height), and any score lower than $Q_1 - 1.5 \times IQR$ is a flier, and so is any score greater than $Q_3 + 1.5 \times IQR$.
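As a small sketch of that flier rule, applied to the cross-validation scores just computed (the helper name is just for the example):
```
import numpy as np

def flier_bounds(scores):
    # Tukey's rule used by boxplot(): points outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] are fliers
    q1, q3 = np.percentile(scores, [25, 75])
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

lo, hi = flier_bounds(svm_scores)
print("SVM flier bounds:", lo, hi)
print("SVM fliers:", svm_scores[(svm_scores < lo) | (svm_scores > hi)])
```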
###Code
plt.figure(figsize=(8, 4))
plt.plot([1]*10, svm_scores, ".")
plt.plot([2]*10, forest_scores, ".")
plt.boxplot([svm_scores, forest_scores], labels=("SVM","Random Forest"))
plt.ylabel("Accuracy", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
To improve this result further, you could:* Compare many more models and tune hyperparameters using cross validation and grid search,* Do more feature engineering, for example: * replace **SibSp** and **Parch** with their sum, * try to identify parts of names that correlate well with the **Survived** attribute (e.g. if the name contains "Countess", then survival seems more likely),* try to convert numerical attributes to categorical attributes: for example, different age groups had very different survival rates (see below), so it may help to create an age bucket category and use it instead of the age. Similarly, it may be useful to have a special category for people traveling alone since only 30% of them survived (see below).
###Code
train_data["AgeBucket"] = train_data["Age"] // 15 * 15
train_data[["AgeBucket", "Survived"]].groupby(['AgeBucket']).mean()
train_data["RelativesOnboard"] = train_data["SibSp"] + train_data["Parch"]
train_data[["RelativesOnboard", "Survived"]].groupby(['RelativesOnboard']).mean()
###Output
_____no_output_____
###Markdown
4. Spam classifier First, let's fetch the data:
###Code
import os
import tarfile
import urllib
DOWNLOAD_ROOT = "http://spamassassin.apache.org/old/publiccorpus/"
HAM_URL = DOWNLOAD_ROOT + "20030228_easy_ham.tar.bz2"
SPAM_URL = DOWNLOAD_ROOT + "20030228_spam.tar.bz2"
SPAM_PATH = os.path.join("datasets", "spam")
def fetch_spam_data(spam_url=SPAM_URL, spam_path=SPAM_PATH):
if not os.path.isdir(spam_path):
os.makedirs(spam_path)
for filename, url in (("ham.tar.bz2", HAM_URL), ("spam.tar.bz2", SPAM_URL)):
path = os.path.join(spam_path, filename)
if not os.path.isfile(path):
urllib.request.urlretrieve(url, path)
tar_bz2_file = tarfile.open(path)
tar_bz2_file.extractall(path=SPAM_PATH)
tar_bz2_file.close()
fetch_spam_data()
###Output
_____no_output_____
###Markdown
Next, let's load all the emails:
###Code
HAM_DIR = os.path.join(SPAM_PATH, "easy_ham")
SPAM_DIR = os.path.join(SPAM_PATH, "spam")
ham_filenames = [name for name in sorted(os.listdir(HAM_DIR)) if len(name) > 20]
spam_filenames = [name for name in sorted(os.listdir(SPAM_DIR)) if len(name) > 20]
len(ham_filenames)
len(spam_filenames)
###Output
_____no_output_____
###Markdown
We can use Python's `email` module to parse these emails (this handles headers, encoding, and so on):
###Code
import email
import email.policy
def load_email(is_spam, filename, spam_path=SPAM_PATH):
directory = "spam" if is_spam else "easy_ham"
with open(os.path.join(spam_path, directory, filename), "rb") as f:
return email.parser.BytesParser(policy=email.policy.default).parse(f)
ham_emails = [load_email(is_spam=False, filename=name) for name in ham_filenames]
spam_emails = [load_email(is_spam=True, filename=name) for name in spam_filenames]
###Output
_____no_output_____
###Markdown
Let's look at one example of ham and one example of spam, to get a feel of what the data looks like:
###Code
print(ham_emails[1].get_content().strip())
print(spam_emails[6].get_content().strip())
###Output
Help wanted. We are a 14 year old fortune 500 company, that is
growing at a tremendous rate. We are looking for individuals who
want to work from home.
This is an opportunity to make an excellent income. No experience
is required. We will train you.
So if you are looking to be employed from home with a career that has
vast opportunities, then go:
http://www.basetel.com/wealthnow
We are looking for energetic and self motivated people. If that is you
than click on the link and fill out the form, and one of our
employement specialist will contact you.
To be removed from our link simple go to:
http://www.basetel.com/remove.html
4139vOLW7-758DoDY1425FRhM1-764SMFc8513fCsLl40
###Markdown
Some emails are actually multipart, with images and attachments (which can have their own attachments). Let's look at the various types of structures we have:
###Code
def get_email_structure(email):
if isinstance(email, str):
return email
payload = email.get_payload()
if isinstance(payload, list):
return "multipart({})".format(", ".join([
get_email_structure(sub_email)
for sub_email in payload
]))
else:
return email.get_content_type()
from collections import Counter
def structures_counter(emails):
structures = Counter()
for email in emails:
structure = get_email_structure(email)
structures[structure] += 1
return structures
structures_counter(ham_emails).most_common()
structures_counter(spam_emails).most_common()
###Output
_____no_output_____
###Markdown
It seems that the ham emails are more often plain text, while spam has quite a lot of HTML. Moreover, quite a few ham emails are signed using PGP, while no spam is. In short, it seems that the email structure is useful information to have. Now let's take a look at the email headers:
###Code
for header, value in spam_emails[0].items():
print(header,":",value)
###Output
Return-Path : <[email protected]>
Delivered-To : [email protected]
Received : from localhost (localhost [127.0.0.1]) by phobos.labs.spamassassin.taint.org (Postfix) with ESMTP id 136B943C32 for <zzzz@localhost>; Thu, 22 Aug 2002 08:17:21 -0400 (EDT)
Received : from mail.webnote.net [193.120.211.219] by localhost with POP3 (fetchmail-5.9.0) for zzzz@localhost (single-drop); Thu, 22 Aug 2002 13:17:21 +0100 (IST)
Received : from dd_it7 ([210.97.77.167]) by webnote.net (8.9.3/8.9.3) with ESMTP id NAA04623 for <[email protected]>; Thu, 22 Aug 2002 13:09:41 +0100
From : [email protected]
Received : from r-smtp.korea.com - 203.122.2.197 by dd_it7 with Microsoft SMTPSVC(5.5.1775.675.6); Sat, 24 Aug 2002 09:42:10 +0900
To : [email protected]
Subject : Life Insurance - Why Pay More?
Date : Wed, 21 Aug 2002 20:31:57 -1600
MIME-Version : 1.0
Message-ID : <0103c1042001882DD_IT7@dd_it7>
Content-Type : text/html; charset="iso-8859-1"
Content-Transfer-Encoding : quoted-printable
###Markdown
There's probably a lot of useful information in there, such as the sender's email address ([email protected] looks fishy), but we will just focus on the `Subject` header:
###Code
spam_emails[0]["Subject"]
###Output
_____no_output_____
###Markdown
Okay, before we learn too much about the data, let's not forget to split it into a training set and a test set:
###Code
import numpy as np
from sklearn.model_selection import train_test_split
X = np.array(ham_emails + spam_emails, dtype=object)
y = np.array([0] * len(ham_emails) + [1] * len(spam_emails))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
###Output
_____no_output_____
###Markdown
Okay, let's start writing the preprocessing functions. First, we will need a function to convert HTML to plain text. Arguably the best way to do this would be to use the great [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/) library, but I would like to avoid adding another dependency to this project, so let's hack a quick & dirty solution using regular expressions (at the risk of [un̨ho͞ly radiańcé destro҉ying all enli̍̈́̂̈́ghtenment](https://stackoverflow.com/a/1732454/38626)). The following function first drops the `<head>` section, then converts all `<a>` tags to the word HYPERLINK, then it gets rid of all HTML tags, leaving only the plain text. For readability, it also replaces multiple newlines with single newlines, and finally it unescapes html entities (such as `&gt;` or `&nbsp;`):
###Code
import re
from html import unescape
def html_to_plain_text(html):
text = re.sub('<head.*?>.*?</head>', '', html, flags=re.M | re.S | re.I)
text = re.sub('<a\s.*?>', ' HYPERLINK ', text, flags=re.M | re.S | re.I)
text = re.sub('<.*?>', '', text, flags=re.M | re.S)
text = re.sub(r'(\s*\n)+', '\n', text, flags=re.M | re.S)
return unescape(text)
###Output
_____no_output_____
###Markdown
Let's see if it works. This is HTML spam:
###Code
html_spam_emails = [email for email in X_train[y_train==1]
if get_email_structure(email) == "text/html"]
sample_html_spam = html_spam_emails[7]
print(sample_html_spam.get_content().strip()[:1000], "...")
###Output
<HTML><HEAD><TITLE></TITLE><META http-equiv="Content-Type" content="text/html; charset=windows-1252"><STYLE>A:link {TEX-DECORATION: none}A:active {TEXT-DECORATION: none}A:visited {TEXT-DECORATION: none}A:hover {COLOR: #0033ff; TEXT-DECORATION: underline}</STYLE><META content="MSHTML 6.00.2713.1100" name="GENERATOR"></HEAD>
<BODY text="#000000" vLink="#0033ff" link="#0033ff" bgColor="#CCCC99"><TABLE borderColor="#660000" cellSpacing="0" cellPadding="0" border="0" width="100%"><TR><TD bgColor="#CCCC99" valign="top" colspan="2" height="27">
<font size="6" face="Arial, Helvetica, sans-serif" color="#660000">
<b>OTC</b></font></TD></TR><TR><TD height="2" bgcolor="#6a694f">
<font size="5" face="Times New Roman, Times, serif" color="#FFFFFF">
<b> Newsletter</b></font></TD><TD height="2" bgcolor="#6a694f"><div align="right"><font color="#FFFFFF">
<b>Discover Tomorrow's Winners </b></font></div></TD></TR><TR><TD height="25" colspan="2" bgcolor="#CCCC99"><table width="100%" border="0" ...
###Markdown
And this is the resulting plain text:
###Code
print(html_to_plain_text(sample_html_spam.get_content())[:1000], "...")
###Output
OTC
Newsletter
Discover Tomorrow's Winners
For Immediate Release
Cal-Bay (Stock Symbol: CBYI)
Watch for analyst "Strong Buy Recommendations" and several advisory newsletters picking CBYI. CBYI has filed to be traded on the OTCBB, share prices historically INCREASE when companies get listed on this larger trading exchange. CBYI is trading around 25 cents and should skyrocket to $2.66 - $3.25 a share in the near future.
Put CBYI on your watch list, acquire a position TODAY.
REASONS TO INVEST IN CBYI
A profitable company and is on track to beat ALL earnings estimates!
One of the FASTEST growing distributors in environmental & safety equipment instruments.
Excellent management team, several EXCLUSIVE contracts. IMPRESSIVE client list including the U.S. Air Force, Anheuser-Busch, Chevron Refining and Mitsubishi Heavy Industries, GE-Energy & Environmental Research.
RAPIDLY GROWING INDUSTRY
Industry revenues exceed $900 million, estimates indicate that there could be as much as $25 billi ...
###Markdown
Great! Now let's write a function that takes an email as input and returns its content as plain text, whatever its format is:
###Code
def email_to_text(email):
html = None
for part in email.walk():
ctype = part.get_content_type()
if not ctype in ("text/plain", "text/html"):
continue
try:
content = part.get_content()
except: # in case of encoding issues
content = str(part.get_payload())
if ctype == "text/plain":
return content
else:
html = content
if html:
return html_to_plain_text(html)
print(email_to_text(sample_html_spam)[:100], "...")
###Output
OTC
Newsletter
Discover Tomorrow's Winners
For Immediate Release
Cal-Bay (Stock Symbol: CBYI)
Wat ...
###Markdown
Let's throw in some stemming! For this to work, you need to install the Natural Language Toolkit ([NLTK](http://www.nltk.org/)). It's as simple as running the following command (don't forget to activate your virtualenv first; if you don't have one, you will likely need administrator rights, or use the `--user` option):`$ pip3 install nltk`
###Code
try:
import nltk
stemmer = nltk.PorterStemmer()
for word in ("Computations", "Computation", "Computing", "Computed", "Compute", "Compulsive"):
print(word, "=>", stemmer.stem(word))
except ImportError:
print("Error: stemming requires the NLTK module.")
stemmer = None
###Output
Computations => comput
Computation => comput
Computing => comput
Computed => comput
Compute => comput
Compulsive => compuls
###Markdown
We will also need a way to replace URLs with the word "URL". For this, we could use hard core [regular expressions](https://mathiasbynens.be/demo/url-regex) but we will just use the [urlextract](https://github.com/lipoja/URLExtract) library. You can install it with the following command (don't forget to activate your virtualenv first; if you don't have one, you will likely need administrator rights, or use the `--user` option):`$ pip3 install urlextract`
###Code
# if running this notebook on Colab, we just pip install urlextract
try:
import google.colab
!pip install -q -U urlextract
except ImportError:
pass # not running on Colab
try:
import urlextract # may require an Internet connection to download root domain names
url_extractor = urlextract.URLExtract()
print(url_extractor.find_urls("Will it detect github.com and https://youtu.be/7Pq-S557XQU?t=3m32s"))
except ImportError:
print("Error: replacing URLs requires the urlextract module.")
url_extractor = None
###Output
['github.com', 'https://youtu.be/7Pq-S557XQU?t=3m32s']
###Markdown
We are ready to put all this together into a transformer that we will use to convert emails to word counters. Note that we split sentences into words using Python's `split()` method, which uses whitespaces for word boundaries. This works for many written languages, but not all. For example, Chinese and Japanese scripts generally don't use spaces between words, and Vietnamese often uses spaces even between syllables. It's okay in this exercise, because the dataset is (mostly) in English.
###Code
from sklearn.base import BaseEstimator, TransformerMixin
class EmailToWordCounterTransformer(BaseEstimator, TransformerMixin):
def __init__(self, strip_headers=True, lower_case=True, remove_punctuation=True,
replace_urls=True, replace_numbers=True, stemming=True):
self.strip_headers = strip_headers
self.lower_case = lower_case
self.remove_punctuation = remove_punctuation
self.replace_urls = replace_urls
self.replace_numbers = replace_numbers
self.stemming = stemming
def fit(self, X, y=None):
return self
def transform(self, X, y=None):
X_transformed = []
for email in X:
text = email_to_text(email) or ""
if self.lower_case:
text = text.lower()
if self.replace_urls and url_extractor is not None:
urls = list(set(url_extractor.find_urls(text)))
urls.sort(key=lambda url: len(url), reverse=True)
for url in urls:
text = text.replace(url, " URL ")
if self.replace_numbers:
text = re.sub(r'\d+(?:\.\d*(?:[eE]\d+))?', 'NUMBER', text)
if self.remove_punctuation:
text = re.sub(r'\W+', ' ', text, flags=re.M)
word_counts = Counter(text.split())
if self.stemming and stemmer is not None:
stemmed_word_counts = Counter()
for word, count in word_counts.items():
stemmed_word = stemmer.stem(word)
stemmed_word_counts[stemmed_word] += count
word_counts = stemmed_word_counts
X_transformed.append(word_counts)
return np.array(X_transformed)
###Output
_____no_output_____
###Markdown
Let's try this transformer on a few emails:
###Code
X_few = X_train[:3]
X_few_wordcounts = EmailToWordCounterTransformer().fit_transform(X_few)
X_few_wordcounts
###Output
_____no_output_____
###Markdown
This looks about right! Now we have the word counts, and we need to convert them to vectors. For this, we will build another transformer whose `fit()` method will build the vocabulary (an ordered list of the most common words) and whose `transform()` method will use the vocabulary to convert word counts to vectors. The output is a sparse matrix.
###Code
from scipy.sparse import csr_matrix
class WordCounterToVectorTransformer(BaseEstimator, TransformerMixin):
def __init__(self, vocabulary_size=1000):
self.vocabulary_size = vocabulary_size
def fit(self, X, y=None):
total_count = Counter()
for word_count in X:
for word, count in word_count.items():
total_count[word] += min(count, 10)
most_common = total_count.most_common()[:self.vocabulary_size]
self.most_common_ = most_common
self.vocabulary_ = {word: index + 1 for index, (word, count) in enumerate(most_common)}
return self
def transform(self, X, y=None):
rows = []
cols = []
data = []
for row, word_count in enumerate(X):
for word, count in word_count.items():
rows.append(row)
cols.append(self.vocabulary_.get(word, 0))
data.append(count)
return csr_matrix((data, (rows, cols)), shape=(len(X), self.vocabulary_size + 1))
vocab_transformer = WordCounterToVectorTransformer(vocabulary_size=10)
X_few_vectors = vocab_transformer.fit_transform(X_few_wordcounts)
X_few_vectors
X_few_vectors.toarray()
###Output
_____no_output_____
###Markdown
What does this matrix mean? Well, the 99 in the second row, first column, means that the second email contains 99 words that are not part of the vocabulary. The 11 next to it means that the first word in the vocabulary is present 11 times in this email. The 9 next to it means that the second word is present 9 times, and so on. You can look at the vocabulary to know which words we are talking about. The first word is "the", the second word is "of", etc.
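To make this concrete, here is a small sketch (not part of the original exercise) that maps one row of the matrix back to (word, count) pairs using the fitted vocabulary; `vocab_transformer` and `X_few_vectors` are the objects created above, and column 0 collects the out-of-vocabulary words:

```python
# invert the fitted vocabulary to decode the second email's vector
inv_vocab = {index: word for word, index in vocab_transformer.vocabulary_.items()}
row = X_few_vectors.toarray()[1]
decoded = {inv_vocab.get(i, "<out-of-vocabulary>"): int(count)
           for i, count in enumerate(row) if count > 0}
print(decoded)
```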
###Code
vocab_transformer.vocabulary_
###Output
_____no_output_____
###Markdown
We are now ready to train our first spam classifier! Let's transform the whole dataset:
###Code
from sklearn.pipeline import Pipeline
preprocess_pipeline = Pipeline([
("email_to_wordcount", EmailToWordCounterTransformer()),
("wordcount_to_vector", WordCounterToVectorTransformer()),
])
X_train_transformed = preprocess_pipeline.fit_transform(X_train)
###Output
_____no_output_____
###Markdown
**Note**: to be future-proof, we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22.
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
log_clf = LogisticRegression(solver="lbfgs", max_iter=1000, random_state=42)
score = cross_val_score(log_clf, X_train_transformed, y_train, cv=3, verbose=3)
score.mean()
###Output
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.1s remaining: 0.0s
###Markdown
Over 98.5%, not bad for a first try! :) However, remember that we are using the "easy" dataset. You can try with the harder datasets; the results won't be as amazing. You would have to try multiple models, select the best ones, and fine-tune them using cross-validation, and so on. But you get the picture, so let's stop now and just print out the precision/recall we get on the test set:
###Code
from sklearn.metrics import precision_score, recall_score
X_test_transformed = preprocess_pipeline.transform(X_test)
log_clf = LogisticRegression(solver="lbfgs", max_iter=1000, random_state=42)
log_clf.fit(X_train_transformed, y_train)
y_pred = log_clf.predict(X_test_transformed)
print("Precision: {:.2f}%".format(100 * precision_score(y_test, y_pred)))
print("Recall: {:.2f}%".format(100 * recall_score(y_test, y_pred)))
###Output
_____no_output_____ |
notebooks/Exploration of player data with week by week differencing.ipynb | ###Markdown
We can use difflib.get_close_matches() in order to replace player name strings in the old datasets.
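As a quick illustration of what `get_close_matches` returns (a toy example with made-up names, not the project's data):

```python
import difflib

candidates = ["Tom Brady", "Todd Gurley", "Travis Kelce"]
# returns a list with (at most) the single best fuzzy match above the default cutoff
print(difflib.get_close_matches("T.Brady", candidates, n=1))   # ['Tom Brady']
print(difflib.get_close_matches("J.Smith", candidates, n=1))   # [] - no close match
```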
###Code
temp = copy.copy(df_1_Rec)
temp.Player = [difflib.get_close_matches(df_1_Rec.Player.iloc[x] , df_2_Rec.Player.unique(), 1)[0] for x in range(len(df_1_Rec))]
temp = pd.merge(temp,df_4_Rec, how='outer', on=['Player'])
###Output
_____no_output_____
###Markdown
Based on our calculation above, there is only one issue with the player names between these two sets now. This might have to be fixed manually, or modifying the difflib function might help.
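One low-tech way to handle that remaining mismatch is an explicit override map applied before (or after) the fuzzy matching; the names below are placeholders only, since the offending player isn't shown here:

```python
# hypothetical manual override for names difflib cannot resolve
manual_fixes = {"Old Spelling Jr.": "New Spelling Jr."}
df_1_Rec["Player"] = df_1_Rec["Player"].replace(manual_fixes)
```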
###Code
temp
temp2 = copy.copy(temp.iloc[:, 1:4])
for i in [0,1,2,3,5,6,7,8,9,10]:
temp2 = pd.concat([temp2, pd.to_numeric(temp.iloc[:,18+i].fillna(0), errors='coerce') - pd.to_numeric(temp.iloc[:,4+i].fillna(0), errors='coerce')], axis=1)
temp2.columns = ['Player', 'Team', 'Position', 'Rec', 'Yds', 'Avg', 'Yds/G', 'TDs', '20+', '40+', '1st', '1st%', 'Fumbles']
temp2 = temp2.iloc[:,[0,1,2,3,4,7,12]]
temp2["FFP"] = temp2.Rec+temp2.Yds*0.1+temp2.TDs*6-temp2.Fumbles*2
temp2
temp2.to_csv('WEEK4_DATA/WEEK4_2018_NFL_RECEIVING', index=False)
df = pd.DataFrame(pd.read_csv('WEEK3_DATA/WEEK3_2018_NFL_RECEIVING'))
df.columns
temp = temp.iloc[:,[1,2,3,4,5,9,-1]]
temp.columns = ['Player', 'Team', 'Position', 'Rec', 'Yds', 'TDs', 'Fumbles']
temp['FFP'] = temp.Rec+temp.Yds*0.1+temp.TDs*6-temp.Fumbles*2
temp.to_csv('WEEK1_DATA/WEEK1_2018_NFL_RECEIVING', index=False)
###Output
_____no_output_____ |
FashionMnist_usingMLP.ipynb | ###Markdown
shape of data
###Code
print(x_train.shape)
print(x_test.shape)
print(y_train.shape)
print(y_test.shape)
###Output
(10000,)
###Markdown
2) Preprocessing stage: normalize all the images to the [0, 1] range
###Code
x_train=x_train/255.0
x_test=x_test/255.0
def fashion_model():
#method for building the MLP model
model=tf.keras.models.Sequential([ tf.keras.layers.Flatten(),
tf.keras.layers.Dense(256,activation='relu'),
tf.keras.layers.Dense(128,activation='relu'),
tf.keras.layers.Dense(10,activation='softmax')
])
model.compile(loss='sparse_categorical_crossentropy',optimizer='sgd',metrics=['accuracy'])
history=model.fit(x_train,y_train,epochs=1)
model.save('fashion_model.h5')
return model,history.epoch, history.history['accuracy'][-1]
fashion_model()
def load_fashion_and_predict():
    # reload the saved model and predict on the first test image
    model = keras.models.load_model('/content/fashion_model.h5')
    prediction = model.predict(x_test[:1])  # a batch containing a single 28x28 image
    return prediction
load_fashion_and_predict()
from keras.preprocessing.image import load_img
from keras.preprocessing.image import img_to_array
from keras.models import load_model
# load and prepare the image
def load_image(filename):
# load the image
img = load_img(filename, grayscale=True, target_size=(28, 28))
# convert to array
img = img_to_array(img)
# reshape into a single sample with 1 channel
img = img.reshape(1, 28, 28, 1)
# prepare pixel data
img = img.astype('float32')
img = img / 255.0
return img
# load an image and predict the class
def run_example():
# load the image
img = load_image('/content/sample_image.png')
# load model
model = load_model('/content/fashion_model.h5')
# predict the class
result = model.predict(img)
print(np.argmax(result[0]))
# entry point, run the example
run_example()
###Output
_____no_output_____ |
Notes on feature ranking with CART and Random Forests in sklearn.ipynb | ###Markdown
Introduction

In this notebook I'm reviewing heuristics for ranking features in decision trees and random forests, and their implementation in sklearn. In the text I sometimes use variable as a synonym of feature. Feature selection is a step embedded in the process of growing decision trees. An attractive side effect is that once the model is built, we can retrieve the relative importance of each feature in the decision-making process. This not only increases the general interpretability of the model, but can help both in exploratory data analysis and in feature engineering pipelines. A nice example of how tree-based models can be used to improve feature engineering can be found in the winner recap notes of the Criteo Kaggle competition (http://machine-learning-notes.blogspot.nl/2014/12/kaggle-criteo-winner-method-recap.html). Tree-based models, and ensembles of trees, are very powerful and well understood methods for supervised learning. Ensembles such as Random Forests are robust and stable methods for both classification and regression, while decision trees allow for conceptually simple and easily interpretable models. For an overview of tree models in classification and regression in sklearn, there is an excellent talk from Gilles Louppe at PyData Paris 2015 (http://www.slideshare.net/glouppe/slides-46767187). A well regarded paper from the same author that provides a thorough analysis of the mathematics of feature selection in tree ensembles is "Understanding variable importances in forests of randomized trees" (Louppe et al. 2014). With this notebook I'm attempting to fill some gaps and bridge literature review to implementation (sklearn).

Data

I'll be using the Boston dataset (http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_boston.html) for regression and the Iris dataset (http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_iris.html) for classification examples.
###Code
boston = load_boston()
iris = load_iris()
###Output
_____no_output_____
###Markdown
Tree models for regression and classification

Tree-based models are non-parametric methods that recursively partition the feature space into rectangular disjoint subsets, and fit a model in each of them. Assuming the space is partitioned in *M* regions, a *regression tree* would predict a response $Y$ given inputs $X = \{x_1, x_2, .. x_n\}$ as $$f(x_i) = \sum\limits_{m=1}^M c_mI(x_i\in R_m)$$ where $I$ is the indicator function and $c_m$ a constant that models $Y$ in region $R_m$. This model can be represented by a tree that has $R_1..R_m$ as its terminal nodes. In both the regression and classification cases, the algorithm decides the splitting variables and split points, and the tree topology. Essentially, at each split point, the algorithm performs a feature selection step using a heuristic to estimate information gain; this is represented by a numerical value known as the *Gini importance* of a feature (not to be confused with the Gini index!). We can exploit this very same process to rank features (feature engineering) and to explain their importance with regard to the models we want to learn from data. The main difference between the regression and classification case is the criterion employed to evaluate the quality of a split when growing a tree. Regardless of this aspect, in sklearn, the importance of a variable is calculated as the Gini importance or "mean decreased impurity" (http://stackoverflow.com/questions/15810339/how-are-feature-importances-in-randomforestclassifier-determined). See "Classification and regression trees" (Breiman, Friedman, 1984).

Gini importance (MDI) of a variable

Gini importance, or *mean decreased impurity*, is defined as the total decrease in node impurity at split $s$ of node $n$, for some impurity function $i(n)$. That is

$$ \Delta i(s, n) = i(n) - p_L i(n_L) - p_R i(n_R) $$

where $p_L$ and $p_R$ are the proportions of $N_L$ and $N_R$ samples in the left and right splits, over the total number of samples $N_n$ for node $n$. [Under the hood](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tree/_tree.pyx#L3340), sklearn calculates MDI as follows:

```python
cdef class Tree:
    [...]
    cpdef compute_feature_importances(self, normalize=True):
        [...]
        while node != end_node:
            if node.left_child != _TREE_LEAF:
                # ... and node.right_child != _TREE_LEAF:
                left = &nodes[node.left_child]
                right = &nodes[node.right_child]

                importance_data[node.feature] += (
                    node.weighted_n_node_samples * node.impurity -
                    left.weighted_n_node_samples * left.impurity -
                    right.weighted_n_node_samples * right.impurity)
            node += 1

        importances /= nodes[0].weighted_n_node_samples
        [...]
```

This method looks at a node and its left and right children. A list of split variables (*node.feature* objects) and the associated importance score is kept in *importance_data*. The node impurity is weighted by the probability of reaching that node, approximated by the proportion of samples (*weighted_n_node_samples*) reaching that node:

```python
node.weighted_n_node_samples * node.impurity -
left.weighted_n_node_samples * left.impurity -
right.weighted_n_node_samples * right.impurity
```

The *impurity* criteria are defined by implementations of the *Criterion* interface. For classification trees, the impurity criteria are *Entropy* - cross entropy - and *Gini*, the Gini index. For regression trees the impurity criterion is *MSE* (mean squared error).
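To make the Cython snippet above more concrete, here is a rough pure-Python sketch written for this note (an approximation, not sklearn's actual code) that recomputes the normalized MDI from the public `tree_` arrays of a fitted estimator; leaves are marked by a child index of -1. Comparing its output with the `feature_importances_` of the trees fitted below is a reasonable sanity check.

```python
import numpy as np

def mdi_importances(fitted_model, n_features):
    t = fitted_model.tree_
    importances = np.zeros(n_features)
    for node in range(t.node_count):
        left, right = t.children_left[node], t.children_right[node]
        if left == -1:  # leaf node, nothing was split here
            continue
        # weighted impurity decrease contributed by this split
        importances[t.feature[node]] += (
            t.weighted_n_node_samples[node] * t.impurity[node]
            - t.weighted_n_node_samples[left] * t.impurity[left]
            - t.weighted_n_node_samples[right] * t.impurity[right])
    importances /= t.weighted_n_node_samples[0]   # scale by the root's sample weight
    return importances / importances.sum()        # sklearn normalizes importances to sum to 1
```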
Regression trees

As said, *sklearn.tree.DecisionTreeRegressor* uses mean squared error (MSE) to determine the quality of a split; that is, an estimate $\hat{c}_m$ for $c_m$ is calculated so as to minimize the sum of squares $\sum (y_i - \hat{f}(x_i))^2$, with $y_i$ being the target variable and $\hat{f}(x_i)$ its predicted value. The best estimator (proof omitted) is obtained by taking $\hat{f}({x_i}) = avg(y_i | x_i \in R_m)$. By the bias-variance decomposition of the squared error, we have $Var[\hat{f}(x_i)] = E[(\hat{f}(x_i) - E[\hat{f}(x_i)])^2]$. So, for a split node, the mean squared error can be calculated as the sum of the variances of the left and right splits: $MSE = Var_{left} + Var_{right}$
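As a tiny numeric illustration (numbers made up for this note), the MSE criterion simply compares the variance of the responses in a node with the weighted variance of the two candidate children:

```python
import numpy as np

y_node = np.array([1.0, 1.2, 0.9, 5.0, 5.3])   # responses reaching a node
left, right = y_node[:3], y_node[3:]            # a candidate split
weighted_child_var = (len(left) * left.var() + len(right) * right.var()) / len(y_node)
print(y_node.var(), weighted_child_var)         # the split sharply reduces the squared error
```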
###Code
# To keep the diagram tractable, restrict the tree to at most 5 leaf nodes
reg = DecisionTreeRegressor(max_leaf_nodes=5)
reg.fit(boston.data, boston.target)
dot_data = StringIO()
export_graphviz(reg, out_file="/tmp/tree.dot",
feature_names=boston.feature_names)
# pydot.graph_from_dot_data is broken in my env
!dot -Tpng /tmp/tree.dot -o /tmp/tree.png
from IPython.display import Image
Image(filename='/tmp/tree.png')
###Output
_____no_output_____
###Markdown
In the example above, with 5 terminal nodes, we identify three split variables: RM, DIS and LSTAT. For each non-terminal node, the diagram shows the split variable and split value, the MSE and the number of datapoints (samples) contained in the resulting partitioned region. Terminal nodes, on the other hand, report the value of the response we want to predict. We can retrieve the Gini importance of each feature in the fitted model with the *feature\_importances_* property.
###Code
reg = DecisionTreeRegressor(max_leaf_nodes=5)
reg.fit(boston.data, boston.target)
plt = pd.DataFrame(zip(boston.feature_names, reg.feature_importances_),
columns=['feature', 'Gini importance']).plot(kind='bar', x='feature')
_ = plt.set_title('Regression tree on the Boston dataset (5 leaf nodes)')
###Output
_____no_output_____
###Markdown
Things become a bit more interesting when we grow larger trees.
###Code
reg = DecisionTreeRegressor()
reg.fit(boston.data, boston.target)
plt = pd.DataFrame(zip(boston.feature_names, reg.feature_importances_),
columns=['feature', 'Gini importance']).plot(kind='bar', x='feature')
_ = plt.set_title('Regression tree on the Boston dataset')
###Output
_____no_output_____
###Markdown
Classification trees Assume a classification task where the target $k$ classes take values in $0,1,...,K−1$. If node $m$ represents a region $R_m$ with $N_m$ observations, then let $p_{mk} = \frac{1}{N_m} \sum_{x_i \in R_m} I(y_i = k)$ be the proportion of class $k$ observations in node $m$. $sklearn.DecisionTreeClassifier$ allows two impurity criteria for determining splits in such a setting. On the Iris dataset the two criteria agree on which features are important. Experimental results suggest that for tree induction purposes both impurity measures generally lead to similar results (Tan et. al, Introduction to Data Mining). This is not entirely surprising given that both mesaures are particular cases of Tsallis entropy $H_\beta = \frac{1}{\beta-1}(1 - \sum_{k=0}^{K-1} p^{\beta}_{mk})$. For Gini index $\beta=2$, while cross entroy is recovered by taking $\beta \rightarrow 1$.Cross *entropy* might give higher scores to balanced partitions when there are many classes though. See "Technical Note: Some Properties of Splitting Criteria" (Breiman , 1996). On the other hand working in log scale *might* introduce computational overhead. Gini Index Gini Index is defined as $\sum_{k=0}^{K-1} p_{mk} (1 - p_{mk}) = 1 - \sum_{k=0}^{K-1} p_{mk}^2$. It is a measure of statistical dispersion commonly used to measure inequality among values of frequency distributions. An interpretation of the Gini index is popular in social sciences as a way to represent levels of income http://en.wikipedia.org/wiki/Gini_coefficient of a nation's residents. In other words, it is the area under the Lorentz curve (http://en.wikipedia.org/wiki/Lorenz_curve).
###Code
cls = DecisionTreeClassifier(criterion='gini', splitter='best')
est = cls.fit(iris.data, iris.target)
zip(iris.feature_names, cls.feature_importances_)
plt = pd.DataFrame(zip(iris.feature_names, cls.feature_importances_),
columns=['feature', 'Gini importance']).plot(kind='bar', x='feature')
_ = plt.set_title('Classification tree on the Iris dataset (criterion=gini)')
###Output
_____no_output_____
###Markdown
Cross Entropy

Cross-entropy is defined as $- \sum_{k=0}^{K-1} p_{mk} \log(p_{mk})$. Intuitively, it tells us how much information (how much uncertainty about the class label) is contained at each split node.
###Code
cls = DecisionTreeClassifier(criterion='entropy', splitter='best')
est = cls.fit(iris.data, iris.target)
zip(iris.feature_names, cls.feature_importances_)
plt = pd.DataFrame(zip(iris.feature_names, cls.feature_importances_),
columns=['feature', 'Gini importance']).plot(kind='bar', x='feature')
_ = plt.set_title('Classification tree on the Iris dataset (criterion=entropy)')
###Output
_____no_output_____ |
.ipynb_checkpoints/Create EMA Dataset-checkpoint.ipynb | ###Markdown
Once we have the data downloaded we can start to create our dataset. We will begin by using data for the last 12 seasons. This should give us enough data to make good predictions; going any further back, the data might not be as relevant. Because football data is time-series data (i.e. matches occur over the course of a season), we will be using full seasons (or two seasons) for our test data. We can also use a repeated K-fold to check our accuracy. We will first load our data as separate seasons. We are removing any rows containing NaNs, converting Date to a datetime object, and adding a gameId column so that we can process our data more easily.
###Code
# Run this once to concatenate all seasons together
# df1 = pd.read_csv(os.path.join(DATA_PATH, 'season0708.csv'))
# df2 = pd.read_csv(os.path.join(DATA_PATH, 'season0809.csv'))
# df3 = pd.read_csv(os.path.join(DATA_PATH, 'season0910.csv'))
# df4 = pd.read_csv(os.path.join(DATA_PATH, 'season1011.csv'))
# df5 = pd.read_csv(os.path.join(DATA_PATH, 'season1112.csv'))
# df6 = pd.read_csv(os.path.join(DATA_PATH, 'season1213.csv'))
# df7 = pd.read_csv(os.path.join(DATA_PATH, 'season1314.csv'))
# df8 = pd.read_csv(os.path.join(DATA_PATH, 'season1415.csv'))
# df9 = pd.read_csv(os.path.join(DATA_PATH, 'season1516.csv'))
# df10 = pd.read_csv(os.path.join(DATA_PATH, 'season1617.csv'))
# df11 = pd.read_csv(os.path.join(DATA_PATH, 'season1718.csv'))
# df12 = pd.read_csv(os.path.join(DATA_PATH, 'season1819.csv'))
# df13 = pd.read_csv(os.path.join(DATA_PATH, 'season1920.csv'))
# df = pd.concat([df1, df2, df3, df4, df5, df6, df7,
# df8, df9, df10, df11, df12, df13],
# ignore_index=True, sort=False)
# df.to_csv(os.path.join(DATA_PATH, 'all_seasons_joined.csv'))
def create_df(path):
"""
Function to convert date to datetime and add 'Id' column
"""
df = (pd.read_csv(path, dtype={'season': str})
.assign(Date=lambda df: pd.to_datetime(df.Date))
.pipe(lambda df: df.dropna(thresh=len(df) - 2, axis=1)) # Drop cols with NAs
.dropna(axis=0) # Drop rows with NAs
.rename(columns={'Unnamed: 0': 'gameId'})
.sort_values('gameId')
.reset_index(drop=True)
)
return df
df = create_df(os.path.join(DATA_PATH, 'all_seasons_joined.csv'))
df.columns
###Output
_____no_output_____
###Markdown
In order to add exponential moving averages we first need to restructure our dataset so that every row is a separate team, rather than a match.
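Before applying it to the full dataset, here is a toy illustration (not from the project data) of the `ewm(...).mean().shift(1)` pattern used below: each row only ever sees an average of *earlier* matches, which is what prevents data leakage.

```python
import pandas as pd

goals = pd.Series([2, 0, 3, 1, 4])   # one team's goals, match by match
ema = goals.ewm(span=3, min_periods=2).mean().shift(1)
print(pd.concat([goals.rename('goals'), ema.rename('ema_before_match')], axis=1))
```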
###Code
# Define a function which restructures our DataFrame
def create_multiline_df_stats(old_stats_df):
# Create a list of columns we want and their mappings to more interpretable names
home_stats_cols = ['Date', 'season', 'HomeTeam', 'FTHG', 'FTAG', 'HTHG', 'HTAG', 'HS', 'AS', 'HST', 'AST', 'HF', 'AF', 'HC', 'AC', 'HY', 'AY',
'HR', 'AR']
away_stats_cols = ['Date', 'season', 'AwayTeam', 'FTAG', 'FTHG', 'HTAG', 'HTHG', 'AS', 'HS', 'AST', 'HST', 'AF', 'HF', 'AC', 'HC', 'AY', 'HY',
'AR', 'HR']
stats_cols_mapping = ['Date', 'season', 'Team', 'goalsFor', 'goalsAgainst', 'halfTimeGoalsFor', 'halfTimeGoalsAgainst', 'shotsFor',
'shotsAgainst', 'shotsOnTargetFor', 'shotsOnTargetAgainst', 'freesFor', 'freesAgainst',
'cornersFor', 'cornersAgainst', 'yellowsFor', 'yellowsAgainst', 'redsFor', 'redsAgainst']
# Create a dictionary of the old column names to new column names
home_mapping = {old_col: new_col for old_col, new_col in zip(home_stats_cols, stats_cols_mapping)}
away_mapping = {old_col: new_col for old_col, new_col in zip(away_stats_cols, stats_cols_mapping)}
# Put each team onto an individual row
multi_line_stats = (old_stats_df[['gameId'] + home_stats_cols] # Filter for only the home team columns
.rename(columns=home_mapping) # Rename the columns
.assign(homeGame=1) # Assign homeGame=1 so that we can use a general function later
.append((old_stats_df[['gameId'] + away_stats_cols]) # Append the away team columns
.rename(columns=away_mapping) # Rename the away team columns
.assign(homeGame=0), sort=True)
.sort_values(by='gameId') # Sort the values
.reset_index(drop=True))
return multi_line_stats
# Define a function which creates an EMA DataFrame from the stats DataFrame
def create_stats_features_ema(stats, span):
# Create a restructured DataFrames so that we can calculate EMA
multi_line_stats = create_multiline_df_stats(stats)
# Create a copy of the DataFrame
ema_features = multi_line_stats[['Date', 'season', 'gameId', 'Team', 'homeGame']].copy()
# Get the columns that we want to create EMA for
feature_names = multi_line_stats.drop(columns=['Date', 'season', 'gameId', 'Team', 'homeGame']).columns
# Loop over the features
for feature_name in feature_names:
feature_ema = (multi_line_stats.groupby('Team')[feature_name] # Calculate the EMA
.transform(lambda row: row.ewm(span=span, min_periods=2)
.mean()
.shift(1))) # Shift the data down 1 so we don't leak data
ema_features[feature_name] = feature_ema # Add the new feature to the DataFrame
return ema_features
# Add weighted average to each row with a span of 50.
df = create_stats_features_ema(df, 50)
df.tail()
df.columns
pd.DataFrame(df.groupby('Team')
.goalsFor
.mean()
.sort_values(ascending=False)[:10])
###Output
_____no_output_____
###Markdown
We now need to restructure our dataset back to having a match on each row as this will be a much easier format for machine learning.
###Code
def restructure_stats_features(stats_features):
non_features = ['homeGame', 'Team', 'gameId']
stats_features_restructured = (stats_features.query('homeGame == 1')
.rename(columns={col: 'f_' + col + 'Home' for col in stats_features.columns if col not in non_features})
.rename(columns={'Team': 'HomeTeam'})
.pipe(pd.merge, (stats_features.query('homeGame == 0')
.rename(columns={'Team': 'AwayTeam'})
.rename(columns={col: 'f_' + col + 'Away' for col in stats_features.columns
if col not in non_features})), on=['gameId'])
.dropna())
return stats_features_restructured
df = restructure_stats_features(df)
df.tail()
df.shape
df.to_csv(os.path.join(DATA_PATH, 'EMA_data.csv'))
###Output
_____no_output_____ |
scripts/Pytorch-Geom-HRG-GCN.ipynb | ###Markdown
Reference: https://pytorch-geometric.readthedocs.io/en/latest/notes/introduction.htmllearning-methods-on-graphs
###Code
from torch_geometric.data import Data
import json
from collections import Counter
import torch
from tqdm import tqdm
def read_json_to_list(pth):
res = []
with open(pth, "r") as fin:
for line in fin:
res.append(json.loads(line.strip()))
return res
hrgs = read_json_to_list("/usr0/home/amadaan/data/audio/LJSpeech-1.1/TTS/hrg.jsonl")
len(hrgs)
###Output
_____no_output_____
###Markdown
Make vocab, init random embeddings
###Code
def get_tokens(hrgs):
tokens = []
for hrg in hrgs:
for word_rep in hrg["hrg"]:
tokens.extend(get_tokens_from_word_rep(word_rep))
return tokens
def get_tokens_from_word_rep(word_rep):
tokens = []
tokens.append(word_rep["word"])
for daughter in word_rep["daughters"]:
tokens.append(daughter["syll"])
return tokens
def make_vocab(hrgs):
tokens = Counter(list(get_tokens(hrgs)))
tokens = [w[0] for w in tokens.items() if w[1] > 1]
tokens.extend([str(i) for i in range(20)]) # position
tokens.extend(["<W>", "<SYLL>", "<UNK>"])
tok2id = {w:i for i, w in enumerate(tokens)}
id2tok = {i:w for w, i in tok2id.items()}
return tok2id, id2tok
tok2id, id2tok = make_vocab(hrgs)
def get_tok2id(tok):
if tok in tok2id:
return tok2id[tok]
return tok2id["<UNK>"]
n_embed = 64
hrgs[0]
embeddings = torch.rand(len(tok2id), n_embed)
###Output
_____no_output_____
###Markdown
Convert HRGs to PyTorchGeom Objects
###Code
def hrg_to_graph(hrg):
"""
Converts the HRG to graph,
Returns:
Edge index: (num_edges, 2)
Node features: (num_nodes, feature_dim)
"""
words, sylls = [], []
node_idx = {}
edges = []
node_features = []
syll_node_idxs = set()
for i, word_rep in enumerate(hrg["hrg"]):
word_node = f"{word_rep['word']}-{i}"
word_node_id = get_tok2id(word_rep['word'])
node_idx[word_node] = len(node_idx)
node_features.append(embeddings[word_node_id, :])
for j, syll in enumerate(word_rep["daughters"]):
syll_node = f"{syll['syll']}-{i}-{j}"
syll_node_id = get_tok2id(syll['syll'])
node_idx[syll_node] = len(node_idx)
syll_node_idxs.add(node_idx[syll_node])
node_features.append(embeddings[syll_node_id, :])
edges.append([node_idx[word_node], node_idx[syll_node]])
return torch.tensor(edges, dtype=torch.long), torch.stack(node_features).float(),\
torch.tensor(list(syll_node_idxs), dtype=torch.long)
hrg_to_graph(hrgs[0])[2]
py_geom_graphs = []
for hrg in tqdm(hrgs, total=len(hrgs)):
edge_index, node_features, syll_nodes = hrg_to_graph(hrg)
data = Data(x=node_features, edge_index=edge_index.t().contiguous(), syll_nodes=syll_nodes)
py_geom_graphs.append(data)
from torch_geometric.data import DataLoader
loader = DataLoader(py_geom_graphs, batch_size=32, shuffle=True)
py_geom_graphs[1]
batches = []
for batch in loader:
batches.append(batch)
batches[0].num_graphs
type(batches[0])
from torch_geometric.data.batch import Batch
Batch.from_data_list(batches[0].to_data_list())
batches[0]
x = 0
for g in batches[0].to_data_list():
x += g.x.shape[0]
print(x)
###Output
_____no_output_____
###Markdown
Sample GCN
###Code
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
class Net(torch.nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = GCNConv(n_embed, 16)
self.conv2 = GCNConv(16, 200)
def forward(self, data):
x, edge_index = data.x, data.edge_index
x = self.conv1(x, edge_index)
x = F.relu(x)
x = F.dropout(x, training=self.training)
x = self.conv2(x, edge_index)
return F.log_softmax(x, dim=1)
conv1 = GCNConv(n_embed, 16).cuda()
x, edge_index = batches[0].x, batches[0].edge_index
type(batch.x)
edge_index.shape
x.shape
x = conv1(x.cuda(), edge_index.cuda())
x.shape
batches[0].to_data_list()[0].syll_nodes.shape
batches[0].to_data_list()
x.shape
batch_graphs = batches[0].to_data_list()
batch_graphs[0].x.shape[0]
batch_graphs[0].syll_nodes.shape
offset = 0
x[offset + batch_graphs[0].syll_nodes]
offset += batch_graphs[0].x.shape[0]
x[offset + batch_graphs[1].syll_nodes].shape
??torch_geometric.data.batch.Batch
def break_into_utterances(x, batch):
offset = 0
res = []
batch_graphs = batch.to_data_list()
for graph in batch_graphs:
res.append(x[offset + graph.syll_nodes])
offset += graph.x.shape[0]
return res
res = break_into_utterances(x, batches[0])
res
###Output
_____no_output_____ |
files/spring2020/12-intro-modeling-2/01-matrix-regression-gradient-decent-python.ipynb | ###Markdown
[AnalyticsDojo](http://rpi.analyticsdojo.com)

Linear Regression - rpi.analyticsdojo.com

Adapted from Hands-On Machine Learning with Scikit-Learn and TensorFlow, **Chapter 4 – Training Linear Models**. [You can access the book here.](http://proquestcombo.safaribooksonline.com.libproxy.rpi.edu/book/programming/9781491962282.) The original material has been released under the Apache License, Version 2.0, January 2004 (http://www.apache.org/licenses/) in [this repository](https://github.com/ageron/handson-ml).

Setup

First, let's make sure this notebook works well in both Python 2 and 3, import a few common modules, ensure Matplotlib plots figures inline, and prepare a function to save the figures:
###Code
# To support both python 2 and python 3
#from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Let's generate some random data.
import numpy as np
X = 2 * np.random.rand(100, 1)
y = 4 + 3 * X + np.random.randn(100, 1)
plt.plot(X, y, "b.")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.axis([0, 2, 0, 15])
###Output
_____no_output_____
###Markdown
Linear Regression - Linear regression involves fitting the optimal values for $\theta$ that minimize the error.

$$h_\theta(x) = \theta_0 + \theta_1x$$

Below, we are just adding a constant column of ones using [numpy's `np.c_` column concatenation](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.c_.html).
###Code
#This will add a 1 to the X matrix
X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance
X_b
###Output
_____no_output_____
###Markdown
Linear regression using the Normal Equation

Using matrix calculus, we can actually solve for the optimal value of theta. The normal equation below calculates the optimal theta; these are the coefficients we want to understand.

$$ \theta = (X^T X)^{-1}X^T \vec{y} $$

To calculate this, we use Numpy's `dot` for matrix products, `T` to transpose a matrix, and `np.linalg.inv(a)` to take the inverse: `np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)`
###Code
theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)
#This is the intercept and the coefficient.
theta_best
#This just Calcultes the line.
X_new = np.array([[0], [2]])
X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance
y_predict = X_new_b.dot(theta_best)
y_predict
###Output
_____no_output_____
###Markdown
The figure in the book actually corresponds to the following code, with a legend and axis labels:
###Code
plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions")
plt.plot(X, y, "b.")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.legend(loc="upper left", fontsize=14)
plt.axis([0, 2, 0, 15])
# We can also do this much easier with the linear regression model.
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X, y)
lin_reg.intercept_, lin_reg.coef_
lin_reg.predict(X_new)
###Output
_____no_output_____
###Markdown
Linear regression using batch gradient descent

Here `m` is the number of training examples and $\eta$ is the learning rate.

(1) $$Gradient = \frac{2}{m}X^T(X\theta - y)$$

(2) $$\theta = \theta - \eta \, Gradient$$

(3) $$\theta := \theta - \eta\frac{2}{m}X^T(X\theta - y)$$
###Code
eta = 0.1#learning rate
n_iterations = 1000
m = 100 #size of training set
theta = np.random.randn(2,1) #Starting point.
for iteration in range(n_iterations):
gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y)
theta = theta - eta * gradients
print("Ending:", theta)
#
theta
X_new_b.dot(theta)
###Output
_____no_output_____ |
03_knn.ipynb | ###Markdown
Let the feature space $\mathcal{X}$ be the $n$-dimensional real vector space $R^{n}$, with $x_{i},x_{j} \in \mathcal{X}$, $x_{i} = \left( x_{i}^{\left( 1 \right)},x_{i}^{\left( 2 \right) },\cdots,x_{i}^{\left( n \right) } \right)^{T}$, $x_{j} = \left( x_{j}^{\left( 1 \right)},x_{j}^{\left( 2 \right) },\cdots,x_{j}^{\left( n \right) } \right)^{T}$. The $L_{p}$ distance, or Minkowski distance, between $x_{i}$ and $x_{j}$ is\begin{align*} \\ & L_{p} \left( x_{i},x_{j} \right) = \left( \sum_{l=1}^{n} \left| x_{i}^{\left(l\right)} - x_{j}^{\left( l \right)} \right|^{p} \right)^{\dfrac{1}{p}}\end{align*} where $p \geq 1$. When $p=2$ it is called the Euclidean distance, i.e.\begin{align*} \\ & L_{2} \left( x_{i},x_{j} \right) = \left( \sum_{l=1}^{n} \left| x_{i}^{\left(l\right)} - x_{j}^{\left( l \right)} \right|^{2} \right)^{\dfrac{1}{2}}\end{align*} When $p=1$ it is called the Manhattan distance, i.e.\begin{align*} \\ & L_{1} \left( x_{i},x_{j} \right) = \sum_{l=1}^{n} \left| x_{i}^{\left(l\right)} - x_{j}^{\left( l \right)} \right| \end{align*} When $p=\infty$ it is called the Chebyshev distance, the maximum of the coordinate-wise distances, i.e.\begin{align*} \\ & L_{\infty} \left( x_{i},x_{j} \right) = \max_{l} \left| x_{i}^{\left(l\right)} - x_{j}^{\left( l \right)} \right| \end{align*}
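A quick sanity check (added here, not in the original notebook): for $x=(0,0)$ and $y=(3,4)$ the $L_1$, $L_2$ and $L_{\infty}$ distances are 7, 5 and 4, and the helper defined in the next cell should reproduce these values.

```python
# assumes the minkowski_distance helper defined in the next cell
print(minkowski_distance([0, 0], [3, 4], p=1))       # 7.0
print(minkowski_distance([0, 0], [3, 4], p=2))       # 5.0
print(minkowski_distance([0, 0], [3, 4], p=np.inf))  # 4
```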
###Code
def minkowski_distance_p(x, y, p=2):
"""
Parameters:
------------
x: (M, K) array_like
y: (M, K) array_like
p: float, 1<= p <= infinity
------------
    Compute the distances between the M pairs of K-dimensional vectors, without taking the p-th root.
"""
    # convert the array_like inputs into numpy ndarrays
x = np.asarray(x)
y = np.asarray(y)
    ## axis=-1 works along the last axis; 0 or 1 would work along the first or second axis
if p == np.inf:
return np.max(np.abs(x-y), axis=-1)
else:
return np.sum(np.abs(x-y)**p, axis=-1)
def minkowski_distance(x, y, p=2):
if p==np.inf:
return minkowski_distance_p(x, y, np.inf)
else:
return minkowski_distance_p(x, y, p)**(1./p)
###Output
_____no_output_____
###Markdown
Balanced kd-tree construction algorithm:

Input: a dataset $T = \left\{ x_{1}, x_{2}, \cdots, x_{N} \right\}$ in $k$-dimensional space, where $x_{i} = \left( x_{i}^{\left(1\right)}, x_{i}^{\left(2\right)},\cdots,x_{i}^{\left(k\right)} \right)^{T}, i = 1, 2, \cdots, N$;

Output: a kd-tree

1. Start: construct the root node, which corresponds to the hyperrectangular region of the $k$-dimensional space containing $T$. Choose $x^{\left( 1 \right)}$ as the splitting axis and the median of the $x^{\left( 1 \right)}$ coordinates of all instances in $T$ as the split point, dividing the root's hyperrectangle into two subregions. The split is realized by the hyperplane that passes through the split point and is perpendicular to the axis $x^{\left( 1 \right)}$. The root generates left and right children of depth 1: the left child corresponds to the subregion with $x^{\left( 1 \right)}$ smaller than the split point, the right child to the subregion with $x^{\left( 1 \right)}$ larger than the split point. Instances falling on the splitting hyperplane are stored in the node itself.
2. Repeat: for a node of depth $j$, choose $x^{\left( l \right)}$ as the splitting axis, with $l = j \left(\bmod k \right) + 1 $, and use the median of the $x^{\left( l \right)}$ coordinates of the instances in the node's region as the split point, dividing the node's hyperrectangle into two subregions. The split is realized by the hyperplane through the split point perpendicular to the axis $x^{\left( l \right)}$. The node generates left and right children of depth $j+1$: the left child corresponds to the subregion with $x^{\left( l \right)}$ smaller than the split point, the right child to the subregion with $x^{\left( l \right)}$ larger than the split point. Instances falling on the splitting hyperplane are stored in the node itself.
3. Stop when the two subregions contain no more instances.

A few notes on the kd-tree partitioning used here:
1. The split does not have to be that fine-grained; a leaf node can hold several points, so it is enough to cap the leaf size.
2. The splitting axis can be chosen as the dimension with the largest spread: use the $d$-th coordinate whose difference between maximum and minimum is largest, $argmax\{d\} = max\{max\{x^d\}-min\{x^d\}\}$.
3. Use the midpoint of the interval (rather than the median) as the split point. The combination of the last two points, plus a few more details, is called the Sliding-midpoint split in the paper [Analysis of Approximate Nearest Neighbor Searching with Clustered Point Sets](https://arxiv.org/abs/cs/9901013). The implementation below also adopts this approach and borrows heavily from [SciPy's kd-tree implementation](https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.spatial.KDTree.html#scipy.spatial.KDTree).
###Code
"""
Define a hyperrectangle region
"""
class Rectangle(object):
"""Hyperrectangle class.
Represents a Cartesian product of intervals.
"""
def __init__(self, maxes, mins):
"""Construct a hyperrectangle."""
self.maxes = np.maximum(maxes,mins).astype(np.float)
self.mins = np.minimum(maxes,mins).astype(np.float)
self.m, = self.maxes.shape
def __repr__(self):
return "<Rectangle %s>" % list(zip(self.mins, self.maxes))
def volume(self):
"""Total volume."""
return np.prod(self.maxes-self.mins)
def split(self, d, split):
"""
Produce two hyperrectangles by splitting.
In general, if you need to compute maximum and minimum
distances to the children, it can be done more efficiently
by updating the maximum and minimum distances to the parent.
Parameters
----------
d : int
Axis to split hyperrectangle along.
split :
Input.
"""
mid = np.copy(self.maxes)
mid[d] = split
less = Rectangle(self.mins, mid)
mid = np.copy(self.mins)
mid[d] = split
greater = Rectangle(mid, self.maxes)
return less, greater
###Output
_____no_output_____
###Markdown
Nearest-neighbour search on a kd-tree:

Input: a kd-tree; a query point $x$
Output: the nearest neighbour of $x$

1. Find the leaf node that contains the query point $x$: starting from the root, recursively descend the kd-tree. If the coordinate of $x$ in the current splitting dimension is smaller than the split point, move to the left child, otherwise move to the right child, until a leaf node is reached.
2. Take this leaf node as the "current nearest point".
3. Recursively back up, and at each node do the following:
   3.1 If the instance stored at this node is closer to the query point than the current nearest point, take it as the new "current nearest point".
   3.2 The current nearest point lies in the region of one of this node's children. Check whether the region of the other child could contain a closer point. Concretely, check whether that region intersects the hypersphere centred at the query point with radius equal to the distance between the query point and the current nearest point. If it intersects, a closer point may exist in the other child's region, so move to the other child and continue the nearest-neighbour search recursively; if it does not intersect, keep backing up.
4. The search ends when we back up to the root. The final "current nearest point" is the nearest neighbour of $x$.

For the k-nearest-neighbour search, a priority queue $neighbors$ holds the $k$ best results found so far. The search can also be written without recursion by using another priority queue $q$ to store the nodes that still need to be visited.
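The "store the negated distance" trick used for the `neighbors` queue can be shown in isolation (a toy example added for clarity): `heapq` is a min-heap, so pushing `-distance` keeps the *largest* of the current k best distances at the top, ready to be compared with the upper bound and replaced.

```python
import heapq

distances = [5.0, 1.2, 3.4, 0.7, 2.2]
k, best = 3, []
for i, d in enumerate(distances):
    if len(best) < k:
        heapq.heappush(best, (-d, i))
    elif d < -best[0][0]:              # closer than the current worst of the k best
        heapq.heapreplace(best, (-d, i))
print(sorted((-neg_d, i) for neg_d, i in best))   # the 3 smallest distances with their indices
```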
###Code
class KDTree(object):
def __init__(self, data, leafsize=10):
"""
"""
self.data= np.asarray(data)
self.n, self.m = self.data.shape
self.leafsize = leafsize
if self.leafsize < 1:
raise ValueError("leafsize must be at least 1")
self.maxes = np.max(self.data, axis=0)
self.mins = np.min(self.data, axis=0)
self.tree = self.__build(np.arange(self.n), self.maxes, self.mins)
"""
    Node class: the common parent of leaf nodes and inner nodes.
    (Do the comparison operators really need to be overridden? They make nodes orderable so heap ties can be broken.)
"""
class node(object):
        def __lt__(self, other):
return id(self) < id(other)
def __gt__(self, other):
return id(self) > id(other)
def __le__(self, other):
return id(self) <= id(other)
def __ge__(self, other):
return id(self) >= id(other)
def __eq__(self, other):
return id(self) == id(other)
"""
    Leaf node of the kd-tree
"""
class leafnode(node):
def __init__(self, idx):
self.idx = idx
self.children = len(self.idx)
"""
    Inner (split) node of the kd-tree
"""
class innernode(node):
def __init__(self, split_dim, split, less, greater):
"""
            split_dim: the dimension along which this node splits
            split: the split value along that dimension
"""
self.split_dim = split_dim
self.split = split
self.less = less
self.greater = greater
self.children = less.children+greater.children
"""
    Names that start with a double underscore __ are used for data encapsulation: attributes or
    methods named this way are treated as private to the class and cannot be accessed directly
    from outside, which hides the data. The mechanism is not strict, though; it works by automatic
    name mangling: every __name defined in a class is rewritten as "_ClassName__name", so it can
    still be reached as, e.g., ._KDTree__build(). Name mangling also keeps subclasses from
    accidentally redefining or changing the method's implementation.
"""
def __build(self, idx, maxes, mins):
if len(idx) <= self.leafsize:
return KDTree.leafnode(idx)
else:
            # split along dimension d, chosen because it has the largest spread (max - min)
            d = np.argmax(maxes-mins)
            # the d-th coordinate of the points falling in this node
            data = self.data[idx, d]
            # interval endpoints along dimension d
            maxval, minval = maxes[d], mins[d]
            if maxval == minval:
                # all points share the same value along this dimension
                return KDTree.leafnode(idx)
"""
Splitting Methods
sliding midpoint rule;
see Maneewongvatana and Mount 1999"""
split = (maxval + minval) / 2
            # indices of the elements <= split and > split, respectively
less_idx = np.nonzero(data <= split)[0]
greater_idx = np.nonzero(data>split)[0]
            # adjust degenerate splits where all points end up on one side
if len(less_idx) == 0:
split = np.min(data)
less_idx = np.nonzero(data <= split)[0]
greater_idx = np.nonzero(data > split)[0]
if len(greater_idx) == 0:
split = np.max(data)
less_idx = np.nonzero(data < split)[0]
greater_idx = np.nonzero(data >= split)[0]
if len(less_idx) == 0:
# _still_ zero? all must have the same value
if not np.all(data == data[0]):
raise ValueError("Troublesome data array: %s" % data)
split = data[0]
less_idx = np.arange(len(data)-1)
greater_idx = np.array([len(data)-1])
            # recurse on the left and right halves
lessmaxes = np.copy(maxes)
lessmaxes[d] = split
greatermins = np.copy(mins)
greatermins[d] = split
return KDTree.innernode(d, split,
self.__build(idx[less_idx],lessmaxes,mins),
self.__build(idx[greater_idx],maxes,greatermins))
def query(self, x, k=1, p=2, distance_upper_bound=np.inf):
x = np.asarray(x)
        # per-dimension lower bounds on the distance: think of the three cases for where x lies relative to mins and maxes
side_distances = np.maximum(0,np.maximum(x-self.maxes,self.mins-x))
if p != np.inf:
side_distances **= p
min_distance = np.sum(side_distances)
else:
min_distance = np.amax(side_distances)
if p != np.inf and distance_upper_bound != np.inf:
distance_upper_bound = distance_upper_bound**p
q, neighbors = [(min_distance, tuple(side_distances), self.tree)], []
"""
        q: priority queue that drives the search
# entries are:
(minimum distance between the cell and the target,
distances between the nearest side of the cell and the target,
the head node of the cell)
neighbors: priority queue for the nearest neighbors
        priority queue holding the k nearest results found so far; heapq is a min-heap, so we store the negated distance so that the largest distance in the queue (the current upper bound) sits at the top.
#entries are (-distance**p, index)
"""
while q:
min_distance, side_distances, node = heappop(q)
if isinstance(node, KDTree.leafnode):
                # for a leaf node, brute-force over its points one by one
data = self.data[node.idx]
                # broadcast x to a (1, m) row and compute distances to the points in the leaf
ds = minkowski_distance_p(data,x[np.newaxis,:],p)
for i in range(len(ds)):
if ds[i] < distance_upper_bound:
if len(neighbors) == k:
heappop(neighbors)
heappush(neighbors, (-ds[i], node.idx[i]))
                    if len(neighbors) == k:  # update the distance upper bound
distance_upper_bound = -neighbors[0][0]
else:
# we don't push cells that are too far onto the queue at all,
# but since the distance_upper_bound decreases, we might get
# here even if the cell's too far
if min_distance > distance_upper_bound:
# since this is the nearest cell, we're done, bail out
break
# compute minimum distances to the children and push them on
if x[node.split_dim] < node.split:
near, far = node.less, node.greater
else:
near, far = node.greater, node.less
# near child is at the same distance as the current node
heappush(q,(min_distance, side_distances, near))
                # for the far child, replace the distance along the split dimension with the new one and compare against the upper bound
sd = list(side_distances)
if p == np.inf:
min_distance = max(min_distance, abs(node.split-x[node.split_dim]))
elif p == 1:
sd[node.split_dim] = np.abs(node.split-x[node.split_dim])
min_distance = (min_distance - side_distances[node.split_dim]) + sd[node.split_dim]
else:
sd[node.split_dim] = np.abs(node.split-x[node.split_dim])**p
min_distance = (min_distance - side_distances[node.split_dim]) + sd[node.split_dim]
if min_distance <= distance_upper_bound:
heappush(q,(min_distance, tuple(sd), far))
if p == np.inf:
return sorted([(-d,i) for (d,i) in neighbors])
else:
return sorted([((-d)**(1./p),i) for (d,i) in neighbors])
data = [[2,3],[5,4],[9,6],[4,7],[8,1],[7,2]]
kd = KDTree(data)
ans = kd.query([2, 1], k=2)
for item in ans:
print("距离:%f, 索引:%d" %(item[0], item[1]))
np.random.random((2,3))
from time import clock
t0 = clock()
kd2 = KDTree(np.random.random(( 400000, 3))) # 构建包含四十万个3维空间样本点的kd树
ret2 = kd2.query([0.1,0.5,0.8]) # 四十万个样本点中寻找离目标最近的点
t1 = clock()
print ("time: ",t1-t0, "s")
print (ret2)
###Output
time: 0.08968533333330697 s
[(0.23022348089627068, 1)]
|
examples/colab/component_examples/sequence2sequence/T5_question_answering.ipynb | ###Markdown
[Open in Colab](https://colab.research.google.com/github/JohnSnowLabs/nlu/blob/master/examples/colab/component_examples/sequence2sequence/T5_question_answering.ipynb)

`Open book` and `Closed book` question answering with Google's T5

With the latest NLU release and Google's T5 you can answer **general knowledge based questions given no context** and in addition answer **questions on text databases**. These questions can be asked in natural human language and answered in just 1 line with NLU!

What is an `open book question`?

You can imagine an `open book` question as similar to an exam where you are allowed to bring in text documents or cheat sheets that help you answer the questions. Kinda like bringing a history book to a history exam. In `T5's` terms, this means the model is given a `question` and an **additional piece of textual information** or so-called `context`. This enables the `T5` model to answer questions on textual datasets like `medical records`, `news articles`, `wiki-databases`, `stories` and `movie scripts`, `product descriptions`, `legal documents` and many more. You can answer an `open book question` in 1 line of code, leveraging the latest NLU release and Google's T5. All it takes is:

```python
nlu.load('answer_question').predict("""Where did Jebe die?
context: Ghenkis Khan recalled Subtai back to Mongolia soon afterwards, and Jebe died on the road back to Samarkand""")
>>> Output: Samarkand
```

Example for answering medical questions based on medical context:

```python
question ='''
What does increased oxygen concentrations in the patient’s lungs displace?
context: Hyperbaric (high-pressure) medicine uses special oxygen chambers to increase the partial pressure of O 2 around the patient and, when needed, the medical staff. Carbon monoxide poisoning, gas gangrene, and decompression sickness (the ’bends’) are sometimes treated using these devices. Increased O 2 concentration in the lungs helps to displace carbon monoxide from the heme group of hemoglobin. Oxygen gas is poisonous to the anaerobic bacteria that cause gas gangrene, so increasing its partial pressure helps kill them. Decompression sickness occurs in divers who decompress too quickly after a dive, resulting in bubbles of inert gas, mostly nitrogen and helium, forming in their blood. Increasing the pressure of O 2 as soon as possible is part of the treatment.
'''

# Predict on text data with T5
nlu.load('answer_question').predict(question)
>>> Output: carbon monoxide
```

Take a look at this example on a recent news article snippet:

```python
question1 = 'Who is Jack ma?'
question2 = 'Who is founder of Alibaba Group?'
question3 = 'When did Jack Ma re-appear?'
question4 = 'How did Alibaba stocks react?'
question5 = 'Whom did Jack Ma meet?'
question6 = 'Who did Jack Ma hide from?'

# from https://www.bbc.com/news/business-55728338
news_article_snippet = """ context:
Alibaba Group founder Jack Ma has made his first appearance since Chinese regulators cracked down on his business empire.
His absence had fuelled speculation over his whereabouts amid increasing official scrutiny of his businesses.
The billionaire met 100 rural teachers in China via a video meeting on Wednesday, according to local government media.
Alibaba shares surged 5% on Hong Kong's stock exchange on the news.
"""

# join each question with the context; works with a Pandas DF as well!
questions = [
    question1 + news_article_snippet,
    question2 + news_article_snippet,
    question3 + news_article_snippet,
    question4 + news_article_snippet,
    question5 + news_article_snippet,
    question6 + news_article_snippet,]

nlu.load('answer_question').predict(questions)
```

This will output a Pandas DataFrame similar to this:

|Answer|Question|
|-----|---------|
|Alibaba Group founder| Who is Jack ma? |
|Jack Ma |Who is founder of Alibaba Group? |
| Wednesday | When did Jack Ma re-appear? |
| surged 5% | How did Alibaba stocks react? |
| 100 rural teachers | Whom did Jack Ma meet? |
| Chinese regulators |Who did Jack Ma hide from?|

What is a `closed book question`?

A `closed book question` is the exact opposite of an `open book question`. In an exam scenario, you are only allowed to use what you have memorized in your brain and nothing else. In `T5's` terms this means that T5 can only use its stored weights to answer a `question` and is given **no additional context**. `T5` was pre-trained on the [C4 dataset](https://commoncrawl.org/) which contains **petabytes of web crawling data** collected over the last 8 years, including Wikipedia in every language. This gives `T5` the broad knowledge of the internet stored in its weights to answer various `closed book questions`. You can answer a `closed book question` in 1 line of code, leveraging the latest NLU release and Google's T5. You just pass the `question` string to NLU, with no `context:` tag. All it takes is:

```python
nlu.load('en.t5').predict('Who is president of Nigeria?')
>>> Muhammadu Buhari
```

```python
nlu.load('en.t5').predict('What is the most spoken language in India?')
>>> Hindi
```

```python
nlu.load('en.t5').predict('What is the capital of Germany?')
>>> Berlin
```
###Code
import os
! apt-get update -qq > /dev/null
# Install java
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"]
! pip install nlu pyspark==2.4.7 > /dev/null
import nlu
###Output
_____no_output_____
###Markdown
Closed book question answering example
###Code
t5_closed_book = nlu.load('en.t5')
t5_closed_book.predict('What is the capital of Germany?')
t5_closed_book.predict('Who is president of Nigeria?')
t5_closed_book.predict('What is the most spoken language in India?')
###Output
_____no_output_____
###Markdown
Open Book question examples

**Your context must be prefixed with `context:`**
###Code
t5_open_book = nlu.load('answer_question')
t5_open_book.predict("""Where did Jebe die?
context: Ghenkis Khan recalled Subtai back to Mongolia soon afterwards, and Jebe died on the road back to Samarkand""" )
###Output
_____no_output_____
###Markdown
Open Book question example on a Story
###Code
question1 = 'What does Jimmy like to eat for breakfast usually?'
question2 = 'Why was Jimmy suprised?'
story = """ context:
Once upon a time, there was a squirrel named Joey.
Joey loved to go outside and play with his cousin Jimmy.
Joey and Jimmy played silly games together, and were always laughing.
One day, Joey and Jimmy went swimming together 50 at their Aunt Julie’s pond.
Joey woke up early in the morning to eat some food before they left.
He couldn’t find anything to eat except for pie!
Usually, Joey would eat cereal, fruit (a pear), or oatmeal for breakfast.
After he ate, he and Jimmy went to the pond.
On their way there they saw their friend Jack Rabbit.
They dove into the water and swam for several hours.
The sun was out, but the breeze was cold.
Joey and Jimmy got out of the water and started walking home.
Their fur was wet, and the breeze chilled them.
When they got home, they dried off, and Jimmy put on his favorite purple shirt.
Joey put on a blue shirt with red and green dots.
The two squirrels ate some food that Joey’s mom, Jasmine, made and went off to bed.
"""
questions = [
question1+ story,
question2+ story,]
t5_open_book.predict(questions)
###Output
_____no_output_____
###Markdown
Open book question example on news article
###Code
question1 = 'Who is Jack ma?'
question2 = 'Who is founder of Alibaba Group?'
question3 = 'When did Jack Ma re-appear?'
question4 = 'How did Alibaba stocks react?'
question5 = 'Whom did Jack Ma meet?'
question6 = 'Who did Jack Ma hide from?'
# from https://www.bbc.com/news/business-55728338
news_article_snippet = """ context:
Alibaba Group founder Jack Ma has made his first appearance since Chinese regulators cracked down on his business empire.
His absence had fuelled speculation over his whereabouts amid increasing official scrutiny of his businesses.
The billionaire met 100 rural teachers in China via a video meeting on Wednesday, according to local government media.
Alibaba shares surged 5% on Hong Kong's stock exchange on the news.
"""
questions = [
question1+ news_article_snippet,
question2+ news_article_snippet,
question3+ news_article_snippet,
question4+ news_article_snippet,
question5+ news_article_snippet,
question6+ news_article_snippet,]
t5_open_book.predict(questions)
# define Data, add additional context tag between sentence
question ='''
What does increased oxygen concentrations in the patient’s lungs displace?
context: Hyperbaric (high-pressure) medicine uses special oxygen chambers to increase the partial pressure of O 2 around the patient and, when needed, the medical staff. Carbon monoxide poisoning, gas gangrene, and decompression sickness (the ’bends’) are sometimes treated using these devices. Increased O 2 concentration in the lungs helps to displace carbon monoxide from the heme group of hemoglobin. Oxygen gas is poisonous to the anaerobic bacteria that cause gas gangrene, so increasing its partial pressure helps kill them. Decompression sickness occurs in divers who decompress too quickly after a dive, resulting in bubbles of inert gas, mostly nitrogen and helium, forming in their blood. Increasing the pressure of O 2 as soon as possible is part of the treatment.
'''
#Predict on text data with T5
t5_open_book.predict(question)
###Output
_____no_output_____ |
dgl_smell.ipynb | ###Markdown
###Code
pip install dgl
pip install ogb
pip install rdkit-pypi
!wget https://raw.githubusercontent.com/napoles-uach/DGL_Smell/main/train.csv
import pandas as pd
df=pd.read_csv('train.csv')
df.head()
import dgl
from dgl.data import DGLDataset
import torch
import torch as th
import os
from ogb.utils.features import (allowable_features, atom_to_feature_vector,
bond_to_feature_vector, atom_feature_vector_to_dict, bond_feature_vector_to_dict)
from rdkit import Chem
import numpy as np
def smiles2graph(smiles_string):
"""
Converts SMILES string to graph Data object
:input: SMILES string (str)
:return: graph object
"""
mol = Chem.MolFromSmiles(smiles_string)
A = Chem.GetAdjacencyMatrix(mol)
A = np.asmatrix(A)
nnodes=len(A)
nz = np.nonzero(A)
edge_list=[]
src=[]
dst=[]
for i in range(nz[0].shape[0]):
src.append(nz[0][i])
dst.append(nz[1][i])
u, v = src, dst
g = dgl.graph((u, v))
bg=dgl.to_bidirected(g)
return bg
def feat_vec(smiles_string):
"""
Returns atom features for a molecule given a smiles string
"""
# atoms
mol = Chem.MolFromSmiles(smiles_string)
atom_features_list = []
for atom in mol.GetAtoms():
atom_features_list.append(atom_to_feature_vector(atom))
x = np.array(atom_features_list, dtype = np.int64)
return x
# this block uses the column SENTENCE to build one hot encoding for each sentence
# All sentences are stored in a list, repetitions are eliminated and ordered alphabetically
# this gives 109 strings whose indexes are used to build one hot encoding vectors on 109 entries.
# Example: fresh,ethereal,fruity gives a vector with ones in entries 42, 47, and 48.
lista_senten=df['SENTENCE'].to_list()
olores=[]
for olor in lista_senten:
olor=olor.split(",")
olores.append(olor)
# puts the sentences into the correct format, as lists of smell labels
from pandas.core.common import flatten
olores=list(flatten(olores))
# creates a single flat list with all the smells
olores = list(dict.fromkeys(olores))
# removes duplicates
olores.sort()
# sorts alphabetically
inoh=[]
for olor in lista_senten:
olor=olor.split(",")
indices=[]
for t in olor:
if t in olores:
indices.append(olores.index(t))
inoh.append(indices)
onehot=[]
for i in inoh:
ohv=np.zeros(len(olores))
ohv[i]=1
onehot.append(ohv)
lista_senten[1]
onehot[1]
# This block makes a list of graphs
lista_mols=df['SMILES'].to_list()
j=0
graphs=[]
execptions=[]
for mol in lista_mols:
g_mol=smiles2graph(mol)
try:
g_mol.ndata['feat']=torch.tensor(feat_vec(mol))
except:
execptions.append(j)
    graphs.append(g_mol)  # append the molecule's graph
j+=1
labels=onehot
# Some SMILES are not well processed, so they are dropped
# (pop in reverse index order so earlier removals don't shift the remaining indices)
for i in sorted(execptions, reverse=True):
    graphs.pop(i)
    labels.pop(i)
class SyntheticDataset(DGLDataset):
def __init__(self):
super().__init__(name='synthetic')
def process(self):
#edges = pd.read_csv('./graph_edges.csv')
#properties = pd.read_csv('./graph_properties.csv')
self.graphs = graphs
self.labels = torch.LongTensor(labels)
def __getitem__(self, i):
return self.graphs[i], self.labels[i]
def __len__(self):
return len(self.graphs)
dataset = SyntheticDataset()
graph, label = dataset[0]
print(graph, label)
###Output
Graph(num_nodes=14, num_edges=28,
ndata_schemes={'feat': Scheme(shape=(9,), dtype=torch.int64)}
edata_schemes={}) tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
|
crossValidation.ipynb | ###Markdown
Cross Validation. This section documents how to do cross-validation in Scikit-Learn. Cross validation is our critical model evaluation system. It tries to simulate how a model would perform on new, unseen data by splitting the available data into training and testing samples. To keep things simple we will stick with the basic linear model that we used for Monte Carlo examples in class. Also, the only model fit will be a basic linear regression. Everything that is done here can easily be extended to any of the models in the Scikit-learn family of ML models.
###Code
# Load helpers
# Will try to just load what I need on this
%matplotlib inline
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import cross_validate
from sklearn.model_selection import ShuffleSplit
###Output
_____no_output_____
###Markdown
Linear model data generation. This model is from the class notes, and generates a simple linear model with M predictors. We used it to generate overfitting even in linear model space.
###Code
# Function to generate linear data experiments
def genLinData(N,M,noise):
# y = x_1 + x_2 .. x_M + eps
# X's scaled so the variance of explained part is same order as noise variance (if std(eps) = 1)
sigNoise = np.sqrt(1./M)
X = np.random.normal(size=(N,M),loc=0,scale=sigNoise)
eps = np.random.normal(size=N,loc=0,scale=noise)
y = np.sum(X,axis=1)+eps
return X,y
###Output
_____no_output_____
###Markdown
Overfitting in one run using train_test_split
###Code
# Basic overfitting example
X, y = genLinData(200,50,1.0)
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.25)
# Now run regression
# print score, which is R-squared (fit)
lr = LinearRegression()
lr.fit(X_train, y_train)
print(lr.score(X_train,y_train))
print(lr.score(X_test,y_test))
###Output
0.6490379774856974
0.2253399639542668
###Markdown
Raw Python for the appropriate simulation of many test scores
###Code
nmc = 100
X, y = genLinData(200,50,1.0)
scoreVec = np.zeros(nmc)
for i in range(nmc):
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.25)
# Now run regression
# print score, which is R-squared (fit)
lr = LinearRegression()
lr.fit(X_train, y_train)
scoreVec[i] = lr.score(X_test,y_test)
print(np.mean(scoreVec))
print(np.std(scoreVec))
print(np.mean(scoreVec<0))
###Output
0.44224466961870573
0.1097046822176599
0.0
###Markdown
Automate this by building a function
###Code
# A function to automate MC experiments
def MCtraintest(nmc,X,y,modelObj,testFrac):
trainScore = np.zeros(nmc)
testScore = np.zeros(nmc)
for i in range(nmc):
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=testFrac)
modelObj.fit(X_train,y_train)
trainScore[i] = modelObj.score(X_train,y_train)
testScore[i] = modelObj.score(X_test,y_test)
return trainScore,testScore
nmc = 100
lr = LinearRegression()
trainS, testS = MCtraintest(nmc,X,y,lr,0.25)
print(np.mean(trainS))
print(np.std(trainS))
print(np.mean(testS))
print(np.std(testS))
###Output
0.761162190524635
0.02293850535797592
0.46753101981207146
0.12839692764931585
###Markdown
Scikit-learn functions* Scikit-learn has many built-in functions for cross validation. * Here are a few of them. cross_validate* This general function does many things. * This first example uses it on a data set, and performs an even more basic cross-validation than we have been doing. * This is called k-fold cross-validation.* It splits the data set into k parts. Then trains on k-1 parts, and tests on the remaining 1 part.* This is a very standard cross-validation system* It returns a rich dictionary of results
###Code
# X, y = genLinData(200,50,1.0)
lr = LinearRegression()
CVInfo = cross_validate(lr, X, y, cv=5,return_train_score=True)
print(np.mean(CVInfo['train_score']))
print(np.mean(CVInfo['test_score']))
###Output
_____no_output_____
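###Markdown
(Added sketch, not part of the original notes.) If you want to see the k folds explicitly instead of letting `cross_validate` handle the splitting, `KFold` from the same module yields the train/test index arrays directly:
###Code
from sklearn.model_selection import KFold
kf = KFold(n_splits=5, shuffle=True, random_state=0)
foldScores = []
for train_idx, test_idx in kf.split(X):
    lr = LinearRegression()
    lr.fit(X[train_idx], y[train_idx])
    foldScores.append(lr.score(X[test_idx], y[test_idx]))
print(np.mean(foldScores))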
###Markdown
ShuffleSplit* The ShuffleSplit function can add a randomized train/test split to `cross_validate`* Here is how you do it
###Code
# X, y = genLinData(200,50,1.0)
lr = LinearRegression()
shuffle = ShuffleSplit(n_splits=100, test_size=.25, random_state=0)
CVInfo = cross_validate(lr, X, y, cv=shuffle,return_train_score=True)
print(np.mean(CVInfo['train_score']))
print(np.mean(CVInfo['test_score']))
###Output
_____no_output_____
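###Markdown
(Added sketch.) Since the text above describes the return value as a rich dictionary, it can help to list its keys; with `return_train_score=True` you should see entries such as 'fit_time', 'score_time', 'test_score' and 'train_score':
###Code
print(sorted(CVInfo.keys()))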
###Markdown
cross_val_score* This is a very basic cross validation system* It returns a simple vector of test set (only) scores* Also, uses ShuffleSplit
###Code
# X, y = genLinData(200,50,1.0)
lr = LinearRegression()
shuffle = ShuffleSplit(n_splits=100, test_size=.25, random_state=0)
CVScores = cross_val_score(lr, X, y, cv=shuffle)
print(np.mean(CVScores))
print(CVScores)
###Output
_____no_output_____ |
Lecture 03/6.006.L3.ipynb | ###Markdown
Sorting. Lecture by Srini Devadas at MIT. Video link: [https://www.youtube.com/watch?v=Kg4bqzAqRBM&list=PLUl4u3cNGP61Oq3tWYp6V_F-5jb5L2iHb&index=3](https://www.youtube.com/watch?v=Kg4bqzAqRBM&list=PLUl4u3cNGP61Oq3tWYp6V_F-5jb5L2iHb&index=3)
###Code
import random, time
def generateArray(n: int=10, min_range: int=-100, max_range: int=100) -> list:
'''
Helper function to generate an array of size n
where elements are in the (min-range, max_range) range
'''
array=[random.randint(min_range, max_range) for i in range(n)]
return array
###Output
_____no_output_____
###Markdown
Insertion Sort
```
For i = 1..n                  (the first element is sorted by default)
    insert A[i] into sorted array A[0:i-1]
    pairwise swaps down to the correct position
```
###Code
def insertionSort(array: list, ascending: bool= True, verbose: int = 1) -> None:
'''
This function sorts the array in-place and does not return anything
ascending: if True, the array will be returned in ascending order, else in descending order
verbose: 0 - doesn't show any output.
1 - displays array before and after sorting and the number of swaps
2 - displays every time a swap is made
'''
assert verbose in [0,1,2], "Invalid value for verbose"
if verbose:
print("Before sorting")
print(array)
if ascending:
mult_factor=1
else:
mult_factor=-1
swap_count=0
for key_idx in range(1,len(array)):
for inner_idx in range(key_idx):
key,el=array[key_idx]*mult_factor, array[inner_idx]*mult_factor
if key<el:
array[inner_idx],array[key_idx]=array[key_idx],array[inner_idx]
swap_count+=1
if verbose>1:
print(f"After swap #{swap_count}", array)
if verbose:
print(f"After sorting. Total number of swaps: {swap_count}")
print(array)
###Output
_____no_output_____
###Markdown
**Improvement to insertion sort**- *Observation*: All elements before the current one are already sorted- *Action*: Use binary search instead of pairwise swaps. **Note**: The number of swaps does not decrease since all elements have to be moved to put the key in the correct position **TO-DO** Implement ascending/descending for binary insertion sort
###Code
def rotateElements(arr,start,end) -> None:
assert start<len(arr) and end<len(arr) and start<=end, "Cannot move elements"
for i in range(end, start, -1):
arr[i],arr[i-1]=arr[i-1],arr[i]
return
def binarySearch(arr, val, start, end):
if start==end:
temp=arr[start]
if temp>val:
return start
else:
return start+1
if start>end:
return start
mid=(start+end)//2
    temp=arr[mid]  # fix: use the function argument 'arr', not an undefined global 'array'
if temp>val:
return binarySearch(arr, val,start, mid-1)
elif temp<val:
return binarySearch(arr, val, mid+1, end)
else:
return mid
def improvedInsertionSort(array: list, ascending: bool= True, verbose: int = 1) -> None:
'''
This function sorts the array in-place and does not return anything
ascending: if True, the array will be returned in ascending order, else in descending order
verbose: 0 - doesn't show any output.
1 - displays array before and after sorting and the number of swaps
2 - displays every time a swap is made
'''
assert verbose in [0,1,2], "Invalid value for verbose"
if verbose:
print("Before sorting")
print(array)
swap_count=0
for key_idx in range(1,len(array)):
lo, hi= 0, key_idx-1
key=array[key_idx]
pos=binarySearch(array,key,lo,hi)
if pos!=key_idx:
rotateElements(array, pos, key_idx)
swap_count+=key_idx-pos
if verbose>1:
print(array)
if verbose:
print(f"After sorting. Total number of swaps: {swap_count}")
print(array)
###Output
_____no_output_____
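###Markdown
(Added usage sketch, not part of the original lecture notes.) A quick way to exercise the two implementations above with the helper from the start of the notebook; the exact swap counts will vary from run to run because the input array is random.
###Code
arr1 = generateArray(15)
arr2 = arr1.copy()
insertionSort(arr1, verbose=1)
improvedInsertionSort(arr2, verbose=1)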
###Markdown
Merge Sort ```Break down list into single elements, which by definition are sortedMerge each of these one-element lists into a sorted list```
###Code
def mergeSort(array):
'''
Breaks down the list into single-element lists
'''
if len(array)>1:
mid=len(array)//2
left_arr=array[:mid]
right_arr=array[mid:]
mergeSort(left_arr)
mergeSort(right_arr)
return merge(left_arr, right_arr, array)
def merge(left_arr, right_arr, array):
'''
Takes two sorted lists as input and
merges them in O(|left_arr| + |right_arr|) time
'''
i,j,k=0,0,0
while(i<len(left_arr) and j<len(right_arr)):
if left_arr[i]<right_arr[j]:
array[k]=left_arr[i]
i+=1
else:
array[k]=right_arr[j]
j+=1
k+=1
while i<len(left_arr):
array[k]=left_arr[i]
i+=1
k+=1
while j<len(right_arr):
array[k]=right_arr[j]
j+=1
k+=1
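# Illustrative check (added; not in the original notes) -- mergeSort sorts the list in place:
#   data = generateArray(12)
#   mergeSort(data)
#   print(data)   # now in ascending order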
###Output
_____no_output_____ |
art_tweets.ipynb | ###Markdown
Number of Nodes
###Code
%cypher match (n) return count(*) as num_nodes
%cypher match (n:tweet) return count (*) as num_tweets
%cypher match (n:user) return count (*) as num_users
%cypher match (n:hashtag) return count (*) as num_hashtags
###Output
1 rows affected.
###Markdown
Number of Edges
###Code
%cypher match (n)-[r]->() return count(*) as num_edges
###Output
1 rows affected.
###Markdown
Top Tweets
###Code
top_tweets = %cypher match (n:tweet)-[r]-(m:tweet) return n.tid, n.text, count(r) as deg order by deg desc limit 10
top_tweets.get_dataframe()
###Output
_____no_output_____
###Markdown
Top Hashtags
###Code
top_tags = %cypher match (n:hashtag)-[r]-(m) return n.hashtag, count(r) as deg order by deg desc limit 10
top_tags.get_dataframe()
###Output
_____no_output_____
###Markdown
Top Users
###Code
top_users = %cypher match (n:user)-[r]-(m) return n.uid, n.screen_name, count(r) as deg order by deg desc limit 10
top_users.get_dataframe()
###Output
_____no_output_____
###Markdown
Top Languages
###Code
top_langs = %cypher match (n:tweet) where n.lang is not null return distinct n.lang, count(n.lang) as num_tweets order by num_tweets desc
top_langs.get_dataframe().head(20)
###Output
_____no_output_____
###Markdown
Top Cities
###Code
top_locs = %cypher match (n:tweet) where n.full_name is not null return distinct n.full_name, count(n.full_name) as num_tweets order by num_tweets desc
top_locs.get_dataframe().head(20)
###Output
_____no_output_____
###Markdown
Top Countries
###Code
top_countries = %cypher match (n:tweet) where n.country is not null return distinct n.country, count(n.country) as num_tweets order by num_tweets desc
top_countries.get_dataframe().head(20)
###Output
_____no_output_____ |
2_Neural_network/intro_to_pytorch/Part 2 - Neural Networks in PyTorch .ipynb | ###Markdown
Neural networks with PyTorch. Deep learning networks tend to be massive, with dozens or hundreds of layers; that's where the term "deep" comes from. You can build one of these deep networks using only weight matrices as we did in the previous notebook, but in general it's very cumbersome and difficult to implement. PyTorch has a nice module `nn` that provides an efficient way to build large neural networks.
###Code
# Import necessary packages
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import torch
import helper
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Now we're going to build a larger network that can solve a (formerly) difficult problem, identifying text in an image. Here we'll use the MNIST dataset which consists of greyscale handwritten digits. Each image is 28x28 pixels; you can see a sample below. Our goal is to build a neural network that can take one of these images and predict the digit in the image. First up, we need to get our dataset. This is provided through the `torchvision` package. The code below will download the MNIST dataset, then create training and test datasets for us. Don't worry too much about the details here, you'll learn more about this later.
###Code
### Run this cell
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,)),
])
# Download and load the training data
trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
###Output
_____no_output_____
###Markdown
We have the training data loaded into `trainloader` and we make that an iterator with `iter(trainloader)`. Later, we'll use this to loop through the dataset for training, like
```python
for image, label in trainloader:
    ## do things with images and labels
```
You'll notice I created the `trainloader` with a batch size of 64, and `shuffle=True`. The batch size is the number of images we get in one iteration from the data loader and pass through our network, often called a *batch*. And `shuffle=True` tells it to shuffle the dataset every time we start going through the data loader again. But here I'm just grabbing the first batch so we can check out the data. We can see below that `images` is just a tensor with size `(64, 1, 28, 28)`. So, 64 images per batch, 1 color channel, and 28x28 images.
###Code
dataiter = iter(trainloader)
images, labels = next(dataiter)  # next() instead of the removed .next() method
print(type(images))
print(images.shape)
print(labels.shape)
###Output
<class 'torch.Tensor'>
torch.Size([64, 1, 28, 28])
torch.Size([64])
###Markdown
This is what one of the images looks like.
###Code
plt.imshow(images[1].numpy().squeeze(), cmap='Greys_r');
###Output
_____no_output_____
###Markdown
First, let's try to build a simple network for this dataset using weight matrices and matrix multiplications. Then, we'll see how to do it using PyTorch's `nn` module which provides a much more convenient and powerful method for defining network architectures. The networks you've seen so far are called *fully-connected* or *dense* networks. Each unit in one layer is connected to each unit in the next layer. In fully-connected networks, the input to each layer must be a one-dimensional vector (which can be stacked into a 2D tensor as a batch of multiple examples). However, our images are 28x28 2D tensors, so we need to convert them into 1D vectors. Thinking about sizes, we need to convert the batch of images with shape `(64, 1, 28, 28)` to have a shape of `(64, 784)`; 784 is 28 times 28. This is typically called *flattening*: we flatten the 2D images into 1D vectors. Previously you built a network with one output unit. Here we need 10 output units, one for each digit. We want our network to predict the digit shown in an image, so what we'll do is calculate probabilities that the image is of any one digit or class. This ends up being a discrete probability distribution over the classes (digits) that tells us the most likely class for the image. That means we need 10 output units for the 10 classes (digits). We'll see how to convert the network output into a probability distribution next.
> **Exercise:** Flatten the batch of images `images`. Then build a multi-layer network with 784 input units, 256 hidden units, and 10 output units using random tensors for the weights and biases. For now, use a sigmoid activation for the hidden layer. Leave the output layer without an activation, we'll add one that gives us a probability distribution next.
###Code
## Solution
def activation(x):
return 1/(1+torch.exp(-x))
# Flatten the input images
inputs = images.view(images.shape[0], -1)
# Create parameters
w1 = torch.randn(784, 256)
b1 = torch.randn(256)
w2 = torch.randn(256, 10)
b2 = torch.randn(10)
h = activation(torch.mm(inputs, w1) + b1)
out = torch.mm(h, w2) + b2
###Output
_____no_output_____
###Markdown
Now we have 10 outputs for our network. We want to pass in an image to our network and get out a probability distribution over the classes that tells us the likely class(es) the image belongs to. Something that looks like this: Here we see that the probability for each class is roughly the same. This is representing an untrained network: it hasn't seen any data yet, so it just returns a uniform distribution with equal probabilities for each class. To calculate this probability distribution, we often use the [**softmax** function](https://en.wikipedia.org/wiki/Softmax_function). Mathematically this looks like$$\Large \sigma(x_i) = \cfrac{e^{x_i}}{\sum_k^K{e^{x_k}}}$$What this does is squish each input $x_i$ between 0 and 1 and normalizes the values to give you a proper probability distribution where the probabilities sum up to one.
> **Exercise:** Implement a function `softmax` that performs the softmax calculation and returns probability distributions for each example in the batch. Note that you'll need to pay attention to the shapes when doing this. If you have a tensor `a` with shape `(64, 10)` and a tensor `b` with shape `(64,)`, doing `a/b` will give you an error because PyTorch will try to do the division across the columns (called broadcasting) but you'll get a size mismatch. The way to think about this is for each of the 64 examples, you only want to divide by one value, the sum in the denominator. So you need `b` to have a shape of `(64, 1)`. This way PyTorch will divide the 10 values in each row of `a` by the one value in each row of `b`. Pay attention to how you take the sum as well. You'll need to define the `dim` keyword in `torch.sum`. Setting `dim=0` takes the sum across the rows while `dim=1` takes the sum across the columns.
###Code
## Solution
def softmax(x):
return torch.exp(x)/torch.sum(torch.exp(x), dim=1).view(-1, 1)
probabilities = softmax(out)
# Does it have the right shape? Should be (64, 10)
print(probabilities.shape)
# Does it sum to 1?
print(probabilities.sum(dim=1))
###Output
torch.Size([64, 10])
tensor([1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
1.0000])
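###Markdown
(Added aside, not part of the original exercise.) For large logits the naive exponentials can overflow; a mathematically equivalent, more numerically stable variant subtracts the row-wise maximum before exponentiating. A sketch:
###Code
def softmax_stable(x):
    # shifting each row by its max does not change the result but keeps exp() from overflowing
    x = x - x.max(dim=1, keepdim=True)[0]
    return torch.exp(x)/torch.sum(torch.exp(x), dim=1, keepdim=True)
print(torch.allclose(softmax_stable(out), softmax(out)))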
###Markdown
Building networks with PyTorch. PyTorch provides a module `nn` that makes building networks much simpler. Here I'll show you how to build the same one as above with 784 inputs, 256 hidden units, 10 output units and a softmax output.
###Code
from torch import nn
class Network(nn.Module):
def __init__(self):
super().__init__()
# Inputs to hidden layer linear transformation
self.hidden = nn.Linear(784, 256)
# Output layer, 10 units - one for each digit
self.output = nn.Linear(256, 10)
# Define sigmoid activation and softmax output
self.sigmoid = nn.Sigmoid()
self.softmax = nn.Softmax(dim=1)
def forward(self, x):
# Pass the input tensor through each of our operations
x = self.hidden(x)
x = self.sigmoid(x)
x = self.output(x)
x = self.softmax(x)
return x
###Output
_____no_output_____
###Markdown
Let's go through this bit by bit.
```python
class Network(nn.Module):
```
Here we're inheriting from `nn.Module`. Combined with `super().__init__()` this creates a class that tracks the architecture and provides a lot of useful methods and attributes. It is mandatory to inherit from `nn.Module` when you're creating a class for your network. The name of the class itself can be anything.
```python
self.hidden = nn.Linear(784, 256)
```
This line creates a module for a linear transformation, $x\mathbf{W} + b$, with 784 inputs and 256 outputs and assigns it to `self.hidden`. The module automatically creates the weight and bias tensors which we'll use in the `forward` method. You can access the weight and bias tensors once the network (`net`) is created with `net.hidden.weight` and `net.hidden.bias`.
```python
self.output = nn.Linear(256, 10)
```
Similarly, this creates another linear transformation with 256 inputs and 10 outputs.
```python
self.sigmoid = nn.Sigmoid()
self.softmax = nn.Softmax(dim=1)
```
Here I defined operations for the sigmoid activation and softmax output. Setting `dim=1` in `nn.Softmax(dim=1)` calculates softmax across the columns.
```python
def forward(self, x):
```
PyTorch networks created with `nn.Module` must have a `forward` method defined. It takes in a tensor `x` and passes it through the operations you defined in the `__init__` method.
```python
x = self.hidden(x)
x = self.sigmoid(x)
x = self.output(x)
x = self.softmax(x)
```
Here the input tensor `x` is passed through each operation and reassigned to `x`. We can see that the input tensor goes through the hidden layer, then a sigmoid function, then the output layer, and finally the softmax function. It doesn't matter what you name the variables here, as long as the inputs and outputs of the operations match the network architecture you want to build. The order in which you define things in the `__init__` method doesn't matter, but you'll need to sequence the operations correctly in the `forward` method.
Now we can create a `Network` object.
###Code
# Create the network and look at its text representation
model = Network()
model
###Output
_____no_output_____
###Markdown
You can define the network somewhat more concisely and clearly using the `torch.nn.functional` module. This is the most common way you'll see networks defined as many operations are simple element-wise functions. We normally import this module as `F`, `import torch.nn.functional as F`.
###Code
import torch.nn.functional as F
class Network(nn.Module):
def __init__(self):
super().__init__()
# Inputs to hidden layer linear transformation
self.hidden = nn.Linear(784, 256)
# Output layer, 10 units - one for each digit
self.output = nn.Linear(256, 10)
def forward(self, x):
# Hidden layer with sigmoid activation
x = F.sigmoid(self.hidden(x))
# Output layer with softmax activation
x = F.softmax(self.output(x), dim=1)
return x
###Output
_____no_output_____
###Markdown
Activation functionsSo far we've only been looking at the softmax activation, but in general any function can be used as an activation function. The only requirement is that for a network to approximate a non-linear function, the activation functions must be non-linear. Here are a few more examples of common activation functions: Tanh (hyperbolic tangent), and ReLU (rectified linear unit).In practice, the ReLU function is used almost exclusively as the activation function for hidden layers. Your Turn to Build a Network> **Exercise:** Create a network with 784 input units, a hidden layer with 128 units and a ReLU activation, then a hidden layer with 64 units and a ReLU activation, and finally an output layer with a softmax activation as shown above. You can use a ReLU activation with the `nn.ReLU` module or `F.relu` function.It's good practice to name your layers by their type of network, for instance 'fc' to represent a fully-connected layer. As you code your solution, use `fc1`, `fc2`, and `fc3` as your layer names.
###Code
## Solution
class Network(nn.Module):
def __init__(self):
super().__init__()
# Defining the layers, 128, 64, 10 units each
self.fc1 = nn.Linear(784, 128)
self.fc2 = nn.Linear(128, 64)
# Output layer, 10 units - one for each digit
self.fc3 = nn.Linear(64, 10)
def forward(self, x):
''' Forward pass through the network, returns the output logits '''
x = self.fc1(x)
x = F.relu(x)
x = self.fc2(x)
x = F.relu(x)
x = self.fc3(x)
x = F.softmax(x, dim=1)
return x
model = Network()
model
###Output
_____no_output_____
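###Markdown
(Added sketch.) A quick sanity check on the architecture above is to count its trainable parameters; for the 784-128-64-10 stack this should come out to 784*128+128 + 128*64+64 + 64*10+10 = 109,386.
###Code
n_params = sum(p.numel() for p in model.parameters())
print(n_params)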
###Markdown
Initializing weights and biasesThe weights and such are automatically initialized for you, but it's possible to customize how they are initialized. The weights and biases are tensors attached to the layer you defined, you can get them with `model.fc1.weight` for instance.
###Code
print(model.fc1.weight)
print(model.fc1.bias)
###Output
Parameter containing:
tensor([[ 0.0303, 0.0247, 0.0203, ..., -0.0096, 0.0325, -0.0308],
[ 0.0286, 0.0208, 0.0282, ..., 0.0155, 0.0251, 0.0305],
[-0.0327, 0.0202, -0.0023, ..., -0.0312, -0.0091, -0.0203],
...,
[ 0.0236, 0.0162, -0.0151, ..., 0.0218, 0.0133, 0.0050],
[ 0.0152, -0.0154, 0.0147, ..., 0.0254, 0.0016, -0.0102],
[-0.0142, -0.0009, -0.0167, ..., -0.0344, 0.0329, 0.0084]],
requires_grad=True)
Parameter containing:
tensor([-4.9410e-03, 3.1008e-02, -1.2816e-02, 3.3149e-02, 1.8155e-02,
-2.7854e-02, 1.5113e-02, 1.4183e-02, -3.1788e-02, 2.0937e-02,
-2.5365e-02, 2.2258e-02, 2.1894e-02, 3.0449e-02, 9.6907e-03,
1.2307e-02, -2.3984e-02, 2.2405e-02, -2.3917e-03, 1.7656e-03,
2.7852e-02, -1.0032e-02, 3.4842e-02, 9.7149e-03, -3.0658e-02,
-1.1512e-02, 2.8253e-02, 8.2238e-03, 6.2211e-03, -2.6598e-02,
-2.7131e-02, 1.6534e-02, 5.5935e-03, -2.2016e-02, -1.6198e-03,
1.5000e-02, 2.6743e-03, -3.6234e-03, -3.1462e-02, 1.5911e-02,
1.7786e-02, 5.5933e-03, 2.9720e-02, -1.9199e-02, 2.8142e-02,
-2.6229e-02, 7.6062e-03, -3.1366e-02, 2.4301e-02, 3.1553e-02,
-4.3286e-03, 3.3555e-02, 1.5849e-02, -5.0726e-03, -2.1366e-02,
3.5055e-02, 3.1942e-02, 3.3061e-03, -1.9475e-03, -1.4672e-02,
-1.5711e-02, 3.0720e-02, -3.1663e-03, -9.8440e-03, 5.2348e-03,
1.3381e-02, 3.8120e-03, 4.5848e-03, -3.5209e-03, 4.3512e-03,
1.7837e-02, 2.7730e-02, -2.7186e-02, 2.6336e-02, 1.1688e-02,
-6.3765e-03, -1.6544e-02, -1.3445e-02, -2.4220e-02, -1.9764e-02,
3.1500e-02, 2.8748e-04, 1.0275e-02, 3.2774e-04, -1.1330e-02,
9.7137e-03, -2.1778e-02, 2.5887e-02, 2.8984e-02, -2.4236e-02,
3.2555e-02, -4.7531e-05, 3.4844e-02, 1.9684e-02, -1.7230e-02,
-2.8936e-02, 3.4231e-03, -1.8061e-04, 2.5501e-02, -2.3185e-02,
3.4177e-02, 7.2736e-05, 5.8774e-03, 2.2466e-02, -5.0424e-03,
-3.0295e-02, -8.8438e-03, 2.7634e-02, 1.6712e-02, -2.8233e-03,
2.8857e-02, -3.3587e-03, 1.3444e-02, 1.9077e-02, 2.2042e-02,
2.3956e-03, -3.8368e-03, -3.1524e-02, 3.2758e-02, 3.1700e-02,
1.5248e-02, 1.2044e-02, 9.6497e-03, -1.7745e-03, -2.8037e-02,
5.2726e-03, 1.2630e-02, -6.0728e-03], requires_grad=True)
###Markdown
For custom initialization, we want to modify these tensors in place. These are actually autograd *Variables*, so we need to get back the actual tensors with `model.fc1.weight.data`. Once we have the tensors, we can fill them with zeros (for biases) or random normal values.
###Code
# Set biases to all zeros
model.fc1.bias.data.fill_(0)
# sample from random normal with standard dev = 0.01
model.fc1.weight.data.normal_(std=0.01)
###Output
_____no_output_____
###Markdown
Forward passNow that we have a network, let's see what happens when we pass in an image.
###Code
# Grab some data
dataiter = iter(trainloader)
images, labels = next(dataiter)  # next() instead of the removed .next() method
# Resize images into a 1D vector, new shape is (batch size, color channels, image pixels)
images.resize_(64, 1, 784)
# or images.resize_(images.shape[0], 1, 784) to automatically get batch size
# Forward pass through the network
img_idx = 0
ps = model.forward(images[img_idx,:])
img = images[img_idx]
helper.view_classify(img.view(1, 28, 28), ps)
###Output
_____no_output_____
###Markdown
As you can see above, our network has basically no idea what this digit is. It's because we haven't trained it yet; all the weights are random! Using `nn.Sequential`. PyTorch provides a convenient way to build networks like this where a tensor is passed sequentially through operations, `nn.Sequential` ([documentation](https://pytorch.org/docs/master/nn.html#torch.nn.Sequential)). Using this to build the equivalent network:
###Code
# Hyperparameters for our network
input_size = 784
hidden_sizes = [128, 64]
output_size = 10
# Build a feed-forward network
model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),
nn.ReLU(),
nn.Linear(hidden_sizes[0], hidden_sizes[1]),
nn.ReLU(),
nn.Linear(hidden_sizes[1], output_size),
nn.Softmax(dim=1))
print(model)
# Forward pass through the network and display output
images, labels = next(iter(trainloader))
images.resize_(images.shape[0], 1, 784)
ps = model.forward(images[0,:])
helper.view_classify(images[0].view(1, 28, 28), ps)
###Output
Sequential(
(0): Linear(in_features=784, out_features=128, bias=True)
(1): ReLU()
(2): Linear(in_features=128, out_features=64, bias=True)
(3): ReLU()
(4): Linear(in_features=64, out_features=10, bias=True)
(5): Softmax(dim=1)
)
###Markdown
The operations are available by passing in the appropriate index. For example, if you want to get the first Linear operation and look at the weights, you'd use `model[0]`.
###Code
print(model[0])
model[0].weight
###Output
Linear(in_features=784, out_features=128, bias=True)
###Markdown
You can also pass in an `OrderedDict` to name the individual layers and operations, instead of using incremental integers. Note that dictionary keys must be unique, so _each operation must have a different name_.
###Code
from collections import OrderedDict
model = nn.Sequential(OrderedDict([
('fc1', nn.Linear(input_size, hidden_sizes[0])),
('relu1', nn.ReLU()),
('fc2', nn.Linear(hidden_sizes[0], hidden_sizes[1])),
('relu2', nn.ReLU()),
('output', nn.Linear(hidden_sizes[1], output_size)),
('softmax', nn.Softmax(dim=1))]))
model
###Output
_____no_output_____
###Markdown
Now you can access layers either by integer or the name
###Code
print(model[0])
print(model.fc1)
###Output
Linear(in_features=784, out_features=128, bias=True)
Linear(in_features=784, out_features=128, bias=True)
|
Beginners_Guide_Math_LinAlg/Math/I_05_LU_decomposition_of_A.ipynb | ###Markdown
LU Decomposition+ This notebook is part of lecture 4 *Factorization into LU* in the OCW MIT course 18.06 [1]+ Created by me, Dr Juan H Klopper + Head of Acute Care Surgery + Groote Schuur Hospital + University Cape Town + Email me with your thoughts, comments, suggestions and corrections Linear Algebra OCW MIT18.06 IPython notebook [2] study notes by Dr Juan H Klopper is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.+ [1] OCW MIT 18.06+ [2] Fernando Pérez, Brian E. Granger, IPython: A System for Interactive Scientific Computing, Computing in Science and Engineering, vol. 9, no. 3, pp. 21-29, May/June 2007, doi:10.1109/MCSE.2007.53. URL: http://ipython.org
###Code
from IPython.core.display import HTML, Image
css_file = 'style.css'
HTML(open(css_file, 'r').read())
import numpy as np
from sympy import *
import matplotlib.pyplot as plt
import seaborn as sns
from IPython.display import Image
from warnings import filterwarnings
init_printing(use_latex = 'mathjax')
%matplotlib inline
filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
LU decomposition of a matrix A * We will decompose the matrix A into an upper and a lower triangular matrix, such that multiplying these gives back A$$ A = LU $$ Turning the matrix of coefficients into Upper triangular form * Consider the following matrix of coefficients$$ \begin{bmatrix} 1 & -2 & 1 \\ 3 & 2 & -2 \\ 6 & -1 & -1 \end{bmatrix} $$* Successive elementary row operations follow * Which is nothing other than matrix multiplication by the elementary matrices * An elementary matrix is an identity matrix on which one elementary row operation was performed
###Code
A = Matrix([[1, -2, 1], [3, 2, -2], [6, -1, -1]])
A
eye(3)
###Output
_____no_output_____
###Markdown
* We have to multiply the first **pivot** row (the 1 in row 1, column 1) by -3 to get rid of the 3 in position row 2, column 1 (we call the resulting matrix E21, referring to row 2, column 1)* Then we add the new row 1 to row two. Row one of the identity matrix becomes (-3,0,0) (but we leave it as (1,0,0) in E21) and adding it to row 2 leaves (-3,1,0)
###Code
E21 = Matrix([[1, 0, 0], [-3, 1, 0], [0, 0, 1]])
E21
E21 * A # The resulting matrix after multiplication by E21
###Output
_____no_output_____
###Markdown
* We do the same to get rid of the 6 in row 3, column 1 * Multiplying row 1 (of the identity matrix) by -6 and adding this new row to row 3 (but again leaving row 1 as (1,0,0) in E31)
###Code
E31 = Matrix([[1, 0, 0], [0, 1, 0], [-6, 0, 1]])
E31
E31 * E21 * A # This got rid of the leading 6 in row 3
###Output
_____no_output_____
###Markdown
* Now the 8 in row 2, column 2 is the **pivot** and we need to get rid of the 11 in row 3, column 2* Unfortunately we have an 8 and an 11 to deal with* We will have to do two elementary row operations * -11 times row 2 of the identity matrix (0,-11,0) * Added to 8 times row 3 (0,0,8) ∴ (0,-11,8)
###Code
E32 = Matrix([[1, 0 , 0], [0, 1, 0], [0, -11, 8]])
E32
U = E32 * E31 * E21 * A
U # We call is U for upper triangular
###Output
_____no_output_____
###Markdown
* The matrix is now in upper triangular form$$ \left( { E }_{ 32 } \right) \left( { E }_{ 31 } \right) \left( { E }_{ 21 } \right) A=U $$ Calculating the Lower triangular from * Note, to reverse this process we would have to do the following:$$ { \left( { E }_{ 21 } \right) }^{ -1 }{ \left( { E }_{ 31 } \right) }^{ -1 }{ \left( { E }_{ 32 } \right) }^{ -1 }\left( { E }_{ 32 } \right) \left( { E }_{ 31 } \right) \left( { E }_{ 21 } \right) A=A $$* The inverse of a matrix can be calculated using the sympy method .*inv*() * We can check this with a Boolean request
###Code
E21.inv() * E31.inv() * E32.inv() * E32 * E31 * E21 * A == A # The Boolean double equal signs asks: Is the
# left-hand side equal to the right-hand side?
###Output
_____no_output_____
###Markdown
* Indeed, we will be back with the identity matrix just multiplying the inverse elementary matrices and the elementary matrices
###Code
E21.inv() * E31.inv() * E32.inv() * E32 * E31 * E21
###Output
_____no_output_____
###Markdown
* Multiplying by the inverse elementary matrices on the left must also happen on the right$$ { \left( { E }_{ 21 } \right) }^{ -1 }{ \left( { E }_{ 31 } \right) }^{ -1 }{ \left( { E }_{ 32 } \right) }^{ -1 }\left( { E }_{ 32 } \right) \left( { E }_{ 31 } \right) \left( { E }_{ 21 } \right) A={ \left( { E }_{ 21 } \right) }^{ -1 }{ \left( { E }_{ 31 } \right) }^{ -1 }{ \left( { E }_{ 32 } \right) }^{ -1 }U $$ * The product of these inverse elementary matrices is lower triangular* We call it L$$ A=LU $$
###Code
L = E21.inv() * E31.inv() * E32.inv()
L
A == L * U # Checking this with a Boolean question
A, L * U # They are identical
###Output
_____no_output_____
###Markdown
Doing this in one go using sympy
###Code
L, U, _ = A.LUdecomposition()
L
U # Note the difference from the U above
L * U # Back to A
###Output
_____no_output_____
###Markdown
What's special about L? * This only works when no row interchange happens* It also actually only works when doing the conventional subtracting the scalar multiplication of a row from another row, leaving the positive scalar as opposed to the negatives I use, allowing me to add the two rows (as opposed to subtraction)* Note the 3 (in row 2, column 1) and the 6 (in row 3, column 1)* They are the row multiplications we have to do for E21 and E31* The 11/8 is what we did for E32 (we just did it in two steps so as not to use fractions) Row exchanges * We have to allow row exchanges if the pivot contains a zero * For an example, from a 3×3 identity matrix we could have:
###Code
eye(3)
###Output
_____no_output_____
###Markdown
* Exchanging rows one and two would be:
###Code
Matrix([[0, 1, 0], [1, 0, 0], [0, 0, 1]])
A, Matrix([[0, 1, 0], [1, 0, 0], [0, 0, 1]]) * A # Showing row exchange
###Output
_____no_output_____
###Markdown
* How many permutations of row exchanges are there?$$ {n!} $$ * In a 3×3 matrix there are 3! = 6 permutations * Multiplying any of them will result in one of the 6 * They are inverses of each other * The inverses are the transposes* For 4×4 there are 4! = 24 Example problems Example problem 01 * Perform LU decomposition of:$$ \begin{bmatrix} 1 & 0 & 1 \\ a & a & a \\ b & b & a \end{bmatrix} $$* For which values of *a* and *b* do L and U exist? Solution
###Code
a, b = symbols('a b')
A = Matrix([[1, 0, 1], [a, a, a], [b, b, a]])
A
L,U, _ = A.LUdecomposition()
L, U
###Output
_____no_output_____
###Markdown
* Checking
###Code
L * U == A
###Output
_____no_output_____
###Markdown
* For existence: * *a* &8800; 0* It's easy to see why, since if a equals zero, we will have a zero in a pivot position and we will have to do row exchange, which is not allowed for LU-decomposition Hints and tips
###Code
E21, E21.inv() # To take the inverse of an elementary matrix, simply change the sign of the off-diagonal elements and
# multiply each element by 1 over the determinant
# The determinant is easy to do for these *n* = 3 square matrices, since the top row is (1,0,0)
E31, E31.inv()
E32, E32.inv()
###Output
_____no_output_____ |
Universal Sentence Encoder.ipynb | ###Markdown
Based on https://tfhub.dev/google/universal-sentence-encoder-large/5
###Code
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
tf.get_logger().setLevel('ERROR')
# load the model
embed = hub.load("/Users/nicholasdinicola/Desktop/ASTON/Dissertation /universal-sentence-encoder-large_5")
texts = [
"shares crashed",
"stock tumbled",
"shares popped",
"stock jumped"
]
embeddings = embed(texts)
print(embeddings)
###Output
tf.Tensor(
[[ 0.03329532 -0.02210531 0.03445317 ... -0.03690788 0.01125785
0.03272377]
[ 0.01449671 -0.03980014 0.01547467 ... -0.07442201 0.01560574
-0.03663752]
[ 0.00608954 -0.03756524 0.01019685 ... -0.07720214 0.02829976
0.02770101]
[ 0.05248757 -0.06430706 -0.01321932 ... -0.09114028 -0.02422299
0.04228854]], shape=(4, 512), dtype=float32)
###Markdown
SimilaritySince the values are normalized, the inner product of encodings can be treated as a similarity matrix.
###Code
sim_matrix = np.inner(embeddings, embeddings)
sim_matrix
for i, t in enumerate(texts):
print(t)
most_similar_idx = (-sim_matrix[i]).argsort()[1:2][0]
print(">", texts[most_similar_idx])
print("-"*10)
texts = [
"After the earnings report the stock crashed",
"After the earnings report the tumbled",
"After the earnings report the popped",
"After the earnings report the jumped"
]
embeddings = embed(texts)
sim_matrix = np.inner(embeddings, embeddings)
for i, t in enumerate(texts):
print(t)
most_similar_idx = (-sim_matrix[i]).argsort()[1:2][0]
print(">", texts[most_similar_idx])
print("-"*10)
###Output
After the earnings report the stock crashed
> After the earnings report the tumbled
----------
After the earnings report the tumbled
> After the earnings report the jumped
----------
After the earnings report the popped
> After the earnings report the tumbled
----------
After the earnings report the jumped
> After the earnings report the tumbled
----------
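###Markdown
(Added check, not in the original notebook.) The statement above that the encodings are normalized -- which is what lets the plain inner product act as a cosine-similarity matrix -- can be verified directly; this assumes the eager TensorFlow tensor converts to NumPy implicitly:
###Code
norms = np.linalg.norm(embeddings, axis=1)
print(norms)  # each value should be very close to 1.0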
|
01-intro-pytorch/bow_pytorch.ipynb | ###Markdown
###Code
from google.colab import drive
drive.mount('/gdrive')
%cd "/gdrive/MyDrive/Colab Notebooks/git/nn4nlp-code/01-intro-pytorch/"
%pwd
!ls
###Output
bow.py cbow.py model.py
bow-pytorch.ipynb deep_cbow.py new_file_add_on_colab
|
notebooks/fig5-diet-cron/analysis-caloric-restriction-dbbact.ipynb | ###Markdown
Load the data
###Code
# suppress warning about samples without metadata
ca.set_log_level('ERROR')
dat=ca.read_amplicon('./all.cron.biom','./map.cron.txt',normalize=10000,min_reads=1000)
ca.set_log_level('INFO')
###Output
_____no_output_____
###Markdown
remove reverse read sequences
###Code
badseqs=[]
for cseq in dat.feature_metadata.index.values:
if cseq[:2]=='CC':
badseqs.append(cseq)
print('%d bad sequences out of %d'% (len(badseqs), len(dat.feature_metadata)))
dat=dat.filter_ids(badseqs,negate=True)
datc=dat.cluster_features(10)
###Output
2021-06-06 14:05:43 INFO After filtering, 1980 remain.
###Markdown
Get just the human samples
###Code
hum=datc.filter_samples('type','Hum').cluster_features(10)
hum=hum.sort_samples('BMI category')
###Output
_____no_output_____
###Markdown
let's look at the data
###Code
cu.splot(hum,'diet',barx_fields=['BMI category'])
###Output
creating logger
###Markdown
Identify differentially abundant bacteria The significance level
###Code
alpha=0.01
###Output
_____no_output_____
###Markdown
Only compare lean individuals, and choose one random sample per individual if more than one exists
###Code
tt=hum.filter_samples('BMI category','lean')
# tt=tt.aggregate_by_metadata('human_donor',agg='random')
tt=tt.downsample('human_donor',axis='s',keep=1,random_seed=2018)
###Output
_____no_output_____
###Markdown
Number of samples in each group
###Code
tt.sample_metadata['diet'].value_counts()
###Output
_____no_output_____
###Markdown
Get the bacteria which are different
###Code
dd=tt.diff_abundance('diet','CRON','AMER',random_seed=2018, alpha=alpha)
###Output
2021-06-06 14:05:54 INFO 99 samples with both values
2021-06-06 14:05:54 INFO After filtering, 1332 remain.
2021-06-06 14:05:54 INFO 33 samples with value 1 (['CRON'])
2021-06-06 14:05:55 INFO number of higher in CRON: 141. number of higher in AMER : 28. total 169
###Markdown
What we get
###Code
f=dd.sort_samples('diet').plot(sample_field='diet',gui='jupyter',barx_fields=['diet'])
f.save_figure('./caloric-restriction-diff-bact.pdf')
###Output
_____no_output_____
###Markdown
The rotated heatmaps
###Code
import numpy as np
# scale 100, sort by subject
tt=dd.normalize(100).sort_samples('diet')
# lets flip the order
tt=tt.reorder(np.arange(len(tt.feature_metadata)-1,0,-1), axis='f')
# and rotate
tt=rotate_exp(tt)
tt=flip_data(tt,'s')
tt=flip_data(tt,'f')
f=tt.plot(feature_field=None,gui='jupyter',clim=[0,100])
f.save_figure('./caloric-restriction-diff-bact-rotated.pdf')
###Output
_____no_output_____
###Markdown
have a look to see these are not bmi related (even though we compared only to lean bmi)
###Code
f=hum.filter_ids(dd.feature_metadata.index).sort_samples('diet').plot(sample_field='diet',gui='qt5',barx_fields=['BMI category'])
###Output
_____no_output_____
###Markdown
Plot the enriched terms the dbBact release to use
###Code
# max_id = 3925 # dbbact release 10-20
max_id = 6237 # dbbact release 2021.05
###Output
_____no_output_____
###Markdown
The number of terms to show
###Code
numterms=6
db=ca.database._get_database_class('dbbact')
import matplotlib
matplotlib.rc('ytick', labelsize=15)
f,res = dd.plot_diff_abundance_enrichment(max_show=numterms,ignore_exp=True, use_term_pairs=False, colors=['green','red'], num_results_needed=numterms, min_appearances=2, random_seed=2018, max_id=max_id)
f.set_xlim([-1,1])
f.figure.set_size_inches(6.3,2)
f.figure
# save the csv table of the terms
res.feature_metadata.to_csv('./terms.csv')
f.figure.savefig('caloric-restriction-diff-terms.pdf')
f.figure.savefig('caloric-restriction-diff-terms.svg')
# save the tsv table of the terms
res.feature_metadata.to_csv('./terms-list.tsv',sep='\t')
###Output
_____no_output_____
###Markdown
Create also the full enriched terms list for the supplementary table. We use min_appearances=1 so we'll get even the terms enriched in only 1 experiment (as opposed to the bar plot, where we showed only the terms enriched in 2 experiments).
###Code
positive = dd.feature_metadata['_calour_stat'] > 0
positive = dd.feature_metadata.index.values[positive.values]
enriched, term_features, features = dd.enrichment(features=positive, dbname='dbbact', ignore_exp=True, random_seed=2018,max_id=max_id, min_appearances=1)
enriched.to_csv('./all-enriched-terms.tsv',sep='\t')
###Output
_____no_output_____
###Markdown
And the B/W per-term heatmap
###Code
tres=flip_data(res,'f')
tres=flip_data(tres,'s')
f=tres.plot(gui='jupyter',clim=[0,500],
feature_field='term',yticklabel_kwargs={'rotation':0},yticklabel_len=35,
cmap='Greys',norm=None)
f.figure.set_size_inches(6.3,2)
f.figure
f.save_figure('./caloric-restriction-diff-terms-heatmap.pdf')
f=tres.plot(gui='jupyter',clim=[0,500], cmap='Greys',norm=None)
f.save_figure('./caloric-restriction-diff-terms-heatmap-no-labels.pdf')
###Output
_____no_output_____
###Markdown
The BMI term zoom
###Code
db=ca.database._get_database_class('dbbact')
%matplotlib inline
f=db.plot_term_venn_all('low bmi',dd,max_id=max_id,use_exact=True)
f.savefig('./venn-low-bmi.pdf')
f=db.plot_term_venn_all('high bmi',dd,max_id=max_id,use_exact=True)
f.savefig('./venn-high-bmi.pdf')
###Output
_____no_output_____
###Markdown
Also let's view this as a heatmap for the term annotations for sequences in both groups
###Code
db.show_term_details_diff('low bmi',dd,gui='qt5')
###Output
_____no_output_____ |
colabs/run_sort_tracker_on_colab.ipynb | ###Markdown
Tracking people in a room using YOLOv5 and Deep SORT. One must use CPU for inference! Part 1. Download the GitHub repository and install requirements
###Code
!git clone https://github.com/maxmarkov/track_and_count
!pip3 install -r track_and_count/requirements.txt --quiet
%cd track_and_count/yolov5
!./weights/download_weights.sh
%cd ..
###Output
/root/track_and_count/yolov5
Downloading https://github.com/ultralytics/yolov5/releases/download/v3.0/yolov5s.pt to weights/yolov5s.pt...
100% 14.5M/14.5M [00:00<00:00, 98.9MB/s]
Downloading https://github.com/ultralytics/yolov5/releases/download/v3.0/yolov5m.pt to weights/yolov5m.pt...
100% 41.9M/41.9M [00:00<00:00, 91.6MB/s]
Downloading https://github.com/ultralytics/yolov5/releases/download/v3.0/yolov5l.pt to weights/yolov5l.pt...
100% 91.6M/91.6M [00:01<00:00, 92.4MB/s]
Downloading https://github.com/ultralytics/yolov5/releases/download/v3.0/yolov5x.pt to weights/yolov5x.pt...
100% 170M/170M [00:01<00:00, 94.4MB/s]
/root/track_and_count
###Markdown
Part 2. Run tracker on example.
###Code
!python3 track_yolov5_sort.py --source example/running.mp4 --weights yolov5/weights/yolov5s.pt --conf 0.4 --max_age 50 --min_hits 10 --iou_threshold 0.3
###Output
_____no_output_____
###Markdown
Download video with inference
###Code
from google.colab import files
f = 'inference/output/running.mp4'
files.download(f)
###Output
_____no_output_____ |
docs/tutorials/10 - Duramat Webinar Simulations.ipynb | ###Markdown
Duramat Webinar: US NREL Electric Futures 2021. This journal simulates the Reference and High Electrification scenarios from Electrification Futures and compares them to a glass baseline with a high-bifacial future projection. Installed capacity considerations from bifacial installations are not considered here. Results from this journal were presented during Duramat's webinar in April 2021 – "The Impacts of Module Reliability and Lifetime on PV in the Circular Economy", presented by Teresa Barnes, Silvana Ayala, and Heather Mirletz, NREL.
###Code
import os
from pathlib import Path
testfolder = str(Path().resolve().parent.parent / 'PV_ICE' / 'TEMP')
# Another option using relative address; for some operative systems you might need '/' instead of '\'
# testfolder = os.path.abspath(r'..\..\PV_DEMICE\TEMP')
print ("Your simulation will be stored in %s" % testfolder)
MATERIALS = ['glass','silver','silicon', 'copper','aluminium_frames']
MATERIAL = MATERIALS[0]
MODULEBASELINE = r'..\baselines\ElectrificationFutures_2021\baseline_modules_US_NREL_Electrification_Futures_2021_basecase.csv'
MODULEBASELINE_High = r'..\baselines\ElectrificationFutures_2021\baseline_modules_US_NREL_Electrification_Futures_2021_LowREHighElec.csv'
import PV_ICE
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
PV_ICE.__version__
plt.rcParams.update({'font.size': 22})
plt.rcParams['figure.figsize'] = (12, 5)
r1 = PV_ICE.Simulation(name='Simulation1', path=testfolder)
r1.createScenario(name='base', file=MODULEBASELINE)
for mat in range (0, len(MATERIALS)):
MATERIALBASELINE = r'..\baselines\baseline_material_'+MATERIALS[mat]+'.csv'
r1.scenario['base'].addMaterial(MATERIALS[mat], file=MATERIALBASELINE)
r1.createScenario(name='high', file=MODULEBASELINE_High)
for mat in range (0, len(MATERIALS)):
MATERIALBASELINE = r'..\baselines\baseline_material_'+MATERIALS[mat]+'.csv'
r1.scenario['high'].addMaterial(MATERIALS[mat], file=MATERIALBASELINE)
r2 = PV_ICE.Simulation(name='bifacialTrend', path=testfolder)
r2.createScenario(name='base', file=MODULEBASELINE)
MATERIALBASELINE = r'..\baselines\PVSC_2021\baseline_material_glass_bifacialTrend.csv'
r2.scenario['base'].addMaterial('glass', file=MATERIALBASELINE)
for mat in range (1, len(MATERIALS)):
MATERIALBASELINE = r'..\baselines\baseline_material_'+MATERIALS[mat]+'.csv'
r2.scenario['base'].addMaterial(MATERIALS[mat], file=MATERIALBASELINE)
r2.createScenario(name='high', file=MODULEBASELINE_High)
MATERIALBASELINE = r'..\baselines\PVSC_2021\baseline_material_glass_bifacialTrend.csv'
r2.scenario['high'].addMaterial('glass', file=MATERIALBASELINE)
for mat in range (1, len(MATERIALS)):
MATERIALBASELINE = r'..\baselines\baseline_material_'+MATERIALS[mat]+'.csv'
r2.scenario['high'].addMaterial(MATERIALS[mat], file=MATERIALBASELINE)
IRENA= False
ELorRL = 'EL'
if IRENA:
if ELorRL == 'RL':
weibullInputParams = {'alpha': 5.3759} # Regular-loss scenario IRENA
if ELorRL == 'EL':
        weibullInputParams = {'alpha': 2.49} # Early-loss scenario IRENA
r1.calculateMassFlow(weibullInputParams=weibullInputParams, weibullAlphaOnly=True)
r2.calculateMassFlow(weibullInputParams=weibullInputParams, weibullAlphaOnly=True)
title_Method = 'Irena_'+ELorRL
else:
r1.calculateMassFlow()
r2.calculateMassFlow()
title_Method = 'PVICE'
###Output
Working on Scenario: base
********************
Finished Area+Power Generation Calculations
==> Working on Material : glass
==> Working on Material : silver
==> Working on Material : silicon
==> Working on Material : copper
==> Working on Material : aluminium_frames
Working on Scenario: high
********************
Finished Area+Power Generation Calculations
==> Working on Material : glass
==> Working on Material : silver
==> Working on Material : silicon
==> Working on Material : copper
==> Working on Material : aluminium_frames
Working on Scenario: base
********************
Finished Area+Power Generation Calculations
==> Working on Material : glass
==> Working on Material : silver
==> Working on Material : silicon
==> Working on Material : copper
==> Working on Material : aluminium_frames
Working on Scenario: high
********************
Finished Area+Power Generation Calculations
==> Working on Material : glass
==> Working on Material : silver
==> Working on Material : silicon
==> Working on Material : copper
==> Working on Material : aluminium_frames
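###Markdown
(Added sketch, assuming the scenario dataframe keeps a 'year' column alongside 'Installed_Capacity_[W]', as used later in this notebook.) A quick sanity check of the mass-flow run is to look at the installed capacity reached by 2050 in each scenario:
###Code
for scen in ['base', 'high']:
    df_scen = r1.scenario[scen].data
    cap2050 = df_scen.loc[df_scen['year'] == 2050, 'Installed_Capacity_[W]'].values[0]
    print(scen, round(cap2050/1e9, 1), 'GW')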
###Markdown
Creating Summary of results
###Code
objects = [r1, r2]
scenarios = ['base', 'high']
USyearly=pd.DataFrame()
keyword='mat_Total_Landfilled'
materials = ['glass', 'silicon', 'silver', 'copper', 'aluminium_frames']
# Loop over objects
for kk in range(0, len(objects)):
obj = objects[kk]
# Loop over Scenarios
for jj in range(0, len(scenarios)):
case = scenarios[jj]
for ii in range (0, len(materials)):
material = materials[ii]
foo = obj.scenario[case].material[material].materialdata[keyword].copy()
foo = foo.to_frame(name=material)
USyearly["Waste_"+material+'_'+obj.name+'_'+case] = foo[material]
filter_col = [col for col in USyearly if (col.startswith('Waste') and col.endswith(obj.name+'_'+case)) ]
USyearly['Waste_Module_'+obj.name+'_'+case] = USyearly[filter_col].sum(axis=1)
# Converting grams to metric tonnes (the actual conversion is applied further below).
USyearly.head(20)
keyword='mat_Total_EOL_Landfilled'
materials = ['glass', 'silicon', 'silver', 'copper', 'aluminium_frames']
# Loop over objects
for kk in range(0, len(objects)):
obj = objects[kk]
# Loop over Scenarios
for jj in range(0, len(scenarios)):
case = scenarios[jj]
for ii in range (0, len(materials)):
material = materials[ii]
foo = obj.scenario[case].material[material].materialdata[keyword].copy()
foo = foo.to_frame(name=material)
USyearly["Waste_EOL_"+material+'_'+obj.name+'_'+case] = foo[material]
filter_col = [col for col in USyearly if (col.startswith('Waste') and col.endswith(obj.name+'_'+case)) ]
USyearly['Waste_EOL_Module_'+obj.name+'_'+case] = USyearly[filter_col].sum(axis=1)
# Converting grams to metric tonnes (the actual conversion is applied further below).
USyearly.head(20)
keyword='mat_Virgin_Stock'
materials = ['glass', 'silicon', 'silver', 'copper', 'aluminium_frames']
# Loop over objects
for kk in range(0, len(objects)):
obj = objects[kk]
# Loop over Scenarios
for jj in range(0, len(scenarios)):
case = scenarios[jj]
for ii in range (0, len(materials)):
material = materials[ii]
foo = obj.scenario[case].material[material].materialdata[keyword].copy()
foo = foo.to_frame(name=material)
USyearly["VirginStock_"+material+'_'+obj.name+'_'+case] = foo[material]
filter_col = [col for col in USyearly if (col.startswith('VirginStock_') and col.endswith(obj.name+'_'+case)) ]
USyearly['VirginStock_Module_'+obj.name+'_'+case] = USyearly[filter_col].sum(axis=1)
###Output
_____no_output_____
###Markdown
Converting to grams to METRIC Tons.
###Code
USyearly = USyearly/1000000 # This is the ratio for Metric tonnes
#907185 -- this is for US tons
UScum = USyearly.copy()
UScum = UScum.cumsum()
UScum.head()
###Output
_____no_output_____
###Markdown
Adding Installed Capacity to US
###Code
keyword='Installed_Capacity_[W]'
materials = ['glass', 'silicon', 'silver', 'copper', 'aluminium_frames']
# Loop over SF Scenarios
for kk in range(0, len(objects)):
obj = objects[kk]
# Loop over Scenarios
for jj in range(0, len(scenarios)):
case = scenarios[jj]
foo = obj.scenario[case].data[keyword]
foo = foo.to_frame(name=keyword)
UScum["Capacity_"+obj.name+'_'+case] = foo[keyword]
UScum.tail(20)
###Output
_____no_output_____
###Markdown
Mining Capacity
###Code
USyearly.index = r1.scenario['base'].data['year']
UScum.index = r1.scenario['base'].data['year']
mining2020_aluminum = 65267000
mining2020_silver = 22260
mining2020_copper = 20000000
mining2020_silicon = 8000000
objects = [r1, r2]
scenarios = ['base', 'high']
###Output
_____no_output_____
###Markdown
PLOTTING GALORE
###Code
plt.rcParams.update({'font.size': 10})
plt.rcParams['figure.figsize'] = (12, 8)
keyw='VirginStock_'
materials = ['glass', 'silicon', 'silver', 'copper', 'aluminium_frames']
fig, axs = plt.subplots(1,1, figsize=(4, 6), facecolor='w', edgecolor='k')
fig.subplots_adjust(hspace = .3, wspace=.2)
# Loop over CASES
name2 = 'Simulation1_high'
name0 = 'Simulation1_base'
# ROW 2, Aluminum and Silicon: g- 4 aluminum k - 1 silicon orange - 3 copper gray - 2 silver
axs.plot(USyearly[keyw+materials[2]+'_'+name2]*100/mining2020_silver,
color = 'gray', linewidth=2.0, label='Silver')
axs.fill_between(USyearly.index, USyearly[keyw+materials[2]+'_'+name0]*100/mining2020_silver, USyearly[keyw+materials[2]+'_'+name2]*100/mining2020_silver,
color='gray', lw=3, alpha=.3)
axs.plot(USyearly[keyw+materials[1]+'_'+name2]*100/mining2020_silicon,
color = 'k', linewidth=2.0, label='Silicon')
axs.fill_between(USyearly.index, USyearly[keyw+materials[1]+'_'+name0]*100/mining2020_silicon,
USyearly[keyw+materials[1]+'_'+name2]*100/mining2020_silicon,
color='k', lw=3, alpha=.5)
axs.plot(USyearly[keyw+materials[4]+'_'+name2]*100/mining2020_aluminum,
color = 'g', linewidth=2.0, label='Aluminum')
axs.fill_between(USyearly.index, USyearly[keyw+materials[4]+'_'+name0]*100/mining2020_aluminum,
USyearly[keyw+materials[4]+'_'+name2]*100/mining2020_aluminum,
color='g', lw=3, alpha=.3)
axs.plot(USyearly[keyw+materials[3]+'_'+name2]*100/mining2020_copper,
color = 'orange', linewidth=2.0, label='Copper')
axs.fill_between(USyearly.index, USyearly[keyw+materials[3]+'_'+name0]*100/mining2020_copper,
USyearly[keyw+materials[3]+'_'+name2]*100/mining2020_copper,
color='orange', lw=3, alpha=.3)
axs.set_xlim([2020,2050])
axs.legend()
#axs.set_yscale('log')
#axs.set_ylabel('Virgin material needs as a percentage of 2020 global mining production capacity [%]')
fig.savefig(title_Method+' Fig_1x1_MaterialNeeds Ratio to Production_NREL2018.png', dpi=600)
plt.rcParams.update({'font.size': 15})
plt.rcParams['figure.figsize'] = (15, 8)
keyw='VirginStock_'
materials = ['glass', 'silicon', 'silver', 'copper', 'aluminium_frames']
f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios': [3, 1]})
########################
# SUBPLOT 1
########################
#######################
# loop plotting over scenarios
name2 = 'Simulation1_high'
name0 = 'Simulation1_base'
# SCENARIO 1 ***************
modulemat = (USyearly[keyw+materials[0]+'_'+name0]+USyearly[keyw+materials[1]+'_'+name0]+
USyearly[keyw+materials[2]+'_'+name0]+USyearly[keyw+materials[3]+'_'+name0]+
USyearly[keyw+materials[4]+'_'+name0])
glassmat = (USyearly[keyw+materials[0]+'_'+name0])
modulemat = modulemat/1000000
glassmat = glassmat/1000000
a0.plot(USyearly.index, modulemat, 'k.', linewidth=5, label='S1: '+name0+' module mass')
a0.plot(USyearly.index, glassmat, 'k', linewidth=5, label='S1: '+name0+' glass mass only')
a0.fill_between(USyearly.index, glassmat, modulemat, color='k', alpha=0.3,
interpolate=True)
# SCENARIO 2 ***************
modulemat = (USyearly[keyw+materials[0]+'_'+name2]+USyearly[keyw+materials[1]+'_'+name2]+
USyearly[keyw+materials[2]+'_'+name2]+USyearly[keyw+materials[3]+'_'+name2]+
USyearly[keyw+materials[4]+'_'+name2])
glassmat = (USyearly[keyw+materials[0]+'_'+name2])
modulemat = modulemat/1000000
glassmat = glassmat/1000000
a0.plot(USyearly.index, modulemat, 'c.', linewidth=5, label='S2: '+name2+' module mass')
a0.plot(USyearly.index, glassmat, 'c', linewidth=5, label='S2: '+name2+' glass mass only')
a0.fill_between(USyearly.index, glassmat, modulemat, color='c', alpha=0.3,
interpolate=True)
a0.legend()
a0.set_title('Yearly Virgin Material Needs by Scenario')
a0.set_ylabel('Mass [Million Tonnes]')
a0.set_xlim([2020, 2050])
a0.set_xlabel('Years')
########################
# SUBPLOT 2
########################
#######################
# Calculate
cumulations2050 = {}
for ii in range(0, len(materials)):
matcum = []
matcum.append(UScum[keyw+materials[ii]+'_'+name0].loc[2050])
matcum.append(UScum[keyw+materials[ii]+'_'+name2].loc[2050])
cumulations2050[materials[ii]] = matcum
dfcumulations2050 = pd.DataFrame.from_dict(cumulations2050)
dfcumulations2050 = dfcumulations2050/1000000 # in Million Tonnes
dfcumulations2050['bottom1'] = dfcumulations2050['glass']
dfcumulations2050['bottom2'] = dfcumulations2050['bottom1']+dfcumulations2050['aluminium_frames']
dfcumulations2050['bottom3'] = dfcumulations2050['bottom2']+dfcumulations2050['silicon']
dfcumulations2050['bottom4'] = dfcumulations2050['bottom3']+dfcumulations2050['copper']
## Plot BARS Stuff
ind=np.arange(2)
width=0.35 # width of the bars.
p0 = a1.bar(ind, dfcumulations2050['glass'], width, color='c')
p1 = a1.bar(ind, dfcumulations2050['aluminium_frames'], width,
bottom=dfcumulations2050['bottom1'])
p2 = a1.bar(ind, dfcumulations2050['silicon'], width,
bottom=dfcumulations2050['bottom2'])
p3 = a1.bar(ind, dfcumulations2050['copper'], width,
bottom=dfcumulations2050['bottom3'])
p4 = a1.bar(ind, dfcumulations2050['silver'], width,
bottom=dfcumulations2050['bottom4'])
a1.yaxis.set_label_position("right")
a1.yaxis.tick_right()
a1.set_ylabel('Virgin Material Cumulative Needs 2020-2050 [Million Tonnes]')
a1.set_xlabel('Scenario')
# set tick positions and labels separately (avoids the deprecated positional minor argument)
a1.set_xticks(ind)
a1.set_xticklabels(('S1', 'S2'))
#plt.yticks(np.arange(0, 81, 10))
a1.legend((p0[0], p1[0], p2[0], p3[0], p4[0] ), ('Glass', 'aluminium_frames', 'Silicon','Copper','Silver'))
f.tight_layout()
f.savefig(title_Method+' Fig_2x1_Yearly Virgin Material Needs by Scenario and Cumulatives_NREL2018.png', dpi=600)
print("Cumulative Virgin Needs by 2050 Million Tones by Scenario")
dfcumulations2050[['glass','silicon','silver','copper','aluminium_frames']].sum(axis=1)
###Output
C:\ProgramData\Anaconda3\lib\site-packages\ipykernel_launcher.py:89: MatplotlibDeprecationWarning: Passing the minor parameter of set_ticks() positionally is deprecated since Matplotlib 3.2; the parameter will become keyword-only two minor releases later.
###Markdown
Bonus: Bifacial Trend Cumulative Virgin Needs (not plotted, just values)
###Code
name2 = 'bifacialTrend_high'
name0 = 'bifacialTrend_base'
cumulations2050 = {}
for ii in range(0, len(materials)):
matcum = []
matcum.append(UScum[keyw+materials[ii]+'_'+name0].loc[2050])
matcum.append(UScum[keyw+materials[ii]+'_'+name2].loc[2050])
cumulations2050[materials[ii]] = matcum
dfcumulations2050 = pd.DataFrame.from_dict(cumulations2050)
dfcumulations2050 = dfcumulations2050/1000000 # in Million Tonnes
print("Cumulative Virgin Needs by 2050 Million Tones by Scenario for Bifacial Trend")
dfcumulations2050[['glass','silicon','silver','copper','aluminium_frames']].sum(axis=1)
###Output
Cumulative Virgin Needs by 2050 Million Tonnes by Scenario for Bifacial Trend
###Markdown
Waste by year
###Code
plt.rcParams.update({'font.size': 15})
plt.rcParams['figure.figsize'] = (15, 8)
keyw='Waste_'
materials = ['glass', 'silicon', 'silver', 'copper', 'aluminium_frames']
f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios': [3, 1]})
########################
# SUBPLOT 1
########################
#######################
# loop plotting over scenarios
name2 = 'Simulation1_high'
name0 = 'Simulation1_base'
# SCENARIO 1 ***************
modulemat = (USyearly[keyw+materials[0]+'_'+name0]+USyearly[keyw+materials[1]+'_'+name0]+
USyearly[keyw+materials[2]+'_'+name0]+USyearly[keyw+materials[3]+'_'+name0]+
USyearly[keyw+materials[4]+'_'+name0])
glassmat = (USyearly[keyw+materials[0]+'_'+name0])
modulemat = modulemat/1000000
glassmat = glassmat/1000000
a0.plot(USyearly.index, modulemat, 'k.', linewidth=5, label='S1: '+name0+' module mass')
a0.plot(USyearly.index, glassmat, 'k', linewidth=5, label='S1: '+name0+' glass mass only')
a0.fill_between(USyearly.index, glassmat, modulemat, color='k', alpha=0.3,
interpolate=True)
# SCENARIO 2 ***************
modulemat = (USyearly[keyw+materials[0]+'_'+name2]+USyearly[keyw+materials[1]+'_'+name2]+
USyearly[keyw+materials[2]+'_'+name2]+USyearly[keyw+materials[3]+'_'+name2]+
USyearly[keyw+materials[4]+'_'+name2])
glassmat = (USyearly[keyw+materials[0]+'_'+name2])
modulemat = modulemat/1000000
glassmat = glassmat/1000000
a0.plot(USyearly.index, modulemat, 'c.', linewidth=5, label='S2: '+name2+' module mass')
a0.plot(USyearly.index, glassmat, 'c', linewidth=5, label='S2: '+name2+' glass mass only')
a0.fill_between(USyearly.index, glassmat, modulemat, color='c', alpha=0.3,
interpolate=True)
a0.legend()
a0.set_title('Yearly Material Waste by Scenario')
a0.set_ylabel('Mass [Million Tonnes]')
a0.set_xlim([2020, 2050])
a0.set_xlabel('Years')
########################
# SUBPLOT 2
########################
#######################
# Calculate
cumulations2050 = {}
for ii in range(0, len(materials)):
matcum = []
matcum.append(UScum[keyw+materials[ii]+'_'+name0].loc[2050])
matcum.append(UScum[keyw+materials[ii]+'_'+name2].loc[2050])
cumulations2050[materials[ii]] = matcum
dfcumulations2050 = pd.DataFrame.from_dict(cumulations2050)
dfcumulations2050 = dfcumulations2050/1000000 # in Million Tonnes
dfcumulations2050['bottom1'] = dfcumulations2050['glass']
dfcumulations2050['bottom2'] = dfcumulations2050['bottom1']+dfcumulations2050['aluminium_frames']
dfcumulations2050['bottom3'] = dfcumulations2050['bottom2']+dfcumulations2050['silicon']
dfcumulations2050['bottom4'] = dfcumulations2050['bottom3']+dfcumulations2050['copper']
## Plot BARS Stuff
ind=np.arange(2)
width=0.35 # width of the bars.
p0 = a1.bar(ind, dfcumulations2050['glass'], width, color='c')
p1 = a1.bar(ind, dfcumulations2050['aluminium_frames'], width,
bottom=dfcumulations2050['bottom1'])
p2 = a1.bar(ind, dfcumulations2050['silicon'], width,
bottom=dfcumulations2050['bottom2'])
p3 = a1.bar(ind, dfcumulations2050['copper'], width,
bottom=dfcumulations2050['bottom3'])
p4 = a1.bar(ind, dfcumulations2050['silver'], width,
bottom=dfcumulations2050['bottom4'])
a1.yaxis.set_label_position("right")
a1.yaxis.tick_right()
a1.set_ylabel('Cumulative Waste by 2050 [Million Tonnes]')
a1.set_xlabel('Scenario')
# set tick positions and labels separately (avoids the deprecated positional minor argument)
a1.set_xticks(ind)
a1.set_xticklabels(('S1', 'S2'))
#plt.yticks(np.arange(0, 81, 10))
a1.legend((p0[0], p1[0], p2[0], p3[0], p4[0] ), ('Glass', 'aluminium_frames', 'Silicon','Copper','Silver'))
f.tight_layout()
f.savefig(title_Method+' Fig_2x1_Yearly WASTE by Scenario and Cumulatives_NREL2018.png', dpi=600)
print("Cumulative Waste by 2050 Million Tones by case")
dfcumulations2050[['glass','silicon','silver','copper','aluminium_frames']].sum(axis=1)
plt.rcParams.update({'font.size': 15})
plt.rcParams['figure.figsize'] = (15, 8)
keyw='Waste_EOL_'
materials = ['glass', 'silicon', 'silver', 'copper', 'aluminium_frames']
f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios': [3, 1]})
########################
# SUBPLOT 1
########################
#######################
# loop plotting over scenarios
name2 = 'Simulation1_high'
name0 = 'Simulation1_base'
# SCENARIO 1 ***************
modulemat = (USyearly[keyw+materials[0]+'_'+name0]+USyearly[keyw+materials[1]+'_'+name0]+
USyearly[keyw+materials[2]+'_'+name0]+USyearly[keyw+materials[3]+'_'+name0]+
USyearly[keyw+materials[4]+'_'+name0])
glassmat = (USyearly[keyw+materials[0]+'_'+name0])
modulemat = modulemat/1000000
glassmat = glassmat/1000000
a0.plot(USyearly.index, modulemat, 'k.', linewidth=5, label='S1: '+name0+' module mass')
a0.plot(USyearly.index, glassmat, 'k', linewidth=5, label='S1: '+name0+' glass mass only')
a0.fill_between(USyearly.index, glassmat, modulemat, color='k', alpha=0.3,
interpolate=True)
# SCENARIO 2 ***************
modulemat = (USyearly[keyw+materials[0]+'_'+name2]+USyearly[keyw+materials[1]+'_'+name2]+
USyearly[keyw+materials[2]+'_'+name2]+USyearly[keyw+materials[3]+'_'+name2]+
USyearly[keyw+materials[4]+'_'+name2])
glassmat = (USyearly[keyw+materials[0]+'_'+name2])
modulemat = modulemat/1000000
glassmat = glassmat/1000000
a0.plot(USyearly.index, modulemat, 'c.', linewidth=5, label='S2: '+name2+' module mass')
a0.plot(USyearly.index, glassmat, 'c', linewidth=5, label='S2: '+name2+' glass mass only')
a0.fill_between(USyearly.index, glassmat, modulemat, color='c', alpha=0.3,
interpolate=True)
a0.legend()
a0.set_title('Yearly Material Waste by Scenario')
a0.set_ylabel('Mass [Million Tonnes]')
a0.set_xlim([2020, 2050])
a0.set_xlabel('Years')
########################
# SUBPLOT 2
########################
#######################
# Calculate
cumulations2050 = {}
for ii in range(0, len(materials)):
matcum = []
matcum.append(UScum[keyw+materials[ii]+'_'+name0].loc[2050])
matcum.append(UScum[keyw+materials[ii]+'_'+name2].loc[2050])
cumulations2050[materials[ii]] = matcum
dfcumulations2050 = pd.DataFrame.from_dict(cumulations2050)
dfcumulations2050 = dfcumulations2050/1000000 # in Million Tonnes
dfcumulations2050['bottom1'] = dfcumulations2050['glass']
dfcumulations2050['bottom2'] = dfcumulations2050['bottom1']+dfcumulations2050['aluminium_frames']
dfcumulations2050['bottom3'] = dfcumulations2050['bottom2']+dfcumulations2050['silicon']
dfcumulations2050['bottom4'] = dfcumulations2050['bottom3']+dfcumulations2050['copper']
## Plot BARS Stuff
ind=np.arange(2)
width=0.35 # width of the bars.
p0 = a1.bar(ind, dfcumulations2050['glass'], width, color='c')
p1 = a1.bar(ind, dfcumulations2050['aluminium_frames'], width,
bottom=dfcumulations2050['bottom1'])
p2 = a1.bar(ind, dfcumulations2050['silicon'], width,
bottom=dfcumulations2050['bottom2'])
p3 = a1.bar(ind, dfcumulations2050['copper'], width,
bottom=dfcumulations2050['bottom3'])
p4 = a1.bar(ind, dfcumulations2050['silver'], width,
bottom=dfcumulations2050['bottom4'])
a1.yaxis.set_label_position("right")
a1.yaxis.tick_right()
a1.set_ylabel('Cumulative EOL Only Waste by 2050 [Million Tonnes]')
a1.set_xlabel('Scenario')
# set tick positions and labels separately (avoids the deprecated positional minor argument)
a1.set_xticks(ind)
a1.set_xticklabels(('S1', 'S2'))
#plt.yticks(np.arange(0, 81, 10))
a1.legend((p0[0], p1[0], p2[0], p3[0], p4[0] ), ('Glass', 'aluminium_frames', 'Silicon','Copper','Silver'))
f.tight_layout()
f.savefig(title_Method+' Fig_2x1_Yearly EOL Only WASTE by Scenario and Cumulatives_NREL2018.png', dpi=600)
print("Cumulative Eol Only Waste by 2050 Million Tones by case")
dfcumulations2050[['glass','silicon','silver','copper','aluminium_frames']].sum(axis=1)
###Output
C:\ProgramData\Anaconda3\lib\site-packages\ipykernel_launcher.py:89: MatplotlibDeprecationWarning: Passing the minor parameter of set_ticks() positionally is deprecated since Matplotlib 3.2; the parameter will become keyword-only two minor releases later.
|
.ipynb_checkpoints/MaxCut_Isaac-checkpoint.ipynb | ###Markdown
This is a quick and inefficient MaxCut prototype using QuTiP
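For reference, the operators and state constructed in the next cell are the standard MaxCut QAOA objects:

```latex
C = \sum_{(i,j)\in E} \tfrac{1}{2}\left(\mathbb{1} - Z_i Z_j\right), \qquad
B = \sum_{i=1}^{N} X_i, \qquad
|\psi(\vec{\gamma},\vec{\beta})\rangle = \prod_{k} e^{-i\beta_k B}\, e^{-i\gamma_k C}\,|+\rangle^{\otimes N}.
```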
###Code
# Number of qubits
N= 4
# Define Operators
def gen_cij(edge):
i,j = edge
Id = [qeye(2) for n in range(N)]
si_n = tensor(Id)
Id[i] = sigmaz()
Id[j] = sigmaz()
zij = tensor(Id)
return 0.5*(si_n - zij)
def gen_B():
b_op = 0*tensor([qeye(2) for j in range(N)])
for i in range(N):
Id = [qeye(2) for j in range(N)]
Id[i] = sigmax()
b_op += tensor(Id)
return b_op
def gen_init():
init = tensor([basis(2,0) for i in range(N)])
x_all = tensor([hadamard_transform(1) for i in range(N)])
return (x_all*init).unit()
ψ_init = gen_init()
edges = [[0,1],[1,2],[2,3],[3,0]]
C = sum(gen_cij(edge) for edge in edges)  # plain sum of Qobj terms; np.sum over a generator is deprecated
B = gen_B()
C_vals, C_vecs = C.eigenstates()
def gen_U(angles):
L = len(angles)
γs = angles[:int(L/2)]
βs = angles[int(L/2):]
U = np.prod([(-1j*βs[i]*B).expm()*(-1j*γs[i]*C).expm() for i in range(int(L/2))])
return U
def cost(angles,num_measures=0):
""" The cost function of MaxCut QAOA
Args:
angles (list): The list of angles. The first half contains the γs and the second half has βs.
num_measures (int): The number of measurement shots.
Returns:
float: The negative of the energy of the C expression. """
U_temp = gen_U(angles)
ψ_temp = U_temp*ψ_init
if num_measures == 0:
energy = -expect(C,ψ_temp)
else:
sim_counts = np.random.multinomial(num_measures,expect(ket2dm(ψ_temp),C_vecs))/num_measures
energy = - np.dot(C_vals,sim_counts)
return energy
cost([1,2],0)
###Output
_____no_output_____
###Markdown
Comparing some optimization routines. Single layer: Nelder-Mead
###Code
def averaged(x): # Given a list of arrays, outputs their average and standard deviation
# First, find the maximum length among the arrays.
maxlength = 0
for array in x:
if len(array) > maxlength:
maxlength = len(array)
# Pad elements at the end of each arrays, so that all of them have the same length.
for array in x:
array += [array[-1]] * (maxlength - len(array))
# Take the mean and standard deviation.
mean = []
std = []
for i in range(maxlength):
data = [x[j][i] for j in range(len(x))]
mean.append(np.mean(data))
std.append(np.std(data))
return mean, std
num_layers = 2
n_measures_list = [100,1000,10000,100000]
n_trials = 10
stats_NM = []
def callback_NM(x): # The callback function records the "noiseless" value of the cost function.
stats_NM_temp.append(cost(x))
for n_measures in n_measures_list:
stats_NM = []
for i in range(n_trials):
# Initialize the history array
stats_NM_temp = []
# Record the data for n_trials iterations
sol_NM = minimize(cost,np.random.rand(2*num_layers),callback=callback_NM,args = (n_measures,),method='Nelder-Mead')
#sol_NM = sparse_minimize(cost, np.random.rand(2*num_layers), cutoff=100,samples=50, args= 100)
stats_NM.append(stats_NM_temp)
# Take the number of average and standard deviation
mean, std = averaged(stats_NM)
plt.errorbar(x=[i * n_measures for i in range(len(mean))], y=mean, yerr=std)
plt.title('n_measures ={}'.format(n_measures))
plt.xlabel('# samples')
plt.ylabel('cost')
plt.show()
for i in range(n_trials):
#plt.plot(stats_NM[i],label=f'trial #{i}')
plt.plot(stats_NM[i])
plt.legend()
plt.xlabel('# steps')
plt.ylabel('cost');
end = []
for i in range(len(stats_NM)):
end.append(stats_NM[i][:-1])
a=np.hstack(end)
plt.hist(a)
plt.show()
###Output
_____no_output_____
###Markdown
Sparse Optimization: Preliminary. Sparse optimization works very well! We first begin with a frequency cutoff of $50$ and sample only $30$ data points. This choice is arbitrary.
###Code
n_measures_list = [100,1000,10000]
n_trials = 10
stats_so = []
n_samples = 30
n_cutoff=50
for n_measures in n_measures_list:
stats_so = []
for i in range(n_trials):
# Record the data for n_trials iterations
x0, value, history = sparse_minimize(cost, np.random.rand(2*num_layers), cutoff=n_cutoff,samples=n_samples, args= n_measures)
stats_so.append(history)
# Take the number of average and standard deviation
print("Trial={}".format(i))
mean, std = averaged(stats_so)
plt.errorbar(x=[i * n_measures * n_samples for i in range(len(mean))], y=mean, yerr=std)
plt.title('n_measures ={}'.format(n_measures))
plt.xlabel('# samples')
plt.ylabel('cost')
plt.show()
###Output
itt=19, which=3, cost=-3.914753201908698, x0=[4.0212386 4.64955713 5.52920307 5.02654825]Trial=0
itt=19, which=3, cost=-3.8780348810131997, x0=[2.38761042 4.52389342 0.75398224 1.13097336]Trial=1
itt=19, which=3, cost=-3.814543090814257, x0=[2.136283 4.77522083 2.26194671 4.39822972]Trial=2
itt=19, which=3, cost=-3.856281352264909, x0=[1.25663706 3.89557489 5.15221195 3.64424748]Trial=3
itt=19, which=3, cost=-3.9603930578132873, x0=[1.13097336 0.75398224 5.15221195 5.27787566]Trial=4
itt=19, which=3, cost=-3.7573044510065983, x0=[4.0212386 4.1469023 0.50265482 5.15221195]Trial=5
itt=19, which=3, cost=-3.9984457819530115, x0=[1.13097336 4.0212386 5.15221195 5.27787566]Trial=6
itt=19, which=3, cost=-3.875858935182515, x0=[1.00530965 1.25663706 2.38761042 0.37699112]Trial=7
itt=19, which=3, cost=-3.9618390342343694, x0=[3.89557489 1.50796447 3.89557489 0.37699112]Trial=8
itt=19, which=3, cost=-3.9617078794297225, x0=[3.89557489 1.50796447 3.89557489 2.0106193 ]Trial=9
###Markdown
Sparse optimization: fixing the total sample size and changing the number of samples per data point. Given a fixed total number of samples, we vary the number of samples per data point. With a sample per angle of $100$, one reaches the minimum at ~$200000$ samples, whereas with a sample per angle of $1000$, one reaches the minimum at ~$1300000$ samples. It gets worse with a sample per angle of $10000$.
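As a quick sanity check, the per-setting iteration budget implied by this fixed total follows directly from the `int(n_measures_total / n_measures / n_samples)` expression used in the cell below:

```python
# Iteration budget per setting for a fixed total of 600000 measurements
# (mirrors the expression passed as `itt` in the next cell).
n_measures_total = 600000
n_samples = 30
for n_measures in [100, 1000, 10000]:
    itt = int(n_measures_total / n_measures / n_samples)
    print(n_measures, "measurements per point ->", itt, "iterations")
```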
###Code
n_measures_total = 600000
n_measures_list = [100,1000,10000]
n_trials = 10
stats_so = []
n_samples = 30
n_cutoff=50
for n_measures in n_measures_list:
stats_so = []
for i in range(n_trials):
# Record the data for n_trials iterations
x0, value, history = sparse_minimize(cost, np.random.rand(2*num_layers), cutoff=n_cutoff,samples=n_samples, itt = int(n_measures_total/n_measures/n_samples), args= n_measures)
stats_so.append(history)
# Take the number of average and standard deviation
print("Trial={}".format(i))
mean, std = averaged(stats_so)
plt.errorbar(x=[i * n_measures * n_samples for i in range(len(mean))], y=mean, yerr=std)
plt.title('n_measures ={}'.format(n_measures))
plt.xlabel('# samples')
plt.ylabel('cost')
plt.show()
###Output
itt=199, which=3, cost=-3.863922854877589, x0=[0.62831853 4.90088454 2.26194671 3.64424748]Trial=0
itt=199, which=3, cost=-3.998445781953009, x0=[1.13097336 4.0212386 2.0106193 2.136283 ]Trial=1
itt=199, which=3, cost=-3.9125278140884547, x0=[1.13097336 3.89557489 3.64424748 5.27787566]Trial=2
itt=199, which=3, cost=-3.5872245719211624, x0=[1.00530965 1.63362818 0.87964594 0.25132741]Trial=3
itt=199, which=3, cost=-3.957548690259249, x0=[4.0212386 1.38230077 2.26194671 5.15221195]Trial=4
itt=199, which=3, cost=-3.9911593278919137, x0=[3.89557489 1.63362818 5.52920307 0.37699112]Trial=5
itt=199, which=3, cost=-3.912527814088449, x0=[5.15221195 2.38761042 2.63893783 4.1469023 ]Trial=6
itt=199, which=3, cost=-3.970043616382802, x0=[1.13097336 1.00530965 0.50265482 5.27787566]Trial=7
itt=199, which=3, cost=-3.9765064702886628, x0=[3.89557489 4.77522083 0.75398224 5.15221195]Trial=8
itt=199, which=3, cost=-3.9765064702886543, x0=[2.38761042 1.50796447 2.38761042 4.27256601]Trial=9
###Markdown
Sparse-SPSA. Same setting: n_cutoff=50, n_samples=30. Not great.
###Code
n_measures_list = [100,1000,10000]
n_trials = 10
stats_so = []
n_samples = 30
n_cutoff=50
for n_measures in n_measures_list:
stats_so = []
for i in range(n_trials):
# Record the data for n_trials iterations
x0, value, history = sparse_spsa(cost, np.random.rand(2*num_layers), cutoff=n_cutoff,samples=n_samples, itt = 50, args= n_measures)
stats_so.append(history)
# Take the number of average and standard deviation
print("Trial={}".format(i))
mean, std = averaged(stats_so)
plt.errorbar(x=[i * n_measures * n_samples for i in range(len(mean))], y=mean, yerr=std)
plt.title('n_measures ={}'.format(n_measures))
plt.xlabel('# samples')
plt.ylabel('cost')
plt.show()
###Output
itt=49, cost=-3.977889414396259, x0=[5.43943588 4.70428669 2.32529753 2.75145479]Trial=0
itt=49, cost=-3.7561854217162844, x0=[5.04161744 5.65814039 2.63007965 4.1683354 ]Trial=1
itt=49, cost=-3.6789989288815748, x0=[2.38056479 1.36102291 5.69567369 2.56902823]Trial=2
itt=49, cost=-3.91368333045663, x0=[0.76202293 1.62948287 3.87813319 3.49089011]Trial=3
itt=49, cost=-3.8754781405016483, x0=[0.63122875 4.89395494 3.86737186 3.63126827]Trial=4
itt=49, cost=-3.9204567200784, x0=[5.26695984 5.18435503 2.689149 2.61946215]Trial=5
itt=49, cost=-3.1476781743534685, x0=[4.534174 0.75466943 2.92977918 3.43053978]Trial=6
itt=49, cost=-3.942996911205269, x0=[0.68858303 1.60588269 0.83740183 1.97229072]Trial=7
itt=49, cost=-3.7798814414875, x0=[2.13981726 5.21380335 5.72601048 5.87773258]Trial=8
itt=49, cost=-3.2186424036900734, x0=[3.66357507 5.79151136 0.44819753 0.82418211]Trial=9
###Markdown
n_cutoff=100, n_samples=60
###Code
n_measures_list = [100,1000,10000]
n_trials = 10
stats_so = []
n_samples = 60
n_cutoff=100
for n_measures in n_measures_list:
stats_so = []
for i in range(n_trials):
# Record the data for n_trials iterations
x0, value, history = sparse_spsa(cost, np.random.rand(2*num_layers), cutoff=n_cutoff,samples=n_samples, itt = 50, args= n_measures)
stats_so.append(history)
# Take the number of average and standard deviation
print("Trial={}".format(i))
mean, std = averaged(stats_so)
plt.errorbar(x=[i * n_measures * n_samples for i in range(len(mean))], y=mean, yerr=std)
plt.title('n_measures ={}'.format(n_measures))
plt.xlabel('# samples')
plt.ylabel('cost')
plt.show()
###Output
itt=49, cost=-3.944653181757502, x0=[5.46540037 1.66856834 3.83965652 2.75946609]Trial=0
itt=49, cost=-3.961472591382087, x0=[4.25904834 4.09869731 3.6284231 3.64472822]Trial=1
itt=49, cost=-3.942542166005555, x0=[2.04791019 2.06251503 2.68660073 2.60460223]Trial=2
itt=49, cost=-3.9604005348375635, x0=[2.30297516 4.79480474 2.26205358 2.78913436]Trial=3
itt=49, cost=-3.788334595180927, x0=[2.13600564 5.44941645 1.04101855 5.66863069]Trial=4
itt=49, cost=-3.9317123422653477, x0=[5.12083702 5.39123418 1.04986884 4.21217541]Trial=5
itt=49, cost=-3.9549452576982516, x0=[2.45756856 1.5409327 0.74153699 5.89255567]Trial=6
itt=49, cost=-3.794729523752649, x0=[2.09196111 2.11931361 5.6965216 5.6794406 ]Trial=7
itt=49, cost=-3.6935761035660053, x0=[4.10053262 4.1466945 0.72028291 1.95323457]Trial=8
itt=49, cost=-3.9599779307234777, x0=[4.21197455 1.03869489 2.06770273 5.21942071]Trial=9
###Markdown
BFGS. It seems that BFGS doesn't work with noise!
###Code
stats_BFGS = []
n_measures = 10000
def callback_BFGS(x):
stats_BFGS.append(cost(x))
sol_BFGS = minimize(cost,np.random.rand(2*num_layers),args = (n_measures,),callback=callback_BFGS,method='BFGS')
sol_BFGS.x,sol_BFGS.fun #Noisy cost of the optimal solution
cost(sol_BFGS.x,0) # Noiseless cost of the optimal solution
plt.plot(stats_BFGS)
# plt.ylim([-1,-3])
###Output
_____no_output_____
###Markdown
Two layers Nelder-Mead
###Code
num_layers = 2
stats_NM = []
n_measures = 10000
def callback_NM(x): # The callback function records the "noiseless" value of the cost function.
stats_NM.append(cost(x))
sol_NM = minimize(cost,np.random.rand(2*num_layers),callback=callback_NM,args = (n_measures,),method='Nelder-Mead')
plt.plot(stats_NM)
sol_NM.x,sol_NM.fun
cost(sol_NM.x,0) # Noiseless cost of the optimal solution
stats_BFGS = []
n_measures = 10000
def callback_BFGS(x):
stats_BFGS.append(cost(x))
sol_BFGS = minimize(cost,np.random.rand(2*num_layers),args = (n_measures,),callback=callback_BFGS,method='BFGS')
plt.plot(stats_BFGS)
sol_BFGS.x,sol_BFGS.fun
cost(sol_BFGS.x,0) # Noiseless cost of the optimal solution
###Output
_____no_output_____
###Markdown
Comparing with Rigetti's solution
###Code
ψ_init = gen_init()
angle_list = [2.35394,1.18] #[γ,β]
U_mat = gen_U(angle_list)
ψ = U_mat*ψ_init
energy = expect(C,ψ)
print(f"The optimized value is {energy}")
###Output
The optimized value is 2.9999608711896224
###Markdown
Plotting the wavefunction weights
###Code
plt.bar(np.arange(16),np.abs(ψ.full()).flatten())
###Output
_____no_output_____ |
AIND-sudoku/sudoku_sol.ipynb | ###Markdown
sudoku solution
###Code
from utils import *
row_units = [cross(r, cols) for r in rows]
column_units = [cross(rows, c) for c in cols]
square_units = [cross(rs, cs) for rs in ('ABC','DEF','GHI') for cs in ('123','456','789')]
diag_unit_01=[a+b for a,b in zip("ABCDEFGHI","123456789")]
diag_unit_02=[a+b for a,b in zip("ABCDEFGHI","987654321")]
unitlist = row_units + column_units + square_units
unitlist.append(diag_unit_01)
unitlist.append(diag_unit_02)
# TODO: Update the unit list to add the new diagonal units
unitlist = unitlist
units = dict((s, [u for u in unitlist if s in u]) for s in boxes)
peers = dict((s, set(sum(units[s],[]))-set([s])) for s in boxes)
def naked_twins(values):
potential_twins = [item for item in values.keys() if len(values[item]) == 2]
# Collect boxes that have the same elements
naked_twins = [[x,y] for x in potential_twins for y in peers[x] if set(values[x])==set(values[y]) ]
# For each pair of naked twins,
for i in range(len(naked_twins)):
box1 = naked_twins[i][0]
box2 = naked_twins[i][1]
# 1- compute intersection of peers
peers1 = set(peers[box1])
peers2 = set(peers[box2])
peers_int = peers1 & peers2
# 2- Delete the two digits in naked twins from all common peers.
for peer_val in peers_int:
if len(values[peer_val])>2:
for rm_val in values[box1]:
values = assign_value(values, peer_val, values[peer_val].replace(rm_val,''))
return values
"""Eliminate values using the naked twins strategy.
Parameters
----------
values(dict)
a dictionary of the form {'box_name': '123456789', ...}
Returns
-------
dict
The values dictionary with the naked twins eliminated from peers
Notes
-----
Your solution can either process all pairs of naked twins from the input once,
or it can continue processing pairs of naked twins until there are no such
pairs remaining -- the project assistant test suite will accept either
convention. However, it will not accept code that does not process all pairs
of naked twins from the original input. (For example, if you start processing
pairs of twins and eliminate another pair of twins before the second pair
is processed then your code will fail the PA test suite.)
The first convention is preferred for consistency with the other strategies,
and because it is simpler (since the reduce_puzzle function already calls this
strategy repeatedly).
"""
# TODO: Implement this function!
raise NotImplementedError
def eliminate(values):
for item in values.keys():
if len(values[item])==1:
val=values[item]
for peer in peers[item]:
if peer in values:
values[peer]=values[peer].replace(val,"")
"""Apply the eliminate strategy to a Sudoku puzzle
The eliminate strategy says that if a box has a value assigned, then none
of the peers of that box can have the same value.
Parameters
----------
values(dict)
a dictionary of the form {'box_name': '123456789', ...}
Returns
-------
dict
The values dictionary with the assigned values eliminated from peers
"""
# TODO: Copy your code from the classroom to complete this function
return values
raise NotImplementedError
def only_choice(values):
for unit in unitlist:
for i in "123456789":
s=[]
for box in unit:
if i in values[box]:
s.append(box)
if len(s)==1:
values[s[0]]=i
"""Apply the only choice strategy to a Sudoku puzzle
The only choice strategy says that if only one box in a unit allows a certain
digit, then that box must be assigned that digit.
Parameters
----------
values(dict)
a dictionary of the form {'box_name': '123456789', ...}
Returns
-------
dict
The values dictionary with all single-valued boxes assigned
Notes
-----
You should be able to complete this function by copying your code from the classroom
"""
# TODO: Copy your code from the classroom to complete this function
return values
raise NotImplementedError
def reduce_puzzle(values):
stalled = False
while not stalled:
# Check how many boxes have a determined value
solved_values_before = len([box for box in values.keys() if len(values[box]) == 1])
values=eliminate(values)
values=only_choice(values)
# Your code here: Use the Eliminate Strategy
# Your code here: Use the Only Choice Strategy
# Check how many boxes have a determined value, to compare
solved_values_after = len([box for box in values.keys() if len(values[box]) == 1])
# If no new values were added, stop the loop.
stalled = solved_values_before == solved_values_after
# Sanity check, return False if there is a box with zero available values:
if len([box for box in values.keys() if len(values[box]) == 0]):
return False
return values
raise NotImplementedError
def search(values):
values = reduce_puzzle(values)
if values is False:
return False ## Failed earlier
if all(len(values[s]) == 1 for s in boxes):
return values ## Solved!
# Choose one of the unfilled squares with the fewest possibilities
n,s = min((len(values[s]), s) for s in boxes if len(values[s]) > 1)
# Now use recurrence to solve each one of the resulting sudokus, and
print(n,s)
for value in values[s]:
new_sudoku = values.copy()
new_sudoku[s] = value
attempt = search(new_sudoku)
if attempt:
return attempt
return values
raise NotImplementedError
def solve(grid):
"""Find the solution to a Sudoku puzzle using search and constraint propagation
Parameters
----------
grid(string)
a string representing a sudoku grid.
Ex. '2.............62....1....7...6..8...3...9...7...6..4...4....8....52.............3'
Returns
-------
dict or False
The dictionary representation of the final sudoku grid or False if no solution exists.
"""
values = grid2values(grid)
values = search(values)
return values
if __name__ == "__main__":
diag_sudoku_grid = '2.7...........62....1....7...6..8...3...9...7...6..4...4....8....52.............3'
display(grid2values(diag_sudoku_grid))
result = solve(diag_sudoku_grid)
display(result)
try:
import PySudoku
PySudoku.play(grid2values(diag_sudoku_grid), result, history)
except SystemExit:
pass
except:
print('We could not visualize your board due to a pygame issue. Not a problem! It is not a requirement.')
###Output
2 123456789 7 |123456789 123456789 123456789 |123456789 123456789 123456789
123456789 123456789 123456789 |123456789 123456789 6 | 2 123456789 123456789
123456789 123456789 1 |123456789 123456789 123456789 |123456789 7 123456789
------------------------------+------------------------------+------------------------------
123456789 123456789 6 |123456789 123456789 8 |123456789 123456789 123456789
3 123456789 123456789 |123456789 9 123456789 |123456789 123456789 7
123456789 123456789 123456789 | 6 123456789 123456789 | 4 123456789 123456789
------------------------------+------------------------------+------------------------------
123456789 4 123456789 |123456789 123456789 123456789 | 8 123456789 123456789
123456789 123456789 5 | 2 123456789 123456789 |123456789 123456789 123456789
123456789 123456789 123456789 |123456789 123456789 123456789 |123456789 123456789 3
2 6 7 |9 4 5 |3 8 1
8 5 3 |7 1 6 |2 4 9
4 9 1 |8 2 3 |5 7 6
------+------+------
5 7 6 |4 3 8 |1 9 2
3 8 4 |1 9 2 |6 5 7
1 2 9 |6 5 7 |4 3 8
------+------+------
6 4 2 |3 7 9 |8 1 5
9 3 5 |2 8 1 |7 6 4
7 1 8 |5 6 4 |9 2 3
We could not visualize your board due to a pygame issue. Not a problem! It is not a requirement.
|
Jupyter Notebook/.ipynb_checkpoints/svm-checkpoint.ipynb | ###Markdown
Read the CSV and Perform Basic Data Cleaning
###Code
df = pd.read_csv("exoplanet_data.csv")
# Drop the null columns where all values are null
df = df.dropna(axis='columns', how='all')
# Drop the null rows
df = df.dropna()
df.shape
df.head()
df.describe()
df.keys()
###Output
_____no_output_____
###Markdown
Select your features (columns)
###Code
# Set features. This will also be used as your x values.
#selected_features = df[['koi_disposition', 'koi_fpflag_nt', 'koi_fpflag_ss', 'koi_fpflag_co',
# 'koi_fpflag_ec', 'koi_period', 'koi_time0bk', 'koi_duration', 'ra','dec',]]
target = df["koi_disposition"]
data = df.drop("koi_disposition", axis=1)
feature_names = data.columns
data.head()
###Output
_____no_output_____
###Markdown
Create a Train Test Split. Use `koi_disposition` for the y values.
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(data, target, random_state=42)
X_train.head()
###Output
_____no_output_____
###Markdown
Pre-processing. Scale the data using the MinMaxScaler and perform some feature selection.
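The cell below only applies the scaler. If the feature selection mentioned here is also wanted, one option is scikit-learn's `SelectKBest` (a sketch only, not part of the original notebook; the choice `k=10` is arbitrary):

```python
# Hypothetical feature-selection step (illustrative, not from the original notebook):
# keep the k features with the highest ANOVA F-score.
from sklearn.feature_selection import SelectKBest, f_classif

selector = SelectKBest(score_func=f_classif, k=10)
X_train_sel = selector.fit_transform(X_train, y_train)
X_test_sel = selector.transform(X_test)
```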
###Code
from sklearn.preprocessing import MinMaxScaler
# Scale your data
X_scaler = MinMaxScaler().fit(X_train)
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
# testing
X_test_scaled
###Output
_____no_output_____
###Markdown
Train the Model
###Code
from sklearn.preprocessing import MinMaxScaler
X_minmax = MinMaxScaler().fit(X_train)
X_train_minmax = X_minmax.transform(X_train)
X_test_minmax = X_minmax.transform(X_test)
from sklearn.svm import SVC
model = SVC(kernel='linear')
model.fit(X_train_scaled, y_train)
print(f"Training Data Score: {model.score(X_train_scaled, y_train)}")
print(f"Testing Data Score: {model.score(X_test_scaled, y_test)}")
plt.figure(figsize=(10,7))
sns.heatmap(df.corr(),annot=True)
###Output
_____no_output_____
###Markdown
Hyperparameter Tuning. Use `GridSearchCV` to tune the model's parameters.
###Code
# Create the GridSearchCV model
from sklearn.model_selection import GridSearchCV
param_grid = {'C': [1, 5, 10],
'gamma': [0.0001, 0.0005, 0.001]}
grid = GridSearchCV(model, param_grid, verbose=3)
# Train the model with GridSearch
grid.fit(X_train_scaled, y_train)
print(grid.best_params_)
print(grid.best_score_)
grid.score(X_train_scaled, y_train)
predictions = grid.predict(X_test_scaled)
print(predictions)
from sklearn.metrics import classification_report
print(classification_report(y_test, predictions))
###Output
_____no_output_____
###Markdown
Save the Model
###Code
# save your model by updating "your_name" with your name
# and "your_model" with your model variable
# be sure to turn this in to BCS
# if joblib fails to import, try running the command to install in terminal/git-bash
import joblib
filename = '../Models/H_svm.sav'
joblib.dump(model, filename)
###Output
_____no_output_____ |
gabarito_aula_3.ipynb | ###Markdown
Basic course on computational tools for astronomy

Contact: Julia Gschwend ([[email protected]](mailto:[email protected]))
Github: https://github.com/linea-it/minicurso-jupyter
Site: https://minicurso-ed2.linea.gov.br/
Last checked: 26/08/2021

Exercises for Class 3 - Data access through the LIneA Science Server, Pandas DataFrame

Part 1 - repeat the steps from class 3

Suggestions:
* Run the cells below, removing the comment markers ("#"), to redo all the steps demonstrated in class.
* Try small variations of the code and compare the results.
* Use the Markdown cells to add your own comments or extra information.

Index
1.1 [Basic SQL](#sql)
1.2 [Data download](#download)
1.3 [Data handling with NumPy](#numpy)
1.4 [Data handling with Pandas](#pandas)

1.1 Basic SQL
[slides for class 3](https://docs.google.com/presentation/d/1lK8XNvj1MG_oC39iNfEA16PiU10mzhgUkO0irmxgTmE/preview?slide=id.ge8847134d3_0_821)

1.2 Data download (optional)
If you do not yet have access to the [LIneA Science Server](https://desportal2.cosmology.illinois.edu) and want to skip this step, go to [section 1.3](#numpy). To illustrate reading data from a Jupyter Notebook, we will create a file with data downloaded through the **User Query** tool of the [LIneA Science Server](https://desportal2.cosmology.illinois.edu) platform. Before running the query that will produce the file, let us check the size (number of rows) of the table it will generate, using the following query:

```sql
SELECT COUNT(*)
FROM DES_ADMIN.DR2_MAIN
WHERE ra > 35 and ra < 36
AND dec > -10 and dec < -9
AND mag_auto_g between 15 and 23
AND CLASS_STAR_G < 0.5
AND FLAGS_I <= 4
```

Copy the query above and paste it into the `SQL Sentence` field, then press the `Preview` button. The expected result is 8303 objects. Replace the text with the query below to create the table, selecting the columns we will use in the demonstration.

```sql
SELECT coadd_object_id, ra, dec, mag_auto_g, mag_auto_r, mag_auto_i,
       magerr_auto_g, magerr_auto_r, magerr_auto_i, flags_i
FROM DES_ADMIN.DR2_MAIN
WHERE ra > 35 and ra < 36
AND dec > -10 and dec < -9
AND mag_auto_g between 15 and 23
AND CLASS_STAR_G < 0.5
AND FLAGS_I <= 4
```

Click the `Play` (Execute Query) button in the upper left corner, choose a name for your table and press `Start`. You will receive a notification email when your table is ready, and it will appear in the `My Tables` menu. Click the down-arrow button next to the table name to open the menu and click `Download`. You will receive an email with a link to download the catalog as a `.zip` file. The data accessed through the Science Server are hosted on the NCSA computers. We will download this subset of the data in _comma-separated values_ (CSV) format and then upload the resulting file to JupyterHub. We suggest naming the file `galaxias.csv` and keeping it inside the `dados` folder. In the cells below we will practice reading and writing files and handling data with two different libraries.

1.3 NumPy
NumPy library documentation: https://numpy.org/doc/stable/index.html
To get started, import the library.
###Code
import numpy as np
print('NumPy version: ', np.__version__)
###Output
_____no_output_____
###Markdown
Read the file and assign its whole content to a variable using the `loadtxt` function ([doc](https://numpy.org/doc/stable/reference/generated/numpy.loadtxt.html)):
###Code
tabela = np.loadtxt("dados/galaxias.csv", delimiter=",", skiprows=1)
tabela
###Output
_____no_output_____
###Markdown
Check the object type of the variable.
###Code
type(tabela)
###Output
_____no_output_____
###Markdown
Check the data type of the elements contained in the array.
###Code
tabela.dtype
###Output
_____no_output_____
###Markdown
Check the dimensions of the array (rows, columns).
###Code
tabela.shape
tabela[0].dtype
###Output
_____no_output_____
###Markdown
Use the `unpack` argument of the `loadtxt` function to get the columns separately. When enabled, the `unpack` argument returns the transposed table.
###Code
tabela_unpack = np.loadtxt("dados/galaxias.csv", delimiter=",", unpack=True, skiprows=1)
tabela_unpack.shape
###Output
_____no_output_____
###Markdown
If you already know the order of the columns, you can assign each of them to a variable. Tip: take the names from the file header.
###Code
coadd_object_id,dec,flags_i,magerr_auto_g,magerr_auto_i,\
magerr_auto_r,mag_auto_g,mag_auto_i,mag_auto_r,meta_id,ra = tabela_unpack
coadd_object_id
###Output
_____no_output_____
###Markdown
You can do this directly when reading the file. Note: by reusing the same variable names, you are overwriting the previous ones.
###Code
coadd_object_id,dec,flags_i,magerr_auto_g,magerr_auto_i,\
magerr_auto_r,mag_auto_g,mag_auto_i,mag_auto_r,meta_id,ra = np.loadtxt("dados/galaxias.csv", delimiter=",", unpack=True, skiprows=1)
coadd_object_id
###Output
_____no_output_____
###Markdown
Create a filter (or mask) for bright galaxies (mag i < 20).
###Code
mask = (mag_auto_i < 20)
mask
###Output
_____no_output_____
###Markdown
Check the size of the array **without** the filter.
###Code
len(mag_auto_i)
###Output
_____no_output_____
###Markdown
Check the size of the array **with** the filter.
###Code
len(mag_auto_i[mask])
###Output
_____no_output_____
###Markdown
A "máscara" (ou filtro) tambem pode ser aplicada a qualquer outro array com as mesmas dimensões. Teste com outra coluna.
###Code
len(ra[mask])
###Output
_____no_output_____
###Markdown
The mask is useful for creating a subset of the data. Create a subset with 3 columns and with the rows that satisfy the filter condition.
###Code
subset_tabela = np.array([coadd_object_id[mask], mag_auto_i[mask], magerr_auto_i[mask]])
###Output
_____no_output_____
###Markdown
Save this subset to a file using the `savetxt` function.
###Code
np.savetxt("subset_galaxias.csv", subset_tabela)
###Output
_____no_output_____
###Markdown
Take a look at the saved file (double-clicking the file opens the table viewer). Anything wrong with the number of rows?
###Code
np.savetxt("subset_galaxias.csv", subset_tabela.T)
###Output
_____no_output_____
###Markdown
What about now? Anything wrong with the number of columns?
###Code
np.savetxt("subset_galaxias.csv", subset_tabela.T, delimiter=",")
###Output
_____no_output_____
###Markdown
How does the formatting look? How about making it more readable by setting the number of spaces?
###Code
np.savetxt("subset_galaxias.csv", subset_tabela.T, delimiter=",", fmt=["%12i", "%10.4f", "%10.4f"])
###Output
_____no_output_____
###Markdown
How about a header to name the columns?
###Code
np.savetxt("subset_galaxias.csv", subset_tabela.transpose(), delimiter=",", fmt=["%12i", "%10.4f", "%10.4f"],
header="objects id, magnitude, mag error")
###Output
_____no_output_____
###Markdown
1.4 Pandas. Pandas library documentation: https://pandas.pydata.org/docs/
###Code
import pandas as pd
print('Pandas version: ', pd.__version__)
###Output
_____no_output_____
###Markdown
Pandas Series: https://pandas.pydata.org/docs/reference/api/pandas.Series.html
Pandas DataFrame: https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html
```python
class pandas.DataFrame(data=None, index=None, columns=None, dtype=None, copy=None)
```
_"Two-dimensional, size-mutable, potentially heterogeneous tabular data. Data structure also contains labeled axes (rows and columns). Arithmetic operations align on both row and column labels. Can be thought of as a dict-like container for Series objects. The primary pandas data structure."_
Example of creating a DataFrame from a dictionary:
###Code
dic = {"nomes": ["Huguinho", "Zezinho", "Luizinho"],
"cores": ["vermelho", "verde", "azul"],
"características": ["nerd", "aventureiro", "criativo"]}
sobrinhos = pd.DataFrame(dic)
sobrinhos
###Output
_____no_output_____
###Markdown
A DataFrame is also the result of reading a table with Pandas' `read_csv` function.
###Code
dados = pd.read_csv("dados/galaxias.csv")
dados
###Output
_____no_output_____
###Markdown
Let us explore some useful functions and attributes of a _DataFrame_ object, starting with its "spec sheet".
###Code
dados.info()
###Output
_____no_output_____
###Markdown
Print the first rows.
###Code
dados.head()
###Output
_____no_output_____
###Markdown
Set the number of rows displayed.
###Code
dados.head(3)
###Output
_____no_output_____
###Markdown
Print the last rows.
###Code
dados.tail()
###Output
_____no_output_____
###Markdown
Rename the `coadd_object_id` column.
###Code
dados.rename(columns={"coadd_object_id":"ID"})
dados
dados.rename(columns={"coadd_object_id":"ID"}, inplace=True)
dados
###Output
_____no_output_____
###Markdown
Use the `coadd_object_id` column (the object IDs) as the index of the DataFrame.
###Code
dados.set_index("ID")
dados
dados.set_index("ID", inplace=True)
dados
###Output
_____no_output_____
###Markdown
Print the _DataFrame_ description (a summary with basic statistics).
###Code
dados.describe()
###Output
_____no_output_____
###Markdown
Create a filter to select objects with perfect photometry (flags_i = 0).
###Code
dados.query('flags_i == 0')
###Output
_____no_output_____
###Markdown
Check the size of the filtered _DataFrame_ and compare it with the original.
###Code
dados.query('flags_i == 0').count()
dados.count()
###Output
_____no_output_____
###Markdown
Handling qualitative data: quality flags
###Code
dados.flags_i
type(dados.flags_i)
###Output
_____no_output_____
###Markdown
Count of the values in each category (each flag).
###Code
dados.flags_i.value_counts()
###Output
_____no_output_____
###Markdown
The _Series_ class has some simple built-in plots, for example the bar plot (histogram).
###Code
%matplotlib inline
dados.flags_i.value_counts().plot(kind='bar')
###Output
_____no_output_____
###Markdown
Pie chart:
###Code
dados.flags_i.value_counts().plot(kind='pie')
###Output
_____no_output_____
###Markdown
******* Part 2 - practice handling a _DataFrame_ with non-sensitive student data. The non-sensitive data of this edition's students are available in the file `alunos.csv`, inside the `dados` directory. The cells below contain instructions to practice the _Pandas_ commands seen above. Create new cells if you find it necessary. Read the file and assign the resulting dataset to a variable.
###Code
alunos = pd.read_csv("dados/alunos.csv")
alunos.head()
###Output
_____no_output_____
###Markdown
Print the information of the _DataFrame_ you created using the `info` method. Compare it with the information of the galaxy _dataset_.
###Code
alunos.info()
###Output
_____no_output_____
###Markdown
Print the first rows.
###Code
alunos.head()
###Output
_____no_output_____
###Markdown
Print the last 2 rows.
###Code
alunos.tail(2)
###Output
_____no_output_____
###Markdown
Rename the columns using uppercase letters.
###Code
columns = {}
for column in alunos.columns:
columns[column] = column.upper()
alunos.rename(columns=columns, inplace=True)
alunos
###Output
_____no_output_____
###Markdown
Print the _DataFrame_ description (a summary with basic statistics). Compare the result with the galaxy _DataFrame_. Why are the attributes different?
###Code
alunos.describe()
###Output
_____no_output_____
###Markdown
Create a filter (or a "query") to select only the students of the Jupyter Notebook minicourse.
###Code
alunos.query("MINICURSO != 'SS' ")
###Output
_____no_output_____
###Markdown
Print the number of Jupyter Notebook minicourse students per state.
###Code
alunos.query("MINICURSO != 'SS' ")["UF"].value_counts()
###Output
_____no_output_____
###Markdown
Make a histogram of the states of the Jupyter Notebook minicourse students using the `plot` method of the _Series_ class.
###Code
alunos.query("MINICURSO != 'SS' ")["UF"].value_counts().plot(kind='bar')
###Output
_____no_output_____
###Markdown
Make a pie chart of the periods of the Jupyter Notebook minicourse students using the `plot` method of the _Series_ class.
###Code
alunos.query("MINICURSO != 'SS' ")["PERÍODO"].value_counts().plot(kind='pie')
###Output
_____no_output_____ |
Lesson1/Numbers.ipynb | ###Markdown
Numbers. Integers are whole numbers, positive or negative: for example, 2 and -2. Floating point numbers have a decimal point in them, or use an exponential (e) to define the number. For example, 2.0 and -2.1 are floating point numbers, and 4E2 (4 times 10 to the power of 2) is also an example of a floating point number in Python.
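For instance, the exponential form mentioned above can be checked directly:

```python
# 4E2 means 4 times 10 to the power of 2, and it is a float
4E2           # 400.0
type(4E2)     # <class 'float'>
```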
###Code
2+1
2-1
2*2
3*2
3/2
float(3)/2
2**3
4**0.5
2 + 10 * 10 + 3
(2 + 10) * (10 + 3)
a = 5
a
a + a
a = 10
a
a = a + a
a
my_income = 100
tax_rate = 0.1
my_taxes = my_income * tax_rate
my_taxes
###Output
_____no_output_____ |
2.NLP_and_Preprocessing/Tockenizer.ipynb | ###Markdown
Tokenize
###Code
import numpy as np
from konlpy.tag import *
from sklearn.feature_extraction.text import CountVectorizer
posToUse=["NNP","NNG","MAG","NP","VV","VV+EF",'XSV+EC']
def getTokens(s):
global posToUse
return [ i[0] for i in Mecab().pos(s) if i[1] in posToUse ]
s="나는 너를 사랑한다"
print( getTokens(s))
###Output
['나', '너', '사랑', '한다']
###Markdown
1. Simple Version by NLTK
###Code
from nltk.tokenize import word_tokenize
import nltk
nltk.download('punkt')
s="나는 너를 사랑한다"
print(word_tokenize("나는 나를 사랑한다"))
from nltk.tokenize import WordPunctTokenizer
WordPunctTokenizer().tokenize(s)
###Output
_____no_output_____
###Markdown
2. Keras Tokenizer
###Code
corpus = [
"고인물은 게임을 즐긴다.",
"남중호는 게임을 잘한다.",
"남중구는 게임을 못한다.",
"나는 나란 놈 사랑한다.18",
"너의 게임 나의 게임 사랑하라"
]
s=" ".join(corpus)
print('All string :' ,s)
from tensorflow.keras.preprocessing.text import Tokenizer
tokenizer = Tokenizer()
sentences=[getTokens(r) for r in corpus]
tokenizer.fit_on_texts (sentences)
print(tokenizer.texts_to_sequences(sentences))
tokenizer.word_index
###Output
_____no_output_____
###Markdown
3. Keras pad_sequence
###Code
from tensorflow.keras.preprocessing.sequence import pad_sequences
sentences=[getTokens(r) for r in corpus]
encoded = tokenizer.texts_to_sequences(sentences)
pad_sequences(encoded)
pad_sequences(encoded,padding = 'post')
###Output
_____no_output_____
###Markdown
cf) Sentence Tokenizer
###Code
from nltk.tokenize import sent_tokenize
text="Mr. Nam은 게임을 즐긴다. 남중호는 게임을 잘한다...그런가 보다."
print(sent_tokenize(text))
?tokenizer.texts_to_sequences
###Output
_____no_output_____ |
CATBoost-v2.ipynb | ###Markdown
CATBOOST
###Code
df_train = pd.read_csv('../data/train_con_features_encoded.csv', index_col='Unnamed: 0')
df_test = pd.read_csv('../data/test_con_features_encoded.csv', index_col='Unnamed: 0')
display(df_train.head())
#Guardo y remuevo la columna id de los datos
id_col = df_test['id']
df_train = df_train.drop(['id'], axis=1)
df_test = df_test.drop(['id'], axis=1)
#Separo features de entrenamiento del precio
feature_cols = df_train.columns.tolist()
feature_cols.remove('precio')
X = df_train[feature_cols]
y = df_train['precio']
feature_cols
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size=0.25, random_state=0)
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)
from catboost import CatBoostRegressor
'''CatBoost = CatBoostRegressor(loss_function='MAE')
params = {'depth':[10,15],
'iterations':[1000],
'learning_rate':[0.05,0.1,0.12],
'l2_leaf_reg':[10, 15],
}
#border_count
#ctr_border_count
grid_search_result = CatBoost.grid_search(params, X=X_train, y=y_train, plot=True)'''
iterations = 1000 #best=1000 MAE=551393 CATBOOST ENC(ciudades) + ONE HOT ENC ---> k=50
depth = 10 #best=10 MAE=550847 COUNT ENCODED ---> k=50
l2_leaf_reg = 6 #best=6
learning_rate = 0.07 #best=0.07
border_count = 128 #best=128
subsample = 0.9 #best=0.9
colsample_bylevel = 0.9 #best=0.9
CatBoost = CatBoostRegressor(iterations=iterations, loss_function='MAE', depth=depth, l2_leaf_reg=l2_leaf_reg,
learning_rate=learning_rate, border_count=border_count, subsample=subsample, colsample_bylevel=colsample_bylevel)
CatBoost_fit = CatBoost.fit(X_train, y_train)
CatBoost_pred = CatBoost_fit.predict(X_test)
from sklearn.metrics import mean_absolute_error
CatBoost_mae = mean_absolute_error(y_test, CatBoost_pred)
CatBoost_mae_train = mean_absolute_error(y_train, CatBoost_fit.predict(X_train))
print(f"MAE CATBoost (train): {CatBoost_mae_train:.5f}")
print(f"MAE CATBoost: {CatBoost_mae:.5f}")
print("------------------------------")
CatBoost_fit.feature_importances_
features = pd.DataFrame(index=feature_cols)
features['imp'] = CatBoost_fit.feature_importances_
features = features.sort_values(['imp'], ascending = False)
features
plt.style.use('default')
plt.rcParams['figure.figsize'] = (10, 10)
sns.set(style="whitegrid")
g = sns.barplot(y=features.index, x=features.imp, \
palette=sns.color_palette("Reds_d", 10));
g.set_title('Importancia de Features de CATBoost', fontsize=15);
g.set_xlabel('Valor');
g.set_ylabel('Nombre del Feature');
CatBoost_pred_submit = CatBoost.fit(X, y).predict(df_test)
resultado_submit = pd.DataFrame(index=df_test.index)
resultado_submit['id'] = id_col
resultado_submit['target'] = CatBoost_pred_submit
display(resultado_submit.head())
resultado_submit.to_csv('../data/submitCATBoost-v2.csv',index=False)
###Output
_____no_output_____ |
code/evo_tune_201023/.ipynb_checkpoints/generate_subsample_pfamA_target_motor_toolkit-checkpoint.ipynb | ###Markdown
Filter out motor_toolkit n >= 5000
###Code
motor_toolkit = motor_toolkit.loc[motor_toolkit["Length"] < 5000,:]
###Output
_____no_output_____ |
023 - Modelo de dados em Python/023 - Modelo de dados em Python.ipynb | ###Markdown
Code strongly based on Chapter 1 of Luciano Ramalho's book: **Ramalho, L. (2015). *Python fluente: Programação clara, concisa e eficaz*. Novatec.**
###Code
from IPython.display import Image
Image(filename = 'C:/Users/User/Desktop/Python para Psicólogos/python-geral/023 - Modelo de dados em Python/Ramalho (2015).jpg')
###Output
_____no_output_____
###Markdown
**A Pythonic deck of cards: illustrating the special (dunder) methods `__getitem__` and `__len__`**
###Code
import collections
Card = collections.namedtuple('Card', ['rank', 'suit'])
class FrenchDeck:
# Cria um baralho; implementação atual não permite embaralhamento; ver Cap. 11
ranks = [str(n) for n in range(2, 11)] + list('JQKA') # valores
suits = 'espadas ouros paus copas'.split() # naipes
def __init__(self):
self._cards = [Card(rank, suit) for suit in self.suits for rank in self.ranks]
def __len__(self):
return len(self._cards)
def __getitem__(self, position):
return self._cards[position]
# Criando uma carta
beer_card = Card('7', 'ouros')
beer_card
# Cria um objeto da classe FrenchDeck, que representará um baralho
deck = FrenchDeck()
len(deck)
# Acessa cartas em posições específicas do baralho
print(deck[0])
print(deck[-2])
# Selecionando cartas aleatórias do baralho
from random import choice
for i in range(0, 3):
x = choice(deck)
print(x)
# o uso do método dunder __getitem__ possibilita fatiamento:
print(deck[12::13])
print()
# e torna o objeto da classe FrenchDeck iterável
for card in deck: # for card in reversed(deck) --> mesma operação, mas na ordem inversa
print(card)
# sem o método dunder __contains__, o operador in leva a uma varredura sequencial
print(Card('Q', 'copas') in deck)
print(Card('22', 'paus') in deck)
# usando a ordenação (valoração) do baralho
suit_values = dict(espadas = 3, copas = 2, ouros = 1, paus = 0)
def spades_high(card):
rank_value = FrenchDeck.ranks.index(card.rank)
return rank_value * len(suit_values) + suit_values[card.suit]
# listando o baralho em ordem crescente de classificação
for card in sorted(deck, key = spades_high):
print(card)
###Output
Card(rank='2', suit='paus')
Card(rank='2', suit='ouros')
Card(rank='2', suit='copas')
Card(rank='2', suit='espadas')
Card(rank='3', suit='paus')
Card(rank='3', suit='ouros')
Card(rank='3', suit='copas')
Card(rank='3', suit='espadas')
Card(rank='4', suit='paus')
Card(rank='4', suit='ouros')
Card(rank='4', suit='copas')
Card(rank='4', suit='espadas')
Card(rank='5', suit='paus')
Card(rank='5', suit='ouros')
Card(rank='5', suit='copas')
Card(rank='5', suit='espadas')
Card(rank='6', suit='paus')
Card(rank='6', suit='ouros')
Card(rank='6', suit='copas')
Card(rank='6', suit='espadas')
Card(rank='7', suit='paus')
Card(rank='7', suit='ouros')
Card(rank='7', suit='copas')
Card(rank='7', suit='espadas')
Card(rank='8', suit='paus')
Card(rank='8', suit='ouros')
Card(rank='8', suit='copas')
Card(rank='8', suit='espadas')
Card(rank='9', suit='paus')
Card(rank='9', suit='ouros')
Card(rank='9', suit='copas')
Card(rank='9', suit='espadas')
Card(rank='10', suit='paus')
Card(rank='10', suit='ouros')
Card(rank='10', suit='copas')
Card(rank='10', suit='espadas')
Card(rank='J', suit='paus')
Card(rank='J', suit='ouros')
Card(rank='J', suit='copas')
Card(rank='J', suit='espadas')
Card(rank='Q', suit='paus')
Card(rank='Q', suit='ouros')
Card(rank='Q', suit='copas')
Card(rank='Q', suit='espadas')
Card(rank='K', suit='paus')
Card(rank='K', suit='ouros')
Card(rank='K', suit='copas')
Card(rank='K', suit='espadas')
Card(rank='A', suit='paus')
Card(rank='A', suit='ouros')
Card(rank='A', suit='copas')
Card(rank='A', suit='espadas')
###Markdown
**How special methods are used.** They are called by the interpreter, not by the user. When the user writes `for i in minha_lista:`, Python implicitly calls `iter(minha_lista)`; if the `__iter__` method is available on the `minha_lista` instance, then `minha_lista.__iter__()` is called. **Emulating numeric types**
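A minimal sketch of the `iter()` dispatch just described (illustrative only, using a throwaway class rather than anything from the book):

```python
# The for loop never calls __getitem__ directly: the interpreter calls iter(seq),
# which falls back to __getitem__ because no __iter__ is defined.
class Sequence:
    def __init__(self, items):
        self._items = items
    def __len__(self):
        return len(self._items)
    def __getitem__(self, position):
        return self._items[position]

seq = Sequence(['a', 'b', 'c'])
for item in seq:   # implicitly: iter(seq)
    print(item)
```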
###Code
from math import hypot
class Vector(object):
"""Classe construtora de vetores euclidianos bidimensionais."""
def __init__(self, x = 0, y = 0):
self.x = x
self.y = y
def __repr__(self):
return f'Vector({self.x}, {self.y})'
def __abs__(self):
return hypot(self.x, self.y)
def __bool__(self):
return bool(abs(self))
def __add__(self, other):
x = self.x + other.x
y = self.y + other.y
return Vector(x, y)
def __mul__(self, scalar):
return Vector(self.x * scalar, self.y * scalar)
# Criando instâncias da classe Vector()
v1 = Vector(2, 4)
v2 = Vector(2, 1)
# as chamadas a seguir não invocam explicitamente os métodos dunder
# soma de vetores
v3 = v1 + v2
print(v3)
# retorna o comprimento da hipotenusa do vetor, considerando a origem (0, 0)
v = Vector(3, 4)
print(abs(v))
# multipla um vetor por um escalar
v4 = v * 4
print(v4)
print(abs(v * 4))
# O caso do método dunder __bool__
x = list()
y = list('Planeta Terra')
# listas não possuem o método x.__bool__()
# neste caso, bool(x) invoca x.__len__() e retorna False se len == 0
print(bool(x)) # False --> a magnitude do vetor É zero
print(bool(y)) # True --> a magnitude do vetor NÃO É zero
###Output
False
True
|
apps/Example - Abstract analysis without geoferenced locations.ipynb | ###Markdown
Workflow for Abstract Analysis without Georeferenced Locations. In this application, we showcase the capability to pre-process network graphs and simplify their topologies. The workflow is structured as follows: 1. Import relevant modules / tools / packages; 2. Define hydrogen supply chain alternatives; 3. Run the supply chain model; 4. Postprocessing. 1. Import relevant modules / tools / packages
###Code
from HIM.workflow.scenarioExample import *
#import processing as prc
from HIM import dataHandling as sFun
from HIM import optiSetup as optiFun
from HIM import hscClasses as hscFun
from HIM import plotFunctions as pFun
import copy as copy
import os
pathPlot=None
from HIM import hscAbstract as hscA
###Output
_____no_output_____
###Markdown
2. Define Hydrogen Supply Chain Alternatives
###Code
hscPathways=[["Electrolyzer","None","Compressor", "GH2-Cavern","None","Compressor","Pipeline","Compressor","GH2-Truck","GH2 (Trailer)"],
["Electrolyzer","None","Compressor", "GH2-Cavern","None","Compressor","Pipeline","None","Pipeline","GH2 (Pipeline)"],
["Electrolyzer","None","Compressor", "GH2-Cavern","None","Compressor","GH2-Truck","None","None","GH2 (Trailer)"],
["Electrolyzer","None","Compressor", "GH2-Cavern","None","Liquefaction","LH2-Truck","None","None","LH2"],
["Electrolyzer","None","Compressor", "GH2-Cavern","None","Hydrogenation","LOHC-Truck","None","None","LOHC (NG)"],
["Electrolyzer","None","Liquefaction", "LH2-Tank","Evaporation","Compressor","Pipeline","Compressor","GH2-Truck","GH2 (Trailer)"],
["Electrolyzer","None","Liquefaction", "LH2-Tank","Evaporation","Compressor","Pipeline","None","Pipeline","GH2 (Pipeline)"],
["Electrolyzer","None","Liquefaction", "LH2-Tank","Evaporation","Compressor","GH2-Truck","None","None","GH2 (Trailer)"],
["Electrolyzer","Liquefaction","None", "LH2-Tank","None","None","LH2-Truck","None","None","LH2"],
["Electrolyzer","None","Hydrogenation", "LOHC-Tank","Dehydrogenation","Compressor","Pipeline","Compressor","GH2-Truck","GH2 (Trailer)"],
["Electrolyzer","None","Hydrogenation", "LOHC-Tank","Dehydrogenation","Compressor","Pipeline","None","Pipeline","GH2 (Pipeline)"],
["Electrolyzer","None","Hydrogenation", "LOHC-Tank","Dehydrogenation","Compressor","GH2-Truck","None","None","GH2 (Trailer)"],
["Electrolyzer","Hydrogenation","None", "LOHC-Tank","None","None","LOHC-Truck","None","None","LOHC (NG)"]]
dfHSC=pd.DataFrame(hscPathways, columns=["Production",
"Connector1",
"Connector2",
"Storage",
"Connector3",
"Connector4",
"Transport1",
"Connector5",
"Transport2",
"Station"])
nameHSC=[]
for key, values in dfHSC.iterrows():
if values["Transport2"]=="None":
nameHSC.append(values["Storage"]+"\n"+values["Transport1"])
elif values["Transport2"]=="Pipeline":
nameHSC.append(values["Storage"]+"\nPipePipe")
else:
nameHSC.append(values["Storage"]+"\nPipe+"+values["Transport2"])
dfHSC["General"]=nameHSC
dfHSC
###Output
_____no_output_____
###Markdown
3. Run Abstract Supply Chain Model
###Code
res=100
###Output
_____no_output_____
###Markdown
Calculation of demand and distance arrays
###Code
demArr, distArr = builtDemDistArray(demMax=100000, # maximal demand
distMax=500, # maximal distance
res=res) # Resolution
Results={}
for x in range(len(hscPathways)):
Results[x]=hscA.HSCAbstract(demArr=demArr,
distArr=distArr,
dfTable=dfTable,
listHSC=hscPathways[x])
Results[x].calcHSC()
###Output
C:\ProgramData\Anaconda3\envs\him\lib\site-packages\pandas\core\indexing.py:1418: FutureWarning:
Passing list-likes to .loc or [] with any missing label will raise
KeyError in the future, you can use .reindex() as an alternative.
See the documentation here:
https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#deprecate-loc-reindex-listlike
return self._getitem_tuple(key)
###Markdown
Initialization
###Code
totalCost = np.empty((len(distArr), demArr.shape[1], len(hscPathways)))
minimalCost = np.empty((len(distArr), demArr.shape[1]))
###Output
_____no_output_____
###Markdown
extract data
###Code
for i in range(len(hscPathways)):
# Read interesting Numbers
totalCost[:, :, i] = Results[i].cumCost["Station"]
###Output
_____no_output_____
###Markdown
Cut disturbing Data
###Code
cuttedCost = copy.copy(totalCost)
lineCost = copy.copy(totalCost)
lineCost[:, :, :] = np.nan
minimalLineCost = np.empty((len(distArr), demArr.shape[1]))
minimalLineCost[:, :] = np.nan
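# The nested loops below do two things for every (distance, demand) grid point:
#   1. keep only the cheapest supply-chain pathway: minimalCost stores that minimum,
#      and every more expensive pathway in cuttedCost is blanked out with NaN;
#   2. detect points where the cheapest pathway changes relative to a neighbouring
#      grid cell, storing those costs in minimalLineCost so the boundaries between
#      cost regions can be drawn as lines in the surface plot below.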
for i in range(len(distArr)):
for j in range(len(demArr.T)):
minimalCost[i, j] = min(cuttedCost[i, j, :])
for k in range(len(hscPathways)):
if minimalCost[i, j] < cuttedCost[i, j, k]:
cuttedCost[i, j, k] = np.nan
if (
np.isnan(cuttedCost[i, j, k]) and np.isnan(cuttedCost[i - 1, j, k]) == False) or (np.isnan( cuttedCost[i, j, k]) == False and np.isnan(cuttedCost[i - 1, j, k]) == True):
if not(i == 0):
minimalLineCost[i, j] = minimalCost[i, j]
minimalLineCost[i - 1, j] = minimalCost[i - 1, j]
minimalLineCost[i, j - 1] = minimalCost[i, j - 1]
elif (np.isnan(cuttedCost[i, j, k]) and np.isnan(cuttedCost[i, j - 1, k]) == False) or (np.isnan(cuttedCost[i, j, k]) == False and np.isnan(cuttedCost[i, j - 1, k]) == True):
if not j == 0:
minimalLineCost[i, j] = minimalCost[i, j]
minimalLineCost[i - 1, j] = minimalCost[i - 1, j]
minimalLineCost[i, j - 1] = minimalCost[i, j - 1]
###Output
_____no_output_____
###Markdown
4. Postprocessing
###Code
pFun.trisurfplotMin(
demArr/1000,
distArr,
cuttedCost,
minimalCost,
minimalLineCost,
dfHSC,
figSize=(7,4),
zmax=10.5,
savePath=os.path.join(os.getcwd(),'results','FigureComparison.png'))
###Output
c:\users\l.kotzur\sciebo\fzj\04_public\him\HIM\plotFunctions.py:165: UserWarning: This figure includes Axes that are not compatible with tight_layout, so results might be incorrect.
plt.tight_layout()
|
tutorial_files/2_xbase_routing.ipynb | ###Markdown
Module 2: XBase Routing APIIn this module, you will learn the basics about the routing grid system in XBase. We will go over how tracks are defined, how to create wires, vias and pins, and how to define the size of a layout cell. XBase Routing Grid```yamlrouting_grid: layers: [4, 5, 6, 7] spaces: [0.06, 0.1, 0.12, 0.2] widths: [0.06, 0.1, 0.12, 0.2] bot_dir: 'x'```In XBase, all wires and vias have to be drawn on the routing grid, which is usually defined in a specification file, as shown above. On each layer, all wires must travel in the same direction (horizontal or vertical), and the wire direction alternates between adjacent layers. The routing grid usually starts on an intermediate layer (metal 4 in the above example), and lower layers are reserved for device primitive routing. As seen above, different layers can define different wire pitches, with the wire pitch generally increasing as you move up the metal stack.All layout cell dimensions in XBase must also be quantized to the routing grid, meaning that a layout cell must contain an integer number of tracks on all metal layers it uses. Because of the difference in wire pitch, a layout cell that uses more layers generally has coarser quantization compared with a layout cell that uses fewer layers. XBase Routing TracksThe figure above shows some wires drawn in XBase. Track pitch is the sum of the unit width and space, and track number 0 is defined as the wire that is half a pitch away from the left or bottom boundary. From the figure, you can see that the spacing between wires follows the formula $S = sp + N \cdot p$, where $N$ is the number of tracks in between.XBase also supports drawing thicker wires by using multiple adjacent tracks. Wire width follows the formula $W = w + (N - 1)\cdot p$, where $N$ is the number of tracks a wire uses. One issue with this scheme is that even-width wires waste more space compared to odd-width wires. For example, in the above figure, although tracks 1 and 3 are empty, no wire can be drawn there because it would violate the minimum spacing rule to the wire centered on track 2. As a result, the wire on track 2 takes up 3 tracks although it is only 2 tracks wide.To work around this issue, XBase allows you to place wires on half-integer tracks. In the above figure, the 2-track-wide wire is moved from track 2 to track 1.5, and thus wires can still be drawn on tracks 0 and 3, making the layout more space efficient. As an added benefit, track -0.5 is now on top of the left-most/bottom-most boundary, so it is now possible to share a wire with adjacent layout cells, such as supply wires in a custom digital standard cell. TrackID and WireArray```pythonclass TrackID(object): def __init__(self, layer_id, track_idx, width=1, num=1, pitch=0.0): type: (int, Union[float, int], int, int, Union[float, int]) -> None class WireArray(object): def __init__(self, track_id, lower, upper): type: (TrackID, float, float) -> None```Routing track locations are represented by the `TrackID` Python object. It has built-in support for drawing a multi-wire bus by specifying the optional `num` and `pitch` parameters, which define the number of wires in the bus and the number of track pitches between adjacent wires. The `layer_id` parameter specifies the routing layer ID, the `track_idx` parameter specifies the track index of the left-most/bottom-most wire, and `width` specifies the number of tracks a single wire uses.Physical wires in XBase are represented by the `WireArray` Python object.
It contains a `TrackID` object that describes the location of the wires, and `lower` and `upper` attributes that describe the starting and ending coordinates of those wires along the track. For example, a horizontal wire starting at $x = 0.5$ um and ending at $x = 3.0$ um will have `lower = 0.5` and `upper = 3.0`.One last note is that layout pins can only be added on `WireArray` objects. This guarantees that pins of a layout cell will always be on the routing grid. BAG Layout Generation Code```pythondef gen_layout(prj, specs, dsn_name, demo_class): get information from specs dsn_specs = specs[dsn_name] impl_lib = dsn_specs['impl_lib'] layout_params = dsn_specs['layout_params'] gen_cell = dsn_specs['gen_cell'] create layout template database tdb = make_tdb(prj, specs, impl_lib) compute layout print('computing layout') template = tdb.new_template(params=layout_params, temp_cls=demo_class) create layout in OA database print('creating layout') tdb.batch_layout(prj, [template], [gen_cell]) return corresponding schematic parameters print('layout done') return template.sch_params```The above code snippet (taken from the `xbase_demo.core` module) shows how layout is generated. First, the user creates a layout database object, which keeps track of the layout hierarchy. Then, the user uses the layout database object to create new layout instances given a layout generator class and parameters. Finally, the layout database uses the `BagProject` instance to create the generated layouts in Virtuoso. The generated layout will also contain the corresponding schematic parameters, which can be passed to the schematic generator later. BAG TemplateDB Creation Code```pythondef make_tdb(prj, specs, impl_lib): grid_specs = specs['routing_grid'] layers = grid_specs['layers'] spaces = grid_specs['spaces'] widths = grid_specs['widths'] bot_dir = grid_specs['bot_dir'] create RoutingGrid object routing_grid = RoutingGrid(prj.tech_info, layers, spaces, widths, bot_dir) create layout template database tdb = TemplateDB('template_libs.def', routing_grid, impl_lib, use_cybagoa=True) return tdb```For reference, the above code snippet shows how the layout database object is created. A `RoutingGrid` object is created from the routing grid parameters specified in the specification file, which is then used to construct the `TemplateDB` layout database object. Routing ExampleThe code box below defines a `RoutingDemo` layout generator class, which is simply a layout containing only wires, vias, and pins. All layout creation happens in the `draw_layout()` function; the functions/attributes of interest are:* `add_wires()`: Create one or more physical wires, with the given options.* `connect_to_tracks()`: Connect two `WireArray`s on adjacent layers by extending them to their intersection and adding vias.* `connect_wires()`: Connect multiple `WireArray`s on the same layer together. * `add_pin()`: Add a pin object on top of a `WireArray` object.* `self.size`: A 3-tuple describing the size of this layout cell.* `self.bound_box`: A `BBox` object representing the bounding box of this layout cell, computed from `self.size`.To see the layout in action, evaluate the code box below by selecting the cell and pressing Ctrl+Enter. A `DEMO_ROUTING` library will be created in Virtuoso with a single `ROUTING_DEMO` layout cell in it. Feel free to play around with the numbers and re-evaluate the cell, and the layout in Virtuoso should update.Exercise 1: There are currently 3 wires labeled "pin3". 
Change that to 4 wires by adding an extra wire with the same pitch on the right.Exercise 2: Connect all wires labeled "pin3" to the wire labeled "pin1". Hint: use `connect_to_tracks()` and `connect_wires()`.
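Before running the example, here is a rough numerical sketch of the two track formulas above, $S = sp + N \cdot p$ and $W = w + (N - 1)\cdot p$, using the metal-4 numbers from the YAML grid (`w = 0.06`, `sp = 0.06`, so the pitch is `p = 0.12`). The helper functions are purely illustrative and are not part of the XBase/BAG API.
```python
# illustrative helpers only: not part of the XBase/BAG API
w, sp = 0.06, 0.06          # unit wire width and spacing on metal 4 (from the YAML above)
p = w + sp                  # track pitch

def wire_spacing(n_tracks_between):
    # S = sp + N * p: edge-to-edge spacing when N empty tracks sit in between
    return sp + n_tracks_between * p

def wire_width(n_tracks_used):
    # W = w + (N - 1) * p: width of a wire occupying N adjacent tracks
    return w + (n_tracks_used - 1) * p

print(round(wire_spacing(0), 2), round(wire_spacing(2), 2))  # 0.06 0.3
print(round(wire_width(1), 2), round(wire_width(2), 2))      # 0.06 0.18
```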
###Code
from bag.layout.routing import TrackID
from bag.layout.template import TemplateBase
class RoutingDemo(TemplateBase):
def __init__(self, temp_db, lib_name, params, used_names, **kwargs):
super(RoutingDemo, self).__init__(temp_db, lib_name, params, used_names, **kwargs)
@classmethod
def get_params_info(cls):
return {}
def draw_layout(self):
# Metal 4 is horizontal, Metal 5 is vertical
hm_layer = 4
vm_layer = 5
# add a horizontal wire on track 0, from X=0.1 to X=0.3
warr1 = self.add_wires(hm_layer, 0, 0.1, 0.3)
# print WireArray object
print(warr1)
# print lower, middle, and upper coordinate of wire.
print(warr1.lower, warr1.middle, warr1.upper)
# print TrackID object associated with WireArray
print(warr1.track_id)
# add a horizontal wire on track 1, from X=0.1 to X=0.3,
# coordinates specified in resolution units
warr2 = self.add_wires(hm_layer, 1, 100, 300, unit_mode=True)
# add another wire on track 1, from X=0.35 to X=0.45
warr2_ext = self.add_wires(hm_layer, 1, 350, 450, unit_mode=True)
# connect wires on the same track, in this case warr2 and warr2_ext
self.connect_wires([warr2, warr2_ext])
# add a horizontal wire on track 2.5, from X=0.2 to X=0.4
self.add_wires(hm_layer, 2.5, 200, 400, unit_mode=True)
# add a horizontal wire on track 4, from X=0.2 to X=0.4, with 2 tracks wide
warr3 = self.add_wires(hm_layer, 4, 200, 400, width=2, unit_mode=True)
# add 3 parallel vertical wires starting on track 6 and use every other track
warr4 = self.add_wires(vm_layer, 6, 100, 400, num=3, pitch=2, unit_mode=True)
print(warr4)
# create a TrackID object representing a vertical track
tid = TrackID(vm_layer, 3, width=2, num=1, pitch=0)
# connect horizontal wires to the vertical track
warr5 = self.connect_to_tracks([warr1, warr3], tid)
print(warr5)
# add a pin on a WireArray
self.add_pin('pin1', warr1)
# add a pin, but make label different than net name. Useful for LVS connect
self.add_pin('pin2', warr2, label='pin2:')
# add_pin also works for WireArray representing multiple wires
self.add_pin('pin3', warr4)
# add a pin (so it is visible in BAG), but do not create the actual layout
# in OA. This is useful for hiding pins on lower levels of hierarchy.
self.add_pin('pin4', warr3, show=False)
# set the size of this template
top_layer = vm_layer
num_h_tracks = 6
num_v_tracks = 11
# size is 3-element tuple of top layer ID, number of top
# vertical tracks, and number of top horizontal tracks
self.size = top_layer, num_v_tracks, num_h_tracks
# print bounding box of this template
print(self.bound_box)
        # add a rectangle covering the bounding box to visualize it in the layout
self.add_rect('M1', self.bound_box)
import os
from pathlib import Path
# import bag package
from bag.core import BagProject
from bag.io.file import read_yaml
# import BAG demo Python modules
import xbase_demo.core as demo_core
# load circuit specifications from file
spec_fname = Path(os.environ['BAG_WORK_DIR']) / Path('specs_demo/demo.yaml')
top_specs = read_yaml(spec_fname)
# obtain BagProject instance
local_dict = locals()
bprj = local_dict.get('bprj', BagProject())
demo_core.routing_demo(bprj, top_specs, RoutingDemo)
###Output
_____no_output_____ |
GoogleColab/.Math/Wilcoxon Sign-Rank Test/ParticlesCollider/muons.ipynb | ###Markdown
import
###Code
import numpy as np
import matplotlib.pylab as plt
!pip install git+https://github.com/mattbellis/h5hep.git
import h5hep
!pip install git+https://github.com/mattbellis/particle_physics_simplified.git
import pps_tools as pps
# Download the dataset
# from
# https://github.com/particle-physics-playground/playground/tree/master/data
# ~~~~~~~~~~~~~~~~ #
pps.download_from_drive('dimuons_100k.hdf5')
infile = 'data/dimuons_100k.hdf5'
###Output
_____no_output_____
###Markdown
build collisions
###Code
collisions = pps.get_collisions(infile,experiment='CMS',verbose=False)
print(len(collisions), " collisions") # This line is optional, and simply tells you how many events are in the file.
second_collision = collisions[1] # the second event
print("Second event: ",second_collision)
all_muons = second_collision['muons'] # all of the muons in the second event
print("All muons: ",all_muons)
first_muon = all_muons[0] # the first muon in the second event
print("First muon: ",first_muon)
muon_energy = first_muon['e'] # the energy of the first muon
print("First muon's energy: ",muon_energy)
energies = []
for collision in collisions: # loops over all the events in the file
muons = collision['muons'] # gets the list of all muons in the event
for muon in muons: # loops over each muon in the current event
e = muon['e'] # gets the energy of the muon
energies.append(e) # puts the energy in a list
#plot first energies in a histogram
plt.hist(energies,bins=50,range=(0,100));
alldata = pps.get_all_data(infile,verbose=False)
nentries = pps.get_number_of_entries(alldata)
print("# entries: ",nentries) # This optional line tells you how many events are in the file
for entry in range(nentries): # This range will loop over ALL of the events
collision = pps.get_collision(alldata,entry_number=entry,experiment='CMS')
for entry in range(0,int(nentries/2)): # This range will loop over the first half of the events
collision = pps.get_collision(alldata,entry_number=entry,experiment='CMS')
for entry in range(int(nentries/2),nentries): # This range will loop over the second half of the events
collision = pps.get_collision(alldata,entry_number=entry,experiment='CMS')
#second energies in a histogram
energies = []
for event in range(0,int(nentries/3)): # Loops over first 3rd of all events
collision = pps.get_collision(alldata,entry_number=event,experiment='CMS') # organizes the data so you can interface with it
    muons = collision['muons'] # gets the list of all muons in the current event
    for muon in muons: # loops over all muons in the event
        e = muon['e'] # gets the energy of the muon
energies.append(e) # adds the energy to a list
plt.hist(energies,bins=50,range=(0,100));
###Output
_____no_output_____ |
v0.12.2/examples/notebooks/generated/glm.ipynb | ###Markdown
Generalized Linear Models
###Code
%matplotlib inline
import numpy as np
import statsmodels.api as sm
from scipy import stats
from matplotlib import pyplot as plt
plt.rc("figure", figsize=(16,8))
plt.rc("font", size=14)
###Output
_____no_output_____
###Markdown
GLM: Binomial response data Load Star98 data In this example, we use the Star98 dataset which was taken with permission from Jeff Gill (2000) Generalized linear models: A unified approach. Codebook information can be obtained by typing:
###Code
print(sm.datasets.star98.NOTE)
###Output
::
Number of Observations - 303 (counties in California).
Number of Variables - 13 and 8 interaction terms.
Definition of variables names::
NABOVE - Total number of students above the national median for the
math section.
NBELOW - Total number of students below the national median for the
math section.
LOWINC - Percentage of low income students
PERASIAN - Percentage of Asian student
PERBLACK - Percentage of black students
PERHISP - Percentage of Hispanic students
PERMINTE - Percentage of minority teachers
AVYRSEXP - Sum of teachers' years in educational service divided by the
number of teachers.
AVSALK - Total salary budget including benefits divided by the number
of full-time teachers (in thousands)
PERSPENK - Per-pupil spending (in thousands)
PTRATIO - Pupil-teacher ratio.
PCTAF - Percentage of students taking UC/CSU prep courses
PCTCHRT - Percentage of charter schools
PCTYRRND - Percentage of year-round schools
The below variables are interaction terms of the variables defined
above.
PERMINTE_AVYRSEXP
PEMINTE_AVSAL
AVYRSEXP_AVSAL
PERSPEN_PTRATIO
PERSPEN_PCTAF
PTRATIO_PCTAF
PERMINTE_AVTRSEXP_AVSAL
PERSPEN_PTRATIO_PCTAF
###Markdown
Load the data and add a constant to the exogenous (independent) variables:
###Code
data = sm.datasets.star98.load(as_pandas=False)
data.exog = sm.add_constant(data.exog, prepend=False)
###Output
_____no_output_____
###Markdown
The dependent variable is N by 2 (Success: NABOVE, Failure: NBELOW):
###Code
print(data.endog[:5,:])
###Output
[[452. 355.]
[144. 40.]
[337. 234.]
[395. 178.]
[ 8. 57.]]
###Markdown
The independent variables include all the other variables described above, as well as the interaction terms:
###Code
print(data.exog[:2,:])
###Output
[[3.43973000e+01 2.32993000e+01 1.42352800e+01 1.14111200e+01
1.59183700e+01 1.47064600e+01 5.91573200e+01 4.44520700e+00
2.17102500e+01 5.70327600e+01 0.00000000e+00 2.22222200e+01
2.34102872e+02 9.41688110e+02 8.69994800e+02 9.65065600e+01
2.53522420e+02 1.23819550e+03 1.38488985e+04 5.50403520e+03
1.00000000e+00]
[1.73650700e+01 2.93283800e+01 8.23489700e+00 9.31488400e+00
1.36363600e+01 1.60832400e+01 5.95039700e+01 5.26759800e+00
2.04427800e+01 6.46226400e+01 0.00000000e+00 0.00000000e+00
2.19316851e+02 8.11417560e+02 9.57016600e+02 1.07684350e+02
3.40406090e+02 1.32106640e+03 1.30502233e+04 6.95884680e+03
1.00000000e+00]]
###Markdown
Fit and summary
###Code
glm_binom = sm.GLM(data.endog, data.exog, family=sm.families.Binomial())
res = glm_binom.fit()
print(res.summary())
###Output
Generalized Linear Model Regression Results
==============================================================================
Dep. Variable: ['y1', 'y2'] No. Observations: 303
Model: GLM Df Residuals: 282
Model Family: Binomial Df Model: 20
Link Function: logit Scale: 1.0000
Method: IRLS Log-Likelihood: -2998.6
Date: Tue, 02 Feb 2021 Deviance: 4078.8
Time: 06:54:02 Pearson chi2: 4.05e+03
No. Iterations: 5
Covariance Type: nonrobust
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -0.0168 0.000 -38.749 0.000 -0.018 -0.016
x2 0.0099 0.001 16.505 0.000 0.009 0.011
x3 -0.0187 0.001 -25.182 0.000 -0.020 -0.017
x4 -0.0142 0.000 -32.818 0.000 -0.015 -0.013
x5 0.2545 0.030 8.498 0.000 0.196 0.313
x6 0.2407 0.057 4.212 0.000 0.129 0.353
x7 0.0804 0.014 5.775 0.000 0.053 0.108
x8 -1.9522 0.317 -6.162 0.000 -2.573 -1.331
x9 -0.3341 0.061 -5.453 0.000 -0.454 -0.214
x10 -0.1690 0.033 -5.169 0.000 -0.233 -0.105
x11 0.0049 0.001 3.921 0.000 0.002 0.007
x12 -0.0036 0.000 -15.878 0.000 -0.004 -0.003
x13 -0.0141 0.002 -7.391 0.000 -0.018 -0.010
x14 -0.0040 0.000 -8.450 0.000 -0.005 -0.003
x15 -0.0039 0.001 -4.059 0.000 -0.006 -0.002
x16 0.0917 0.015 6.321 0.000 0.063 0.120
x17 0.0490 0.007 6.574 0.000 0.034 0.064
x18 0.0080 0.001 5.362 0.000 0.005 0.011
x19 0.0002 2.99e-05 7.428 0.000 0.000 0.000
x20 -0.0022 0.000 -6.445 0.000 -0.003 -0.002
const 2.9589 1.547 1.913 0.056 -0.073 5.990
==============================================================================
###Markdown
Quantities of interest
###Code
print('Total number of trials:', data.endog[0].sum())
print('Parameters: ', res.params)
print('T-values: ', res.tvalues)
###Output
Total number of trials: 807.0
Parameters: [-1.68150366e-02 9.92547661e-03 -1.87242148e-02 -1.42385609e-02
2.54487173e-01 2.40693664e-01 8.04086739e-02 -1.95216050e+00
-3.34086475e-01 -1.69022168e-01 4.91670212e-03 -3.57996435e-03
-1.40765648e-02 -4.00499176e-03 -3.90639579e-03 9.17143006e-02
4.89898381e-02 8.04073890e-03 2.22009503e-04 -2.24924861e-03
2.95887793e+00]
T-values: [-38.74908321 16.50473627 -25.1821894 -32.81791308 8.49827113
4.21247925 5.7749976 -6.16191078 -5.45321673 -5.16865445
3.92119964 -15.87825999 -7.39093058 -8.44963886 -4.05916246
6.3210987 6.57434662 5.36229044 7.42806363 -6.44513698
1.91301155]
###Markdown
First differences: We hold all explanatory variables constant at their means and manipulate the percentage of low income households to assess its impact on the response variables:
###Code
means = data.exog.mean(axis=0)
means25 = means.copy()
means25[0] = stats.scoreatpercentile(data.exog[:,0], 25)
means75 = means.copy()
means75[0] = lowinc_75per = stats.scoreatpercentile(data.exog[:,0], 75)
resp_25 = res.predict(means25)
resp_75 = res.predict(means75)
diff = resp_75 - resp_25
###Output
_____no_output_____
###Markdown
The interquartile first difference for the percentage of low income households in a school district is:
###Code
print("%2.4f%%" % (diff*100))
###Output
-11.8753%
###Markdown
Plots We extract information that will be used to draw some interesting plots:
###Code
nobs = res.nobs
y = data.endog[:,0]/data.endog.sum(1)
yhat = res.mu
###Output
_____no_output_____
###Markdown
Plot yhat vs y:
###Code
from statsmodels.graphics.api import abline_plot
fig, ax = plt.subplots()
ax.scatter(yhat, y)
line_fit = sm.OLS(y, sm.add_constant(yhat, prepend=True)).fit()
abline_plot(model_results=line_fit, ax=ax)
ax.set_title('Model Fit Plot')
ax.set_ylabel('Observed values')
ax.set_xlabel('Fitted values');
###Output
_____no_output_____
###Markdown
Plot yhat vs. Pearson residuals:
###Code
fig, ax = plt.subplots()
ax.scatter(yhat, res.resid_pearson)
ax.hlines(0, 0, 1)
ax.set_xlim(0, 1)
ax.set_title('Residual Dependence Plot')
ax.set_ylabel('Pearson Residuals')
ax.set_xlabel('Fitted values')
###Output
_____no_output_____
###Markdown
Histogram of standardized deviance residuals:
###Code
from scipy import stats
fig, ax = plt.subplots()
resid = res.resid_deviance.copy()
resid_std = stats.zscore(resid)
ax.hist(resid_std, bins=25)
ax.set_title('Histogram of standardized deviance residuals');
###Output
_____no_output_____
###Markdown
QQ Plot of Deviance Residuals:
###Code
from statsmodels import graphics
graphics.gofplots.qqplot(resid, line='r')
###Output
_____no_output_____
###Markdown
GLM: Gamma for proportional count response Load Scottish Parliament Voting data In the example above, we printed the ``NOTE`` attribute to learn about the Star98 dataset. statsmodels datasets ships with other useful information. For example:
###Code
print(sm.datasets.scotland.DESCRLONG)
###Output
This data is based on the example in Gill and describes the proportion of
voters who voted Yes to grant the Scottish Parliament taxation powers.
The data are divided into 32 council districts. This example's explanatory
variables include the amount of council tax collected in pounds sterling as
of April 1997 per two adults before adjustments, the female percentage of
total claims for unemployment benefits as of January, 1998, the standardized
mortality rate (UK is 100), the percentage of labor force participation,
regional GDP, the percentage of children aged 5 to 15, and an interaction term
between female unemployment and the council tax.
The original source files and variable information are included in
/scotland/src/
###Markdown
Load the data and add a constant to the exogenous variables:
###Code
data2 = sm.datasets.scotland.load()
data2.exog = sm.add_constant(data2.exog, prepend=False)
print(data2.exog[:5,:])
print(data2.endog[:5])
###Output
[[7.12000e+02 2.10000e+01 1.05000e+02 8.24000e+01 1.35660e+04 1.23000e+01
1.49520e+04 1.00000e+00]
[6.43000e+02 2.65000e+01 9.70000e+01 8.02000e+01 1.35660e+04 1.53000e+01
1.70395e+04 1.00000e+00]
[6.79000e+02 2.83000e+01 1.13000e+02 8.63000e+01 9.61100e+03 1.39000e+01
1.92157e+04 1.00000e+00]
[8.01000e+02 2.71000e+01 1.09000e+02 8.04000e+01 9.48300e+03 1.36000e+01
2.17071e+04 1.00000e+00]
[7.53000e+02 2.20000e+01 1.15000e+02 6.47000e+01 9.26500e+03 1.46000e+01
1.65660e+04 1.00000e+00]]
[60.3 52.3 53.4 57. 68.7]
###Markdown
Model Fit and summary
###Code
glm_gamma = sm.GLM(data2.endog, data2.exog, family=sm.families.Gamma(sm.families.links.log()))
glm_results = glm_gamma.fit()
print(glm_results.summary())
###Output
Generalized Linear Model Regression Results
==============================================================================
Dep. Variable: y No. Observations: 32
Model: GLM Df Residuals: 24
Model Family: Gamma Df Model: 7
Link Function: log Scale: 0.0035927
Method: IRLS Log-Likelihood: -83.110
Date: Tue, 02 Feb 2021 Deviance: 0.087988
Time: 06:54:04 Pearson chi2: 0.0862
No. Iterations: 7
Covariance Type: nonrobust
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -0.0024 0.001 -2.466 0.014 -0.004 -0.000
x2 -0.1005 0.031 -3.269 0.001 -0.161 -0.040
x3 0.0048 0.002 2.946 0.003 0.002 0.008
x4 -0.0067 0.003 -2.534 0.011 -0.012 -0.002
x5 8.173e-06 7.19e-06 1.136 0.256 -5.93e-06 2.23e-05
x6 0.0298 0.015 2.009 0.045 0.001 0.059
x7 0.0001 4.33e-05 2.724 0.006 3.31e-05 0.000
const 5.6581 0.680 8.318 0.000 4.325 6.991
==============================================================================
###Markdown
GLM: Gaussian distribution with a noncanonical link Artificial data
###Code
nobs2 = 100
x = np.arange(nobs2)
np.random.seed(54321)
X = np.column_stack((x,x**2))
X = sm.add_constant(X, prepend=False)
lny = np.exp(-(.03*x + .0001*x**2 - 1.0)) + .001 * np.random.rand(nobs2)
###Output
_____no_output_____
###Markdown
Fit and summary (artificial data)
###Code
gauss_log = sm.GLM(lny, X, family=sm.families.Gaussian(sm.families.links.log()))
gauss_log_results = gauss_log.fit()
print(gauss_log_results.summary())
###Output
Generalized Linear Model Regression Results
==============================================================================
Dep. Variable: y No. Observations: 100
Model: GLM Df Residuals: 97
Model Family: Gaussian Df Model: 2
Link Function: log Scale: 1.0531e-07
Method: IRLS Log-Likelihood: 662.92
Date: Tue, 02 Feb 2021 Deviance: 1.0215e-05
Time: 06:54:04 Pearson chi2: 1.02e-05
No. Iterations: 7
Covariance Type: nonrobust
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -0.0300 5.6e-06 -5361.316 0.000 -0.030 -0.030
x2 -9.939e-05 1.05e-07 -951.091 0.000 -9.96e-05 -9.92e-05
const 1.0003 5.39e-05 1.86e+04 0.000 1.000 1.000
==============================================================================
|
notebooks/1.01_deep_particle_crossing_locations.ipynb | ###Markdown
1.01 Deep Particle Crossing Locations---Author: Riley X. BradyDate: 11/18/2020This notebook pulls in the 19,002 particles that have been identified as those that last cross 1000 m in the ACC (S of 45S and outside of the annual sea ice zone) and finds the x, y location at which they last upwell across 1000 m. These locations are used in subsequent notebooks for filtering and visualization.
###Code
%load_ext lab_black
%load_ext autoreload
%autoreload 2
import numpy as np
import xarray as xr
from dask.distributed import Client
print(f"xarray: {xr.__version__}")
print(f"numpy: {np.__version__}")
# This is my TCP client from the `launch_cluster` notebook. I use it
# for distributed computing with `dask` on NCAR's machine, Casper.
client = Client("tcp://...")
# Load in the `zarr` file, which is pre-chunked and already has been
# filtered from the original 1,000,000 particles to the 19,002 that
# upwell last across 1000 m S of 45S and outside of the annual sea ice
# edge.
filepath = "../data/southern_ocean_deep_upwelling_particles.zarr/"
ds = xr.open_zarr(filepath, consolidated=True)
def _compute_idx_of_last_1000m_crossing(z):
"""Find index of final time particle upwells across 1000 m.
z : zLevelParticle
"""
currentDepth = z
previousDepth = np.roll(z, 1)
previousDepth[0] = 999 # So we're not dealing with a nan here.
cond = (currentDepth >= -1000) & (previousDepth < -1000)
idx = (
len(cond) - np.flip(cond).argmax() - 1
) # Finds last location that condition is true.
return idx
def compute_xy_of_last_crossing(x, y, z):
"""Convert crossing point into x, y coordinates.
x : lonParticle (radians)
y : latParticle (radians)
z : zLevelParticle (m)
"""
idx = _compute_idx_of_last_1000m_crossing(z)
lon = np.rad2deg(x[idx])
lat = np.rad2deg(y[idx])
return np.array([lon, lat])
ds = ds.chunk({"time": -1, "nParticles": 6000})
x = ds.lonParticle.persist()
y = ds.latParticle.persist()
z = ds.zLevelParticle.persist()
result = xr.apply_ufunc(
compute_xy_of_last_crossing,
x,
y,
z,
input_core_dims=[["time"], ["time"], ["time"]],
vectorize=True,
dask="parallelized",
output_core_dims=[["coordinate"]],
output_dtypes=[float],
dask_gufunc_kwargs={"output_sizes": {"coordinate": 2}},
)
%time crossings = result.compute()
crossings = crossings.assign_coords(coordinate=["x", "y"])
ds = xr.Dataset()
ds["lon_crossing"] = crossings.sel(coordinate="x")
ds["lat_crossing"] = crossings.sel(coordinate="y")
ds.attrs[
"description"
] = "x/y locations of *final* 1000m crossing for particles that occur < 45S; reach 200 m following this crossing; and happen outside of the 75% annual sea ice zone."
ds.to_netcdf("../data/postproc/1000m.crossing.locations.nc")
###Output
_____no_output_____ |
bayesian_decision_tree/naive_bayesian_sklearn.ipynb | ###Markdown
Naive Bayes with Sklearn Use Naive Bayes with TF-IDF to conduct sentiment analysis on movie reviews
###Code
import pandas as pd
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
1. Define Training/Test Set
###Code
#load movie reviews
reviews = pd.read_csv('ratings_train.txt', delimiter='\t')
reviews.head()
#load test set
test_reviews = pd.read_csv('ratings_test.txt', delimiter='\t')
test_reviews.head()
###Output
_____no_output_____
###Markdown
2. Exploratory Data Analysis
###Code
#size of each data
print('Train set : ', reviews.shape)
print('Test set : ', test_reviews.shape)
###Output
Train set : (150000, 3)
Test set : (50000, 3)
###Markdown
* **150,000** training set and **50,000** test set* 0 is **negative** and 1 is **positive**
###Code
#compare positive vs. negative reviews in training dataset
reviews.label.value_counts()
#compare positive vs. negative reviews in test dataset
test_reviews.label.value_counts()
###Output
_____no_output_____
###Markdown
**Let's explore the distribution of each review's sentence length!**
###Code
import seaborn as sns
reviews['length'] = reviews['document'].apply(lambda x: len(str(x)))
reviews.head()
reviews.sort_values('length', ascending=False).head()
###Output
_____no_output_____
###Markdown
**Let's draw histogram**
###Code
#overall training dataset sentence length
sns.distplot(reviews['length'], kde=False)
#positive training dataset sentence length
sns.distplot(reviews[reviews['label'] == 1]['length'], kde=False)
#negative training dataset sentence length
sns.distplot(reviews[reviews['label'] == 0]['length'], kde=False)
###Output
_____no_output_____
###Markdown
What we can conclude:* There is no significant difference between lengths of positive/negative reviews
###Code
reviews.length.describe()
###Output
_____no_output_____
###Markdown
3. Text Preprocessing
###Code
# korean nlp
import konlpy
from konlpy.tag import Okt
okt = Okt()
def parse(s):
try:
return okt.nouns(s)
except:
return []
reviews['parsed_doc'] = reviews.document.apply(parse)
reviews.head()
###Output
_____no_output_____
###Markdown
4. Vectorization (Bag of Words)* Vectorize bag-of-words counts for each sentence* Use term frequency for every word in the bag of words* If the corpus is **cat, love, i, do, like, him, you**, then the sentence "I love you only you" will be **(0, 1, 1, 0, 0, 0, 2)**
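A minimal hand-rolled sketch of that toy example in plain Python (for illustration only; the real vectorization below uses scikit-learn's `CountVectorizer` with the konlpy tokenizer):
```python
# toy bag-of-words example from the bullet above
corpus = ['cat', 'love', 'i', 'do', 'like', 'him', 'you']
sentence = "I love you only you"

tokens = sentence.lower().split()
vector = [tokens.count(word) for word in corpus]
print(vector)  # [0, 1, 1, 0, 0, 0, 2]
```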
###Code
from sklearn.feature_extraction.text import CountVectorizer
#create a bag of words transformer
bow_transformer = CountVectorizer(analyzer=parse).fit(reviews.document)
#length of corpus
len(bow_transformer.vocabulary_)
###Output
_____no_output_____
###Markdown
Length of the corpus is 38,648, which means each vectorized sentence will be of the same length
###Code
sample = reviews.document.iloc[3]
sample
sample_bow = bow_transformer.transform([sample])
print(sample_bow)
print(bow_transformer.get_feature_names()[2509])
print(bow_transformer.get_feature_names()[2630])
print(bow_transformer.get_feature_names()[26019])
#this will "transform" each sentences in terms of the vectorized bag of words above
reviews_bow = bow_transformer.transform(reviews.document)
#we have 150,000 training dataset, which corresponds to the below
# sparse matrix
reviews_bow.shape
#occurence of non-zero values in the sparse matrix
reviews_bow.nnz
#calculate sparsity
sparsity = 100 * reviews_bow.nnz / (reviews_bow.shape[0] * reviews_bow.shape[1])
print(round(sparsity, 3))
###Output
0.015
###Markdown
**0.015%** of sparse matrix consist of zeros 5. Normalization of Vectors (TF-IDF) 5.1. TF (Term Frequency)Term Frequency, which measures how frequently a term occurs in a document. Since every document is different in length, it is possible that a term would appear much more times in long documents than shorter ones. Thus, the term frequency is often divided by the document length as a way of normalization:$TF(t) = \large \frac{\text{The number of times term t appears in a document}}{\text{The total number of terms in the document}}$ 5.2. IDF (Inverse Document Frequency)Inverse Document Frequency, which measures how important a term is. While computing TF, all terms are considered equally important. However it is known that certain terms, such as "is", "of", and "that", may appear a lot of times but have little importance. Thus we need to weigh down the frequent terms while scale up the rare ones, by computing the following:$IDF(t) = log_{e}\large\frac{\text{The total number of documents}}{\text{The number of documents with term t in it}}$(예시) 100개의 단어를 포함하고 있는 document를 생각해보자. "cat"이라는 단어가 그 document에 3번 나온다고 가정하자. 10,000,000의 문서 중에서 1,000의 문서에서만 "cat"이 나온다고 가정하자.* TF : 3/100=0.03* IDF : loge(10,000,000/1,000)=4* TF-IDF : 0.03×4=0.12
###Code
from sklearn.feature_extraction.text import TfidfTransformer
tfidf_transformer = TfidfTransformer().fit(reviews_bow)
sample_tfidf = tfidf_transformer.transform(sample_bow)
print(sample_tfidf)
print(tfidf_transformer.idf_[bow_transformer.vocabulary_['교도소']])
print(tfidf_transformer.idf_[bow_transformer.vocabulary_['이야기']])
reviews_tfidf = tfidf_transformer.transform(reviews_bow)
reviews_tfidf.shape
###Output
_____no_output_____
###Markdown
6. Training
###Code
from sklearn.naive_bayes import MultinomialNB
#train our model
sentiment_detect_model = MultinomialNB().fit(reviews_tfidf, reviews['label'])
#check sample sentence
sentiment_detect_model.predict(sample_tfidf)[0]
#negative
reviews['label'].iloc[3]
# check accuracy of training data
from sklearn.metrics import accuracy_score
train_preds = sentiment_detect_model.predict(reviews_tfidf)
train_preds[:10]
train_targets = reviews['label'].values
accuracy_score(train_targets, train_preds)
###Output
_____no_output_____
###Markdown
7. Testing Model
###Code
#vectorize test set in terms of bag of words vertors
test_reviews_bow = bow_transformer.transform(test_reviews.document)
#apply tf-idf vectors to test data
test_reviews_tfidf = tfidf_transformer.transform(test_reviews_bow)
# predict test data
test_preds = sentiment_detect_model.predict(test_reviews_tfidf)
# true values from test data
test_targets = test_reviews['label'].values
test_targets
# accuracy
accuracy_score(test_targets, test_preds)
###Output
_____no_output_____ |
static-web/simple-html-web-scraping-with-beautiful-soup.ipynb | ###Markdown
Simple HTML Web Scraping with Beautiful Soup Environment setup Libraries pip install BeautifulSoup4 pip install pandas Web scraping
###Code
# import libraries
from bs4 import BeautifulSoup
import urllib.request
import csv
# specify the url
# I will obtain data from University of Waterloo's CS courses
# prerequisite chart.
urlpage = 'https://cs.uwaterloo.ca/current-undergraduate-students/majors/prerequisite-chain-computer-science-major-courses/cs-prerequisite-chart'
# query the website to return the html and store
page_html = urllib.request.urlopen(urlpage)
# parse html with beautiful soup and store
soup = BeautifulSoup(page_html, 'html.parser')
# test
# If the output is empty or is an error,
# further debugging is required.
soup
# find results 'table'
table = soup.find('tbody')
results = table.findAll('tr')
print('Number of results', len(results))
results
rows = []
rows.append(['Course', 'Title', 'Prereqs', 'Coreqs', 'Successors', 'Terms offered', 'Open to non-CS majors'])
# loop over results
for res in results:
# get course name written between th tags
# remove unwanted spaces to make course name uniform
course = res.find('th').get_text()
course = course.strip('\n').replace(" ", "")
# test
# print(course)
# obtain other data by columns
data = res.find_all('td')
# write columns to variables
title = data[0].getText()
prereqs = data[1].getText()
coreqs = data[2].getText()
succ = data[3].getText()
terms = data[4].getText()
non_cs = data[5].getText()
# remove newline
prereqs = prereqs.strip('\n')
coreqs = coreqs.strip('\n')
succ = succ.strip('\n')
# test
# print('--------------------')
# print(title)
# print('--------------------')
# print(prereqs)
# print('--------------------')
# print(coreqs)
# print('--------------------')
# print(succ)
# print('--------------------')
# print(terms)
# print('--------------------')
# print(non_cs)
# print('====================')
rows.append([course, title, prereqs, coreqs, succ, terms, non_cs])
print(rows)
# create CSV and write to output file
# w : writing
with open('courses.csv','w', newline='') as f_output:
csv_output = csv.writer(f_output)
csv_output.writerows(rows)
###Output
_____no_output_____ |
jupyter/notebooks/MLManager Clearsence Demo.ipynb | ###Markdown
Welcome to Splice Machine MLManager, the data platform for intelligent applications. Why use Splice Machine ML? Splice Machine ML isn't just a machine learning platform, it is a complete machine learning lifecycle management solution, giving you total control of your models, from retrieving data to scalable deployment. Our platform runs directly on Apache Spark, allowing you to complete massive jobs in parallel. Our native PySpliceContext lets you directly access the data in your database and convert it to a Spark DataFrame, with no ETL. MLFlow is integrated directly into all Splice Machine clusters, allowing you to keep track of your entire data science workflow. After you have found the best model for your task, you can easily deploy it live to AWS SageMaker or AzureML to make predictions in real time. MLFlow does not force a standard workflow; instead it allows teams to easily develop their own methodology that fits their teams and problems. In this demo we will guide you through the entire MLManager life cycle. Your friends at Splice Machine. How does this work? Jupyter: Jupyter notebooks are a simple, easy and intuitive way to do data science, directly in your browser. Any Spark computations you run inside of the notebook are executed right on your cluster's Spark executors. Jupyter notebooks also make machine learning easier. By using Jupyter magics, you can run different languages inside the same notebook. The language you want to run is signified by a %% sign followed by a magic at the top of a cell. For example, one of the interpreters you will become very familiar with while using our platform is the %%sql magic. In the %%sql magic you can run standard SQL queries and visualize the results in Jupyter's built-in visualization tools. This entire demo was written inside a Jupyter notebook. MLFlow: As a data scientist constantly creating new models and testing new features, it is necessary to effectively track and manage those different ML runs. MLFlow allows you to track entire experiments and individual run parameters and metrics. The way you organize your flow is unique to you, and the intuitive Python API allows you to organize your development process and run with it. Ready? Let's get started. Problem statement: Can we predict the likelihood of fraudulent transactions after training on historical actuals? We're going to find out using Splice Machine's MLManager
###Code
# !pip install seaborn statsmodels
from utils import *
hide_toggle()
!wget https://splice-releases.s3.amazonaws.com/jdbc-driver/db-client-2.7.0.1815.jar
%%sql
%classpath add jar db-client-2.7.0.1815.jar
%defaultDatasource jdbc:splice://host.docker.internal:1527/splicedb;user=splice;password=admin
%%sql
create schema cc_fraud;
set schema cc_fraud;
--drop table if exists cc_fraud.cc_fraud_data;
create table cc_fraud.cc_fraud_data (
time_offset integer,
v1 double,
v2 double,
v3 double,
v4 double,
v5 double,
v6 double,
v7 double,
v8 double,
v9 double,
v10 double,
v11 double,
v12 double,
v13 double,
v14 double,
v15 double,
v16 double,
v17 double,
v18 double,
v19 double,
v20 double,
v21 double,
v22 double,
v23 double,
v24 double,
v25 double,
v26 double,
v27 double,
v28 double,
amount decimal(10,2),
class_result int
);
call SYSCS_UTIL.IMPORT_DATA (
'cc_fraud',
'cc_fraud_data',
null,
's3a://splice-demo/kaggle-fraud-data/creditcard.csv',
',',
null,
null,
null,
null,
-1,
's3a://splice-demo/kaggle-fraud-data/bad',
null,
null);
%%sql
select top 10 * from cc_fraud.cc_fraud_data
%%sql
select class_result, count(*) from cc_fraud.cc_fraud_data group by class_result
%%sql
explain select class_result, count(*) from cc_fraud.cc_fraud_data group by class_result
###Output
_____no_output_____
###Markdown
Connecting to your database Now, let's establish a connection to your database using Python via our Native Spark Datasource. We will use the PySpliceContext to establish our direct connection: it allows us to do inserts, selects, upserts, updates and many more functions without serialization.
###Code
from pyspark.sql import SparkSession
# from splicemachine.spark.context import PySpliceContext
# Create our Spark Session
spark = SparkSession.builder.getOrCreate()
sc = spark.sparkContext
# Create our Native Database Connection
splice = PySpliceContext(spark)
###Output
_____no_output_____
###Markdown
Let's create our MLManager When you create an MLManager object, a tracking URL is returned to you. There is one tracking URL _per cluster_, so if you create another one in a new notebook, it will return the same tracking URL. This is useful because you can create multiple different experiments across all notebooks, and all will be tracked in the MLFlow UI.
###Code
import os
os.environ['MLFLOW_URL'] = 'mlflow:5001'
hide_toggle()
from splicemachine.ml.management import MLManager
manager = MLManager(splice)
###Output
Tracking Model Metadata on MLFlow Server @ http://mlflow:5001
###Markdown
Loading The Data Loading data into Splice Machine couldn't be easier, no matter the source. Because we connect directly to our database source, there is no ETL necessary. Let's import our data into a Spark DataFrame using our PySpliceContext. Now is also a good time to create our MLFlow Experiment, which we will call fraud_demo.
###Code
#create our MLFlow experiment
manager.create_experiment('fraud_demo')
df = splice.df("SELECT * FROM cc_fraud.cc_fraud_data")
df = df.withColumnRenamed('CLASS_RESULT', 'label')
display(df.limit(10).toPandas())
###Output
Experiment fraud_demo already exists... setting to active experiment
Active experiment has id 1
###Markdown
We can now see our experiment in the MLFlow UI at port 5001 Data investigation Before going further, it's important to look at the correlations between all of your features and each other as well as the label We can easily create a heatmap to compare all features against each other and the label
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from pyspark.sql.types import FloatType
for i in df.columns:
df = df.withColumn(i,df[i].cast(FloatType()))
pdf = df.limit(5000).toPandas()
correlations = pdf.corr()
correlations.style.set_precision(2)
plt.rcParams["figure.figsize"] = (8,12)
plt.matshow(correlations, cmap='coolwarm')
ticks = [i for i in range(len(correlations.columns))]
plt.xticks(ticks, correlations.columns)
plt.yticks(ticks, correlations.columns)
plt.title('Fraud Data correlation heatmap')
plt.show()
###Output
_____no_output_____
###Markdown
Ben's run Ben, our first Data Scientist, has an idea for the steps to build this model. He will create a run and log his name so as to keep track of what he did: manager.start_run() You can add tags to your run such as team, purpose, or anything else you'd like to use to track your runs. You can also set a run_name as a parameter. The user_id will automatically be added as the user that is signed into this notebook (currently that's me, Ben). If you navigate to the MLFlow port you will now see the fraud-demo experiment, but there is nothing in that experiment yet. Let's start our first run and track our progress.
###Code
#start our first MLFlow run
tags = {
'team': 'Clearsense',
'purpose': 'fraud r&d',
'attempt-date': '11/07/2019',
'attempt-number': '1'
}
manager.start_run(tags=tags)
###Output
_____no_output_____
###Markdown
Let's look at some of the attributes of this dataset:* Because we have so few fraud examples, we need to oversample our fraudulent transactions and undersample the non-fraud transactions (a rough sketch of this resampling idea is shown below)* We need to make sure the model isn't overfit and doesn't always predict non-fraud (due to the lack of fraud data), so we can't rely on accuracy alone* We want to pick a model that doesn't have a high overfitting rate Let's define our Pipeline. You can use Spark's Pipeline class to define a set of Transformers that set up your dataset for modeling. We'll then use MLManager to log our Pipeline stages.
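A rough sketch of that resampling idea in plain PySpark is below. This is an illustration only: the sampling fractions are made up, and the actual runs in this notebook use the custom `OverSampleCrossValidator` / `overSampler` helpers imported from `utils1`.
```python
from pyspark.sql import functions as F

# illustration only: made-up fractions, not the helper used later in this notebook
fraud = df.filter(F.col('label') == 1)
non_fraud = df.filter(F.col('label') == 0)

# undersample the majority class and oversample (with replacement) the minority class
balanced = (non_fraud.sample(withReplacement=False, fraction=0.1, seed=42)
            .union(fraud.sample(withReplacement=True, fraction=5.0, seed=42)))
balanced.groupBy('label').count().show()
```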
###Code
from pyspark.ml.feature import StandardScaler, VectorAssembler
from pyspark.ml import Pipeline,PipelineModel
from pyspark.ml.classification import RandomForestClassifier, MultilayerPerceptronClassifier
feature_cols = df.columns[:-1]
assembler = VectorAssembler(inputCols=feature_cols, outputCol='features')
scaler = StandardScaler(inputCol="features", outputCol='scaledFeatures')
rf = RandomForestClassifier()
stages = [assembler,scaler,rf]
mlpipe = Pipeline(stages=stages)
manager.log_pipeline_stages(mlpipe)
###Output
_____no_output_____
###Markdown
Model setupNow we can set up our modeling process. We will use our OverSampleCrossValidator to properly oversample our dataset for model building.While we do that, we'll add just a few lines of code to track all of our moves in MLFlow
###Code
from utils1 import OverSampleCrossValidator as OSCV
from pyspark.ml.tuning import ParamGridBuilder
from pyspark.ml.evaluation import BinaryClassificationEvaluator,MulticlassClassificationEvaluator
import pandas as pd
import numpy as np
# Define evaluation metrics
PRevaluator = BinaryClassificationEvaluator(metricName = 'areaUnderPR') # Because this is a needle in haystack problem
AUCevaluator = BinaryClassificationEvaluator(metricName = 'areaUnderROC')
ACCevaluator = MulticlassClassificationEvaluator(metricName="accuracy")
f1evaluator = MulticlassClassificationEvaluator(metricName="f1")
# Define hyperparameters to try
params = {rf.maxDepth: [5,15], \
rf.numTrees: [10,30], \
rf.minInfoGain: [0.0,2.0]}
paramGrid_stages = ParamGridBuilder()
for param in params:
paramGrid_stages.addGrid(param,params[param])
paramGrid = paramGrid_stages.build()
# Create the CrossValidator
fraud_cv = OSCV(estimator=mlpipe,
estimatorParamMaps=paramGrid,
evaluator=PRevaluator,
numFolds=3,
label = 'label',
seed = 1234,
parallelism = 3,
altEvaluators = [ACCevaluator, f1evaluator, AUCevaluator])
###Output
_____no_output_____
###Markdown
Run the CVNow we can run the CrossValidator and log the results to MLFlow
###Code
# the label column was already created above (CLASS_RESULT was renamed to 'label'), so no further renaming is needed
manager.start_timer('with_oversample')
fraud_cv_model, alt_metrics = fraud_cv.fit(df)
execution_time = manager.log_and_stop_timer()
print(f"--- {execution_time} seconds == {execution_time/60} minutes == {execution_time/60/60} hours")
# Grab metrics of best model
best_avg_prauc = max(fraud_cv_model.avgMetrics)
best_performing_model = np.argmax(fraud_cv_model.avgMetrics)
# metrics at the best performing model for this iteration
best_avg_acc = [alt_metrics[i][0] for i in range(len(alt_metrics))][best_performing_model]
best_avg_f1 = [alt_metrics[i][1] for i in range(len(alt_metrics))][best_performing_model]
best_avg_rocauc = [alt_metrics[i][2] for i in range(len(alt_metrics))][best_performing_model]
print(f"The Best average (Area under PR) for this grid search: {best_avg_prauc}")
print(f"The Best average (Accuracy) for this grid search: {best_avg_acc}")
print(f"The Best average (F1) for this grid search: {best_avg_f1}")
print(f"The Best average (Area under ROC) for this grid search: {best_avg_rocauc}")
evals = [('areaUnderPR',best_avg_prauc), ('Accuracy',best_avg_acc),('F1',best_avg_f1),('areaUnderROC',best_avg_rocauc)]
manager.log_metrics(evals)
# Get the best parameters
bestParamsCombination = {}
for stage in fraud_cv_model.bestModel.stages:
bestParams = stage.extractParamMap()
for param in params:
if param in bestParams:
bestParamsCombination[param] = bestParams[param]
#log the hyperparams
manager.log_params(list(bestParamsCombination.items()))
print("Best Param Combination according to f1 is: \n")
print(pd.DataFrame([(str(i.name),str(bestParamsCombination[i]))for i in bestParamsCombination], columns = ['Param','Value']))
# Feature importance of the Principal comp
importances = fraud_cv_model.bestModel.stages[-1].featureImportances.toArray()
top_5_idx = np.argsort(importances)[-5:]
top_5_values = [importances[i] for i in top_5_idx]
top_5_features = [feature_cols[i] for i in top_5_idx]
print("___________________________________")
importances = fraud_cv_model.bestModel.stages[-1].featureImportances.toArray()
print("Most Important Features are")
print(pd.DataFrame(zip(top_5_features,top_5_values), columns = ['Feature','Importance']).sort_values('Importance',ascending=False))
#Log feature importances
manager.log_params(list(zip(top_5_features, top_5_values)))
import utils1
import importlib
importlib.reload(utils1)
import random
from utils1 import overSampler
from splicemachine.ml.utilities import SpliceBinaryClassificationEvaluator
rf_depth = [5,10,20,30]
rf_trees = [8,12,18,26]
rf_subsampling_rate = [1.0,0.9,0.8]
oversample_rate = [0.4,0.7,1.0]
for i in range(1,5):
tags = {
'team': 'Clearsense',
'purpose': 'fraud r&d',
'attempt-date': '11/07/2019',
        'attempt-number': f'{i}'
}
manager.start_run(tags=tags)
#random variable choice
depth = random.choice(rf_depth)
trees = random.choice(rf_trees)
subsamp_rate = random.choice(rf_subsampling_rate)
ovrsmpl_rate = random.choice(oversample_rate)
#transformers
feature_cols = df.columns[:-1]
ovr = overSampler(label='label',ratio = ovrsmpl_rate, majorityLabel = 0, minorityLabel = 1, withReplacement = False)
assembler = VectorAssembler(inputCols=feature_cols, outputCol='features')
scaler = StandardScaler(inputCol="features", outputCol='scaledFeatures')
rf = RandomForestClassifier(maxDepth=depth, numTrees=trees, subsamplingRate=subsamp_rate)
#pipeline
stages = [ovr,assembler,scaler,rf]
mlpipe = Pipeline(stages=stages)
#log the stages of the pipeline
manager.log_pipeline_stages(mlpipe)
#log what happens to each feature
manager.log_feature_transformations(mlpipe)
#run on the data
train, test = df.randomSplit([0.8,0.2])
manager.start_timer(f'CV iteration {i}')
trainedModel = mlpipe.fit(train)
execution_time = manager.log_and_stop_timer()
print(f"--- {execution_time} seconds == {execution_time/60} minutes == {execution_time/60/60} hours")
#log model parameters
manager.log_model_params(trainedModel)
preds = trainedModel.transform(test)
#evaluate
evaluator = SpliceBinaryClassificationEvaluator()
evaluator.input(preds)
metrics = evaluator.get_results(dict=True)
#log model performance
manager.log_metrics(list(metrics.items()))
###Output
_____no_output_____ |
HandsOnML/ch03/ex01.ipynb | ###Markdown
3.1 Problem description Try to build a classifier for the MNIST dataset that achieves over 97% accuracy on the test set. Hint: the `KNeighborsClassifier` works quite well for this task; you just need to find good hyperparameter values (try a grid search on the `weights` and `n_neighbors` hyperparameters). Load the data
###Code
from scipy.io import loadmat
mnist = loadmat('./datasets/mnist-original.mat')
mnist
X, y = mnist['data'], mnist['label']
X = X.T
X.shape
y = y.T
y.shape
type(y)
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Split test and training data
###Code
X_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:]
len(X_train)
import numpy as np  # numpy is used below but was not imported earlier in this notebook
shuffle_index = np.random.permutation(len(X_train))
X_train, y_train = X_train[shuffle_index], y_train[shuffle_index]
###Output
_____no_output_____
###Markdown
3.2 Training a Random Forest Classifier as a baseline The reason to use a Random Forest classifier here is that it runs much faster than a linear model
###Code
from sklearn.ensemble import RandomForestClassifier
forest_clf = RandomForestClassifier(random_state=42)
forest_clf.fit(X_train, y_train)
forest_pred = forest_clf.predict(X_test)
forest_pred = forest_pred.reshape(10000,1)
accuracy = (forest_pred == y_test).sum() / len(y_test)
print(accuracy)
###Output
_____no_output_____
###Markdown
3.3 Training a `KNeighborsClassifier` with default settings It seems we have to set `n_jobs = -1` (use all available cores) so that prediction runs within a reasonable time.
###Code
from sklearn.neighbors import KNeighborsClassifier
knn_clf = KNeighborsClassifier(n_jobs=-1)
knn_clf.fit(X_train, y_train)
knn_clf.predict([X_test[0]])
# for i in range(1000):
# knn_clf.predict([X_test[i]])
knn_pred = knn_clf.predict(X_test)
knn_pred = knn_pred.reshape(10000, 1)
accuracy = (knn_pred == y_test).sum() / len(y_test)
print(accuracy)
###Output
_____no_output_____
###Markdown
3.4 `GridSearchCV`
###Code
from sklearn.model_selection import GridSearchCV
param_grid = [
{'n_jobs': [-1], 'n_neighbors': [3, 5, 11, 19], 'weights': ['uniform', 'distance']}
]
grid_search = GridSearchCV(knn_clf, param_grid, cv=3, scoring='accuracy', n_jobs=-1)
grid_search.fit(X_train, y_train)
###Output
_____no_output_____ |
lt219_max_sum_subarray_kadane_algo.ipynb | ###Markdown
https://leetcode.com/problems/maximum-subarray/ The key idea behind Kadane's algorithm is that a negative running sum can never make a positive contribution to any later subarray. So if our current sum drops below zero, we simply reset it to the current number; otherwise we keep adding to the cumulative sum. (A quick brute-force sanity check is included after the implementation below.) https://afshinm.name/2018/06/24/why-kadane-algorithm-works/
###Code
def maxSubArray(nums) -> int:
"""
https://leetcode.com/problems/maximum-subarray/discuss/523386/Python-O(n)-Time-and-O(1)-Space%3A-Kadane's-Algorithm
"""
max_seq = nums[0]
curr_sum = nums[0]
for num in nums[1:]:
if curr_sum < 0:
curr_sum = num
else:
curr_sum += num
if curr_sum > max_seq:
max_seq = curr_sum
return max_seq
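# Added sanity check (a minimal sketch, not part of the original solution): compare
# Kadane's result against an O(n^2) brute force on a few fixed inputs; asserts pass silently.
def _brute_force_max_subarray(nums):
    return max(sum(nums[i:j+1]) for i in range(len(nums)) for j in range(i, len(nums)))
for case in ([-3,4,-1,5,-10,3], [5,-2,7], [-4,-1,-6]):
    assert maxSubArray(case) == _brute_force_max_subarray(case)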
maxSubArray([-3,4,-1,5,-10,3])
###Output
_____no_output_____ |
python_bootcamp/notebooks/02-Python Statements/04-while Loops.ipynb | ###Markdown
while LoopsThe while statement in Python is one of most general ways to perform iteration. A while statement will repeatedly execute a single statement or group of statements as long as the condition is true. The reason it is called a 'loop' is because the code statements are looped through over and over again until the condition is no longer met.The general format of a while loop is: while test: code statements else: final code statementsLet’s look at a few simple while loops in action.
###Code
x = 0
while x < 10:
print('x is currently: ',x)
print(' x is still less than 10, adding 1 to x')
x+=1
###Output
x is currently: 0
x is still less than 10, adding 1 to x
x is currently: 1
x is still less than 10, adding 1 to x
x is currently: 2
x is still less than 10, adding 1 to x
x is currently: 3
x is still less than 10, adding 1 to x
x is currently: 4
x is still less than 10, adding 1 to x
x is currently: 5
x is still less than 10, adding 1 to x
x is currently: 6
x is still less than 10, adding 1 to x
x is currently: 7
x is still less than 10, adding 1 to x
x is currently: 8
x is still less than 10, adding 1 to x
x is currently: 9
x is still less than 10, adding 1 to x
###Markdown
Notice how many times the print statements occurred and how the while loop kept going until the True condition was met, which occurred once x==10. It's important to note that once this occurred the code stopped. Let's see how we could add an else statement:
###Code
x = 0
while x < 10:
print('x is currently: ',x)
print(' x is still less than 10, adding 1 to x')
x+=1
else:
print('All Done!')
###Output
x is currently: 0
x is still less than 10, adding 1 to x
x is currently: 1
x is still less than 10, adding 1 to x
x is currently: 2
x is still less than 10, adding 1 to x
x is currently: 3
x is still less than 10, adding 1 to x
x is currently: 4
x is still less than 10, adding 1 to x
x is currently: 5
x is still less than 10, adding 1 to x
x is currently: 6
x is still less than 10, adding 1 to x
x is currently: 7
x is still less than 10, adding 1 to x
x is currently: 8
x is still less than 10, adding 1 to x
x is currently: 9
x is still less than 10, adding 1 to x
All Done!
###Markdown
break, continue, passWe can use break, continue, and pass statements in our loops to add additional functionality for various cases. The three statements are defined by: break: Breaks out of the current closest enclosing loop. continue: Goes to the top of the closest enclosing loop. pass: Does nothing at all. Thinking about break and continue statements, the general format of the while loop looks like this: while test: code statement if test: break if test: continue else:break and continue statements can appear anywhere inside the loop’s body, but we will usually put them further nested in conjunction with an if statement to perform an action based on some condition.Let's go ahead and look at some examples!
###Code
x = 0
while x < 10:
print('x is currently: ',x)
print(' x is still less than 10, adding 1 to x')
x+=1
if x==3:
print('x==3')
else:
print('continuing...')
continue
###Output
x is currently: 0
x is still less than 10, adding 1 to x
continuing...
x is currently: 1
x is still less than 10, adding 1 to x
continuing...
x is currently: 2
x is still less than 10, adding 1 to x
x==3
x is currently: 3
x is still less than 10, adding 1 to x
continuing...
x is currently: 4
x is still less than 10, adding 1 to x
continuing...
x is currently: 5
x is still less than 10, adding 1 to x
continuing...
x is currently: 6
x is still less than 10, adding 1 to x
continuing...
x is currently: 7
x is still less than 10, adding 1 to x
continuing...
x is currently: 8
x is still less than 10, adding 1 to x
continuing...
x is currently: 9
x is still less than 10, adding 1 to x
continuing...
###Markdown
Note how we have a printed statement when x==3, and a continue being printed out as we continue through the outer while loop. Let's put in a break once x ==3 and see if the result makes sense:
###Code
x = 0
while x < 10:
print('x is currently: ',x)
print(' x is still less than 10, adding 1 to x')
x+=1
if x==3:
print('Breaking because x==3')
break
else:
print('continuing...')
continue
###Output
x is currently: 0
x is still less than 10, adding 1 to x
continuing...
x is currently: 1
x is still less than 10, adding 1 to x
continuing...
x is currently: 2
x is still less than 10, adding 1 to x
Breaking because x==3
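###Markdown
The `pass` statement listed above has not been demonstrated yet, so here is a minimal added sketch: `pass` does nothing at all, which makes it useful as a placeholder when a branch is syntactically required but no action is needed.
###Code
x = 0
while x < 5:
    x += 1
    if x == 3:
        pass # do nothing special on this iteration; the loop just continues
print('x is finally:',x)
###Output
x is finally: 5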
###Markdown
Note how the other else statement wasn't reached and continuing was never printed!After these brief but simple examples, you should feel comfortable using while statements in your code.**A word of caution however! It is possible to create an infinitely running loop with while statements. For example:**
###Code
# DO NOT RUN THIS CODE!!!!
while True:
print("I'm stuck in an infinite loop!")
###Output
_____no_output_____ |
notebooks/sesssion_10.ipynb | ###Markdown
topics:- [Hough transform](Hough-transform) - [Hough Line Transform doc](https://docs.opencv.org/3.4/d9/db0/tutorial_hough_lines.html)- [Probabilistic-Hough-Transform](Probabilistic-Hough-Transform) - [lane Finder](lane-Finder)----slide 5
###Code
import cv2
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure
###Output
_____no_output_____
###Markdown
Hough transform
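In the standard Hough transform, each detected line is returned as a pair $(\rho, \theta)$ satisfying $\rho = x\cos\theta + y\sin\theta$. The code below converts each pair into two drawable endpoints: it takes the point on the line closest to the origin, $(x_0, y_0) = (\rho\cos\theta, \rho\sin\theta)$, and steps $\pm 1000$ pixels along the line direction $(-\sin\theta, \cos\theta)$.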
###Code
img = cv2.imread('session_10/dave.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# find edge with some algorithm
# other algorhtims:
# https://github.com/bigmpc/cv-spring-2021/blob/main/notebooks/session_5.md
# laplacian = cv2.Laplacian(img,cv2.CV_64F)
# sobelx = cv2.Sobel(img,cv2.CV_64F,1,0,ksize=5)
# sobely = cv2.Sobel(img,cv2.CV_64F,0,1,ksize=5)
# scharrx = cv2.Scharr(img,cv2.CV_64F,1,0)
# scharry = cv2.Scharr(img,cv2.CV_64F,0,1)
edges = cv2.Canny(gray,50,150,apertureSize = 3)
lines = cv2.HoughLines(edges,1,np.pi/180,120)
for i in range(0, len(lines)):
for rho,theta in lines[i]:
a = np.cos(theta)
b = np.sin(theta)
x0 = a*rho
y0 = b*rho
x1 = int(x0 + 1000*(-b))
y1 = int(y0 + 1000*(a))
x2 = int(x0 - 1000*(-b))
y2 = int(y0 - 1000*(a))
cv2.line(img,(x1,y1),(x2,y2),(0,0,255),2)
plt.imshow(img[:,:,::-1])
plt.show()
###Output
_____no_output_____
###Markdown
Probabilistic Hough Transform
###Code
#probabilistic hough transform
img = cv2.imread('session_10/dave.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
lines = cv2.HoughLinesP(edges,1,np.pi/180,120,10,5)
for i in range(0, len(lines)):
for x1,y1,x2,y2 in lines[i]:
cv2.line(img,(x1,y1),(x2,y2),(0,255,0),2)
plt.imshow(img[:,:,::-1])
plt.show()
###Output
_____no_output_____
###Markdown
lane Finder
###Code
# preliminary attempt at lane following system
# largely derived from: https://medium.com/pharos-production/
# road-lane-recognition-with-opencv-and-ios-a892a3ab635c
# identify filename of video to be analyzed
cap = cv2.VideoCapture('session_10/testvideo2.mp4')
# read video frame & show on screen
ret, frame = cap.read()
# loop through until entire video file is played
while(ret):
frame=cv2.resize(frame,(800,400))
cv2.imshow("Original Scene", frame)
# create polygon (trapezoid) mask to select region of interest
mask = np.zeros((frame.shape[0], frame.shape[1]), dtype="uint8")
pts = np.array([[80, 400], [330, 310], [450, 310], [750, 400]], dtype=np.int32)
cv2.fillConvexPoly(mask, pts, 255)
cv2.imshow("Mask", mask)
# apply mask and show masked image on screen
masked = cv2.bitwise_and(frame, frame, mask=mask)
cv2.imshow("Region of Interest", masked)
# convert to grayscale then black/white to binary image
masked = cv2.cvtColor(masked, cv2.COLOR_BGR2GRAY)
thresh = 200
ret,masked = cv2.threshold(masked, thresh, 255, cv2.THRESH_BINARY)
#cv2.imshow("Black/White", masked)
# identify edges & show on screen
edged = cv2.Canny(masked, 30, 150)
#cv2.imshow("Edged", edged)
# perform full Hough Transform to identify lane lines
lines = cv2.HoughLines(edged, 1, np.pi / 180, 25)
# define arrays for left and right lanes
rho_left = []
theta_left = []
rho_right = []
theta_right = []
# ensure cv2.HoughLines found at least one line
if lines is not None:
# loop through all of the lines found by cv2.HoughLines
for i in range(0, len(lines)):
# evaluate each row of cv2.HoughLines output 'lines'
for rho, theta in lines[i]:
# collect left lanes
if theta < np.pi/2 and theta > np.pi/4:
rho_left.append(rho)
theta_left.append(theta)
# plot all lane lines for DEMO PURPOSES ONLY
# a = np.cos(theta); b = np.sin(theta)
# x0 = a * rho; y0 = b * rho
# x1 = int(x0 + 400 * (-b)); y1 = int(y0 + 400 * (a))
# x2 = int(x0 - 600 * (-b)); y2 = int(y0 - 600 * (a))
#
# cv2.line(snip, (x1, y1), (x2, y2), (0, 0, 255), 1)
# collect right lanes
if theta > np.pi/2 and theta < 3*np.pi/4:
rho_right.append(rho)
theta_right.append(theta)
# # plot all lane lines for DEMO PURPOSES ONLY
# a = np.cos(theta); b = np.sin(theta)
# x0 = a * rho; y0 = b * rho
# x1 = int(x0 + 400 * (-b)); y1 = int(y0 + 400 * (a))
# x2 = int(x0 - 600 * (-b)); y2 = int(y0 - 600 * (a))
#
# cv2.line(snip, (x1, y1), (x2, y2), (0, 0, 255), 1)
# statistics to identify median lane dimensions
left_rho = np.median(rho_left)
left_theta = np.median(theta_left)
right_rho = np.median(rho_right)
right_theta = np.median(theta_right)
# plot median lane on top of scene snip
if left_theta > np.pi/4:
a = np.cos(left_theta); b = np.sin(left_theta)
x0 = a * left_rho; y0 = b * left_rho
offset1 = 200; offset2 = 500
x1 = int(x0 - offset1 * (-b)); y1 = int(y0 - offset1 * (a))
x2 = int(x0 + offset2 * (-b)); y2 = int(y0 + offset2 * (a))
cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 6)
if right_theta > np.pi/4:
a = np.cos(right_theta); b = np.sin(right_theta)
x0 = a * right_rho; y0 = b * right_rho
offset1 = 500; offset2 = 800
x3 = int(x0 - offset1 * (-b)); y3 = int(y0 - offset1 * (a))
x4 = int(x0 - offset2 * (-b)); y4 = int(y0 - offset2 * (a))
cv2.line(frame, (x3, y3), (x4, y4), (255, 0, 0), 6)
cv2.imshow("Lined", frame)
# press the q key to break out of video
if cv2.waitKey(25) & 0xFF == ord('q'):
break
# read video frame & show on screen
ret, frame = cap.read()
# clear everything once finished
cap.release()
cv2.destroyAllWindows()
###Output
/usr/lib/python3/dist-packages/numpy/core/fromnumeric.py:3256: RuntimeWarning: Mean of empty slice.
return _methods._mean(a, axis=axis, dtype=dtype,
/usr/lib/python3/dist-packages/numpy/core/_methods.py:161: RuntimeWarning: invalid value encountered in double_scalars
ret = ret.dtype.type(ret / rcount)
|
UCI_NN_CNN.ipynb | ###Markdown
Start Training UCI HAR dataset with Fully Connected Network on all data per sample
###Code
%run main.py configs\UCI_FC_0.json
###Output
Using TensorFlow backend.
[INFO]: Hi, This is root.
[INFO]: After the configurations are successfully processed and dirs are created.
[INFO]: The pipeline of the project will begin now.
###Markdown
Testing with Best Epoch: "UC_FC_0-227-0.97.hdf5"
###Code
%run main.py configs\UCI_FC_0_test.json
###Output
[INFO]: Hi, This is root.
[INFO]: Hi, This is root.
[INFO]: After the configurations are successfully processed and dirs are created.
[INFO]: After the configurations are successfully processed and dirs are created.
[INFO]: The pipeline of the project will begin now.
[INFO]: The pipeline of the project will begin now.
###Markdown
Training UCI HAR dataset with Conv1D + Fully Connected Network on Time stamps data
###Code
%run main.py configs\UCI_CNN.json
###Output
[INFO]: Hi, This is root.
[INFO]: Hi, This is root.
[INFO]: Hi, This is root.
[INFO]: After the configurations are successfully processed and dirs are created.
[INFO]: After the configurations are successfully processed and dirs are created.
[INFO]: After the configurations are successfully processed and dirs are created.
[INFO]: The pipeline of the project will begin now.
[INFO]: The pipeline of the project will begin now.
[INFO]: The pipeline of the project will begin now.
###Markdown
Testing with Best Epoch: "UCI_CNN-110-0.97.hdf5"
###Code
%run main.py configs\UCI_CNN_test.json
###Output
[INFO]: Hi, This is root.
[INFO]: Hi, This is root.
[INFO]: Hi, This is root.
[INFO]: Hi, This is root.
[INFO]: After the configurations are successfully processed and dirs are created.
[INFO]: After the configurations are successfully processed and dirs are created.
[INFO]: After the configurations are successfully processed and dirs are created.
[INFO]: After the configurations are successfully processed and dirs are created.
[INFO]: The pipeline of the project will begin now.
[INFO]: The pipeline of the project will begin now.
[INFO]: The pipeline of the project will begin now.
[INFO]: The pipeline of the project will begin now.
|
2 Regression/2.2 polynomial regression/Polynomial_Regression.ipynb | ###Markdown
POLYNOMIAL REGRESSION In real-world problems, data are not always linearly distributed; they can be scattered in more complex ways. In such cases linear regression is not the best way to model the data, which is why we study Polynomial Regression. The polynomial equation is obtained by transforming the linear equation, so a polynomial of order 2 has the form $y = b_0 + b_1x + b_2x^2$, a polynomial of order 3 has the form $y = b_0 + b_1x + b_2x^2 + b_3x^3$, and a polynomial of order n has the form $y = b_0 + b_1x + \dots + b_nx^n$. Statistically, polynomial regression is still linear, because it is linear in its coefficients. (Figure: example of an order-2 polynomial curve.) Polynomial Features The polynomial feature matrix is obtained by applying a polynomial transformation to the existing features. For example, suppose we want to transform a two-column feature matrix $[a, b]$ with degree/order = 2. When we apply the transformation with code such as poly = sklearn.preprocessing.PolynomialFeatures(degree=2); polyy = poly.fit_transform(X), we obtain transformed features of the form $[1,a,b,a^2,ab,b^2]$ (a short demonstration of this expansion is added at the end of the next code cell). CODING SECTION Suppose we want to assess the honesty of a new job applicant. For the position being applied for, the applicant claims to already have 16 years of experience in that job and to have earned a salary of 20 million rupiah at the previous company. We want to check this claim, so we look at data on employees working in the same field, collected from a job-search site. The data below show employee salaries against their years of work experience.
###Code
import numpy as np # linear algebra
import pandas as pd # data processing
import matplotlib.pyplot as plt # visualization
df = pd.read_csv('salary.csv') # read the data
df.head(10)
# convert the data to numpy arrays so they can be used in the machine learning process
X = df.pengalaman.values.reshape(-1,1)
y = df.gaji.values
# Fitting Linear Regression to the dataset
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X, y)
from sklearn.preprocessing import PolynomialFeatures
poly_reg = PolynomialFeatures(degree =8)
X_poly = poly_reg.fit_transform(X)
poly_reg.fit(X_poly, y)
lin_reg_2 = LinearRegression()
lin_reg_2.fit(X_poly, y)
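# Added sketch (not part of the original notebook): a tiny illustration of the
# [1, a, b, a^2, ab, b^2] expansion described above, using one sample with two
# hypothetical features a=2, b=3; with degree=2 this yields 6 columns. Asserts pass silently.
demo_poly = PolynomialFeatures(degree=2)
demo_out = demo_poly.fit_transform(np.array([[2, 3]]))
assert demo_out.shape == (1, 6)
assert list(demo_out[0]) == [1.0, 2.0, 3.0, 4.0, 6.0, 9.0]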
###Output
_____no_output_____
###Markdown
Visualization
###Code
X_grid = np.arange(min(X), max(X), 0.1)
X_grid = X_grid.reshape((len(X_grid), 1))
plt.scatter(X, y, color = 'red')
plt.plot(X_grid, lin_reg_2.predict(poly_reg.fit_transform(X_grid)), color = 'blue')
plt.title('(Polynomial Regression)')
plt.xlabel('pengalaman')
plt.ylabel('gaji')
plt.show()
lin_reg_2.predict(poly_reg.fit_transform(np.array(16).reshape(1,-1)))[0]
###Output
_____no_output_____ |
Microbiome.ipynb | ###Markdown
###Code
import numpy as np
import pandas as pd
import scipy.integrate
from scipy.integrate import odeint # odeint is called directly by name below
import matplotlib.pyplot as plt
import panel as pn
import colorcet
pn.extension(comms='colab')
def compete_simple(y, t, input_X, input_Y, input_Z):
species_A, species_B, species_C = y
r_max = 1.0
dydt = [r_max * (input_X - species_A) / input_X * species_A,
r_max * (input_Y - species_B) / input_Y * species_B,
r_max * (input_Z - species_C) / input_Z * species_C,]
return dydt
X_slider = pn.widgets.FloatSlider(name="input X", start=0, end=100, step=1, value=50)
Y_slider = pn.widgets.FloatSlider(name="input Y", start=0, end=100, step=1, value=50)
Z_slider = pn.widgets.FloatSlider(name="input Z", start=0, end=100, step=1, value=50)
@pn.depends(
X_slider.param.value,
Y_slider.param.value,
Z_slider.param.value,
)
def simple_plot(input_X, input_Y, input_Z):
consume_rate=10.0
death_rate=1.0
y0 = [1.0, 1.0, 1.0]
t = np.linspace(0, 10, 101)
sol = odeint(compete_simple, y0, t, args=(input_X, input_Y, input_Z))
fig, ax = plt.subplots(figsize=(15, 7))
plt.plot(t, sol[:, 0], 'b', label='Species A', linewidth=5)
plt.plot(t, sol[:, 1], 'g', label='Species B', linewidth=5)
plt.plot(t, sol[:, 2], 'r', label='Species C', linewidth=5)
plt.legend(loc='best')
plt.xlabel('t')
plt.grid()
plt.close(fig)
return fig
# Final layout
widgets = pn.Column(pn.Spacer(height=80), X_slider, pn.Spacer(height=80), Y_slider, pn.Spacer(height=80), Z_slider, pn.Spacer(height=80), width=150)
pn.Row(pn.Column(simple_plot), widgets)
def compete_complex(y, t, consume_rate, death_rate, input_X, input_Y, input_Z, compete_rate=0.1):
species_A, species_B, species_C = y
r_max = 1.0
compete_rate = 0.2
dydt = [r_max * (input_X - species_A - compete_rate * (species_B + species_C)) / input_X * species_A,
r_max * (input_Y - species_B - compete_rate * (species_A + species_C)) / input_Y * species_B,
r_max * (input_Z - species_C - compete_rate * (species_A + species_B)) / input_Z * species_C,]
return dydt
X_slider = pn.widgets.FloatSlider(name="input X", start=0, end=100, step=1, value=50)
Y_slider = pn.widgets.FloatSlider(name="input Y", start=0, end=100, step=1, value=50)
Z_slider = pn.widgets.FloatSlider(name="input Z", start=0, end=100, step=1, value=50)
@pn.depends(
X_slider.param.value,
Y_slider.param.value,
Z_slider.param.value,
)
def complex_plot(input_X, input_Y, input_Z):
consume_rate=10.0
death_rate=1.0
y0 = [1.0, 1.0, 1.0]
t = np.linspace(0, 10, 101)
sol = odeint(compete_complex, y0, t, args=(consume_rate, death_rate, input_X, input_Y, input_Z))
fig, ax = plt.subplots(figsize=(15, 7))
plt.plot(t, sol[:, 0], 'b', label='Species A', linewidth=5)
plt.plot(t, sol[:, 1], 'g', label='Species B', linewidth=5)
plt.plot(t, sol[:, 2], 'r', label='Species C', linewidth=5)
plt.legend(loc='best')
plt.xlabel('t')
plt.grid()
plt.close(fig)
return fig
# Final layout
widgets = pn.Column(pn.Spacer(height=150), X_slider, pn.Spacer(height=80), Y_slider, pn.Spacer(height=80), Z_slider, pn.Spacer(height=80), width=150)
pn.Row(pn.Column(complex_plot), pn.Spacer(width=20), widgets)
###Output
_____no_output_____ |
notebooks/book1/15/lstm_torch.ipynb | ###Markdown
Please find jax implementation of this notebook here: https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/book1/15/lstm_jax.ipynb Long short term memory (LSTM) We show how to implement LSTMs from scratch.Based on sec 9.2 of http://d2l.ai/chapter_recurrent-modern/lstm.html.This uses code from the [basic RNN colab](https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/rnn_torch.ipynb).
###Code
import numpy as np
import matplotlib.pyplot as plt
import math
from IPython import display
try:
import torch
except ModuleNotFoundError:
%pip install -qq torch
import torch
from torch import nn
from torch.nn import functional as F
from torch.utils import data
import collections
import re
import random
import os
import requests
import hashlib
import time
np.random.seed(seed=1)
torch.manual_seed(1)
!mkdir figures # for saving plots
###Output
_____no_output_____
###Markdown
Data As data, we use the book "The Time Machine" by H G Wells,preprocessed using the code in [this colab](https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/text_preproc_torch.ipynb).
###Code
class SeqDataLoader:
"""An iterator to load sequence data."""
def __init__(self, batch_size, num_steps, use_random_iter, max_tokens):
if use_random_iter:
self.data_iter_fn = seq_data_iter_random
else:
self.data_iter_fn = seq_data_iter_sequential
self.corpus, self.vocab = load_corpus_time_machine(max_tokens)
self.batch_size, self.num_steps = batch_size, num_steps
def __iter__(self):
return self.data_iter_fn(self.corpus, self.batch_size, self.num_steps)
class Vocab:
"""Vocabulary for text."""
def __init__(self, tokens=None, min_freq=0, reserved_tokens=None):
if tokens is None:
tokens = []
if reserved_tokens is None:
reserved_tokens = []
# Sort according to frequencies
counter = count_corpus(tokens)
self.token_freqs = sorted(counter.items(), key=lambda x: x[1], reverse=True)
# The index for the unknown token is 0
self.unk, uniq_tokens = 0, ["<unk>"] + reserved_tokens
uniq_tokens += [token for token, freq in self.token_freqs if freq >= min_freq and token not in uniq_tokens]
self.idx_to_token, self.token_to_idx = [], dict()
for token in uniq_tokens:
self.idx_to_token.append(token)
self.token_to_idx[token] = len(self.idx_to_token) - 1
def __len__(self):
return len(self.idx_to_token)
def __getitem__(self, tokens):
if not isinstance(tokens, (list, tuple)):
return self.token_to_idx.get(tokens, self.unk)
return [self.__getitem__(token) for token in tokens]
def to_tokens(self, indices):
if not isinstance(indices, (list, tuple)):
return self.idx_to_token[indices]
return [self.idx_to_token[index] for index in indices]
def tokenize(lines, token="word"):
"""Split text lines into word or character tokens."""
if token == "word":
return [line.split() for line in lines]
elif token == "char":
return [list(line) for line in lines]
else:
print("ERROR: unknown token type: " + token)
def count_corpus(tokens):
"""Count token frequencies."""
# Here `tokens` is a 1D list or 2D list
if len(tokens) == 0 or isinstance(tokens[0], list):
# Flatten a list of token lists into a list of tokens
tokens = [token for line in tokens for token in line]
return collections.Counter(tokens)
def seq_data_iter_random(corpus, batch_size, num_steps):
"""Generate a minibatch of subsequences using random sampling."""
# Start with a random offset (inclusive of `num_steps - 1`) to partition a
# sequence
corpus = corpus[random.randint(0, num_steps - 1) :]
# Subtract 1 since we need to account for labels
num_subseqs = (len(corpus) - 1) // num_steps
# The starting indices for subsequences of length `num_steps`
initial_indices = list(range(0, num_subseqs * num_steps, num_steps))
# In random sampling, the subsequences from two adjacent random
# minibatches during iteration are not necessarily adjacent on the
# original sequence
random.shuffle(initial_indices)
def data(pos):
# Return a sequence of length `num_steps` starting from `pos`
return corpus[pos : pos + num_steps]
num_batches = num_subseqs // batch_size
for i in range(0, batch_size * num_batches, batch_size):
# Here, `initial_indices` contains randomized starting indices for
# subsequences
initial_indices_per_batch = initial_indices[i : i + batch_size]
X = [data(j) for j in initial_indices_per_batch]
Y = [data(j + 1) for j in initial_indices_per_batch]
yield torch.tensor(X), torch.tensor(Y)
def seq_data_iter_sequential(corpus, batch_size, num_steps):
"""Generate a minibatch of subsequences using sequential partitioning."""
# Start with a random offset to partition a sequence
offset = random.randint(0, num_steps)
num_tokens = ((len(corpus) - offset - 1) // batch_size) * batch_size
Xs = torch.tensor(corpus[offset : offset + num_tokens])
Ys = torch.tensor(corpus[offset + 1 : offset + 1 + num_tokens])
Xs, Ys = Xs.reshape(batch_size, -1), Ys.reshape(batch_size, -1)
num_batches = Xs.shape[1] // num_steps
for i in range(0, num_steps * num_batches, num_steps):
X = Xs[:, i : i + num_steps]
Y = Ys[:, i : i + num_steps]
yield X, Y
def download(name, cache_dir=os.path.join("..", "data")):
"""Download a file inserted into DATA_HUB, return the local filename."""
assert name in DATA_HUB, f"{name} does not exist in {DATA_HUB}."
url, sha1_hash = DATA_HUB[name]
os.makedirs(cache_dir, exist_ok=True)
fname = os.path.join(cache_dir, url.split("/")[-1])
if os.path.exists(fname):
sha1 = hashlib.sha1()
with open(fname, "rb") as f:
while True:
data = f.read(1048576)
if not data:
break
sha1.update(data)
if sha1.hexdigest() == sha1_hash:
return fname # Hit cache
print(f"Downloading {fname} from {url}...")
r = requests.get(url, stream=True, verify=True)
with open(fname, "wb") as f:
f.write(r.content)
return fname
def read_time_machine():
"""Load the time machine dataset into a list of text lines."""
with open(download("time_machine"), "r") as f:
lines = f.readlines()
return [re.sub("[^A-Za-z]+", " ", line).strip().lower() for line in lines]
def load_corpus_time_machine(max_tokens=-1):
"""Return token indices and the vocabulary of the time machine dataset."""
lines = read_time_machine()
tokens = tokenize(lines, "char")
vocab = Vocab(tokens)
# Since each text line in the time machine dataset is not necessarily a
# sentence or a paragraph, flatten all the text lines into a single list
corpus = [vocab[token] for line in tokens for token in line]
if max_tokens > 0:
corpus = corpus[:max_tokens]
return corpus, vocab
def load_data_time_machine(batch_size, num_steps, use_random_iter=False, max_tokens=10000):
"""Return the iterator and the vocabulary of the time machine dataset."""
data_iter = SeqDataLoader(batch_size, num_steps, use_random_iter, max_tokens)
return data_iter, data_iter.vocab
DATA_HUB = dict()
DATA_URL = "http://d2l-data.s3-accelerate.amazonaws.com/"
DATA_HUB["time_machine"] = (DATA_URL + "timemachine.txt", "090b5e7e70c295757f55df93cb0a180b9691891a")
batch_size, num_steps = 32, 35
train_iter, vocab = load_data_time_machine(batch_size, num_steps)
###Output
Downloading ../data/timemachine.txt from http://d2l-data.s3-accelerate.amazonaws.com/timemachine.txt...
###Markdown
Creating model from scratch
###Code
def get_lstm_params(vocab_size, num_hiddens, device):
num_inputs = num_outputs = vocab_size
def normal(shape):
return torch.randn(size=shape, device=device) * 0.01
def three():
return (
normal((num_inputs, num_hiddens)),
normal((num_hiddens, num_hiddens)),
torch.zeros(num_hiddens, device=device),
)
W_xi, W_hi, b_i = three() # Input gate parameters
W_xf, W_hf, b_f = three() # Forget gate parameters
W_xo, W_ho, b_o = three() # Output gate parameters
W_xc, W_hc, b_c = three() # Candidate memory cell parameters
# Output layer parameters
W_hq = normal((num_hiddens, num_outputs))
b_q = torch.zeros(num_outputs, device=device)
# Attach gradients
params = [W_xi, W_hi, b_i, W_xf, W_hf, b_f, W_xo, W_ho, b_o, W_xc, W_hc, b_c, W_hq, b_q]
for param in params:
param.requires_grad_(True)
return params
# The state is now a tuple of hidden state and cell state
def init_lstm_state(batch_size, num_hiddens, device):
return (
torch.zeros((batch_size, num_hiddens), device=device),
torch.zeros((batch_size, num_hiddens), device=device),
)
# forward function
def lstm(inputs, state, params):
[W_xi, W_hi, b_i, W_xf, W_hf, b_f, W_xo, W_ho, b_o, W_xc, W_hc, b_c, W_hq, b_q] = params
(H, C) = state
outputs = []
for X in inputs:
I = torch.sigmoid((X @ W_xi) + (H @ W_hi) + b_i)
F = torch.sigmoid((X @ W_xf) + (H @ W_hf) + b_f)
O = torch.sigmoid((X @ W_xo) + (H @ W_ho) + b_o)
C_tilda = torch.tanh((X @ W_xc) + (H @ W_hc) + b_c)
C = F * C + I * C_tilda
H = O * torch.tanh(C)
Y = (H @ W_hq) + b_q
outputs.append(Y)
return torch.cat(outputs, dim=0), (H, C)
###Output
_____no_output_____
###Markdown
Training and prediction
###Code
# Make the model class
# Input X to call is (B,T) matrix of integers (from vocab encoding).
# We transpse this to (T,B) then one-hot encode to (T,B,V), where V is vocab.
# The result is passed to the forward function.
# (We define the forward function as an argument, so we can change it later.)
class RNNModelScratch:
"""A RNN Model implemented from scratch."""
def __init__(self, vocab_size, num_hiddens, device, get_params, init_state, forward_fn):
self.vocab_size, self.num_hiddens = vocab_size, num_hiddens
self.params = get_params(vocab_size, num_hiddens, device)
self.init_state, self.forward_fn = init_state, forward_fn
def __call__(self, X, state):
X = F.one_hot(X.T, self.vocab_size).type(torch.float32)
return self.forward_fn(X, state, self.params)
def begin_state(self, batch_size, device):
return self.init_state(batch_size, self.num_hiddens, device)
def try_gpu(i=0):
"""Return gpu(i) if exists, otherwise return cpu()."""
if torch.cuda.device_count() >= i + 1:
return torch.device(f"cuda:{i}")
return torch.device("cpu")
class Animator:
"""For plotting data in animation."""
def __init__(
self,
xlabel=None,
ylabel=None,
legend=None,
xlim=None,
ylim=None,
xscale="linear",
yscale="linear",
fmts=("-", "m--", "g-.", "r:"),
nrows=1,
ncols=1,
figsize=(3.5, 2.5),
):
# Incrementally plot multiple lines
if legend is None:
legend = []
display.set_matplotlib_formats("svg")
self.fig, self.axes = plt.subplots(nrows, ncols, figsize=figsize)
if nrows * ncols == 1:
self.axes = [
self.axes,
]
# Use a lambda function to capture arguments
self.config_axes = lambda: set_axes(self.axes[0], xlabel, ylabel, xlim, ylim, xscale, yscale, legend)
self.X, self.Y, self.fmts = None, None, fmts
def add(self, x, y):
# Add multiple data points into the figure
if not hasattr(y, "__len__"):
y = [y]
n = len(y)
if not hasattr(x, "__len__"):
x = [x] * n
if not self.X:
self.X = [[] for _ in range(n)]
if not self.Y:
self.Y = [[] for _ in range(n)]
for i, (a, b) in enumerate(zip(x, y)):
if a is not None and b is not None:
self.X[i].append(a)
self.Y[i].append(b)
self.axes[0].cla()
for x, y, fmt in zip(self.X, self.Y, self.fmts):
self.axes[0].plot(x, y, fmt)
self.config_axes()
display.display(self.fig)
display.clear_output(wait=True)
class Timer:
"""Record multiple running times."""
def __init__(self):
self.times = []
self.start()
def start(self):
"""Start the timer."""
self.tik = time.time()
def stop(self):
"""Stop the timer and record the time in a list."""
self.times.append(time.time() - self.tik)
return self.times[-1]
def avg(self):
"""Return the average time."""
return sum(self.times) / len(self.times)
def sum(self):
"""Return the sum of time."""
return sum(self.times)
def cumsum(self):
"""Return the accumulated time."""
return np.array(self.times).cumsum().tolist()
class Accumulator:
"""For accumulating sums over `n` variables."""
def __init__(self, n):
self.data = [0.0] * n
def add(self, *args):
self.data = [a + float(b) for a, b in zip(self.data, args)]
def reset(self):
self.data = [0.0] * len(self.data)
def __getitem__(self, idx):
return self.data[idx]
def set_axes(axes, xlabel, ylabel, xlim, ylim, xscale, yscale, legend):
"""Set the axes for matplotlib."""
axes.set_xlabel(xlabel)
axes.set_ylabel(ylabel)
axes.set_xscale(xscale)
axes.set_yscale(yscale)
axes.set_xlim(xlim)
axes.set_ylim(ylim)
if legend:
axes.legend(legend)
axes.grid()
def sgd(params, lr, batch_size):
"""Minibatch stochastic gradient descent."""
with torch.no_grad():
for param in params:
param -= lr * param.grad / batch_size
param.grad.zero_()
def grad_clipping(net, theta):
"""Clip the gradient."""
if isinstance(net, nn.Module):
params = [p for p in net.parameters() if p.requires_grad]
else:
params = net.params
norm = torch.sqrt(sum(torch.sum((p.grad**2)) for p in params))
if norm > theta:
for param in params:
param.grad[:] *= theta / norm
def predict(prefix, num_preds, net, vocab, device):
"""Generate new characters following the `prefix`."""
state = net.begin_state(batch_size=1, device=device)
outputs = [vocab[prefix[0]]]
get_input = lambda: torch.tensor([outputs[-1]], device=device).reshape((1, 1))
for y in prefix[1:]: # Warm-up period
_, state = net(get_input(), state)
outputs.append(vocab[y])
for _ in range(num_preds): # Predict `num_preds` steps
y, state = net(get_input(), state)
outputs.append(int(y.argmax(dim=1).reshape(1)))
return "".join([vocab.idx_to_token[i] for i in outputs])
def train_epoch(net, train_iter, loss, updater, device, use_random_iter):
state, timer = None, Timer()
metric = Accumulator(2) # Sum of training loss, no. of tokens
for X, Y in train_iter:
if state is None or use_random_iter:
# Initialize `state` when either it is the first iteration or
# using random sampling
state = net.begin_state(batch_size=X.shape[0], device=device)
else:
if isinstance(net, nn.Module) and not isinstance(state, tuple):
# `state` is a tensor for `nn.GRU`
state.detach_()
else:
# `state` is a tuple of tensors for `nn.LSTM` and
# for our custom scratch implementation
for s in state:
s.detach_()
y = Y.T.reshape(-1) # (B,T) -> (T,B)
X, y = X.to(device), y.to(device)
y_hat, state = net(X, state)
l = loss(y_hat, y.long()).mean()
if isinstance(updater, torch.optim.Optimizer):
updater.zero_grad()
l.backward()
grad_clipping(net, 1)
updater.step()
else:
l.backward()
grad_clipping(net, 1)
# batch_size=1 since the `mean` function has been invoked
updater(batch_size=1)
metric.add(l * y.numel(), y.numel())
return math.exp(metric[0] / metric[1]), metric[1] / timer.stop()
def train(net, train_iter, vocab, lr, num_epochs, device, use_random_iter=False):
loss = nn.CrossEntropyLoss()
animator = Animator(xlabel="epoch", ylabel="perplexity", legend=["train"], xlim=[10, num_epochs])
# Initialize
if isinstance(net, nn.Module):
updater = torch.optim.SGD(net.parameters(), lr)
else:
updater = lambda batch_size: sgd(net.params, lr, batch_size)
num_preds = 50
predict_ = lambda prefix: predict(prefix, num_preds, net, vocab, device)
# Train and predict
for epoch in range(num_epochs):
ppl, speed = train_epoch(net, train_iter, loss, updater, device, use_random_iter)
if (epoch + 1) % 10 == 0:
print(predict_("time traveller"))
animator.add(epoch + 1, [ppl])
print(f"perplexity {ppl:.1f}, {speed:.1f} tokens/sec on {str(device)}")
print(predict_("time traveller"))
print(predict_("traveller"))
vocab_size, num_hiddens, device = len(vocab), 256, try_gpu()
num_epochs, lr = 500, 1
model = RNNModelScratch(len(vocab), num_hiddens, device, get_lstm_params, init_lstm_state, lstm)
train(model, train_iter, vocab, lr, num_epochs, device)
###Output
perplexity 1.1, 21046.3 tokens/sec on cpu
time traveller for so it will be convenient to speak of himwas e
traveller i shall briegt faintlywationsthal is frewsing and
###Markdown
Using pytorch module
###Code
class RNNModel(nn.Module):
"""The RNN model."""
def __init__(self, rnn_layer, vocab_size, **kwargs):
super(RNNModel, self).__init__(**kwargs)
self.rnn = rnn_layer
self.vocab_size = vocab_size
self.num_hiddens = self.rnn.hidden_size
# If the RNN is bidirectional (to be introduced later),
# `num_directions` should be 2, else it should be 1.
if not self.rnn.bidirectional:
self.num_directions = 1
self.linear = nn.Linear(self.num_hiddens, self.vocab_size)
else:
self.num_directions = 2
self.linear = nn.Linear(self.num_hiddens * 2, self.vocab_size)
def forward(self, inputs, state):
X = F.one_hot(inputs.T.long(), self.vocab_size)
X = X.to(torch.float32)
Y, state = self.rnn(X, state)
# The fully connected layer will first change the shape of `Y` to
# (`num_steps` * `batch_size`, `num_hiddens`). Its output shape is
# (`num_steps` * `batch_size`, `vocab_size`).
output = self.linear(Y.reshape((-1, Y.shape[-1])))
return output, state
def begin_state(self, device, batch_size=1):
if not isinstance(self.rnn, nn.LSTM):
# `nn.GRU` takes a tensor as hidden state
return torch.zeros((self.num_directions * self.rnn.num_layers, batch_size, self.num_hiddens), device=device)
else:
# `nn.LSTM` takes a tuple of hidden states
return (
torch.zeros((self.num_directions * self.rnn.num_layers, batch_size, self.num_hiddens), device=device),
torch.zeros((self.num_directions * self.rnn.num_layers, batch_size, self.num_hiddens), device=device),
)
num_inputs = vocab_size
lstm_layer = nn.LSTM(num_inputs, num_hiddens)
model = RNNModel(lstm_layer, len(vocab))
model = model.to(device)
train(model, train_iter, vocab, lr, num_epochs, device)
###Output
perplexity 1.0, 20182.0 tokens/sec on cpu
time traveller for so it will be convenient to speak of himwas e
travelleryou can show black is white by argument said filby
|
wp/notebooks/active learning/binary/metric_evaluation.ipynb | ###Markdown
Metrics comparison
###Code
%load_ext autoreload
import os, sys, importlib
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
BASE_PATH = os.path.join(os.getcwd(), "..", "..")
MODULES_PATH = os.path.join(BASE_PATH, "modules")
sys.path.append(MODULES_PATH)
import active_learning
importlib.reload(active_learning)
from active_learning import Metrics
METRICS_PATH = os.path.join(BASE_PATH, "metrics", "old_metrics")
files = os.listdir(METRICS_PATH)
def get_acq_name(filename):
"""
Decodes the acquisition function name from the filename.
Parameters:
filename (str): The filename for example 'mc_dropout_max_entropy.csv'
Returns:
(str) the acquisition function name used for the metrics inside the file.
"""
name, ext = filename.split(".")
if "bald" in name:
return "BALD"
elif "max_entropy" in name:
return "Max Entropy"
elif "max_var_ratio" in name:
return "Var Ratios"
elif "std_mean" in name:
return "Mean STD"
elif "random" in name:
return "Random"
raise ValueError("No acquisition function name encoded in filename: {}".format(name))
def get_model_name(filename):
"""
Decodes the model name from the filename
Parameters:
filename (str): The filename for example 'mc_dropout_max_entropy.csv'
Returns:
(str) the neural network model name
"""
name, ext = filename.split(".")
if "moment_propagation" in name:
return "Moment Propagation"
elif "mc_dropout" in name:
return "MC Dropout"
raise ValueError("No model name encoded in filename: {}".format(name))
metrics_reader = Metrics(METRICS_PATH)
list(filter(lambda x: "mc_dropout" in x, files))
# Aggregate active learning information for MC models
mc_files = filter(lambda x: "mc_dropout" in x, files)
main_df = pd.DataFrame()
for file in mc_files:
model = "MC Dropout"
acq_name = get_acq_name(file)
data = metrics_reader.read(file)
df = pd.DataFrame(data)
df = df.rename(columns={"binary_accuracy": "accuracy"})
df.insert(2, "Acquisition Function", [acq_name]*len(data))
    main_df = pd.concat([main_df, df])
main_df = main_df.astype({"iteration": "int32", "loss": "float32", "accuracy": "float32"})
main_df.dtypes
plt.figure(figsize=(20, 100))
sns.relplot(x="iteration", y="accuracy", kind="line", markers=True, dashes=False, style="Acquisition Function", ci=None, hue="Acquisition Function", data=main_df)
plt.figure(figsize=(10, 10))
sns.lineplot(x="iteration", y="accuracy", hue="Acquisition Function", data=main_df)
###Output
_____no_output_____ |
session1/session1_correction.ipynb | ###Markdown
**Introduction to machine learning**
###Code
import keras
from keras.datasets import cifar10
from matplotlib import pyplot as plt
import numpy as np
from collections import defaultdict
import math
%matplotlib inline
###Output
_____no_output_____
###Markdown
To introduce the main concepts of Machine Learning, we will present two basic algorithms: KNN and K-MEANS. They will be applied to the CIFAR-10 dataset, a set of 50,000 images belonging to 10 different image classes.
###Code
# Load the dataset
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
# Let's look at the dataset dimensions
print("Training set images {}".format(x_train.shape))
print("Training set classes {}".format(y_train.shape))
# CIFAR-10 image classes
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
# Let's visualize an example and its class
img_index = np.random.randint(0, x_train.shape[0])
plt.imshow(x_train[img_index])
plt.show()
class_indx = y_train[img_index, 0]
print("Qui appartient à la classe {} ({})".format(class_indx, classes[class_indx]))
# Grille d'exemples pour chaque classe
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
    # Randomly select samples from class y
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
    # Display these samples in a column
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(x_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
###Output
_____no_output_____
###Markdown
K-NN (K Nearest Neighbors) is an algorithm that consists in finding, in the training set, the K images that most resemble the image whose class we want to determine. As a first approximation, the resemblance between two images can simply be measured by their Euclidean distance (L2 norm). Among the K images found, we then look at which class is the most frequent, which lets us decide the class of our test image.
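To make the idea concrete, here is a minimal sketch on toy 2-D points (purely illustrative, separate from the CIFAR-10 pipeline):
###Code
# Toy KNN step: L2 distances from one query point to a few "training" points,
# then a majority vote among the k_toy nearest labels.
import numpy as np
train_pts = np.array([[0., 0.], [1., 0.], [10., 10.], [11., 10.]])
train_lbl = np.array([0, 0, 1, 1])
query = np.array([0.5, 0.2])
dists = np.linalg.norm(train_pts - query, axis=1)   # Euclidean distances
k_toy = 3
nearest = np.argsort(dists)[:k_toy]                 # indices of the k_toy closest points
print(np.argmax(np.bincount(train_lbl[nearest])))   # majority vote -> 0
###Output
_____no_output_____
###Markdown
Let's now prepare the CIFAR-10 data for this approach: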
###Code
# Flatten the images to make them easier to manipulate
x_train = x_train.reshape(50000,32*32*3)
x_test = x_test.reshape(10000,32*32*3)
y_train = y_train[:, 0]
y_test = y_test[:, 0]
k = 20
###Output
_____no_output_____
###Markdown
Moreover, using all 50,000 training images to classify the 10,000 test images would be *slow*. We will therefore select a subset of both sets:
###Code
nb_imgs_train = 5000
nb_imgs_test = 1000
###Output
_____no_output_____
###Markdown
However, this brute-force code is not very pretty and reimplementing it is a bit tedious. To make everyone's life easier, we will use a library called **scikit-learn**. This library is a toolbox packed with very handy functions and ready-to-use learning algorithms. We let you look [here](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html#sklearn.neighbors.KNeighborsClassifier) to see how to use it. **We advise against training on the 50,000 elements of x_train and testing on the 10,000 elements of x_test, as it would take far too long...**
###Code
# --- Brute-force method ---
predictions = np.empty((nb_imgs_test, ))
for id_img_test, img_test in enumerate(x_test[:nb_imgs_test]):
    # k_nearest holds the classes of the k closest images found so far,
    # distances holds the distances between the test image and those k images
k_nearest, distances = np.full((k, ), -1), np.full((k, ), float("+inf"))
    # We fill k_nearest with the classes of the k training images that are
    # "closest" in the sense of the Euclidean distance.
for id_img_train, img_train in enumerate(x_train[:nb_imgs_train]):
dist = np.linalg.norm(img_test - img_train)
furthest_of_nearest = np.argmax(distances)
if dist < distances[furthest_of_nearest]:
distances[furthest_of_nearest] = dist
k_nearest[furthest_of_nearest] = y_train[id_img_train]
predictions[id_img_test] = np.argmax(np.bincount(k_nearest))
print("Classified image {}/{} ".format(id_img_test + 1, nb_imgs_test))
# Don't take too many images!
nb_imgs_train = 2000
nb_imgs_test = 500
x_test = x_test[:nb_imgs_test]
y_test = y_test[:nb_imgs_test]
# Import the library
from sklearn.neighbors import KNeighborsClassifier
# Have a look at the fit and score functions...
# Create a model with parameter k=7
neigh = KNeighborsClassifier(n_neighbors=7)
# Train our model
neigh.fit(x_train[:nb_imgs_train], y_train[:nb_imgs_train])
# Predict the classes of x_test and compute its score
print(neigh.score(x_test,y_test))
###Output
0.232
###Markdown
Let's display a few examples of the classes assigned by KNN:
###Code
from itertools import product
# Display a square grid of images
nb_cols = 4
fig, axes = plt.subplots(nrows=nb_cols, ncols=nb_cols, figsize=(8, 8))
samples = x_test[:nb_cols ** 2]
predictions = neigh.predict(samples)
# Reshape the samples back into images
samples = samples.reshape(samples.shape[0], 32, 32, 3)
for i, j in product(range(nb_cols), range(nb_cols)):
axes[i, j].imshow(samples[i * nb_cols + j])
axes[i, j].axis("off")
axes[i, j].set_title(classes[predictions[i * nb_cols + j]])
fig.suptitle("Quelques prédictions...")
plt.show(fig)
###Output
_____no_output_____
###Markdown
The score function let us measure the accuracy of the KNN algorithm on part of our dataset. We obtained 0.232, which means that only about 23% of the 500 test images were classified correctly. You may also have noticed that the execution time was rather long; imagine how long it would take to test our entire test set of 10,000 examples... Here we tested with k = 7, but what is the optimal value of $k$? $k$ is what we call a **hyperparameter**: a value that has to be set before training our model on the dataset. It is up to you to find it...
###Code
resultats = []
for k in range(1, 16):
neigh = KNeighborsClassifier(n_neighbors=k)
neigh.fit(x_train[:nb_imgs_train], y_train[:nb_imgs_train])
    # Measure the score and append it to a list so we can keep it
resultats.append(neigh.score(x_test,y_test))
plt.plot(list(range(1, 16)), resultats, "-+")
plt.xlabel("K")
plt.ylabel("Accuracy")
###Output
_____no_output_____
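###Markdown
From the curve above we can read off the best value of $k$ directly (a small convenience snippet; `resultats[i]` holds the accuracy obtained for $k = i + 1$):
###Code
best_k = int(np.argmax(resultats)) + 1
print("Best k:", best_k, "with accuracy", max(resultats))
###Output
_____no_output_____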
###Markdown
The K-MEANS method is a data partitioning algorithm (*clustering* in English). It is one of the most fundamental algorithms in unsupervised learning. The algorithm partitions the data in order to try to uncover classes. In our case, applied to the CIFAR-10 images, this amounts to grouping the images as is already done, but using only the raw data (which is why we speak of *unsupervised* learning). The general idea behind the method is to group the data according to their similarity, i.e. their distance. The algorithm works as follows: we start by taking K random data points, each one representing a class; at each iteration, we partition the data according to their resemblance to the K current representatives: each image is assigned to the class of the closest representative; we then compute the mean of each resulting cluster and replace the representative of each cluster by this mean.
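In symbols, each iteration performs the assignment step $c_i = \arg\min_k \lVert x_i - \mu_k \rVert_2$ followed by the update step $\mu_k \leftarrow \frac{1}{|C_k|} \sum_{i \in C_k} x_i$, where $C_k$ is the set of points currently assigned to representative $k$. Below, we implement this procedure from scratch on CIFAR-10: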
###Code
K_VALUE = 10
min_val = 1
# Initialize the K representatives, one per class
K_mean = [255 * np.random.rand(32*32*3) for _ in range(10)]
# Previous values of these representatives
K_save = [255 * np.random.rand(32*32*3) for _ in range(10)]
def nearest_K(image):
"""
    Return the index of the cluster representative closest to image
"""
min_dist, min_k = float("+inf"), None
for id_K, K_point in enumerate(K_mean):
dist = np.linalg.norm(image - K_point)
if dist < min_dist:
min_dist, min_k = dist, id_K
return min_k
def mean_point(k, tab):
"""
    Update K_mean[k] with the barycentre (mean) of the points indexed by tab
"""
if tab != []:
mean = 0
for id in tab:
mean += x_train[id] / len(tab)
K_mean[k] = mean
def stop_convergence():
"""
    Return True while the representatives are still changing (keep iterating)
"""
for k in range(10):
if not(np.array_equal(K_mean[k], K_save[k])):
return True
return False
# K-means main loop
iteration = 0
while stop_convergence():
iteration += 1
K_nearest = [[] for _ in range(10)]
for id_image, image in enumerate(x_train):
K_nearest[nearest_K(image)].append(id_image)
for k in range(10):
K_save[k] = K_mean[k]
mean_point(k, K_nearest[k])
print(iteration)
###Output
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
###Markdown
Let's try a built-in implementation written by real Data Scientists:
###Code
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=10)
kmeans.fit(x_train)
###Output
_____no_output_____
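###Markdown
A quick way to inspect what was learned (a small optional sketch) is to display each cluster centre as an image, since every row of `kmeans.cluster_centers_` lives in the same $32 \times 32 \times 3$ space as the flattened inputs:
###Code
# Reshape each 3072-dimensional centroid back into a 32x32 RGB image for display
fig, axes = plt.subplots(1, 10, figsize=(15, 2))
for i, ax in enumerate(axes):
    ax.imshow(kmeans.cluster_centers_[i].reshape(32, 32, 3).astype('uint8'))
    ax.axis('off')
    ax.set_title(str(i))
plt.show()
###Output
_____no_output_____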
###Markdown
We cannot evaluate our algorithm automatically, because it has no notion of which real class the images of a given cluster belong to. We will therefore visualize the clusters (one cluster per column):
###Code
predictions = kmeans.predict(x_test)
# Grid of examples for each cluster found
n_clusters=10
imgs = [[] for _ in range(n_clusters)]
for cluster in range(n_clusters):
i=0
while len(imgs[cluster]) < 7 :
if predictions[i] == cluster:
imgs[cluster].append(i)
i+= 1
for col, cluster in enumerate(imgs):
for line, img in enumerate(cluster):
plt_idx = line * n_clusters + col + 1
plt.subplot(7, n_clusters, plt_idx)
plt.imshow(x_test[img].reshape(32,32,3).astype('uint8'))
plt.axis('off')
if line == 0:
plt.title(str(col))
plt.show()
###Output
_____no_output_____ |
iPython_Notebooks/Data Scraping_Cleaning(Ovais).ipynb | ###Markdown
Dependencies***
###Code
#--Dependencies--#
#-- Data Cleaning Libraries:
import pandas as pd
import numpy as np
from pandas.api.types import is_string_dtype
from pandas.api.types import is_numeric_dtype
#-- Data Visualization Libraries:
from matplotlib import pyplot as plt
import seaborn as sns #--just in case we need it, probably won't
#-- Web Scraping Libraries:
import os
import time
import requests
from bs4 import BeautifulSoup as bs
from splinter import Browser
#--other
from tqdm import tqdm_notebook as tqdm
###Output
_____no_output_____
###Markdown
*** Web Scraping ***
###Code
#Settings for accessing Website
executable_path = {"executable_path": "chromedriver.exe"}
browser = Browser ('chrome', **executable_path, headless=False)
# yr_min = 1990
# yr_max = 2019
# NCAA_url = f"https://www.sports-reference.com/cbb/play-index/psl_finder.cgi?request=1&match=single&year_min={str(yr_min)}&year_max={str(yr_max)}&class_is_fr=Y&class_is_so=Y&class_is_jr=Y&class_is_sr=Y&pos_is_g=Y&pos_is_gf=Y&pos_is_f=Y&pos_is_fg=Y&pos_is_fc=Y&pos_is_c=Y&pos_is_cf=Y&games_type=A&c1stat=fga&c1comp=gt&order_by=year_id&order_by_asc=Y"
# browser.visit(NCAA_url)
# html = browser.html
# read_html = pd.read_html(html, header = 0)
# read_html[1]
###Output
_____no_output_____
###Markdown
Defining Functions for Data Scraping and Cleaning*** NBA***
###Code
#Pre-defining Variables for NBA scrape
#Empty DataFrame for NBA
NBA_game_df = pd.DataFrame()
#Reference to names
NBA_refer = "basketball-reference"
#URL
NBA_url = "https://www.basketball-reference.com"
#Create Empty List for years we want to scrape for
# years = ["1990", "1991", "1992", "1993", "1994", "1995",\
# "1996", "1997", "1998", "1999", "2000","2001", \
# "2002", "2003", "2004", "2005", "2006", "2007",\
# "2008", "2009", "2010", "2011", "2012", "2013",\
# "2014", "2015", "2016"]
yr_min = 1990
yr_max = 2019
#Define our function for scraping NBA Data---> ** need to come back to this and finish it off
def scrape_nba_data(page_url):
#URL's for both NBA and NCAA
#Reference global variable: NBA_game_df
global NBA_game_df
for year in tqdm(range(yr_min, (yr_max + 1))):
#Set the rest of the url
url = f"https://www.basketball-reference.com/leagues/NBA_{str(year)}_per_game.html"
#Visit the NBA Web page
browser.visit(url)
#Retrieve the html for the web page
html = browser.html
#Use pandas to read html
year_html = pd.read_html(html, header = 0)
#Get the second Table with all of the Data
year_html = year_html[1]
#Convert into DataFrame
year_df = pd.DataFrame(year_html)
        #Rename the ranking column "Rk" to "Year"
year_df = year_df.rename(columns=({"Rk" : "Year"}))
#Apply the year to the Year column for each row
year_df["Year"] = year_df["Year"].apply(lambda x: year)
#Append to main DataFrame: NBA_game_df
if NBA_game_df.empty:
NBA_game_df = year_df
else:
NBA_game_df = NBA_game_df.append(year_df, ignore_index = True)
#hoooldd on wait a second, let me put some sleep in it
time.sleep(1)
#Function for cleaning and merging NBA data
def clean_nba_data (page_html):
# Calculate PER
# Set Unique ID to players based upon their name
###Output
_____no_output_____
###Markdown
NCAA***
###Code
#Predefining variables for NCAA scrape
NCAA_df = pd.DataFrame()
NCAA_refer = "sports-reference"
yr_min = 2000
yr_max = 2019
off = 0
NCAA_url = f"https://www.sports-reference.com/cbb/play-index/psl_finder.cgi?request=1&match=single&year_min={str(yr_min)}&year_max={str(yr_max)}&conf_id=&school_id=&class_is_fr=Y&class_is_so=Y&class_is_jr=Y&class_is_sr=Y&pos_is_g=Y&pos_is_gf=Y&pos_is_fg=Y&pos_is_f=Y&pos_is_fc=Y&pos_is_cf=Y&pos_is_c=Y&games_type=A&qual=&c1stat=&c1comp=&c1val=&c2stat=&c2comp=&c2val=&c3stat=&c3comp=&c3val=&c4stat=&c4comp=&c4val=&order_by=ws&order_by_asc=&offset={str(off)}"
browser.visit(NCAA_url)
html = browser.html
read = pd.read_html(html, header = 0)
our_read = read[2]
our_read
#Function for scraping NCAA data
def scrape_ncaa_data(start_year, end_year, page_url):
#Call in any global variables
global NCAA_df
#Initialize variables for loop
start = 0
end = 90400
url = page_url
#Loop
for off_set in tqdm(range(start, (end+100), 100)):
try:
if off_set == 0:
#Visit url
url = f"https://www.sports-reference.com/cbb/play-index/psl_finder.cgi?request=1&match=single&year_min={str(yr_min)}&year_max={str(yr_max)}&conf_id=&school_id=&class_is_fr=Y&class_is_so=Y&class_is_jr=Y&class_is_sr=Y&pos_is_g=Y&pos_is_gf=Y&pos_is_fg=Y&pos_is_f=Y&pos_is_fc=Y&pos_is_cf=Y&pos_is_c=Y&games_type=A&qual=&c1stat=&c1comp=&c1val=&c2stat=&c2comp=&c2val=&c3stat=&c3comp=&c3val=&c4stat=&c4comp=&c4val=&order_by=ws&order_by_asc=&offset={str(off_set)}"
print(url)
browser.visit(url)
#Get html of page
html = browser.html
#Read in Html using Pandas
read_html = pd.read_html(html, header=0)
print(read_html[2])
#Convert desired table to DataFrame
to_df = pd.DataFrame(read_html[2])
#Set global NCAA_df equal to to_df
NCAA_df = to_df
#Sleep for one second
time.sleep(1)
else:
#Visit url
url = f"https://www.sports-reference.com/cbb/play-index/psl_finder.cgi?request=1&match=single&year_min={str(yr_min)}&year_max={str(yr_max)}&conf_id=&school_id=&class_is_fr=Y&class_is_so=Y&class_is_jr=Y&class_is_sr=Y&pos_is_g=Y&pos_is_gf=Y&pos_is_fg=Y&pos_is_f=Y&pos_is_fc=Y&pos_is_cf=Y&pos_is_c=Y&games_type=A&qual=&c1stat=&c1comp=&c1val=&c2stat=&c2comp=&c2val=&c3stat=&c3comp=&c3val=&c4stat=&c4comp=&c4val=&order_by=ws&order_by_asc=&offset={str(off_set)}"
browser.visit(url)
#Get html of page
html = browser.html
#Read in Html using Pandas
read_html = pd.read_html(html, header=0)
#Convert desired table to DataFrame
to_df = pd.DataFrame(read_html[2])
#Append to NCAA_df
NCAA_df = NCAA_df.append(to_df, ignore_index = True)
#Sleep for one second
time.sleep(1)
except IndexError as error:
if off_set == 0:
#Visit url
url = f"https://www.sports-reference.com/cbb/play-index/psl_finder.cgi?request=1&match=single&year_min={str(yr_min)}&year_max={str(yr_max)}&conf_id=&school_id=&class_is_fr=Y&class_is_so=Y&class_is_jr=Y&class_is_sr=Y&pos_is_g=Y&pos_is_gf=Y&pos_is_fg=Y&pos_is_f=Y&pos_is_fc=Y&pos_is_cf=Y&pos_is_c=Y&games_type=A&qual=&c1stat=&c1comp=&c1val=&c2stat=&c2comp=&c2val=&c3stat=&c3comp=&c3val=&c4stat=&c4comp=&c4val=&order_by=ws&order_by_asc=&offset={str(off_set)}"
print(url)
browser.visit(url)
#Get html of page
html = browser.html
#Read in Html using Pandas
read_html = pd.read_html(html, header=0)
print(read_html[1])
#Convert desired table to DataFrame
to_df = pd.DataFrame(read_html[1])
#Set global NCAA_df equal to to_df
NCAA_df = to_df
#Sleep for one second
time.sleep(1)
else:
#Visit url
url = f"https://www.sports-reference.com/cbb/play-index/psl_finder.cgi?request=1&match=single&year_min={str(yr_min)}&year_max={str(yr_max)}&conf_id=&school_id=&class_is_fr=Y&class_is_so=Y&class_is_jr=Y&class_is_sr=Y&pos_is_g=Y&pos_is_gf=Y&pos_is_fg=Y&pos_is_f=Y&pos_is_fc=Y&pos_is_cf=Y&pos_is_c=Y&games_type=A&qual=&c1stat=&c1comp=&c1val=&c2stat=&c2comp=&c2val=&c3stat=&c3comp=&c3val=&c4stat=&c4comp=&c4val=&order_by=ws&order_by_asc=&offset={str(off_set)}"
browser.visit(url)
#Get html of page
html = browser.html
#Read in Html using Pandas
read_html = pd.read_html(html, header=0)
#Convert desired table to DataFrame
to_df = pd.DataFrame(read_html[1])
#Append to NCAA_df
NCAA_df = NCAA_df.append(to_df, ignore_index = True)
#Sleep for one second
time.sleep(1)
#Function for cleaning and merging NCAA data ** Will work on this later
# def clean_ncaa_data(ncaa_csv):
# #Calculate PER
###Output
_____no_output_____
###Markdown
Scraping for Data*** NCAA***
###Code
#Scrapety scrape scrape
scrape_ncaa_data(yr_min, yr_max, NCAA_url)
#Test print
NCAA_df
#Push Raw Data to CSV (So we don't have to scrape again)
NCAA_df.to_csv("NCAA_raw2.csv", index = False)
###Output
_____no_output_____
###Markdown
NBA***
###Code
#Scrapety scrape scrape
scrape_nba_data(NBA_url)
#Test print
NBA_game_df
NBA_copy = NBA_game_df
#Confirm last record on Web Page with that on DF
NBA_game_df
#Push Raw Data to csv
NBA_game_df.to_csv("NBA_raw2.csv", index = False)
###Output
_____no_output_____
###Markdown
Cleaning Data***
###Code
#Set PER calculation function for NCAA
def per(row):
    #Linear-weights PER approximation (per minute played)
    try:
        FT_miss = (row.FTA - row.FT)
        FG_miss = (row.FGA - row.FG)
        row_add = (row.FG * 85.910) + (row.STL * 53.897) + (row["3P"] * 51.757)\
                  + (row.FT * 46.845) + (row.BLK * 39.190) + (row.ORB * 39.190)\
                  + (row.AST * 34.677) + (row.DRB * 14.707)
        #Negative terms: fouls, missed free throws, missed field goals and turnovers
        row_sub = (row.PF * 17.174) + (FT_miss * 20.091) + (FG_miss * 39.190) + (row.TOV * 53.897)
        #Per-minute value, rounded
        return round((row_add - row_sub) * (1 / row.MP), 2)
    except ZeroDivisionError:
        #Players with zero minutes played get a PER of 0
        return 0
    except (ValueError, TypeError):
        #Non-numeric values that slipped through cleaning
        return 0
###Output
_____no_output_____
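###Markdown
For reference, the linear-weights approximation computed by `per` above is $PER = \frac{1}{MP}\big(85.910\,FG + 53.897\,STL + 51.757\,3P + 46.845\,FT + 39.190\,BLK + 39.190\,ORB + 34.677\,AST + 14.707\,DRB - 17.174\,PF - 20.091\,(FTA - FT) - 39.190\,(FGA - FG) - 53.897\,TOV\big)$, i.e. a weighted sum of box-score stats scaled by minutes played (the simplified per-minute formula rather than Hollinger's full pace-adjusted PER).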
###Markdown
NCAA***
###Code
#Convert Seasons into a single year value for NCAA
def normalize_year(y):
    #Seasons ending in "-00", "-10" or "-20" break the simple digit trick below,
    #so handle them explicitly; always return an int
    if y == "1999-00":
        year = 2000
    elif y == "2009-10":
        year = 2010
    elif y == "2019-20":
        year = 2020
    else:
        #e.g. "2011-12" -> "201" + "2" -> 2012
        year = int(y[0:3] + y[-1])
    return year
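# Quick sanity checks for normalize_year (illustrative, not part of the original pipeline)
assert normalize_year("2011-12") == 2012          # "201" + "2" -> 2012
assert int(normalize_year("1999-00")) == 2000     # special-cased season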
#Note: Data has not been collected for per game stats, will need to calculate per game stats.
#Read in CSV to be cleaned: "NCAA_raw.csv"
NCAA_df = pd.read_csv("NCAA_raw2.csv", low_memory = False)
NCAA_df
NCAA_df.iloc[0]
ncaa_cols = list(NCAA_df.iloc[0])
ncaa_cols[2] = "nan"
ncaa_cols[-1] = "nan"
#Set Columns for NCAA DF
NCAA_df.columns = ncaa_cols
NCAA_df.head()
#Drop Rk column
NCAA_df = NCAA_df.drop("Rk", axis = 1)
NCAA_df = NCAA_df.drop("nan", axis=1)
NCAA_df.head()
NCAA_df = NCAA_df[NCAA_df.Player != "Advanced"]
NCAA_df = NCAA_df[NCAA_df.Player != "Player"]
NCAA_df = NCAA_df[NCAA_df["School"].notna()]
NCAA_df = NCAA_df[NCAA_df["Conf"].notna()]
NCAA_df["PER"]= NCAA_df["PER"].fillna(0)
NCAA_df = NCAA_df[NCAA_df.Season.notna()]
NCAA_df = NCAA_df.fillna(0)
#Normalize years
NCAA_df.Season = NCAA_df.Season.apply(lambda x: normalize_year(x))
#Test Print
NCAA_df[NCAA_df["Player"] == "Anthony Davis"]
NCAA_df = NCAA_df.reset_index(drop= True)
#Create Team index for each player
NCAA_df[(NCAA_df["PER"] != 0) & (NCAA_df["WS"] != 0)]
NCAA_df.columns
NCAA_df.reset_index(drop = True, inplace=True)
#Send cleaned NCAA data to csv
NCAA_df.to_csv("NCAA_2.csv")
#Group by player
NCAA_grouped = NCAA_df.groupby("Player")
NCAA_grouped.first()
#Aggregate and avg stats by player
stats_test = list(NCAA_df.columns)
stats_test = stats_test [6:]
stats_test
NCAA_agg = NCAA_grouped[stats_test].agg(np.mean)
#Reset index
NCAA_agg = NCAA_agg.reset_index()
#Test print
NCAA_agg.head()
#Save to CSV
NCAA_agg.to_csv("NCAA_agg.csv")
###Output
_____no_output_____
###Markdown
NBA***
###Code
#Note: Data has been collected for Per Game Stats
#Read in CSV to be cleaned: "NBA_raw.csv"
NBA_df = pd.read_csv("NBA_raw.csv")
#Remove irrelevant data
NBA_df = NBA_df[NBA_df["Player"] != "Player"]
NBA_df.head()
#Get columns that we want to loop for
nba_stats_cols = NBA_df.columns
nba_stats_cols = nba_stats_cols[7:]
nba_stats_cols
#Loop over DataFrame to change the values
for col in nba_stats_cols:
string_check = is_string_dtype(NBA_df[col])
#Verify if the value is a string, if it is convert to float
if string_check == True:
print(string_check)
NBA_df[col] = pd.to_numeric(NBA_df[col])
NBA_df[col] = NBA_df[col].fillna(0)
#Calculate PER
NBA_df["PER"] = NBA_df.apply(lambda row : per(row), axis = 1)
#Normalize names that end with an asterisk
def astriks (row):
name = row
if name[-1] == "*":
name = name[0:-1]
return name
else:
return name
NBA_df["Player"] = NBA_df["Player"].apply(lambda x: astriks(x))
NBA_df.head()
#Save as NBA_clean.csv
NBA_df.to_csv("NBA_clean.csv")
#Group data by player
NBA_grouped = NBA_df.groupby(["Player"])
NBA_grouped.first()
#Add PER to nba_stats_cols
nba_stats_cols = list(nba_stats_cols)
nba_stats_cols.append("PER")
nba_stats_cols
#Group by player
NBA_agg = NBA_grouped[nba_stats_cols].agg(np.mean)
NBA_agg = NBA_agg.sort_values(by="PTS", ascending = False)
NBA_agg
#Reset Tm index to a column
NBA_agg = NBA_agg.reset_index("Player")
NBA_agg
#Save to csv: NBA_agg
NBA_agg.to_csv("NBA_agg1.csv")
###Output
_____no_output_____ |
pipelines/ner_nltk/NER.ipynb | ###Markdown
Entity list:
- PERSON: John Gilbert, President Obama
- ORGANIZATION: WHO, FC Bayern
- LOCATION: Mt. Everest, Nile
- GPE: Germany, North America
- DATE: December, 2016

---
###Code
# Download necessary libraries
import nltk
import glob
import os
import csv
nltk.download('words')
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
nltk.download('maxent_ne_chunker')
nltk.download('stopwords')
from nltk import word_tokenize,pos_tag,ne_chunk
import matplotlib as mpl
import matplotlib.pyplot as plt
###Output
[nltk_data] Downloading package words to /root/nltk_data...
[nltk_data] Unzipping corpora/words.zip.
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data] Unzipping tokenizers/punkt.zip.
[nltk_data] Downloading package averaged_perceptron_tagger to
[nltk_data] /root/nltk_data...
[nltk_data] Unzipping taggers/averaged_perceptron_tagger.zip.
[nltk_data] Downloading package maxent_ne_chunker to
[nltk_data] /root/nltk_data...
[nltk_data] Unzipping chunkers/maxent_ne_chunker.zip.
[nltk_data] Downloading package stopwords to /root/nltk_data...
[nltk_data] Unzipping corpora/stopwords.zip.
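###Markdown
Before processing the full text files below, here is a minimal sketch of how the entity labels listed above can be pulled out of an `ne_chunk` tree for a short example sentence (the sentence is arbitrary and only for illustration):
###Code
# Extract (entity text, label) pairs from the chunk tree, keeping only the labels of interest
example_sentence = "President Obama met John Gilbert at the WHO office in Germany in December 2016."
example_tree = ne_chunk(pos_tag(word_tokenize(example_sentence)))
wanted_labels = {"PERSON", "ORGANIZATION", "LOCATION", "GPE", "DATE"}
entities = [(" ".join(token for token, tag in subtree.leaves()), subtree.label())
            for subtree in example_tree.subtrees()
            if subtree.label() in wanted_labels]
print(entities)
###Output
_____no_output_____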
###Markdown
**NER Example for the file abadilah.txt**
###Code
from google.colab import drive
from nltk.corpus import stopwords
drive.mount('/content/drive')
stopWords=[
'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday','Sunday', 'January',
'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October',
'November', 'December', 'Mondays', 'Tuesdays', 'Wednesdays','Thursdays', 'Fridays', 'Saturdays', 'Sundays'
]
#example_path='/content/drive/MyDrive/Lorimer-geography-master/abadilah.txt'
#example_path='/content/drive/MyDrive/combined.txt'
example_path='/content/drive/MyDrive/Lorimer-geography-master/bahrain_principality.txt'
with open(example_path, 'r') as file:
data = file.read().replace('\n', '')
tokens_all = word_tokenize(data)
tokens = [word for word in tokens_all if word not in stopwords.words('english')]
print("All tokens")
for x in tokens:
print(x)
pos_tags=nltk.pos_tag(tokens)
print("POS Tags")
for x in pos_tags:
print(x)
chunks = ne_chunk(pos_tags)
print("All chunks")
for x in chunks:
print(x)
###Output
Streaming output truncated to the last 5000 lines.
('.', '.')
('Many', 'JJ')
('reefs', 'NNS')
(GPE Bahrain/NNP)
('islands', 'NNS')
('partially', 'RB')
('dry', 'JJ')
('low', 'JJ')
('water', 'NN')
('.', '.')
('On', 'IN')
('side', 'NN')
('towards', 'NNS')
('open', 'JJ')
('sea', 'NN')
('shallow', 'NN')
('waters', 'NNS')
('Bahrain', 'VBP')
('may', 'MD')
('considered', 'VBN')
('end', 'VB')
(PERSON Rennie/NNP Shoal/NNP)
(',', ',')
('54', 'CD')
('miles', 'NNS')
('north', 'RB')
(GPE Muharraq/NNP)
('.', '.')
('There', 'EX')
('passage', 'NN')
(',', ',')
('called', 'VBD')
('Khor-al-Bāb', 'NNP')
('خور', 'NNP')
('الباب', 'NNP')
('Manāmah', 'NNP')
('Qatīf', 'NNP')
('south', 'JJ')
('Fasht-al-Jārim', 'NNP')
(',', ',')
('practicable', 'JJ')
('vessels', 'NNS')
('drawing', 'VBG')
('15', 'CD')
('feet', 'NNS')
('.', '.')
('Many', 'JJ')
('pearl', 'NN')
(',', ',')
('banks', 'NNS')
('situated', 'VBD')
('waters', 'NNS')
(':', ':')
('names', 'NNS')
('positions', 'NNS')
('given', 'VBN')
(PERSON Appendix/NNP Pearl/NNP)
('Fisheries.Geology.—The', 'NNP')
('main', 'JJ')
('island', 'NN')
(GPE Bahrain/NNP)
('forms', 'NNS')
('striking', 'VBG')
('geological', 'JJ')
('contrast', 'NN')
('islands', 'NNS')
(PERSON Persian/JJ Gulf/NNP)
('.', '.')
('The', 'DT')
('rocks', 'NNS')
('chiefly', 'VBP')
('white', 'JJ')
('pale-coloured', 'JJ')
('limestones', 'NNS')
('eocene', 'JJ')
('age', 'NN')
(',', ',')
('sometimes', 'RB')
('sandy', 'JJ')
('argillaceous', 'JJ')
(',', ',')
('disposed', 'JJ')
('form', 'NN')
('low', 'JJ')
('anticlinal', 'JJ')
('dome', 'NN')
('Jabal-ad-', 'NNP')
('Dukhān', 'NNP')
('summit', 'NN')
('.', '.')
('In', 'IN')
('hollow', 'JJ')
('girdling', 'NN')
('plateau', 'NN')
('(', '(')
('described', 'JJ')
('article', 'NN')
(PERSON Bahrain/NNP Island/NNP)
(')', ')')
('central', 'JJ')
('peak', 'NN')
('rock', 'NN')
('denuded', 'VBD')
('marine', 'JJ')
('agency', 'NN')
('forms', 'NNS')
('plain', 'VBP')
('.', '.')
('In', 'IN')
('places', 'NNS')
('eocene', 'VBP')
('limestone', 'JJ')
('rocks', 'NNS')
('highly', 'RB')
('fossiliferous', 'JJ')
('contain', 'NN')
('foraminifera', 'NN')
(',', ',')
('echinids', 'NNS')
('mollusca', 'NN')
(':', ':')
('whole', 'JJ')
('characterised', 'VBD')
('abundance', 'RB')
('siliceous', 'JJ')
('material', 'NN')
(',', ',')
('occurring', 'VBG')
('flint', 'NN')
(',', ',')
('cherty', 'JJ')
('concretions', 'NNS')
('quartz', 'VBP')
('geodes', 'NNS')
(',', ',')
('dissemination', 'NN')
('gypsum', 'NN')
('salt', 'NN')
('throughout', 'IN')
('series', 'NN')
('marked', 'VBD')
('degree', 'JJ')
('.', '.')
('The', 'DT')
('presence', 'NN')
('salt', 'NN')
('gypsum', 'VBP')
('conspicuous', 'JJ')
('certain', 'JJ')
('places', 'NNS')
('leached', 'VBD')
('rock', 'NN')
('formed', 'VBD')
('vast', 'JJ')
('accumulations', 'NNS')
('saliferous', 'JJ')
('gypseous', 'JJ')
('soil', 'NN')
('.', '.')
('The', 'DT')
('distinctly', 'RB')
('marked', 'VBN')
('areas', 'NNS')
('character', 'VBP')
('one', 'CD')
('towards', 'NNS')
('south', 'JJ')
('end', 'VBP')
(PERSON Bahrain/NNP Island/NNP)
('another', 'DT')
('island', 'NN')
(PERSON Umm/NNP Na'asān/NNP)
(',', ',')
('gypsum', 'NN')
('fields', 'NNS')
('latter', 'RBR')
('supply', 'RB')
('practically', 'RB')
('mortar', 'VB')
('used', 'VBN')
(GPE Bahrain/NNP)
('.', '.')
('The', 'DT')
('coastal', 'JJ')
('portions', 'NNS')
(PERSON Bahrain/NNP Island/NNP)
(',', ',')
('also', 'RB')
('islands', 'VBZ')
('group', 'NN')
(',', ',')
('overlaid', 'VBD')
('sub-recent', 'JJ')
('coral', 'JJ')
('rocks', 'NNS')
('shelly', 'RB')
('concrete', 'VBP')
(';', ':')
('sandstone', 'NN')
('age', 'NN')
('found', 'VBD')
('central', 'JJ')
('depression', 'NN')
(PERSON Bahrain/NNP Island/NNP)
('.', '.')
('This', 'DT')
('depression', 'NN')
(',', ',')
('well', 'RB')
('littoral', 'JJ')
('flats', 'NNS')
(',', ',')
('fact', 'NN')
('emerged', 'VBD')
('sea', 'NN')
('comparatively', 'RB')
('recent', 'JJ')
('times', 'NNS')
(',', ',')
('remains', 'VBZ')
('old', 'JJ')
('sea-beaches', 'NNS')
('well', 'RB')
('marked', 'VBN')
('.', '.')
('A', 'DT')
('small', 'JJ')
('deposit', 'NN')
('asphalt', 'NN')
('found', 'VBD')
('penetrating', 'VBG')
('eocene', 'NN')
('rocks', 'NNS')
('3', 'CD')
('miles', 'NNS')
('south-south-east', 'RB')
('Jabal-ad-', 'NNP')
(PERSON Dukhān/NNP)
('.The', 'NNP')
('Bahrain', 'NNP')
('islands', 'VBZ')
('famous', 'JJ')
('remarkable', 'JJ')
('set', 'VBN')
('springs', 'NNS')
(',', ',')
('beautifully', 'RB')
('clear', 'JJ')
('slightly', 'RB')
('brackish', 'JJ')
(',', ',')
('submarine', 'NN')
(';', ':')
('majority', 'NN')
('enumerated', 'VBD')
('articles', 'NNS')
('principal', 'JJ')
('islands', 'NNS')
(',', ',')
('sufficient', 'JJ')
('mention', 'NN')
('northern', 'JJ')
('part', 'NN')
(PERSON Bahrain/NNP Island/NNP)
(',', ',')
('north', 'JJ')
('Khor-al-', 'NNP')
('Kabb', 'NNP')
(',', ',')
('warm', 'NN')
(',', ',')
('copious', 'JJ')
('nearly', 'RB')
('fresh', 'JJ')
(',', ',')
('best', 'JJS')
('known', 'VBN')
('district', 'NN')
("'Adāri", 'NN')
(',', ',')
(PERSON Qassāri/NNP Abu/NNP Zaidān/NNP)
('.', '.')
('The', 'DT')
('noteworthy', 'JJ')
('springs', 'NNS')
('sea', 'NN')
(PERSON Abu/NNP Māhur/NNP)
('close', 'RB')
(PERSON Muharraq/NNP Island/NNP Kaukab/NNP Fasht/NNP KhorFasht/NNP)
('.', '.')
('The', 'DT')
('best', 'JJS')
('water', 'NN')
('islands', 'NNS')
('obtained', 'VBN')
(GPE Hanaini/NNP)
('wells', 'NNS')
(',', ',')
('north', 'JJ')
('end', 'JJ')
('central', 'JJ')
('depression', 'NN')
(PERSON Bahrain/NNP Island/NNP)
(',', ',')
(PERSON Khālid/NNP Umm/NNP Ghuwaifah/NNP)
('wells', 'VBZ')
('plateau', 'NN')
('adjoining', 'VBG')
('.', '.')
('There', 'EX')
('little', 'JJ')
('doubt', 'NN')
('springs', 'NNS')
(GPE Bahrain/NNP)
(',', ',')
('like', 'IN')
(PERSON Hofūf/NNP Qatīf/NNP Oases/NNP)
(',', ',')
('fed', 'VBN')
('drainage', 'NN')
('part', 'NN')
(PERSON Najd/NNP)
(',', ',')
('temporarily', 'RB')
('lost', 'VBN')
(PERSON Dahánah/NNP Sahábah/NNP)
(',', ',')
('travels', 'VBZ')
('thence', 'NN')
('eastwards', 'NNS')
('subterranean', 'JJ')
('passages.Climate', 'VBP')
('seasons.—The', 'JJ')
('climate', 'NN')
(GPE Bahrain/NNP)
('means', 'VBZ')
('worst', 'JJS')
(PERSON Persian/JJ Gulf/NNP)
(',', ',')
('travellers', 'NNS')
('emphasized', 'VBD')
('less', 'RBR')
('pleasant', 'JJ')
('features', 'NNS')
('terms', 'NNS')
('facts', 'NNS')
('warrant', 'VBP')
('.', '.')
('Daily', 'JJ')
('observations', 'NNS')
('taken', 'VBN')
('since', 'IN')
('October', 'NNP')
('1901', 'CD')
(',', ',')
('since', 'IN')
('time', 'NN')
('highest', 'JJS')
('temperature', 'NN')
('registered', 'VBD')
('1075°', 'CD')
('F.', 'NNP')
('lowest', 'JJS')
('40°', 'CD')
('.', '.')
('The', 'DT')
('weather', 'NN')
('October', 'NNP')
('April', 'NNP')
('inclusive', 'JJ')
('pleasant', 'NN')
(',', ',')
('temperature', 'NN')
('indoors', 'NNS')
(',', ',')
('ranging', 'VBG')
('60°', 'CD')
('85°', 'CD')
('F.', 'NNP')
(';', ':')
('January', 'NNP')
('February', 'NNP')
('however', 'RB')
(',', ',')
('north', 'JJ')
('winds', 'NNS')
('blow', 'VBP')
('cold', 'JJ')
('enough', 'RB')
('light', 'JJ')
('housefires', 'NNS')
(',', ',')
('sometimes', 'RB')
('rainy', 'JJ')
('unhealthy', 'JJ')
('.', '.')
('From', 'IN')
('beginning', 'VBG')
('May', 'NNP')
('till', 'VB')
('middle', 'JJ')
('June', 'NNP')
('weather', 'NN')
('hot', 'NN')
(';', ':')
('heat', 'NN')
('still', 'RB')
('tempered', 'VBD')
('sea-breeze', 'JJ')
('Bārih', 'NNP')
(',', ',')
('nights', 'NNS')
('fairly', 'RB')
('cool', 'VBP')
('.', '.')
('From', 'IN')
('middle', 'JJ')
('June', 'NNP')
('end', 'NN')
('September', 'NNP')
('heat', 'NN')
('oppressive', 'NN')
(';', ':')
('land', 'NN')
('breezes', 'VBZ')
('west', 'JJS')
(',', ',')
('south-west', 'JJ')
('south', 'NN')
(',', ',')
('true', 'JJ')
(',', ',')
('continue', 'VBP')
('irregularly', 'RB')
('summer', 'NN')
(';', ':')
('intervals', 'NNS')
('thermometer', 'VBP')
('remains', 'VBZ')
('persistently', 'RB')
('100°', 'CD')
('F.', 'NNP')
('The', 'DT')
('average', 'NN')
('rainfall', 'NN')
('1902', 'CD')
('1906', 'CD')
('3.25', 'CD')
('inches', 'NNS')
('year', 'NN')
(';', ':')
('atmosphere', 'CC')
(GPE Bahrain/NNP)
(',', ',')
('consequence', 'NN')
('irrigation', 'NN')
('nearness', 'NN')
('sea', 'NN')
(',', ',')
('damp', 'JJ')
('heavy', 'JJ')
(',', ',')
('evidenced', 'JJ')
('mean', 'JJ')
('humidity', 'NN')
('ranges', 'VBZ')
('79', 'CD')
('80', 'CD')
('per', 'IN')
('cent', 'NN')
('saturation', 'NN')
('.', '.')
('The', 'DT')
('rainy', 'NN')
('season', 'NN')
('considered', 'VBN')
('begin', 'VBP')
('middle', 'JJ')
('October', 'NNP')
('end', 'NN')
('middle', 'NN')
('May', 'NNP')
(';', ':')
('rainy', 'JJ')
('days', 'NNS')
('however', 'RB')
('ordinarily', 'RB')
('3', 'CD')
('6', 'CD')
('July', 'NNP')
('.', '.')
('In', 'IN')
(GPE Bahrain/NNP Shaikhs/NNP)
(',', ',')
('flocks', 'VBZ')
('herds', 'NNS')
(',', ',')
('welcome', 'JJ')
('rain', 'NN')
(';', ':')
('poorer', 'CC')
('classes', 'NNS')
('frail', 'VBP')
('huts', 'NNS')
('date', 'NN')
('fronds', 'VBZ')
('causes', 'VBZ')
('serious', 'JJ')
('discomfort', 'NN')
('.', '.')
('The', 'DT')
('prevailing', 'NN')
('wind', 'NN')
(PERSON Shamāl/NNP)
('north-wester', 'NN')
(',', ',')
('winter', 'NN')
('violent', 'NN')
('dangerous', 'JJ')
('shipping', 'NN')
(';', ':')
('north', 'JJ')
('wind', 'VBP')
('frequent', 'JJ')
(';', ':')
('strong', 'JJ')
('wind', 'NN')
('besides', 'IN')
(PERSON Shamāl/NNP Qaus/NNP)
('south-east', 'JJ')
('blows', 'NNS')
('irregularly', 'JJ')
('December', 'NNP')
('April', 'NNP')
('.', '.')
('On', 'IN')
('whole', 'JJ')
('climate', 'NN')
(PERSON Bahrain/NNP)
('probably', 'RB')
('superior', 'JJ')
(PERSON Masqat/NNP Bandar/NNP)
("'Abbās", 'POS')
('certainly', 'RB')
('excels', 'VBZ')
('neighbouring', 'VBG')
('shores', 'NNS')
(PERSON Qatīf/NNP)
(',', ',')
('rheumatic', 'JJ')
('affections', 'NNS')
('common', 'JJ')
(',', ',')
('also', 'RB')
('diseases', 'VBZ')
('heart', 'NN')
('lungs', 'NNS')
('.', '.')
('A', 'DT')
('grey-headed', 'JJ')
('negro', 'NN')
('hardly', 'RB')
('ever', 'RB')
('seen', 'VBN')
(',', ',')
('pearl', 'JJ')
('divers', 'NNS')
('Bahrain', 'VBP')
('notoriously', 'RB')
('short-lived', 'JJ')
('race', 'NN')
(',', ',')
('though', 'IN')
('possibly', 'RB')
('due', 'JJ')
('rather', 'RB')
('occupation', 'JJ')
('climate.Natural', 'JJ')
('products', 'NNS')
('wild', 'VBP')
('animals.—The', 'JJ')
('minerals', 'NNS')
('value', 'NN')
(PERSON Bahrain/NNP)
('mentioned', 'VBD')
('already', 'RB')
('paragraph', 'JJ')
('geology', 'NN')
('.', '.')
('There', 'EX')
('almost', 'RB')
('natural', 'JJ')
('vegetation', 'NN')
(';', ':')
('mangroves', 'NNS')
('creeks', 'VBP')
('ber', 'NN')
('trees', 'NNS')
('places', 'NNS')
('appear', 'VBP')
('exceptions', 'NNS')
(';', ':')
('even', 'RB')
('grass', 'VBP')
('except', 'IN')
('artificially', 'RB')
('cultivated', 'VBN')
('.', '.')
('A', 'DT')
('kind', 'NN')
('small', 'JJ')
('gazelle', 'NN')
('(', '(')
('believed', 'VBN')
(PERSON Arabica/NNP)
(')', ')')
('uncommon', 'NN')
('less', 'RBR')
('inhabited', 'JJ')
('parts', 'NNS')
('main', 'JJ')
('island', 'NN')
(';', ':')
('hares', 'NNS')
('mongoose', 'VBP')
('fairly', 'RB')
('numerous', 'JJ')
('.', '.')
('The', 'DT')
(ORGANIZATION Houbara/NNP)
('bustard', 'NN')
('winter', 'NN')
('visitor', 'NN')
(',', ',')
(PERSON Shaikhs/NNP)
('keep', 'VB')
('hawks', 'NN')
('(', '(')
('imported', 'VBN')
(GPE Persia/NNP)
(')', ')')
('hunting', 'NN')
('.', '.')
(PERSON Sandgrouse/NNP)
('also', 'RB')
('sometimes', 'RB')
('seen', 'VBN')
(',', ',')
('apparently', 'RB')
('sought', 'VBD')
('native', 'JJ')
('sportsmen.Population', 'NN')
('tribes.—No', 'NN')
('census', 'NN')
('ever', 'RB')
('taken', 'VBN')
('population', 'NN')
(GPE Bahrain/NNP)
(',', ',')
('subjoin', 'VBD')
('rough', 'JJ')
('estimate', 'NN')
('based', 'VBN')
('reported', 'VBD')
('number', 'NN')
('houses', 'NNS')
('.', '.')
('The', 'DT')
('totals', 'NNS')
('souls', 'VBP')
('calculated', 'VBN')
('assumed', 'JJ')
('average', 'JJ')
('5', 'CD')
('persons', 'NNS')
('house', 'NN')
(';', ':')
(',', ',')
('improbable', 'JJ')
(',', ',')
('figure', 'NN')
('assumed', 'VBD')
('low', 'JJ')
('towns', 'NNS')
(',', ',')
('totals', 'NNS')
('urban', 'JJ')
('population', 'NN')
('must', 'MD')
('proportionally', 'RB')
('increased.Island.Towns.Sunni', 'VB')
("towns-people.Shi'ah", 'JJ')
('owns-people.Sunni', 'JJ')
('villages.Sunni', 'NN')
("villagers.Shi'ah", 'NN')
("villages.Shi'ah", 'NN')
("villagers.BahrainManāmah9800150001062757319450Budaiya'8000Nil.MuharraqMuharraq1900010008777532750Hadd8000Nil.Na'asān", 'NN')
('(', '(')
('Umm', 'NNP')
(')', ')')
('NilNil.Nil.Nil.Nil.Nil.Nil.Nabi', 'NNP')
('SālihDo.Do.Do.Do.Do.2375SitrahDo.Do.Do.115071500TOTAL', 'NNP')
('...', ':')
('448001600019142008524075The', 'CD')
('principality', 'NN')
('contains', 'NNS')
(',', ',')
('estimate', 'NN')
('possible', 'JJ')
('form', 'NN')
(',', ',')
('4', 'CD')
('towns', 'NNS')
('population', 'NN')
('60800', 'CD')
('souls', 'JJ')
('104', 'CD')
('villages', 'NNS')
('population', 'NN')
('38275', 'CD')
(';', ':')
('99075', 'CD')
('.', '.')
('To', 'TO')
('must', 'MD')
('added', 'VB')
('200', 'CD')
('non-Muhammadans', 'JJ')
('Manāmah', 'NNP')
(',', ',')
('making', 'VBG')
('grand', 'JJ')
('total', 'JJ')
('99275', 'CD')
('settled', 'VBD')
('inhabitants', 'NNS')
('.', '.')
('The', 'DT')
('nomads', 'NNS')
("Na'īm", 'NNP')
('Bedouins', 'NNP')
(',', ',')
('frequent', 'JJ')
('island', 'NN')
('varying', 'VBG')
('numbers', 'NNS')
(',', ',')
("Ka'abān", 'NNP')
('settled', 'VBD')
('residence.Of', 'JJ')
('whole', 'JJ')
('population', 'NN')
('100000', 'CD')
('souls', 'JJ')
('60000', 'CD')
(',', ',')
('chiefly', 'NN')
('townsmen', 'NNS')
(',', ',')
(ORGANIZATION Sunnis/NNP)
('40000', 'CD')
(',', ',')
('mostly', 'RB')
('villagers', 'NNS')
(',', ',')
("Shī'ahs.The", 'NNP')
('largest', 'JJS')
('community—for', 'NN')
('called', 'VBN')
('tribe—in', 'JJ')
('principality', 'NN')
('undoubtedly', 'RB')
(PERSON Bahrānis/NNP Bahārinah/NNP)
(',', ',')
('compose', 'VBP')
('nearly', 'RB')
('whole', 'JJ')
("Shī'ah", 'NNP')
('community', 'NN')
('three-fifths', 'JJ')
('rural', 'JJ')
('population', 'NN')
('.', '.')
('The', 'DT')
('remainder', 'NN')
('people', 'NNS')
(',', ',')
('except', 'IN')
('foreigners', 'NNS')
('Persians', 'NNPS')
(PERSON Basrah/NNP Arabs/NNP)
(',', ',')
(GPE Hindus/NNP)
(',', ',')
(PERSON Jews/NNP)
(',', ',')
('etc.', 'NN')
(',', ',')
('belong', 'RB')
('various', 'JJ')
('Sunni', 'NNP')
('tribes', 'NN')
('classes', 'NNS')
(',', ',')
('important', 'JJ')
(',', ',')
('numerically', 'RB')
('reasons', 'NNS')
(',', ',')
('appear', 'VBP')
('following', 'VBG')
('synopsis', 'NN')
(':', ':')
('—Name.Number', 'NNP')
('houses.Where', 'RB')
('located.Remarks', 'VBZ')
('.', '.')
("'Ainain", "''")
('(', '(')
('Āl', 'NNP')
('Bū', 'NNP')
(')', ')')
("95'Askar", 'CD')
('Muharraq', 'NNP')
('Town.Belong', 'NNP')
('Māliki', 'NNP')
('sect', 'JJ')
('Sunnis', 'NNP')
('.', '.')
("'Ali", "''")
('(', '(')
('Āl', 'IN')
('Bin-', 'NNP')
(')', ')')
('500Muharraq', 'CD')
('Hadd', 'NNP')
('Towns.Do', 'NNP')
('.', '.')
("'Amāmarah140Budaiya", "''")
("'", 'POS')
(PERSON Muharraq/NNP)
('Town.Do.Dawāsir1000Budaiya', 'NNP')
("'", 'POS')
("Zallāq.Do.Dhā'in", 'NNP')
('(', '(')
('Āl', 'NNP')
(')', ')')
('10Muharraq', 'CD')
('Town.Do.Hūwalah3080Manāmah', 'NNP')
(',', ',')
(PERSON Muharraq/NNP Town/NNP)
(',', ',')
(PERSON Budaiya/NNP)
("'", 'POS')
(',', ',')
(PERSON Hadd/NNP)
('Hālat-bin', 'NNP')
('Anas.Like', 'NNP')
("Shī'ah", 'NNP')
('Bahārinah', 'NNP')
('Hūwalah', 'NNP')
('class', 'NN')
(',', ',')
('tribe', 'NN')
('.', '.')
('All', 'DT')
('Sunnis', 'NNP')
(',', ',')
(PERSON Māliki/NNP)
("Shāfi'i", 'NNP')
("persuasion.Janā'āt3Manāmah", 'NN')
('.Belong', 'NNP')
('Māliki', 'NNP')
('sect', 'VBP')
("Sunnis.Ka'abān60½", 'NNP')
('Jasairah', 'NNP')
('½', 'NNP')
('wandering', 'VBG')
('near', 'IN')
('Jabal-ad-Dukhān.Do.Khālid', 'NNP')
('(', '(')
(ORGANIZATION Bani/NNP)
(')', ')')
('Dawāwdah', 'NNP')
('section50Jasrah.Belong', 'JJ')
('Māliki', 'NNP')
('sect', 'NN')
('Sunnis.Kibisah8Jasairah', 'NNP')
("Rifā'-al-Gharbi", 'NNP')
('.Do.Kuwārah', 'NNP')
('(', '(')
('Āl', 'NNP')
('Bū', 'NNP')
(')', ')')
('20Muharraq', 'CD')
('Town', 'NNP')
('Hadd.Do.Madhāhakah150Busaitīn', 'NNP')
('.Do.Maqla', 'NNP')
('(', '(')
(PERSON Al/NNP)
('Bin-', 'NNP')
(')', ')')
('100Hālat', 'CD')
('Abu', 'NNP')
("Māhur.Do.Manāna'ah120Qalāli", 'NNP')
(',', ',')
(PERSON Muharraq/NNP Town/NNP)
("Hadd.Do.Mu'āwadah20Muharraq", 'NNP')
('Town.Do.Muraikhāt15Hālat', 'NNP')
('Umm-al-Baidh.Do.Musallam', 'NNP')
('(', '(')
('Āl', 'NNP')
(')', ')')
('25Muharraq', 'CD')
('Town', 'NNP')
(',', ',')
(PERSON Hadd/NNP Hālat/NNP Abu/NNP)
("Māhur.Do.Na'īmFluctuatingMostly", 'NNP')
('nomad', 'RB')
(',', ',')
('154', 'CD')
('settled', 'VBD')
('families', 'NNS')
('found', 'VBN')
("Hālat-an-Na'īm", 'NNP')
(',', ',')
('Umm-ash-Shajar', 'NNP')
(',', ',')
('Umm-ash-Shajairah', 'NNP')
(',', ',')
('Hālat-as-Sulutah', 'NNP')
("Rifā'-al-Gharbi.Do.Negroes", 'NNP')
('(', '(')
('free', 'JJ')
(')', ')')
('860Manāmah', 'CD')
(',', ',')
(PERSON Muharraq/NNP Town/NNP)
(',', ',')
(PERSON Budaiya/NNP)
("'", 'POS')
(',', ',')
(PERSON Hālat/NNP Abu/NNP Māhur/NNP)
("Rifā'-ash-Sharqi", 'NNP')
('.There', 'NNP')
('free', 'JJ')
('negroes', 'NNS')
('places', 'NNS')
('also', 'RB')
(',', ',')
('lists', 'NNS')
('villages', 'NNS')
('treated', 'VBD')
('tribe', 'JJ')
('class', 'NN')
('among', 'IN')
('dwell', 'NN')
('.', '.')
('Only', 'RB')
('50', 'CD')
('free', 'JJ')
('negroes', 'NNS')
('Bahrain', 'VBP')
('formerly', 'RB')
('belonged', 'VBN')
("Shī'ah", 'NNP')
('masters', 'NNS')
("Shī'ahs.Negroes", 'NNP')
('(', '(')
('slaves', 'NNS')
(',', ',')
('living', 'VBG')
('separately', 'RB')
('masters', 'NNS')
(')', ')')
('1160Budaiya', 'CD')
("'", "''")
(',', ',')
(PERSON Muharraq/NNP Town/NNP)
(',', ',')
(PERSON Hālat/NNP Abu/NNP Māhur/NNP)
(',', ',')
(PERSON Manāmah/NNP)
("Rifā'-ash-Sharqi", 'NNP')
('.Negro', 'NNP')
('slaves', 'VBZ')
('places', 'NNS')
('mentioned', 'VBD')
('distinguished', 'JJ')
('tables', 'NNS')
('villages', 'NNS')
('tribe', 'VBP')
('masters', 'NNS')
('belong', 'RB')
('.', '.')
('Only', 'RB')
('20', 'CD')
('negro', 'JJ')
('slaves', 'NNS')
(PERSON Bahrain/NNP)
('owned', 'VBD')
("Shī'ahs", 'NNP')
("Shī'ahs", 'NNP')
('themselves.Qumárah10Muharraq', 'NN')
('Town.Belong', 'NNP')
('Māliki', 'NNP')
('sect', 'NN')
('Sunnis.Rumaih', 'NNP')
('(', '(')
('Āl', 'NNP')
('Bū', 'NNP')
(')', ')')
('115Jan', 'CD')
(',', ',')
(PERSON Busaitīn/NNP Muharraq/NNP)
('Town.Do.Sādah150Hadd.Belong', 'NNP')
(PERSON Hanafi/NNP)
("Shāfi'i", 'NNP')
('sects', 'VBZ')
('Sunnis.Sūdān10Do.Belong', 'NNP')
('Hanbali', 'NNP')
('sect', 'JJ')
('Sunnis.Sulutah10Hālat-as-Sulutah.Belong', 'NNP')
('Māliki', 'NNP')
('sect', 'NN')
('Sunnis', 'NNP')
('.', '.')
("'Utūb930Muharraq", "''")
(PERSON Town/NNP)
(',', ',')
(PERSON Manāmah/NNP)
(',', ',')
(PERSON Rifā/NNP)
("'s", 'POS')
(',', ',')
(PERSON Sharaibah/NNP)
(',', ',')
(GPE Busaitīn/NNP)
(',', ',')
(PERSON Hālat/NNP Abu/NNP Mahūr/NNP Hālat/NNP)
('Umm-al-Baidh.Do.Yās', 'NNP')
('(', '(')
(ORGANIZATION Bani/NNP)
(')', ')')
('Āl', 'VBP')
(PERSON Bū/NNP Falāsah/NNP)
('section120Mostly', 'RB')
(GPE Hadd/NNP)
(';', ':')
('Hālat-as-Sulutah', 'NNP')
(',', ',')
(GPE Busaitīn/NNP)
(',', ',')
('Umm-ash-Shajar', 'NNP')
(',', ',')
('Umm-ash-Shajairah', 'NNP')
('Muharraq', 'NNP')
('Town.Do.Yatail', 'NNP')
('(', '(')
(PERSON Al/NNP Bani/NNP)
(')', ')')
('10Salbah.Do.Ziyāinah150Muharraq', 'CD')
('Town.Do.Besides', '$')
('69', 'CD')
('Hindus', 'NNP')
(GPE Bahrain/NNP)
(',', ',')
('unaccompanied', 'JJ')
('families', 'NNS')
(';', ':')
('number', 'NN')
('rises', 'VBZ')
('pearling', 'VBG')
('season', 'NN')
('175', 'CD')
('souls.Although', 'NN')
(PERSON Bahārinah/NNP)
('numerically', 'RB')
('strongest', 'JJS')
('class', 'NN')
('far', 'RB')
('politically', 'RB')
('important', 'JJ')
(';', ':')
('indeed', 'RB')
('position', 'NN')
('little', 'RB')
('better', 'RBR')
('one', 'CD')
('serfdom', 'NN')
('.', '.')
('Most', 'JJS')
('date', 'NN')
('cultivation', 'NN')
('agriculture', 'NN')
('islands', 'VBZ')
('hands', 'NNS')
(';', ':')
('also', 'RB')
('depend', 'VBP')
(',', ',')
('though', 'RB')
('less', 'RBR')
('extent', 'JJ')
('Sunni', 'NNP')
('brethren', 'NN')
(',', ',')
('upon', 'IN')
('pearl', 'JJ')
('diving', 'VBG')
('seafaring', 'VBG')
('occupations.The', 'JJ')
(ORGANIZATION Hūwalah/NNP)
('numerous', 'JJ')
('community', 'NN')
(GPE Sunnis/NNP)
(';', ':')
('townsmen', 'NNS')
('living', 'VBG')
('trade', 'NN')
('without', 'IN')
('solidarity', 'NN')
('among', 'IN')
(';', ':')
('consequently', 'RB')
('unimportant', 'JJ')
(',', ',')
('except', 'IN')
('commercially', 'RB')
('.', '.')
('The', 'DT')
("'Utūb", 'NN')
(',', ',')
(PERSON Sādah/NNP Dawāsir/NNP)
('influential', 'JJ')
('tribes', 'NNS')
(GPE Bahrain/NNP)
(';', ':')
('first', 'JJ')
('account', 'NN')
('connection', 'NN')
('ruling', 'VBG')
('family', 'NN')
(',', ',')
(GPE Sādah/NNP)
('virtue', 'NN')
('sacred', 'VBD')
('origin', 'NN')
(',', ',')
(PERSON Dawāsir/NNP)
('comparatively', 'RB')
('wealthy', 'JJ')
(',', ',')
('united', 'JJ')
(',', ',')
('obedient', 'JJ')
('chiefs', 'NNS')
(',', ',')
('partly', 'RB')
(',', ',')
('perhaps', 'RB')
(',', ',')
('recent', 'JJ')
('immigrants', 'NNS')
('Najd', 'NNP')
('enjoy', 'VBP')
('certain', 'JJ')
('prestige', 'NN')
('.', '.')
('The', 'DT')
('remainder', 'NN')
('Sunni', 'NNP')
('population', 'NN')
('mostly', 'RB')
('settled', 'VBD')
('near', 'IN')
('coast', 'NN')
(',', ',')
('depend', 'VBP')
('chiefly', 'NN')
('sea', 'NN')
(',', ',')
('lesser', 'JJR')
('degree', 'JJ')
('cultivation', 'NN')
(',', ',')
('subsistence.The', 'JJ')
('races', 'NNS')
('inhabiting', 'VBG')
(GPE Bahrain/NNP)
('generally', 'RB')
('insignificant', 'JJ')
('appearance', 'NN')
('nothing', 'NN')
('remarkable', 'JJ')
('character', 'NN')
('.', '.')
('The', 'DT')
('pearl', 'NN')
('diving', 'VBG')
('sections', 'NNS')
('community', 'NN')
('seem', 'VBP')
('distinguished', 'VBN')
('weak', 'JJ')
('eyes', 'NNS')
('raucous', 'JJ')
('voices', 'NNS')
('.', '.')
('The', 'DT')
('huts', 'JJ')
('bulk', 'JJ')
('rural', 'JJ')
('population', 'NN')
('live', 'VBP')
('constructed', 'VBN')
('date', 'NN')
('reed', 'NN')
('mats', 'NNS')
('gable', 'JJ')
('roofs.In', 'NN')
('concluding', 'VBG')
('paragraph', 'NN')
('may', 'MD')
('notice', 'VB')
(',', ',')
('though', 'IN')
('negroes', 'RB')
('numerous', 'JJ')
(GPE Bahrain/NNP)
(',', ',')
('mixture', 'NN')
(PERSON Arab/NNP)
('negro', 'RB')
('blood', 'VBD')
('less', 'RBR')
('frequent', 'JJ')
('ports', 'NNS')
("'Omān", 'POS')
(PERSON Trucial/NNP)
("'Omān", 'NNP')
('.', '.')
('The', 'DT')
('analysis', 'NN')
('population', 'NN')
('given', 'VBN')
('shows', 'NNS')
('nearly', 'RB')
('5000', 'CD')
('free', 'JJ')
('negroes', 'NNS')
('6000', 'CD')
('negro', 'JJ')
('slaves', 'NNS')
('principality', 'NN')
(',', ',')
('figures', 'VBZ')
('probably', 'RB')
('much', 'JJ')
('mark', 'NN')
('impossible', 'JJ')
(',', ',')
('except', 'IN')
('larger', 'JJR')
('places', 'NNS')
(',', ',')
('distinguish', 'JJ')
('negro', 'NN')
('families', 'NNS')
('communities', 'NNS')
('among', 'IN')
('live', 'JJ')
('owned', 'VBN')
('.', '.')
('The', 'DT')
('reason', 'NN')
('given', 'VBN')
('nonmixture', 'RB')
('blood', 'NN')
(GPE Bahrain/NNP)
('full-blood', 'NN')
('negro', 'NN')
('slave', 'VBP')
('valuable', 'JJ')
('half-caste', 'NN')
('son', 'NN')
('.', '.')
('The', 'DT')
('services', 'NNS')
('slaves', 'NNS')
('hired', 'VBD')
('masters', 'NNS')
('pearl', 'JJ')
('fishery', 'NN')
(',', ',')
('winter', 'NN')
('slave', 'VBP')
('family', 'NN')
('left', 'VBD')
('support', 'NN')
('.', '.')
('In', 'IN')
('cases', 'NNS')
('slave', 'VBP')
('married', 'VBN')
('free', 'JJ')
('woman', 'NN')
('master', 'NN')
('concern', 'NN')
(';', ':')
('others', 'NNS')
('master', 'VBP')
('provides', 'VBZ')
('slave', 'VBP')
('slave', 'JJ')
('wife', 'NN')
('takes', 'VBZ')
('possession', 'NN')
('offspring.Agriculture', 'NN')
('domestic', 'JJ')
('animals.—', 'IN')
('The', 'DT')
('agricultural', 'JJ')
('products', 'NNS')
(GPE Bahrain/NNP)
('chiefly', 'NN')
('fruits', 'NNS')
(',', ',')
('lucerne', 'JJ')
('vegetables', 'NNS')
(';', ':')
('last', 'JJ')
(',', ',')
('including', 'VBG')
('brinjals', 'NNS')
(',', ',')
('cucumbers', 'NNS')
(',', ',')
('carrots', 'NNS')
(',', ',')
('leeks', 'JJ')
('onions', 'NNS')
(',', ',')
('almost', 'RB')
('non-arboreal', 'JJ')
('crops', 'NNS')
('.', '.')
('The', 'DT')
('supply', 'NN')
('vegetables', 'VBZ')
('sufficient', 'JJ')
('local', 'JJ')
('consumption', 'NN')
('none', 'NN')
('imported', 'VBN')
('.', '.')
('Plant', 'NNP')
('growth', 'NN')
('generally', 'RB')
('rank', 'VBN')
(',', ',')
('produce', 'VBP')
('poor', 'JJ')
('.', '.')
('The', 'DT')
('citrons', 'NNS')
(GPE Bahrain/NNP)
(',', ',')
('true', 'JJ')
(',', ',')
('best', 'JJS')
(PERSON Persian/JJ Gulf/NNP)
(',', ',')
('kind', 'NN')
('small', 'JJ')
('luscious', 'JJ')
('banana', 'NN')
(';', ':')
('date', 'NN')
('palms', 'NNS')
('dull', 'VBP')
('green', 'JJ')
('poor', 'JJ')
('stunted', 'VBN')
('appearance', 'NN')
(',', ',')
('fruits', 'NNS')
(',', ',')
('whether', 'IN')
('almonds', 'NNS')
(',', ',')
('apricots', 'NNS')
(',', ',')
('figs', 'NNS')
(',', ',')
('grapes', 'NNS')
(',', ',')
('limes', 'NNS')
(',', ',')
('melons', 'NNS')
(',', ',')
('peaches', 'NNS')
('pomegranates', 'NNS')
(',', ',')
('fall', 'NN')
('mediocrity', 'NN')
('.', '.')
('There', 'EX')
('tamarinds', 'VBZ')
(',', ',')
('mango', 'NN')
('mulberry', 'NN')
('seen', 'VBN')
(',', ',')
('rare', 'JJ')
('.', '.')
('The', 'DT')
('soil', 'NN')
('perhaps', 'RB')
('sterile', 'RB')
(',', ',')
('although', 'IN')
(',', ',')
('without', 'IN')
('cultivation', 'NN')
(',', ',')
('ordinarily', 'RB')
('produce', 'VB')
('even', 'RB')
('grass', 'NN')
(';', ':')
('deficiency', 'NN')
('rainfall', 'NN')
('probably', 'RB')
('chief', 'JJ')
('reason', 'NN')
(',', ',')
('exceptionally', 'RB')
('wet', 'JJ')
('years', 'NNS')
('grass', 'NN')
('said', 'VBD')
('grow', 'JJ')
('knee-deep', 'JJ')
('central', 'JJ')
('depression', 'NN')
('main', 'JJ')
('island', 'NN')
('edge', 'NN')
('Mimlahat-al-Mattalah', 'NNP')
('.', '.')
(PERSON All/NNP)
('cultivated', 'VBD')
('land', 'NN')
('irrigated', 'VBN')
('springs', 'NNS')
('wells', 'NNS')
('.', '.')
('The', 'DT')
('springs', 'NNS')
('many', 'JJ')
('copious', 'JJ')
(',', ',')
('low', 'JJ')
('level', 'NN')
('many', 'JJ')
('lie', 'NN')
('makes', 'VBZ')
('necessary', 'JJ')
('conduct', 'NN')
('water', 'NN')
('plantations', 'NNS')
('deep', 'VBP')
('cuttings', 'NNS')
(',', ',')
('places', 'NNS')
('lined', 'VBD')
('stone', 'NN')
('others', 'NNS')
('carried', 'VBD')
('outcrops', 'JJ')
('rock', 'NN')
('.', '.')
('Irrigation', 'NNP')
('3', 'CD')
('kinds', 'NNS')
(',', ',')
('date', 'NN')
('plantations', 'NNS')
('distinguished', 'VBD')
('Nakhl-as-Saihنخل', 'NNP')
('السيح', 'NNP')
('Dūlāb', 'NNP')
('دولاب', 'NNP')
('Nakhl-al-Gharrāfah', 'NNP')
('نخل', 'NNP')
('الغرّافه', 'NNP')
(';', ':')
('first', 'JJ')
('kind', 'NN')
('watered', 'JJ')
('gravitation', 'NN')
('flowing', 'VBG')
('channels', 'NNS')
(',', ',')
('second', 'JJ')
('lift', 'NN')
('1', 'CD')
('2', 'CD')
('skins', 'NNS')
('raised', 'VBD')
('bullocks', 'NNS')
('donkeys', 'NNS')
('walking', 'VBG')
('slope', 'NN')
(',', ',')
('third', 'JJ')
('Gharrāfah', 'NNP')
('lever', 'NN')
('skin', 'NN')
('counterpoise.2', 'NN')
('Fish-manure', 'NNP')
('used', 'VBD')
('fertilise', 'NN')
('date', 'NN')
('groves', 'NNS')
('.', '.')
('Agricultural', 'NNP')
('produce', 'VBP')
('brought', 'JJ')
('market', 'NN')
('daily', 'RB')
(PERSON Manāmah/NNP)
('bazaar', 'NN')
(',', ',')
('weekly', 'JJ')
('Sūq-al-Khamīs', 'NNP')
('fair', 'NN')
('(', '(')
('Thursdays', 'NNPS')
(')', ')')
('place', 'NN')
('near', 'IN')
("Qal'at-al-'Ajāj", 'NNP')
('(', '(')
(ORGANIZATION Mondays/NNP)
(')', ')')
('.The', 'NNP')
('valuable', 'JJ')
('domestic', 'JJ')
('animals', 'NNS')
('donkeys', 'NNS')
('particular', 'JJ')
('breed', 'NN')
(',', ',')
('12', 'CD')
('13.1', 'CD')
('hands', 'NNS')
('height', 'NN')
(';', ':')
('generally', 'RB')
('white', 'JJ')
('incline', 'NN')
('greyness', 'NN')
(',', ',')
('probably', 'RB')
('account', 'VBP')
('impure', 'JJ')
('breeding', 'NN')
('.', '.')
('The', 'DT')
('stock', 'NN')
('originally', 'RB')
('imported', 'VBN')
(PERSON Hasa/NNP)
('perhaps', 'RB')
('finest', 'JJS')
('kind', 'NN')
('donkey', 'JJ')
('world', 'NN')
('.', '.')
('The', 'DT')
('females', 'NNS')
(',', ',')
('less', 'JJR')
('noisy', 'JJ')
('males', 'NNS')
(',', ',')
('sold', 'VBD')
('higher', 'JJR')
('prices', 'NNS')
(',', ',')
('good', 'JJ')
('one', 'CD')
('sometimes', 'RB')
('fetches', 'VBZ')
('much', 'JJ')
(GPE R500/NNP)
(';', ':')
('stallions', 'NNS')
('sold', 'VBD')
('professional', 'JJ')
('donkey', 'NN')
('boys', 'NNS')
(',', ',')
('hire', 'NN')
('towns', 'NNS')
('riding', 'VBG')
('carrying', 'VBG')
('loads', 'NNS')
('.', '.')
('Only', 'RB')
('200', 'CD')
('donkeys', 'NNS')
(',', ',')
('said', 'VBD')
(',', ',')
('exist', 'VBP')
('upon', 'IN')
('islands', 'NNS')
(';', ':')
('number', 'NN')
('donkeys', 'NNS')
('sorts', 'NNS')
(',', ',')
('according', 'VBG')
('statistics', 'NNS')
('obtainable', 'JJ')
(',', ',')
('nearly', 'RB')
('2000', 'CD')
('.', '.')
('The', 'DT')
('ordinary', 'JJ')
('donkeys', 'NNS')
(',', ',')
('1800', 'CD')
('number', 'NN')
(',', ',')
('colours—white', 'NN')
(',', ',')
('grey', 'NN')
(',', ',')
('black', 'JJ')
('brown—and', 'NN')
('vary', 'JJ')
('height', 'VBD')
('12', 'CD')
('10', 'CD')
('hands', 'NNS')
('less', 'RBR')
(';', ':')
('useful', 'JJ')
('capable', 'JJ')
('hard', 'JJ')
('work', 'NN')
('.', '.')
('The', 'DT')
('provender', 'NN')
('donkeys', 'NNS')
('chiefly', 'VBP')
('lucerne', 'JJ')
(',', ',')
('dates', 'VBZ')
('grass', 'NN')
('.', '.')
('Horses', 'VBZ')
('kept', 'VBD')
('family', 'NN')
('ruling', 'VBG')
(PERSON Shaikh/NN)
(':', ':')
('generally', 'RB')
('pure', 'VBP')
(PERSON Najdi/NNP)
('blood', 'NN')
(',', ',')
('somewhat', 'RB')
('deteriorated', 'VBD')
('bred', 'JJ')
('unsuitable', 'JJ')
('climate', 'NN')
('.', '.')
('No', 'DT')
('horses', 'NNS')
('bred', 'VBD')
('exportation', 'NN')
(':', ':')
('owned', 'VBN')
('50', 'CD')
('number', 'NN')
('.', '.')
('About', 'IN')
('100', 'CD')
('camels', 'NNS')
('owned', 'VBN')
(PERSON Shaikh/NNP)
('family', 'NN')
('perhaps', 'RB')
('50', 'CD')
('others', 'NNS')
('belong', 'JJ')
('private', 'JJ')
('individuals', 'NNS')
('two', 'CD')
(PERSON Rifā/NNP)
("'s", 'POS')
(',', ',')
('employ', 'FW')
('carrying', 'VBG')
('water', 'NN')
('Manāmah', 'NNP')
('sale', 'NN')
('.', '.')
('There', 'EX')
('small', 'JJ')
('fine', 'JJ')
('local', 'JJ')
('breed', 'NN')
('cattle', 'NNS')
(',', ',')
('famous', 'JJ')
('even', 'RB')
(GPE Persian/NNP)
('coast', 'NN')
('milking', 'VBG')
('qualities', 'NNS')
(':', ':')
('beef', 'NN')
(PERSON Bahrain/NNP)
('however', 'RB')
(',', ',')
('sells', 'VBZ')
('locally', 'RB')
('6', 'CD')
('annas', 'NNS')
('per', 'IN')
('lb.', 'NN')
(',', ',')
('means', 'VBZ')
('first-rate', 'JJ')
('.', '.')
(PERSON Cattle/NNP)
('sorts', 'VBZ')
('islands', 'NNS')
('reported', 'VBD')
('amount', 'NN')
(';', ':')
('850', 'CD')
('head', 'NN')
(';', ':')
('stall-fed', 'JJ')
('upon', 'IN')
('dates', 'NNS')
(',', ',')
('lucerne', 'NN')
(',', ',')
('bhoosa', 'NN')
(',', ',')
('dried', 'VBD')
('fish', 'JJ')
('old', 'JJ')
('bones', 'NNS')
('sometimes', 'RB')
('unable', 'JJ')
('walk', 'NN')
('account', 'NN')
('over-grown', 'JJ')
('hoofs', 'NN')
('.', '.')
('Sheep', 'JJ')
('goats', 'NNS')
('hardly', 'RB')
('owned', 'VBD')
('outside', 'JJ')
('principal', 'JJ')
('island', 'NN')
(',', ',')
('grazing', 'VBG')
('elsewhere', 'RB')
('except', 'IN')
('upon', 'IN')
(PERSON Umm/NNP Na'asān/NNP)
(';', ':')
('estimated', 'VBD')
('500', 'CD')
('sheep', 'JJ')
('700', 'CD')
('goats', 'NNS')
(',', ',')
('600', 'CD')
('belong', 'JJ')
('leading', 'VBG')
('members', 'NNS')
('Āl', 'JJ')
('Khalīfah', 'NNP')
('ruling', 'NN')
('family', 'NN')
(',', ',')
('400', 'CD')
('larger', 'JJR')
(PERSON Arab/NNP)
('tribes', 'NNS')
(',', ',')
('200', 'CD')
('(', '(')
('stall-fed', 'NN')
(')', ')')
('individual', 'JJ')
('townsmen', 'NNS')
(PERSON Manāmah/NNP Muharraq/NNP)
('.', '.')
(PERSON Mutton/NNP)
('goats', 'NNS')
("'", 'POS')
('flesh', 'NN')
(',', ',')
('mostly', 'RB')
('imported', 'VBN')
(',', ',')
('sells', 'VBZ')
('locally', 'RB')
('7', 'CD')
('8', 'CD')
('annas', 'NNS')
('per', 'IN')
('lb', 'NN')
('.', '.')
('according', 'VBG')
('quality.Pearl', 'JJ')
('sea', 'NN')
('fisheries.—The', 'NN')
('pearl', 'NN')
('fisheries', 'NNS')
(GPE Bahrain/NNP)
('important', 'JJ')
(LOCATION Persian/NNP Gulf/NNP)
('except', 'IN')
(PERSON Trucial/NNP)
("'Omān", 'POS')
(';', ':')
('employ', 'CC')
('917', 'CD')
('boats', 'NNS')
('afford', 'JJ')
('occupation', 'NN')
('17500', 'CD')
('men', 'NNS')
(',', ',')
(GPE Bahrain/NNP)
('pearl', 'VBP')
('boat', 'NN')
('thus', 'RB')
('manned', 'VBN')
('average', 'JJ')
('19', 'CD')
('men.The', 'JJ')
('local', 'JJ')
('sea', 'NN')
('fisheries', 'NNS')
('productive', 'VBP')
('afford', 'NN')
('livelihood', 'NN')
('considerable', 'JJ')
('proportion', 'NN')
('coast', 'NN')
('population', 'NN')
('.', '.')
('The', 'DT')
('fish', 'JJ')
('taken', 'VBN')
('nets', 'NNS')
('tidal', 'JJ')
('weirs', 'NN')
('enclosures', 'NNS')
('called', 'VBN')
(PERSON Hadhras/NNP حظره/NNP)
('made', 'VBD')
('reeds', 'NNS')
(',', ',')
('surround', 'VB')
('large', 'JJ')
('areas.Communications', 'NNS')
('navigation.—The', 'VBP')
(GPE Bahrain/NNP)
('islands', 'NNS')
('traversable', 'JJ')
('directions', 'NNS')
('riding', 'VBG')
('pack', 'NN')
('animals', 'NNS')
(':', ':')
('irrigated', 'VBN')
('tracts', 'NNS')
('water', 'NN')
('channels', 'NNS')
(',', ',')
('would', 'MD')
('otherwise', 'RB')
('seriously', 'RB')
('impede', 'JJ')
('movement', 'NN')
(',', ',')
('generally', 'RB')
('sufficiently', 'RB')
('well', 'RB')
('bridged', 'VBN')
('.', '.')
('The', 'DT')
('important', 'JJ')
('route', 'NN')
('islands', 'NNS')
('Manāmah', 'NNP')
('two', 'CD')
(PERSON Rifā/NNP)
("'s", 'POS')
(':', ':')
('wayfarers', 'NNS')
('travelling', 'VBG')
('either', 'DT')
('ford', 'JJ')
("Maqua'-at-Tūbli", 'NNP')
('creek', 'NN')
('go', 'VBP')
('round', 'JJ')
('head', 'NN')
(',', ',')
('1', 'CD')
('mile', 'NN')
('west', 'NN')
(',', ',')
('according', 'VBG')
('state', 'NN')
('tide.A', 'NN')
('table3', 'NN')
('various', 'JJ')
('kinds', 'NNS')
('craft', 'NN')
('owned', 'VBD')
('ports', 'NNS')
(GPE Bahrain/NNP)
('given', 'VBN')
("below.Island.Port.Baghlahs.Batīls.Būms.Baqārahs.Shū'ais", 'NN')
('Sambuks.Māshuwahs', 'NNP')
('Jolly', 'NNP')
('boats.Totais.Bahrain', 'VBP')
(ORGANIZATION IslandAnas/NNP)
('(', '(')
(PERSON Hālat/NNP Bin/NNP)
(')', ')')
('...', ':')
('...', ':')
('...', ':')
('...', ':')
('...', ':')
('69', 'CD')
("''", "''")
('``', '``')
("'Aqur", 'CD')
('...', ':')
('...', ':')
('...', ':')
('...', ':')
('3', 'CD')
('...', ':')
('6', 'CD')
("''", "''")
('``', '``')
("'Askar", 'POS')
('...', ':')
('1', 'CD')
('...', ':')
('1', 'CD')
('...', ':')
('1719Carried', 'CD')
('...', ':')
('...', ':')
('1', 'CD')
('...', ':')
('132334Brought', 'CD')
('forward', 'NN')
('...', ':')
('...', ':')
('1', 'CD')
('...', ':')
('.132334Bahrain', 'NN')
(ORGANIZATION IslandBārbār/NNP)
(',', ',')
(PERSON Dirāz/NNP Bani/NNP Jamrah/NNP)
('...', ':')
('...', ':')
('...', ':')
('...', ':')
('15', 'CD')
('...', ':')
('17Bahrain', 'CD')
(ORGANIZATION IslandBudaiya/NNP)
("'", 'POS')
('...', ':')
('11', 'CD')
('...', ':')
('105637114Bahrain', 'CD')
(ORGANIZATION IslandIswār/NNP)
('(', '(')
(PERSON Hālat/NNP)
('Bin-', 'NNP')
(')', ')')
('...', ':')
('...', ':')
('...', ':')
('...', ':')
('...', ':')
('316Bahrain', 'CD')
(ORGANIZATION IslandJan/NNP)
('...', ':')
('...', ':')
('...', ':')
('4', 'CD')
('...', ':')
('2832Bahrain', 'CD')
(ORGANIZATION IslandJufair/NN)
('...', ':')
('...', ':')
('...', ':')
('...', ':')
('...', ':')
('1315Bahrain', 'CD')
("IslandMa'āmir", 'NNP')
('...', ':')
('...', ':')
('...', ':')
('222', 'CD')
('...', ':')
('24Bahrain', 'CD')
(ORGANIZATION IslandManāmah2/NNP)
('...', ':')
('6', 'CD')
('...', ':')
('1100109Bahrain', 'CD')
(ORGANIZATION IslandRummān/NNP)
('(', '(')
('Rās-ar-', 'NNP')
(')', ')')
('1', 'CD')
('...', ':')
('21', 'CD')
('...', ':')
('1519Bahrain', 'CD')
("IslandRuqa'ah", 'NNP')
('Jubailāt', 'NNP')
('...', ':')
('...', ':')
('...', ':')
('...', ':')
('23', 'CD')
('...', ':')
('23Bahrain', 'CD')
(ORGANIZATION IslandSanābis/NNP)
('...', ':')
('...', ':')
('...', ':')
('223', 'CD')
('...', ':')
('30Bahrain', 'CD')
(ORGANIZATION IslandSharaibah/NNP)
('...', ':')
('...', ':')
('...', ':')
('...', ':')
('10', 'CD')
('...', ':')
('13Bahrain', 'CD')
(ORGANIZATION IslandZallāq/NNP)
('...', ':')
('...', ':')
('...', ':')
('519933Muharraq', 'CD')
(ORGANIZATION IslandBusaitīn/NNP)
('...', ':')
('...', ':')
('...', ':')
('8122646Muharraq', 'CD')
(ORGANIZATION IslandDair/NN)
('...', ':')
('...', ':')
('...', ':')
('125', 'CD')
('...', ':')
('26Muharraq', 'CD')
(ORGANIZATION IslandHadd/NNP)
('...', ':')
('3', 'CD')
('...', ':')
('4218321249Muharraq', 'CD')
(ORGANIZATION IslandMuharraq/NNP Town/NNP)
('...', ':')
('401468189396707Muharraq', 'CD')
("IslandNa'īm", 'NNP')
('(', '(')
('Hālat-an-', 'NNP')
(')', ')')
(GPE Sulutah/NNP)
('(', '(')
('Hālat-as', 'NNP')
(')', ')')
('...', ':')
('5', 'CD')
('...', ':')
('1250471Muharraq', 'CD')
(ORGANIZATION IslandQalāli/NNP)
('...', ':')
('...', ':')
('...', ':')
('...', ':')
('55459Muharraq', 'CD')
(ORGANIZATION IslandSamāhij/NNP)
('...', ':')
('...', ':')
('...', ':')
('...', ':')
('6612Muharraq', 'CD')
(ORGANIZATION IslandShajairah/NNP)
('(', '(')
('Umm-ash', 'NNP')
(')', ')')
('...', ':')
('...', ':')
('...', ':')
('114', 'CD')
('...', ':')
('15Muharraq', 'CD')
(ORGANIZATION IslandShajar/NNP)
('(', '(')
('Umm-ash-', 'NNP')
(')', ')')
('...', ':')
('...', ':')
('...', ':')
('215', 'CD')
('...', ':')
('17Nabi', 'CD')
('Sālih', 'NNP')
('IslandKāflān', 'NNP')
('Quryah', 'NNP')
('...', ':')
('...', ':')
('...', ':')
('1', 'CD')
('...', ':')
('810Sitrah', 'CD')
(ORGANIZATION IslandMuhazzah/NNP)
('...', ':')
('...', ':')
('...', ':')
('...', ':')
('30131Sitrah', 'CD')
(ORGANIZATION IslandQuriyah/NNP)
('...', ':')
('...', ':')
('...', ':')
('...', ':')
('19', 'CD')
('...', ':')
('19Sitrah', 'CD')
(ORGANIZATION IslandSufālah/NNP)
('...', ':')
('...', ':')
('...', ':')
('...', ':')
('19', 'CD')
('...', ':')
('19', 'CD')
('...', ':')
('Total', 'JJ')
('number', 'NN')
('vessels360221607896941760', 'NN')
('...', ':')
('Total', 'JJ')
('tonnage365184756334829615447320720', 'NN')
('...', ':')
('Total', 'JJ')
('hands', 'NNS')
(ORGANIZATION employed99201016730909748294218390The/JJ)
('column', 'NN')
('totals', 'NNS')
('table', 'JJ')
('includes', 'VBZ')
('vessels', 'NNS')
('class', 'NN')
('ascertained.A', 'NN')
('word', 'NN')
('necessary', 'JJ')
('uses', 'VBZ')
('various', 'JJ')
('types', 'NNS')
('vessels', 'NNS')
('suitable', 'JJ')
(';', ':')
('certain', 'JJ')
('number', 'NN')
('convertible', 'JJ')
('restricted', 'JJ')
('one', 'CD')
('form', 'NN')
('employment', 'NN')
('.', '.')
('Trading', 'NN')
('vessels', 'NNS')
(GPE Baghlahs/NNP)
(',', ',')
('Būm-', 'NNP')
(',', ',')
("Shū'ais", 'NNP')
('Māshuwahs', 'NNP')
(';', ':')
('pearl', 'CC')
('boats', 'NNS')
('chiefly', 'VBP')
(PERSON Baqārahs/NNP)
(',', ',')
(PERSON Mashuwahs/NNP)
(',', ',')
(PERSON Sambūks/NNP Batīls/NNP)
(';', ':')
('cargo', 'NN')
('lighters', 'NNS')
('Būms', 'NNP')
('wide', 'JJ')
('flat-bottomed', 'JJ')
('species', 'NNS')
('called', 'VBN')
('.', '.')
(PERSON Tashāshil/NNP تشاشيل/NNP)
(';', ':')
('ferry', 'NN')
('boats', 'NNS')
(PERSON Māshuwahs/NNP Shū'ais/NNP)
(',', ',')
('fishing', 'VBG')
('boats', 'NNS')
('.', '.')
('It', 'PRP')
('ascertained', 'VBD')
(',', ',')
(GPE Bahrain/NNP)
('100', 'CD')
('vessels', 'NNS')
('used', 'VBN')
('trade', 'NN')
('run', 'NN')
(PERSON Qatīf/NNP)
(',', ',')
("'Oqair", 'NNP')
(',', ',')
(PERSON Qatar/NNP)
(',', ',')
(PERSON Trucial/NNP)
("'Omān", 'NNP')
('PersianCoast', 'NNP')
(',', ',')
('even', 'RB')
(GPE India/NNP)
(',', ',')
(GPE Southern/NNP)
(PERSON Arabia/NNP Zanzibar/NNP)
(';', ':')
('917', 'CD')
('proceed', 'NN')
(',', ',')
('pearl', 'JJ')
('banks', 'NNS')
(';', ':')
('30', 'CD')
('cargo', 'NN')
('lighters', 'NNS')
(PERSON Manāmah/NNP)
('harbour', 'NN')
(',', ',')
('half', 'NN')
(PERSON Būms/NNP)
(';', ':')
('300', 'CD')
('ferry-boats', 'NNS')
('plying', 'VBG')
('chiefly', 'NN')
(PERSON Manāmah/NNP Muharraq/NNP Town/NNP)
(';', ':')
('600', 'CD')
('fishing', 'NN')
('boats', 'NNS')
('.', '.')
('The', 'DT')
('total', 'JJ')
('figures', 'NNS')
('1947', 'CD')
(',', ',')
('clearly', 'RB')
('illustrating', 'VBG')
('statement', 'NN')
('partial', 'JJ')
('convertibility.Manufactures', 'VBZ')
('industries.—The', 'JJ')
('leading', 'VBG')
('handicrafts', 'NNS')
(GPE Bahrain/NNP)
('sailmaking', 'VBG')
('manufacture', 'NN')
('local', 'JJ')
('sale', 'NN')
('woollen', 'NN')
("'Abas", 'CD')
(',', ',')
('lungis', 'VBZ')
('white', 'JJ')
('coloured', 'VBN')
(',', ',')
('cheeked', 'VBD')
('sheeting', 'NN')
('.', '.')
('Matting', 'VBG')
('woven', 'JJ')
('fine', 'JJ')
(ORGANIZATION Hasa/NNP)
('reeds', 'VBZ')
('best', 'JJS')
('obtainable', 'JJ')
(LOCATION Persian/JJ Gulf/NNP)
('.', '.')
('A', 'DT')
('new', 'JJ')
('textile', 'NN')
('industry', 'NN')
('recently', 'RB')
('sprung', 'VBD')
('manufacture', 'NN')
('striped', 'VBD')
('cotton', 'NN')
('cloth', 'NN')
(PERSON Qabas/NNP Zabūns/NNP)
('made', 'VBD')
(',', ',')
('output', 'NN')
('material', 'JJ')
('now-amounts', 'NNS')
('100', 'CD')
('pieces', 'NNS')
('weekly.Bahrain', 'VBP')
('famous', 'JJ')
('throughout', 'IN')
(PERSON Persian/JJ Gulf/NNP)
('boat', 'NN')
('building', 'NN')
(',', ',')
('industry', 'NN')
('gives', 'VBZ')
('employment', 'NN')
('200', 'CD')
('carpenters', 'NNS')
('.', '.')
('In', 'IN')
('1903-04', 'JJ')
('nearly', 'RB')
('130', 'CD')
('boats', 'NNS')
(',', ',')
('ranging', 'VBG')
('price', 'NN')
(PERSON R300/NNP R8000/NNP)
(',', ',')
('sold', 'VBD')
('purchasers', 'NNS')
(PERSON Qatar/NNP Trucial/NNP)
("'Omān", 'POS')
('.', '.')
('The', 'DT')
('timber', 'NN')
('nails', 'NNS')
('used', 'VBN')
('construction', 'NN')
('chiefly', 'NN')
('India.Foreign', 'NNP')
('trade.—Bahrain', 'NN')
('principal', 'JJ')
('pearl', 'NN')
('market', 'NN')
(PERSON Persian/NNP Gulf/NNP)
('.', '.')
('It', 'PRP')
('also', 'RB')
('emporium', 'VBZ')
('general', 'JJ')
('trade', 'NN')
('mainland', 'NN')
(PERSON Arabia/NNP)
(';', ':')
('former', 'JJ')
('function', 'NN')
('important', 'JJ')
(',', ',')
('probable', 'JJ')
(',', ',')
('pearl', 'JJ')
('beds', 'NNS')
('fail', 'VBP')
(',', ',')
(PERSON Shaikhdom/NNP)
('would', 'MD')
('shortly', 'RB')
('reduced', 'VB')
('comparative', 'JJ')
('insignificance', 'NN')
('.', '.')
('Imports', 'NNS')
('oyster', 'RBR')
('shells', 'NNS')
('pearls', 'VBP')
('neighbouring', 'VBG')
('seas', 'JJ')
('coasts', 'NNS')
(';', ':')
('rice', 'NN')
(',', ',')
('cotton', 'NN')
('piece-goods', 'NNS')
(',', ',')
('silk', 'NN')
('piece-goods', 'NNS')
(',', ',')
('embroidery', 'NN')
(',', ',')
('spices', 'NNS')
(',', ',')
('coffee', 'NN')
(',', ',')
('sugar', 'NN')
(',', ',')
('tea', 'NN')
(',', ',')
('coir-rope', 'NN')
(',', ',')
('timber', 'NN')
(',', ',')
('metals', 'NNS')
(',', ',')
('hardware', 'NN')
('haberdashery', 'NN')
(GPE India/NNP)
(';', ':')
('barley', 'NN')
(',', ',')
('wheat', 'NN')
(',', ',')
('ghi', 'NN')
(',', ',')
('carpets', 'NNS')
(',', ',')
('rosebuds', 'NNS')
(',', ',')
('rosewater', 'NN')
(',', ',')
('firewood', 'NN')
(',', ',')
('almonds', 'VBZ')
(',', ',')
('currants', 'NNS')
(',', ',')
('gram', 'NN')
(',', ',')
('walnuts', 'NNS')
(',', ',')
('live', 'JJ')
('cattle', 'NNS')
(',', ',')
('sheep', 'JJ')
('goats', 'NNS')
('henna', 'VBP')
(GPE Persia/NNP)
(';', ':')
('fruit', 'NN')
('sweetmeats', 'NNS')
(ORGANIZATION Sultanate/NNP)
("'Omān", 'POS')
(';', ':')
("'Abas", 'CD')
(',', ',')
('dates', 'NNS')
(',', ',')
('ghi', 'NN')
('hides', 'NNS')
(GPE Hasa/NNP)
(';', ':')
('dates', 'NNS')
(',', ',')
('fruits', 'NNS')
('sheep', 'VBP')
('ghi', 'JJ')
('Qatīf', 'NNP')
(':', ':')
('ghi', 'NN')
(',', ',')
('sheep', 'NN')
(',', ',')
("'Abas", "''")
('little', 'JJ')
('wool', 'JJ')
(GPE Kuwait/NNP)
(';', ':')
('dates', 'NNS')
(',', ',')
('ghi', 'NN')
(',', ',')
(PERSON Mash/NNP)
(',', ',')
('wheat', 'NN')
(',', ',')
('barley', 'NN')
(',', ',')
('tobacco', 'NN')
('coarse', 'JJ')
('reed-mats', 'NNS')
('roofing', 'VBG')
(GPE Turkish/JJ)
("'Irāq", 'NNS')
(';', ':')
('rafter', 'NN')
('cocoauuts', 'NNS')
(PERSON East/NNP Africa/NNP)
('.', '.')
('Of', 'IN')
('important', 'JJ')
('values', 'NNS')
('(', '(')
('lakhs', 'JJ')
('rupees', 'NNS')
(')', ')')
('1905', 'CD')
('pearls', 'NN')
('(', '(')
('118', 'CD')
(')', ')')
(',', ',')
('rice', 'NN')
('(', '(')
('26½', 'CD')
(')', ')')
(',', ',')
('cotton', 'NN')
('piece-goods', 'NNS')
('(', '(')
('91/3', 'CD')
(')', ')')
(',', ',')
('dates', 'NNS')
('(', '(')
('9¼', 'CD')
(')', ')')
(',', ',')
('coffee', 'NN')
('(', '(')
('4¾', 'CD')
(')', ')')
(',', ',')
('wheat', 'NN')
('(', '(')
('4½', 'CD')
(')', ')')
(',', ',')
('ghi', 'NN')
('(', '(')
('2¾', 'CD')
(')', ')')
(',', ',')
('sugar', 'JJ')
('tobaceo', 'NN')
('(', '(')
('2½', 'CD')
(')', ')')
(',', ',')
("'Abas", 'NNP')
('(', '(')
('1¾', 'CD')
(')', ')')
(',', ',')
('silk', 'JJ')
('piece-goods', 'NNS')
('(', '(')
('1½', 'CD')
(')', ')')
(',', ',')
('slaughter', 'JJ')
('animals', 'NNS')
('(', '(')
('11/3', 'CD')
(')', ')')
('timber', 'NN')
('wood', 'NN')
('(', '(')
('1', 'CD')
(')', ')')
('.', '.')
('The', 'DT')
('slaughter', 'NN')
('animals', 'NNS')
('chiefly', 'VBP')
('sheep', 'JJ')
('goats', 'NNS')
('imported', 'VBN')
('periodically', 'RB')
('15', 'CD')
(GPE Persian/JJ)
('butchers', 'NNS')
(PERSON Persian/JJ Coast/NNP)
(';', ':')
('1905', 'CD')
('less', 'JJR')
('14000', 'CD')
('sheep', 'JJ')
('goats', 'NNS')
('thus', 'RB')
('brought', 'VBD')
(GPE Persia/NNP)
('2050', 'CD')
('Qatīf', 'NNP')
('.', '.')
('An', 'DT')
('import', 'NN')
('arms', 'NNS')
(',', ',')
('worth', 'JJ')
('4', 'CD')
('lakhs', 'JJ')
('rupees', 'NNS')
('annually', 'RB')
('years', 'NNS')
('ago', 'RB')
(',', ',')
('entirely', 'RB')
('disappeared', 'VBN')
('.', '.')
('The', 'DT')
('total', 'JJ')
('value', 'NN')
('imports', 'NNS')
('1905', 'CD')
('243', 'CD')
('lakhs', 'NN')
(',', ',')
('103', 'CD')
('lakhs', 'NN')
('India', 'NNP')
('102', 'CD')
('lakhs', 'NN')
(GPE Turkish/JJ)
('possessions', 'NNS')
('.', '.')
('In', 'IN')
('year', 'NN')
('65', 'CD')
('steamers', 'NNS')
('cargo', 'VBP')
(',', ',')
(GPE British/JJ)
('aggregate', 'NN')
('tonnage', 'NN')
('95097', 'CD')
(',', ',')
('entered', 'VBD')
(PERSON Manāmah/NNP)
(',', ',')
('steam', 'NN')
('port', 'NN')
('Bahrain.The', 'NNP')
('principal', 'JJ')
('exports', 'NNS')
('values', 'NNS')
('(', '(')
('lakhs', 'JJ')
('rupees', 'NNS')
(')', ')')
('1905', 'CD')
('wore', 'NN')
('pearls', 'NNS')
('(', '(')
('161', 'CD')
(')', ')')
(',', ',')
('rice', 'NN')
('(', '(')
('6', 'CD')
(')', ')')
(',', ',')
('cotton', 'NN')
('piece-goods', 'NNS')
('(', '(')
('3¾', 'CD')
(')', ')')
(',', ',')
('dates', 'NNS')
('(', '(')
('3¾', 'CD')
(')', ')')
('coffee', 'NN')
('(', '(')
('1½', 'CD')
(')', ')')
('.', '.')
('The', 'DT')
('pearls', 'NN')
('go', 'VBP')
('chiefly', 'JJ')
(GPE Bombay/NNP)
(';', ':')
('dates', 'NNS')
('(', '(')
('dried', 'VBN')
(',', ',')
('called', 'VBN')
(PERSON Salūq/NNP سلوق/NNP)
(',', ',')
('boiled', 'VBD')
(')', ')')
(PERSON Karāchi/NNP)
('ports', 'VBZ')
(PERSON Kathiawar/NNP)
('.', '.')
(PERSON Oyster/NNP)
('shells', 'VBZ')
('sent', 'VBD')
(GPE United/NNP Kingdom/NNP)
(',', ',')
(GPE Germany/NNP)
(',', ',')
(GPE France/NNP)
('.', '.')
('Some', 'DT')
('sail', 'JJ')
('cloth', 'NN')
('exported', 'VBD')
(',', ',')
('mostly', 'RB')
(GPE Basrah/NNP)
(';', ':')
('fine', 'JJ')
('reed', 'NN')
('matting', 'VBG')
('sent', 'VBD')
(PERSON Persia/NNP Turkish/NNP)
("'Irāq", 'POS')
(';', ':')
('portion', 'NN')
('striped', 'VBD')
('fabrics', 'NNS')
(',', ',')
('already', 'RB')
('mentioned', 'VBN')
('locally', 'RB')
('manufactured', 'VBN')
(',', ',')
('disposed', 'JJ')
(ORGANIZATION Hasa/NNP Qatīf/NNP)
('.', '.')
('In', 'IN')
('1905.', 'CD')
('total', 'JJ')
('value', 'NN')
('exports', 'NNS')
('Bahrain', 'VBP')
('204', 'CD')
('lakhs', 'NN')
('rupees', 'NNS')
(',', ',')
('129', 'CD')
('lakhs', 'NN')
('’', 'NNP')
('worth', 'NN')
('went', 'VBD')
(GPE India/NNP)
('62', 'CD')
('lakhs', 'NN')
('’', 'NNP')
('worth', 'IN')
(GPE Turkish/NNP)
('territory', 'NN')
('.', '.')
('In', 'IN')
('year', 'NN')
('35', 'CD')
('steamers4', 'NN')
('cleared', 'VBD')
(PERSON Manāmah/NNP)
('cargo', 'NN')
(';', ':')
(GPE British/JJ)
('total', 'JJ')
('tonnage', 'NN')
('54666.It', 'CD')
('calculated', 'VBD')
('1/3', 'CD')
('total', 'JJ')
('goods', 'NNS')
('imported', 'VBN')
('eventually', 'RB')
('leave', 'VBP')
('island', 'VBP')
('various', 'JJ')
('destinations', 'NNS')
(';', ':')
('least', 'JJS')
('clear', 'JJ')
('trade', 'NN')
(PERSON Bahrain/NNP)
('largely', 'RB')
('transit', 'JJ')
('trade', 'NN')
('.', '.')
('In', 'IN')
('particular', 'JJ')
('Manāmah', 'NNP')
('steam', 'NN')
('port', 'NN')
(PERSON Qatar/NNP)
('promontory', 'NN')
(',', ',')
(PERSON Oases/NNP Hasa/NNP Qatīf/NNP)
(',', ',')
(PERSON Hasa/NNP)
(',', ',')
('part', 'NN')
(PERSON Najd/NNP)
('well', 'RB')
(';', ':')
('goods', 'NNS')
('despatched', 'VBD')
('mainland', 'RB')
('chiefly', 'JJ')
('piece-goods', 'NNS')
(',', ',')
('rice', 'NN')
(',', ',')
('barley', 'NN')
(',', ',')
('metals', 'NNS')
(',', ',')
('coffee', 'NN')
(',', ',')
('sugar', 'NN')
('spices.About', 'NN')
('50', 'CD')
('horses', 'NNS')
('per', 'IN')
('annum', 'NN')
('pass', 'NN')
(GPE Bahrain/NNP)
('en', 'IN')
('route', 'NN')
(PERSON Najd/NNP Hasa/NNP)
(GPE Indian/JJ)
('market', 'NN')
('.', '.')
('Until', 'IN')
('years', 'NNS')
('ago', 'RB')
('donkeys', 'NNS')
('also', 'RB')
('largely', 'RB')
('exported', 'VBN')
(PERSON Bahrain/NNP Persia/NNP)
(';', ':')
('diminution', 'NN')
('numbers', 'NNS')
(',', ',')
('together', 'RB')
('rise', 'NN')
('price', 'NN')
(GPE Bahrain/NNP)
('islands', 'NNS')
(',', ',')
('brought', 'VBD')
('cessation', 'NN')
('present', 'NN')
('trade', 'NN')
('.', '.')
('The', 'DT')
('trade', 'NN')
(PERSON Qatar/NNP)
('divided', 'VBD')
('towns', 'NNS')
(PERSON Manāmah/NNP Muharraq/NNP)
(':', ':')
(PERSON Hasa/NNP Qatīf/NNP)
('concentrated', 'VBD')
(PERSON Manāmah/NNP .Currency/NNP)
(',', ',')
('weights', 'VBZ')
('measures.—The', 'JJ')
('currency', 'NN')
(GPE Bahrain/NNP)
('principality', 'NN')
('mixed', 'NN')
('.', '.')
(GPE Indian/JJ)
('coins', 'NNS')
('denominations', 'NNS')
('popular', 'JJ')
('circulate', 'VBP')
('freely', 'RB')
(';', ':')
(PERSON Maria/NNP Theresa/NNP)
('dollars', 'NNS')
(PERSON Riyāls/NNP ريال/NNP)
('largely', 'RB')
('current', 'JJ')
('pearl', 'NN')
('season', 'NN')
(',', ',')
('divers', 'NNS')
(',', ',')
('mostly', 'RB')
(PERSON Arabs/NNP)
('mainland', 'NN')
(',', ',')
('prefer', 'VBP')
(';', ':')
('quantities', 'NNS')
('imported', 'VBN')
(GPE Bombay/NNP)
('meet', 'JJ')
('demand', 'NN')
('.', '.')
('The', 'DT')
(GPE Riyāl/NNP)
('present', 'NN')
('(', '(')
('1905', 'CD')
(')', ')')
('worth', 'NN')
('normally', 'RB')
('Re', 'NNP')
('.', '.')
('1', 'CD')
('As', 'IN')
('.', '.')
('5', 'CD')
(',', ',')
('liable', 'JJ')
('fluctuations', 'NNS')
('value', 'NN')
('much', 'RB')
('1', 'CD')
('2', 'CD')
('annas', 'NNS')
('either', 'DT')
('way', 'NN')
('.', '.')
('The', 'DT')
(ORGANIZATION Turkish/JJ Līrah/NNP)
('passes', 'VBZ')
('valuation', 'NN')
('Rs', 'NNP')
('.', '.')
('14', 'CD')
('.', '.')
('The', 'DT')
('ordinary', 'JJ')
('unit', 'NN')
('small', 'JJ')
('values', 'NNS')
('however', 'RB')
('imaginary', 'JJ')
('coin', 'NN')
('called', 'VBN')
(PERSON Bahrain/NNP Qrānقران/NNP)
('worth', 'IN')
('2/5ths', 'CD')
('rupee', 'NN')
('.', '.')
('The', 'DT')
(ORGANIZATION Tawīlah/NNP)
('طويله', 'NNP')
('Hasa', 'NNP')
('Oasis', 'NNP')
('seen', 'VBN')
('valued', 'VBN')
('½', 'JJ')
('anna', 'NN')
(',', ',')
('readily', 'RB')
('accepted', 'VBN')
('.', '.')
('A', 'DT')
('large', 'JJ')
('quantity', 'NN')
('specie', 'NN')
('abroad', 'RB')
('enters', 'NNS')
('Bahrain', 'VBP')
('In', 'IN')
('1903', 'CD')
('value', 'NN')
('coin', 'NN')
('imported', 'VBD')
('43', 'CD')
('lakhs', 'NN')
('rupees', 'NNS')
(',', ',')
('coin', 'VBP')
('worth', 'JJ')
('4½', 'CD')
('lakhs', 'JJ')
('known', 'VBN')
('left', 'JJ')
('islands', 'VBZ')
('year.The', 'JJ')
('ordinary', 'JJ')
('weights', 'NNS')
('Bahrain', 'VBP')
(':', ':')
('—Ruba', 'NN')
("'", "''")
(PERSON Mithqālربع/NNP مثقال04/NNP)
('lb.English.Nisf', 'VBZ')
(PERSON Mithqālنصف/NNP)
('مثقال.08do.Mithqālمثقال.16do.Nisf', 'NNP')
("Ruba'-ath-Thamīnنصف", 'NNP')
('ربع', 'NNP')
("الثمين.32do.Ruba'-ath-Thamīnربع", 'JJ')
('الثمين.64do.Thulth', 'NNP')
('Thaminثلث', 'NNP')
('ثمين.86do.Nisf', 'NNP')
('Thamīnنصف', 'NNP')
("ثمين1.29do.Qiyāsقياس1.54do.Thamīnثمين2.57do.Alfالف3.09do.Ruba'ربع4.11do.Mannمنّ57.60do.Rafa'ahرفعه576.00do.The", 'NNP')
('table', 'JJ')
('lineal', 'JJ')
('measure', 'NN')
('runs', 'VBZ')
(':', ':')
('—6', 'NN')
("Sha'arāt", 'NNP')
(PERSON Bardhūn/NNP شعرات/NNP)
('برذون', 'NNP')
('``', '``')
('mule-hairs', 'JJ')
("''", "''")
('=1', 'NN')
(PERSON Habbat/NNP Sha/NNP)
("'", 'POS')
('īr', 'NN')
('``', '``')
('barley-corn', 'JJ')
('.', '.')
("''", "''")
('حبّة', 'JJ')
(ORGANIZATION شعير6/NNP Habbāt/NNP)
("Sha'īr=1", 'NNP')
('Asba', 'NNP')
("'", 'POS')
(ORGANIZATION اصبع/NN)
('``', '``')
('finger-breadth', 'JJ')
('.', '.')
('``', '``')
('4', 'CD')
("Asābi'=1", 'NNP')
(PERSON Qabdhah/NNP قبضه/NNP)
('``', '``')
('fist', 'NN')
('.', '.')
('``', '``')
("'6", 'JJ')
('Qabdhāt=1', 'NNP')
('Dhirā', 'NNP')
("'", 'POS')
(ORGANIZATION ذراع/NN)
('``', '``')
('cubit', 'NN')
('.', '.')
('``', '``')
('4', 'CD')
("Dhirā'=1", 'NNP')
('Bā', 'NNP')
("'", 'POS')
(ORGANIZATION باع/NN)
('``', '``')
('fathom', 'NN')
('.', '.')
('``', '``')
('1000', 'CD')
("Bā'=1", 'NNP')
('Mīl', 'NNP')
(PERSON Hāshimi/NNP ميل/NNP)
('هاشمي', 'NNP')
('``', '``')
('mile', 'NN')
('.', '.')
('``', '``')
('3', 'CD')
('Amyāl=1', 'NNP')
(PERSON Farsakh/NNP فرسخ/NNP)
('hour', 'NN')
("'s", 'POS')
('walk', 'NN')
('.', '.')
('``', '``')
('4', 'CD')
('Farsakh=1', 'NNP')
(PERSON Barīd/NNP بريد/NNP)
("''", "''")
('postal', 'JJ')
('runner', 'NN')
("'s", 'POS')
('stage', 'NN')
('.', '.')
('``', '``')
('3⅛', 'CD')
('Barīd=1', 'NNP')
(PERSON Darjah/NNP درجه/NNP)
("''", "''")
('degree', 'NN')
('.', '.')
('``', '``')
('360', 'CD')
('Darjah=1', 'NNP')
('Dāirat-al-Ardh', 'NNP')
('دائرة', 'NNP')
('الأرض', 'NNP')
("''", "''")
('circuit', 'NN')
('earth.Of', 'VBZ')
('Only', 'RB')
(PERSON Qabdhah/NNP)
(',', ',')
(PERSON Dhirā/NNP)
("'", 'POS')
(',', ',')
(PERSON Bā/NNP)
("'", 'POS')
(PERSON Farsakh/NNP)
('known', 'VBN')
('ordinary', 'JJ')
('illiterate', 'NN')
('people', 'NNS')
('.', '.')
('The', 'DT')
(ORGANIZATION Dhirā/NNP)
("'is", 'POS')
('equivalent', 'JJ')
('18¾', 'CD')
('English', 'JJ')
('inches.General', 'JJ')
('administration.—', 'IN')
('The', 'DT')
('Government', 'NNP')
('Bahrain', 'NNP')
('loose', 'JJ')
('ill-organised', 'JJ')
('character', 'NN')
('.', '.')
('It', 'PRP')
('ruled', 'VBD')
('Shaikh—at', 'NNP')
('present', 'NN')
("'", "''")
('Īsa', 'JJ')
('bin-Ali-who', 'NN')
(',', ',')
('assistance', 'NN')
(PERSON Wazīr/NNP)
('principal', 'NN')
('adviser', 'NN')
(',', ',')
('disposes', 'VBZ')
('matters', 'NNS')
('political', 'JJ')
('general', 'JJ')
('importance', 'NN')
('personally', 'RB')
('governs', 'VBZ')
(',', ',')
('unless', 'IN')
('absent', 'JJ')
('sporting', 'VBG')
('expeditions', 'NNS')
('mainland', 'VBP')
('island', 'NN')
(GPE Muharraq/NNP)
('part', 'NN')
(PERSON Bahrain/NNP Island/NNP)
('adjacent', 'JJ')
('Manāmah', 'NNP')
('.', '.')
('During', 'IN')
('four', 'CD')
('months', 'NNS')
('hot', 'JJ')
('weather', 'NN')
(PERSON Shaikh/NNP)
('seat', 'NN')
(PERSON Manāmah/NNP)
(':', ':')
('headquarters', 'NN')
('rest', 'VBP')
('year', 'NN')
(PERSON Muharraq/NNP Town/NNP)
(',', ',')
('indulges', 'VBZ')
('frequent', 'JJ')
('journeys', 'NNS')
('.', '.')
('A', 'DT')
('brother', 'NN')
(',', ',')
('sons', 'NNS')
(',', ',')
('nephews', 'NNS')
('near', 'IN')
('relations', 'NNS')
('hold', 'VBP')
('fiefs', 'RB')
('various', 'JJ')
('places', 'NNS')
(',', ',')
('almost', 'RB')
('independent', 'JJ')
('possession', 'NN')
('life', 'NN')
(';', ':')
('upon', 'IN')
('estates', 'NNS')
('collect', 'VBP')
('taxes', 'NNS')
('behoof', 'JJ')
('exercise', 'VBP')
('magisterial', 'JJ')
('seignorial', 'JJ')
('jurisdiction', 'NN')
('.', '.')
('The', 'DT')
('important', 'JJ')
('semi-independent', 'JJ')
('holding', 'VBG')
('sort', 'NN')
('present', 'JJ')
('time', 'NN')
('hands', 'VBZ')
(PERSON Shaikh/NNP)
("'s", 'POS')
('brother', 'NN')
(GPE Khālid/NNP)
(';', ':')
('includes', 'VBZ')
('islands', 'NNS')
(PERSON Sitrah/NNP NabiSālih/NNP)
(',', ',')
('well', 'RB')
('villages', 'VBZ')
('east', 'JJ')
('side', 'NN')
(',', ',')
(GPE Bahrain/NNP Island/NNP)
('south', 'VBD')
('Khor-al-Kabb', 'NNP')
('inland', 'NN')
('villages', 'NNS')
('Rifā', 'NNP')
("'-ash-Sharqi", 'CD')
("Rifā'-al-Gharbi", 'NNP')
('.', '.')
('These', 'DT')
('fiefs', 'NNS')
('resumable', 'VBP')
('death', 'NN')
('holder', 'NN')
(':', ':')
('theory', 'NN')
(',', ',')
('least', 'JJS')
(',', ',')
('obligation', 'NN')
('continue', 'VBP')
('favour', 'JJ')
('heirs.Glass', 'NN')
('disabilities', 'NNS')
('privileges.—', 'VBP')
('Under', 'IN')
('régime', 'NN')
(PERSON Shaikh/NNP)
('relations', 'NNS')
('condition', 'NN')
(PERSON Bāhārinah/NNP)
(',', ',')
('bulk', 'NN')
('cultivating', 'VBG')
('class', 'NN')
('principality', 'NN')
(',', ',')
('unhappy', 'JJ')
('.', '.')
('They', 'PRP')
('subject', 'VBP')
('constant', 'JJ')
(PERSON Sukhrah/NNP سخره/NNP)
('corvée', 'NN')
('affects', 'NNS')
('persons', 'NNS')
(',', ',')
('boats', 'NNS')
('animais', 'VBP')
(';', ':')
('position', 'NN')
('regard', 'NN')
('land', 'NN')
('serfs', 'NN')
('rather', 'RB')
('tenants', 'NNS')
(';', ':')
('fail', 'VB')
('deliver', 'NN')
('certain', 'JJ')
('amount', 'NN')
('produce', 'NN')
(',', ',')
('often', 'RB')
('arbitrarily', 'RB')
('enhanced', 'VBD')
(PERSON Shaikh/NNP)
("'s", 'POS')
('servants', 'NNS')
('relations', 'NNS')
(',', ',')
('summarily', 'RB')
('evicted', 'VBN')
('homes', 'NNS')
('cases', 'NNS')
('beaten', 'VBP')
('imprisoned', 'VBN')
('well', 'RB')
('.', '.')
('Some', 'DT')
('Bahārinah', 'NNP')
('theory', 'NN')
('landowners', 'NNS')
(',', ',')
(',', ',')
('allowed', 'VBD')
('past', 'JJ')
('purchase', 'NN')
('gardens', 'NNS')
('obtain', 'VB')
(GPE Sanads/NNP)
(';', ':')
('estates', 'NNS')
('often', 'RB')
('resumed', 'VBD')
('valid', 'JJ')
(',', ',')
('reason', 'NN')
(':', ':')
('even', 'RB')
('sons', 'NNS')
('present', 'VBP')
('ruler', 'VB')
('guilty', 'JJ')
('injustice', 'NN')
('.', '.')
('The', 'DT')
('crops', 'NNS')
(PERSON Bahārinah/NNP)
('frequently', 'RB')
('stolen', 'VBZ')
(PERSON Bedouins/NNP)
('range', 'NN')
('island', 'NN')
(',', ',')
('damaged', 'VBN')
('animals', 'NNS')
('.', '.')
('It', 'PRP')
('appear', 'VBP')
(PERSON Bahārinah/JJ)
('ever', 'RB')
('put', 'VBD')
('death', 'NN')
('without', 'IN')
('regular', 'JJ')
('trial', 'NN')
(GPE Qādhi/NNP)
(';', ':')
('reason', 'NN')
('suspect', 'JJ')
('deaths', 'NNS')
('due', 'JJ')
('ill-treatment', 'JJ')
('sometimes', 'NNS')
('occur', 'VBP')
('among', 'IN')
(',', ',')
('women', 'NNS')
('apt', 'VBP')
('molested', 'VBN')
(PERSON Shaikh/NNP)
("'s", 'POS')
('servants', 'NNS')
('.', '.')
('If', 'IN')
('oppressed', 'VBN')
('beyond', 'IN')
('endurance', 'NN')
(PERSON Bahārinah/NNP)
('might', 'MD')
('emigrate', 'VB')
(PERSON Qatīf/NNP Oasis/NNP)
(',', ',')
('consciousness', 'NN')
('possibility', 'NN')
('principal', 'JJ')
('check', 'NN')
('upon', 'IN')
('inhumanity', 'NN')
('masters.The', 'NN')
('position', 'NN')
(PERSON Dawāsir/NNP Budaiya/NNP)
("'", 'POS')
(PERSON Zallāq/NNP)
('somewhat', 'RB')
('peculiar', 'VBZ')
('.', '.')
('With', 'IN')
('neighbours', 'NNS')
(PERSON Bahārinah/NNP)
('little', 'RB')
(';', ':')
('relations', 'NNS')
(PERSON Shaikh/NNP Bahrain/NNP)
('distant', 'JJ')
('though', 'IN')
('unfriendly', 'JJ')
('.', '.')
('They', 'PRP')
('insist', 'VBP')
('dealt', 'JJ')
('chiefs', 'NNS')
(',', ',')
('given', 'VBN')
(PERSON Shaikh/NNP Bahrain/NNP)
('clearly', 'RB')
('understand', 'VBP')
(',', ',')
('take', 'VBP')
('action', 'NN')
('affecting', 'VBG')
('disapprove', 'NN')
(',', ',')
('quit', 'NN')
(GPE Bahrain/NNP)
('body', 'NN')
('.', '.')
('It', 'PRP')
('considered', 'VBD')
(',', ',')
('however', 'RB')
(',', ',')
('extensive', 'JJ')
('purchases', 'NNS')
('date', 'NN')
('plantations', 'NNS')
('four', 'CD')
('five', 'CD')
('headmen', 'NNS')
('made', 'VBN')
('late', 'JJ')
('years', 'NNS')
('vicinity', 'NN')
('settleruents', 'NNS')
('render', 'VBP')
('threat', 'NN')
('difficult', 'JJ')
(',', ',')
('impossible', 'JJ')
(',', ',')
('fulfilment.The', 'JJ')
('Bedouins', 'NNP')
(',', ',')
('chiefly', 'NN')
("Na'īm", 'NNP')
(',', ',')
('whose', 'WP$')
('presence', 'NN')
('islands', 'VBZ')
('never', 'RB')
('free', 'JJ')
('whose', 'WP$')
('number', 'NN')
('reaches', 'VBZ')
('maximum', 'JJ')
('hot', 'JJ')
('weather', 'NN')
(',', ',')
('cause', 'RB')
('much', 'JJ')
('trouble', 'NN')
('annoyance', 'NN')
('settled', 'VBD')
('inhabitants', 'NNS')
(';', ':')
('patronised', 'VBN')
('encouraged', 'VBD')
(PERSON Shaikh/NNP)
('idea', 'NN')
(',', ',')
('probably', 'RB')
('erroneous', 'JJ')
(',', ',')
('would', 'MD')
('rally', 'VB')
('side', 'JJ')
('emergency.Religions', 'NNS')
('legal', 'JJ')
('institutions.5—', 'VBP')
('The', 'DT')
(ORGANIZATION Shaikh/NNP Bahrain/NNP)
('family', 'NN')
('tribe', 'NN')
(PERSON Sunnis/NNP)
(',', ',')
(GPE Sunni/NNP)
('form', 'NN')
(PERSON Islam/NNP)
('consequently', 'RB')
('enjoys', 'VBZ')
(',', ',')
(',', ',')
('official', 'JJ')
('recognition', 'NN')
('preference.Serious', 'JJ')
('cases', 'NNS')
('criminal', 'JJ')
('nature', 'NN')
('important', 'JJ')
('cases', 'NNS')
('civil', 'JJ')
('law', 'NN')
('relating', 'VBG')
('mercantile', 'JJ')
('transactions', 'NNS')
(',', ',')
('pearl', 'NN')
('fisheries', 'NNS')
('referred', 'VBD')
(PERSON Shaikh/NNP)
('official', 'NN')
('chief', 'NN')
('Qādhi—at', 'NNP')
('present', 'JJ')
('Jāsim-bin-Mahza', 'NNP')
("'", 'POS')
(PERSON Manāmah/NNP)
('—who', 'NNP')
('Sunni', 'NNP')
(';', ':')
(',', ',')
('provided', 'VBD')
('whole', 'JJ')
('parties', 'NNS')
(GPE Bahrain/NNP)
('subjects', 'NNS')
(',', ',')
('fact', 'NN')
('may', 'MD')
("Shi'ahs", 'VB')
('affect', 'VB')
('established', 'VBN')
('procedure', 'NN')
('respect.Minor', 'NN')
('cases', 'NNS')
(',', ',')
('especially', 'RB')
('civil', 'JJ')
('character', 'NN')
(',', ',')
('sent', 'VBD')
('settlement', 'NN')
(',', ',')
('parties', 'NNS')
(PERSON Sunnis/NNP)
(',', ',')
(PERSON Shaikh/NNP Sharaf/NNP)
('bin-Ahmad', 'JJ')
(GPE Muharraq/NNP)
(',', ',')
(GPE Sunni/NNP)
(';', ':')
(',', ',')
('parties', 'NNS')
("Shī'ahs", 'NNP')
(',', ',')
(PERSON Shaikh/NNP)
('Ahmad-bin-Hurz', 'NNP')
('Manāmah', 'NNP')
(',', ',')
("Shī'ah", 'NNP')
(':', ':')
('extent', 'NN')
('right', 'RB')
("Shī'ahs", 'NNP')
('cases', 'NNS')
('disposed', 'VBD')
('co-religionists', 'NNS')
('recognised', 'VBN')
('.', '.')
('The', 'DT')
('secular', 'JJ')
('arm', 'NN')
('brought', 'VBD')
('play', 'NN')
(PERSON Shaikh/NNP)
('enforce', 'NN')
('findings', 'NNS')
(',', ',')
('criminal', 'JJ')
('side', 'NN')
(',', ',')
('various', 'JJ')
('judges', 'NNS')
(';', ':')
('latter', 'NN')
(',', ',')
('unfortunately', 'RB')
(',', ',')
('reported', 'VBD')
('discharge', 'NN')
('functions', 'NNS')
('``', '``')
('maximum', 'JJ')
('injustice', 'NN')
('.', '.')
('``', '``')
('6', 'CD')
('Besides', 'NNP')
('legal', 'JJ')
('experts', 'NNS')
('whose', 'WP$')
('names', 'NNS')
('mentioned', 'VBN')
(',', ',')
('present', 'JJ')
('time', 'NN')
('Bahrain', 'NNP')
('7', 'CD')
('Sunni', 'NNP')
('Qādhīs', 'NNP')
('2', 'CD')
('Qādhis', 'NNP')
("Shī'ah", 'NNP')
('persuasion', 'NN')
('permitted', 'VBD')
(PERSON Shaikh/NNP)
('adjudicate', 'NN')
('cases', 'NNS')
('contesting', 'VBG')
('parties', 'NNS')
('may', 'MD')
('agree', 'VB')
('refer', 'NN')
('.', '.')
('It', 'PRP')
('believed', 'VBD')
(',', ',')
('criminal', 'JJ')
('matters', 'NNS')
(',', ',')
('headmen', 'NNS')
('Sunni', 'NNP')
('tribes', 'VBP')
('residing', 'VBG')
('towns', 'NNS')
(PERSON Manāmah/NNP Muharraq/NNP)
('wield', 'VBD')
('considerable', 'JJ')
('magisterial', 'JJ')
('powers', 'NNS')
(';', ':')
('probable', 'JJ')
('landowners', 'NNS')
(PERSON Shaikh/NNP)
("'s", 'POS')
('family', 'NN')
('agents', 'NNS')
('exercise', 'VBP')
('similar', 'JJ')
('authority', 'NN')
('regard', 'NN')
('agricultural', 'JJ')
('Bahārinah', 'NNP')
('.', '.')
('It', 'PRP')
('understood', 'VBD')
(PERSON Bahārinah/NNP)
(',', ',')
(',', ',')
('seen', 'VBN')
(',', ',')
('generally', 'RB')
('landowners', 'NNS')
(';', ':')
('accustomed', 'VBN')
('submit', 'VBP')
('matrimonial', 'JJ')
('cases', 'NNS')
('petty', 'VBP')
('disputes', 'VBZ')
('moveable', 'JJ')
('property', 'NN')
('settlement', 'NN')
('village', 'NN')
('Mullas.Mercantile', 'NNP')
('cases', 'NNS')
(',', ',')
('especially', 'RB')
('foreigners', 'NNS')
('concerned', 'JJ')
(',', ',')
('decided', 'VBD')
('tribunal7', 'NN')
('called', 'VBN')
('variously', 'RB')
("Majlis-al-'Urfi", 'NNP')
('مجلس', 'NNP')
('العرفي', 'NNP')
('Majlis-at-Tijārah', 'NNP')
('مجلس', 'NNP')
('التجاره', 'NNP')
(',', ',')
(PERSON Customary/NNP Commercial/NNP Court/NNP)
('.', '.')
('This', 'DT')
('body', 'NN')
(',', ',')
('permanent', 'JJ')
('members', 'NNS')
('nominated', 'VBN')
(PERSON Shaikh/NNP)
('consultation', 'NN')
(GPE British/NNP)
('Political', 'NNP')
('Agent', 'NNP')
(',', ',')
('possibly', 'RB')
('origin', 'JJ')
('arrangements', 'NNS')
('made', 'VBN')
('long', 'RB')
('since', 'IN')
('settling', 'VBG')
('arbitration', 'NN')
('claims', 'NNS')
('arose', 'VBP')
(GPE British/JJ)
('subjects', 'NNS')
('persons', 'NNS')
('amenable', 'JJ')
('jurisdiction', 'NN')
(PERSON Shaikh/NNP Bahrainǂ8/NNP)
(';', ':')
('existed', 'VBD')
('least', 'JJS')
('50', 'CD')
('years', 'NNS')
(',', ',')
('come', 'VB')
('regarded', 'JJ')
('authority', 'NN')
('islands', 'NNS')
('competent', 'JJ')
('settle', 'RB')
('mercantile', 'JJ')
('suits', 'NNS')
('.', '.')
('In', 'IN')
('practice', 'NN')
('suits', 'NNS')
(',', ',')
('parties', 'NNS')
('Bahrain', 'VBP')
('subjects', 'NNS')
('dispute', 'VBP')
('one', 'CD')
('fact', 'NN')
(',', ',')
('often', 'RB')
('irregularly', 'RB')
('settled', 'VBN')
('relations', 'NNS')
('servants', 'NNS')
(GPE Shaikh/NNP)
(';', ':')
('questions', 'NNS')
('principle', 'NN')
(',', ',')
('interests', 'NNS')
('foreigners', 'NNS')
('might', 'MD')
('afterwards', 'VB')
('affected', 'VBN')
(';', ':')
('referred', 'VBN')
(PERSON Majlis/NNP)
('decision', 'NN')
(',', ',')
(PERSON Shaikh/NNP)
('admits', 'VBZ')
('moral', 'JJ')
('obligation', 'NN')
('make', 'VBP')
('use', 'NN')
('Majlis', 'NNP')
('suitable', 'JJ')
('occasions', 'NNS')
('.', '.')
('When', 'WRB')
('one', 'CD')
('parties', 'NNS')
('case', 'NN')
(GPE British/JJ)
('subject', 'NN')
(',', ',')
('none', 'NN')
('parties', 'NNS')
(GPE Bahrain/NNP)
('subjects', 'NNS')
(',', ',')
(PERSON Majlis/NNP)
('ordinarily', 'RB')
('convoked', 'VBD')
(GPE British/JJ)
('Political', 'NNP')
('Agent', 'NNP')
('sits', 'VBZ')
(GPE British/JJ)
('Political', 'NNP')
('Agency', 'NNP')
(',', ',')
('representative', 'NN')
(PERSON Shaikh/NNP)
('allowed', 'VBD')
('present', 'JJ')
(',', ',')
('presence', 'NN')
('occasionally', 'RB')
('requested', 'VBD')
('one', 'CD')
('parties', 'NNS')
(';', ':')
('finding', 'VBG')
('circumstances', 'NNS')
('becomes', 'RB')
('operative', 'VBP')
('approved', 'JJ')
(GPE British/JJ)
('Political', 'NNP')
('Agent.Cases', 'NNP')
('arising', 'VBG')
('pearl', 'NN')
('diving', 'VBG')
('operations', 'NNS')
('pearl', 'RB')
('trade', 'NN')
('determined', 'VBD')
('board', 'NN')
('arbitration', 'NN')
(',', ',')
('known', 'VBN')
('Sālifat-al-Ghaus', 'NNP')
(',', ',')
('constitution', 'NN')
('powers', 'NNS')
('described', 'VBD')
('another', 'DT')
('place.9Judicial', 'JJ')
('fees', 'NNS')
('(', '(')
('called', 'VBN')
(PERSON Khidmah/NNP خدمه/NNP)
(')', ')')
('levied', 'VBD')
(',', ',')
('sometimes', 'RB')
(GPE Qādhis/NNP)
(',', ',')
('sometimes', 'RB')
(PERSON Amīrs/NNP Bazaar/NNP Masters/NNP Manāmah/NNP Muharraq/NNP)
('towns', 'NNS')
(',', ',')
('sometimes', 'RB')
('(', '(')
('particularly', 'RB')
('large', 'JJ')
('cases', 'NNS')
(')', ')')
(PERSON Shaikh/NNP)
('.', '.')
('In', 'IN')
('small', 'JJ')
('cases', 'NNS')
('plaintiff', 'VBP')
('generally', 'RB')
('pays', 'VBZ')
(PERSON Khidmah/NNP)
('amount', 'NN')
('decree', 'NN')
(';', ':')
('sometimes', 'RB')
('made', 'VBD')
('pay', 'NN')
('10', 'CD')
('per', 'IN')
('cent', 'NN')
('.', '.')
('amount', 'NN')
('claim', 'NN')
(',', ',')
('even', 'RB')
('obtain', 'VB')
('decree', 'JJ')
('full', 'JJ')
('.', '.')
('If', 'IN')
('plaintiff', 'JJ')
('loses', 'VBZ')
('case', 'NN')
(PERSON Khidmah/NNP)
('taken', 'VBN')
(',', ',')
('unless', 'IN')
('recovered', 'VBN')
('advance', 'NN')
('.', '.')
('In', 'IN')
('large', 'JJ')
('cases', 'NNS')
(PERSON Shaikh/NNP)
('careful', 'JJ')
('always', 'RB')
('take', 'VBP')
(PERSON Khidmah/NNP)
('advance.Finance.10—The', 'RB')
('budget', 'NN')
(GPE Bahrain/NNP)
('principality', 'NN')
('present', 'JJ')
('time', 'NN')
(',', ',')
('rough', 'JJ')
('outline', 'NN')
(',', ',')
('follows11', 'NN')
(':', ':')
('—Receipts.Expenditure', 'NN')
('...', ':')
('.Rs', 'NN')
('...', ':')
('.Rs.Sea', 'NNP')
('customs.150000Personal', 'JJ')
('expenses', 'NNS')
(GPE Shaikh/NNP)
('(', '(')
('including', 'VBG')
('salaries', 'NNS')
('bodyguard', 'RB')
(')', ')')
('.100000Agricultural', 'JJ')
('dues', 'NNS')
('(', '(')
('viz.', 'NN')
(',', ',')
('produce', 'VBP')
('state', 'NN')
('gardens', 'NNS')
('tax', 'NN')
('called', 'VBN')
(PERSON Nōb/NNP نوب/NNP)
('gardens', 'NNS')
('private', 'JJ')
('individuals', 'NNS')
('collected', 'VBN')
(')', ')')
('.100000Special', 'JJ')
('expenses', 'NNS')
('(', '(')
('connection', 'NN')
('marriages', 'NNS')
(',', ',')
('journeys', 'NNS')
(',', ',')
('etc', 'FW')
('.', '.')
(')', ')')
('.30000Tax', 'VBD')
('pearl', 'JJ')
('boats.12000Allowances', 'NNS')
('members', 'NNS')
(PERSON Shaikh/NNP)
("'s", 'POS')
('family.100000Judicial', 'JJ')
('fees', 'NNS')
(',', ',')
('succession', 'NN')
('duty', 'NN')
('(', '(')
('10', 'CD')
('per', 'IN')
('cent', 'NN')
('.', '.')
(')', ')')
('estates', 'VBZ')
('transferred', 'JJ')
('inheritance', 'NN')
(',', ',')
('etc.20000Expenses', 'VBZ')
(ORGANIZATION administration14000Rent/JJ)
('town', 'NN')
('lands', 'NNS')
(',', ',')
('shops', 'NNS')
('Khāns.14000Subsidies', 'NNP')
('presents', 'VBZ')
('Bedouins.56000Miscellaneous', 'NNP')
('(', '(')
('including', 'VBG')
('secret', 'JJ')
('extortions', 'NNS')
(')', ')')
('.4000', 'VBP')
('...', ':')
('...', ':')
('TOTAL300000TOTAL300000In', 'NNP')
('addition', 'NN')
('monetary', 'JJ')
('taxes', 'NNS')
(PERSON Shaikh/NNP)
('takes', 'VBZ')
('one-twentieth', 'JJ')
('slaughter', 'NN')
('animals', 'NNS')
('imported', 'VBN')
('abroad', 'RB')
(';', ':')
('particular', 'JJ')
('trade', 'NN')
('considered', 'VBN')
('included', 'VBD')
('lease', 'JJ')
('sea', 'NN')
('customs.It', 'NN')
('observed', 'VBD')
('budget', 'NN')
(PERSON Shaikh/NNP Bahrain/NNP)
('alone', 'RB')
(',', ',')
('column', 'NN')
('receipts', 'NNS')
('include', 'VBP')
('amounts', 'NNS')
('wrung', 'JJ')
('fief-holders', 'NNS')
(PERSON Shaikh/NNP)
("'s", 'POS')
('family', 'NN')
('villages', 'NNS')
('situated', 'VBN')
('estates', 'NNS')
('.', '.')
('A', 'NNP')
('poll', 'NN')
('tax', 'NN')
('called', 'VBN')
(PERSON Tarāz/NNP طراز/NNP)
('among', 'IN')
('additional', 'JJ')
('imposts', 'NNS')
('agricultural', 'JJ')
('Bahārinah', 'NNP')
('times', 'NNS')
('subjected', 'VBN')
('.', '.')
('In', 'IN')
(GPE Sunni/NNP)
('villages', 'NNS')
('rule', 'NN')
(',', ',')
('especially', 'RB')
('one', 'CD')
('tribe', 'NN')
('largely', 'RB')
('predominates', 'VBZ')
(',', ',')
('either', 'DT')
('taxation', 'NN')
('proceeds', 'NNS')
('go', 'VBP')
('tribal', 'JJ')
('chief', 'NN')
('instead', 'RB')
(PERSON Shaikh/NNP Bahrain/NNP)
('members', 'NNS')
('family.Military', 'JJ')
('naval', 'JJ')
('resources.—Altogether', 'NN')
(PERSON Shaikh/NNP Bahrain/NNP)
('principal', 'JJ')
('relations', 'NNS')
('servants', 'NNS')
('maintain', 'VBP')
('540', 'CD')
('armed', 'VBN')
('men', 'NNS')
(',', ',')
('distributed', 'VBD')
('somewhat', 'RB')
('follows', 'VBZ')
(':', ':')
('—Shaikh', 'JJ')
("'Īsa-bin-'Ali", 'CD')
(',', ',')
(PERSON Shaikh/NNP Bahrain200Shaikh/NNP Khālid/NNP)
(',', ',')
('brother', 'NN')
("'Īsa100Shaikh", 'POS')
(PERSON Hamad/NNP)
(',', ',')
('son', 'NN')
("'Īsa80Shaikh", 'POS')
(PERSON Muhammad/NNP)
(',', ',')
('son', 'NN')
("'Īsa30Shaikh", 'POS')
("'Abdullah", 'NN')
(',', ',')
('son', 'NN')
("'Īsa30The", 'POS')
(PERSON Bazaar/NNP Master/NNP Manāmah50The/NNP Bazaar/NNP Master/NNP)
('Muharraq50TOTAL540Of', 'NNP')
('force', 'NN')
('200', 'CD')
('men', 'NNS')
('armed', 'VBD')
('rifles', 'NNS')
(',', ',')
('remainder', 'NN')
(',', ',')
('possess', 'NN')
('firearms', 'NNS')
(',', ',')
('matchlocks', 'NNS')
(':', ':')
(',', ',')
('however', 'RB')
(',', ',')
('carry', 'VBP')
('swords', 'NNS')
('.', '.')
('For', 'IN')
('defence', 'NN')
('dominions', 'NNS')
('foreign', 'JJ')
('aggression', 'NN')
(',', ',')
('however', 'RB')
(',', ',')
('maintenance', 'NN')
('order', 'NN')
('within', 'IN')
(',', ',')
(PERSON Shaikh/NNP)
('depends', 'VBZ')
('much', 'JJ')
('retainers', 'NNS')
("Na'īm", 'NNP')
('tribe', 'NN')
(',', ',')
('professes', 'VBZ')
('large', 'JJ')
('numbers', 'NNS')
('call', 'VBP')
('.', '.')
('In', 'IN')
('point', 'NN')
('fact', 'NN')
('Na', 'NNP')
("'īm", 'POS')
(PERSON Bahrain/NNP Qatar/NNP)
('amount', 'NN')
('400', 'CD')
('fighting', 'VBG')
('men', 'NNS')
('told', 'VBD')
(',', ',')
('half', 'PDT')
('generally', 'RB')
('absent', 'JJ')
(PERSON Qatar/NNP)
(',', ',')
('100', 'CD')
('whole', 'JJ')
('number', 'NN')
('mounted', 'VBN')
('.', '.')
('The', 'DT')
(GPE Shaikh/NNP)
(',', ',')
('already', 'RB')
('mentioned', 'VBN')
(',', ',')
('small', 'JJ')
('excellent', 'JJ')
('stud', 'NN')
(PERSON Arab/NNP)
('mares', 'NNS')
(',', ',')
('family', 'NN')
('100', 'CD')
('riding', 'NN')
('camels.Till', 'NN')
('lately', 'RB')
('possessed', 'VBD')
('several', 'JJ')
('fast-sailing', 'JJ')
('unarmed', 'JJ')
(GPE Batīls/NNP)
(',', ',')
('2', 'CD')
('3', 'CD')
('ordinarily', 'RB')
('placed', 'VBN')
('disposal', 'NN')
('customs', 'NNS')
('contractors', 'NNS')
('prevention', 'VBP')
('smuggling', 'VBG')
(';', ':')
(',', ',')
('however', 'RB')
(',', ',')
('disappeared', 'VBD')
(',', ',')
(PERSON Shaikh/NNP)
('requires', 'VBZ')
('boat', 'NN')
('takes', 'VBZ')
('one', 'CD')
('Sukhrah.Political', 'NNP')
('position', 'NN')
('foreign', 'JJ')
('interests.—The', 'NN')
('treaty', 'NN')
('relations', 'NNS')
(PERSON Shaikh/NNP Bahrain/NNP)
('exclusively', 'RB')
(GPE Indian/JJ)
('Government', 'NNP')
(',', ',')
('maintain', 'NN')
(PERSON Manāmah/NNP)
(',', ',')
('local', 'JJ')
('representative', 'NN')
(',', ',')
(GPE European/JJ)
('officer', 'NN')
(ORGANIZATION Political/NNP Department/NNP)
('local', 'JJ')
('rank', 'NN')
('Agent', 'NNP')
(';', ':')
('Agent', 'NNP')
('subordinate', 'VB')
(GPE British/JJ)
('Political', 'NNP')
('Resident', 'NNP')
(PERSON Persian/NNP Gulf/NNP)
('.', '.')
('The', 'DT')
('maintenance', 'NN')
('charitable', 'JJ')
('medical', 'JJ')
('dispensary', 'NN')
(',', ',')
('built', 'VBD')
('local', 'JJ')
('contributions', 'NNS')
('known', 'VBN')
(PERSON Victoria/NNP Memorial/NNP Hospital/NNP)
(',', ',')
('undertaken', 'JJ')
('Government', 'NNP')
('India', 'NNP')
('1905', 'CD')
(',', ',')
('institution', 'NN')
('attached', 'VBD')
(PERSON Political/NNP Agency/NNP)
('.', '.')
('There', 'EX')
('also', 'RB')
(GPE British/JJ)
('post', 'NN')
('office', 'NN')
('connected', 'VBD')
('Agency.British', 'JJ')
('subjects', 'NNS')
('ordinarily', 'RB')
('resident', 'VBP')
(GPE Bahrain/NNP)
('present', 'JJ')
('time', 'NN')
(',', ',')
('inclusive', 'JJ')
('officials', 'NNS')
('exclusive', 'JJ')
(GPE Indian/JJ)
('military', 'JJ')
('guard', 'NN')
(PERSON Agency/NNP)
(',', ',')
(':', ':')
(GPE European/JJ)
(GPE British/JJ)
('subjects', 'NNS')
(',', ',')
('2', 'CD')
(';', ':')
('Eurasians', 'NNS')
(',', ',')
('4', 'CD')
(';', ':')
(ORGANIZATION Native/NNP)
('Christians', 'NNPS')
('2', 'CD')
(';', ':')
(GPE Hindus/NNP)
(',', ',')
('69', 'CD')
(';', ':')
(GPE Muhammadans/NNPS)
(',', ',')
('122', 'CD')
(';', ':')
(PERSON Jews/NNP)
(',', ',')
('5', 'CD')
('.', '.')
('In', 'IN')
('hot', 'JJ')
('weather', 'NN')
('number', 'NN')
(PERSON Hindus/NNP British/NNP)
('protection', 'NN')
('rises', 'VBZ')
('175', 'CD')
(',', ',')
('Muhammadans', 'NNPS')
('150', 'CD')
('.', '.')
('One', 'CD')
(GPE British/JJ)
('mercantile', 'NN')
('firm', 'NN')
('2', 'CD')
(GPE British/JJ)
('steamship', 'NN')
('companies', 'NNS')
('represented', 'VBD')
('islands', 'NNS')
(';', ':')
('22', 'CD')
('resident', 'NN')
('Hindu', 'NNP')
('11', 'CD')
('resident', 'NN')
(ORGANIZATION Muhammadan/NNP)
('traders', 'NNS')
(GPE British/JJ)
('protection.After', 'RB')
('political', 'JJ')
('commercial', 'JJ')
('interests', 'NNS')
(GPE Britain/NNP Bahrain/NNP)
(',', ',')
('interests', 'NNS')
(GPE United/NNP States/NNPS)
(',', ',')
('arising', 'VBG')
('mission', 'NN')
(GPE Reformed/NNP)
('(', '(')
(GPE Dutch/NNP)
(')', ')')
(PERSON Church/NNP America/NNP)
(',', ',')
('important', 'JJ')
(';', ':')
('mission', 'NN')
(',', ',')
(PERSON Manāmah/NNP)
(',', ',')
('founded', 'VBD')
('1893.Of', 'CD')
('recent', 'JJ')
('origin', 'NN')
(',', ',')
('less', 'RBR')
('extensive', 'JJ')
(',', ',')
(GPE German/JJ)
('interests', 'NNS')
('represented', 'VBD')
('commercial', 'JJ')
('firm.Notes1', 'NN')
('.', '.')
('This', 'DT')
('leading', 'VBG')
('article', 'NN')
(GPE Bahrain/NNP)
('principality', 'NN')
('minor', 'JJ')
('articles', 'NNS')
('places', 'NNS')
('founded', 'VBD')
('chiefly', 'NN')
('upon', 'IN')
('systematic', 'JJ')
('careful', 'JJ')
('investigations', 'NNS')
('made', 'VBN')
('spot', 'NN')
('years', 'NNS')
('1904-1905', 'JJ')
('.', '.')
('The', 'DT')
('information', 'NN')
('available', 'JJ')
('sources', 'NNS')
('existing', 'VBG')
('1904', 'CD')
('arranged', 'JJ')
('writer', 'NN')
('issued', 'VBN')
('November', 'NNP')
('year', 'NN')
('form', 'NN')
('9', 'CD')
('printed', 'JJ')
('foolscap', 'NN')
('pages', 'NNS')
('intended', 'VBN')
('serve', 'VBP')
('basis', 'NN')
('inquiry', 'NN')
('.', '.')
('The', 'DT')
('inquiry', 'NN')
('proper', 'JJ')
('begun', 'VBN')
('writer', 'RB')
('tour', 'VB')
(GPE Bahrain/NNP)
('early', 'JJ')
('1905', 'CD')
(';', ':')
('carried', 'VBN')
('chiefly', 'NN')
(ORGANIZATION Lieutenant/NNP)
('C.', 'NNP')
('H.', 'NNP')
(PERSON Gabriel/NNP)
(',', ',')
(ORGANIZATION I.A./NNP)
(',', ',')
('personally', 'RB')
('travelled', 'VBN')
('greater', 'JJR')
('part', 'NN')
('islands', 'NNS')
(',', ',')
(PERSON Captain/NNP F./NNP)
('B.', 'NNP')
('Prideaux', 'NNP')
(',', ',')
(PERSON Political/NNP Agent/NNP Bahrain/NNP)
(',', ',')
('supplied', 'VBD')
('full', 'JJ')
('information', 'NN')
('regarding', 'VBG')
('places', 'NNS')
('jurisdiction', 'NN')
('.', '.')
('A', 'DT')
('set', 'NN')
('draft', 'NN')
('articles', 'NNS')
('founded', 'VBN')
('notes', 'NNS')
('reports', 'NNS')
('1905', 'CD')
('prepared', 'JJ')
('writer', 'NN')
(';', ':')
('finished', 'VBN')
('January', 'NNP')
('1906', 'CD')
('extended', 'VBD')
('60', 'CD')
('octavo', 'NN')
('pages', 'NNS')
('print', 'VBP')
('.', '.')
('These', 'DT')
('drafts', 'NNS')
('sent', 'VBD')
(PERSON Captain/NNP Prideaux/NNP)
(',', ',')
('carefully', 'RB')
('revised', 'VBN')
('assistance', 'NN')
('Mr', 'NNP')
('.', '.')
("In'ām-al-Haqq", 'NNP')
(',', ',')
(PERSON Agency/NNP Interpreter/NNP)
(',', ',')
('graduate', 'NN')
(PERSON Aligarh/NNP College/NNP)
('.', '.')
('Early', 'JJ')
('1907', 'CD')
('drafts', 'NN')
('reissued', 'VBN')
(',', ',')
('modifications', 'NNS')
('additions', 'NNS')
(',', ',')
('points', 'NNS')
('remained', 'VBD')
('doubtful', 'JJ')
('obscure', 'NN')
('disposed', 'VBD')
(PERSON Captain/NNP Prideaux/NNP)
('assistant', 'JJ')
('year', 'NN')
('.', '.')
(PERSON Geological/NNP)
('information', 'NN')
('kindly', 'RB')
('furnished', 'VBD')
(PERSON
Mr./NNP
G./NNP
Pilgrim/NNP
Geological/NNP
Survey/NNP
India/NNP)
('.', '.')
('The', 'DT')
('articles', 'NNS')
('final', 'JJ')
('form', 'NN')
('occupy', 'NN')
('70', 'CD')
('octavo', 'NN')
('pages', 'NNS')
('.', '.')
('Bahrain', 'VB')
('early', 'JJ')
('time', 'NN')
('attracted', 'VBN')
('attention', 'NN')
('travellers', 'NNS')
(PERSON Persian/NNP Gulf/NNP)
(',', ',')
('following', 'VBG')
('older', 'JJR')
('authorities', 'NNS')
('islands', 'NNS')
(':', ':')
(PERSON Niebuhr/NNP)
("'s", 'POS')
('Description', 'NNP')
('de', 'FW')
("I'Arabie", 'NNP')
(',', ',')
('1774', 'CD')
(';', ':')
(PERSON Buckingham/NNP)
("'s", 'POS')
(ORGANIZATION Travels/NNP Assyria/NNP)
(',', ',')
(PERSON Media/NNP Persia/NNP)
(',', ',')
('1829', 'CD')
(';', ':')
(PERSON Whitelock/NNP)
("'s", 'POS')
('Description', 'NNP')
(PERSON Arabian/NNP Coast/NNP)
(',', ',')
('1838', 'CD')
(';', ':')
(PERSON Mignan/NNP)
("'s", 'POS')
('Winter', 'NNP')
('Journey', 'NNP')
(',', ',')
('1839', 'CD')
(';', ':')
(PERSON Bombay/NNP Records/NNP)
(',', ',')
(ORGANIZATION XXIV/NNP)
(',', ',')
('1856', 'CD')
(';', ':')
(GPE Whish/NNP)
("'s", 'POS')
(PERSON Memoir/NNP Bahreyn/NNP)
('(', '(')
('map', 'NN')
(')', ')')
(',', ',')
('1862', 'CD')
(';', ':')
(GPE Palgrave/NNP)
("'s", 'POS')
(ORGANIZATION Central/NNP Eastern/NNP Arabia/NNP)
(',', ',')
('1865', 'CD')
('.', '.')
('More', 'RBR')
('recent', 'JJ')
(':', ':')
('Captain', 'NNP')
('E.', 'NNP')
('L.', 'NNP')
(GPE Durand/NNP)
("'s", 'POS')
('Description', 'NNP')
(PERSON Bahrein/NNP Islands/NNP)
(',', ',')
('1879', 'CD')
(',', ',')
(ORGANIZATION Extracts/NNPS Report/NNP Islands/NNP)
('Antiquities', 'NNP')
(GPE Bahrain/NNP)
(',', ',')
('1880', 'CD')
(';', ':')
(PERSON Mr./NNP T./NNP Bent/NNP)
("'s", 'POS')
(PERSON Bahrein/NNP)
('Islands', 'VBZ')
(PERSON Persian/JJ Gulf/NNP)
(',', ',')
('1890', 'CD')
(';', ':')
(PERSON Captain/NNP J/NNP)
('.', '.')
('A.', 'NNP')
(PERSON Douglas/NNP)
("'s", 'POS')
(ORGANIZATION Journey/NNP Mediterranean/NNP India/NNP)
(',', ',')
('1897', 'CD')
(';', ':')
(PERSON Persian/NNP Gulf/NNP Pilot/NNP)
(',', ',')
('1898', 'CD')
(';', ':')
(PERSON Reverend/NNP S./NNP)
('M.', 'NNP')
('Zwemer', 'NNP')
("'s", 'POS')
(PERSON Arabia/NNP)
(',', ',')
('1900', 'CD')
(';', ':')
('Mrs.', 'NNP')
('T.', 'NNP')
('Bent', 'NNP')
("'s", 'POS')
(LOCATION Southern/NNP Arabia/NNP)
(',', ',')
('1900', 'CD')
(';', ':')
(PERSON Captain/NNP A./NNP)
('W.', 'NNP')
('Stiffe', 'NNP')
("'s", 'POS')
('Ancient', 'JJ')
('Trading', 'NN')
(PERSON Centres/NNS Persian/NNP)
('Gulf—Bahrain', 'NNP')
(',', ',')
('1901', 'CD')
('.', '.')
(PERSON Captain/NNP Durand/NNP)
("'s", 'POS')
('second', 'JJ')
('paper', 'NN')
('contributions', 'NNS')
(PERSON Mr./NNP Mrs/NNP)
('.', '.')
('Bent', 'NNP')
('deal', 'NN')
('partly', 'RB')
('subject', 'JJ')
('antiquities', 'NNS')
(';', ':')
(PERSON Persian/NNP Gulf/NNP Pilot/NNP)
('concerned', 'VBD')
('chiefly', 'NN')
('maritime', 'NN')
('features', 'NNS')
(';', ':')
('remainder', 'VB')
('authorities', 'NNS')
('general', 'JJ')
('scope', 'NN')
('.', '.')
('In', 'IN')
('matters', 'NNS')
('relating', 'VBG')
('trade', 'NN')
(',', ',')
('annual', 'JJ')
('commercial', 'JJ')
('reports', 'NNS')
('Political', 'JJ')
('Agent', 'NNP')
('Bahrain', 'NNP')
('chief', 'JJ')
('source', 'NN')
('information', 'NN')
('.', '.')
('A', 'DT')
('large', 'JJ')
('scale', 'NN')
('map', 'NN')
(PERSON Bahrain/NNP Inlands/NNP)
('(', '(')
('except', 'IN')
(PERSON Jazīrat/NNP Umm/NNP)
("Na'asān", 'NNP')
(')', ')')
('exists', 'VBZ')
(PERSON Survey/NNP India/NNP)
("'s", 'POS')
('sheet', 'NN')
('Bahrain', 'NNP')
('1904-1905', 'CD')
(',', ',')
('result', 'NN')
('survey', 'NN')
('undertaken', 'JJ')
('connection', 'NN')
('Gazetteer', 'NNP')
('inquiries', 'NNS')
(';', ':')
(PERSON Admiralty/NNP Plan/NNP No/NNP)
('.', '.')
('2377-20', 'CD')
(',', ',')
(PERSON Bahrain/NNP Harbour/NNP)
(',', ',')
('shows', 'VBZ')
('detail', 'NN')
('northern', 'JJ')
('half', 'NN')
('islands', 'NNS')
('coasts', 'VBZ')
('well', 'RB')
('marine', 'JJ')
('features', 'NNS')
('northern', 'JJ')
('side', 'NN')
('group', 'NN')
('.', '.')
('The', 'DT')
('general', 'JJ')
('chart', 'NN')
(PERSON Bahrain/NNP No/NNP)
('.', '.')
('2374—2887-B.', 'CD')
(',', ',')
(PERSON Persian/NNP Gulf/NNP)
(';', ':')
(PERSON Plan/NNP)
('mentioned', 'VBD')
('contain', 'NN')
('distant', 'JJ')
('views', 'NNS')
(PERSON Bahrain/NNP Islands/NNP)
('sea', 'NN')
('.', '.')
('There', 'EX')
('two', 'CD')
('recent', 'JJ')
(',', ',')
('marine', 'JJ')
('surveys', 'NNS')
('waters', 'NNS')
('west', 'VBP')
('east', 'JJ')
(GPE Bahrain/NNP)
('islands', 'NNS')
(',', ',')
('respectively', 'RB')
(',', ',')
('namely', 'RB')
(',', ',')
(PERSON Bahrain/NNP Ojar/NNP Bahrein/NNP Ras/NNP Rūkkin/NNP)
(',', ',')
('Preliminary', 'NNP')
('Charbs', 'NNP')
('Nos', 'NNP')
('.', '.')
('O', 'NNP')
('.', '.')
('1', 'CD')
('O', 'NNP')
('.', '.')
('2.', 'CD')
(',', ',')
(GPE Poona/NNP)
(';', ':')
('1902', 'CD')
('.', '.')
('Two', 'CD')
('charts', 'NNS')
('relating', 'VBG')
("Khor-al-Qaiai'ah", 'NNP')
('accompany', 'JJ')
('report', 'NN')
(ORGANIZATION Lieutenant/NNP)
('H.G', 'NNP')
('.', '.')
('Somerville', 'NNP')
(',', ',')
(ORGANIZATION R.N./NNP)
(',', ',')
('printed', 'JJ')
('Government', 'NNP')
('India', 'NNP')
(ORGANIZATION Foreign/NNP Department/NNP)
(',', ',')
(PERSON Simla/NNP)
(',', ',')
('July', 'NNP')
('1905.2', 'CD')
('.', '.')
('The', 'DT')
(ORGANIZATION Gharrāfah/NNP)
('handled', 'VBD')
('one', 'CD')
('man', 'NN')
('.', '.')
('The', 'DT')
('counterpoise', 'NN')
('generally', 'RB')
('basket', 'VBZ')
('earth.3', 'NN')
('.', '.')
('This', 'DT')
('table', 'NN')
('may', 'MD')
('compared', 'VBN')
('estimate', 'VB')
('given', 'VBN')
(PERSON Pelly/NNP Report/NNP Tribes/NNP)
(',', ',')
('etc.', 'NN')
(',', ',')
('1863.4', 'CD')
('.', '.')
('This', 'DT')
('mean', 'JJ')
('half', 'NN')
('steamers', 'NNS')
('called', 'VBD')
(PERSON Bahrain/NNP)
('found', 'VBD')
('cargo', 'NN')
('.', '.')
('The', 'DT')
('explanation', 'NN')
('steamers', 'VBZ')
('call', 'JJ')
('take', 'VB')
('clearance', 'NN')
('certificates.5', 'NN')
('.', '.')
('The', 'DT')
(ORGANIZATION Foreign/NNP Proceedings/NNP)
('Government', 'NNP')
('India', 'NNP')
('April', 'NNP')
('1901', 'CD')
('contain', 'NN')
('information', 'NN')
('head.6', 'NN')
('.', '.')
('Since', 'IN')
('political', 'JJ')
('crisis', 'NN')
('February', 'NNP')
('1905', 'CD')
('administration', 'NN')
('justice', 'NN')
(PERSON Bahrain/NNP)
('somewhat', 'RB')
('improved', 'VBD')
('.', '.')
('Public', 'JJ')
('opinion', 'NN')
('subject', 'JJ')
('growing', 'VBG')
('powerful', 'JJ')
(',', ',')
('largely', 'RB')
('consequence', 'NN')
('steady', 'JJ')
('influx', 'NN')
('protected', 'VBD')
('foreigners.7', 'NN')
('.', '.')
('Regarding', 'VBG')
(PERSON Majlis/NNP)
(',', ',')
('etc.', 'NN')
(',', ',')
('see', 'VBP')
('letters', 'NNS')
(PERSON Major/NNP P./NNP)
('Z.', 'NNP')
('Cox', 'NNP')
(',', ',')
(ORGANIZATION Resident/NNP Persian/NNP Gulf/NNP)
(',', ',')
('No', 'NNP')
('76', 'CD')
('25th', 'CD')
('February', 'NNP')
('No', 'NNP')
('.', '.')
('516', 'CD')
('4th', 'CD')
('March', 'NNP')
('1906', 'CD')
(';', ':')
('Government', 'NNP')
(GPE India/NNP)
("'s", 'POS')
(ORGANIZATION Foreign/NNP Proceedings/NNP)
('April', 'NNP')
('1901', 'CD')
('may', 'MD')
('also', 'RB')
('consulted.8', 'VB')
('.', '.')
('Some', 'DT')
('authorities', 'NNS')
(',', ',')
('however', 'RB')
(',', ',')
('suppose', 'VBP')
('purely', 'RB')
('indigenous', 'JJ')
('institution.9', 'NN')
('.', '.')
('See', 'VB')
(PERSON Appendix/NNP Pearl/NNP)
('Fisheries.10', 'NNP')
('.', '.')
('The', 'DT')
(ORGANIZATION Foreign/NNP Proceedings/NNP)
('Government', 'NNP')
('India', 'NNP')
('October', 'NNP')
('1905', 'CD')
('may', 'MD')
('consulted.11', 'VB')
('.', '.')
('The', 'DT')
('table', 'NN')
('may', 'MD')
('compared', 'VBN')
('page', 'VB')
('66', 'CD')
(PERSON Persian/NNP Gulf/NNP)
('Administration', 'NNP')
('Report', 'NNP')
('1873-74', 'CD')
('.', '.')
###Markdown
**NER for all files**
###Code
entity_names=[]
entities_filtered=[]
print("Labelled entities (Org, Person, GPE etc.):")
for chunk in chunks:
#labelled chunk is a named entity
if hasattr(chunk, 'label'):
entity=str(chunk.label())
#print entity label and value
print("Entity: "+str(chunk.label())+", ",end="")
name=chunk[0][0]
#store the entity if it is not a stop word
if name not in stopWords:
#print name of the entity
entity_names.append(name)
print("Name: "+str(name)+", ",end="")
#print POS tag of the entity
pos_tag=chunk[0][1]
print("POS tag: "+str(pos_tag))
string1=str(entity)+","+str(name)+","+pos_tag
entities_filtered.append(string1)
print("Entity names: ")
for x in entity_names:
print(x)
###Output
_____no_output_____ |
Reinforcement_Learning/Week_05/05_Importance_Sampling_3-step-Q-Learning_Windy_Gridworld.ipynb | ###Markdown
Windy Gridworld Playground. Environment overview: the observation is the index of the grid cell the agent currently occupies; there are 4 actions, one per direction (0 = UP, 1 = RIGHT, 2 = DOWN, 3 = LEFT); the reward is -1 per step, so a larger (less negative) return means the episode took fewer steps. Initialize the environment.
###Code
environment = WindyGridworldEnv()
# number of possible actions in this environment
nA = environment.action_space.n
print(nA)
###Output
4
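###Markdown
As a quick illustration of the reward structure described above, a single interaction should return a reward of -1. This is a minimal sketch; it assumes the environment cell above has been run and uses the same Gym-style API (`reset`, `step`) as the rest of this notebook.
###Code
state = environment.reset()                        # state is the index of the current grid cell
next_state, reward, done, _ = environment.step(1)  # action 1 = RIGHT
print(state, next_state, reward, done)             # reward is -1 for every step taken
###Output
_____no_output_____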
###Markdown
Policy definitions: the behaviour policy $\mu$ is $\epsilon$-greedy; the target policy $\pi$ is greedy.
###Code
def mu_policy(Q, epsilon, nA):
"""
这是一个随机的策略, 执行每一个action的概率都是相同的.
"""
def policy_fn(observation):
# 看到这个state之后, 采取不同action获得的累计奖励
action_values = Q[observation]
# 使用获得奖励最大的那个动作
greedy_action = np.argmax(action_values)
# 是的每个动作都有出现的可能性
probabilities = np.ones(nA) /nA * epsilon
# 最好的那个动作的概率会大一些
probabilities[greedy_action] = probabilities[greedy_action] + (1 - epsilon)
return probabilities
return policy_fn
def pi_policy(Q):
"""
这是greedy policy, 每次选择最优的动作
"""
def policy_fn(observation):
action_values = Q[observation]
best_action = np.argmax(action_values) # 选择最优的动作
return np.eye(len(action_values))[best_action] # 返回的是两个动作出现的概率
return policy_fn
###Output
_____no_output_____
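###Markdown
A quick sanity check of the two policies on a toy Q-table. This is only a sketch: the Q-values and the state index 0 below are made up purely for illustration, and it assumes `numpy` and `defaultdict` are available as in the rest of the notebook.
###Code
# Hypothetical one-state Q-table, made up purely to illustrate the two policies.
toy_Q = defaultdict(lambda: np.zeros(4))
toy_Q[0] = np.array([1.0, 5.0, 2.0, 0.5])  # action 1 has the highest value
mu = mu_policy(toy_Q, epsilon=0.4, nA=4)
pi = pi_policy(toy_Q)
print(mu(0))  # [0.1, 0.7, 0.1, 0.1]: epsilon/nA on every action, plus (1 - epsilon) on the greedy one
print(pi(0))  # [0., 1., 0., 0.]: all probability mass on the greedy action
###Output
_____no_output_____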
###Markdown
Importance Sampling for Off-Policy 3-step TD
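The update implemented in the cell below corrects a 3-step return with the importance-sampling ratios of the two intermediate actions (the first action is already conditioned on, so it needs no ratio): $$Q(S_t,A_t) \leftarrow Q(S_t,A_t) + \alpha\,\rho_{t+1}\rho_{t+2}\Big[R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \gamma^3 \max_a Q(S_{t+3},a) - Q(S_t,A_t)\Big], \qquad \rho_k = \frac{\pi(A_k\mid S_k)}{\mu(A_k\mid S_k)}.$$ Episodes that terminate after one or two steps fall back to the corresponding 1-step and 2-step updates in the code.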
###Code
def td_control_importance_sampling(env, num_episodes, discount_factor=1.0, alpha=0.1, epsilon=0.2):
    # number of possible actions in the environment
    nA = env.action_space.n
    # initialise the Q-table
    Q = defaultdict(lambda: np.zeros(nA))
    # Keeps track of useful statistics
    stats = plotting.EpisodeStats(
        episode_lengths=np.zeros(num_episodes+1),
        episode_rewards=np.zeros(num_episodes+1))
    # initialise the policies; this is off-policy, so there are two of them
    behaviour_policy = mu_policy(Q, epsilon, nA) # the policy we actually act with (epsilon-greedy)
    policy = pi_policy(Q) # the greedy target policy
    for i_episode in range(1, num_episodes + 1):
        # start a new episode
        state = env.reset()
        action = np.random.choice(nA, p=behaviour_policy(state)) # sample the first action from the behaviour policy
for t in itertools.count():
env.s = state
            next_state, reward, done, _ = env.step(action) # take the first step
            if done:
                next_action_pi = np.argmax(policy(next_state))
                Q[state][action] = Q[state][action] + alpha * (reward + discount_factor*Q[next_state][next_action_pi] - Q[state][action])
                stats.episode_rewards[i_episode] += reward # accumulate the episode reward
                stats.episode_lengths[i_episode] = t # record the episode length
break
next_action = np.random.choice(nA, p=behaviour_policy(next_state))
            next_2_state, next_reward, done, _ = env.step(next_action) # take the second step
if done:
next_action_pi = np.argmax(policy(next_2_state))
                pi_p = policy(next_state)[next_action] # either 0 or 1, since the target policy is deterministic
                mu_p = behaviour_policy(next_state)[next_action]
                Q[state][action] = Q[state][action] + alpha * pi_p/mu_p * (reward + discount_factor*next_reward + (discount_factor**2)*Q[next_2_state][next_action_pi] - Q[state][action]) # converges
                # Q[state][action] = Q[state][action] + alpha * (reward + pi_p/mu_p*discount_factor*next_reward + (discount_factor**2)*Q[next_2_state][next_action_pi] - Q[state][action]) # converges
                # Q[state][action] = Q[state][action] + alpha * (pi_p/mu_p*(reward + discount_factor*next_reward + (discount_factor**2)*Q[next_2_state][next_action_pi]) - Q[state][action]) # did not converge
                # Q[state][action] = Q[state][action] + alpha * (reward + discount_factor*next_reward + (discount_factor**2)*Q[next_2_state][next_action_pi] - Q[state][action]) # converges
                stats.episode_rewards[i_episode] += reward # accumulate the episode reward
                stats.episode_lengths[i_episode] = t # record the episode length
break
            next_2_action = np.random.choice(nA, p=behaviour_policy(next_2_state)) # choose the third action
            next_3_state, next_2_reward, done, _ = env.step(next_2_action) # take the third step
            next_action_pi = np.argmax(policy(next_3_state))
            # compute the two importance-sampling ratios
            pi_p_1 = policy(next_state)[next_action] # either 0 or 1, since the target policy is deterministic
            mu_p_1 = behaviour_policy(next_state)[next_action]
            pi_p_2 = policy(next_2_state)[next_2_action] # either 0 or 1, since the target policy is deterministic
            mu_p_2 = behaviour_policy(next_2_state)[next_2_action]
            # update Q
            Q[state][action] = Q[state][action] + alpha * (pi_p_1/mu_p_1)*(pi_p_2/mu_p_2) * (reward + discount_factor*next_reward + (discount_factor**2)*next_2_reward + (discount_factor**3)*Q[next_3_state][next_action_pi] - Q[state][action])
            # Q[state][action] = Q[state][action] + alpha * (reward + (pi_p_1/mu_p_1)*discount_factor*next_reward + (pi_p_2/mu_p_2)*(discount_factor**2)*next_2_reward + (discount_factor**3)*Q[next_3_state][next_action_pi] - Q[state][action])
            # Q[state][action] = Q[state][action] + alpha * ((pi_p_1/mu_p_1)*(pi_p_2/mu_p_2) * (reward + discount_factor*next_reward + (discount_factor**2)*next_2_reward + (discount_factor**3)*Q[next_3_state][next_action_pi]) - Q[state][action])
            # Q[state][action] = Q[state][action] + alpha * (reward + discount_factor*next_reward + (discount_factor**2)*next_2_reward + (discount_factor**3)*Q[next_3_state][next_action_pi] - Q[state][action]) # does not converge
            # collect statistics
            stats.episode_rewards[i_episode] += reward # accumulate the episode reward
            stats.episode_lengths[i_episode] = t # record the episode length
if done:
break
if t > 500:
break
state = next_state
action = next_action
if i_episode % 100 == 0:
            # progress printout
print("\rEpisode {}/{}. | ".format(i_episode, num_episodes), end="")
return Q, policy, stats
###Output
_____no_output_____
###Markdown
Run the simulation
###Code
Q, policy, stats = td_control_importance_sampling(environment, num_episodes=10000, discount_factor=0.9, alpha=0.3, epsilon=0.4)
# Q, policy, stats = td_control_importance_sampling(environment, num_episodes=10000, discount_factor=0.9, alpha=0.3, epsilon=0.99) # with epsilon close to 1 the behaviour policy is essentially a uniform random policy
plotting.plot_episode_stats(stats)
###Output
_____no_output_____
###Markdown
Run the environment with the final (greedy) policy
###Code
state = environment.reset()
action = np.argmax(policy(state))
for t in itertools.count():
    state, reward, done, _ = environment.step(action) # execute the action; get the reward and the next state
    action = np.argmax(policy(state)) # pick the next action greedily
    print('-Step:{}-'.format(str(t)))
    environment._render() # render the current grid
if done:
break
if t > 50:
break
###Output
_____no_output_____ |
programs/formats.ipynb | ###Markdown
Text Formats ins and outs. See also the [docs](https://annotation.github.io/text-fabric/Api/Text/text-representation)
###Code
%load_ext autoreload
%autoreload 2
import os
from tf.fabric import Fabric
GH_BASE = os.path.expanduser('~/github')
ORG = 'annotation'
REPO = 'banks'
FOLDER = 'tf'
TF_DIR = f'{GH_BASE}/{ORG}/{REPO}/{FOLDER}'
VERSION = '0.2'
TF_PATH = f'{TF_DIR}/{VERSION}'
TF = Fabric(locations=TF_PATH)
###Output
This is Text-Fabric 7.6.8
Api reference : https://annotation.github.io/text-fabric/Api/Fabric/
10 features found and 0 ignored
###Markdown
We ask for a list of all features:
###Code
allFeatures = TF.explore(silent=True, show=True)
loadableFeatures = allFeatures['nodes'] + allFeatures['edges']
loadableFeatures
###Output
_____no_output_____
###Markdown
We load all features:
###Code
api = TF.load(loadableFeatures, silent=False)
T = api.T
F = api.F
T.formats
words = F.otype.s('word')
lines = F.otype.s('line')
sents = F.otype.s('sentence')
explain = True
###Output
_____no_output_____
###Markdown
single line
###Code
T.text(lines[0], explain=explain)
T.text(lines[0], descend=True, explain=explain)
T.text(lines[0], fmt='line-term', explain=explain)
###Output
EXPLANATION: T.text) called with parameters:
nodes : single node
fmt : line-term targeted at line
descend: implicit
NODE: line 103
TARGET LEVEL: line (no expansion needed) (descend=None) (target of explicit line-term)
EXPANSION: line 103
FORMATTING: with explicit line-term
56s MATERIAL:
line 103 ADDS ", "
###Markdown
two lines
###Code
T.text(lines[0:2], explain=explain)
T.text(lines[0:2], descend=True, explain=explain)
T.text(lines[0:2], fmt='line-term', explain=explain)
###Output
EXPLANATION: T.text) called with parameters:
nodes : iterable of 2 nodes
fmt : line-term targeted at line
descend: implicit
NODE: line 103
TARGET LEVEL: line (no expansion needed) (descend=None) (target of explicit line-term)
EXPANSION: line 103
FORMATTING: with explicit line-term
MATERIAL:
line 103 ADDS ", "
NODE: line 104
TARGET LEVEL: line (no expansion needed) (descend=None) (target of explicit line-term)
EXPANSION: line 104
FORMATTING: with explicit line-term
MATERIAL:
line 104 ADDS ", "
###Markdown
single sentence
###Code
T.text(sents[0], explain=explain)
T.text(sents[0], descend=False, explain=explain)
T.text(sents[0], fmt='line-term', explain=explain)
###Output
EXPLANATION: T.text) called with parameters:
nodes : single node
fmt : line-term targeted at line
descend: implicit
NODE: sentence 115
TARGET LEVEL: line (descend=None) (target of explicit line-term)
EXPANSION: lines 103, 104, 105, 106
FORMATTING: with explicit line-term
MATERIAL:
line 103 ADDS ", "
line 104 ADDS ", "
line 105 ADDS "; "
line 106 ADDS ". "
###Markdown
two sentences
###Code
T.text(sents[0:2], explain=explain)
T.text(sents[0:2], descend=False, explain=explain)
T.text(sents[0:2], fmt='line-term', explain=explain)
###Output
EXPLANATION: T.text) called with parameters:
nodes : iterable of 2 nodes
fmt : line-term targeted at line
descend: implicit
NODE: sentence 115
TARGET LEVEL: line (descend=None) (target of explicit line-term)
EXPANSION: lines 103, 104, 105, 106
FORMATTING: with explicit line-term
MATERIAL:
line 103 ADDS ", "
line 104 ADDS ", "
line 105 ADDS "; "
line 106 ADDS ". "
NODE: sentence 116
TARGET LEVEL: line (descend=None) (target of explicit line-term)
EXPANSION: lines 107, 108, 109
FORMATTING: with explicit line-term
MATERIAL:
line 107 ADDS ", "
line 108 ADDS ", "
line 109 ADDS "? "
###Markdown
mixed content
###Code
content = list(words[50:53]) + list(lines[4:6]) + list(sents[0:2])
T.text(content, explain=explain)
T.text(content, descend=False, explain=explain)
T.text(content, fmt='line-term', explain=explain)
def test():
var = locals()
def verbose(x):
exec(x, var)
verbose('x = "aap"')
verbose('print(x)')
verbose('print("noot")')
return 'OK'
test()
###Output
aap
noot
|
Workshop/LSTM_101.ipynb | ###Markdown
LSTM 101: 16 neurons
###Code
from google.colab import drive
PATH='/content/drive/'
drive.mount(PATH)
DATAPATH=PATH+'My Drive/data/'
PC_FILENAME = DATAPATH+'pcRNA.fasta'
NC_FILENAME = DATAPATH+'ncRNA.fasta'
# LOCAL
#PC_FILENAME = 'pcRNA.fasta'
#NC_FILENAME = 'ncRNA.fasta'
import numpy as np
import pandas as pd
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from sklearn.model_selection import ShuffleSplit
from keras.models import Sequential
from keras.layers import Bidirectional
from keras.layers import GRU
from keras.layers import Dense
from sklearn.model_selection import StratifiedKFold
import time
tf.keras.backend.set_floatx('float32')
EPOCHS=100
SPLITS=1
K=3
EMBED_DIMEN=16
FILENAME='LSTM101'
###Output
_____no_output_____
###Markdown
Load and partition sequences
###Code
# Assume file was preprocessed to contain one line per seq.
# Prefer Pandas dataframe but df does not support append.
# For conversion to tensor, must avoid python lists.
def load_fasta(filename,label):
DEFLINE='>'
labels=[]
seqs=[]
lens=[]
nums=[]
num=0
with open (filename,'r') as infile:
for line in infile:
if line[0]!=DEFLINE:
seq=line.rstrip()
num += 1 # first seqnum is 1
seqlen=len(seq)
nums.append(num)
labels.append(label)
seqs.append(seq)
lens.append(seqlen)
df1=pd.DataFrame(nums,columns=['seqnum'])
df2=pd.DataFrame(labels,columns=['class'])
df3=pd.DataFrame(seqs,columns=['sequence'])
df4=pd.DataFrame(lens,columns=['seqlen'])
df=pd.concat((df1,df2,df3,df4),axis=1)
return df
# Split into train/test stratified by sequence length.
def sizebin(df):
return pd.cut(df["seqlen"],
bins=[0,1000,2000,4000,8000,16000,np.inf],
labels=[0,1,2,3,4,5])
def make_train_test(data):
bin_labels= sizebin(data)
from sklearn.model_selection import StratifiedShuffleSplit
splitter = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=37863)
# split(x,y) expects that y is the labels.
# Trick: Instead of y, give it it the bin labels that we generated.
for train_index,test_index in splitter.split(data,bin_labels):
train_set = data.iloc[train_index]
test_set = data.iloc[test_index]
return (train_set,test_set)
def separate_X_and_y(data):
y= data[['class']].copy()
X= data.drop(columns=['class','seqnum','seqlen'])
return (X,y)
def make_slice(data_set,min_len,max_len):
print("original "+str(data_set.shape))
too_short = data_set[ data_set['seqlen'] < min_len ].index
no_short=data_set.drop(too_short)
print("no short "+str(no_short.shape))
too_long = no_short[ no_short['seqlen'] >= max_len ].index
no_long_no_short=no_short.drop(too_long)
print("no long, no short "+str(no_long_no_short.shape))
return no_long_no_short
def make_kmer_table(K):
npad='N'*K
shorter_kmers=['']
for i in range(K):
longer_kmers=[]
for mer in shorter_kmers:
longer_kmers.append(mer+'A')
longer_kmers.append(mer+'C')
longer_kmers.append(mer+'G')
longer_kmers.append(mer+'T')
shorter_kmers = longer_kmers
all_kmers = shorter_kmers
kmer_dict = {}
kmer_dict[npad]=0
value=1
for mer in all_kmers:
kmer_dict[mer]=value
value += 1
return kmer_dict
KMER_TABLE=make_kmer_table(K)
def strings_to_vectors(data,uniform_len):
all_seqs=[]
for seq in data['sequence']:
i=0
seqlen=len(seq)
kmers=[]
while i < seqlen-K+1:
kmer=seq[i:i+K]
i += 1
value=KMER_TABLE[kmer]
kmers.append(value)
pad_val=0
while i < uniform_len:
kmers.append(pad_val)
i += 1
all_seqs.append(kmers)
pd2d=pd.DataFrame(all_seqs)
return pd2d # return 2D dataframe, uniform dimensions
def build_model(maxlen,dimen):
vocabulary_size=4**K+1 # e.g. K=3 => 64 DNA K-mers + 'NNN'
act="sigmoid"
dt='float32'
neurons=16
rnn = keras.models.Sequential()
embed_layer = keras.layers.Embedding(
vocabulary_size,EMBED_DIMEN,input_length=maxlen);
rnn1_layer = keras.layers.Bidirectional(
keras.layers.LSTM(neurons, return_sequences=True, dropout=0.50,
input_shape=[maxlen,dimen]))
rnn2_layer = keras.layers.Bidirectional(
keras.layers.LSTM(neurons, dropout=0.50, return_sequences=True))
dense1_layer = keras.layers.Dense(neurons,activation=act,dtype=dt)
dense2_layer = keras.layers.Dense(neurons,activation=act,dtype=dt)
output_layer = keras.layers.Dense(1,activation=act,dtype=dt)
rnn.add(embed_layer)
rnn.add(rnn1_layer)
rnn.add(rnn2_layer)
rnn.add(dense1_layer)
rnn.add(dense2_layer)
rnn.add(output_layer)
bc=tf.keras.losses.BinaryCrossentropy(from_logits=False)
print("COMPILE")
rnn.compile(loss=bc, optimizer="Adam",metrics=["accuracy"])
return rnn
def do_cross_validation(X,y,eps,maxlen,dimen):
cv_scores = []
fold=0
splitter = ShuffleSplit(n_splits=SPLITS, test_size=0.2, random_state=37863)
rnn2=None
for train_index,valid_index in splitter.split(X):
X_train=X[train_index] # use iloc[] for dataframe
y_train=y[train_index]
X_valid=X[valid_index]
y_valid=y[valid_index]
print("BUILD MODEL")
rnn2=build_model(maxlen,dimen)
print("FIT")
# this is complaining about string to float
start_time=time.time()
history=rnn2.fit(X_train, y_train, # batch_size=10, default=32 works nicely
epochs=eps, verbose=1, # verbose=1 for ascii art, verbose=0 for none
validation_data=(X_valid,y_valid) )
end_time=time.time()
elapsed_time=(end_time-start_time)
fold += 1
print("Fold %d, %d epochs, %d sec"%(fold,eps,elapsed_time))
pd.DataFrame(history.history).plot(figsize=(8,5))
plt.grid(True)
plt.gca().set_ylim(0,1)
plt.show()
scores = rnn2.evaluate(X_valid, y_valid, verbose=0)
print("%s: %.2f%%" % (rnn2.metrics_names[1], scores[1]*100))
# What are the other metrics_names?
# Try this from Geron page 505:
# np.mean(keras.losses.mean_squared_error(y_valid,y_pred))
cv_scores.append(scores[1] * 100)
print()
print("Validation core mean %.2f%% (+/- %.2f%%)" % (np.mean(cv_scores), np.std(cv_scores)))
return rnn2
def make_kmers(MINLEN,MAXLEN,train_set):
(X_train_all,y_train_all)=separate_X_and_y(train_set)
# The returned values are Pandas dataframes.
# print(X_train_all.shape,y_train_all.shape)
# (X_train_all,y_train_all)
# y: Pandas dataframe to Python list.
# y_train_all=y_train_all.values.tolist()
# The sequences lengths are bounded but not uniform.
X_train_all
print(type(X_train_all))
print(X_train_all.shape)
print(X_train_all.iloc[0])
print(len(X_train_all.iloc[0]['sequence']))
# X: List of string to List of uniform-length ordered lists of K-mers.
X_train_kmers=strings_to_vectors(X_train_all,MAXLEN)
# X: true 2D array (no more lists)
X_train_kmers.shape
print("transform...")
# From pandas dataframe to numpy to list to numpy
print(type(X_train_kmers))
num_seqs=len(X_train_kmers)
tmp_seqs=[]
for i in range(num_seqs):
kmer_sequence=X_train_kmers.iloc[i]
tmp_seqs.append(kmer_sequence)
X_train_kmers=np.array(tmp_seqs)
tmp_seqs=None
print(type(X_train_kmers))
print(X_train_kmers)
labels=y_train_all.to_numpy()
return (X_train_kmers,labels)
print("Load data from files.")
nc_seq=load_fasta(NC_FILENAME,0)
pc_seq=load_fasta(PC_FILENAME,1)
all_seq=pd.concat((nc_seq,pc_seq),axis=0)
print("Put aside the test portion.")
(train_set,test_set)=make_train_test(all_seq)
# Do this later when using the test data:
# (X_test,y_test)=separate_X_and_y(test_set)
nc_seq=None
pc_seq=None
all_seq=None
print("Ready: train_set")
train_set
###Output
Load data from files.
Put aside the test portion.
Ready: train_set
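###Markdown
A tiny worked example of the k-mer encoding defined above (a sketch: 'ACGT' is an arbitrary toy sequence, not taken from the data).
###Code
print(KMER_TABLE['NNN'], KMER_TABLE['AAA'], KMER_TABLE['ACG'])  # 0, 1, 7: 'NNN' is padding, 3-mers are numbered lexicographically
toy = pd.DataFrame({'sequence': ['ACGT']})
print(strings_to_vectors(toy, uniform_len=6).values)  # 'ACG' -> 7, 'CGT' -> 28, then zero padding up to length 6
###Output
_____no_output_____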
###Markdown
Len 200-1Kb
###Code
MINLEN=200
MAXLEN=1000
print("Working on full training set, slice by sequence length.")
print("Slice size range [%d - %d)"%(MINLEN,MAXLEN))
subset=make_slice(train_set,MINLEN,MAXLEN) # slice by length; make_kmers below splits one array into X and y
print ("Sequence to Kmer")
(X_train,y_train)=make_kmers(MINLEN,MAXLEN,subset)
print ("Compile the model")
model=build_model(MAXLEN,EMBED_DIMEN)
print(model.summary()) # Print this only once
print ("Cross valiation")
model1=do_cross_validation(X_train,y_train,EPOCHS,MAXLEN,EMBED_DIMEN)
model1.save(FILENAME+'.short.model')
###Output
Working on full training set, slice by sequence length.
Slice size range [200 - 1000)
original (30290, 4)
no short (30290, 4)
no long, no short (8879, 4)
Sequence to Kmer
<class 'pandas.core.frame.DataFrame'>
(8879, 1)
sequence AGTCCCTCCCCAGCCCAGCAGTCCCTCCAGGCTACATCCAGGAGAC...
Name: 1280, dtype: object
348
transform...
<class 'pandas.core.frame.DataFrame'>
<class 'numpy.ndarray'>
[[12 46 54 ... 0 0 0]
[ 9 36 14 ... 0 0 0]
[34 7 28 ... 0 0 0]
...
[37 19 9 ... 0 0 0]
[57 36 15 ... 0 0 0]
[33 3 12 ... 0 0 0]]
Compile the model
COMPILE
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_2 (Embedding) (None, 1000, 16) 1040
_________________________________________________________________
bidirectional_4 (Bidirection (None, 1000, 32) 4224
_________________________________________________________________
bidirectional_5 (Bidirection (None, 1000, 32) 6272
_________________________________________________________________
dense_6 (Dense) (None, 1000, 16) 528
_________________________________________________________________
dense_7 (Dense) (None, 1000, 16) 272
_________________________________________________________________
dense_8 (Dense) (None, 1000, 1) 17
=================================================================
Total params: 12,353
Trainable params: 12,353
Non-trainable params: 0
_________________________________________________________________
None
Cross validation
BUILD MODEL
COMPILE
FIT
Epoch 1/100
222/222 [==============================] - 31s 140ms/step - loss: 0.7028 - accuracy: 0.5207 - val_loss: 0.6707 - val_accuracy: 0.6006
Epoch 2/100
222/222 [==============================] - 30s 136ms/step - loss: 0.6620 - accuracy: 0.6067 - val_loss: 0.6476 - val_accuracy: 0.6244
Epoch 3/100
222/222 [==============================] - 29s 133ms/step - loss: 0.6254 - accuracy: 0.6590 - val_loss: 0.6126 - val_accuracy: 0.6908
Epoch 4/100
222/222 [==============================] - 30s 134ms/step - loss: 0.6180 - accuracy: 0.6778 - val_loss: 0.6096 - val_accuracy: 0.6903
Epoch 5/100
222/222 [==============================] - 30s 134ms/step - loss: 0.6040 - accuracy: 0.6893 - val_loss: 0.6082 - val_accuracy: 0.6985
Epoch 6/100
222/222 [==============================] - 30s 134ms/step - loss: 0.6160 - accuracy: 0.6816 - val_loss: 0.7052 - val_accuracy: 0.5048
Epoch 7/100
222/222 [==============================] - 30s 135ms/step - loss: 0.6565 - accuracy: 0.6117 - val_loss: 0.6406 - val_accuracy: 0.6208
Epoch 8/100
222/222 [==============================] - 30s 134ms/step - loss: 0.6175 - accuracy: 0.6474 - val_loss: 0.6086 - val_accuracy: 0.6613
Epoch 9/100
222/222 [==============================] - 30s 136ms/step - loss: 0.6455 - accuracy: 0.6288 - val_loss: 0.6602 - val_accuracy: 0.6224
Epoch 10/100
222/222 [==============================] - 30s 134ms/step - loss: 0.6520 - accuracy: 0.6268 - val_loss: 0.6357 - val_accuracy: 0.6498
Epoch 11/100
222/222 [==============================] - 30s 133ms/step - loss: 0.6279 - accuracy: 0.6588 - val_loss: 0.6336 - val_accuracy: 0.6421
Epoch 12/100
222/222 [==============================] - 31s 138ms/step - loss: 0.6295 - accuracy: 0.6573 - val_loss: 0.6331 - val_accuracy: 0.6475
Epoch 13/100
222/222 [==============================] - 30s 135ms/step - loss: 0.6264 - accuracy: 0.6556 - val_loss: 0.6308 - val_accuracy: 0.6483
Epoch 14/100
222/222 [==============================] - 30s 136ms/step - loss: 0.6289 - accuracy: 0.6472 - val_loss: 0.6346 - val_accuracy: 0.6298
Epoch 15/100
222/222 [==============================] - 30s 135ms/step - loss: 0.6325 - accuracy: 0.6351 - val_loss: 0.6304 - val_accuracy: 0.6215
Epoch 16/100
222/222 [==============================] - 30s 136ms/step - loss: 0.6032 - accuracy: 0.6590 - val_loss: 0.5784 - val_accuracy: 0.7011
Epoch 17/100
222/222 [==============================] - 31s 139ms/step - loss: 0.6082 - accuracy: 0.6690 - val_loss: 0.6846 - val_accuracy: 0.5176
Epoch 18/100
222/222 [==============================] - 30s 136ms/step - loss: 0.6677 - accuracy: 0.6005 - val_loss: 0.6558 - val_accuracy: 0.6171
Epoch 19/100
222/222 [==============================] - 30s 136ms/step - loss: 0.6661 - accuracy: 0.5991 - val_loss: 0.6649 - val_accuracy: 0.6050
Epoch 20/100
222/222 [==============================] - 30s 136ms/step - loss: 0.6627 - accuracy: 0.6053 - val_loss: 0.6320 - val_accuracy: 0.6929
Epoch 21/100
222/222 [==============================] - 30s 136ms/step - loss: 0.6315 - accuracy: 0.6567 - val_loss: 0.7282 - val_accuracy: 0.5201
Epoch 22/100
222/222 [==============================] - 30s 136ms/step - loss: 0.6583 - accuracy: 0.6068 - val_loss: 0.6658 - val_accuracy: 0.5937
Epoch 23/100
222/222 [==============================] - 31s 139ms/step - loss: 0.6464 - accuracy: 0.6220 - val_loss: 0.6259 - val_accuracy: 0.6483
Epoch 24/100
222/222 [==============================] - 30s 136ms/step - loss: 0.6124 - accuracy: 0.6805 - val_loss: 0.6385 - val_accuracy: 0.6275
Epoch 25/100
222/222 [==============================] - 30s 136ms/step - loss: 0.6367 - accuracy: 0.6317 - val_loss: 0.6220 - val_accuracy: 0.6420
Epoch 26/100
222/222 [==============================] - 30s 136ms/step - loss: 0.6140 - accuracy: 0.6526 - val_loss: 0.6138 - val_accuracy: 0.6802
Epoch 27/100
222/222 [==============================] - 31s 139ms/step - loss: 0.6381 - accuracy: 0.6295 - val_loss: 0.6535 - val_accuracy: 0.6127
Epoch 28/100
222/222 [==============================] - 30s 135ms/step - loss: 0.6454 - accuracy: 0.6276 - val_loss: 0.6394 - val_accuracy: 0.6243
Epoch 29/100
222/222 [==============================] - 30s 137ms/step - loss: 0.6650 - accuracy: 0.6048 - val_loss: 0.6591 - val_accuracy: 0.6083
Epoch 30/100
222/222 [==============================] - 30s 137ms/step - loss: 0.6464 - accuracy: 0.6251 - val_loss: 0.6011 - val_accuracy: 0.6846
Epoch 31/100
222/222 [==============================] - 30s 134ms/step - loss: 0.5985 - accuracy: 0.6759 - val_loss: 0.6061 - val_accuracy: 0.6816
Epoch 32/100
222/222 [==============================] - 30s 137ms/step - loss: 0.6378 - accuracy: 0.6350 - val_loss: 0.6334 - val_accuracy: 0.6319
Epoch 33/100
222/222 [==============================] - 30s 136ms/step - loss: 0.6095 - accuracy: 0.6540 - val_loss: 0.6066 - val_accuracy: 0.6394
Epoch 34/100
222/222 [==============================] - 30s 137ms/step - loss: 0.6080 - accuracy: 0.6633 - val_loss: 0.6227 - val_accuracy: 0.6451
Epoch 35/100
222/222 [==============================] - 30s 135ms/step - loss: 0.6088 - accuracy: 0.6679 - val_loss: 0.6183 - val_accuracy: 0.6531
Epoch 36/100
222/222 [==============================] - 30s 136ms/step - loss: 0.6010 - accuracy: 0.6770 - val_loss: 0.5908 - val_accuracy: 0.6968
Epoch 37/100
222/222 [==============================] - 30s 137ms/step - loss: 0.5842 - accuracy: 0.6968 - val_loss: 0.6015 - val_accuracy: 0.6617
Epoch 38/100
222/222 [==============================] - 31s 137ms/step - loss: 0.5888 - accuracy: 0.6816 - val_loss: 0.5984 - val_accuracy: 0.6513
Epoch 39/100
222/222 [==============================] - 30s 137ms/step - loss: 0.5827 - accuracy: 0.6904 - val_loss: 0.6704 - val_accuracy: 0.6244
Epoch 40/100
222/222 [==============================] - 31s 138ms/step - loss: 0.6130 - accuracy: 0.6687 - val_loss: 0.6050 - val_accuracy: 0.6718
Epoch 41/100
222/222 [==============================] - 30s 137ms/step - loss: 0.5787 - accuracy: 0.6975 - val_loss: 0.5329 - val_accuracy: 0.7562
Epoch 42/100
222/222 [==============================] - 31s 140ms/step - loss: 0.5412 - accuracy: 0.7403 - val_loss: 0.5368 - val_accuracy: 0.7435
Epoch 43/100
222/222 [==============================] - 31s 138ms/step - loss: 0.5108 - accuracy: 0.7612 - val_loss: 0.5127 - val_accuracy: 0.7527
Epoch 44/100
222/222 [==============================] - 31s 138ms/step - loss: 0.4897 - accuracy: 0.7765 - val_loss: 0.4747 - val_accuracy: 0.7771
Epoch 45/100
222/222 [==============================] - 30s 137ms/step - loss: 0.4739 - accuracy: 0.7869 - val_loss: 0.4935 - val_accuracy: 0.7671
Epoch 46/100
222/222 [==============================] - 31s 138ms/step - loss: 0.4735 - accuracy: 0.7818 - val_loss: 0.5458 - val_accuracy: 0.7321
Epoch 47/100
222/222 [==============================] - 31s 138ms/step - loss: 0.4558 - accuracy: 0.7950 - val_loss: 0.4795 - val_accuracy: 0.7812
Epoch 48/100
222/222 [==============================] - 31s 138ms/step - loss: 0.4530 - accuracy: 0.7929 - val_loss: 0.4603 - val_accuracy: 0.7855
Epoch 49/100
222/222 [==============================] - 30s 136ms/step - loss: 0.4560 - accuracy: 0.7926 - val_loss: 0.4633 - val_accuracy: 0.7778
Epoch 50/100
222/222 [==============================] - 31s 139ms/step - loss: 0.4433 - accuracy: 0.8008 - val_loss: 0.4494 - val_accuracy: 0.7935
Epoch 51/100
222/222 [==============================] - 31s 138ms/step - loss: 0.4390 - accuracy: 0.8034 - val_loss: 0.4423 - val_accuracy: 0.7978
Epoch 52/100
222/222 [==============================] - 31s 138ms/step - loss: 0.4443 - accuracy: 0.7981 - val_loss: 0.4431 - val_accuracy: 0.7937
Epoch 53/100
222/222 [==============================] - 31s 139ms/step - loss: 0.4297 - accuracy: 0.8010 - val_loss: 0.4438 - val_accuracy: 0.7921
Epoch 54/100
222/222 [==============================] - 30s 136ms/step - loss: 0.4314 - accuracy: 0.8038 - val_loss: 0.4373 - val_accuracy: 0.7986
Epoch 55/100
222/222 [==============================] - 31s 139ms/step - loss: 0.4298 - accuracy: 0.8071 - val_loss: 0.4332 - val_accuracy: 0.8019
Epoch 56/100
222/222 [==============================] - 30s 136ms/step - loss: 0.4278 - accuracy: 0.8085 - val_loss: 0.4324 - val_accuracy: 0.7997
Epoch 57/100
222/222 [==============================] - 31s 138ms/step - loss: 0.4349 - accuracy: 0.8017 - val_loss: 0.4342 - val_accuracy: 0.7985
Epoch 58/100
222/222 [==============================] - 31s 140ms/step - loss: 0.4229 - accuracy: 0.8098 - val_loss: 0.4307 - val_accuracy: 0.7996
Epoch 59/100
222/222 [==============================] - 30s 136ms/step - loss: 0.4262 - accuracy: 0.8045 - val_loss: 0.4317 - val_accuracy: 0.8023
Epoch 60/100
222/222 [==============================] - 31s 138ms/step - loss: 0.4290 - accuracy: 0.8068 - val_loss: 0.4375 - val_accuracy: 0.8035
Epoch 61/100
222/222 [==============================] - 31s 138ms/step - loss: 0.4258 - accuracy: 0.8054 - val_loss: 0.4362 - val_accuracy: 0.7931
Epoch 62/100
222/222 [==============================] - 30s 137ms/step - loss: 0.4181 - accuracy: 0.8112 - val_loss: 0.4533 - val_accuracy: 0.7844
Epoch 63/100
222/222 [==============================] - 30s 135ms/step - loss: 0.4220 - accuracy: 0.8083 - val_loss: 0.4395 - val_accuracy: 0.7959
Epoch 64/100
222/222 [==============================] - 30s 133ms/step - loss: 0.4148 - accuracy: 0.8145 - val_loss: 0.4245 - val_accuracy: 0.8064
Epoch 65/100
222/222 [==============================] - 29s 132ms/step - loss: 0.4161 - accuracy: 0.8095 - val_loss: 0.4199 - val_accuracy: 0.8085
Epoch 66/100
222/222 [==============================] - 30s 133ms/step - loss: 0.4146 - accuracy: 0.8130 - val_loss: 0.4392 - val_accuracy: 0.8005
Epoch 67/100
222/222 [==============================] - 30s 135ms/step - loss: 0.4547 - accuracy: 0.7858 - val_loss: 0.4227 - val_accuracy: 0.8026
Epoch 68/100
222/222 [==============================] - 30s 135ms/step - loss: 0.4171 - accuracy: 0.8113 - val_loss: 0.4292 - val_accuracy: 0.8003
Epoch 69/100
222/222 [==============================] - 29s 133ms/step - loss: 0.5002 - accuracy: 0.7652 - val_loss: 0.5361 - val_accuracy: 0.7477
Epoch 70/100
222/222 [==============================] - 29s 133ms/step - loss: 0.5436 - accuracy: 0.7427 - val_loss: 0.5138 - val_accuracy: 0.7610
Epoch 71/100
222/222 [==============================] - 30s 133ms/step - loss: 0.5213 - accuracy: 0.7524 - val_loss: 0.5227 - val_accuracy: 0.7460
Epoch 72/100
222/222 [==============================] - 30s 134ms/step - loss: 0.5106 - accuracy: 0.7562 - val_loss: 0.5176 - val_accuracy: 0.7474
Epoch 73/100
222/222 [==============================] - 29s 132ms/step - loss: 0.5056 - accuracy: 0.7604 - val_loss: 0.4998 - val_accuracy: 0.7638
Epoch 74/100
222/222 [==============================] - 30s 134ms/step - loss: 0.5148 - accuracy: 0.7545 - val_loss: 0.5182 - val_accuracy: 0.7528
Epoch 75/100
222/222 [==============================] - 29s 130ms/step - loss: 0.4931 - accuracy: 0.7670 - val_loss: 0.4795 - val_accuracy: 0.7723
Epoch 76/100
222/222 [==============================] - 29s 132ms/step - loss: 0.5118 - accuracy: 0.7501 - val_loss: 0.5116 - val_accuracy: 0.7597
Epoch 77/100
222/222 [==============================] - 29s 133ms/step - loss: 0.4901 - accuracy: 0.7711 - val_loss: 0.4770 - val_accuracy: 0.7854
Epoch 78/100
222/222 [==============================] - 30s 134ms/step - loss: 0.4626 - accuracy: 0.7915 - val_loss: 0.4654 - val_accuracy: 0.7864
Epoch 79/100
222/222 [==============================] - 30s 133ms/step - loss: 0.4776 - accuracy: 0.7793 - val_loss: 0.5041 - val_accuracy: 0.7647
Epoch 80/100
222/222 [==============================] - 30s 133ms/step - loss: 0.4720 - accuracy: 0.7799 - val_loss: 0.5185 - val_accuracy: 0.7759
Epoch 81/100
222/222 [==============================] - 29s 132ms/step - loss: 0.4758 - accuracy: 0.7858 - val_loss: 0.4706 - val_accuracy: 0.7915
Epoch 82/100
222/222 [==============================] - 30s 135ms/step - loss: 0.4664 - accuracy: 0.7870 - val_loss: 0.4808 - val_accuracy: 0.7734
Epoch 83/100
222/222 [==============================] - 29s 132ms/step - loss: 0.4664 - accuracy: 0.7836 - val_loss: 0.4602 - val_accuracy: 0.7858
Epoch 84/100
222/222 [==============================] - 30s 133ms/step - loss: 0.4571 - accuracy: 0.7922 - val_loss: 0.5402 - val_accuracy: 0.7356
Epoch 85/100
222/222 [==============================] - 33s 147ms/step - loss: 0.4560 - accuracy: 0.7907 - val_loss: 0.4618 - val_accuracy: 0.7794
Epoch 86/100
222/222 [==============================] - 31s 141ms/step - loss: 0.4887 - accuracy: 0.7730 - val_loss: 0.4574 - val_accuracy: 0.7939
Epoch 87/100
222/222 [==============================] - 31s 140ms/step - loss: 0.4471 - accuracy: 0.8013 - val_loss: 0.4591 - val_accuracy: 0.7867
Epoch 88/100
222/222 [==============================] - 31s 141ms/step - loss: 0.4433 - accuracy: 0.8015 - val_loss: 0.4852 - val_accuracy: 0.7850
Epoch 89/100
222/222 [==============================] - 32s 145ms/step - loss: 0.4556 - accuracy: 0.7927 - val_loss: 0.4498 - val_accuracy: 0.7862
Epoch 90/100
222/222 [==============================] - 31s 141ms/step - loss: 0.4520 - accuracy: 0.7908 - val_loss: 0.4517 - val_accuracy: 0.7857
Epoch 91/100
222/222 [==============================] - 31s 142ms/step - loss: 0.4525 - accuracy: 0.7940 - val_loss: 0.4572 - val_accuracy: 0.7916
Epoch 92/100
222/222 [==============================] - 32s 143ms/step - loss: 0.4891 - accuracy: 0.7676 - val_loss: 0.4929 - val_accuracy: 0.7596
Epoch 93/100
222/222 [==============================] - 31s 140ms/step - loss: 0.4659 - accuracy: 0.7836 - val_loss: 0.4627 - val_accuracy: 0.7847
Epoch 94/100
222/222 [==============================] - 31s 141ms/step - loss: 0.4474 - accuracy: 0.7952 - val_loss: 0.4559 - val_accuracy: 0.7798
Epoch 95/100
222/222 [==============================] - 31s 141ms/step - loss: 0.4292 - accuracy: 0.8071 - val_loss: 0.4379 - val_accuracy: 0.7971
Epoch 96/100
222/222 [==============================] - 31s 141ms/step - loss: 0.4281 - accuracy: 0.8050 - val_loss: 0.4672 - val_accuracy: 0.7966
Epoch 97/100
222/222 [==============================] - 31s 142ms/step - loss: 0.4212 - accuracy: 0.8153 - val_loss: 0.4403 - val_accuracy: 0.8018
Epoch 98/100
222/222 [==============================] - 32s 145ms/step - loss: 0.4305 - accuracy: 0.8051 - val_loss: 0.4426 - val_accuracy: 0.7979
Epoch 99/100
222/222 [==============================] - 32s 146ms/step - loss: 0.4164 - accuracy: 0.8109 - val_loss: 0.4353 - val_accuracy: 0.8059
Epoch 100/100
222/222 [==============================] - 32s 142ms/step - loss: 0.4622 - accuracy: 0.7872 - val_loss: 0.5483 - val_accuracy: 0.7414
Fold 1, 100 epochs, 3052 sec
###Markdown
Len 1K-2Kb
###Code
MINLEN=1000
MAXLEN=2000
print("Working on full training set, slice by sequence length.")
print("Slice size range [%d - %d)"%(MINLEN,MAXLEN))
subset=make_slice(train_set,MINLEN,MAXLEN) # slice by length; make_kmers below splits one array into X and y
print ("Sequence to Kmer")
(X_train,y_train)=make_kmers(MINLEN,MAXLEN,subset)
print ("Compile the model")
model=build_model(MAXLEN,EMBED_DIMEN)
print(model.summary()) # Print this only once
print ("Cross valiation")
model2=do_cross_validation(X_train,y_train,EPOCHS,MAXLEN,EMBED_DIMEN)
model2.save(FILENAME+'.medium.model')
###Output
Working on full training set, slice by sequence length.
Slice size range [1000 - 2000)
original (30290, 4)
no short (9273, 4)
no long, no short (3368, 4)
Sequence to Kmer
<class 'pandas.core.frame.DataFrame'>
(3368, 1)
sequence GGCGGGGTCGACTGACGGTAACGGGGCAGAGAGGCTGTTCGCAGAG...
Name: 12641, dtype: object
1338
transform...
<class 'pandas.core.frame.DataFrame'>
<class 'numpy.ndarray'>
[[42 39 27 ... 0 0 0]
[57 34 5 ... 0 0 0]
[27 44 47 ... 0 0 0]
...
[44 47 57 ... 0 0 0]
[10 37 20 ... 0 0 0]
[47 60 48 ... 0 0 0]]
Compile the model
COMPILE
Model: "sequential_4"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_4 (Embedding) (None, 2000, 16) 1040
_________________________________________________________________
bidirectional_8 (Bidirection (None, 2000, 32) 4224
_________________________________________________________________
bidirectional_9 (Bidirection (None, 2000, 32) 6272
_________________________________________________________________
dense_12 (Dense) (None, 2000, 16) 528
_________________________________________________________________
dense_13 (Dense) (None, 2000, 16) 272
_________________________________________________________________
dense_14 (Dense) (None, 2000, 1) 17
=================================================================
Total params: 12,353
Trainable params: 12,353
Non-trainable params: 0
_________________________________________________________________
None
Cross validation
BUILD MODEL
COMPILE
FIT
Epoch 1/100
85/85 [==============================] - 24s 280ms/step - loss: 0.7551 - accuracy: 0.4760 - val_loss: 0.6717 - val_accuracy: 0.6039
Epoch 2/100
85/85 [==============================] - 23s 266ms/step - loss: 0.6632 - accuracy: 0.6221 - val_loss: 0.6732 - val_accuracy: 0.6039
Epoch 3/100
85/85 [==============================] - 23s 272ms/step - loss: 0.6636 - accuracy: 0.6221 - val_loss: 0.6722 - val_accuracy: 0.6039
Epoch 4/100
85/85 [==============================] - 22s 264ms/step - loss: 0.6634 - accuracy: 0.6221 - val_loss: 0.6719 - val_accuracy: 0.6039
Epoch 5/100
85/85 [==============================] - 23s 266ms/step - loss: 0.6633 - accuracy: 0.6221 - val_loss: 0.6717 - val_accuracy: 0.6039
Epoch 6/100
85/85 [==============================] - 22s 260ms/step - loss: 0.6633 - accuracy: 0.6221 - val_loss: 0.6726 - val_accuracy: 0.6039
Epoch 7/100
85/85 [==============================] - 23s 268ms/step - loss: 0.6634 - accuracy: 0.6221 - val_loss: 0.6720 - val_accuracy: 0.6039
Epoch 8/100
85/85 [==============================] - 22s 264ms/step - loss: 0.6634 - accuracy: 0.6221 - val_loss: 0.6728 - val_accuracy: 0.6039
Epoch 9/100
85/85 [==============================] - 22s 264ms/step - loss: 0.6633 - accuracy: 0.6221 - val_loss: 0.6731 - val_accuracy: 0.6039
Epoch 10/100
85/85 [==============================] - 23s 265ms/step - loss: 0.6638 - accuracy: 0.6221 - val_loss: 0.6744 - val_accuracy: 0.6039
Epoch 11/100
85/85 [==============================] - 22s 261ms/step - loss: 0.6610 - accuracy: 0.6235 - val_loss: 0.6577 - val_accuracy: 0.6221
Epoch 12/100
85/85 [==============================] - 22s 256ms/step - loss: 0.6573 - accuracy: 0.6237 - val_loss: 0.6546 - val_accuracy: 0.6222
Epoch 13/100
85/85 [==============================] - 22s 263ms/step - loss: 0.6443 - accuracy: 0.6216 - val_loss: 0.6341 - val_accuracy: 0.6043
Epoch 14/100
85/85 [==============================] - 23s 265ms/step - loss: 0.6415 - accuracy: 0.6304 - val_loss: 0.6541 - val_accuracy: 0.6286
Epoch 15/100
85/85 [==============================] - 22s 263ms/step - loss: 0.6454 - accuracy: 0.6337 - val_loss: 0.6366 - val_accuracy: 0.6539
Epoch 16/100
85/85 [==============================] - 22s 262ms/step - loss: 0.6303 - accuracy: 0.6370 - val_loss: 0.6053 - val_accuracy: 0.6487
Epoch 17/100
85/85 [==============================] - 22s 260ms/step - loss: 0.6341 - accuracy: 0.6422 - val_loss: 0.6256 - val_accuracy: 0.6554
Epoch 18/100
85/85 [==============================] - 23s 265ms/step - loss: 0.6250 - accuracy: 0.6609 - val_loss: 0.6137 - val_accuracy: 0.6714
Epoch 19/100
85/85 [==============================] - 22s 263ms/step - loss: 0.6269 - accuracy: 0.6504 - val_loss: 0.6455 - val_accuracy: 0.6187
Epoch 20/100
85/85 [==============================] - 22s 262ms/step - loss: 0.6490 - accuracy: 0.6355 - val_loss: 0.6687 - val_accuracy: 0.6124
Epoch 21/100
85/85 [==============================] - 22s 264ms/step - loss: 0.6591 - accuracy: 0.6293 - val_loss: 0.6687 - val_accuracy: 0.6108
Epoch 22/100
85/85 [==============================] - 22s 264ms/step - loss: 0.6458 - accuracy: 0.6396 - val_loss: 0.6148 - val_accuracy: 0.6689
Epoch 23/100
85/85 [==============================] - 23s 267ms/step - loss: 0.6320 - accuracy: 0.6433 - val_loss: 0.6095 - val_accuracy: 0.6570
Epoch 24/100
85/85 [==============================] - 22s 262ms/step - loss: 0.6323 - accuracy: 0.6418 - val_loss: 0.6527 - val_accuracy: 0.6087
Epoch 25/100
85/85 [==============================] - 22s 264ms/step - loss: 0.6374 - accuracy: 0.6304 - val_loss: 0.6252 - val_accuracy: 0.6161
Epoch 26/100
85/85 [==============================] - 23s 267ms/step - loss: 0.6115 - accuracy: 0.6529 - val_loss: 0.5767 - val_accuracy: 0.6808
Epoch 27/100
85/85 [==============================] - 23s 267ms/step - loss: 0.6191 - accuracy: 0.6662 - val_loss: 0.6568 - val_accuracy: 0.6388
Epoch 28/100
85/85 [==============================] - 22s 262ms/step - loss: 0.6278 - accuracy: 0.6539 - val_loss: 0.6099 - val_accuracy: 0.6445
Epoch 29/100
85/85 [==============================] - 23s 267ms/step - loss: 0.6206 - accuracy: 0.6351 - val_loss: 0.5929 - val_accuracy: 0.6694
Epoch 30/100
85/85 [==============================] - 23s 267ms/step - loss: 0.6043 - accuracy: 0.6624 - val_loss: 0.5859 - val_accuracy: 0.7179
Epoch 31/100
85/85 [==============================] - 23s 268ms/step - loss: 0.5916 - accuracy: 0.6711 - val_loss: 0.5916 - val_accuracy: 0.7234
Epoch 32/100
85/85 [==============================] - 23s 273ms/step - loss: 0.5817 - accuracy: 0.6994 - val_loss: 0.5805 - val_accuracy: 0.7157
Epoch 33/100
85/85 [==============================] - 23s 270ms/step - loss: 0.5679 - accuracy: 0.6932 - val_loss: 0.5501 - val_accuracy: 0.6869
Epoch 34/100
85/85 [==============================] - 23s 270ms/step - loss: 0.6697 - accuracy: 0.6366 - val_loss: 0.6640 - val_accuracy: 0.6231
Epoch 35/100
85/85 [==============================] - 22s 263ms/step - loss: 0.6520 - accuracy: 0.6354 - val_loss: 0.6489 - val_accuracy: 0.6346
Epoch 36/100
85/85 [==============================] - 23s 268ms/step - loss: 0.6427 - accuracy: 0.6478 - val_loss: 0.6343 - val_accuracy: 0.6553
Epoch 37/100
85/85 [==============================] - 23s 267ms/step - loss: 0.6170 - accuracy: 0.6769 - val_loss: 0.6134 - val_accuracy: 0.6858
Epoch 38/100
85/85 [==============================] - 22s 265ms/step - loss: 0.5903 - accuracy: 0.7042 - val_loss: 0.5586 - val_accuracy: 0.7337
Epoch 39/100
85/85 [==============================] - 23s 269ms/step - loss: 0.5752 - accuracy: 0.6994 - val_loss: 0.5645 - val_accuracy: 0.6951
Epoch 40/100
85/85 [==============================] - 22s 260ms/step - loss: 0.5672 - accuracy: 0.7103 - val_loss: 0.5495 - val_accuracy: 0.7395
Epoch 41/100
85/85 [==============================] - 23s 268ms/step - loss: 0.5734 - accuracy: 0.6798 - val_loss: 0.5972 - val_accuracy: 0.6217
Epoch 42/100
85/85 [==============================] - 22s 264ms/step - loss: 0.5786 - accuracy: 0.6692 - val_loss: 0.5613 - val_accuracy: 0.7088
Epoch 43/100
85/85 [==============================] - 22s 261ms/step - loss: 0.5529 - accuracy: 0.7174 - val_loss: 0.5694 - val_accuracy: 0.6977
Epoch 44/100
85/85 [==============================] - 23s 267ms/step - loss: 0.5590 - accuracy: 0.7125 - val_loss: 0.5276 - val_accuracy: 0.7380
Epoch 45/100
85/85 [==============================] - 22s 264ms/step - loss: 0.5501 - accuracy: 0.7170 - val_loss: 0.5456 - val_accuracy: 0.7036
Epoch 46/100
85/85 [==============================] - 22s 263ms/step - loss: 0.5495 - accuracy: 0.7142 - val_loss: 0.5578 - val_accuracy: 0.7345
Epoch 47/100
85/85 [==============================] - 22s 260ms/step - loss: 0.5509 - accuracy: 0.7267 - val_loss: 0.5725 - val_accuracy: 0.7271
Epoch 48/100
85/85 [==============================] - 23s 268ms/step - loss: 0.5816 - accuracy: 0.7005 - val_loss: 0.5873 - val_accuracy: 0.6681
Epoch 49/100
85/85 [==============================] - 23s 265ms/step - loss: 0.5570 - accuracy: 0.7282 - val_loss: 0.5673 - val_accuracy: 0.7269
Epoch 50/100
85/85 [==============================] - 23s 266ms/step - loss: 0.5577 - accuracy: 0.7305 - val_loss: 0.6045 - val_accuracy: 0.7056
Epoch 51/100
85/85 [==============================] - 23s 269ms/step - loss: 0.5535 - accuracy: 0.7199 - val_loss: 0.5462 - val_accuracy: 0.7460
Epoch 52/100
85/85 [==============================] - 23s 268ms/step - loss: 0.5532 - accuracy: 0.7207 - val_loss: 0.5309 - val_accuracy: 0.7577
Epoch 53/100
85/85 [==============================] - 23s 268ms/step - loss: 0.6041 - accuracy: 0.6877 - val_loss: 0.6362 - val_accuracy: 0.5958
Epoch 54/100
85/85 [==============================] - 23s 268ms/step - loss: 0.6258 - accuracy: 0.6431 - val_loss: 0.6915 - val_accuracy: 0.6060
Epoch 55/100
85/85 [==============================] - 23s 265ms/step - loss: 0.6575 - accuracy: 0.6245 - val_loss: 0.6577 - val_accuracy: 0.6071
Epoch 56/100
85/85 [==============================] - 23s 266ms/step - loss: 0.6479 - accuracy: 0.6256 - val_loss: 0.6605 - val_accuracy: 0.6119
Epoch 57/100
85/85 [==============================] - 22s 264ms/step - loss: 0.6532 - accuracy: 0.6284 - val_loss: 0.6639 - val_accuracy: 0.6120
Epoch 58/100
85/85 [==============================] - 23s 265ms/step - loss: 0.6440 - accuracy: 0.6382 - val_loss: 0.6001 - val_accuracy: 0.7032
Epoch 59/100
85/85 [==============================] - 22s 263ms/step - loss: 0.5990 - accuracy: 0.6919 - val_loss: 0.7140 - val_accuracy: 0.6307
Epoch 60/100
85/85 [==============================] - 23s 268ms/step - loss: 0.6665 - accuracy: 0.6281 - val_loss: 0.6637 - val_accuracy: 0.6134
Epoch 61/100
85/85 [==============================] - 23s 266ms/step - loss: 0.6388 - accuracy: 0.6433 - val_loss: 0.6225 - val_accuracy: 0.6528
Epoch 62/100
85/85 [==============================] - 23s 266ms/step - loss: 0.6045 - accuracy: 0.6661 - val_loss: 0.5678 - val_accuracy: 0.6743
Epoch 63/100
85/85 [==============================] - 23s 268ms/step - loss: 0.5690 - accuracy: 0.7000 - val_loss: 0.5399 - val_accuracy: 0.7395
Epoch 64/100
85/85 [==============================] - 23s 269ms/step - loss: 0.5594 - accuracy: 0.7102 - val_loss: 0.5395 - val_accuracy: 0.6951
Epoch 65/100
85/85 [==============================] - 22s 261ms/step - loss: 0.5324 - accuracy: 0.7359 - val_loss: 0.5268 - val_accuracy: 0.7556
Epoch 66/100
85/85 [==============================] - 22s 260ms/step - loss: 0.5299 - accuracy: 0.7392 - val_loss: 0.5106 - val_accuracy: 0.7588
Epoch 67/100
85/85 [==============================] - 23s 268ms/step - loss: 0.5146 - accuracy: 0.7543 - val_loss: 0.4983 - val_accuracy: 0.7733
Epoch 68/100
85/85 [==============================] - 22s 257ms/step - loss: 0.4984 - accuracy: 0.7632 - val_loss: 0.5436 - val_accuracy: 0.7185
Epoch 69/100
85/85 [==============================] - 22s 261ms/step - loss: 0.4911 - accuracy: 0.7725 - val_loss: 0.5059 - val_accuracy: 0.7555
Epoch 70/100
85/85 [==============================] - 23s 266ms/step - loss: 0.4856 - accuracy: 0.7702 - val_loss: 0.5173 - val_accuracy: 0.7374
Epoch 71/100
85/85 [==============================] - 22s 261ms/step - loss: 0.4913 - accuracy: 0.7648 - val_loss: 0.4783 - val_accuracy: 0.7825
Epoch 72/100
85/85 [==============================] - 22s 261ms/step - loss: 0.4549 - accuracy: 0.7944 - val_loss: 0.5295 - val_accuracy: 0.7286
Epoch 73/100
85/85 [==============================] - 23s 267ms/step - loss: 0.4760 - accuracy: 0.7832 - val_loss: 0.5062 - val_accuracy: 0.7460
Epoch 74/100
85/85 [==============================] - 23s 271ms/step - loss: 0.4710 - accuracy: 0.7823 - val_loss: 0.4666 - val_accuracy: 0.7807
Epoch 75/100
85/85 [==============================] - 22s 262ms/step - loss: 0.4618 - accuracy: 0.7850 - val_loss: 0.4519 - val_accuracy: 0.7914
Epoch 76/100
85/85 [==============================] - 22s 263ms/step - loss: 0.4464 - accuracy: 0.7986 - val_loss: 0.4363 - val_accuracy: 0.8016
Epoch 77/100
85/85 [==============================] - 22s 263ms/step - loss: 0.4514 - accuracy: 0.7880 - val_loss: 0.4639 - val_accuracy: 0.7786
Epoch 78/100
85/85 [==============================] - 23s 267ms/step - loss: 0.4447 - accuracy: 0.8002 - val_loss: 0.4619 - val_accuracy: 0.7754
Epoch 79/100
85/85 [==============================] - 23s 265ms/step - loss: 0.4398 - accuracy: 0.8044 - val_loss: 0.4314 - val_accuracy: 0.8020
Epoch 80/100
85/85 [==============================] - 23s 266ms/step - loss: 0.4465 - accuracy: 0.7969 - val_loss: 0.4504 - val_accuracy: 0.7931
Epoch 81/100
85/85 [==============================] - 22s 263ms/step - loss: 0.4339 - accuracy: 0.8028 - val_loss: 0.4381 - val_accuracy: 0.7974
Epoch 82/100
85/85 [==============================] - 22s 262ms/step - loss: 0.4270 - accuracy: 0.8128 - val_loss: 0.4851 - val_accuracy: 0.7658
Epoch 83/100
85/85 [==============================] - 22s 262ms/step - loss: 0.4268 - accuracy: 0.8099 - val_loss: 0.4201 - val_accuracy: 0.8060
Epoch 84/100
85/85 [==============================] - 22s 263ms/step - loss: 0.4178 - accuracy: 0.8175 - val_loss: 0.4929 - val_accuracy: 0.7556
Epoch 85/100
85/85 [==============================] - 23s 267ms/step - loss: 0.4221 - accuracy: 0.8125 - val_loss: 0.4382 - val_accuracy: 0.7974
Epoch 86/100
85/85 [==============================] - 23s 271ms/step - loss: 0.4193 - accuracy: 0.8176 - val_loss: 0.4268 - val_accuracy: 0.8030
Epoch 87/100
85/85 [==============================] - 22s 260ms/step - loss: 0.4138 - accuracy: 0.8219 - val_loss: 0.4043 - val_accuracy: 0.8114
Epoch 88/100
85/85 [==============================] - 22s 255ms/step - loss: 0.4149 - accuracy: 0.8194 - val_loss: 0.4185 - val_accuracy: 0.8026
Epoch 89/100
85/85 [==============================] - 22s 261ms/step - loss: 0.4205 - accuracy: 0.8192 - val_loss: 0.5763 - val_accuracy: 0.6881
Epoch 90/100
85/85 [==============================] - 22s 256ms/step - loss: 0.4197 - accuracy: 0.8110 - val_loss: 0.4213 - val_accuracy: 0.8033
Epoch 91/100
85/85 [==============================] - 22s 259ms/step - loss: 0.4205 - accuracy: 0.8127 - val_loss: 0.4280 - val_accuracy: 0.8010
Epoch 92/100
85/85 [==============================] - 22s 260ms/step - loss: 0.4009 - accuracy: 0.8236 - val_loss: 0.3993 - val_accuracy: 0.8169
Epoch 93/100
85/85 [==============================] - 22s 254ms/step - loss: 0.4215 - accuracy: 0.8119 - val_loss: 0.3993 - val_accuracy: 0.8183
Epoch 94/100
85/85 [==============================] - 22s 255ms/step - loss: 0.4018 - accuracy: 0.8259 - val_loss: 0.4078 - val_accuracy: 0.8119
Epoch 95/100
85/85 [==============================] - 22s 256ms/step - loss: 0.4023 - accuracy: 0.8265 - val_loss: 0.3954 - val_accuracy: 0.8196
Epoch 96/100
85/85 [==============================] - 22s 254ms/step - loss: 0.3957 - accuracy: 0.8265 - val_loss: 0.3999 - val_accuracy: 0.8179
Epoch 97/100
85/85 [==============================] - 22s 257ms/step - loss: 0.3984 - accuracy: 0.8255 - val_loss: 0.4089 - val_accuracy: 0.8164
Epoch 98/100
85/85 [==============================] - 22s 255ms/step - loss: 0.4039 - accuracy: 0.8227 - val_loss: 0.4048 - val_accuracy: 0.8170
Epoch 99/100
85/85 [==============================] - 21s 252ms/step - loss: 0.3993 - accuracy: 0.8245 - val_loss: 0.3872 - val_accuracy: 0.8245
Epoch 100/100
85/85 [==============================] - 22s 259ms/step - loss: 0.3903 - accuracy: 0.8341 - val_loss: 0.3940 - val_accuracy: 0.8203
Fold 1, 100 epochs, 2272 sec
###Markdown
Len 2K-3Kb
###Code
MINLEN=2000
MAXLEN=3000
print("Working on full training set, slice by sequence length.")
print("Slice size range [%d - %d)"%(MINLEN,MAXLEN))
subset=make_slice(train_set,MINLEN,MAXLEN) # slice by length; make_kmers below splits one array into X and y
print ("Sequence to Kmer")
(X_train,y_train)=make_kmers(MINLEN,MAXLEN,subset)
print ("Compile the model")
model=build_model(MAXLEN,EMBED_DIMEN)
print(model.summary()) # Print this only once
print ("Cross valiation")
model3=do_cross_validation(X_train,y_train,EPOCHS,MAXLEN,EMBED_DIMEN)
model3.save(FILENAME+'.long.model')
#model1.save(FILENAME+'.short.model')
###Output
_____no_output_____ |
notebooks/Python3-Language/04-Functions and Modules/05-Nested Functions and Scope.ipynb | ###Markdown
LEGB Rule:L: Local — Names assigned in any way within a function (def or lambda), and not declared global in that function.E: Enclosing function locals — Names in the local scope of any and all enclosing functions (def or lambda), from inner to outer.G: Global (module) — Names assigned at the top-level of a module file, or declared global in a def within the file.B: Built-in (Python) — Names preassigned in the built-in names module : open, range, SyntaxError,...
###Code
greeting = "Hello from the global scope"
def greet():
#greeting = "Hello from enclosing scope"
def nested():
#greeting = "Hello from local scope"
print(greeting)
nested()
result = greet()
print(greeting)
result
#comment the innermost, then level above, and above
#list = don't override and don't assign
greeting = "Hello from the global scope"
def greet(greeting):
print(f'Greet in func:{greeting}')
greeting = "Hello from enclosing scope"
print(f'Greet in func:{greeting}')
def nested():
greeting = "Hello from local scope"
print(greeting)
nested()
result = greet("test")
result
print(greeting)
greeting = "Hello from the global scope"
def greet():
global greeting
print(f'Greet in func:{greeting}')
greeting = "Hello from enclosing scope"
print(f'Greet in func:{greeting}')
def nested():
greeting = "Hello from local scope"
print(greeting)
nested()
result = greet()
result
print(greeting)
#can't have argument and global for the same name
#avoid using global keyword and use global variables as rarely as possible
#if you need to change a value, use a function and return from it new value, don't use globals
#when a global variable is used from different place, then there is a high change of spoiling it unintentionally
###Output
_____no_output_____ |
Day7/Untitled.ipynb | ###Markdown
Just cleaning out the reviews that were badly annotated:
###Code
labeled_data = labeled_data[labeled_data.infrastructure != -1]
labeled_data = labeled_data[labeled_data.cost != -1]
labeled_data = labeled_data[labeled_data.family != 9]
labeled_data = labeled_data.reset_index(drop = True)
english_stopwords = ["a", "about", "above", "above", "across", "after", "afterwards",
"again", "against", "all", "almost", "alone", "along", "already",
"also","although","always","am","among", "amongst", "amoungst",
"amount", "an", "and", "another", "any","anyhow","anyone",
"anything","anyway", "anywhere", "are", "around", "as", "at",
"back","be","became", "because","become","becomes", "becoming",
"been", "before", "beforehand", "behind", "being", "below",
"beside", "besides", "between", "beyond", "bill", "both",
"bottom","but", "by", "call", "can", "cannot", "cant", "co",
"con", "could", "couldnt", "cry", "de", "describe", "detail",
"do", "done", "down", "due", "during", "each", "eg", "eight",
"either", "eleven","else", "elsewhere", "empty", "enough", "etc",
"even", "ever", "every", "everyone", "everything", "everywhere",
"except", "few", "fifteen", "fify", "fill", "find", "fire", "first",
"five", "for", "former", "formerly", "forty", "found", "four", "from",
"front", "full", "further", "get", "give", "good", "great", "woburn", "go",
"had", "has", "hasnt", "have", "he", "hence", "her", "here", "hereafter",
"hereby", "herein", "hereupon", "hers", "herself", "him", "himself", "his",
"how", "however", "hundred", "ie", "if", "in", "inc", "indeed", "interest",
"into", "is", "it", "its", "itself", "keep", "last", "latter", "latterly",
"least", "less", "ltd", "made", "many", "may", "me", "meanwhile", "might",
"mill", "mine", "more", "moreover", "most", "mostly", "move", "much", "must",
"my", "myself", "name", "namely", "neither", "never", "nevertheless", "next",
"nine", "no", "nobody", "none", "noone", "nor", "not", "nothing", "now", "nowhere",
"of", "off", "often", "on", "once", "one", "only", "onto", "or", "other", "others",
"otherwise", "our", "ours", "ourselves", "out", "over", "own","part", "per", "perhaps",
"please", "put", "rather", "re", "same", "see", "seem", "seemed", "seeming", "seems",
"serious", "several", "she", "should", "show", "side", "since", "sincere", "six", "sixty",
"so", "some", "somehow", "someone", "something", "sometime", "sometimes", "somewhere",
"still", "such", "system", "take", "ten", "than", "that", "the", "their", "them",
"themselves", "then", "thence", "there", "thereafter", "thereby", "therefore", "therein",
"thereupon", "these", "they", "thickv", "thin", "third", "this", "those", "though", "three",
"through", "throughout", "thru", "thus", "to", "together", "too", "top", "toward", "towards",
"twelve", "twenty", "two", "un", "under", "until", "up", "upon", "us", "very", "via", "was",
"we", "well", "were", "what", "whatever", "when", "whence", "whenever", "where", "whereafter",
"whereas", "whereby", "wherein", "whereupon", "wherever", "whether", "which", "while", "whither",
"who", "whoever", "whole", "whom", "whose", "why", "will", "with", "within", "without", "would",
"yet", "you", "your", "yours", "yourself", "yourselves", "the"]
def character_replacement(input_string):
character_mapping = {"\\u00e9": "é",
"\\u2019": "'",
"\\": "",
"\\u00fb": "û",
"u00e8": "è",
"u00e0": "à",
"u00f4": "ô",
"u00ea": "ê",
"u00ee": "i",
"u00fb": "û",
"u2018": "'",
"u00e2": "a",
"u00ab": "'",
"u00bb": "'",
"u00e7": "ç",
"u00e2": "â",
"u00f9": "ù",
"u00a3": "£",
}
for character in character_mapping:
input_string = input_string.replace(character, character_mapping[character])
input_string = input_string.lower()
characters_to_remove = ["@", "/", "#", ".", ",", "!", "?", "(", ")", "-", "_", "’", "'", "\"", ":", "1", "2", "3", "4", "5", "6", "7", "8", "9", "0"]
transformation_dict = {initial: " " for initial in characters_to_remove}
no_punctuation_reviews = input_string.translate(str.maketrans(transformation_dict))
return no_punctuation_reviews
def tokenize(input_string):
return word_tokenize(input_string)
def remove_stop_words(input_tokens, english_stopwords = english_stopwords):
return [token for token in input_tokens if token not in english_stopwords]
lemmatizer = WordNetLemmatizer()
def lemmatize(tokens, lemmatizer = lemmatizer):
tokens = [lemmatizer.lemmatize(lemmatizer.lemmatize(lemmatizer.lemmatize(token,pos='a'),pos='v'),pos='n') for token in tokens]
return tokens
labeled_data['review'] = labeled_data['review'].apply(lambda x: character_replacement(x))
labeled_data['tokens'] = labeled_data['review'].apply(lambda x: tokenize(x))
labeled_data['tokens'] = labeled_data['tokens'].apply(lambda token_list: [meaningful_word for meaningful_word in token_list if len(meaningful_word) > 3])
labeled_data['tokens'] = labeled_data['tokens'].apply(lambda x: remove_stop_words(x))
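# Note: the lemmatize() helper defined above is never applied in this cell.
# If lemmatization of the tokens is wanted (an assumption, not part of the original pipeline),
# it could be added as a final step like this:
# labeled_data['tokens'] = labeled_data['tokens'].apply(lambda x: lemmatize(x))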
###Output
_____no_output_____
###Markdown
COST LABEL Train test split on manually labeled data:
###Code
training_set = {'tokens' : list(labeled_data['tokens'])[:2000],
'labels' : list(labeled_data['cost'])[:2000]}
training_set = pd.DataFrame(training_set)
test_set = {'tokens' : list(labeled_data['tokens'])[2000:],
'labels' : list(labeled_data['cost'])[2000:]}
test_set = pd.DataFrame(test_set)
###Output
_____no_output_____
###Markdown
Re-arranging training set to take into account unlabeled tokens and their missing label
###Code
semi_supervised_data = {'tokens' : list(training_set['tokens']) + list(unlabeled_data['tokens']),
'labels' : list(training_set['labels']) + [-1]*len(unlabeled_data)}
# We use -1 to encode unlabeled samples
semi_supervised_data = pd.DataFrame(semi_supervised_data)
semi_supervised_data.head()
labeled_data.shape
semi_supervised_data.shape
labeled_data.head(1)
w2v = KeyedVectors.load_word2vec_format(path_to_google_vectors + 'GoogleNews-vectors-negative300.bin', binary = True)
def my_vector_getter(word, wv = w2v) :
# returns the vector of a word
try:
word_array = wv[word].reshape(1,-1)
return word_array
except :
# if word not in google word2vec vocabulary, return vector with low norm
return np.zeros((1,300))
def document_embedding(text, wv = w2v) :
# returns naïve document embedding
embeddings = np.concatenate([my_vector_getter(token) for token in text])
centroid = np.mean(embeddings, axis=0).reshape(1,-1)
return centroid
document_embedding(semi_supervised_data['tokens'][0]).shape
###Output
_____no_output_____
###Markdown
Train embedding:
###Code
X = np.zeros((len(semi_supervised_data), 300))
for i in range(len(semi_supervised_data)) :
X[i] = document_embedding(semi_supervised_data['tokens'][i])
#X_values = X.values
X_train = X[:2000]
Y_train = training_set['labels'].values
###Output
_____no_output_____
###Markdown
Test embedding:
###Code
X_test = np.zeros((len(test_set), 300))
for i in range(len(test_set)) :
X_test[i] = document_embedding(test_set['tokens'][i])
#X_test_pca = pca.transform(X_test)
Y_test = test_set['labels'].values
###Output
_____no_output_____
###Markdown
Fitting the model
###Code
label_spreading_model = LabelSpreading()
model_s = label_spreading_model.fit(X_train, Y_train)
pred = model_s.predict(X_test)
print("\n")
print("Using count vectorization")
print("\n")
acc_count = accuracy_score(Y_test,pred)
prec_count = precision_score(Y_test, pred)
sens_count = recall_score(Y_test,pred)
print("Accuracy :", acc_count)
print("Precision :", prec_count)
print("Sensitivity :", sens_count)
###Output
Using count vectorization
Accuracy : 0.5561694290976059
Precision : 0.5421455938697318
Sensitivity : 0.9929824561403509
###Markdown
Propagating:
###Code
Y_train.shape
X_train = np.concatenate((X_train,X_test), axis=0)
Y_train = np.concatenate((Y_train,pred), axis=0)
X_train
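# A possible next step (not in the original notebook): refit the label-spreading model on the
# augmented training set so the propagated pseudo-labels are actually used, e.g.
# model_propagated = LabelSpreading().fit(X_train, Y_train)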
###Output
_____no_output_____ |
toying around.ipynb | ###Markdown
Import all packages
###Code
# from hw2
import pydub
from pydub import AudioSegment
from pydub.playback import play
from python_speech_features import mfcc
from python_speech_features import logfbank
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from time import sleep
import scipy.io.wavfile as wav
from glob import glob
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import pandas as pd
import os
import torch
import torch.nn as nn
import torch.nn.functional as F
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
###Output
Mounted at /content/drive
###Markdown
Preparing the dataset
###Code
#!ls /content/drive/'My Drive'/validated_clips/validated_clip/1.000000
downloaded_files = glob("/content/drive/My Drive/validated_clips/validated_clip/**/*.mp3", recursive=True)
num_downloaded_files = len(downloaded_files)
print(num_downloaded_files)
dev = pd.read_csv("/content/drive/My Drive/tsvs/dev.tsv", sep="\t")
train = pd.read_csv("/content/drive/My Drive/tsvs/train.tsv", sep="\t")
test = pd.read_csv("/content/drive/My Drive/tsvs/test.tsv", sep="\t")
validated = pd.read_csv("/content/drive/My Drive/tsvs/validated.tsv", sep="\t")
invalidated = pd.read_csv("/content/drive/My Drive/tsvs/invalidated.tsv", sep="\t")
#print(dev)
#TODO use numpy arrays instead dataframes
#TODO how are we balancing classes?
splits = {
"dev": dev,
"train": train,
"test": test,
"validated": validated,
"invalidated": invalidated
}
#print(splits)
###Output
_____no_output_____
###Markdown
Fix the paths
###Code
#TODO: Fix prefixes
prefix = "./data/audios/"
for key in splits:
splits[key]['path'] = prefix + splits[key]['path'].astype(str)
#print(splits)
###Output
_____no_output_____
###Markdown
Drop the rows with NaN accent values
###Code
for key in splits:
print(key, len(splits[key]))
for key in splits:
splits[key].dropna(axis=0, subset=["accent"], inplace=True)
print(len(splits[key]))
###Output
2100
135391
1398
46169
46728
###Markdown
Drop the rows yet to be validated
###Code
train.columns
columns = ['client_id', 'path', 'sentence', 'up_votes', 'down_votes', 'age', 'gender', 'accent']
# https://stackoverflow.com/questions/26921943/pandas-intersection-of-two-data-frames-based-on-column-entries
train_validated = pd.merge(train, validated, how='inner', on=columns)
test_validated = pd.merge(test, validated, how='inner', on=columns)
dev_validated = pd.merge(dev, validated, how='inner', on=columns)
print(f"The overall dataset has {len(train_validated)} training, {len(test_validated)} test and {len(dev_validated)} dev validated audio files.")
###Output
_____no_output_____
###Markdown
Lets look at the overall data
###Code
# visualize categorical data https://www.datacamp.com/community/tutorials/categorical-data
def visualize_categorical_distribution(pd_series, title="Plot", ylabel='Number of Samples', xlabel='Accent', figsize=None):
digit_counts = pd_series.value_counts()
sns.set(style="darkgrid")
if figsize is None:
sns.set(rc={'figure.figsize':(10,6)})
else:
sns.set(rc={'figure.figsize':figsize})
sns.barplot(digit_counts.index, digit_counts.values, alpha=0.9)
plt.title(title)
plt.ylabel(ylabel, fontsize=12)
plt.xlabel(xlabel, fontsize=12)
plt.show()
accent_gender_counts = train_validated.groupby(['accent', 'gender']).size()
print(accent_gender_counts)
print(test_validated.groupby(['accent', 'gender']).size())
print(dev_validated.groupby(['accent', 'gender']).size())
# plot the train accent/gender breakdown (the original cell referenced an undefined variable here)
accent_gender_counts.unstack(level=1).plot(kind='bar', subplots=False)
#print(splits["dev"].groupby(['accent', 'gender']).size())
#print(splits["test"].groupby(['accent', 'gender']).size())
#print(splits['train']['accent'].value_counts())
visualize_categorical_distribution(splits["train"]["accent"], "Total Train Distribution", figsize=(17,7))
visualize_categorical_distribution(splits["test"]["accent"], "Total Test Distribution", figsize=(17,7))
visualize_categorical_distribution(splits["dev"]["accent"], "Total Dev Distribution", figsize=(17,7))
###Output
_____no_output_____
###Markdown
Lets look at some other attributes...
###Code
visualize_categorical_distribution(splits["train"]["age"], "Age Distribution (Train)")
visualize_categorical_distribution(splits["train"]["gender"], "Gender Distribution (Train)", figsize=(4,4))
###Output
_____no_output_____
###Markdown
What about the overall validated data?
###Code
visualize_categorical_distribution(train_validated["accent"], "Gender Distribution (Train)", figsize=(17,7))
visualize_categorical_distribution(test_validated["accent"], "Gender Distribution (Train)", figsize=(17,7))
visualize_categorical_distribution(dev_validated["accent"], "Gender Distribution (Train)", figsize=(17,7))
print(num_downloaded_files)
print(downloaded_files)
print(splits['train']['path'])
###Output
_____no_output_____
###Markdown
Choose the data we've actually downloaded
###Code
for key in splits:
splits[key] = splits[key][splits[key]["path"].isin(downloaded_files)]
print(len(splits[key]))
print(splits)
###Output
_____no_output_____
###Markdown
Make sure speaker independent and validated
###Code
len(splits['train'][splits['train']["client_id"].isin(splits['test']['client_id'])])
len(splits['train'][splits['train']["client_id"].isin(splits['dev']['client_id'])])
num_train = len(splits['train'][splits['train']["client_id"].isin(splits['validated']['client_id'])])
num_test = len(splits['test'][splits['test']["client_id"].isin(splits['validated']['client_id'])])
num_dev = len(splits['dev'][splits['dev']["client_id"].isin(splits['validated']['client_id'])])
print(f"Wow, we only end up with {num_train} training, {num_test} test and {num_dev} dev audio files from the {num_downloaded_files} files we started with!")
###Output
_____no_output_____
###Markdown
For the purposes of this toy example we forgo using only the validated clips and assume all clips are good. Let's take a look at some of our downloaded data
###Code
visualize_categorical_distribution(splits["train"]["accent"], "Train Distribution")
visualize_categorical_distribution(splits["test"]["accent"], "Test Distribution", figsize=(3,4))
###Output
_____no_output_____
###Markdown
Lets play some audio files
###Code
def play_mp3_from_path(relative_path):
"""plays mp3 located at provided relative path, returns the audio segment"""
a = pydub.AudioSegment.from_mp3(relative_path)
# test that it sounds right (requires ffplay, or pyaudio):
print(a)
play(a)
return a
# pick 4 random audio files from first 10
#paths = np.random.choice(splits["train"]["path"][:10], size=4)
#for path in paths:
# print(path)
# play_mp3_from_path(path)
###Output
_____no_output_____
###Markdown
Yikes... That is a bad distribution. Let's start preprocessing our actual audio files into the dataset. Some useful functions:
###Code
from utils import *
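# The helpers used below (detect_leading_silence_filepath, length_of_file, zero_pad_in_end,
# extract_mfcc, extract_mfb) live in utils.py, which is not shown here. A rough sketch of what
# such helpers might look like (an assumption, not the actual utils implementation):
#
# def extract_mfcc(audio_segment):
#     samples = np.array(audio_segment.get_array_of_samples(), dtype=float)
#     return mfcc(samples, samplerate=audio_segment.frame_rate)
#
# def extract_mfb(audio_segment):
#     samples = np.array(audio_segment.get_array_of_samples(), dtype=float)
#     return logfbank(samples, samplerate=audio_segment.frame_rate)
#
# def zero_pad_in_end(audio_segment, target_ms):
#     return audio_segment + AudioSegment.silent(duration=target_ms - len(audio_segment))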
###Output
_____no_output_____
###Markdown
Extract the audio files, remove leading silence, and silent clips...
###Code
train = splits["train"]
train = train[:30]
# Remove silent clips
drop_idxs = train[train['path'].apply(detect_leading_silence_filepath) >= train['path'].apply(length_of_file)-1].index
train = train.drop(drop_idxs)
# Zero pad to normalize length
downloaded_audios = [pydub.AudioSegment.from_mp3(f) for f in train['path']]
max_audio_length = np.max([len(sample) for sample in downloaded_audios])
print(f"max audio length is {max_audio_length / 1000} seconds")
padded_audios = [zero_pad_in_end(audio, max_audio_length) for audio in downloaded_audios]
mfccs_padded = np.array([extract_mfcc(audio) for audio in padded_audios])
audio_embeddings = pd.DataFrame({
'label': train['accent'],
'mfcc': np.array([extract_mfcc(audio) for audio in downloaded_audios]),
'mfb': np.array([extract_mfb(audio) for audio in downloaded_audios]),
'mfcc_padded': np.array([extract_mfcc(audio) for audio in padded_audios]),
'mfb_padded': np.array([extract_mfb(audio) for audio in padded_audios])
})
labels = train['accent'].to_numpy()
labels
paddingamounts = [(len(pad)-len(unpad))/1000 for unpad,pad in zip(downloaded_audios, padded_audios)]
print(paddingamounts)
print(sum(paddingamounts)/len(paddingamounts))
print([len(audio) for audio in audio_embeddings["mfcc_padded"]])
print([extract_mfcc(audio).shape for audio in padded_audios])
###Output
_____no_output_____
###Markdown
TODO: Plot some filterbanks and mfccs (a quick plotting sketch is in the next cell) TODO: Constant size features for CNN TODO: Create a validation set?
###Code
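# A minimal sketch for the first TODO above: visualize the MFCCs of one padded clip.
# This assumes mfccs_padded from the preprocessing cell is still in scope.
plt.figure(figsize=(10, 4))
plt.imshow(mfccs_padded[0].T, aspect='auto', origin='lower')
plt.colorbar()
plt.title('MFCCs of the first padded clip')
plt.xlabel('frame')
plt.ylabel('MFCC coefficient')
plt.show()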
###Output
_____no_output_____
###Markdown
TODO: Creating our Toy Models Toy CNN
###Code
mfccs_padded = np.moveaxis(mfccs_padded, (1,2), (2,1))
mfccs_padded.shape
#want array size to be
#numbe of samples, number of filterbanks, length of each sample
class downwardSlope(nn.Module):
def __init__(self, maxSeqLength, outsize):
super().__init__()
self.maxSeqLength = maxSeqLength
self.conv1 = nn.Conv1d(13, 32, 15, padding=1)
self.conv2 = nn.Conv1d(32, 32, 15, padding=1)
self.conv3 = nn.Conv1d(32, 32, 15, padding=1)
self.fc1 = nn.Linear(2908, outsize)
def forward(self,x):
print("insideforward",type(x), x.dtype)
x= self.conv1(x)
x= self.conv2(x)
x= self.conv3(x)
x = self.fc1(x)
x = torch.sigmoid(x)
return torch.squeeze(x)
def evaluateModel(model, lossCriterion, X, y, batchSize):
with torch.no_grad():
n=batchSize
batched = list(zip([X[i:i + n] for i in range(0, len(X), n)],
[y[i:i + n] for i in range(0, len(y), n)]))
predList = []
yList = []
lossMeans = []
for i,(batchX, batchY) in enumerate(batched):
print('x',type(X), X.dtype)
print('batchx',type(batchX), batchX.dtype)
predProb = model(batchX)
if np.isnan(predProb.cpu().detach().numpy()).any():
print("AAAAAA a NAN")
return
loss = lossCriterion(predProb, batchY)
batchPred = predProb.round().tolist()
predList = predList+batchPred
batchYargmaxed = batchY.tolist()
yList = yList+batchYargmaxed
lossMeans.append(torch.mean(loss).item())
#acc = accuracy_score(yList, predList)
#report = classification_report(yList, predList)
#confMat = confusion_matrix(yList, predList)
lossMean = sum(lossMeans)/float(len(lossMeans))
return lossMean
def trainModel(model, optimizer, criterion, train_X, train_y, val_X, val_y, batchSize, startEpoch, endEpoch):
notifyEvery = 100 if torch.cuda.is_available() else 2
checkmarkTime = time.time()
n=batchSize
batched = list(zip([train_X[i:i + n] for i in range(0, len(train_X), n)],
[train_y[i:i + n] for i in range(0, len(train_y), n)]))
numBatches = len(batched)
print("number of batches", numBatches)
for epoch in range(startEpoch, endEpoch):
print("epoch:", epoch)
for i,(batchX,batchy) in enumerate(batched):
optimizer.zero_grad()
output = model(batchX)
loss = criterion(output, batchy)
loss.backward()
optimizer.step()
if np.isnan(output.cpu().detach().numpy()).any():
print("AAAAAA a NAN")
return
if i%notifyEvery ==notifyEvery-1:
print('[%d, %5d]' %
(epoch + 1, i + 1))
timeTook = time.time() - checkmarkTime
print("took", timeTook, "seconds for", notifyEvery, "batches")
if(torch.cuda.is_available()):
print(torch.cuda.max_memory_allocated()/1e9, "GB of VRAM being used")
checkmarkTime = time.time()
trainLoss = evaluateModel(model,criterion,train_X, train_y, batchSize)
valLoss = evaluateModel(model, criterion,val_X, val_y, batchSize)
print('loss', valLoss)
print("finished training")
!pip3 install skorch
from skorch import NeuralNetClassifier
from sklearn import preprocessing
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
import time
X = torch.as_tensor(mfccs_padded, dtype=torch.float)
le = preprocessing.LabelEncoder()
le.fit(labels)
transformed = le.transform(labels)
print(type(transformed),transformed.shape, print(transformed))
enc = preprocessing.OneHotEncoder(handle_unknown='ignore')
onehotted =enc.fit_transform(transformed.reshape(-1,1)).toarray()
y = torch.as_tensor(onehotted, dtype=torch.long)
outsize = len(le.classes_)
print(outsize, type(onehotted), type(y), onehotted.shape)
device = "cpu"
seqLength = mfccs_padded.shape[2]
mod = downwardSlope(seqLength, outsize)
criterion = nn.CrossEntropyLoss()
endEpoch = 3
learningRate = 0.001
# create the optimizer after the model and learning rate are defined (it originally came first, which raises a NameError)
optimizer = torch.optim.Adam(mod.parameters(), lr = learningRate)
print('x',type(X), X.dtype)
trainModel(mod, optimizer,criterion, X[:20], y[:20],X[20:],y[20:], 10, 0, 10)
###Output
_____no_output_____
###Markdown
Toy LSTM
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
torch.manual_seed(42)
# https://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html
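# A minimal LSTM accent-classifier sketch (an assumption of where this cell was headed,
# not code from the original notebook). It consumes the same (batch, n_mfcc, seq_len)
# tensors prepared for the CNN above and outputs logits over the accent classes.
class ToyLSTM(nn.Module):
    def __init__(self, n_features=13, hidden_size=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, n_classes)

    def forward(self, x):
        # x: (batch, n_features, seq_len) -> (batch, seq_len, n_features) for a batch_first LSTM
        x = x.permute(0, 2, 1)
        _, (h_n, _) = self.lstm(x)
        return self.fc(h_n[-1])  # logits over accent classes

# Example usage (shapes only), assuming X and outsize from the CNN cells above:
# logits = ToyLSTM(n_features=13, n_classes=outsize)(X[:4])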
###Output
_____no_output_____ |
examples/Example Neural Network.ipynb | ###Markdown
Iris
###Code
# The helpers below (load_excel, split, Perceptron, BackProp, plot2D, ...) presumably come from
# an import cell not shown in this excerpt, e.g. something like:
#   %pylab inline
#   from classy import *
data=load_excel('data/iris.xls')
data_train,data_test=split(data,test_size=0.2)
len(data.targets),len(data_train.targets),len(data_test.targets),
C=Perceptron()
timeit(reset=True)
C.fit(data_train.vectors,data_train.targets)
print(("Training time: ",timeit()))
print(("On Training Set:",C.percent_correct(data_train.vectors,data_train.targets)))
print(("On Test Set:",C.percent_correct(data_test.vectors,data_test.targets)))
C.weights # these are the weights
C=BackProp(hidden_layer_sizes = [4],max_iter=10000,tol=1e-4)
timeit(reset=True)
C.fit(data_train.vectors,data_train.targets)
print(("Training time: ",timeit()))
data_train.vectors.shape,data_train.targets.shape
print(("On Training Set:",C.percent_correct(data_train.vectors,data_train.targets)))
print(("On Test Set:",C.percent_correct(data_test.vectors,data_test.targets)))
C.weights
W_inp_hid,W_hid_out=C.weights
print(W_inp_hid)
print("==")
print(W_hid_out)
###Output
[[-1.08065252e+00 9.52796556e-02 -1.58943896e+00 -8.28935340e-42]
[-4.57125312e-01 -2.54793675e-01 -1.60848695e+00 -3.13230127e-39]
[ 5.66766252e-01 2.44509077e-01 8.67926146e-01 6.19659439e-81]
[ 6.80331304e-01 -8.86891613e-01 9.75248954e-01 -2.37650832e-07]]
==
[[ 3.08547893e-01 1.18527489e+00 -1.96163945e+00]
[ 2.35980725e-01 -9.17589102e-02 -3.58105409e-01]
[ 1.01267589e+00 -1.88427180e+00 -9.25026076e-01]
[-4.11257778e-15 -3.25712393e-60 -6.54717443e-07]]
###Markdown
XOR Problem - Perceptron
###Code
data=make_dataset(bob=[[0,0],[1,1]],sally=[[0,1],[1,0]])
data
C=Perceptron()
C.fit(data.vectors,data.targets)
print((C.predict(data.vectors)))
print(("On Training Set:",C.percent_correct(data.vectors,data.targets)))
plot2D(data,classifier=C,axis_range=[-.55,1.5,-.5,1.5])
###Output
_____no_output_____
###Markdown
XOR Problem - Backprop
###Code
data.vectors
data.targets
C=BackProp(hidden_layer_sizes = [5],max_iter=10000,tol=1e-4)
C.fit(data.vectors,data.targets)
print((C.predict(data.vectors)))
print(("On Training Set:",C.percent_correct(data.vectors,data.targets)))
plot2D(data,classifier=C,axis_range=[-.55,1.5,-.5,1.5])
print((data.vectors))
print()
print((data.targets))
C.output(data.vectors)
h,y=C.output(data.vectors)
print(h)
print()
print((np.round(h)))
print()
print(y)
print(around(C.weights[0],2))
around(C.weights[1],2)
data.vectors.shape
###Output
_____no_output_____
###Markdown
Curvy data
###Code
figure(figsize=(8,8))
N=30
x1=randn(N)*.2
y1=randn(N)*.2
plot(x1,y1,'bo')
a=linspace(0,3*pi/2,N)
x2=cos(a)+randn(N)*.2
y2=sin(a)+randn(N)*.2
plot(x2,y2,'rs')
axis('equal')
vectors=vstack([hstack([atleast_2d(x1).T,atleast_2d(y1).T]),
hstack([atleast_2d(x2).T,atleast_2d(y2).T]),
])
targets=concatenate([zeros(N),ones(N)])
target_names=['center','around']
feature_names=['x','y']
data=Struct(vectors=vectors,targets=targets,
target_names=target_names,feature_names=feature_names)
C=Perceptron()
C.fit(data.vectors,data.targets)
print(("On Training Set:",C.percent_correct(data.vectors,data.targets)))
plot2D(data,classifier=C,axis_range=[-2,2,-2,2])
C=BackProp(hidden_layer_sizes = [6],max_iter=10000,tol=1e-4)
C.fit(data.vectors,data.targets)
print(("On Training Set:",C.percent_correct(data.vectors,data.targets)))
plot2D(data,classifier=C,axis_range=[-2,2,-2,2])
C=NaiveBayes()
C.fit(data.vectors,data.targets)
print(("On Training Set:",C.percent_correct(data.vectors,data.targets)))
C.plot_centers()
plot2D(data,classifier=C,axis_range=[-2,2,-2,2])
C=kNearestNeighbor()
C.fit(data.vectors,data.targets)
print(("On Training Set:",C.percent_correct(data.vectors,data.targets)))
plot2D(data,classifier=C,axis_range=[-2,2,-2,2])
C=CSC()
C.fit(data.vectors,data.targets)
print(("On Training Set:",C.percent_correct(data.vectors,data.targets)))
C.plot_centers()
plot2D(data,classifier=C,axis_range=[-2,2,-2,2])
###Output
('On Training Set:', 100.0)
###Markdown
8x8 - Autoencoder
###Code
vectors=eye(8)
targets=arange(1,9)
print((vectors,targets))
C=BackProp(activation='logistic',hidden_layer_sizes = [3],max_iter=10000,tol=1e-4)
C.fit(vectors,targets)
print((C.predict(vectors)))
h,y=C.output(vectors)
around(h,2)
h.round()
y.round()
C.predict(vectors)
y.shape
imshow(h,interpolation='nearest',cmap=cm.gray)
colorbar()
weights_xh,weights_hy=C.weights
plot(weights_xh,'-o')
plot(weights_hy,'-o')
###Output
_____no_output_____
###Markdown
Tuning the number of hidden units
###Code
data=load_excel('data/iris.xls')
data_train,data_test=split(data,test_size=0.75)
###Output
iris.data 151 5
150 vectors of length 4
Feature names: 'petal length in cm', 'petal width in cm', 'sepal length in cm', 'sepal width in cm'
Target values given.
Target names: 'Iris-setosa', 'Iris-versicolor', 'Iris-virginica'
Mean: [3.75866667 1.19866667 5.84333333 3.054 ]
Median: [4.35 1.3 5.8 3. ]
Stddev: [1.75852918 0.76061262 0.82530129 0.43214658]
Original vector shape: (150, 4)
Train vector shape: (37, 4)
Test vector shape: (113, 4)
###Markdown
select which number of hidden units to use
###Code
hidden=list(range(1,10))
percent_correct=[]
for n in hidden:
C=BackProp(hidden_layer_sizes = [n],tol=1e-4,max_iter=10000)
C.fit(data_train.vectors,data_train.targets)
percent_correct.append(C.percent_correct(data_test.vectors,data_test.targets))
plot(hidden,percent_correct,'-o')
xlabel('Number of Hidden Units')
ylabel('Percent Correct on Test Data')
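# A possible follow-up (not in the original notebook): pick the hidden-layer size with the
# best accuracy on the test data.
best_n = hidden[percent_correct.index(max(percent_correct))]
print("best number of hidden units:", best_n)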
###Output
_____no_output_____ |
Steps.ipynb | ###Markdown
step 1. Create requirements.txt !pip install watermark
###Code
%load_ext watermark
%watermark -v -m -p pandas,numpy,tensorflow,watermark
%watermark -u -n -t -z
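# One simple way to actually generate requirements.txt from the current environment
# (a sketch; in practice pin only the packages the project needs):
# !pip freeze > requirements.txt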
###Output
CPython 3.7.2
IPython 7.8.0
pandas 0.25.2
numpy 1.17.3
tensorflow 2.1.0
watermark 2.0.2
compiler : MSC v.1916 64 bit (AMD64)
system : Windows
release : 10
machine : AMD64
processor : Intel64 Family 6 Model 58 Stepping 9, GenuineIntel
CPU cores : 4
interpreter: 64bit
last updated: Mon Jun 22 2020 10:27:31 Central Europe Daylight Time
|
stats-newtextbook-python/samples/3-7-推定.ipynb | ###Markdown
Part 3: Data Analysis with Python | Introduction to Statistics with Python, Chapter 7: Estimation. Preparing for the analysis
###Code
# Libraries for numerical computation
import numpy as np
import pandas as pd
import scipy as sp
from scipy import stats
# Libraries for plotting
from matplotlib import pyplot as plt
import seaborn as sns
sns.set()
# Number of digits to display
%precision 3
# Show plots inline in the Jupyter Notebook
%matplotlib inline
# Load the data
fish = pd.read_csv("3-7-1-fish_length.csv")["length"]
fish
###Output
_____no_output_____
###Markdown
Implementation: point estimation
###Code
# Point estimate of the population mean
mu = sp.mean(fish)
mu
# Point estimate of the population variance
sigma_2 = sp.var(fish, ddof = 1)
sigma_2
###Output
_____no_output_____
###Markdown
Implementation: interval estimation
###Code
# Degrees of freedom
df = len(fish) - 1
df
# Standard error (the sample standard deviation sigma is not defined above, so compute it from sigma_2)
sigma = sp.sqrt(sigma_2)
se = sigma / sp.sqrt(len(fish))
se
# Interval estimation
interval = stats.t.interval(
alpha = 0.95, df = df, loc = mu, scale = se)
interval
###Output
_____no_output_____
###Markdown
Supplement: details of how the confidence interval is computed
###Code
# 97.5th percentile of the t distribution
t_975 = stats.t.ppf(q = 0.975, df = df)
t_975
# Lower confidence limit
lower = mu - t_975 * se
lower
# Upper confidence limit
upper = mu + t_975 * se
upper
###Output
_____no_output_____
###Markdown
Factors that determine the width of the confidence interval
###Code
# A larger sample standard deviation makes the confidence interval wider
se2 = (sigma*10) / sp.sqrt(len(fish))
stats.t.interval(
    alpha = 0.95, df = df, loc = mu, scale = se2)
# A larger sample size makes the confidence interval narrower
df2 = (len(fish)*10) - 1
se3 = sigma / sp.sqrt(len(fish)*10)
stats.t.interval(
    alpha = 0.95, df = df2, loc = mu, scale = se3)
# 99% confidence interval
stats.t.interval(
alpha = 0.99, df = df, loc = mu, scale = se)
###Output
_____no_output_____
###Markdown
Interpretation of the confidence interval
###Code
# True if the confidence interval contains the population mean (4)
be_included_array = np.zeros(20000, dtype = "bool")
be_included_array
# Repeat the trial "draw 10 data points and compute the 95% confidence interval" 20000 times
# True if the confidence interval contains the population mean (4)
np.random.seed(1)
norm_dist = stats.norm(loc = 4, scale = 0.8)
for i in range(0, 20000):
sample = norm_dist.rvs(size = 10)
df = len(sample) - 1
mu = sp.mean(sample)
std = sp.std(sample, ddof = 1)
se = std / sp.sqrt(len(sample))
interval = stats.t.interval(0.95, df, mu, se)
if(interval[0] <= 4 and interval[1] >= 4):
be_included_array[i] = True
sum(be_included_array) / len(be_included_array)
###Output
_____no_output_____ |
2. Regression/Assingments/PhillyCrime.ipynb | ###Markdown
Fire up graphlab create
###Code
import graphlab
###Output
_____no_output_____
###Markdown
Load some house value vs. crime rate dataDataset is from Philadelphia, PA and includes average house sales price in a number of neighborhoods. The attributes of each neighborhood we have include the crime rate ('CrimeRate'), miles from Center City ('MilesPhila'), town name ('Name'), and county name ('County').
###Code
sales = graphlab.SFrame.read_csv('Philadelphia_Crime_Rate_noNA.csv/')
sales
###Output
_____no_output_____
###Markdown
Exploring the data The house price in a town is correlated with the crime rate of that town. Low crime towns tend to be associated with higher house prices and vice versa.
###Code
graphlab.canvas.set_target('ipynb')
sales.show(view="Scatter Plot", x="CrimeRate", y="HousePrice")
###Output
_____no_output_____
###Markdown
Fit the regression model using crime as the feature
###Code
crime_model = graphlab.linear_regression.create(sales, target='HousePrice', features=['CrimeRate'], validation_set=None, verbose=False)
###Output
_____no_output_____
###Markdown
Let's see what our fit looks like. Matplotlib is a Python plotting library that we will use to plot the fit. You can install it with: 'pip install matplotlib'
###Code
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(sales['CrimeRate'], sales['HousePrice'], '.', sales['CrimeRate'], crime_model.predict(sales), '-')
###Output
_____no_output_____
###Markdown
Above: blue dots are original data, green line is the fit from the simple regression. Remove Center City and redo the analysis Center City is the one observation with an extremely high crime rate, yet house prices are not very low. This point does not follow the trend of the rest of the data very well. A question is how much including Center City is influencing our fit on the other datapoints. Let's remove this datapoint and see what happens.
###Code
sales_noCC = sales[sales['MilesPhila'] != 0.0]
sales_noCC.show(view="Scatter Plot", x="CrimeRate", y="HousePrice")
###Output
_____no_output_____
###Markdown
Refit our simple regression model on this modified dataset:
###Code
crime_model_noCC = graphlab.linear_regression.create(sales_noCC, target='HousePrice', features=['CrimeRate'], validation_set=None, verbose=False)
###Output
_____no_output_____
###Markdown
Look at the fit:
###Code
plt.plot(sales_noCC['CrimeRate'], sales_noCC['HousePrice'], '.', sales_noCC['CrimeRate'],
         crime_model_noCC.predict(sales_noCC), '-')
###Output
_____no_output_____
###Markdown
Compare coefficients for full-data fit versus no-Center-City fit Visually, the fit seems different, but let's quantify this by examining the estimated coefficients of our original fit and that of the modified dataset with Center City removed.
###Code
crime_model.get('coefficients')
crime_model_noCC.get('coefficients')
###Output
_____no_output_____
###Markdown
Above: We see that for the "no Center City" version, per unit increase in crime, the predicted decrease in house prices is 2,287. In contrast, for the original dataset, the drop is only 576 per unit increase in crime. This is significantly different! High leverage points: Center City is said to be a "high leverage" point because it is at an extreme x value where there are not other observations. As a result, recalling the closed-form solution for simple regression, this point has the *potential* to dramatically change the least squares line since the center of x mass is heavily influenced by this one point and the least squares line will try to fit close to that outlying (in x) point. If a high leverage point follows the trend of the other data, this might not have much effect. On the other hand, if this point somehow differs, it can be strongly influential in the resulting fit. Influential observations: An influential observation is one where the removal of the point significantly changes the fit. As discussed above, high leverage points are good candidates for being influential observations, but need not be. Other observations that are *not* leverage points can also be influential observations (e.g., strongly outlying in y even if x is a typical value). Remove high-value outlier neighborhoods and redo analysis Based on the discussion above, a question is whether the outlying high-value towns are strongly influencing the fit. Let's remove them and see what happens.
###Code
sales_nohighend = sales_noCC[sales_noCC['HousePrice'] < 350000]
crime_model_nohighend = graphlab.linear_regression.create(sales_nohighend, target='HousePrice', features=['CrimeRate'],validation_set=None, verbose=False)
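# Optional check of the "high leverage" discussion above (a sketch, not part of the original
# assignment): in simple regression the hat value of a point is
# h_i = 1/n + (x_i - x_bar)^2 / sum_j (x_j - x_bar)^2, and Center City's extreme crime rate
# gives it by far the largest leverage in the full dataset.
import numpy as np
x = np.array(list(sales['CrimeRate']), dtype=float)
hat = 1.0/len(x) + (x - x.mean())**2 / ((x - x.mean())**2).sum()
print('max leverage:', hat.max(), 'at crime rate', x[hat.argmax()])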
###Output
_____no_output_____
###Markdown
Do the coefficients change much?
###Code
crime_model_noCC.get('coefficients')
crime_model_nohighend.get('coefficients')
###Output
_____no_output_____ |
results_replication/ACDC_NN-reproduce_results_ssym.ipynb | ###Markdown
IMPORTING LIBRARIES AND MOUNTING DRIVE
###Code
!pip install biopython
!pip install silence_tensorflow
import pandas as pd
import numpy as np
import math
from sklearn.metrics import mean_squared_error
from Bio.PDB import *
path='./'
import sys
sys.path.append("../acdc_nn/")
import util
import nn
###Output
_____no_output_____
###Markdown
REPRODUCING PAPER RESULTS ACDC-NN In the following cell we run a loop on the 8 cross-validation folds generated with blustclust. The network has been trained in transfer learning on s2648 and Ivankov, therefore the cv sets have been generated taking into account the similarity between proteins (Similarity < 25%).Then we load the weights of the network for each fold, generate protein structures, create the input for the network in the appropriate form and also generate the reverse mutation. Finally we predict the $\Delta \Delta G$ with ACDC-NN. We have done the same thing for both direct and inverse proteins.We underlie that in the following cell we are using ACDC-NN with one structure.
###Code
#path: ./replicate_results/
cv_folds=[0,1,2,4,5,6,7,8] # cross-validation folds
cv_pred_dir=list()
cv_pred_inv=list()
for fold in cv_folds:
pred_dir=list()
pred_inv=list()
#loading the proper fold
ssym_dir=pd.read_csv(path+'Ssym_cv.mut/ssym_TS_dir_{}.mut'.format(fold), sep=' ',header=None)
ssym_inv=pd.read_csv(path+'Ssym_cv.mut/ssym_TS_inv_{}.mut'.format(fold), sep=' ',header=None)
# building the specific model with cv weights
path_weights=path+"weights_cv/Weights_PostTL_CV_{}".format(fold)
num_H=[32,16]
d=0.2
ACDC_NN=nn.ACDC(num_H,d,25)[0]
ACDC_NN.load_weights(path_weights)
#Ssym dir prediction
for protein,mut in zip(list(ssym_dir[0]),list(ssym_dir[1])):
prof_path =path + 'profiles/' + protein +'.prof.gz'
pdb_path= path + 'pdbs/' + protein[:-1] +'.pdb.gz'
chain = protein[-1]
# information processing
# get structure and other information
structure, pchain, seq, d_seq2pdb, d_pdb2seq = util.pdb2info(pdb_path, chain)
prof = util.getProfile(prof_path)
kvar=(mut[0],d_pdb2seq[mut[1:-1]],mut[-1])
kvar_pdb=(mut[0],mut[1:-1],mut[-1])
dist_neigh_3d= util.get_neigh_ps(kvar_pdb,5,d_seq2pdb,pchain)
list_dist_neigh_3d = dist_neigh_3d[kvar]
# extracting features
codif=util.getMutCod(mut)
all_profile = util.Unified_prof(kvar[1],prof,seq, list_dist_neigh_3d)
#dir
To_predict_dir=pd.DataFrame([*codif,*all_profile,*np.zeros(600-len(all_profile))]).T
#inv (dir)
To_predict_inv=To_predict_dir.copy()
To_predict_inv.iloc[:,:20]=To_predict_inv.iloc[:,:20].replace([1.0,-1.0],[-1.0,1.0])
# Making input in the proper shape
Xm_d, X1D_d, X3D_d = nn.mkInp(np.asarray(To_predict_dir).astype(np.float32),500)
Xm_i, X1D_i, X3D_i = nn.mkInp(np.asarray(To_predict_inv).astype(np.float32),500)
#predict
prediction=ACDC_NN.predict([X3D_d, X1D_d, Xm_d , X3D_i, X1D_i, Xm_i])
pred_dir.append(prediction[0][0][0])
#Ssym inv prediction
for protein,mut in zip(list(ssym_inv[0]),list(ssym_inv[1])):
prof_path =path + 'profiles/' + protein +'.prof.gz'
pdb_path= path + 'pdbs/' + protein[:-1] +'.pdb.gz'
chain = protein[-1]
# information processing
# get structure and other information
structure, pchain, seq, d_seq2pdb, d_pdb2seq = util.pdb2info(pdb_path, chain)
prof = util.getProfile(prof_path)
kvar=(mut[0],d_pdb2seq[mut[1:-1]],mut[-1])
kvar_pdb=(mut[0],mut[1:-1],mut[-1])
dist_neigh_3d= util.get_neigh_ps(kvar_pdb,5,d_seq2pdb,pchain)
list_dist_neigh_3d = dist_neigh_3d[kvar]
# extracting features
codif=util.getMutCod(mut)
all_profile = util.Unified_prof(kvar[1],prof,seq, list_dist_neigh_3d)
#dir
To_predict_dir=pd.DataFrame([*codif,*all_profile,*np.zeros(600-len(all_profile))]).T
#dir (inv)
To_predict_inv=To_predict_dir.copy()
To_predict_inv.iloc[:,:20]=To_predict_inv.iloc[:,:20].replace([1.0,-1.0],[-1.0,1.0])
# Making input in the proper shape
Xm_d, X1D_d, X3D_d = nn.mkInp(np.asarray(To_predict_dir).astype(np.float32),500)
Xm_i, X1D_i, X3D_i = nn.mkInp(np.asarray(To_predict_inv).astype(np.float32),500)
#predict
prediction=ACDC_NN.predict([X3D_d, X1D_d, Xm_d , X3D_i, X1D_i, Xm_i])
pred_inv.append(prediction[0][0][0])
#appending results
cv_pred_dir.append(pred_dir)
cv_pred_inv.append(pred_inv)
#merging the results
cv_pred_dir=[protein for cv in cv_pred_dir for protein in cv]
cv_pred_inv=[protein for cv in cv_pred_inv for protein in cv]
###Output
/usr/local/lib/python3.6/dist-packages/Bio/PDB/Polypeptide.py:344: UserWarning: Assuming residue CA is an unknown modified amino acid
% residue.get_resname()
/usr/local/lib/python3.6/dist-packages/Bio/PDB/Polypeptide.py:344: UserWarning: Assuming residue CA is an unknown modified amino acid
% residue.get_resname()
###Markdown
RESHAPING RESULTS In the following cell we merge together all the cv folds and the results obtained, building a dataframe so that we can easily measure performances
###Code
# fold 0
S_dir=pd.read_csv(path+'Ssym_cv.mut/ssym_TS_dir_0.mut', sep=' ',header=None)
S_inv=pd.read_csv(path+'Ssym_cv.mut/ssym_TS_inv_0.mut', sep=' ',header=None)
# appending the others
cv_folds=[1,2,4,5,6,7,8] # cross-validation folds
for fold in cv_folds:
S_dir=S_dir.append(pd.read_csv(path+'Ssym_cv.mut/'+'ssym_TS_dir_{}.mut'.format(fold), sep=' ',header=None),)
S_inv=S_inv.append(pd.read_csv(path+'Ssym_cv.mut/'+'ssym_TS_inv_{}.mut'.format(fold), sep=' ',header=None),)
S_dir.columns=['Protein','Mut','DDG']
S_inv.columns=['Protein','Mut','DDG']
S_dir['DDG_pred']=cv_pred_dir
S_inv['DDG_pred']=cv_pred_inv
###Output
_____no_output_____
###Markdown
MEASURE OF ACDC-NN PERFORMANCE ON SSYM Ssym direct
###Code
print('pearson_direct : ', np.corrcoef(S_dir['DDG_pred'],S_dir['DDG'])[0][1].round(2))
print('rmse : ',round(math.sqrt(mean_squared_error(S_dir['DDG_pred'],S_dir['DDG'])),2))
###Output
pearson_direct : 0.58
rmse : 1.42
###Markdown
Ssym inverse
###Code
print('pearson_inverse : ', np.corrcoef(S_inv['DDG_pred'],S_inv['DDG'])[0][1].round(2))
print('rmse : ',round(math.sqrt(mean_squared_error(S_inv['DDG_pred'],S_inv['DDG'])),2))
###Output
pearson_inverse : 0.55
rmse : 1.47
###Markdown
Antisymmetry
###Code
print('r_dir-inv : ' ,np.corrcoef(cv_pred_dir,cv_pred_inv)[0][1].round(2))
print('bias : ', util.bias(cv_pred_dir,cv_pred_inv).round(2))
###Output
r_dir-inv : -0.99
bias : -0.01
###Markdown
ACDC-NN* (two structures available)
###Code
#path
cv_folds=[0,1,2,4,5,6,7,8] # cross-validation folds
cv_pred_dir=list()
cv_pred_inv=list()
for fold in cv_folds:
pred_dir=list()
pred_inv=list()
#loading the proper fold
ssym_dir=pd.read_csv(path+'Ssym_cv.mut/ssym_TS_dir_{}.mut'.format(fold), sep=' ',header=None)
ssym_inv=pd.read_csv(path+'Ssym_cv.mut/ssym_TS_inv_{}.mut'.format(fold), sep=' ',header=None)
# building the specific model with cv weights
path_weights=path+"weights_cv/Weights_PostTL_CV_{}".format(fold)
num_H=[32,16]
d=0.2
ACDC_NN=nn.ACDC(num_H,d,25)[0]
ACDC_NN.load_weights(path_weights)
#Ssym dir-inv prediction using both structures
for (protein_dir,protein_inv,mut_dir,mut_inv) in zip(list(ssym_dir[0]),list(ssym_inv[0]),list(ssym_dir[1]),list(ssym_inv[1])):
# information processing
# get structure and other information for the direct protein
prof_path_dir =path + 'profiles/' + protein_dir +'.prof.gz'
pdb_path_dir= path + 'pdbs/' + protein_dir[:-1] +'.pdb.gz'
chain_dir = protein_dir[-1]
structure_dir, pchain_dir, seq_dir, d_seq2pdb_dir, d_pdb2seq_dir = util.pdb2info(pdb_path_dir, chain_dir)
prof_dir = util.getProfile(prof_path_dir)
kvar_dir=(mut_dir[0],d_pdb2seq_dir[mut_dir[1:-1]],mut_dir[-1])
kvar_pdb_dir=(mut_dir[0],mut_dir[1:-1],mut_dir[-1])
dist_neigh_3d_dir= util.get_neigh_ps(kvar_pdb_dir,5,d_seq2pdb_dir,pchain_dir)
list_dist_neigh_3d_dir = dist_neigh_3d_dir[kvar_dir]
# extracting features
codif_dir=util.getMutCod(mut_dir)
all_profile_dir = util.Unified_prof(kvar_dir[1],prof_dir,seq_dir, list_dist_neigh_3d_dir)
#dir
To_predict_dir=pd.DataFrame([*codif_dir,*all_profile_dir,*np.zeros(600-len(all_profile_dir))]).T
# information processing
# get structure and other information for the inverse protein
prof_path_inv =path + 'profiles/' + protein_inv +'.prof.gz'
pdb_path_inv= path + 'pdbs/' + protein_inv[:-1] +'.pdb.gz'
chain_inv = protein_inv[-1]
# information processing
# get structure and other information for the inverse protein
structure_inv, pchain_inv, seq_inv, d_seq2pdb_inv, d_pdb2seq_inv = util.pdb2info(pdb_path_inv, chain_inv)
prof_inv = util.getProfile(prof_path_inv)
kvar_inv=(mut_inv[0],d_pdb2seq_inv[mut_inv[1:-1]],mut_inv[-1])
kvar_pdb_inv=(mut_inv[0],mut_inv[1:-1],mut_inv[-1])
dist_neigh_3d_inv= util.get_neigh_ps(kvar_pdb_inv,5,d_seq2pdb_inv,pchain_inv)
list_dist_neigh_3d_inv = dist_neigh_3d_inv[kvar_inv]
# extracting features
codif_inv=util.getMutCod(mut_inv)
all_profile_inv = util.Unified_prof(kvar_inv[1],prof_inv,seq_inv, list_dist_neigh_3d_inv)
#inv
To_predict_inv=pd.DataFrame([*codif_inv,*all_profile_inv,*np.zeros(600-len(all_profile_inv))]).T
# Making input in the proper shape
Xm_d, X1D_d, X3D_d = nn.mkInp(np.asarray(To_predict_dir).astype(np.float32),500)
Xm_i, X1D_i, X3D_i = nn.mkInp(np.asarray(To_predict_inv).astype(np.float32),500)
#predict dir
prediction_dir=ACDC_NN.predict([X3D_d, X1D_d, Xm_d , X3D_i, X1D_i, Xm_i])
pred_dir.append(prediction_dir[0][0][0])
#predict inv
prediction_inv=ACDC_NN.predict([X3D_i, X1D_i, Xm_i , X3D_d, X1D_d, Xm_d])
pred_inv.append(prediction_inv[0][0][0])
#appending results
cv_pred_dir.append(pred_dir)
cv_pred_inv.append(pred_inv)
#merging the results
cv_pred_dir=[protein for cv in cv_pred_dir for protein in cv]
cv_pred_inv=[protein for cv in cv_pred_inv for protein in cv]
###Output
/usr/local/lib/python3.6/dist-packages/Bio/PDB/Polypeptide.py:344: UserWarning: Assuming residue CA is an unknown modified amino acid
% residue.get_resname()
/usr/local/lib/python3.6/dist-packages/Bio/PDB/Polypeptide.py:344: UserWarning: Assuming residue CA is an unknown modified amino acid
% residue.get_resname()
###Markdown
ADDING ACDC-NN* PREDICTIONS TO THE PREVIOUS DATAFRAMES
###Code
S_dir['DDG_pred_two_pdbs']=cv_pred_dir
S_inv['DDG_pred_two_pdbs']=cv_pred_inv
###Output
_____no_output_____
###Markdown
MEASURE OF ACDC-NN* PERFORMANCE ON SSYM Ssym direct
###Code
print('pearson dir: ',np.corrcoef(S_dir['DDG'],S_dir['DDG_pred_two_pdbs'])[0][1].round(2))
print('rmse dir: ',round(math.sqrt(mean_squared_error(S_dir['DDG'],S_dir['DDG_pred_two_pdbs'])),2))
###Output
pearson dir: 0.57
rmse dir: 1.45
###Markdown
Ssym inverse
###Code
print('pearson inv: ',np.corrcoef(S_inv['DDG'],S_inv['DDG_pred_two_pdbs'])[0][1].round(2))
print('rmse inv: ',round(math.sqrt(mean_squared_error(S_inv['DDG'],S_inv['DDG_pred_two_pdbs'])),2))
###Output
pearson inv: 0.57
rmse inv: 1.45
###Markdown
Antisymmetry
###Code
print('r_dir-inv: ' ,np.corrcoef(cv_pred_dir,cv_pred_inv)[0][1].round(2))
print('bias: ', util.bias(cv_pred_dir,cv_pred_inv).round(2))
###Output
r_dir-inv: -1.0
bias: 0.0
|
AstroFix Sample Jupyter Notebook.ipynb | ###Markdown
The example uses two V-band images: one showing the globular cluster 47 Tucanae, and the other showing the globular cluster Messier 15. Both images were taken by the LCO 0.4-meter telescope. They are available at the links below: [47 Tuc](https://archive.lco.global/?q=a&RLEVEL=&PROPID=&INSTRUME=&OBJECT=&SITEID=&TELID=&FILTER=&OBSTYPE=&EXPTIME=&BLKUID=&REQNUM=&basename=cpt0m407-kb84-20200917-0147-e91&start=2020-09-17%2000%3A00&end=2020-09-18%2000%3A00&id=&public=true) [M15](https://archive.lco.global/?q=a&RLEVEL=&PROPID=&INSTRUME=&OBJECT=&SITEID=&TELID=&FILTER=&OBSTYPE=&EXPTIME=&BLKUID=&REQNUM=&basename=cpt0m407-kb84-20201021-0084-e91&start=2020-10-21%2000%3A00&end=2021-10-22%2000%3A00&id=&public=true) Let's begin with the image of 47 Tucanae:
###Code
# Imports assumed for this notebook (they are not shown in this excerpt):
import numpy as np
import matplotlib.pyplot as plt
from astropy.io import fits
from scipy.signal import convolve
from mpl_toolkits.axes_grid1 import make_axes_locatable
import astrofix

Tuc47 = fits.open('cpt0m407-kb84-20200917-0147-e91.fits.fz')[1].data
print(Tuc47.shape)
plt.figure(figsize=(10,10))
plt.imshow(Tuc47[950:1150,1400:1600])
plt.show()
###Output
_____no_output_____
###Markdown
To show **astrofix**'s performance on repairing the image, we randomly generate some artificial bad pixels. Let's say we turn 1% of all pixels into NaN. Half of the them stand alone, and the other half of them form crossed shaped regions of bad pixels. You can experiment with other bad pixel fractions and shapes.
###Code
img=Tuc47.copy().astype(float)
# 0.5% bad pixel
BP_mask=np.random.rand(img.shape[0],img.shape[1])>0.995
# 0.1% cross shaped regions of bad pixels
cross_mask=np.random.rand(img.shape[0],img.shape[1])>0.999
cross_generator=np.array([[0,1,0],[1,1,1],[0,1,0]])
BP_mask=(BP_mask+convolve(cross_mask,cross_generator,mode="same",method="direct"))!=0
img[BP_mask]=np.nan
print("Number of Bad Pixels: {}".format(np.count_nonzero(BP_mask)))
###Output
Number of Bad Pixels: 62427
###Markdown
The image is now populated by NaN pixels:
###Code
plt.figure(figsize=(10,10))
plt.imshow(img[950:1150,1400:1600])
plt.show()
###Output
_____no_output_____
###Markdown
Now let's repair the image with **astrofix.Fix_Image**:
###Code
# Because there is almost no saturation in this image, we set max_clip=1 so that we keep the brightest pixels in the training set.
fixed_img,para,TS=astrofix.Fix_Image(img,"asnan",max_clip=1)
print("a={},h={}".format(para[0],para[1]))
print("Number of training set pixels: {}".format(np.count_nonzero(TS)))
###Output
a=3.3138644003486495,h=0.9528206390741414
Number of training set pixels: 278813
###Markdown
Compare with the original image of 47 Tucanae:
###Code
fig,ax=plt.subplots(1,3,figsize=(18,7))
im=ax[0].imshow(Tuc47[990:1015,1525:1550]) # Choose your region to zoom in
divider = make_axes_locatable(ax[0])
cax = divider.append_axes("bottom", size="5%", pad=0.2)
fig.colorbar(im,ax=ax[0],cax=cax,orientation="horizontal")
ax[0].set_title("Original",fontsize=30,pad=15)
ax[0].axis("off")
im=ax[1].imshow(img[990:1015,1525:1550])
divider = make_axes_locatable(ax[1])
cax = divider.append_axes("bottom", size="5%", pad=0.2)
fig.colorbar(im,ax=ax[1],cax=cax,orientation="horizontal")
ax[1].set_title("Bad",fontsize=30,pad=15)
ax[1].axis("off")
im=ax[2].imshow(fixed_img[990:1015,1525:1550])
divider = make_axes_locatable(ax[2])
cax = divider.append_axes("bottom", size="5%", pad=0.2)
fig.colorbar(im,ax=ax[2],cax=cax,orientation="horizontal")
ax[2].set_title("Fixed",fontsize=30,pad=15)
ax[2].axis("off")
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
One feature of **astrofix** is that the training result for one image is usually applicable to similar images taken by the same telescope, meaning that we can repair the image of M15 without having to train on M15, simply using the optimal parameters that we got previously from 47 Tucanae:
###Code
M15 = fits.open('cpt0m407-kb84-20201021-0084-e91.fits.fz')[1].data
print(M15.shape)
plt.figure(figsize=(10,10))
plt.imshow(M15[950:1150,1400:1600])
plt.show()
###Output
_____no_output_____
###Markdown
Generate artificial bad pixels like before:
###Code
new_img=M15.copy().astype(float)
# 0.5% bad pixel
new_BP_mask=np.random.rand(new_img.shape[0],new_img.shape[1])>0.995
# 0.1% cross shaped regions of bad pixels
new_cross_mask=np.random.rand(new_img.shape[0],new_img.shape[1])>0.999
cross_generator=np.array([[0,1,0],[1,1,1],[0,1,0]])
new_BP_mask=(new_BP_mask+convolve(new_cross_mask,cross_generator,mode="same",method="direct"))!=0
new_img[new_BP_mask]=np.nan
print("Number of Bad Pixels: {}".format(np.count_nonzero(new_BP_mask)))
plt.figure(figsize=(10,10))
plt.imshow(new_img[950:1150,1400:1600])
plt.show()
###Output
Number of Bad Pixels: 62320
###Markdown
This time, instead of calling **astrofix.Fix_image**, we use **astrofix.Interpolate** with $a$ and $h$ equal to their optimal values for 47 Tucanae.
###Code
new_fixed_img=astrofix.Interpolate(para[0],para[1],new_img,BP="asnan")
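# A quick sanity check (not in the original notebook): RMS difference between the repaired
# pixels and the original M15 values at the artificial bad-pixel positions.
rms = np.sqrt(np.mean((new_fixed_img[new_BP_mask] - M15[new_BP_mask].astype(float))**2))
print("RMS error on repaired pixels:", rms)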
###Output
_____no_output_____
###Markdown
Compare with the original image of M15:
###Code
fig,ax=plt.subplots(1,3,figsize=(18,7))
im=ax[0].imshow(M15[1010:1035,1500:1525]) # Choose your region to zoom in
divider = make_axes_locatable(ax[0])
cax = divider.append_axes("bottom", size="5%", pad=0.2)
fig.colorbar(im,ax=ax[0],cax=cax,orientation="horizontal")
ax[0].set_title("Original",fontsize=30,pad=15)
ax[0].axis("off")
im=ax[1].imshow(new_img[1010:1035,1500:1525])
divider = make_axes_locatable(ax[1])
cax = divider.append_axes("bottom", size="5%", pad=0.2)
fig.colorbar(im,ax=ax[1],cax=cax,orientation="horizontal")
ax[1].set_title("Bad",fontsize=30,pad=15)
ax[1].axis("off")
im=ax[2].imshow(new_fixed_img[1010:1035,1500:1525])
divider = make_axes_locatable(ax[2])
cax = divider.append_axes("bottom", size="5%", pad=0.2)
fig.colorbar(im,ax=ax[2],cax=cax,orientation="horizontal")
ax[2].set_title("Fixed",fontsize=30,pad=15)
ax[2].axis("off")
plt.tight_layout()
plt.show()
###Output
_____no_output_____ |
Shapiro/two-channel-CPR/Figures for supp_new_C/Shapiro_Diagram_RSJ_normalized_two_channels-Plot-paper_20201010.ipynb | ###Markdown
Shapiro Diagram with normalized parameters Cythonized, RF current in linear & log scale CPR of $I(\phi)=[\sin(\phi)+\eta\sin(2\phi)]+A(\sin(\phi+C)+\eta\sin[2(\phi+C)])$
###Code
import numpy as np
import matplotlib.pyplot as plt
from datetime import *
from scipy.io import savemat
from scipy.integrate import odeint
%matplotlib inline
%load_ext Cython
###Output
_____no_output_____
###Markdown
Resistively Shunted Model:$\frac{d\phi}{dt}=\frac{2eR_N}{\hbar}[I_{DC}+I_{RF}\sin(2\pi f_{RF}t)-I_C\sin\phi]$Solving $\phi(t)$, then you can get the voltage difference between the superconducting leads:$V=\frac{\hbar}{2e}\langle\frac{d\phi}{dt}\rangle$After Normalizing:$I_{DC}\leftrightarrow \tilde{I_{DC}}=I_{DC}/I_C$,$I_{RF} \leftrightarrow \tilde{I_{RF}}=I_{RF}/I_C$,$ V \leftrightarrow \tilde{V}=\frac{V}{I_CR_N}$,$ R=\frac{dV}{dI} \leftrightarrow \tilde{R}=\frac{R}{R_N}$,$\because f_0=2eI_CR_N/h$,$f_{RF} \leftrightarrow \tilde{f_{RF}}=f_{RF}/f_0$,$t \leftrightarrow \tilde{t}=f_0t$,The Josephson voltage quantized at $\frac{V}{hf_{RF}f_0/2e}=n \leftrightarrow \frac{V}{f_{RF}f_0}=n$ Here, we can set $f_0=1$ or $\frac{I_CR_N}{hf_0/2e}=1$, without loss of generalityThe RSJ model simply becomes (omitting $\tilde{}$):$\frac{d\phi}{dt}=[I_{DC}+I_{RF}\sin(2\pi f_{RF}t)-\sin\phi]$At equilibrium, $V=\frac{\hbar}{2e}\langle\frac{d\phi}{dt}\rangle \leftrightarrow \tilde{V}=\frac{1}{2\pi}\langle\frac{d\phi}{d\tilde{t}}\rangle$ would also quantized at integers in the Shapiro step regime. Cython codes here is to speed up the simulation because python is slower than C:
###Code
%%cython
#--pgo # only for Mac OS X
#To use GNU compiler gcc-10 specified in .bash_profile
cimport numpy as np
from libc.math cimport sin, pi
### cdef is faster but can only be used for cython in this cell
#cpdef can be used for python outside this cell
cdef double CPR(double G, double A, double eta, double C):
'''
Current-phase relationship for the junction
'''
return sin(G)+eta*sin(2*G)+A*sin(G+C*pi)+A*eta*sin(2*G+2*C*pi)
cpdef double dGdt(G,double t,double i_dc,double i_ac,double f_rf,double A, double eta, double C):
'''
RSJ model
'''
der = i_dc + i_ac * sin(2*pi*f_rf*t) - CPR(G,A,eta,C)
return der
from scipy.optimize import fmin
def find_Ic_max(A,eta,C):
Gmax=fmin(lambda x: -CPR(x,A,eta,C),0,disp=0)
return CPR(Gmax,A,eta,C)
A=0.
eta=0.8
C=-0.7 # as a unit of pi
f_rf=0.8
IDC_step=0.1
IDC_array=np.linspace(-5,5,201)
IRF_step=0.1
IRF_array=np.linspace(0,15,151)
print("DC array size: "+str(len(IDC_array)))
print("RF array size: "+str(len(IRF_array)))
###Output
DC array size: 201
RF array size: 151
###Markdown
Plot CPR
###Code
G=np.linspace(-3,3,301)*np.pi
def CPR(G, A, eta, C):
return np.sin(G)+eta*np.sin(2*G)+A*np.sin(G+C*np.pi)+A*eta*np.sin(2*G+2*C*np.pi)
plt.figure()
plt.plot(G,CPR(G,A,eta,C))
###Output
_____no_output_____
###Markdown
Test on a single RF current
###Code
t=np.arange(0,300.01,0.01)/f_rf
V=np.empty([len(IDC_array)])
G_array=np.empty(len(t))
for i in range(0,len(IDC_array)):
G_array= odeint(dGdt,0,t,args=(IDC_array[i],15,f_rf,A,eta,C))
V[i]=np.mean(np.gradient(G_array[-10000:,0]))/(0.01/f_rf)/(2*np.pi)
DVDI=2*np.pi*np.gradient(V,IDC_step) #differential resistance dV/dI
plt.plot(t,G_array)
JV=f_rf
plt.figure()
plt.plot(IDC_array,V/JV)
plt.grid()
plt.figure()
plt.plot(IDC_array,DVDI)
#plt.ylim([0,3])
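# Quick check of the quantization discussed above (a sketch): on the Shapiro steps the
# normalized voltage V/f_rf should sit near integers (f_0 = 1 in these units).
print("step indices present:", np.unique(np.round(V/JV)))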
_name_file = "f_"+str(f_rf)+"_A"+str(np.round(A,3))+"_eta"+str(np.round(eta,3))+"_C"+str(np.round(C,2))+"pi_norm"
_name_title = "f= "+str(f_rf)+", A= "+str(np.round(A,3))+", eta= "+str(np.round(eta,3))+",C= "+str(np.round(C,2))+"pi"
print(_name_title)
T1=datetime.now()
print (T1)
V=np.empty([len(IRF_array),len(IDC_array)])
for i in range(0,len(IRF_array)):
print("RF power now: "+str(i)+" of "+str(len(IRF_array))+" ,"+str(datetime.now()),end="\r")
for j in range(0,len(IDC_array)):
t=np.arange(0,300.01,0.01)/f_rf
G_array= odeint(dGdt,0,t,args=(IDC_array[j],IRF_array[i],f_rf,A,eta,C))
V[i,j]=np.mean(np.gradient(G_array[-10001:,0]))/(0.01/f_rf)/(2*np.pi)
DVDI=2*np.pi*np.gradient(V,IDC_step,axis=1)
print ("\n It takes " + str(datetime.now()-T1))
plt.figure()
plt.pcolormesh(IDC_array, IRF_array, DVDI, cmap = 'inferno', vmin = 0,linewidth=0,rasterized=True,shading="auto")
plt.xlabel("DC Current($I/I_C$)")
plt.ylabel("RF Current ($I_RF/I_C$)")
plt.colorbar(label = "DV/DI")
plt.title(_name_title)
plt.savefig("DVDI_"+_name_file+".pdf")
plt.show()
plt.figure()
plt.pcolormesh(IDC_array, IRF_array, V/f_rf , cmap = 'coolwarm',linewidth=0,rasterized=True,shading="auto")
plt.xlabel("DC Current($I/I_C$)")
plt.ylabel("RF Current ($I_RF/I_C$)")
plt.colorbar(label = "$V/I_CR_N$")
plt.title(_name_title)
plt.savefig("V_"+_name_file+".pdf")
plt.show()
plt.figure()
plt.plot(IDC_array,V[1,:]/f_rf)#/(np.pi*hbar*f/Qe))
plt.show()
plt.figure()
plt.plot(IDC_array,DVDI[1,:])
plt.show()
savemat("data"+_name_file+'.mat',mdict={'IDC':IDC_array,'IRF':IRF_array,'A':A, 'eta':eta, 'f_rf':f_rf,'C':C,'V':V,'DVDI':DVDI})
print('file saved')
###Output
f= 0.4, A= 0.2, eta= 0.8,C= -0.8pi
2020-09-26 13:11:28.119589
RF power now: 3 of 151 ,2020-09-26 13:12:04.409093
###Markdown
Calculate the normalized frequency
###Code
Qe=1.602e-19
Ic=2e-6
Rn=13
h=6.626e-34
f0=2*Qe*Ic*Rn/h
print(5e9/f0)
print(2.5e9/f0)
###Output
0.3976999903966197
0.19884999519830984
###Markdown
Simulation using log scales in power
###Code
IDC_step=0.05
IDC_array=np.linspace(-10,10,401)
PRF_step=0.02
PRF_array=np.linspace(-1.5,2.5,201)
IRF_array = 10**(PRF_array/2)
print(IRF_array[-1])
print(IRF_array[0])
print("DC array size: "+str(len(IDC_array)))
print("RF array size: "+str(len(IRF_array)))
print("DC step: "+str(IDC_array[1]-IDC_array[0]))
print("RF step: "+str(PRF_array[1]-PRF_array[0]))
f_rf=0.4
A_array=np.array([0])
eta_array=np.array([0,0.9])
C_array=np.array([0])
for A in A_array:
for eta in eta_array:
for C in C_array:
_name_file = "f_"+str(f_rf)+"_A"+str(np.round(A,3))+"_eta"+str(np.round(eta,3))+"_C"+str(np.round(C,2))+"pi_log"
_name_title = "f= "+str(f_rf)+", A= "+str(np.round(A,3))+", eta= "+str(np.round(eta,3))+",C= "+str(np.round(C,2))+"pi"
print(_name_title)
#Ic_max=find_Ic_max(A,eta,C)
T1=datetime.now()
print (T1)
V=np.empty([len(IRF_array),len(IDC_array)])
for i in range(0,len(IRF_array)):
#print("RF power now: "+str(i)+" of "+str(len(IRF_array))+" ,"+str(datetime.now()),end="\r")
for j in range(0,len(IDC_array)):
t=np.arange(0,300.01,0.01)/f_rf
G_array= odeint(dGdt,0,t,args=(IDC_array[j],IRF_array[i],f_rf,A,eta,C))
V[i,j]=np.mean(np.gradient(G_array[-10001:,0]))/(0.01/f_rf)/(2*np.pi)
DVDI=2*np.pi*np.gradient(V,IDC_step,axis=1)
print ("\n It takes " + str(datetime.now()-T1))
plt.figure()
plt.pcolormesh(IDC_array, PRF_array, DVDI, cmap = 'inferno', vmin = 0,linewidth=0,rasterized=True,shading="auto")
plt.xlabel("DC Current($I/I_C$)")
plt.ylabel("RF Power (a.u.)")
plt.colorbar(label = "DV/DI")
plt.title(_name_title)
plt.savefig("DVDI_"+_name_file+".pdf")
plt.show()
#plt.figure()
#plt.pcolormesh(IDC_array, PRF_array, V/f_rf , cmap = 'coolwarm',linewidth=0,rasterized=True,shading="auto")
#plt.xlabel("DC Current($I/I_C$)")
#plt.ylabel("RF Power (a.u.)")
#plt.colorbar(label = "$V/I_CR_N$")
#plt.title(_name_title)
#plt.savefig("V_"+_name_file+".pdf")
#plt.show()
#plt.figure()
#plt.plot(IDC_array,V[len(IRF_array)//2,:]/f_rf)#/(np.pi*hbar*f/Qe))
#plt.title("cut at power= "+str(PRF_array[len(IRF_array)//2]))
#plt.show()
#plt.figure()
#plt.plot(IDC_array,DVDI[len(IRF_array)//2,:])
#plt.title("cut at power= "+str(PRF_array[len(IRF_array)//2]))
#plt.show()
savemat("./data"+_name_file+'.mat',mdict={'IDC':IDC_array,'IRF':IRF_array,'PRF':PRF_array,'A':A, 'eta':eta, 'f_rf':f_rf,'C':C,'V':V,'DVDI':DVDI})#,'Ic_max':Ic_max})
print('file saved')
plt.figure()
plt.plot(IDC_array,DVDI[130,:,])
plt.show()
print(PRF_array[130])
from scipy.io import loadmat
import sys
#sys.path.insert(0, 'C:/Users/QMDla/Documents/GitHub/data_file_manipulations/')
sys.path.insert(0, '/Volumes/GoogleDrive/My Drive/GitHub/data_file_manipulations/')
import files_manipulation
import importlib
importlib.reload(files_manipulation)
dataDir = "./"
files_manipulation.merge_multiple_mat(dataDir,True) # True for saving .h5
import h5py
fd= h5py.File('merged.h5','r')
list(fd.keys())
A=fd['A'][...]
print(A.shape)
C=fd['C'][...]
print(C.shape)
DVDI=fd['DVDI'][...]
print(DVDI.shape)
IDC=fd['IDC'][...]
print(IDC.shape)
IRF=fd['IRF'][...]
print(IRF.shape)
PRF=fd['PRF'][...]
print(PRF.shape)
V=fd['V'][...]
print(V.shape)
eta=fd['eta'][...]
print(eta.shape)
f_rf=fd['f_rf'][...]
print(f_rf.shape)
print(A)
print(eta)
print(f_rf)
print(C)
selected=np.array([5,0,2,6,3,8])
for i in selected:
fig=plt.figure()
im=plt.pcolormesh(IDC, PRF, np.squeeze(DVDI[:,:,i]), cmap = 'inferno', vmin = 0,vmax=8.5,linewidth=0,rasterized=True,shading="auto")
plt.xlabel("I")
plt.title("eta="+str(eta[i])+", A="+str(A[i])+", C="+str(C[i]))
plt.xlim([-10,10])
plt.ylim([-1.5,2.5])
plt.ylabel("P")
cbaxes = fig.add_axes([0.1, 1, 0.4, 0.05])
cb=fig.colorbar(im, label = "DV/DI",orientation="horizontal",cax=cbaxes)
plt.savefig("DVDI_eta"+str(eta[i])+"_A"+str(A[i])+"_C"+str(C[i])+".pdf",bbox_inches='tight')
plt.show()
print(np.max(np.max(DVDI[:,:,i])))
plt.figure()
plt.plot(IDC,DVDI[150,:,2])
#plt.xlim([-4,4])
print(PRF[150])
plt.figure()
plt.plot(IDC,V[150,:,2])
###Output
1.5
|
covid_mutation_analysis_and_nextstrain_build/EpitopeMutationRate.ipynb | ###Markdown
Random aside, making a fasta file for each of the full proteins
###Code
# Assumed from earlier cells (not shown here): Bio.SeqIO and the list `amino_acid_files`.
from Bio import SeqIO

seq_dict = dict()
for aa_file in amino_acid_files:
protein = aa_file.split('_')[-1].split('.')[0]
new_aa_file= 'data/larger_version_modified/aligned_protein_'+protein+'.fasta' # newer code
with open(new_aa_file, "rt") as handle:
records = list(SeqIO.parse(handle, "fasta"))
for ind, r in enumerate(records):
if r.id == 'Wuhan-Hu-1/2019':
print('ind = ', ind)
ref_seq_ind = ind
ref_seq_new = str(records[ref_seq_ind].seq)
seq_dict[protein] = ref_seq_new
seq_dict;
### All ind's should be zero by design, because I have added the reference sequence to the beginning of all the
### protein files
### Create the 'base_proteins_with_end_codon_larger.fasta' file
with open('base_proteins_with_end_codon_larger.fasta', 'w') as f:
for k, v in seq_dict.items():
f.write('> Protein: '+k+' | Base Sequence: Wuhan-Hu-1/2019\n')
f.write(v +'\n')
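# Quick verification (a sketch): read the file back and confirm one record per protein.
written = list(SeqIO.parse('base_proteins_with_end_codon_larger.fasta', 'fasta'))
print(len(written), 'records written')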
###Output
_____no_output_____
###Markdown
Checking if the proteins have changed from original due to new masking* This was necessary when NextStrain was doing different read calling!
###Code
# for aa_file in amino_acid_files:
# protein = aa_file.split('_')[-1].split('.')[0]
# old_aa_file = '../PUFFIN/proteins/v1/aligned_aa_'+protein+'.fasta' # Trenton's code
# # old_aa_file = 'PUFFIN/proteins/aligned_aa_'+protein+'.fasta' # new code wrong
# # old_aa_file = '../PUFFIN/proteins/v1/aligned_aa_'+protein+'.fasta' # newer code wrong
# print('protein:', protein)
# with open(old_aa_file, "rt") as handle:
# records = list(SeqIO.parse(handle, "fasta"))
# # getting rid of the node sequences!
# no_nodes = []
# for r in records:
# if 'NODE_' not in r.id:
# no_nodes.append(r)
# records = no_nodes
# len(records)
# for ind, r in enumerate(records):
# if r.id == 'Wuhan/IPBCAMS-WH-01/2019':#'Wuhan-Hu-1/2019':#'Wuhan/WH01/2019':#'Wuhan/IPBCAMS-WH-01/2019':
# print(ind)
# ref_seq_ind = ind
# ref_seq_old = str(records[ref_seq_ind].seq)[:-1]
# print('length of old sequence', len(ref_seq_old))
# new_aa_file= 'PUFFIN/proteins/aligned_protein_'+protein+'.fasta'
# with open(new_aa_file, "rt") as handle:
# records = list(SeqIO.parse(handle, "fasta"))
# # getting rid of the node sequences!
# no_nodes = []
# for r in records:
# if 'NODE_' not in r.id:
# no_nodes.append(r)
# records = no_nodes
# len(records)
# for ind, r in enumerate(records):
# if r.id == 'Wuhan/IPBCAMS-WH-01/2019':#'Wuhan-Hu-1/2019':#'Wuhan/WH01/2019':#'Wuhan/IPBCAMS-WH-01/2019':
# print(ind)
# ref_seq_ind = ind
# ref_seq_new = str(records[ref_seq_ind].seq)[:-1]
# if str(records[ref_seq_ind].seq)[-1] == '*':
# print(protein, 'has star at end')
# else:
# print(protein, 'has NO STAR at end')
# mask = np.asarray(list(ref_seq_old)) != np.asarray(list(ref_seq_new))
# print(np.asarray(list(ref_seq_old))[mask])
# print(np.arange(len(ref_seq_old))[mask])
# for aa_file in amino_acid_files:
# protein = aa_file.split('_')[-1].split('.')[0]
# old_aa_file = '../PUFFIN/proteins/v1/aligned_aa_'+protein+'.fasta'
# print('protein:', protein)
# with open(old_aa_file, "rt") as handle:
# records = list(SeqIO.parse(handle, "fasta"))
# # getting rid of the node sequences!
# no_nodes = []
# for r in records:
# if 'NODE_' not in r.id:
# no_nodes.append(r)
# records = no_nodes
# len(records)
# for ind, r in enumerate(records):
# if r.id == 'Wuhan/IPBCAMS-WH-01/2019':#'Wuhan-Hu-1/2019':#'Wuhan/WH01/2019':#'Wuhan/IPBCAMS-WH-01/2019':
# print(ind)
# ref_seq_ind = ind
# ref_seq_old = str(records[ref_seq_ind].seq)[:-1]
# print('length of old sequence', len(ref_seq_old))
# new_aa_file= '../PUFFIN/proteins/v6/aligned_protein_'+protein+'.fasta'
# with open(new_aa_file, "rt") as handle:
# records = list(SeqIO.parse(handle, "fasta"))
# # getting rid of the node sequences!
# no_nodes = []
# for r in records:
# if 'NODE_' not in r.id:
# no_nodes.append(r)
# records = no_nodes
# len(records)
# for ind, r in enumerate(records):
# if r.id == 'Wuhan/IPBCAMS-WH-01/2019':#'Wuhan-Hu-1/2019':#'Wuhan/WH01/2019':#'Wuhan/IPBCAMS-WH-01/2019':
# print(ind)
# ref_seq_ind = ind
# ref_seq_new = str(records[ref_seq_ind].seq)[:-1]
# if str(records[ref_seq_ind].seq)[-1] == '*':
# print(protein, 'has star at end')
# else:
# print(protein, 'has NO STAR at end')
# mask = np.asarray(list(ref_seq_old)) != np.asarray(list(ref_seq_new))
# print(np.asarray(list(ref_seq_old))[mask])
# print(np.arange(len(ref_seq_old))[mask])
# # what is the frequency before and after at these different sites?
# protein_name = 'ORF1b'
# position = 1687
# old_aa_file = '../PUFFIN/proteins/v1/aligned_aa_'+protein_name+'.fasta'
# print('protein:', protein)
# with open(old_aa_file, "rt") as handle:
# records = list(SeqIO.parse(handle, "fasta"))
# # getting rid of the node sequences!
# no_nodes = []
# for r in records:
# if 'NODE_' not in r.id:
# no_nodes.append(r)
# records = no_nodes
# len(records)
# for ind, r in enumerate(records):
# if r.id == 'Wuhan/IPBCAMS-WH-01/2019':#'Wuhan-Hu-1/2019':#'Wuhan/WH01/2019':#'Wuhan/IPBCAMS-WH-01/2019':
# print(ind)
# ref_seq_ind = ind
# aa_counter={}
# ref_seq_old = str(records[ref_seq_ind].seq)[:-1]
# for ind, r in enumerate(records):
# try:
# aa_counter[str(r.seq[position])] += 1
# except:
# aa_counter[str(r.seq[position])] =1
# aa_counter
# new_aa_file = 'data/v6/aligned_protein_'+protein_name+'.fasta'
# print('protein:', protein)
# with open(new_aa_file, "rt") as handle:
# records = list(SeqIO.parse(handle, "fasta"))
# # getting rid of the node sequences!
# no_nodes = []
# for r in records:
# if 'NODE_' not in r.id:
# no_nodes.append(r)
# records = no_nodes
# len(records)
# for ind, r in enumerate(records):
# if r.id == 'Wuhan/IPBCAMS-WH-01/2019':#'Wuhan-Hu-1/2019':#'Wuhan/WH01/2019':#'Wuhan/IPBCAMS-WH-01/2019':
# print(ind)
# ref_seq_ind = ind
# aa_counter={}
# for ind, r in enumerate(records):
# try:
# aa_counter[str(r.seq[position])] += 1
# except:
# aa_counter[str(r.seq[position])] =1
# aa_counter
# len(records)
# amino_acid_files
###Output
_____no_output_____
###Markdown
Removing sequences with a gap, X, or * character, or with no starting M (except for ORF1b, which doesn't start with an M!)
###Code
# doing a quality check
error_seq = np.zeros(len(records))
for aa_file in amino_acid_files:
protein = aa_file.split('_')[-1].split('.')[0]
#print('protein:', protein)
with open(aa_file, "rt") as handle:
records = list(SeqIO.parse(handle, "fasta"))
# getting rid of the node sequences!
no_nodes = []
for r in records:
if 'NODE_' not in r.id:
no_nodes.append(r)
records = no_nodes
num_errs = 0
stop_errs =[]
m_errs = 0
#records = clean_records
for r_ind, r in enumerate(records):
trans = r.seq[:-1] # ignores final stop codon
if protein=='ORF1b': # ignore this masked region
if '*' in trans or 'X' in trans or '-' in trans: # doesnt seem to start with an M for anything
num_errs+=1
error_seq[r_ind] += 1
stop_errs.append( (np.asarray( list(trans) )=='*').sum() )
elif '*' in trans or trans[0] != 'M' or 'X' in trans or '-' in trans:
num_errs+=1
error_seq[r_ind] += 1
stop_errs.append( (np.asarray( list(trans) )=='*').sum() )
#if p=='ORF14':
#print(trans)
if trans[0] != 'M':
m_errs += 1
print(protein, num_errs, num_errs/len(trans), m_errs)
plt.hist(error_seq)
(error_seq>0).sum()
len(error_seq)
drop_seq = (error_seq>0)
###Output
_____no_output_____
###Markdown
Removing Belgian Sequences: https://github.com/nextstrain/ncov/commit/6b7b822858ccc3e21ec279fc6afde30365d25aa0 They had 3 masked regions that need to be accounted for; there may be more read-quality issues too.* This should no longer be a problem!
###Code
'''# getting the nt regions
with open('nextstrain_covid19_ref_protein_pos.txt', 'r') as f:
lines = f.readlines()
take_next_line = False
regions = []
proteins = []
for l in lines:
if 'CDS ' in l:
regions.append(l.split('CDS')[1].strip())
take_next_line=True
elif take_next_line:
proteins.append(l.split('gene=')[1].strip().strip('"'))
take_next_line=False
protein_regions = {p:r for p,r in zip(proteins, regions)}
protein_regions'''
'''belgian_masked_sites = [
13402-266,
24389-21563,
24390-21563]
belgian_masked_sites = np.asarray(belgian_masked_sites)/3
belgian_masked_sites'''
'''the_proteins = ['ORF1a', 'S', 'S']
#for prot, pos in zip(the_proteins,belgian_masked_sites )
masked_dict = {'ORF1a':4378, 'S':942 }
masked_dict'''
'''for aa_file in amino_acid_files:
protein = aa_file.split('_')[-1].split('.')[0]
print('protein:', protein)
old_aa_file = '../PUFFIN/proteins/v4/aligned_protein_'+protein+'.fasta'
with open(old_aa_file, "rt") as handle:
old_records = list(SeqIO.parse(handle, "fasta"))
n_belg = 0
no_nodes = []
for r in old_records:
if 'Belgium' in r.id:
n_belg+=1
if 'NODE_' not in r.id:
no_nodes.append(r)
old_records = no_nodes
print('reference sequences number belgian', n_belg)
for ind, r in enumerate(old_records):
if r.id == 'Wuhan/IPBCAMS-WH-01/2019':#'Wuhan-Hu-1/2019':#'Wuhan/WH01/2019':#'Wuhan/IPBCAMS-WH-01/2019':
print(ind)
ref_seq_ind = ind
ref_seq_old = str(old_records[ref_seq_ind].seq)[:-1]
# this sequence does not have these same masking positions applied.
curr_aa_file = '../PUFFIN/proteins/v6/cleaned_aligned_protein_'+protein+'.fasta'
with open(curr_aa_file, "rt") as handle:
records = list(SeqIO.parse(handle, "fasta"))
belg_seqs=[]
for ind, r in enumerate(records):
if 'Belgium' in r.id:
belg_seqs.append(r.seq)
print('num belg seqs', len(belg_seqs))
if protein in masked_dict.keys():
mask_pos = masked_dict[protein]
print('reference value',ref_seq_old[mask_pos] )
old_records= np.asarray(old_records)
unique, counts = np.unique(old_records[:, mask_pos], return_counts=True)
count_dict = dict(zip(unique, counts))
print('reference values', count_dict)
belg_seqs = np.asarray(belg_seqs)
unique, counts = np.unique(belg_seqs[:, mask_pos], return_counts=True)
count_dict = dict(zip(unique, counts))
print('belgian masked values', count_dict)
records = np.asarray(records)
unique, counts = np.unique(records[:, mask_pos], return_counts=True)
count_dict = dict(zip(unique, counts))
print('all masked sequences', count_dict)'''
'''unique, counts = np.unique(belg_seqs[:, mask_pos], return_counts=True)
count_dict = dict(zip(unique, counts))
count_dict'''
###Output
_____no_output_____
###Markdown
Saving out these cleaned sequences
###Code
# # write out the dropped sequences.
# from Bio import SeqIO
# for aa_file in amino_acid_files:
# protein = aa_file.split('_')[-1].split('.')[0]
# #print('protein:', protein)
# with open(aa_file, "rt") as handle:
# records = list(SeqIO.parse(handle, "fasta"))
# # getting rid of the node sequences!
# no_nodes = []
# for r in records:
# if 'NODE_' not in r.id:
# no_nodes.append(r)
# records = no_nodes
# clean_records = []
# for r_ind, r in enumerate(records):
# if drop_seq[r_ind]==False:
# clean_records.append(r)
# print(len(clean_records))
# with open("../PUFFIN/proteins/v6_cleaned/aligned_protein_"+protein+".fasta", "w") as output_handle:
# SeqIO.write(clean_records, output_handle, "fasta")
# amino_acid_files
###Output
_____no_output_____
###Markdown
Computing the Entropy in a Parallelized Fashion
###Code
def computeWindows(aa_file):
print('computeWindows!!!')
print('aa_file = ', aa_file) ## Alex
def entropy_calc(x):
# where x is a list of numbers.
summ = 0
nc = np.sum(x)
for e in x:
p = e/nc
summ += p*np.log2(p)
return -summ
window_sizes = list(range(8,12)) + list(range(13,26)) # why were sliding windows of 12 excluded? -- Alex
#df = dict() # will store the results.
protein_res = []
protein = aa_file.split('_')[-1].split('.')[0]
print('protein:', protein)
with open(aa_file, "rt") as handle:
records = list(SeqIO.parse(handle, "fasta"))
# getting rid of the node sequences!
no_nodes = []
for r in records:
if 'NODE_' not in r.id:
no_nodes.append(r)
records = no_nodes
print('size of records', len(records))
# getting the reference sequence
for ind, r in enumerate(records):
if r.id == 'Wuhan-Hu-1/2019':#'Wuhan/WH01/2019':#'Wuhan/IPBCAMS-WH-01/2019':
ref_seq_ind = ind
ref_seq = str(records[ref_seq_ind].seq)
seqs = np.array(records)
if seqs[ref_seq_ind, -1]=='*': # ignore the stop code at the end.
print('removing the stop codon at the end', protein)
ref_seq = ref_seq[:-1]
seqs = seqs[:, :-1]
for window in window_sizes:
print('window size', window)
window_res = []
# get epitope based column slices. gives a list of epi columns.
epi_columns = [seqs[:,i:i+window] for i in range(seqs.shape[1]-window+1)]
for col_ind, col in enumerate(epi_columns):
# count all of the unique epitopes here.
#first need to convert each of the columns into strings:
col = np.asarray( [ ''.join(col[i,:]) for i in range(col.shape[0]) ])
unique, counts = np.unique(col, return_counts=True)
ent = entropy_calc(counts)
# useful for percentages
ref_epitope = col[ref_seq_ind]
count_dict = dict(zip(unique, counts))
# -1 for no self count. percentage mutated.
perc = 1 - ((count_dict[ref_epitope]-1)/(np.sum(counts)-1))
# start pos, window size, entropy
window_res.append([protein, ref_epitope, col_ind, window, ent, perc])
protein_res += window_res
df = pd.DataFrame(protein_res)
df.columns = ['protein', 'epitope', 'start_pos', 'epi_len', 'entropy', 'perc_mutated']
# df.to_csv('../data/processed/epitope_calcs/Epitope_calc_'+protein+'.csv' , index=False)
df.to_csv('data/processed/epitope_calcs_larger/Epitope_calc_'+protein+'.csv' , index=False)
#df[protein] = protein_res
### Check that these are the larger_version_modified files
amino_acid_files
# # check these are the right AA files:
# # amino_acid_files = !ls ../PUFFIN/proteins/v6_cleaned/*_protein_*
# amino_acid_files = !ls data/v6_cleaned/*_protein_*
# amino_acid_files
# # len(amino_acid_files)
# from multiprocessing import Process, Queue, cpu_count, Pool
from multiprocess import Process, Queue, cpu_count, Pool
import time
ncores = 7
start_time = time.time()
### uncomment this to run it, commented for safety reasons -- Alex
# multicore generate new samples
print('Starting pooling!!!')
p = Pool(ncores)
p.map(computeWindows, amino_acid_files)
p.close()
print('all processes are done!, trying to join together all of the files')
for ind, aa_file in enumerate(amino_acid_files):
protein = aa_file.split('_')[-1].split('.')[0]
# temp = pd.read_csv('../data/processed/epitope_calcs/Epitope_calc_'+protein+'.csv')
temp = pd.read_csv('data/processed/epitope_calcs_larger/Epitope_calc_'+protein+'.csv')
if ind == 0:
df = temp
else:
df = df.append(temp)
print('Total run time in minutes: '+str((time.time()-start_time)/60))
# df.to_csv('../data/processed/parallelized_epitope_entropies.csv', index=False)
df.to_csv('data/processed/parallelized_epitope_entropies_larger.csv', index=False)
###Output
Starting pooling!!!
computeWindows!!!computeWindows!!!computeWindows!!!
computeWindows!!!
aa_file = computeWindows!!!aa_file = aa_file = computeWindows!!!
computeWindows!!!
aa_file = data/larger_version_modified/aligned_protein_M.fasta
aa_file = data/larger_version_modified/aligned_protein_E.fasta
aa_file = data/larger_version_modified/aligned_protein_N.fasta
aa_file = protein:data/larger_version_modified/aligned_protein_ORF10.fasta
data/larger_version_modified/aligned_protein_ORF1a.fastaprotein:
data/larger_version_modified/aligned_protein_ORF1b.fastaprotein:
data/larger_version_modified/aligned_protein_ORF3a.fastaMprotein:E
protein:
protein:Nprotein:ORF1aORF10
ORF1bORF3a
size of records size of records12789
12789
size of records 12789
size of records 12789
size of records 12789
size of records removing the stop codon at the end12789
ORF10
window size 8
removing the stop codon at the end E
size of recordswindow size 127898
removing the stop codon at the end M
window size 8
window size 9
removing the stop codon at the end ORF3a
window size 8
window size 10
removing the stop codon at the end N
window size 8
window size 9
window size 11
window size 13
window size 10
window size 14
window size 15
window size 11
window size 9
window size 16
window size 17
window size 9
window size 13
window size 18
window size 19
window size 14
window size 20
window size 21
window size 22
removing the stop codon at the end ORF1b
window size 8
window size 9
window size 10
window size 15
window size 23
window size 24
window size 25
window size 16
window size 10
computeWindows!!!
aa_file = data/larger_version_modified/aligned_protein_ORF6.fasta
protein: ORF6
size of records 12789
removing the stop codon at the end ORF6
window size 8
window size 17
window size 9
window size 11
window size 10
window size 8
window size 18
window size 11
window size 13
window size 19
window size 11
window size 10
window size 14
window size 13
window size 20
window size 15
window size 16
window size 21
window size 17
window size 22
window size 18
window size 13
window size 14
window size 19
window size 23
window size 20
window size 11
window size 24
window size 21
window size 22
window size 25
window size 15
window size 14
window size 23
computeWindows!!!
aa_file = data/larger_version_modified/aligned_protein_ORF7a.fasta
protein: ORF7a
size of records 12789
removing the stop codon at the end ORF7a
window size 8
window size 24
window size 25
window size 9
computeWindows!!!
aa_file = data/larger_version_modified/aligned_protein_ORF7b.fasta
protein: ORF7b
size of records 12789
removing the stop codon at the end ORF7b
window size 8
window size 13
window size 16
window size 10
window size 9
window size 10
window size 15
window size 11
window size 13
window size 11
window size 14
window size 15
window size 16
window size 13
window size 17
window size 17
window size 18
window size 19
window size 14
window size 20
window size 16
window size 21
window size 22
window size 14
window size 23
window size 15
window size 24
window size 18
window size 25
computeWindows!!!
aa_file = data/larger_version_modified/aligned_protein_ORF8.fasta
protein: ORF8
size of records 12789
removing the stop codon at the end ORF8
window size 8
window size 16
window size 9
window size 17
window size 17
window size 10
window size 19
window size 18
window size 11
window size 15
window size 9
window size 13
window size 19
window size 18
window size 14
window size 20
window size 20
window size 15
window size 21
window size 16
window size 16
window size 19
window size 21
window size 22
window size 17
window size 18
window size 23
window size 22
window size 19
window size 20
window size 24
window size 17
window size 20
window size 25
window size 23
window size 21
computeWindows!!!
aa_file = data/larger_version_modified/aligned_protein_ORF9b.fasta
protein: ORF9b
size of records 12789
window size 21
removing the stop codon at the end ORF9b
window size 8
window size 22
window size 9
window size 10
window size 11
window size 23
window size 24
window size 18
window size 13
window size 9
window size 22
window size 24
window size 14
window size 15
window size 25
window size 25
window size 16
window size 17
computeWindows!!!
aa_file = data/larger_version_modified/aligned_protein_S.fasta
protein: S
size of records 12789
window size 23
window size 18
window size 19
removing the stop codon at the end S
window size window size8
19
window size 20
window size 10
window size 21
window size 24
window size 22
window size 23
window size 20
window size 24
window size 25
window size 25
window size 9
window size 21
window size 22
window size 10
window size 23
window size 11
window size 24
window size 10
window size 11
window size 25
window size 13
window size 13
window size 14
window size 11
window size 15
window size 14
window size 16
window size 17
window size 15
window size 13
window size 18
window size 19
window size 16
window size 20
window size 14
window size 21
window size 17
window size 22
window size 15
window size 18
window size 23
window size 24
window size 19
window size 25
window size 16
window size 20
window size 17
window size 21
window size 22
window size 18
window size 23
window size 19
window size 24
window size 20
window size 25
window size 21
window size 22
window size 23
window size 24
window size 25
all processes are done!, trying to join together all of the files
Total run time in minutes: 103.48027988274892
###Markdown
Can ignore all of the following now. Looking at the mutation differences between Hu-1 and the original sequence* This analysis motivated my suggestion that we should in fact be using the Hu-1 sequence, as we were missing a couple of potential epitopes we could have chosen.
###Code
#df = pd.read_csv('../data/processed/epitope_entropies.csv')
#df.head()
df.shape
df.head()
plt.scatter(df[df.protein=='ORF1a'].start_pos, df[df.protein=='ORF1a'].perc_mutated)
plt.axhline(0.001, c='red')
#plt.xlim(2690, 2727)
plt.show()
# mutation positions = mut_pos = [2707, 2907]
mut_pos = [2707, 2907]
orf1a = df[df.protein=='ORF1a']
orf1a.shape
orf1a['crosses_mutation'] = 0
for pos in mut_pos:
seq_start = orf1a['start_pos']
seq_end = orf1a['start_pos']+(orf1a['epi_len']-1)
in_region = np.logical_and(pos >= seq_start,pos <= seq_end)
orf1a.loc[in_region,'crosses_mutation'] = 1
#apply protein mask and then epitope mask
orf1a[orf1a.crosses_mutation.astype(bool)].shape
(orf1a[orf1a.crosses_mutation.astype(bool)].perc_mutated > 0.001).sum()
below_thresh_mask = orf1a[orf1a.crosses_mutation.astype(bool)].perc_mutated <= 0.001
below_thresh_mask.sum()
orf1a[orf1a.crosses_mutation.astype(bool)][below_thresh_mask]
orf1a[orf1a.crosses_mutation.astype(bool)][below_thresh_mask].start_pos.unique()
# epitopes that are mutated according to our data but not the other papers.
danger_epitopes = orf1a[orf1a.crosses_mutation.astype(bool)][below_thresh_mask].epitope.to_list()
import pickle
other_epitopes = pickle.load(open('../data/processed/Other_Methods_Proposed_Peptides.pkl', 'rb'))
len(other_epitopes.keys())
for k, v in other_epitopes.items():
if len(list(set(danger_epitopes).intersection(set(v))))>0:
print('Missing', k, list(set(danger_epitopes).intersection(set(v))))
###Output
Missing MHC_1_Grifoni-LaJolla ['KLIEYTDFA']
Missing MHC_1_Nerli-UCSC ['SKLIEYTDFA', 'KLIEYTDFAT']
###Markdown
It is only the second mutation position that is below the mutation threshold
###Code
'''for ind, aa_file in enumerate(amino_acid_files):
protein = aa_file.split('_')[-1].split('.')[0]
temp = pd.read_csv('../data/processed/epitope_calcs/Epitope_calc_'+protein+'.csv')
if ind == 0:
df = temp
else:
df = df.append(temp)'''
df.head()
df.head()
df.shape
df.tail()
df.reset_index(inplace=True)
df.drop('index', axis=1, inplace=True)
plt.hist(df.entropy)
(df.perc_mutated==0.0).sum()
df.shape
plt.hist(df.perc_mutated)
(df.perc_mutated <0.001).sum()
(df.entropy == 0.0).sum()
(df.perc_mutated==0).sum()
df.head()
df.to_csv('../data/processed/Hu1_epitope_entropies.csv', index=False)
###Output
_____no_output_____
###Markdown
Looking at and flagging the masked positions
###Code
df = pd.read_csv('../data/processed/epitope_entropies.csv')
df.head()
df.masked_position.sum()
df.groupby('protein').masked_position.sum()
df[df.protein=='ORF1a'].groupby(['protein', 'epi_len']).masked_position.sum()
df.epi_len.unique()
np.arange(8,26).sum()-12
# need to mask out all of these positions by setting them to 100% prob of mutating. and have
#anything within them be set to prob of 999.
# this will help flag them as being unique.
# actually just create a new masked column feature.
masked_nt_positions = [18529, 29849, 29851, 29853, 13402, 24389, 24390]
# getting the nt regions
with open('nextstrain_covid19_ref_protein_pos.txt', 'r') as f:
lines = f.readlines()
take_next_line = False
regions = []
proteins = []
for l in lines:
if 'CDS ' in l:
regions.append(l.split('CDS')[1].strip())
take_next_line=True
elif take_next_line:
proteins.append(l.split('gene=')[1].strip().strip('"'))
take_next_line=False
protein_regions = {p:r for p,r in zip(proteins, regions)}
protein_regions
in_proteins = [18529-13468,
13402-266,
24389-21563,
24390-21563]
4400-4378
in_proteins = np.floor(np.asarray(in_proteins)/3)
in_proteins
the_proteins = ['ORF1b', 'ORF1a', 'S', 'S']
df.head()
df['masked_position'] = 0
for pos, protein in zip(in_proteins, the_proteins):
protein_mask = protein==df.protein # select relevant protein
seq_start = df['start_pos']
seq_end = df['start_pos']+(df['epi_len']-1)
in_region = np.logical_and(pos >= seq_start,pos <= seq_end)
#if all 4 matter:::
#. front_in_region = np.logical_and(glyco_start >= seq_start,glyco_start <= seq_end)
# end_in_region = np.logical_and(glyco_end >= seq_start,glyco_end <= seq_end)
#. in_region = np.logical_or(front_in_region, end_in_region)
in_region_and_protein = np.logical_and(protein_mask,in_region)
df.loc[in_region_and_protein,'masked_position'] = 1
#apply protein mask and then epitope mask
df[np.logical_and(df.protein=='ORF1b', np.logical_and(df.start_pos==1686,df.epi_len==8 ))]
df.head()
df.tail()
len(df.protein.unique())
df.shape
len(df.epi_len.unique())
test = pd.read_csv('../data/processed/pre_cleaning_AllEpitopeFeatures.csv')
test.head()
test.tail()
test.shape
# run for a single example.
df.head()
len(df.protein.unique())
df[df.protein=='E'].tail()
def entropy_calc(x):
# where x is a list of numbers.
summ = 0
nc = np.sum(x)
for e in x:
p = e/nc
summ += p*np.log2(p)
return -summ
window_sizes = list(range(8,12)) + list(range(13,26))
aa_file = amino_acid_files[0]
protein = aa_file.split('_')[-1].split('.')[0]
print('protein:', protein)
with open(aa_file, "rt") as handle:
records = list(SeqIO.parse(handle, "fasta"))
# getting rid of the node sequences!
no_nodes = []
for r in records:
if 'NODE_' not in r.id:
no_nodes.append(r)
records = no_nodes
# getting the reference sequence
for ind, r in enumerate(records):
if r.id == 'Wuhan/IPBCAMS-WH-01/2019':#'Wuhan-Hu-1/2019':#'Wuhan/WH01/2019':#'Wuhan/IPBCAMS-WH-01/2019':
ref_seq_ind = ind
ref_seq = str(records[ref_seq_ind].seq)[:-1]
seqs = np.array(records)[:, :-1] # ignore the stop code at the end.
for window in window_sizes:
print('window size', window)
window_res = []
# get epitope based column slices. gives a list of epi columns.
epi_columns = [seqs[:,i:i+window] for i in range(seqs.shape[1]-window+1)]
for col_ind, col in enumerate(epi_columns):
# count all of the unique epitopes here.
#first need to convert each of the columns into strings:
col = np.asarray( [ ''.join(col[i,:]) for i in range(col.shape[0]) ])
unique, counts = np.unique(col, return_counts=True)
ent = entropy_calc(counts)
# useful for percentages
ref_epitope = col[ref_seq_ind]
count_dict = dict(zip(unique, counts))
# -1 for no self count. percentage mutated.
perc = 1 - ((count_dict[ref_epitope]-1)/(np.sum(counts)-1))
break
break
col
epi_columns[0]
unique
counts
ent
ref_epitope
count_dict
perc
df = pd.read_csv('../data/processed/epitope_entropies.csv')
df.head()
(df.entropy == 0.0).sum()
seqs.shape
len(ref_seq)
len(epi_columns)
protein_res[-5:]
np.log2(len(seqs))
np.array(df['E'])[:,2];
plt.hist(np.array(df['E'])[:,2])
###Output
_____no_output_____ |
pipeline-components/create-sparkapplication/kubeflow-pipeline-create-SparkApplication.ipynb | ###Markdown
Install libs if necessary
###Code
%%capture
# Install the SDK (Uncomment the code if the SDK is not installed before)
!python3 -m pip install 'kfp>=0.1.31' --quiet
# Restart the kernel for changes to take effect
###Output
_____no_output_____
###Markdown
Load pipeline components
###Code
from kfp.components import load_component_from_file
create_sparkapplication_op = load_component_from_file('component.yaml')
help(create_sparkapplication_op)
###Output
Help on function create_sparkapplication:
create_sparkapplication()
create_sparkapplication
Create a SparkApplication CRD in the same k8s cluster for Spark-operator.
###Markdown
Define the pipeline
###Code
import kfp.dsl as dsl
@dsl.pipeline(
name='Integration of spark-operator',
description=''
)
def sparkapplication_pipeline():
create_sparkapplication_op()
pipeline_func = sparkapplication_pipeline
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
import kfp.compiler as compiler
pipeline_filename = pipeline_func.__name__ + '.zip'
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Run the pipeline
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment("test")
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____ |
tests/test_jacobian_calculation.ipynb | ###Markdown
**Objective**: Test Jacobian calculation in Python. First define the function. Autograd does not support assignment to an index, so construct $\frac{dy}{dt}$ using stack instead of indexing.
###Code
# Imports assumed for this notebook (not shown in the original cell); autograd
# supplies the numpy wrapper and the derivative operators used below.
import autograd.numpy as np
from autograd import jacobian, make_jvp
from autograd.differential_operators import make_jvp_reversemode

def f(x1,x2,x3,x4,x5,x6):
# State is { x,y,theta, vx,vy,omega }
dydt = []
dydt.append( x4**2 + 2*x6 )
dydt.append( 5*x5 )
dydt.append( x4 + 2*x5 + 3*x6 )
dydt.append( x1**2 + 2*x2 )
dydt.append( x2**3 + 4*x3 )
dydt.append( 0 )
return np.stack(dydt)
###Output
_____no_output_____
###Markdown
Test that the calculation is what we expect
###Code
x = np.ones(6)
res1 = []
res2 = []
for i in range(6):
res1.append( jacobian(f, argnum=i)(*x) )
res2.append( make_jvp(f, argnum=i)(*x)(1)[1] )
res1 = np.stack(res1, axis=1)
res2 = np.stack(res2, axis=1)
print('jacobian:\n', res1)
print('make_jvp:\n',res2)
make_jvp_reversemode(f, argnum=0)(1.0, 1.0, 1.0, 1.0, 1.0, 1.0)([1,0,0,0,0,0])
###Output
_____no_output_____
###Markdown
Compare speed of `Jacobian` vs `make_jvp`. `make_jvp` is expected to be much faster because it is a fast forward-mode Jacobian. `Jacobian`
###Code
%%timeit
x = np.zeros(6)
J = [ jacobian(f, argnum=i) for i in range(6) ]
for t in range(1000):
res = []
for i in range(6):
res.append( J[i](*x) )
res = np.stack(res).T
###Output
2.98 s ± 31.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
`make_jvp`
###Code
%%timeit
x = np.zeros(6)
J = [ make_jvp_reversemode(f, argnum=i) for i in range(6) ]
for t in range(1000):
res = []
for i in range(6):
res.append( J[i](*x)(1)[1] )
res = np.stack(res).T
###Output
_____no_output_____ |
10_high_level_functions.ipynb | ###Markdown
Higher Order Functions
###Code
# Defined by the weather people
def crude_good_enough(guess, x):
if abs(guess * guess - x) < 3:
return True
else:
return False
def avg(a, b):
return (a + b) / 2.0
def improve_guess(guess, x):
return avg(guess, float(x)/guess)
def sqrt(x, good_enough, guess=0.1):
print "Trying:", guess, "-- Value:", guess*guess
if good_enough(guess, x):
return guess
else:
guess = improve_guess(guess, x)
return sqrt(x, good_enough, guess)
sqrt(36, crude_good_enough)
# By the nuclear reactor people
def very_accurate_good_enough(guess, x):
if abs(guess * guess - x) < 0.00000000001:
return True
else:
return False
sqrt(36, very_accurate_good_enough)
###Output
Trying: 0.1 -- Value: 0.01
Trying: 180.05 -- Value: 32418.0025
Trying: 90.1249722299 -- Value: 8122.51061945
Trying: 45.262208784 -- Value: 2048.66754401
Trying: 23.0287871495 -- Value: 530.325037577
Trying: 12.2960239699 -- Value: 151.192205469
Trying: 7.61189982741 -- Value: 57.9410189825
Trying: 6.17066836877 -- Value: 38.0771481174
Trying: 6.00236017319 -- Value: 36.0283276487
Trying: 6.00000046402 -- Value: 36.0000055682
Trying: 6.0 -- Value: 36.0
###Markdown
Handling Complexity Recall the fib method we wrote earlier.
###Code
def fib(n):
if n <= 1:
return n
else:
return fib(n-2) + fib(n-1)
%time fib(40)
###Output
CPU times: user 53.5 s, sys: 221 ms, total: 53.7 s
Wall time: 54 s
###Markdown
This will take a bit of time so let's see what's wrong. We can use the concept of higher-order functions to tackle this issue.
###Code
def logger(f):
def wrapper(n):
print "I'm going to call a function."
v = f(n)
print "The function returned: ", v
return v
return wrapper
logged_fib = logger(fib) # remember, fib is just a name!
logged_fib(4)
###Output
I'm going to call a function.
The function returned: 3
###Markdown
Now that we can do stuff before the `fib` call, let's see if we can save some values that are repeatedly needed.
###Code
def memoize(f):
mem = {}
def wrapper(x):
if x not in mem:
mem[x] = f(x)
return mem[x]
return wrapper
fib = memoize(fib)
%time fib(40)
###Output
CPU times: user 93 µs, sys: 35 µs, total: 128 µs
Wall time: 107 µs
###Markdown
That's about **450,000** times speedup! Syntactic Sugar We can write this in another way.
###Code
def memoize(f):
mem = {}
def wrapper(x):
if x not in mem:
mem[x] = f(x)
return mem[x]
return wrapper
@memoize # this is called a decorator
def fib(n):
if n <= 1:
return n
else:
return fib(n-1) + fib(n-2)
fib(50)
###Output
_____no_output_____ |
3-Visualization/2-Seaborn/6-Style Und Farbgebung.ipynb | ###Markdown
Style and Color We have already used Seaborn's options for customizing plot appearance a few times. Let's now take a more formal look at this:
###Code
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
tips = sns.load_dataset('tips')
###Output
_____no_output_____
###Markdown
Styles We can set specific styles:
###Code
sns.countplot(x='sex',data=tips)
sns.set_style('ticks')
sns.countplot(x='sex',data=tips,palette='deep')
###Output
_____no_output_____
###Markdown
Removing spines (borders)
###Code
sns.countplot(x='sex',data=tips)
sns.despine()
sns.countplot(x='sex',data=tips)
sns.despine(left=True)
###Output
_____no_output_____
###Markdown
Size and Aspect We can use `plt.figure(figsize=(width,height))`, familiar from *Matplotlib*, to change the size of Seaborn plots. However, we can also adjust the size and aspect of most Seaborn plots via the `size` and `aspect` parameters. For example:
###Code
plt.figure(figsize=(12,3))
sns.countplot(x='sex',data=tips)
sns.lmplot(x='total_bill',y='tip',size=2,aspect=4,data=tips)
###Output
_____no_output_____
###Markdown
Scaling and Context The `set_context()` function allows us to override default properties:
###Code
sns.set_context('poster',font_scale=4)
sns.countplot(x='sex',data=tips,palette='coolwarm')
###Output
_____no_output_____ |
notebooks-src/notebooks/R Examples/Search for Samples or Studies.ipynb | ###Markdown
Search for MGnify Studies or Samples, using MGnifyR The [MGnify API](https://www.ebi.ac.uk/metagenomics/api/v1) returns data and relationships as JSON. [MGnifyR](https://github.com/beadyallen/MGnifyR) is a package to help you read MGnify data into your R analyses. **This example shows you how to perform a search of MGnify Studies or Samples.** You can find all of the other "API endpoints" using the [Browsable API interface in your web browser](https://www.ebi.ac.uk/metagenomics/api/v1). This interface also lets you inspect the kinds of Filters that can be created for each list. This is an interactive code notebook (a Jupyter Notebook). To run this code, click into each cell and press the ▶ button in the top toolbar, or press `shift+enter`.---
###Code
library(IRdisplay)
display_markdown(file = '../_resources/mgnifyr_help.md')
###Output
_____no_output_____
###Markdown
Load packages:
###Code
library(vegan)
library(ggplot2)
library(phyloseq)
library(MGnifyR)
mg <- mgnify_client(usecache = T, cache_dir = '/tmp/mgnify_cache')
###Output
_____no_output_____
###Markdown
Contents- [Example: Find Polar Samples](Example:-find-Polar-samples)- [Example: Find Wastewater Samples](Example:-find-Wastewater-studies)- [More Sample filters](More-Sample-filters)- [More Study filters](More-Study-filters)- [Example: Filtering Samples both API-side and client-side](Example:-adding-additional-filters-to-the-data-frame) Documentation for `mgnify_query`
###Code
?mgnify_query
###Output
_____no_output_____
###Markdown
Example: find Polar samples
###Code
samps_np <- mgnify_query(mg, "samples", latitude_gte=88, maxhits=-1)
samps_sp <- mgnify_query(mg, "samples", latitude_lte=-88, maxhits=-1)
samps_polar <- rbind(samps_np, samps_sp)
head(samps_polar)
###Output
_____no_output_____
###Markdown
Example: find Wastewater studies
###Code
studies_ww <- mgnify_query(mg, "studies", biome_name="wastewater", maxhits=-1)
head(studies_ww)
###Output
_____no_output_____
###Markdown
More Sample filters By location
###Code
more_northerly_than <- mgnify_query(mg, "samples", latitude_gte=88, maxhits=-1)
more_southerly_than <- mgnify_query(mg, "samples", latitude_lte=-88, maxhits=-1)
more_easterly_than <- mgnify_query(mg, "samples", longitude_gte=170, maxhits=-1)
more_westerly_than <- mgnify_query(mg, "samples", longitude_lte=170, maxhits=-1)
at_location <- mgnify_query(mg, "samples", geo_loc_name="usa", maxhits=-1)
###Output
_____no_output_____
###Markdown
By biome
###Code
biome_within_wastewater <- mgnify_query(mg, "samples", biome_name="wastewater", maxhits=-1)
###Output
_____no_output_____
###Markdown
By metadata There are a large number of metadata key:value pairs, because these are author-submitted, along with the samples, to the ENA archive. If you know how to specify the metadata key:value query for the samples you're interested in, you can use this form to find matching Samples:
###Code
from_ex_smokers <- mgnify_query(mg, "samples", metadata_key="smoker", metadata_value="ex-smoker", maxhits=-1)
###Output
_____no_output_____
###Markdown
To find `metadata_key`s and values, it is best to browse the [interactive API Browser](https://www.ebi.ac.uk/metagenomics/v1/samples), and use the `Filters` button to construct queries interactively at first. --- More Study filters By Centre Name
###Code
from_smithsonian <- mgnify_query(mg, "studies", centre_name="Smithsonian", maxhits=-1)
###Output
_____no_output_____
###Markdown
--- Example: adding additional filters to the data frame First, fetch some samples from the Lentic biome. We can specify the entire Biome lineage, too.
###Code
lentic_samples <- mgnify_query(mg, "samples", biome_name="root:Environmental:Aquatic:Lentic", usecache=T)
###Output
_____no_output_____
###Markdown
Now, also filter by depth *within* the returned results, using normal R syntax.
###Code
depth_numeric = as.numeric(lentic_samples$depth) # We must convert data from MGnifyR (always strings) to numerical format.
depth_numeric[is.na(depth_numeric)] = 0.0 # If depth data is missing, assume it is surface-level.
lentic_subset = lentic_samples[depth_numeric >=25 & depth_numeric <=50,] # Filter to samples collected between 25m and 50m down.
lentic_subset
###Output
_____no_output_____ |
Spark Algorithms - Linear Regression (Documentation Example).ipynb | ###Markdown
Linear Regression (Documentation Example) The documentation example is available here: https://spark.apache.org/docs/latest/ml-classification-regression.html. Objective: First, what we'll do is go through this example. This allows us to read from the documentation, understand it, then apply it. This dataset is quite unrealistic, but it is necessary for understanding some of the basic elements of using Spark's MLlib library. More relevant datasets are used in the advanced linear regression exercise.
###Code
# Must be included at the beginning of each new notebook. Remember to change the app name.
import findspark
findspark.init('/home/ubuntu/spark-2.1.1-bin-hadoop2.7')
import pyspark
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('linear_regression_docs').getOrCreate()
# If you're getting an error with numpy, please type 'sudo pip install numpy --user' into the EC2 console.
from pyspark.ml.regression import LinearRegression
# Load model training data. Location of the data may be different.
training = spark.read.format("libsvm").load("Datasets/sample_linear_regression_data.txt")
###Output
_____no_output_____
###Markdown
The libsvm format might be new to you. It's not used often, and may not be relevant to your dataset. However, it's used extensively throughout the Spark documentation. In this plain-text format, each line has the form `label index1:value1 index2:value2 ...`, i.e. a numeric label followed by sparse index:value feature pairs. Let's see what the training data looks like:
###Code
# Visualise the training data format.
training.show()
###Output
+-------------------+--------------------+
| label| features|
+-------------------+--------------------+
| -9.490009878824548|(10,[0,1,2,3,4,5,...|
| 0.2577820163584905|(10,[0,1,2,3,4,5,...|
| -4.438869807456516|(10,[0,1,2,3,4,5,...|
|-19.782762789614537|(10,[0,1,2,3,4,5,...|
| -7.966593841555266|(10,[0,1,2,3,4,5,...|
| -7.896274316726144|(10,[0,1,2,3,4,5,...|
| -8.464803554195287|(10,[0,1,2,3,4,5,...|
| 2.1214592666251364|(10,[0,1,2,3,4,5,...|
| 1.0720117616524107|(10,[0,1,2,3,4,5,...|
|-13.772441561702871|(10,[0,1,2,3,4,5,...|
| -5.082010756207233|(10,[0,1,2,3,4,5,...|
| 7.887786536531237|(10,[0,1,2,3,4,5,...|
| 14.323146365332388|(10,[0,1,2,3,4,5,...|
|-20.057482615789212|(10,[0,1,2,3,4,5,...|
|-0.8995693247765151|(10,[0,1,2,3,4,5,...|
| -19.16829262296376|(10,[0,1,2,3,4,5,...|
| 5.601801561245534|(10,[0,1,2,3,4,5,...|
|-3.2256352187273354|(10,[0,1,2,3,4,5,...|
| 1.5299675726687754|(10,[0,1,2,3,4,5,...|
| -0.250102447941961|(10,[0,1,2,3,4,5,...|
+-------------------+--------------------+
only showing top 20 rows
###Markdown
This is the format that Spark needs to run a machine learning algorithm: one column named "label" and the other named "features". The label represents the output/answer (the value we want to predict, for example a house value), while the features represent the inputs. The "label" column then needs to hold the numerical label, either a regression numerical value or a numerical value that maps to a classification grouping. The "features" column holds a vector of all the features that belong to that row. Usually what we end up doing is combining the various feature columns we have into a single 'features' column using the data transformations from the previous lab.
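A hedged aside, not from the original notebook: this particular dataset is already stored in libsvm form, but when you start from separate numeric columns, a `VectorAssembler` is the usual transformation for building the single 'features' column. The DataFrame and column names below are hypothetical, purely for illustration.
###Code
# Sketch only: assumes a hypothetical DataFrame `raw_df` with numeric columns 'bedrooms', 'area', 'age' and a 'label' column.
from pyspark.ml.feature import VectorAssembler
assembler = VectorAssembler(inputCols=['bedrooms', 'area', 'age'], outputCol='features')
# assembled = assembler.transform(raw_df)   # adds a 'features' vector column alongside the originals
# assembled.select('label', 'features').show(5, truncate=False)
###Output
_____no_output_____
###Markdown
Back to the documentation example, creating and fitting the estimator: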
###Code
# These are the default values:
# featuresCol: What is the features column named?
# labelCol: What is the label column named?
# predictionCol: What is the name of the actual prediction?
lr = LinearRegression(featuresCol='features', labelCol='label', predictionCol='prediction')
# Fit/train the model. Fit the model onto the training data.
lrModel = lr.fit(training)
# Print the coefficients and intercept for linear regression
print("Coefficients: {}".format(str(lrModel.coefficients))) # For each feature...
print('\n')
print("Intercept:{}".format(str(lrModel.intercept)))
###Output
Coefficients: [0.0073350710225801715,0.8313757584337543,-0.8095307954684084,2.441191686884721,0.5191713795290003,1.1534591903547016,-0.2989124112808717,-0.5128514186201779,-0.619712827067017,0.6956151804322931]
Intercept:0.14228558260358093
###Markdown
You can use the summary attribute to get even more information.
###Code
# Summarize the model over the training set and print out some metrics.
trainingSummary = lrModel.summary
###Output
_____no_output_____
###Markdown
This has a lot of information, here are a few examples:
###Code
trainingSummary.residuals.show()
# Print Root Mean Squared Error.
print("RMSE: {}".format(trainingSummary.rootMeanSquaredError))
# Print R-Squared.
print("r2: {}".format(trainingSummary.r2))
###Output
+-------------------+
| residuals|
+-------------------+
|-11.011130022096554|
| 0.9236590911176538|
|-4.5957401897776675|
| -20.4201774575836|
|-10.339160314788181|
|-5.9552091439610555|
|-10.726906349283922|
| 2.122807193191233|
| 4.077122222293811|
|-17.316168071241652|
| -4.593044343959059|
| 6.380476690746936|
| 11.320566035059846|
|-20.721971774534094|
| -2.736692773777401|
| -16.66886934252847|
| 8.242186378876315|
|-1.3723486332690233|
|-0.7060332131264666|
|-1.1591135969994064|
+-------------------+
only showing top 20 rows
RMSE: 10.16309157133015
r2: 0.027839179518600154
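###Markdown
A hedged addition, not in the original notebook: the same summary object exposes several other metrics. The attribute names below come from the PySpark linear regression summary API; note that `pValues` is only available when the normal-equation solver is used, so treat this as a sketch to adapt to your Spark version.
###Code
# A few more attributes exposed by the training summary (sketch, not executed here).
print("MAE: {}".format(trainingSummary.meanAbsoluteError))
print("MSE: {}".format(trainingSummary.meanSquaredError))
print("p-values: {}".format(trainingSummary.pValues))
###Output
_____no_output_____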
###Markdown
Train/Test Splits Based on our nine-step process, we've actually missed a fundamental step by following the Spark documentation! We never split our data into a training and testing set. Instead we've trained the model using all of our data, which you know by now is not a good idea. Luckily, Spark DataFrames have a convenient method of splitting the data. Let's see it:
###Code
# Remember, data is stored in the parent directory.
all_data = spark.read.format("libsvm").load("Datasets/sample_linear_regression_data.txt")
# Pass in the split between training/test as a list.
# This is based on your test-designs, but generally 70/30 or 60/40 splits are used.
# Depending on how much data you have and how unbalanced it is.
train_data,test_data = all_data.randomSplit([0.7,0.3])
# Let's check out our training data.
train_data.show()
# Let's check out the count (348).
train_data.describe().show()
# And our test data.
test_data.show()
# Let's check out the count (153, approximately a 70/30 split).
test_data.describe().show()
# Now we only train the train_data.
correct_model = lr.fit(train_data)
# Now we can directly get a .summary object using the evaluate method.
test_results = correct_model.evaluate(test_data)
# And generate some basic evaluation metrics.
test_results.residuals.show()
print("RMSE: {}".format(test_results.rootMeanSquaredError))
###Output
+-------------------+
| residuals|
+-------------------+
|-26.807162722626384|
| -21.59371356869086|
| -21.9464814828004|
|-20.784412270368783|
|-17.815241924516652|
|-16.627443595267554|
| -17.05982262760015|
| -17.94762687581216|
|-15.654922779606231|
|-15.349049359554938|
|-15.695588171296139|
| -16.38920332633152|
|-12.990305637371522|
|-12.854812920968358|
|-14.716372011315832|
|-10.767144737423443|
|-13.030990836368117|
|-11.164342817225434|
| -8.477199122731658|
|-13.573983042656442|
+-------------------+
only showing top 20 rows
RMSE: 10.504373377842208
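###Markdown
A hedged follow-up, not part of the original notebook: once the evaluation metrics look acceptable, predictions for unlabelled rows are produced with the fitted model's `transform` method. The sketch below simply reuses the test features as stand-in unlabelled data.
###Code
# Sketch only: drop the label column to mimic unlabelled data, then predict.
unlabeled_data = test_data.select('features')
predictions = correct_model.transform(unlabeled_data)
# predictions.show()
###Output
_____no_output_____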
|
distracted-driver-detection.ipynb | ###Markdown
Problem Statement Given a dataset of 2D dashboard camera images, an algorithm needs to be developed to classify each driver's behaviour and determine if they are driving attentively, wearing their seatbelt, taking a selfie with their friends in the backseat, etc. This can then be used to automatically detect drivers engaging in distracted behaviours from dashboard cameras. Following are the tasks needed for the development of the algorithm: 1. Download and preprocess the driver images 1. Build and train the model to classify the driver images 1. Test the model and further improve it using different techniques. Data Exploration The provided dataset has driver images, each taken in a car with a driver doing something in the car (texting, eating, talking on the phone, makeup, reaching behind, etc.). This dataset is obtained from Kaggle (State Farm Distracted Driver Detection competition). Following are the file descriptions and URLs from which the data can be obtained: 1. imgs.zip - zipped folder of all (train/test) images 1. sample_submission.csv - a sample submission file in the correct format 1. driver_imgs_list.csv - a list of training images, their subject (driver) id, and class id 1. driver_imgs_list.csv.zip 1. sample_submission.csv.zip The 10 classes to predict are: 1. c0: safe driving 1. c1: texting - right 1. c2: talking on the phone - right 1. c3: texting - left 1. c4: talking on the phone - left 1. c5: operating the radio 1. c6: drinking 1. c7: reaching behind 1. c8: hair and makeup 1. c9: talking to passenger There are 102150 total images. Data Preprocessing Preprocessing of the data is carried out before the model is built and the training process is executed. Following are the steps carried out during preprocessing: 1. Initially the images are divided into training and validation sets. 1. The images are resized to square images, i.e. 64 x 64 (224 x 224 if RAM > 32 GB) pixels. 1. All three channels were used during training as these are color images. 1. The images are normalised by dividing every pixel in every image by 255. 1. To ensure the mean is zero, a value of 0.5 is subtracted. Implementation A standard CNN architecture was initially created and trained. We have created 4 convolutional layers with 4 max pooling layers in between. Filters were increased from 64 to 512 across the convolutional layers. Dropout was also used, along with a flattening layer, before the fully connected layers. Altogether the CNN has 2 fully connected layers. The number of nodes in the last fully connected layer was set to 10, with a softmax activation function. The ReLU activation function was used for all other layers. Xavier initialization was used in each of the layers.
###Code
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
import os
import pandas as pd
import pickle
import numpy as np
import seaborn as sns
from sklearn.datasets import load_files
from keras.utils import np_utils
import matplotlib.pyplot as plt
# Pretty display for notebooks
%matplotlib inline
from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D
from keras.layers import Dropout, Flatten, Dense
from keras.models import Sequential
from keras.utils.vis_utils import plot_model
from keras.callbacks import ModelCheckpoint
from keras.utils import to_categorical
from sklearn.metrics import confusion_matrix
from keras.preprocessing import image
from tqdm import tqdm
import seaborn as sns
from sklearn.metrics import accuracy_score,precision_score,recall_score,f1_score
###Output
_____no_output_____
###Markdown
Defining the train, test and model directories We will create the directories for the train, test and model paths if they are not present
###Code
TEST_DIR = os.path.join(os.getcwd(),"/kaggle/input/state-farm-distracted-driver-detection/imgs","test")
TRAIN_DIR = os.path.join(os.getcwd(),"/kaggle/input/state-farm-distracted-driver-detection/imgs","train")
MODEL_PATH = os.path.join(os.getcwd(),"model","self_trained")
PICKLE_DIR = os.path.join(os.getcwd(),"pickle_files")
if not os.path.exists(TEST_DIR):
print("Testing data does not exists")
if not os.path.exists(TRAIN_DIR):
print("Training data does not exists")
if not os.path.exists(MODEL_PATH):
print("Model path does not exists")
os.makedirs(MODEL_PATH)
print("Model path created")
if not os.path.exists(PICKLE_DIR):
os.makedirs(PICKLE_DIR)
###Output
_____no_output_____
###Markdown
Data Preparation CSV files for saving the path locations of the different files and their classes
###Code
def create_csv(DATA_DIR,filename):
class_names = os.listdir(DATA_DIR)
data = list()
if(os.path.isdir(os.path.join(DATA_DIR,class_names[0]))):
for class_name in class_names:
file_names = os.listdir(os.path.join(DATA_DIR,class_name))
for file in file_names:
data.append({
"Filename":os.path.join(DATA_DIR,class_name,file),
"ClassName":class_name
})
else:
class_name = "test"
file_names = os.listdir(DATA_DIR)
for file in file_names:
data.append(({
"FileName":os.path.join(DATA_DIR,file),
"ClassName":class_name
}))
data = pd.DataFrame(data)
data.to_csv(os.path.join(os.getcwd(),"csv_files",filename),index=False)
CSV_FILES_DIR = os.path.join(os.getcwd(),"csv_files")
if not os.path.exists(CSV_FILES_DIR):
os.makedirs(CSV_FILES_DIR)
create_csv(TRAIN_DIR,"train.csv")
create_csv(TEST_DIR,"test.csv")
data_train = pd.read_csv(os.path.join(os.getcwd(),"csv_files","train.csv"))
data_test = pd.read_csv(os.path.join(os.getcwd(),"csv_files","test.csv"))
data_train.info()
data_train['ClassName'].value_counts()
###Output
_____no_output_____
###Markdown
Data exploration
###Code
nf = data_train['ClassName'].value_counts(sort=False)
labels = data_train['ClassName'].value_counts(sort=False).index.tolist()
y = np.array(nf)
width = 1/1.5
N = len(y)
x = range(N)
fig = plt.figure(figsize=(20,15))
ay = fig.add_subplot(211)
plt.xticks(x, labels, size=15)
plt.yticks(size=15)
ay.bar(x, y, width, color="blue")
plt.title('Bar Chart',size=25)
plt.xlabel('classname',size=15)
plt.ylabel('Count',size=15)
plt.show()
data_test.head()
data_test.shape
###Output
_____no_output_____
###Markdown
Observation* 22424 Train samples* 79726 Test samples* The training dataset is well balanced to a great extent, and hence we need not do any downsampling of the data. Converting into numerical values Data preprocessing
###Code
labels_list = list(set(data_train['ClassName'].values.tolist()))
labels_id = {label_name:id for id,label_name in enumerate(labels_list)}
print(labels_id)
data_train['ClassName'].replace(labels_id,inplace=True)
with open(os.path.join(os.getcwd(),"pickle_files","labels_list.pkl"),"wb") as handle:
pickle.dump(labels_id,handle)
labels = to_categorical(data_train['ClassName'])
print(labels.shape)
###Output
_____no_output_____
###Markdown
Further splitting data into training and test data
###Code
from sklearn.model_selection import train_test_split
xtrain,xtest,ytrain,ytest = train_test_split(data_train.iloc[:,0],labels,test_size = 0.2,random_state=42)
###Output
_____no_output_____
###Markdown
Converting into 64*64 images due to RAM limitations
###Code
def path_to_tensor(img_path):
# loads RGB image as PIL.Image.Image type
img = image.load_img(img_path, target_size=(64, 64))
    # convert PIL.Image.Image type to 3D tensor with shape (64, 64, 3)
x = image.img_to_array(img)
    # convert 3D tensor to 4D tensor with shape (1, 64, 64, 3) and return 4D tensor
return np.expand_dims(x, axis=0)
def paths_to_tensor(img_paths):
list_of_tensors = [path_to_tensor(img_path) for img_path in tqdm(img_paths)]
return np.vstack(list_of_tensors)
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
# pre-process the data for Keras
train_tensors = paths_to_tensor(xtrain).astype('float32')/255 - 0.5
valid_tensors = paths_to_tensor(xtest).astype('float32')/255 - 0.5
###Output
_____no_output_____
###Markdown
Defining model
###Code
model = Sequential()
# 64 conv2d filters with relu
model.add(Conv2D(filters=64, kernel_size=2, padding='same', activation='relu', input_shape=(64,64,3), kernel_initializer='glorot_normal'))
model.add(MaxPooling2D(pool_size=2)) #Maxpool
model.add(Conv2D(filters=128, kernel_size=2, padding='same', activation='relu', kernel_initializer='glorot_normal'))
model.add(MaxPooling2D(pool_size=2)) #Maxpool
model.add(Conv2D(filters=256, kernel_size=2, padding='same', activation='relu', kernel_initializer='glorot_normal'))
model.add(MaxPooling2D(pool_size=2)) #Maxpool
model.add(Conv2D(filters=512, kernel_size=2, padding='same', activation='relu', kernel_initializer='glorot_normal'))
model.add(MaxPooling2D(pool_size=2)) #Maxpool
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(500, activation='relu', kernel_initializer='glorot_normal'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax', kernel_initializer='glorot_normal'))
model.summary()
plot_model(model,to_file=os.path.join(MODEL_PATH,"model_distracted_driver.png"),show_shapes=True,show_layer_names=True)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
filepath = os.path.join(MODEL_PATH,"distracted-{epoch:02d}-{val_accuracy:.2f}.hdf5")
checkpoint = ModelCheckpoint(filepath, monitor='val_accuracy', verbose=1, save_best_only=True, mode='max',period=1)
callbacks_list = [checkpoint]
epochs = 20
model_history = model.fit(train_tensors,ytrain,validation_data = (valid_tensors, ytest),epochs=epochs, batch_size=40, shuffle=True,callbacks=callbacks_list)
###Output
_____no_output_____
###Markdown
Model performance graph
###Code
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(12, 12))
ax1.plot(model_history.history['loss'], color='b', label="Training loss")
ax1.plot(model_history.history['val_loss'], color='r', label="validation loss")
ax1.set_xticks(np.arange(1, 25, 1))
ax1.set_yticks(np.arange(0, 1, 0.1))
ax2.plot(model_history.history['accuracy'], color='b', label="Training accuracy")
ax2.plot(model_history.history['val_accuracy'], color='r',label="Validation accuracy")
ax2.set_xticks(np.arange(1, 25, 1))
legend = plt.legend(loc='best', shadow=True)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Model Analysis
###Code
def print_confusion_matrix(confusion_matrix, class_names, figsize = (10,7), fontsize=14):
df_cm = pd.DataFrame(
confusion_matrix, index=class_names, columns=class_names,
)
fig = plt.figure(figsize=figsize)
try:
heatmap = sns.heatmap(df_cm, annot=True, fmt="d")
except ValueError:
raise ValueError("Confusion matrix values must be integers.")
heatmap.yaxis.set_ticklabels(heatmap.yaxis.get_ticklabels(), rotation=0, ha='right', fontsize=fontsize)
heatmap.xaxis.set_ticklabels(heatmap.xaxis.get_ticklabels(), rotation=45, ha='right', fontsize=fontsize)
plt.ylabel('True label')
plt.xlabel('Predicted label')
fig.savefig(os.path.join(MODEL_PATH,"confusion_matrix.png"))
return fig
def print_heatmap(n_labels, n_predictions, class_names):
labels = n_labels #sess.run(tf.argmax(n_labels, 1))
predictions = n_predictions #sess.run(tf.argmax(n_predictions, 1))
# confusion_matrix = sess.run(tf.contrib.metrics.confusion_matrix(labels, predictions))
matrix = confusion_matrix(labels.argmax(axis=1),predictions.argmax(axis=1))
row_sum = np.sum(matrix, axis = 1)
w, h = matrix.shape
c_m = np.zeros((w, h))
for i in range(h):
c_m[i] = matrix[i] * 100 / row_sum[i]
c = c_m.astype(dtype = np.uint8)
heatmap = print_confusion_matrix(c, class_names, figsize=(18,10), fontsize=20)
class_names = list()
for name,idx in labels_id.items():
class_names.append(name)
# print(class_names)
ypred = model.predict(valid_tensors)
print_heatmap(ytest,ypred,class_names)
#Precision Recall F1 Score
ypred_class = np.argmax(ypred,axis=1)
# print(ypred_class[:10])
ytest = np.argmax(ytest,axis=1)
accuracy = accuracy_score(ytest,ypred_class)
print('Accuracy: %f' % accuracy)
# precision tp / (tp + fp)
precision = precision_score(ytest, ypred_class,average='weighted')
print('Precision: %f' % precision)
# recall: tp / (tp + fn)
recall = recall_score(ytest,ypred_class,average='weighted')
print('Recall: %f' % recall)
# f1: 2 tp / (2 tp + fp + fn)
f1 = f1_score(ytest,ypred_class,average='weighted')
print('F1 score: %f' % f1)
###Output
_____no_output_____
###Markdown
Let us check the model performance on never-before-seen images
###Code
from keras.models import load_model
from keras.utils import np_utils
import shutil
BASE_MODEL_PATH = os.path.join(os.getcwd(),"model")
TEST_DIR = os.path.join(os.getcwd(),"csv_files","test.csv")
PREDICT_DIR = os.path.join(os.getcwd(),"pred_dir")
PICKLE_DIR = os.path.join(os.getcwd(),"pickle_files")
JSON_DIR = os.path.join(os.getcwd(),"json_files")
if not os.path.exists(PREDICT_DIR):
os.makedirs(PREDICT_DIR)
else:
shutil.rmtree(PREDICT_DIR)
os.makedirs(PREDICT_DIR)
if not os.path.exists(JSON_DIR):
os.makedirs(JSON_DIR)
BEST_MODEL = os.path.join(BASE_MODEL_PATH,"self_trained","distracted-07-0.99.hdf5") #loading checkpoint with best accuracy and min epochs
model = load_model(BEST_MODEL)
model.summary()
data_test = pd.read_csv(os.path.join(TEST_DIR))
#testing on only the first 10,000 images, as loading all test images requires >8 GB of RAM
data_test = data_test[:10000]
data_test.info()
with open(os.path.join(PICKLE_DIR,"labels_list.pkl"),"rb") as handle:
labels_id = pickle.load(handle)
print(labels_id)
ImageFile.LOAD_TRUNCATED_IMAGES = True
test_tensors = paths_to_tensor(data_test.iloc[:,0]).astype('float32')/255 - 0.5
ypred_test = model.predict(test_tensors,verbose=1)
ypred_class = np.argmax(ypred_test,axis=1)
id_labels = dict()
for class_name,idx in labels_id.items():
id_labels[idx] = class_name
print(id_labels)
for i in range(data_test.shape[0]):
data_test.iloc[i,1] = id_labels[ypred_class[i]]
#create a human-readable mapping from class codes (c0-c9) to descriptive class names
import json
class_name = dict()
class_name["c0"] = "SAFE_DRIVING"
class_name["c1"] = "TEXTING_RIGHT"
class_name["c2"] = "TALKING_PHONE_RIGHT"
class_name["c3"] = "TEXTING_LEFT"
class_name["c4"] = "TALKING_PHONE_LEFT"
class_name["c5"] = "OPERATING_RADIO"
class_name["c6"] = "DRINKING"
class_name["c7"] = "REACHING_BEHIND"
class_name["c8"] = "HAIR_AND_MAKEUP"
class_name["c9"] = "TALKING_TO_PASSENGER"
with open(os.path.join(JSON_DIR,'class_name_map.json'),'w') as secret_input:
json.dump(class_name,secret_input,indent=4,sort_keys=True)
# create the prediction results for image classification and move the predicted images to another folder,
# renaming each file to include the class name predicted by the model
with open(os.path.join(JSON_DIR,'class_name_map.json')) as secret_input:
info = json.load(secret_input)
for i in range(data_test.shape[0]):
new_name = data_test.iloc[i,0].split("/")[-1].split(".")[0]+"_"+info[data_test.iloc[i,1]]+".jpg"
shutil.copy(data_test.iloc[i,0],os.path.join(PREDICT_DIR,new_name))
#saving the model predicted results into a csv file
data_test.to_csv(os.path.join(os.getcwd(),"csv_files","short_test_result.csv"),index=False)
###Output
_____no_output_____ |
fm-pca-xgboost/xgboost/BikeSharingRegression/biketrain_data_preparation_rev2.ipynb | ###Markdown
Kaggle Bike Sharing Demand Dataset (new feature: hour added). To download the dataset, sign in and download from this link: https://www.kaggle.com/c/bike-sharing-demand/data. Input Features: ['season', 'holiday', 'workingday', 'weather', 'temp', 'atemp', 'humidity', 'windspeed', 'year', 'month', 'day', 'dayofweek','hour']. Target Feature: ['count']. Objective: You are provided hourly rental data spanning two years. For this competition, the training set comprises the first 19 days of each month, while the test set is the 20th to the end of the month. You must predict the total count of bikes rented during each hour covered by the test set, using only information available prior to the rental period (Ref: Kaggle.com)
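As a quick sanity check of the day-of-month split described above, a short sketch like the following (assuming `train.csv` and `test.csv` have been downloaded to the working directory, as in the cells below) can confirm that training rows fall on days 1-19 and test rows on day 20 onward; the `*_check` variable names are only for illustration:

```python
import pandas as pd

# Assumes the Kaggle files train.csv / test.csv are in the working directory.
train_check = pd.read_csv('train.csv', parse_dates=['datetime'])
test_check = pd.read_csv('test.csv', parse_dates=['datetime'])

# Training rows should come from days 1-19 of each month, test rows from day 20 onward.
print('Max day in train:', train_check['datetime'].dt.day.max())   # expect <= 19
print('Min day in test :', test_check['datetime'].dt.day.min())    # expect >= 20
```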
###Code
# imports needed by this notebook (pandas, numpy, and matplotlib are used below but were not imported)
import numpy as np, pandas as pd, matplotlib.pyplot as plt
columns = ['count', 'season', 'holiday', 'workingday', 'weather', 'temp',
           'atemp', 'humidity', 'windspeed', 'year', 'month', 'day', 'dayofweek','hour']
df = pd.read_csv('train.csv', parse_dates=['datetime'])
df_test = pd.read_csv('test.csv', parse_dates=['datetime'])
df.head()
df_test.head()
# We need to convert datetime to numeric for training.
# Let's extract key features into separate numeric columns
def add_features(df):
df['year'] = df['datetime'].dt.year
df['month'] = df['datetime'].dt.month
df['day'] = df['datetime'].dt.day
df['dayofweek'] = df['datetime'].dt.dayofweek
df['hour'] = df['datetime'].dt.hour
add_features(df)
add_features(df_test)
df.dtypes
# Correlation will indicate how strongly features are related to the output
df.corr()['count']
group_hour = df.groupby(['hour'])
average_by_hour = group_hour['count'].mean()
plt.plot(average_by_hour.index,average_by_hour)
plt.xlabel('hour')
plt.ylabel('Count')
plt.xticks(np.arange(24))
plt.grid(True)
plt.title('Rental Count Average by hour')
group_year_hour = df.groupby(['year','hour'])
average_year_hour = group_year_hour['count'].mean()
for year in average_year_hour.index.levels[0]:
#print (year)
#print(average_year_month[year])
plt.plot(average_year_hour[year].index,average_year_hour[year],label=year)
plt.legend()
plt.xlabel('hour')
plt.ylabel('Count')
plt.xticks(np.arange(24))
plt.grid(True)
plt.title('Rental Count Average by Year,Hour')
group_workingday_hour = df.groupby(['workingday','hour'])
average_workingday_hour = group_workingday_hour['count'].mean()
for workingday in average_workingday_hour.index.levels[0]:
#print (year)
#print(average_year_month[year])
plt.plot(average_workingday_hour[workingday].index,average_workingday_hour[workingday],label=workingday)
plt.legend()
plt.xlabel('hour')
plt.ylabel('Count')
plt.xticks(np.arange(24))
plt.grid(True)
plt.title('Rental Count Average by Working Day,Hour')
df.dtypes
# Save all data
df.to_csv('bike_all.csv',index=False,
columns=columns)
###Output
_____no_output_____
###Markdown
Training and Validation Set. Target variable as the first column followed by input features. Training and validation files do not have a column header.
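As a hedged sketch of how to verify this layout requirement (it assumes `bike_train.csv` has already been written by the cells below), the file can be read back without a header and the first column inspected:

```python
import pandas as pd

# Assumes bike_train.csv has already been written by the cells below (no header row).
check = pd.read_csv('bike_train.csv', header=None)
print(check.shape)
print(check.iloc[:3, 0])  # the first column should hold the target variable 'count'
```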
###Code
# Training = 70% of the data
# Validation = 30% of the data
# Randomize the datset
np.random.seed(5)
l = list(df.index)
np.random.shuffle(l)
df = df.iloc[l]
rows = df.shape[0]
train = int(.7 * rows)
test = int(.3 * rows)
rows, train, test
columns
# Write Training Set
df[:train].to_csv('bike_train.csv'
,index=False,header=False
,columns=columns)
# Write Validation Set
df[train:].to_csv('bike_validation.csv'
,index=False,header=False
,columns=columns)
# Test Data has only input features
df_test.to_csv('bike_test.csv',index=False)
','.join(columns)
# Write Column List
with open('bike_train_column_list.txt','w') as f:
f.write(','.join(columns))
###Output
_____no_output_____ |
code/16S well-to-well contamination analysis.ipynb | ###Markdown
Set up notebook environment NOTE: Use qiime2-2021.11 kernel
###Code
import numpy as np
import pandas as pd
import seaborn as sns
import scipy
from scipy import stats
import matplotlib.pyplot as plt
import re
%matplotlib inline
from qiime2.plugins import feature_table
from qiime2 import Artifact
from qiime2 import Metadata
import biom
from biom import load_table
from qiime2.plugins import diversity
from scipy.stats import ttest_ind
###Output
_____no_output_____
###Markdown
Assign taxonomy to reads using the synthetic plasmid taxonomy file. Unzip demux QZAs to obtain R1 files for vsearch. NOTE: After unzipping each file, the folder was renamed to match the file name.
###Code
cd /projects/dna_extraction_12201/round_02_six_kit_comparison/data/16S/03_demux/
unzip dna_og_16S_plates1_6_8redos_demux.qza
unzip dna_og_16S_plates1_6_excluding_8redos_demux.qza
unzip dna_og_16S_plates7_12_demux.qza
unzip dna_rerun_16S_hbm_seqs_demux.qza
unzip dna_rerun_16S_lbm_seqs_demux.qza
cd /projects/dna_extraction_12201/round_02_six_kit_comparison/data/16S/03_demux/
mv dna_og_16S_plates1_6_8redos_demux/data/*R1* /projects/dna_extraction_12201/round_02_six_kit_comparison/data/16S/14_well_to_well_contamination/01_demux_R1/
mv dna_og_16S_plates1_6_excluding_8redos_demux/data/*R1* /projects/dna_extraction_12201/round_02_six_kit_comparison/data/16S/14_well_to_well_contamination/01_demux_R1/
mv dna_og_16S_plates7_12_demux/data/*R1* /projects/dna_extraction_12201/round_02_six_kit_comparison/data/16S/14_well_to_well_contamination/01_demux_R1/
mv dna_rerun_16S_hbm_seqs_demux/data/*R1* /projects/dna_extraction_12201/round_02_six_kit_comparison/data/16S/14_well_to_well_contamination/01_demux_R1/
mv dna_rerun_16S_lbm_seqs_demux/data/*R1* /projects/dna_extraction_12201/round_02_six_kit_comparison/data/16S/14_well_to_well_contamination/01_demux_R1/
#!/bin/bash
#PBS -V
#PBS -l nodes=1:ppn=16
#PBS -l walltime=12:00:00
#PBS -l mem=128gb
#PBS -M [email protected]
#PBS -m abe
source activate qiime2-2021.11
# Import raw, demuxed sequences
qiime tools import \
--type 'SampleData[SequencesWithQuality]' \
--input-path /projects/dna_extraction_12201/round_02_six_kit_comparison/data/16S/14_well_to_well_contamination/01_demux_R1/ \
--input-format CasavaOneEightSingleLanePerSampleDirFmt \
--output-path /projects/dna_extraction_12201/round_02_six_kit_comparison/data/16S/14_well_to_well_contamination/02_vsearch/dna_all_16s_seqs_with_quality.qza
# Remove adapters (most basic QC)
qiime cutadapt trim-single \
--i-demultiplexed-sequences /projects/dna_extraction_12201/round_02_six_kit_comparison/data/16S/14_well_to_well_contamination/02_vsearch/dna_all_16s_seqs_with_quality.qza \
--o-trimmed-sequences /projects/dna_extraction_12201/round_02_six_kit_comparison/data/16S/14_well_to_well_contamination/02_vsearch/dna_all_16s_seqs_with_quality_trimmed.qza
# Create a table
qiime vsearch dereplicate-sequences \
--i-sequences /projects/dna_extraction_12201/round_02_six_kit_comparison/data/16S/14_well_to_well_contamination/02_vsearch/dna_all_16s_seqs_with_quality_trimmed.qza \
--o-dereplicated-table /projects/dna_extraction_12201/round_02_six_kit_comparison/data/16S/14_well_to_well_contamination/02_vsearch/dna_all_16s_vsearch_biom.qza \
--o-dereplicated-sequences /projects/dna_extraction_12201/round_02_six_kit_comparison/data/16S/14_well_to_well_contamination/02_vsearch/dna_all_16s_vsearch_seqs.qza
qiime feature-table summarize \
--i-table /projects/dna_extraction_12201/round_02_six_kit_comparison/data/16S/14_well_to_well_contamination/02_vsearch/dna_all_16s_vsearch_biom.qza \
--o-visualization /projects/dna_extraction_12201/round_02_six_kit_comparison/data/16S/14_well_to_well_contamination/02_vsearch/dna_all_16s_vsearch_biom.qzv
# There are 1920 samples and 5446117 features
###Output
_____no_output_____
###Markdown
Assign taxonomy using synthetic plasmid taxonomy file
###Code
#!/bin/bash
#PBS -V
#PBS -l nodes=1:ppn=16
#PBS -l walltime=6:00:00
#PBS -l mem=128gb
#PBS -M [email protected]
#PBS -m abe
source activate qiime2-2021.11
# Assign taxonomy
qiime feature-classifier classify-consensus-vsearch \
--i-query /projects/dna_extraction_12201/round_02_six_kit_comparison/data/16S/14_well_to_well_contamination/02_vsearch/dna_all_16s_vsearch_seqs.qza \
--i-reference-reads /projects/dna_extraction_12201/round_03_MagMAX_comparison/16S/snyth_16Splas_seqs.qza \
--i-reference-taxonomy /projects/dna_extraction_12201/round_03_MagMAX_comparison/16S/synth_16Splas_taxonomy.qza \
--o-classification /projects/dna_extraction_12201/round_02_six_kit_comparison/data/16S/14_well_to_well_contamination/03_taxonomy/dna_all_16S_deblur_seqs_taxonomy_synthetic_plasmids.qza
# Collapse taxonomy to level 1
qiime taxa collapse \
--i-table /projects/dna_extraction_12201/round_02_six_kit_comparison/data/16S/14_well_to_well_contamination/02_vsearch/dna_all_16s_vsearch_biom.qza \
--i-taxonomy /projects/dna_extraction_12201/round_02_six_kit_comparison/data/16S/14_well_to_well_contamination/03_taxonomy/dna_all_16S_deblur_seqs_taxonomy_synthetic_plasmids.qza \
--p-level 1 \
--o-collapsed-table /projects/dna_extraction_12201/round_02_six_kit_comparison/data/16S/14_well_to_well_contamination/03_taxonomy/dna_all_16S_deblur_biom_taxa_collapse_synthetic_plasmids.qza
qiime feature-table summarize \
--i-table /projects/dna_extraction_12201/round_02_six_kit_comparison/data/16S/14_well_to_well_contamination/03_taxonomy/dna_all_16S_deblur_biom_taxa_collapse_synthetic_plasmids.qza \
--o-visualization /projects/dna_extraction_12201/round_02_six_kit_comparison/data/16S/14_well_to_well_contamination/03_taxonomy/dna_all_16S_deblur_biom_taxa_collapse_synthetic_plasmids.qzv
###Output
_____no_output_____
###Markdown
Import synthetic read counts into pandas. NOTE: Download the table, unzip, and rename the file for use with biom-format.
###Code
biom convert \
-i /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/16S/14_w2w/dna_all_16S_deblur_biom_taxa_collapse_synthetic_plasmids.biom \
-o /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/16S/14_w2w/dna_all_16S_deblur_biom_taxa_collapse_synthetic_plasmids.tsv \
--to-tsv
biom_collapsed = pd.read_csv('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/16S/14_w2w/dna_all_16S_deblur_biom_taxa_collapse_synthetic_plasmids.tsv',
sep = '\t',
index_col = 0,
header = 1)
biom_collapsed.tail()
md = pd.read_csv('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/sample_metadata/12201_metadata.txt',
sep='\t',
index_col=0)
# Sum the number of reads per synthetic plasmid
plasmid_sum = biom_collapsed
plasmid_sum = plasmid_sum.apply(pd.to_numeric)
plasmid_sum['synth_sums_across_samples'] = plasmid_sum.sum(axis=1)
# We expect that all plasmids should be present at some abundance - check for this.
## Make table of read counts per plasmid
plasmid_sum.synth_sums_across_samples
# Copy collapsed BIOM table and convert values to numeric
biom_collapsed_with_sums = biom_collapsed
biom_collapsed_with_sums = biom_collapsed_with_sums.apply(pd.to_numeric)
# Add a row with columns sums (i.e., total synthetic plasmid reads per sample)
biom_collapsed_with_sums.loc['synth_sum'] = (biom_collapsed_with_sums.sum(axis=0) - biom_collapsed_with_sums.loc['Unassigned'])
biom_collapsed_with_sums.loc['synth_perc'] = (biom_collapsed_with_sums.loc['synth_sum'] / (biom_collapsed_with_sums.loc['Unassigned'] + biom_collapsed_with_sums.loc['synth_sum']))
biom_collapsed_with_sums.tail()
biom_collapsed_with_sums_transposed = biom_collapsed_with_sums.T
biom_collapsed_with_sums_transposed.head()
md_with_biom_and_sums = pd.merge(biom_collapsed_with_sums_transposed, md, left_index=True, right_index=True)
md_with_biom_and_sums.head()
# Export metadata with BIOM table and synthetic plasmid sums
md_with_biom_and_sums.to_csv('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/16S/14_w2w/synthetic_plasmid_results.csv', index = 1)
# Double-check levels for 'biomass_plate'
md_with_biom_and_sums.biomass_plate.unique()
# We expect the synthetic plasmids to be more abundant in the high biomass plates than in the low biomass and COVID
# plates (as expected, since we only spiked them into the high biomass plate) - check for this
plot_plasmids_across_plates = sns.boxplot(x = 'biomass_plate', y = 'synth_sum', data = md_with_biom_and_sums)
plot_plasmids_across_plates.set_yscale('log')
###Output
_____no_output_____
###Markdown
NOTE: At this point, the summary results file generated above was manually curated to include the following columns: row, column, plasmid_reads, plasmid_reads_log10, plasmid_reads_percent, extraction_protocol, well_type (sink vs. source)
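The curation was done by hand, but a rough sketch of how some of those columns could be derived programmatically is shown below. The relative file path and the `well` column name are assumptions for illustration only; `synth_sum` and `synth_perc` are the columns exported above.

```python
import numpy as np
import pandas as pd

# Rough sketch only; the path and the 'well' column name are assumptions.
results = pd.read_csv('synthetic_plasmid_results.csv', index_col=0)
# log10 of total synthetic plasmid reads (synth_sum was computed above); zeros become NaN
results['plasmid_reads_log10'] = np.log10(results['synth_sum'].replace(0, np.nan))
# percent of reads assigned to synthetic plasmids
results['plasmid_reads_percent'] = results['synth_perc'] * 100
# If a well identifier such as "A3" were available, row/column could be split like this:
# results['row'] = results['well'].str[0]
# results['column'] = results['well'].str[1:].astype(int)
```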
###Code
# Round 1 & 2 synthetic plasmid well locations:
## For Round 1 PowerSoil - B6 is replaced with B7 and there is no F7
plasmid_wells = ["A3", "A11", "B6", "C8", "D4", "D10", "E1", "F7", "G2", "H5", "H9"]
###Output
_____no_output_____ |
src/python/tensorflow_cloud/tuner/tests/examples/cloud_fit.ipynb | ###Markdown
Overview

Following is a quick introduction to `cloud_fit`. `cloud_fit` enables training on Google Cloud AI Platform in the same manner as `model.fit()`.

In this notebook, we will start by installing the required libraries, then proceed with two samples showing how to use `numpy.array` and `tf.data.Dataset` with `cloud_fit`.

What are the components of `cloud_fit()`? `cloud_fit` has two main components:

**client.py:** serializes the provided data and model along with typical `model.fit()` parameters and triggers an AI Platform training job.

```python
def cloud_fit(model, remote_dir: Text, region: Text = None, project_id: Text = None, image_uri: Text = None, distribution_strategy: Text = DEFAULT_DISTRIBUTION_STRATEGY, job_spec: Dict[str, Any] = None, job_id: Text = None, **fit_kwargs) -> Text:
    """Facilitates remote execution of in memory Models and Datasets on AI Platform.

    Args:
        model: A compiled Keras Model.
        remote_dir: Google Cloud Storage path for temporary assets and AI Platform training output. Will overwrite value in job_spec.
        region: Target region for running the AI Platform Training job.
        project_id: Project id where the training should be deployed to.
        image_uri: base image used to use for AI Platform Training
        distribution_strategy: Specifies the distribution strategy for remote execution when a jobspec is provided. Accepted values are strategy names as specified by 'tf.distribute..__name__'.
        job_spec: AI Platform training job_spec, will take precedence over all other provided values except for remote_dir. If none is provided a default cluster spec and distribution strategy will be used.
        job_id: A name to use for the AI Platform Training job (mixed-case letters, numbers, and underscores only, starting with a letter).
        **fit_kwargs: Args to pass to model.fit() including training and eval data. Only keyword arguments are supported. Callback functions will be serialized as is.

    Returns:
        AI Platform job ID

    Raises:
        RuntimeError: If executing in graph mode, eager execution is required for cloud_fit.
        NotImplementedError: Tensorflow v1.x is not supported.
    """
```

**remote.py:** A job that takes a remote_dir as a parameter, loads the model and data from that location, and executes the training with the stored parameters.

```python
def run(remote_dir: Text, distribution_strategy_text: Text):
    """deserializes Model and Dataset and runs them.

    Args:
        remote_dir: Temporary cloud storage folder that contains model and Dataset graph. This folder is also used for job output.
        distribution_strategy_text: Specifies the distribution strategy for remote execution when a jobspec is provided. Accepted values are strategy names as specified by 'tf.distribute..__name__'.
    """
```

Costs

This tutorial uses billable components of Google Cloud:

* AI Platform Training
* Cloud Storage

Learn about [AI Platform Training pricing](https://cloud.google.com/ai-platform/training/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage.

Set up your Google Cloud project

**The following steps are required, regardless of your notebook environment.**

1. [Select or create a Google Cloud project.](https://console.cloud.google.com/cloud-resource-manager) When you first create an account, you get a $300 free credit towards your compute/storage costs.
2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)
3. [Enable the AI Platform APIs](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com)
4. If running locally on your own machine, you will need to install the [Google Cloud SDK](https://cloud.google.com/sdk).

**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.

Authenticate your Google Cloud account

**If you are using [AI Platform Notebooks](https://cloud.google.com/ai-platform/notebooks/docs/)**, your environment is already authenticated. Skip these steps.
###Code
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your Google Cloud account. This provides access
# to your Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
if 'google.colab' in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this tutorial in a notebook locally, replace the string
# below with the path to your service account key and run this cell to
# authenticate your Google Cloud account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json
# Log in to your account on Google Cloud
! gcloud auth application-default login --quiet
! gcloud auth login --quiet
###Output
_____no_output_____
###Markdown
Clone and build tensorflow_cloud. To use the latest version of tensorflow_cloud, we will clone and build the repo. The resulting whl file is used both on the client side and in building the docker image for remote execution.
###Code
!git clone https://github.com/tensorflow/cloud.git
!cd cloud/src/python && python3 setup.py -q bdist_wheel
!pip install -U cloud/src/python/dist/tensorflow_cloud-*.whl --quiet
###Output
_____no_output_____
###Markdown
Restart the Kernel. We will automatically restart your kernel so the notebook has access to the packages you installed.
###Code
# Restart the kernel after pip installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Import libraries and define constants
###Code
import os
import uuid
import numpy as np
import tensorflow as tf
from tensorflow_cloud.tuner import cloud_fit_client as client
# Setup and imports
REMOTE_DIR = '[gcs-bucket-for-temporary-files]' #@param {type:"string"}
REGION = 'us-central1' #@param {type:"string"}
PROJECT_ID = '[your-project-id]' #@param {type:"string"}
DOCKER_IMAGE_NAME = '[name-for-docker-image]' #@param {type:"string"}
! gcloud config set project $PROJECT_ID
IMAGE_URI = f'gcr.io/{PROJECT_ID}/{DOCKER_IMAGE_NAME}:latest' #@param {type:"string"}
###Output
_____no_output_____
###Markdown
Create a docker file with tensorflow_cloud. In the next step we create a base docker file with the latest wheel file to use for remote training. You may use any base image; however, DLVM base images come pre-installed with most needed packages.
###Code
%%file Dockerfile
# Using DLVM base image
FROM gcr.io/deeplearning-platform-release/tf2-cpu
WORKDIR /root
# Path configuration
ENV PATH $PATH:/root/tools/google-cloud-sdk/bin
# Make sure gsutil will use the default service account
RUN echo '[GoogleCompute]\nservice_account = default' > /etc/boto.cfg
# Copy and install tensorflow_cloud wheel file
ADD cloud/src/python/dist/tensorflow_cloud-*.whl /tmp/
RUN pip3 install --upgrade /tmp/tensorflow_cloud-*.whl --quiet
# Sets up the entry point to invoke cloud_fit.
ENTRYPOINT ["python3","-m","tensorflow_cloud.tuner.cloud_fit_remote"]
!docker build -t {IMAGE_URI} -f Dockerfile . -q && docker push {IMAGE_URI}
###Output
_____no_output_____
###Markdown
Tutorial 1 - Functional model. In this sample we will demonstrate using numpy.array as input data by creating a basic model and submitting it for remote training. Define model building function
###Code
"""Simple model to compute y = wx + 1, with w trainable."""
inp = tf.keras.layers.Input(shape=(1,), dtype=tf.float32)
times_w = tf.keras.layers.Dense(
units=1,
kernel_initializer=tf.keras.initializers.Constant([[0.5]]),
kernel_regularizer=tf.keras.regularizers.l2(0.01),
use_bias=False)
plus_1 = tf.keras.layers.Dense(
units=1,
kernel_initializer=tf.keras.initializers.Constant([[1.0]]),
bias_initializer=tf.keras.initializers.Constant([1.0]),
trainable=False)
outp = plus_1(times_w(inp))
simple_model = tf.keras.Model(inp, outp)
simple_model.compile(tf.keras.optimizers.SGD(0.002),
"mean_squared_error", run_eagerly=True)
###Output
_____no_output_____
###Markdown
Prepare Data
###Code
# Creating sample data
x = [[9.], [10.], [11.]] * 10
y = [[xi[0]/2. + 6] for xi in x]
###Output
_____no_output_____
###Markdown
Run the model locally for validation
###Code
# Verify the model by training locally for one step.
simple_model.fit(np.array(x), np.array(y), batch_size=len(x), epochs=1)
###Output
_____no_output_____
###Markdown
Submit model and dataset for remote training
###Code
# Create a unique remote sub folder path for assets and model training output.
SIMPLE_REMOTE_DIR = os.path.join(REMOTE_DIR, str(uuid.uuid4()))
print('your remote folder is %s' % (SIMPLE_REMOTE_DIR))
# Using default configuration with two workers dividing the dataset between the two.
simple_model_job_id = client.cloud_fit(model=simple_model, remote_dir = SIMPLE_REMOTE_DIR, region =REGION , image_uri=IMAGE_URI, x=np.array(x), y=np.array(y), epochs=100, steps_per_epoch=len(x)/2,verbose=2)
!gcloud ai-platform jobs describe projects/{PROJECT_ID}/jobs/{simple_model_job_id}
###Output
_____no_output_____
###Markdown
Retrieve the trained model. Once the training is complete, you can access the trained model at `remote_folder/output`
###Code
# Load the trained model from gcs bucket
trained_simple_model = tf.keras.models.load_model(os.path.join(SIMPLE_REMOTE_DIR, 'output'))
# Test that the saved model loads and works properly
trained_simple_model.evaluate(x,y)
###Output
_____no_output_____
###Markdown
Tutorial 2 - Sequential Models and Datasets. In this sample we will demonstrate using datasets by creating a basic model and submitting it for remote training. Define model building function
###Code
# create a model
fashion_mnist_model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
fashion_mnist_model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Prepare Data
###Code
train, test = tf.keras.datasets.fashion_mnist.load_data()
images, labels = train
images = images/255
dataset = tf.data.Dataset.from_tensor_slices((images, labels))
dataset = dataset.batch(32)
###Output
_____no_output_____
###Markdown
Run the model locally for validation
###Code
# Verify the model by training locally for one step. This is not necessary prior to cloud.fit() however it is recommended.
fashion_mnist_model.fit(dataset, epochs=1)
###Output
_____no_output_____
###Markdown
Submit model and dataset for remote training
###Code
# Create a unique remote sub folder path for assets and model training output.
FASHION_REMOTE_DIR = os.path.join(REMOTE_DIR, str(uuid.uuid4()))
print('your remote folder is %s' % (FASHION_REMOTE_DIR))
fashion_mnist_model_job_id = client.cloud_fit(model=fashion_mnist_model, remote_dir = FASHION_REMOTE_DIR,region =REGION , image_uri=IMAGE_URI, x=dataset,epochs=10, steps_per_epoch=15,verbose=2)
!gcloud ai-platform jobs describe projects/{PROJECT_ID}/jobs/{fashion_mnist_model_job_id}
###Output
_____no_output_____
###Markdown
Retrieve the trained model. Once the training is complete, you can access the trained model at remote_folder/output
###Code
# Load the trained model from gcs bucket
trained_fashion_mnist_model = tf.keras.models.load_model(os.path.join(FASHION_REMOTE_DIR, 'output'))
# Test that the saved model loads and works properly
test_images, test_labels = test
test_images = test_images/255
test_dataset = tf.data.Dataset.from_tensor_slices((test_images, test_labels))
test_dataset = test_dataset.batch(32)
trained_fashion_mnist_model.evaluate(test_dataset)
###Output
_____no_output_____ |
fb-live-selling-data-analysis.ipynb | ###Markdown
**Introduction** The 'Facebook Live Sellers in Thailand' dataset is curated in the UCI Machine Learning Repository. The data contains 7050 observations and twelve attributes. The data is about the live selling feature on the Facebook platform. Each record contains the time at which the live selling status was posted to Facebook along with its engagement counts. The engagements are regular Facebook interactions such as shares and emotion reactions. Details and academic publications relating to the data are available from the source https://archive.ics.uci.edu/ml/datasets/Facebook+Live+Sellers+in+Thailand.
###Code
%matplotlib inline
import os
import warnings
warnings.simplefilter(action='ignore')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn as sl
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
###Output
/kaggle/input/facebook-live-sellers-in-thailand-uci-ml-repo/Live.csv
###Markdown
Data Analysis
###Code
data = pd.read_csv("/kaggle/input/facebook-live-sellers-in-thailand-uci-ml-repo/Live.csv")
data.head(2)
###Output
_____no_output_____
###Markdown
The columns Column1, Column2, Column3, Column4 are not part of the original data. These columns might have appeared in the data due to format conversion. We will exclude these columns from the analysis.
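The cell below drops them by position (the last four columns). A slightly more defensive variant, shown here only as a sketch and not what the original analysis did, drops them by name so the code does not depend on their position:

```python
# Sketch: drop the unnamed artifact columns by name instead of by position.
artifact_cols = [c for c in data.columns if c.startswith('Column')]
data = data.drop(columns=artifact_cols)
```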
###Code
data = data[data.columns[:-4]]
data.head()
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 7050 entries, 0 to 7049
Data columns (total 12 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 status_id 7050 non-null object
1 status_type 7050 non-null object
2 status_published 7050 non-null object
3 num_reactions 7050 non-null int64
4 num_comments 7050 non-null int64
5 num_shares 7050 non-null int64
6 num_likes 7050 non-null int64
7 num_loves 7050 non-null int64
8 num_wows 7050 non-null int64
9 num_hahas 7050 non-null int64
10 num_sads 7050 non-null int64
11 num_angrys 7050 non-null int64
dtypes: int64(9), object(3)
memory usage: 661.1+ KB
###Markdown
From the pandas DataFrame meta information, it is evident that the data is complete with respect to the description. There are 7050 entries, and no null values are reported here. Let's proceed to explore the data!
###Code
data.nunique()
###Output
_____no_output_____
###Markdown
From the unique value counts, it is evident that of the 7050 observations, only 6997 are unique live selling statuses. There are four types of status available in the data. From this, we can infer that around 53 observations may be duplicated, or some other business phenomenon is involved in the status_id column.
###Code
duplicated_data = data[data['status_id'].duplicated() == True]
duplicated_data.head()
duplicated_data.tail()
data[data.status_id == '246675545449582_326883450762124']
data[data.status_id == '819700534875473_1002372733274918']
data[data.status_id == '819700534875473_955149101330615']
data[data.status_id == '819700534875473_951614605017398']
###Output
_____no_output_____
###Markdown
From the samples evaluated, it is evident that the 53 observations are duplicates. We will proceed and remove the duplicates by the status_id column.
###Code
data_ndp = data.drop_duplicates(subset='status_id',
keep='last')
data_ndp.shape
###Output
_____no_output_____
###Markdown
Now we have only 6997 observations in the dataset. Let's explore the status type and other attributes in the data to gain further insights.
###Code
st_ax = data_ndp.status_type.value_counts().plot(kind='bar',
figsize=(10,5),
title="Status Type")
st_ax.set(xlabel="Status Type", ylabel="Count")
###Output
_____no_output_____
###Markdown
Most of the sellers seem to be using a photo or video as the status for selling. A tiny portion of the users relies on a text status or URL/link for posting an advertisement.
###Code
data_ndp.head(2)
###Output
_____no_output_____
###Markdown
The num_reactions column seems to be the sum of the following columns: num_reactions = sum(num_likes, num_loves, num_wows, num_hahas, num_sads, num_angrys). Let's validate the assumption.
###Code
data_ndp['all_reaction_count'] = data_ndp.iloc[:,-6:].sum(axis=1)
data_ndp['reactio_match'] = data_ndp.apply(lambda x: x['num_reactions'] == x['all_reaction_count'],
axis=1)
data_react_mismatch = data_ndp[data_ndp.reactio_match == False]
data_react_mismatch.shape
###Output
_____no_output_____
###Markdown
There are nine observations where the assumption mentioned above does not hold. Let's examine the difference and the reasons behind it. Since there are only nine such observations, we could simply remove them from the data due to the inconsistency, but it is worthwhile to examine the reason first.
###Code
data_react_mismatch["diff_react"] = data_react_mismatch.num_reactions - data_react_mismatch.all_reaction_count
data_react_mismatch
###Output
_____no_output_____
###Markdown
Let's check if the duplicate records cause the mismatch. We created a subset that consists only of the duplicated values. Let's run a quick search by the status_id!
###Code
data_react_mismatch[data_react_mismatch['status_id'].isin(list(duplicated_data.status_id.values))]
###Output
_____no_output_____
###Markdown
Looking at the numbers, it is evident that comments or shares do not contribute to the mismatch. The values of those attributes are higher than the difference, and some of the status_id's are not even shared. As there is no data available to verify the correctness, we can either correct the value based on the interactions, or drop the nine observations. I prefer to correct the values as part of this experiment before we proceed further.
###Code
data_ndp.num_reactions = data_ndp.all_reaction_count
data_ndp['reactio_match'] = data_ndp.apply(lambda x: x['num_reactions'] == x['all_reaction_count'],
axis=1)
data_ndp[data_ndp.reactio_match == False]
###Output
_____no_output_____
###Markdown
Now all the num_reactions values match the calculation logic.
###Code
data_ndp.head(2)
###Output
_____no_output_____
###Markdown
Let's create two variables to understand the reaction-to-comment and reaction-to-share ratios. Comments show that people may be interested and inquiring, or perhaps complaining. Share activity indicates that users found the post interesting and shared it for others' benefit.
###Code
data_ndp['react_comment_r'] = data_ndp.num_reactions/data_ndp.num_comments
data_ndp['react_share_r'] = data_ndp.num_reactions/data_ndp.num_shares
data_ndp.head()
data_ndp.react_comment_r.plot(kind='line',
figsize=(16,5))
###Output
_____no_output_____
###Markdown
From the graph, we can see that there are many NaN or Inf values and extreme values in the reaction-to-comment ratio. The ratio becomes inf when the comment or share count is zero. The extreme values are interesting: they may indicate a data error or a trend in the data, and are worth investigating.
###Code
data_ndp.replace([np.inf, -np.inf],
0.0,
inplace=True)
data_with_p_reaction = data_ndp[(data_ndp.react_comment_r > 0) &
(data_ndp.react_comment_r <= 2)]
data_with_p_reaction = data_with_p_reaction[["num_reactions","num_comments","react_comment_r"]]
data_with_p_reaction.shape
data_with_p_reaction.head()
data_with_p_reaction.react_comment_r.min(),data_with_p_reaction.react_comment_r.max()
###Output
_____no_output_____
###Markdown
When comments are fewer than ten, the reaction-to-comment ratio becomes higher. It means the post created impressions but may not have generated enough interest in the customer base. At the same time, we can see that the three interaction types 'haha', 'angry', and 'sad' are present in the data. Knowing Facebook as a social platform, these reactions express extreme emotions or disappointment with the product. It is worth exploring the positive reactions 'likes', 'loves', and 'wows' as well. We can create positive and negative reaction summaries here: Positive Reactions = sum('likes', 'loves', 'wows'); Negative Reactions = sum('hahas', 'sads', 'angrys'). With the variables mentioned above, we can check whether the reaction-to-comment ratio is higher for selling attempts with positive reactions or negative reactions.
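The next cells build these sums positionally with `iloc`; an equivalent sketch using explicit column names (shown here only for clarity, keeping the `postive_reactions` spelling used later in this notebook) would be:

```python
# Equivalent, name-based construction of the two summary features.
positive_cols = ['num_likes', 'num_loves', 'num_wows']
negative_cols = ['num_hahas', 'num_sads', 'num_angrys']
data_ndp['postive_reactions'] = data_ndp[positive_cols].sum(axis=1)
data_ndp['negative_reactions'] = data_ndp[negative_cols].sum(axis=1)
```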
###Code
data_ndp.head(2)
data_ndp['postive_reactions'] = data_ndp.iloc[:,-10:-7].sum(axis=1)
data_ndp.head(2)
data_ndp['negative_reactions'] = data_ndp.iloc[:,-8:-5].sum(axis=1)
data_ndp.head(2)
data_ndp.plot.scatter(x='num_comments',
y='negative_reactions',
figsize=(16,5),
title="Number of Comments v.s Negative Reactions")
data_ndp.plot.scatter(x='num_comments',
y='postive_reactions',
figsize=(16,5),
title="Number of Comments v.s Positive Reactions")
data_ndp.num_comments.min(),data_ndp.num_comments.max()
###Output
_____no_output_____
###Markdown
It looks like both positive and negative reactions occur even for posts with low comment counts, and comments accompany positive reactions far more often than negative ones. If we extract the respective comments and study their intent and sentiment, that could lead us to fascinating insights.

Data Quality Issues and Resolutions. We found the following data quality issues and implemented appropriate remedies. 1) Duplicate records - There were 53 duplicated records, and we preserved the last of each. 2) Calculated column value mismatch - The column num_reactions is created by summing the columns num_likes, num_loves, num_wows, num_hahas, num_sads, num_angrys. There were nine instances where the values did not match. The values were replaced with correct calculations.

New Features and Rationale. As part of the analysis, we created six new features: all_reaction_count, reactio_match, react_comment_r, react_share_r, postive_reactions, negative_reactions. all_reaction_count: generated to check the validity of 'num_reactions'; the logic used to create the column is num_reactions = sum(num_likes, num_loves, num_wows, num_hahas, num_sads, num_angrys). reactio_match: a bool column; if the value is False, the num_reactions and all_reaction_count values differ. react_comment_r: reaction-to-comment ratio; the logic for creating this variable is num_reactions/num_comments. react_share_r: reaction-to-share ratio; the logic is num_reactions/num_shares. postive_reactions: the overall positive reaction count; logic: positive_reactions = sum(num_likes, num_loves, num_wows). negative_reactions: the overall negative reaction count; logic: negative_reactions = sum(num_hahas, num_sads, num_angrys).

Clean Data. From the final data, we will exclude the columns all_reaction_count and reactio_match. These columns were created for verification and validation. The rest of the new columns can be removed based on the use case we are framing from the data.
###Code
clean_data = data_ndp.drop(['all_reaction_count','reactio_match'],
axis=1)
clean_data.head()
clean_data.to_csv("clean_data_v1.0.csv",
index=False)
###Output
_____no_output_____ |
notebooks/220322_MI.ipynb | ###Markdown
Quickly visualizing MI between feature map activations of the trained network
###Code
import h5py
import numpy as np
import matplotlib.pyplot as plt
from sklearn import feature_selection
from lecun1989repro import utils
###Output
_____no_output_____
###Markdown
"Modern" replication
###Code
acts = utils.read_h5_dict("../out/modern/activations/activations_test.h5")
acts["h1"].shape
def mutual_info(acts, i, j):
feature_map_i = acts[:, i, :, :].reshape(-1, 1)
feature_map_j = acts[:, j, :, :].ravel()
return feature_selection.mutual_info_regression(feature_map_i, feature_map_j)
def feature_map_MI(acts):
num_maps = acts.shape[1]
matrix = np.zeros((num_maps, num_maps), dtype=np.float32)
for i in range(num_maps):
for j in range(i, acts.shape[1]):
matrix[i, j] = mutual_info(acts, i, j)[0]
# fill in the redundant entries
for i in range(num_maps):
for j in range(num_maps):
if i > j:
matrix[i, j] = matrix[j, i]
return matrix
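# Optional sketch (not part of the original analysis): to put the two heatmaps on a more
# comparable scale, each entry can be normalized by the self-MI on the diagonal,
# NMI[i, j] = MI[i, j] / sqrt(MI[i, i] * MI[j, j]). This is only a heuristic, since
# mutual_info_regression gives estimates rather than exact entropies.
def normalize_MI(matrix, eps=1e-12):
    diag = np.sqrt(np.outer(np.diag(matrix), np.diag(matrix)))
    return matrix / (diag + eps)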
%time MI_h1 = feature_map_MI(acts["h1"])
%time MI_h2 = feature_map_MI(acts["h2"])
plt.figure(figsize=(7, 6))
plt.imshow(MI_h1)
plt.title("Unnormalized MI heatmap")
plt.xlabel("Feature map")
plt.ylabel("Feature map")
plt.show()
plt.figure(figsize=(7, 6))
plt.imshow(MI_h2)
plt.title("Unnormalized MI heatmap")
plt.xlabel("Feature map")
plt.ylabel("Feature map")
plt.show()
###Output
_____no_output_____
###Markdown
"Base" model
###Code
acts = utils.read_h5_dict("../out/base/activations/activations_test.h5")
%time MI_h1 = feature_map_MI(acts["h1"])
%time MI_h2 = feature_map_MI(acts["h2"])
plt.figure(figsize=(7, 6))
plt.imshow(MI_h1)
plt.title("Unnormalized MI heatmap")
plt.xlabel("Feature map")
plt.ylabel("Feature map")
plt.show()
plt.figure(figsize=(7, 6))
plt.imshow(MI_h2)
plt.title("Unnormalized MI heatmap")
plt.xlabel("Feature map")
plt.ylabel("Feature map")
plt.show()
###Output
_____no_output_____ |
code/qss20_groupcode/predict_viol/B_feature_matrix_prep.ipynb | ###Markdown
Imports
###Code
#imports
import pandas as pd
import numpy as np
import random
import re
import recordlinkage
import time
import matplotlib.pyplot as plt
# ML imports
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import LabelBinarizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.impute import SimpleImputer
# prevent depreciation warnings
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
warnings.filterwarnings("ignore", category=FutureWarning)
## repeated printouts
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
###Output
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/feature_extraction/image.py:167: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
dtype=np.int):
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/linear_model/least_angle.py:30: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
method='lar', copy_X=True, eps=np.finfo(np.float).eps,
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/linear_model/least_angle.py:167: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
method='lar', copy_X=True, eps=np.finfo(np.float).eps,
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/linear_model/least_angle.py:284: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
eps=np.finfo(np.float).eps, copy_Gram=True, verbose=0,
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/linear_model/least_angle.py:862: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
eps=np.finfo(np.float).eps, copy_X=True, fit_path=True,
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/linear_model/least_angle.py:1101: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
eps=np.finfo(np.float).eps, copy_X=True, fit_path=True,
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/linear_model/least_angle.py:1127: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
eps=np.finfo(np.float).eps, positive=False):
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/linear_model/least_angle.py:1362: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
max_n_alphas=1000, n_jobs=None, eps=np.finfo(np.float).eps,
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/linear_model/least_angle.py:1602: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
max_n_alphas=1000, n_jobs=None, eps=np.finfo(np.float).eps,
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/linear_model/least_angle.py:1738: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
eps=np.finfo(np.float).eps, copy_X=True, positive=False):
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/decomposition/online_lda.py:29: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
EPS = np.finfo(np.float).eps
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/ensemble/gradient_boosting.py:32: DeprecationWarning: `np.bool` is a deprecated alias for the builtin `bool`. To silence this warning, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
from ._gradient_boosting import predict_stages
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/ensemble/gradient_boosting.py:32: DeprecationWarning: `np.bool` is a deprecated alias for the builtin `bool`. To silence this warning, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
from ._gradient_boosting import predict_stages
###Markdown
Read in the data, assign a unique identifier via the index, and convert date columns to datetime
###Code
# read in our PreMatrix csv from step A
# for violations
preMatrix = pd.read_csv('../output/repMatrixforpredict_violations.csv').drop(columns=['Unnamed: 0'])
# for investigations
#preMatrix = pd.read_csv('../output/repMatrixforpredict_investigations.csv').drop(columns=['Unnamed: 0'])
preMatrix.shape
preMatrix = preMatrix.reset_index().copy()
preMatrix = preMatrix.rename(columns={"index": 'unique_id'})
preMatrix.is_violator.value_counts()
## convert the dates to datetime objects
for col in ['CASE_RECEIVED_DATE', 'DECISION_DATE',
'REQUESTED_START_DATE_OF_NEED', 'REQUESTED_END_DATE_OF_NEED',
'JOB_START_DATE', 'JOB_END_DATE']:
preMatrix[col] = pd.to_datetime(preMatrix[col])
preMatrix.columns
preMatrix.info()
# Second Diploma Major has no non null values so drop it
preMatrix = preMatrix.drop(columns=['SECOND_DIPLOMA_MAJOR'])
# Assign the is_violator status to the y (value we are trying to predict)
y = list(preMatrix.is_violator)
# remove the is_violator status from the preMatrix ... because that would be too easy!
preMatrix = preMatrix.drop(columns=['is_violator'])
## dtypes auto-separate
## list of non-features
numeric_options = ["int64", "float64", "datetime64[ns]"]
num_cols = [one for one in preMatrix.columns if preMatrix.dtypes[one] in numeric_options]
cat_cols = [one for one in preMatrix.columns if preMatrix.dtypes[one] not in numeric_options]
print('Numeric Columns:')
print(num_cols)
print('\nCategorical Columns:')
print(cat_cols)
# OLD USELESS CODE SAVED FOR POSTERITY...JUST IN CASE
# encoded_text_feature_pre = text_feature_pre.copy()
# for one in encoded_text_feature_pre.columns:
# enc = LabelEncoder()
# enc.fit(encoded_text_feature_pre[one].astype(str))
# encoded_text_feature_pre[one] = enc.transform(encoded_text_feature_pre[one].astype(str))
# get the categorical features in one dataframe
cat_feature_pre = preMatrix.loc[:, cat_cols].copy()
print("Shape of non-imputed: ")
print(cat_feature_pre.shape)
# and the numerical features into another dataframe
num_feature_pre = preMatrix.loc[:, num_cols].copy()
print(num_feature_pre.shape)
# SimpleImputer on the categorical features and apply a "missing_value" to NANs
imputer_cat = SimpleImputer(strategy='constant', fill_value='missing_value')
imputed_cat_feature_pre = pd.DataFrame(imputer_cat.fit_transform(cat_feature_pre))
imputed_cat_feature_pre.columns = cat_feature_pre.columns
# SimpleImputer on the numerical features and apply mode to NANs
imputer_num = SimpleImputer(strategy='most_frequent', verbose=5)
imputed_num_feature_pre = pd.DataFrame(imputer_num.fit_transform(num_feature_pre))
imputed_num_feature_pre.columns = num_feature_pre.columns
print("Shape of imputed: ")
print(imputed_cat_feature_pre.shape)
print(imputed_num_feature_pre.shape)
# recombine the imputed cat and imputed num
# we need to drop some columns which are going to be unique identifiers and could
# be an issue within our model
unique_cols_to_drop = ['unique_id', 'CASE_NO', 'EMPLOYER_NAME', 'TRADE_NAME_DBA']
for l in [cat_cols, num_cols]:
for col in l:
if col in unique_cols_to_drop:
l.remove(col)
# prepare input data with OneHotEncoder
def prepare_inputs(X_train, X_test):
oe = OneHotEncoder(handle_unknown='ignore')
oe.fit(X_train)
X_train_enc = oe.transform(X_train)
X_test_enc = oe.transform(X_test)
return X_train_enc, X_test_enc
imputed_combined = pd.merge(imputed_cat_feature_pre.reset_index(),
imputed_num_feature_pre.reset_index(), how='left',
on='index')
print('%s rows lost in merge' %(imputed_num_feature_pre.shape[0]-imputed_combined.shape[0]))
print(imputed_combined.shape)
imputed_combined = imputed_combined.drop(columns = 'index')
# do a train test split
# split into train and test sets (80/20)
# X_train, X_test, y_train, y_test = train_test_split(imputed_cat_feature_pre, y, test_size=0.20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(imputed_combined, y, test_size=0.20, random_state=3)
# apply the oneHotEcoder within prepare_inputs
X_train, X_test = prepare_inputs(X_train, X_test)
imputed_combined.head()
clf = RandomForestClassifier(max_depth = None, random_state=0)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print("Confusion matrix \n")
print(pd.crosstab(pd.Series(y_test, name='Actual'), pd.Series(y_pred, name='Predicted')))
def get_metrics(y_test, y_predicted):
accuracy = accuracy_score(y_test, y_predicted)
precision = precision_score(y_test, y_predicted, average='binary')
recall = recall_score(y_test, y_predicted, average='binary')
f1 = f1_score(y_test, y_predicted, average='binary')
return accuracy, precision, recall, f1
accuracy, precision, recall, f1 = get_metrics(y_test, y_pred)
print("accuracy = %.3f \nprecision = %.3f \nrecall = %.3f \nf1 = %.3f" % (accuracy, precision, recall, f1))
'''
start_time = time.time()
importances = clf.feature_importances_
std = np.std([
tree.feature_importances_ for tree in clf.estimators_], axis=0)
elapsed_time = time.time() - start_time
print(f"Elapsed time to compute the importances: "
f"{elapsed_time:.3f} seconds")
print(importances)
forest_importances = pd.Series(importances, index=cat_cols)
fig, ax = plt.subplots()
fig.set_figheight(10)
fig.set_figwidth(10)
forest_importances.plot.bar(yerr=std, ax=ax)
ax.set_title("Feature importances using MDI")
ax.set_ylabel("Mean decrease in impurity")
fig.tight_layout()
'''
###Output
_____no_output_____ |
Notebooks/covid-19-caution/SEIR_COVID19-original.ipynb | ###Markdown
Model Equations\begin{equation}\begin{split}\dot{S} &= -\beta_1 I_1 S -\beta_2 I_2 S - \beta_3 I_3 S\\\dot{E} &=\beta_1 I_1 S +\beta_2 I_2 S + \beta_3 I_3 S - a E \\\dot{I_1} &= a E - \gamma_1 I_1 - p_1 I_1 \\\dot{I_2} &= p_1 I_1 -\gamma_2 I_2 - p_2 I_2 \\\dot{I_3} & = p_2 I_2 -\gamma_3 I_3 - \mu I_3 \\\dot{R} & = \gamma_1 I_1 + \gamma_2 I_2 + \gamma_3 I_3 \\\dot{D} & = \mu I_3\end{split}\end{equation} Variables* $S$: Susceptible individuals* $E$: Exposed individuals - infected but not yet infectious or symptomatic* $I_i$: Infected individuals in severity class $i$. Severity increaes with $i$ and we assume individuals must pass through all previous classes * $I_1$: Mild infection (hospitalization not required) * $I_2$: Severe infection (hospitalization required) * $I_3$: Critical infection (ICU required)* $R$: individuals who have recovered from disease and are now immune* $D$: Dead individuals* $N=S+E+I_1+I_2+I_3+R+D$ Total population size (constant) Parameters* $\beta_i$ rate at which infected individuals in class $I_i$ contact susceptibles and infect them* $a$ rate of progression from the exposed to infected class* $\gamma_i$ rate at which infected individuals in class $I_i$ recover from disease and become immune* $p_i$ rate at which infected individuals in class $I_i$ progress to class $I_{I+1}$* $\mu$ death rate for individuals in the most severe stage of disease Basic reproductive ratioIdea: $R_0$ is the sum of 1. the average number of secondary infections generated from an individual in stage $I_1$2. the probability that an infected individual progresses to $I_2$ multiplied by the average number of secondary infections generated from an individual in stage $I_2$3. the probability that an infected individual progresses to $I_3$ multiplied by the average number of secondary infections generated from an individual in stage $I_3$\begin{equation}\begin{split}R_0 & = N\frac{\beta_1}{p_1+\gamma_1} + \frac{p_1}{p_1 + \gamma_1} \left( \frac{N \beta_2}{p_2+\gamma_2} + \frac{p_2}{p_2 + \gamma_2} \frac{N \beta_3}{\mu+\gamma_3}\right)\\&= N\frac{\beta_1}{p_1+\gamma_1} \left(1 + \frac{p_1}{p_2 + \gamma_2}\frac{\beta_2}{\beta_1} \left( 1 + \frac{p_2}{\mu + \gamma_3} \frac{\beta_3}{\beta_2} \right) \right)\end{split}\end{equation}
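As a cross-check of the closed-form expression (a sketch added here, not part of the original notebook), $R_0$ can also be computed numerically with the next-generation-matrix method using the same parameter names ($N$, $\beta$, $a$, $\gamma$, $p$, $\mu$) defined in the cells below; the spectral radius of $FV^{-1}$ should match the formula above. The helper name `R0_ngm` is illustrative only.

```python
import numpy as np

# Sketch: next-generation-matrix check of R0.
# Infected compartments ordered as (E, I1, I2, I3); S is approximated by N at the disease-free state.
def R0_ngm(N, b, a, g, p, u):
    F = np.zeros((4, 4))                       # new infections (all enter E)
    F[0, 1:] = N * np.array([b[1], b[2], b[3]])
    V = np.array([                             # transitions between infected compartments
        [a,       0.0,         0.0,         0.0],
        [-a,      g[1] + p[1], 0.0,         0.0],
        [0.0,    -p[1],        g[2] + p[2], 0.0],
        [0.0,     0.0,        -p[2],        g[3] + u],
    ])
    return np.max(np.abs(np.linalg.eigvals(F @ np.linalg.inv(V))))
```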
###Code
import numpy as np, matplotlib.pyplot as plt
from scipy.integrate import odeint
#Defining the differential equations
#Don't track S because all variables must add up to 1
#include blank first entry in vector for beta, gamma, p so that indices align in equations and code.
#In the future could include recovery or infection from the exposed class (asymptomatics)
def seir(y,t,b,a,g,p,u,N):
dy=[0,0,0,0,0,0]
S=N-sum(y);
    dy[0]=np.dot(b[1:4],y[1:4])*S-a*y[0] # E (infections from all infectious classes I1-I3)
dy[1]= a*y[0]-(g[1]+p[1])*y[1] #I1
dy[2]= p[1]*y[1] -(g[2]+p[2])*y[2] #I2
dy[3]= p[2]*y[2] -(g[3]+u)*y[3] #I3
    dy[4]= np.dot(g[1:4],y[1:4]) #R (recoveries from I1, I2, and I3)
dy[5]=u*y[3] #D
return dy
# Define parameters based on clinical observations
#I will add sources soon
# https://github.com/midas-network/COVID-19/tree/master/parameter_estimates/2019_novel_coronavirus
IncubPeriod=5 #Incubation period, days
DurMildInf=10 #Duration of mild infections, days
FracMild=0.8 #Fraction of infections that are mild
FracSevere=0.15 #Fraction of infections that are severe
FracCritical=0.05 #Fraction of infections that are critical
CFR=0.02 #Case fatality rate (fraction of infections resulting in death)
TimeICUDeath=7 #Time from ICU admission to death, days
DurHosp=11 #Duration of hospitalization, days
# Define parameters and run ODE
N=1000
b=np.zeros(4) #beta
g=np.zeros(4) #gamma
p=np.zeros(3)
a=1/IncubPeriod
u=(1/TimeICUDeath)*(CFR/FracCritical)
g[3]=(1/TimeICUDeath)-u
p[2]=(1/DurHosp)*(FracCritical/(FracCritical+FracSevere))
g[2]=(1/DurHosp)-p[2]
g[1]=(1/DurMildInf)*FracMild
p[1]=(1/DurMildInf)-g[1]
#b=2e-4*np.ones(4) # all stages transmit equally
b=2.5e-4*np.array([0,1,0,0]) # hospitalized cases don't transmit
#Calculate basic reproductive ratio
R0=N*((b[1]/(p[1]+g[1]))+(p[1]/(p[1]+g[1]))*(b[2]/(p[2]+g[2])+ (p[2]/(p[2]+g[2]))*(b[3]/(u+g[3]))))
print("R0 = {0:4.1f}".format(R0))
print(b)
print(a)
print(g)
print(p)
print(u)
tmax=365
tvec=np.arange(0,tmax,0.1)
ic=np.zeros(6)
ic[0]=1
soln=odeint(seir,ic,tvec,args=(b,a,g,p,u,N))
soln=np.hstack((N-np.sum(soln,axis=1,keepdims=True),soln))
plt.figure(figsize=(13,5))
plt.subplot(1,2,1)
plt.plot(tvec,soln)
plt.xlabel("Time (days)")
plt.ylabel("Number per 1000 People")
plt.legend(("S","E","I1","I2","I3","R","D"))
plt.ylim([0,1000])
#Same plot but on log scale
plt.subplot(1,2,2)
plt.plot(tvec,soln)
plt.semilogy()
plt.xlabel("Time (days)")
plt.ylabel("Number per 1000 People")
plt.legend(("S","E","I1","I2","I3","R","D"))
plt.ylim([1,1000])
#plt.tight_layout()
# get observed growth rate r (and doubling time) for a particular variable between selected time points
#(all infected classes eventually grow at same rate during early infection)
#Don't have a simple analytic formula for r for this model due to the complexity of the stages
def growth_rate(tvec,soln,t1,t2,i):
i1=np.where(tvec==t1)[0][0]
i2=np.where(tvec==t2)[0][0]
r=(np.log(soln[i2,1])-np.log(soln[i1,1]))/(t2-t1)
DoublingTime=np.log(2)/r
return r, DoublingTime
(r,DoublingTime)=growth_rate(tvec,soln,10,20,1)
print("The epidemic growth rate is = {0:4.2f} per day and the doubling time {1:4.1f} days ".format(r,DoublingTime))
###Output
The epidemic growth rate is = 0.08 per day and the doubling time 9.0 days
###Markdown
Repeat but with a social distancing measure that reduces transmission rate
###Code
bSlow=0.6*b
R0Slow=N*((bSlow[1]/(p[1]+g[1]))+(p[1]/(p[1]+g[1]))*(bSlow[2]/(p[2]+g[2])+ (p[2]/(p[2]+g[2]))*(bSlow[3]/(u+g[3]))))
solnSlow=odeint(seir,ic,tvec,args=(bSlow,a,g,p,u,N))
solnSlow=np.hstack((N-np.sum(solnSlow,axis=1,keepdims=True),solnSlow))
plt.figure(figsize=(13,5))
plt.subplot(1,2,1)
plt.plot(tvec,solnSlow)
plt.xlabel("Time (days)")
plt.ylabel("Number per 1000 People")
plt.legend(("S","E","I1","I2","I3","R","D"))
plt.ylim([0,1000])
#Same plot but on log scale
plt.subplot(1,2,2)
plt.plot(tvec,solnSlow)
plt.semilogy()
plt.xlabel("Time (days)")
plt.ylabel("Number per 1000 People")
plt.legend(("S","E","I1","I2","I3","R","D"))
plt.ylim([1,1000])
(rSlow,DoublingTimeSlow)=growth_rate(tvec,solnSlow,30,40,1)
plt.show()
print("R0 under intervention = {0:4.1f}".format(R0Slow))
print("The epidemic growth rate is = {0:4.2f} per day and the doubling time {1:4.1f} days ".format(rSlow,DoublingTimeSlow))
###Output
_____no_output_____
###Markdown
Compare epidemic growth with and without intervention
###Code
### All infectious cases (not exposed)
plt.figure(figsize=(13,5))
plt.subplot(1,2,1)
plt.plot(tvec,np.sum(soln[:,2:5],axis=1,keepdims=True))
plt.plot(tvec,np.sum(solnSlow[:,2:5],axis=1,keepdims=True))
plt.xlabel("Time (days)")
plt.ylabel("Number per 1000 People")
plt.legend(("No intervention","Intervention"))
plt.ylim([0,1000])
plt.title('All infectious cases')
###Output
_____no_output_____
###Markdown
COVID19 Cases vs Hospital Capacity Depending on the severity ($I_i$) stage of COVID-19 infection, patients need different levels of medical care. Individuals in $I_1$ have "mild" infection, meaning they have cough/fever/other flu-like symptoms and may also have mild pneumonia. Mild pneumonia does not require hospitalization, although in many outbreak locations like China and South Korea all symptomatic patients are being hospitalized. This is likely to reduce spread and to monitor these patients in case they rapidly progress to a worse outcome. However, it is a huge burden on the health care system. Individuals in $I_2$ have "severe" infection, which is categorized medically as having any of the following: "dyspnea, respiratory frequency $\geq$30/min, blood oxygen saturation $\leq$93%, partial pressure of arterial oxygen to fraction of inspired oxygen ratio $<$300, and/or lung infiltrates $>$50% within 24 to 48 hours". These individuals require hospitalization but can be treated on regular wards. They may require supplemental oxygen. Individuals in $I_3$ have "critical" infection, which is categorized as having any of the following: "respiratory failure, septic shock, and/or multiple organ dysfunction or failure". They require ICU-level care, generally because they need mechanical ventilation. We consider different scenarios for care requirements. One variation between scenarios is whether we include hospitalization for all individuals or only those with severe or critical infection. Another is the care of critical patients. If ICUs are full, hospitals have protocols developed for pandemic influenza to provide mechanical ventilation outside regular ICU facility and staffing requirements. Compared to "conventional" ventilation protocols, there are "contingency" and "crisis" protocols that can be adopted to increase patient loads. These protocols involve increasing patient:staff ratios, using non-ICU beds, and involving non-critical care specialists in patient care.
###Code
#Parameter sources: https://docs.google.com/spreadsheets/d/1zZKKnZ47lqfmUGYDQuWNnzKnh-IDMy15LBaRmrBcjqE
# All values are adjusted for increased occupancy due to flu season
AvailHospBeds=2.6*(1-0.66*1.1) #Available hospital beds per 1000 ppl in US based on total beds and occupancy
AvailICUBeds=0.26*(1-0.68*1.07) #Available ICU beds per 1000 ppl in US, based on total beds and occupancy. Only counts adult not neonatal/pediatric beds
ConvVentCap=0.062 #Estimated excess # of patients who could be ventilated in US (per 1000 ppl) using conventional protocols
ContVentCap=0.15 #Estimated excess # of patients who could be ventilated in US (per 1000 ppl) using contingency protocols
CrisisVentCap=0.42 #Estimated excess # of patients who could be ventilated in US (per 1000 ppl) using crisis protocols
###Output
_____no_output_____
###Markdown
Assumptions 1* Only severe or critical cases go to the hospital* All critical cases require ICU care and mechanical ventilation
###Code
NumHosp=soln[:,3]+soln[:,4]
NumICU=soln[:,4]
plt.figure(figsize=(13,4.8))
plt.subplot(1,2,1)
plt.plot(tvec,NumHosp)
plt.plot(np.array((0, tmax)),AvailHospBeds*np.ones(2),color='C0',linestyle=":")
plt.xlabel("Time (days)")
plt.ylabel("Number Per 1000 People")
plt.legend(("Cases Needing Hospitalization","Available Hospital Beds"))
ipeakHosp=np.argmax(NumHosp) #find peak
peakHosp=10*np.ceil(NumHosp[ipeakHosp]/10) #round the peak value up (used for the plot y-limit)
plt.ylim([0,peakHosp])
plt.subplot(1,2,2)
plt.plot(tvec,NumICU,color='C1')
plt.plot(np.array((0, tmax)),AvailICUBeds*np.ones(2),color='C1',linestyle=":")
plt.xlabel("Time (days)")
plt.ylabel("Number Per 1000 People")
plt.legend(("Cases Needing ICU","Available ICU Beds"))
ipeakICU=np.argmax(NumICU) #find peak
peakICU=10*np.ceil(NumICU[ipeakICU]/10) #round the peak value up (used for the plot y-limit)
plt.ylim([0,peakICU])
plt.ylim([0,10])
#Find time when hospitalized cases = capacity
icross=np.argmin(np.abs(NumHosp[0:ipeakHosp]-AvailHospBeds)) #find intersection before peak
TimeFillBeds=tvec[icross]
#Find time when ICU cases = capacity
icross=np.argmin(np.abs(NumICU[0:ipeakICU]-AvailICUBeds)) #find intersection before peak
TimeFillICU=tvec[icross]
plt.show()
print("Hospital and ICU beds are filled by COVID19 patients after {0:4.1f} and {1:4.1f} days".format(TimeFillBeds,TimeFillICU))
###Output
_____no_output_____
###Markdown
Note that we have not taken into account the limited capacity in the model itself. If hospitals are at capacity, then the death rate will increase, since individuals with severe and critical infection will often die without medical care. The transmission rate will probably also increase, since any informal home-care for these patients will likely not include the level of isolation/precautions used in a hospital. Allow for mechanical ventilation outside of ICUs using contingency or crisis capacity
###Code
plt.plot(tvec,NumICU)
plt.plot(np.array((0, tmax)),ConvVentCap*np.ones(2),linestyle=":")
plt.plot(np.array((0, tmax)),ContVentCap*np.ones(2),linestyle=":")
plt.plot(np.array((0, tmax)),CrisisVentCap*np.ones(2),linestyle=":")
plt.xlabel("Time (days)")
plt.ylabel("Number Per 1000 People")
plt.legend(("Cases Needing Mechanical Ventilation","Conventional Capacity","Contingency Capacity","Crisis Capacity"))
plt.ylim([0,peakICU])
plt.ylim([0,10])
#Find time when ICU cases = conventional capacity
icrossConv=np.argmin(np.abs(NumICU[0:ipeakICU]-ConvVentCap)) #find intersection before peak
TimeConvCap=tvec[icrossConv]
icrossCont=np.argmin(np.abs(NumICU[0:ipeakICU]-ContVentCap)) #find intersection before peak
TimeContCap=tvec[icrossCont]
icrossCrisis=np.argmin(np.abs(NumICU[0:ipeakICU]-CrisisVentCap)) #find intersection before peak
TimeCrisisCap=tvec[icrossCrisis]
plt.show()
print("Capacity for mechanical ventilation is filled by COVID19 patients after {0:4.1f} (conventional), {1:4.1f} (contingency) and {2:4.1f} (crisis) days".format(TimeConvCap,TimeContCap,TimeCrisisCap))
###Output
_____no_output_____
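###Markdown
As noted above, the model itself does not include these capacity limits. Below is a minimal sketch of one way to add them: critical cases above the available ventilation capacity are assigned a higher death rate. This cell is an added illustration, not part of the original analysis, and the doubled death rate for untreated patients is an arbitrary assumption.
###Code
# Hedged sketch (assumption): capacity-dependent death rate for critical cases.
# Critical cases above the capacity `cap` die at the elevated rate `u_over`;
# cases within capacity keep the baseline rate u. Other flows are unchanged.
def seir_capacity(y, t, b, a, g, p, u, N, cap, u_over):
    dy = [0, 0, 0, 0, 0, 0]
    S = N - sum(y)
    treated = min(y[3], cap)          # critical cases receiving ICU/ventilator care
    untreated = max(y[3] - cap, 0)    # critical cases above capacity
    deaths = u * treated + u_over * untreated
    dy[0] = np.dot(b[1:4], y[1:4]) * S - a * y[0]   # E
    dy[1] = a * y[0] - (g[1] + p[1]) * y[1]         # I1
    dy[2] = p[1] * y[1] - (g[2] + p[2]) * y[2]      # I2
    dy[3] = p[2] * y[2] - g[3] * y[3] - deaths      # I3
    dy[4] = np.dot(g[1:4], y[1:4])                  # R
    dy[5] = deaths                                  # D
    return dy

# Example usage (crisis ventilation capacity, doubled death rate above it):
# solnCap = odeint(seir_capacity, ic, tvec, args=(b, a, g, p, u, N, CrisisVentCap, 2*u))
###Output
_____no_output_____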
###Markdown
Compare to the case with intervention
###Code
NumHospSlow=solnSlow[:,3]+solnSlow[:,4]
NumICUSlow=solnSlow[:,4]
plt.figure(figsize=(13,4.8))
plt.subplot(1,2,1)
plt.plot(tvec,NumHosp)
plt.plot(tvec,NumHospSlow,color='C0',linestyle="--")
plt.plot(np.array((0, tmax)),AvailHospBeds*np.ones(2),color='C0',linestyle=":")
plt.xlabel("Time (days)")
plt.ylabel("Number Per 1000 People")
plt.legend(("Cases Needing Hospitalization","Cases Needing Hospitalization (Intervetion)","Available Hospital Beds"))
plt.ylim([0,peakHosp])
plt.subplot(1,2,2)
plt.plot(tvec,NumICU,color='C1')
plt.plot(tvec,NumICUSlow,color='C1',linestyle="--")
plt.plot(np.array((0, tmax)),AvailICUBeds*np.ones(2),color='C1',linestyle=":")
plt.xlabel("Time (days)")
plt.ylabel("Number Per 1000 People")
plt.legend(("Cases Needing ICU","Cases Needing ICU (Intervetion)","Available ICU Beds"))
plt.ylim([0,peakICU])
#Find time when hospitalized cases = capacity
ipeakHospSlow=np.argmax(NumHospSlow) #find peak
icross=np.argmin(np.abs(NumHospSlow[0:ipeakHospSlow]-AvailHospBeds)) #find intersection before peak
TimeFillBedsSlow=tvec[icross]
#Find time when ICU cases = capacity
ipeakICUSlow=np.argmax(NumICUSlow) #find peak
icross=np.argmin(np.abs(NumICUSlow[0:ipeakICUSlow]-AvailICUBeds)) #find intersection before peak
TimeFillICUSlow=tvec[icross]
plt.show()
print("With intervention, hospital and ICU beds are filled by COVID19 patients after {0:4.1f} and {1:4.1f} days".format(TimeFillBedsSlow,TimeFillICUSlow))
###Output
_____no_output_____
###Markdown
And for expanded mechanical ventilation capacity
###Code
plt.plot(tvec,NumICU)
plt.plot(tvec,NumICUSlow)
plt.plot(np.array((0, tmax)),ConvVentCap*np.ones(2),linestyle=":")
plt.plot(np.array((0, tmax)),ContVentCap*np.ones(2),linestyle=":")
plt.plot(np.array((0, tmax)),CrisisVentCap*np.ones(2),linestyle=":")
plt.xlabel("Time (days)")
plt.ylabel("Number Per 1000 People")
plt.legend(("Cases Needing Mechanical Ventilation","Cases Needing Mechanical Ventilation (Intervention)","Conventional Capacity","Contingency Capacity","Crisis Capacity"))
plt.ylim([0,peakICU])
#Find time when ICU cases = conventional capacity (with intervention)
icrossConvSlow=np.argmin(np.abs(NumICUSlow[0:ipeakICUSlow]-ConvVentCap)) #find intersection before peak
TimeConvCapSlow=tvec[icrossConvSlow]
icrossContSlow=np.argmin(np.abs(NumICUSlow[0:ipeakICUSlow]-ContVentCap)) #find intersection before peak
TimeContCapSlow=tvec[icrossContSlow]
icrossCrisisSlow=np.argmin(np.abs(NumICUSlow[0:ipeakICUSlow]-CrisisVentCap)) #find intersection before peak
TimeCrisisCapSlow=tvec[icrossCrisisSlow]
plt.show()
print("Capacity for mechanical ventilation is filled by COVID19 patients after {0:4.1f} (conventional), {1:4.1f} (contingency) and {2:4.1f} (crisis) days".format(TimeConvCapSlow,TimeContCapSlow,TimeCrisisCapSlow))
###Output
_____no_output_____ |
jupyter_notebooks/network_result_visualiztion.ipynb | ###Markdown
Load the result data
###Code
import h5py
import numpy as np
import matplotlib.pyplot as plt
from skimage import measure

training_result = "/home/wentao/project/keras_training/UNet2D2D_SSIM_loss_no_regularization/20190819-162734/predictions/result.h5"
task_name = 'UNet2D2D_SSIM_loss_no_regularization'
f = h5py.File(training_result,'r')
result = np.array(f.get('result'))
truth = np.array(f.get('truth'))
imag = np.array(f.get('input'))
###Output
_____no_output_____
###Markdown
Calculate the metrics
###Code
psnr = []
mse = []
nrmse = []
ssim = []
for i in range(0, truth.shape[0]):
psnr.append(measure.compare_psnr(result[i],truth[i], 1))
mse.append(measure.compare_mse(result[i], truth[i]))
nrmse.append(measure.compare_nrmse(result[i], truth[i]))
ssim.append(measure.compare_ssim(result[i], truth[i], data_range=1))
# find the best and the worst images by ssim value
best_image_index = ssim.index(max(ssim))
worst_image_index = ssim.index(min(ssim))
###Output
_____no_output_____
###Markdown
Average mse, nrmse psnr, ssim values
###Code
print(np.mean(mse), np.mean(nrmse), np.mean(psnr), np.mean(ssim))
###Output
0.00199017724690729 0.17560795468353496 27.69450816257319 0.877606626530342
###Markdown
Best image by ssim value
###Code
plt.figure(1, figsize=(10,10))
plt.subplot(1, 3, 1)
plt.axis('off')
plt.title('Input')
plt.imshow(imag[best_image_index, :, :, 0], cmap='gray')
plt.subplot(1, 3, 2)
plt.axis('off')
plt.title('Reconstructed Image')
plt.imshow(result[best_image_index, :, :, 0], cmap='gray')
plt.subplot(1, 3, 3)
plt.axis('off')
plt.title('Ground Truth')
plt.imshow(truth[best_image_index, :, :, 0], cmap='gray')
plt.savefig('./images/{task_name}_best_ssim_{ssim:.4f}.jpg'.format(task_name=task_name, ssim=ssim[best_image_index]), bbox_inches='tight')
###Output
_____no_output_____
###Markdown
SSIM value of the best image
###Code
ssim[best_image_index]
###Output
_____no_output_____
###Markdown
Worst image by ssim value
###Code
plt.figure(2, figsize=(10,10))
plt.subplot(1, 3, 1)
plt.axis('off')
plt.title('Input')
plt.imshow(imag[worst_image_index, :, :, 0], cmap='gray')
plt.subplot(1, 3, 2)
plt.axis('off')
plt.title('Reconstructed Image')
plt.imshow(result[worst_image_index, :, :, 0], cmap='gray')
plt.subplot(1, 3, 3)
plt.axis('off')
plt.title('Ground Truth')
plt.imshow(truth[worst_image_index, :, :, 0], cmap='gray')
plt.savefig('./images/{task_name}_worst_ssim_{ssim:.4f}.jpg'.format(task_name=task_name, ssim=ssim[worst_image_index]), bbox_inches='tight')
###Output
_____no_output_____
###Markdown
SSIM value of the worst image
###Code
ssim[worst_image_index]
###Output
_____no_output_____
###Markdown
Historgrams
###Code
figsize = (5, 5)
title = task_name
plt.figure(1)
plt.title(title)
plt.xlabel('SSIM')
plt.hist(ssim, bins=50)
plt.savefig('./images/{task_name}_hist_SSIM.jpg'.format(task_name=task_name))
plt.figure(2)
plt.title(title)
plt.xlabel('PSNR')
plt.hist(psnr, bins=50)
plt.savefig('./images/{task_name}_hist_PSNR.jpg'.format(task_name=task_name))
plt.figure(3)
plt.title(title)
plt.xlabel('MSE')
plt.hist(mse, bins=50)
plt.savefig('./images/{task_name}_hist_MSE.jpg'.format(task_name=task_name))
plt.figure(4)
plt.title(title)
plt.xlabel('NRMSE')
plt.hist(nrmse, bins=50)
plt.savefig('./images/{task_name}_hist_NRMSE.jpg'.format(task_name=task_name))
import pickle
best = [imag[best_image_index, :, :, 0], result[best_image_index, :, :, 0], truth[best_image_index, :, :, 0]]
worst = [imag[worst_image_index, :, :, 0], result[worst_image_index, :, :, 0], truth[worst_image_index, :, :, 0]]
plot_data = {'best_images': best, 'worst_images': worst, 'ssim': ssim, 'psnr': psnr}
with open(task_name+"_plot_data.pkl", "wb") as f:
pickle.dump(plot_data, f)
###Output
_____no_output_____ |
notebooks/3_Modelling.ipynb | ###Markdown
Scores of baseline model and seven regression models---
###Code
## load modules
import os
import sys
sys.path.append("..")
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from modeling.functions import baseline, modelling_fc, get_features
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor
from sklearn.preprocessing import MinMaxScaler
RSEED = 42
###Output
_____no_output_____
###Markdown
Preparation of data for modelling Read data, linearly interpolate missing values and create dummies for cardinal wind directions.
###Code
data = pd.read_csv('../data/GEFCom2014Data/Wind/raw_data_incl_features.csv', parse_dates=['TIMESTAMP'], index_col='TIMESTAMP')
data.interpolate(method='linear', inplace=True)
data = pd.get_dummies(data, columns = ['WD100CARD','WD10CARD'], drop_first=True)
data.head()
###Output
_____no_output_____
###Markdown
Use the first 75 % of the data as training/validation set and the last 25 % as a test set. In addition, get a dictionary with different feature combinations.
###Code
data_train = data[:'2013-07-01 00:00:00']
data_test = data['2013-07-01 01:00:00':]
feature_dict = get_features(data)
###Output
_____no_output_____
###Markdown
Run models Run the models by calling the function "modelling_fc" and save the scores and model parameters in "../results/{MODEL}.csv".
###Code
## Define a list with models to be run. Pass 'Baseline' for the baseline model, otherwise the model object.
models = ['Baseline']
for model in models:
# Baseline
if model == 'Baseline':
results = baseline(data_train, data_test)
# RandomForest
if model.__class__.__name__ == 'RandomForestRegressor':
param_grid = {'n_estimators' : [100,150], 'max_depth' : np.arange(15,31,5), 'min_samples_leaf' : np.arange(10,21,10)}
results = modelling_fc(data_train, data_test, feature_dict, model, param_grid = param_grid, scaler=MinMaxScaler(), n_jobs=3)
# Linear regression
if model.__class__.__name__ == 'LinearRegression':
param_grid = {'fit_intercept' : [True]}
results = modelling_fc(data_train, data_test, feature_dict, model, param_grid = param_grid, scaler=MinMaxScaler(), n_jobs=-1)
# XGBoost
if model.__class__.__name__ == 'XGBRegressor':
param_grid = {'random_state' : [RSEED]}
results = modelling_fc(data_train, data_test, feature_dict, model, param_grid = param_grid, scaler = MinMaxScaler(), n_jobs=-1)
# LGBM
if model.__class__.__name__ == 'LGBMRegressor':
param_grid = [{'n_estimators' : [100]},
{'n_estimators' : [1000], 'num_leaves' : [20]},
{'n_estimators' : [50], 'num_leaves': [62]}]
results = modelling_fc(data_train, data_test, feature_dict, model, param_grid = param_grid, scaler = MinMaxScaler(), n_jobs=-1)
# KNN
if model.__class__.__name__ == 'KNeighborsRegressor':
param_grid = {'n_neighbors' : np.arange(20,141,10), 'weights' : ['uniform','distance'], 'p' : [1,2]}
results = modelling_fc(data_train, data_test, feature_dict, model, scaler = MinMaxScaler(), param_grid = param_grid)
# remove file before new file is created
if os.path.isfile(f'../results/{results.MODEL.iloc[1]}.csv'):
os.remove(f'../results/{results.MODEL.iloc[1]}.csv')
# save results in csv file
results.to_csv(f'../results/{results.MODEL.iloc[1]}.csv')
###Output
_____no_output_____
###Markdown
Results Plot validation-RMSE and test-RMSE by model for predictions aggregated over all wind farms.
###Code
## define models to plot
models = ['Baseline','LinearRegression', 'KNeighborsRegressor', 'RandomForestRegressor','LinearSVR', 'SVR', 'LGBMRegressor', 'XGBRegressor']
## collect scores in the dataframe "scores"
scores= pd.DataFrame(index = models, columns = ['TESTSCORE','VALSCORE'])
for model in models:
df = pd.read_csv(f'../results/{model}.csv', index_col='ZONE')
if model == 'Baseline':
scores.loc[model]['VALSCORE'] = df.loc['TOTAL'].TRAINSCORE
scores.loc[model]['TESTSCORE'] = df.loc['TOTAL'].TESTSCORE
else:
scores.loc[model]['VALSCORE'] = np.sqrt(np.mean(df.CV**2))
scores.loc[model]['TESTSCORE'] = np.sqrt(np.mean(df.TESTSCORE**2))
scores.index.set_names('MODEL', inplace=True)
scores.reset_index(inplace=True)
## plot for validation and test set
for scoretype in ['VALSCORE', 'TESTSCORE']:
scores = scores.sort_values(by = scoretype, ascending = False, ignore_index = True)
fontsize=8
palette = ['blue'] + 6 * ['gray'] + ['red']
fig, ax = plt.subplots(dpi=400, figsize=(6,3))
bp = sns.barplot(data = scores, x = 'MODEL', y = scoretype, ax=ax, dodge=False, palette = palette)
ax.set_xticklabels(labels=ax.get_xticklabels(), rotation=45, ha='right', fontsize=fontsize)
ax.set(xlabel=None)
ax.set_ylabel('RMSE [-]', fontsize=fontsize)
ax.set_ylim([.0,.35]);
ax.yaxis.grid()
ax.tick_params(axis = 'both', labelsize = fontsize)
if scoretype == 'VALSCORE':
for index, row in scores.iterrows():
bp.text(row.name, row.VALSCORE + row.VALSCORE/100, '{:.3f}'.format(row.VALSCORE), ha='center', fontsize=fontsize)
fig.savefig('../images/VAL_RMSE-By-Models_Aggregated.png')
elif scoretype == 'TESTSCORE':
for index, row in scores.iterrows():
bp.text(row.name, row.TESTSCORE + row.TESTSCORE/100, '{:.3f}'.format(row.TESTSCORE), ha='center', fontsize=fontsize)
fig.savefig('../images/TEST_RMSE-By-Models_Aggregated.png')
###Output
_____no_output_____
###Markdown
Plot validation-RMSE and test-RMSE by wind farm for different models.
###Code
## Plot RMSE by wind farm for different models
scores = pd.DataFrame(columns = ['ZONE', 'MODEL', 'VALSCORE', 'TESTSCORE'])
scores.set_index('ZONE', inplace=True)
## select models
models = ['LinearRegression', 'KNeighborsRegressor', 'RandomForestRegressor','LinearSVR', 'SVR', 'LGBMRegressor', 'XGBRegressor']
for model in models:
df = pd.read_csv(f'../results/{model}.csv', index_col='ZONE')
df = df[['CV', 'TESTSCORE', 'MODEL']]
df.rename(columns={'CV':'VALSCORE'}, inplace=True)
scores = scores.append(df)
scores = scores.loc[scores.index != 'TOTAL']
ranges = scores[scores.MODEL != 'RandomForestRegressor']
scores = scores[scores.MODEL == 'RandomForestRegressor'][['VALSCORE', 'TESTSCORE']]
for scoretype in ['VAL', 'TEST']:
scores['MINVAL'] = [ranges.loc[zone][scoretype + 'SCORE'].min() for zone in ranges.index.unique()]
scores['MAXVAL'] = [ranges.loc[zone][scoretype + 'SCORE'].max() for zone in ranges.index.unique()]
## plot
fig, ax = plt.subplots(dpi=400, figsize=(6,3))
ax.plot(scores.index.unique(), scores[scoretype + 'SCORE'], color='r', linestyle='--', marker='.', markersize=5, linewidth=.7)
ax.fill_between(scores.index.unique(), scores['MINVAL'], scores['MAXVAL'], color = 'gray', alpha=.5, edgecolor=None);
ax.set_xlabel('wind farm',fontsize=fontsize)
ax.set_ylabel('RMSE [-]', fontsize=fontsize)
ax.grid(linewidth=.3)
ax.legend(['RandomForestRegressor','range over all models without RandomForestRegressor'], fontsize=fontsize - 2, loc = 'upper left')
ax.set_xticklabels([zone[-1] if len(zone) == 5 else zone[-2:] for zone in scores.index]);
ax.set_xticklabels(range(1,11));
fig.savefig('{}{}{}'.format('../images/',scoretype,'_RMSE-By-Windfarms.png'))
###Output
_____no_output_____ |
examples/Schrodinger.ipynb | ###Markdown
**Install deepxde** TensorFlow and all other dependencies are already installed in the Colab environment
###Code
!pip install deepxde
###Output
Collecting deepxde
Downloading DeepXDE-0.14.0-py3-none-any.whl (111 kB)
[?25l
[K |███ | 10 kB 22.0 MB/s eta 0:00:01
[K |█████▉ | 20 kB 21.5 MB/s eta 0:00:01
[K |████████▉ | 30 kB 10.9 MB/s eta 0:00:01
[K |███████████▊ | 40 kB 8.5 MB/s eta 0:00:01
[K |██████████████▊ | 51 kB 5.5 MB/s eta 0:00:01
[K |█████████████████▋ | 61 kB 5.6 MB/s eta 0:00:01
[K |████████████████████▋ | 71 kB 5.5 MB/s eta 0:00:01
[K |███████████████████████▌ | 81 kB 6.2 MB/s eta 0:00:01
[K |██████████████████████████▍ | 92 kB 4.9 MB/s eta 0:00:01
[K |█████████████████████████████▍ | 102 kB 5.3 MB/s eta 0:00:01
[K |████████████████████████████████| 111 kB 5.3 MB/s
[?25hRequirement already satisfied: scikit-learn in /usr/local/lib/python3.7/dist-packages (from deepxde) (1.0.1)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from deepxde) (3.2.2)
Collecting scikit-optimize
Downloading scikit_optimize-0.9.0-py2.py3-none-any.whl (100 kB)
[K |████████████████████████████████| 100 kB 8.9 MB/s
[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from deepxde) (1.19.5)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from deepxde) (1.4.1)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->deepxde) (3.0.6)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->deepxde) (2.8.2)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->deepxde) (0.11.0)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->deepxde) (1.3.2)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.1->matplotlib->deepxde) (1.15.0)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->deepxde) (1.1.0)
Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->deepxde) (3.0.0)
Collecting pyaml>=16.9
Downloading pyaml-21.10.1-py2.py3-none-any.whl (24 kB)
Requirement already satisfied: PyYAML in /usr/local/lib/python3.7/dist-packages (from pyaml>=16.9->scikit-optimize->deepxde) (3.13)
Installing collected packages: pyaml, scikit-optimize, deepxde
Successfully installed deepxde-0.14.0 pyaml-21.10.1 scikit-optimize-0.9.0
###Markdown
**Problem setup** We are going to solve the non-linear Schrödinger equation given by $i h_t + \frac{1}{2} h_{xx} + |h|^2h = 0$ with periodic boundary conditions as $x \in [-5,5], \quad t \in [0, \pi/2]$ $h(t, -5) = h(t,5)$ $h_x(t, -5) = h_x(t,5)$ and initial condition equal to $h(0,x) = 2 sech(x)$ Deepxde only uses real numbers, so we need to explicitly split the real and imaginary parts of the complex PDE. In place of the single residual $f = ih_t + \frac{1}{2} h_{xx} +|h|^2 h$ we get the two (real valued) residuals $f_{\mathcal{R}} = u_t + \frac{1}{2} v_{xx} + (u^2 + v^2)v$ $f_{\mathcal{I}} = v_t - \frac{1}{2} u_{xx} - (u^2 + v^2)u$ where u(x,t) and v(x,t) denote respectively the real and the imaginary part of h.
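For completeness, this split can be verified by substituting $h = u + iv$ into the PDE: $i(u_t + i v_t) + \frac{1}{2}(u_{xx} + i v_{xx}) + (u^2+v^2)(u+iv) = 0$. The imaginary part gives $u_t + \frac{1}{2} v_{xx} + (u^2 + v^2)v = 0$ and the real part gives $-v_t + \frac{1}{2} u_{xx} + (u^2 + v^2)u = 0$, which after multiplying by $-1$ becomes $v_t - \frac{1}{2} u_{xx} - (u^2 + v^2)u = 0$; these are exactly the residuals $f_{\mathcal{R}}$ and $f_{\mathcal{I}}$ above.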
###Code
import numpy as np
import deepxde as dde
# For plotting
import matplotlib.pyplot as plt
from scipy.interpolate import griddata
x_lower = -5
x_upper = 5
t_lower = 0
t_upper = np.pi / 2
# Creation of the 2D domain (for plotting and input)
x = np.linspace(x_lower, x_upper, 256)
t = np.linspace(t_lower, t_upper, 201)
X, T = np.meshgrid(x, t)
# The whole domain flattened
X_star = np.hstack((X.flatten()[:, None], T.flatten()[:, None]))
# Space and time domains/geometry (for the deepxde model)
space_domain = dde.geometry.Interval(x_lower, x_upper)
time_domain = dde.geometry.TimeDomain(t_lower, t_upper)
geomtime = dde.geometry.GeometryXTime(space_domain, time_domain)
# The "physics-informed" part of the loss
def pde(x, y):
"""
INPUTS:
x: x[:,0] is x-coordinate
x[:,1] is t-coordinate
y: Network output, in this case:
y[:,0] is u(x,t) the real part
y[:,1] is v(x,t) the imaginary part
OUTPUT:
The pde in standard form i.e. something that must be zero
"""
u = y[:, 0:1]
v = y[:, 1:2]
# In 'jacobian', i is the output component and j is the input component
u_t = dde.grad.jacobian(y, x, i=0, j=1)
v_t = dde.grad.jacobian(y, x, i=1, j=1)
u_x = dde.grad.jacobian(y, x, i=0, j=0)
v_x = dde.grad.jacobian(y, x, i=1, j=0)
# In 'hessian', i and j are both input components. (The Hessian could be in principle something like d^2y/dxdt, d^2y/d^2x etc)
# The output component is selected by "component"
u_xx = dde.grad.hessian(y, x, component=0, i=0, j=0)
v_xx = dde.grad.hessian(y, x, component=1, i=0, j=0)
f_u = u_t + 0.5 * v_xx + (u ** 2 + v ** 2) * v
f_v = v_t - 0.5 * u_xx - (u ** 2 + v ** 2) * u
return [f_u, f_v]
# Boundary and Initial conditions
# Periodic Boundary conditions
bc_u_0 = dde.PeriodicBC(
geomtime, 0, lambda _, on_boundary: on_boundary, derivative_order=0, component=0
)
bc_u_1 = dde.PeriodicBC(
geomtime, 0, lambda _, on_boundary: on_boundary, derivative_order=1, component=0
)
bc_v_0 = dde.PeriodicBC(
geomtime, 0, lambda _, on_boundary: on_boundary, derivative_order=0, component=1
)
bc_v_1 = dde.PeriodicBC(
geomtime, 0, lambda _, on_boundary: on_boundary, derivative_order=1, component=1
)
# Initial conditions
def init_cond_u(x):
"2 sech(x)"
return 2 / np.cosh(x[:, 0:1])
def init_cond_v(x):
return 0
ic_u = dde.IC(geomtime, init_cond_u, lambda _, on_initial: on_initial, component=0)
ic_v = dde.IC(geomtime, init_cond_v, lambda _, on_initial: on_initial, component=1)
data = dde.data.TimePDE(
geomtime,
pde,
[bc_u_0, bc_u_1, bc_v_0, bc_v_1, ic_u, ic_v],
num_domain=10000,
num_boundary=20,
num_initial=200,
train_distribution="pseudo",
)
# Network architecture
net = dde.maps.FNN([2] + [100] * 4 + [2], "tanh", "Glorot normal")
model = dde.Model(data, net)
###Output
_____no_output_____
###Markdown
Adam optimization.
###Code
# To employ a GPU accelerated system is highly encouraged.
model.compile("adam", lr=1e-3, loss="MSE")
model.train(epochs=10000, display_every=1000)
###Output
Compiling model...
Building feed-forward neural network...
'build' took 0.103608 s
###Markdown
L-BFGS optimization.
###Code
dde.optimizers.config.set_LBFGS_options(
maxcor=50,
ftol=1.0 * np.finfo(float).eps,
gtol=1e-08,
maxiter=10000,
maxfun=10000,
maxls=50,
)
model.compile("L-BFGS")
model.train()
###Output
Compiling model...
'compile' took 0.795132 s
Training model...
Step Train loss Test loss Test metric
10000 [5.98e-04, 5.54e-04, 2.52e-06, 3.60e-06, 8.98e-07, 1.62e-06, 5.89e-04, 6.80e-06] [5.98e-04, 5.54e-04, 2.52e-06, 3.60e-06, 8.98e-07, 1.62e-06, 5.89e-04, 6.80e-06] []
11000 [3.11e-05, 3.23e-05, 1.71e-07, 4.36e-07, 1.81e-07, 2.50e-07, 7.96e-06, 5.44e-07]
12000 [7.32e-06, 1.00e-05, 9.68e-08, 2.63e-07, 1.82e-07, 1.37e-07, 4.86e-06, 3.68e-07]
13000 [3.49e-06, 4.89e-06, 5.19e-08, 3.75e-07, 1.72e-07, 8.74e-08, 3.18e-06, 1.80e-07]
14000 [2.45e-06, 3.06e-06, 1.79e-08, 2.69e-07, 1.80e-07, 3.77e-08, 2.35e-06, 1.47e-07]
15000 [1.61e-06, 2.15e-06, 1.59e-08, 1.69e-07, 1.59e-07, 1.20e-08, 2.04e-06, 1.13e-07]
16000 [1.34e-06, 1.59e-06, 8.94e-09, 1.29e-07, 1.42e-07, 6.53e-09, 1.82e-06, 8.62e-08]
INFO:tensorflow:Optimization terminated with:
Message: b'CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH'
Objective function value: 0.000005
Number of iterations: 6179
Number of functions evaluations: 6589
16589 [1.18e-06, 1.40e-06, 7.96e-09, 1.25e-07, 1.25e-07, 7.69e-09, 1.74e-06, 9.01e-08] [1.18e-06, 1.40e-06, 7.96e-09, 1.25e-07, 1.25e-07, 7.69e-09, 1.74e-06, 9.01e-08] []
Best model at step 16589:
train loss: 4.67e-06
test loss: 4.67e-06
test metric: []
'train' took 437.862604 s
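###Markdown
Before plotting, the prediction can also be checked quantitatively against the reference solution from the paper. The cell below is an optional, added sketch: it assumes the `NLS.mat` file linked in the next cell has been downloaded next to this notebook and that it stores the grid and complex solution under the keys `x`, `tt` and `uu` (as in the PINNs repository); if those assumptions do not hold, adjust the path and key names.
###Code
# Optional check (assumes ./NLS.mat from the PINNs repository is available locally)
import scipy.io

data_ref = scipy.io.loadmat("NLS.mat")
x_ref = data_ref["x"].flatten()      # spatial grid of the reference solution
t_ref = data_ref["tt"].flatten()     # time grid of the reference solution
h_ref = np.abs(data_ref["uu"])       # |h(x,t)| on the reference grid (x by t)

# Evaluate the trained network on the same grid and compare amplitudes
X_ref, T_ref = np.meshgrid(x_ref, t_ref)
XT_ref = np.hstack((X_ref.flatten()[:, None], T_ref.flatten()[:, None]))
pred_ref = model.predict(XT_ref)
h_pred = np.sqrt(pred_ref[:, 0] ** 2 + pred_ref[:, 1] ** 2).reshape(T_ref.shape)

rel_l2 = np.linalg.norm(h_pred - h_ref.T) / np.linalg.norm(h_ref)
print(f"Relative L2 error of |h|: {rel_l2:.3e}")
###Output
_____no_output_____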
###Markdown
Final results. The reference solution and further information can be found in [this paper](https://arxiv.org/abs/1711.10561) from Raissi, Karniadakis, Perdikaris. The test data can be downloaded [here](https://github.com/maziarraissi/PINNs/blob/master/main/Data/NLS.mat).
###Code
# Make prediction
prediction = model.predict(X_star, operator=None)
u = griddata(X_star, prediction[:, 0], (X, T), method="cubic")
v = griddata(X_star, prediction[:, 1], (X, T), method="cubic")
h = np.sqrt(u ** 2 + v ** 2)
# Plot predictions
fig, ax = plt.subplots(3)
ax[0].set_title("Results")
ax[0].set_ylabel("Real part")
ax[0].imshow(
u.T,
interpolation="nearest",
cmap="viridis",
extent=[t_lower, t_upper, x_lower, x_upper],
origin="lower",
aspect="auto",
)
ax[1].set_ylabel("Imaginary part")
ax[1].imshow(
v.T,
interpolation="nearest",
cmap="viridis",
extent=[t_lower, t_upper, x_lower, x_upper],
origin="lower",
aspect="auto",
)
ax[2].set_ylabel("Amplitude")
ax[2].imshow(
h.T,
interpolation="nearest",
cmap="viridis",
extent=[t_lower, t_upper, x_lower, x_upper],
origin="lower",
aspect="auto",
)
plt.show()
###Output
_____no_output_____ |
contents/pandas/part4.ipynb | ###Markdown
Extra topicsIn this appendix we review some specific topics related to the `pandas` library that were not covered earlier; these are- The `pandas.Series` object- Plots from pandas objects- Saving and reading data in HDF5 format
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
The `pandas.Series` objectThe [`pandas.Series`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.html) object is a one-dimensional array (vector) that **represents a sequence** - The elements of the sequence are identified by a labeled index `index`- All elements are of a single type `dtype`- The series is identified by a name `name`Next we will look at some ways to create a `Series`**Building a `Series` object from a dataframe**When we request **a column** of a DataFrame, the returned object is of type `Series`Technically, **a row** of a DataFrame is also returned as a `Series`, although in that case the types get mixed
###Code
clientes = ['Pablo', 'Marianna', 'Matthieu', 'Luis', 'Eliana', 'Cristobal']
ventas = {
'lechugas [unidades]': [1, 0, 1, 2, 0, 0],
'papas [kilos]': [0.5, 2, 1.5, 1.2, 0, 5]
}
df = pd.DataFrame(data=ventas, index=clientes)
display(f'La columna de lechugas es un objeto {type(df["lechugas [unidades]"])}',
f'cuyo tipo es {df["lechugas [unidades]"].dtype}',
f'La fila Matthieu es un objeto {type(df.loc["Matthieu"])}',
f'cuyo tipo es {df.loc["Matthieu"].dtype}')
###Output
_____no_output_____
###Markdown
**Building a `Series` object from other data structures**A `Series` object can be created more generally using the constructor```pythonpandas.Series(data=None, index=None, dtype=None, name=None, copy=False, fastpath=False)```where `data` can be a dictionary, a list or an ndarrayFor example:
###Code
plan_diario= {'dormir': 7, 'comer': 1, 'quehaceres': 1, 'trabajo': 10, 'procastinar': 5}
pd.Series(plan_diario, name='mi planificación de hoy')
###Output
_____no_output_____
###Markdown
:::{note}- A column or a row of a `DataFrame` is a `Series`- Several `Series` can be joined to form a `DataFrame`::: Plots from DataFramesSimple plots can be created directly from a `DataFrame`You can review the plotting API in detail at this [link](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.plot.html)
###Code
fig, ax = plt.subplots(figsize=(6, 4), tight_layout=True)
df.plot(ax=ax, kind='line', subplots=True);
###Output
/tmp/ipykernel_23473/1562864067.py:2: UserWarning: To output multiple subplots, the figure containing the passed axes is being cleared.
df.plot(ax=ax, kind='line', subplots=True);
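###Markdown
The note above mentions that several `Series` can be joined to form a `DataFrame`; the cell below is a small added illustration of the two usual ways to do it, reusing the sales columns from this appendix.
###Code
# Two common ways to combine Series into a DataFrame (illustrative example)
s_lechugas = df['lechugas [unidades]']
s_papas = df['papas [kilos]']

# Option 1: concatenate along columns; each Series' name becomes a column label
pd.concat([s_lechugas, s_papas], axis=1)

# Option 2: pass a dict of Series to the DataFrame constructor
pd.DataFrame({s_lechugas.name: s_lechugas, s_papas.name: s_papas})
###Output
_____no_output_____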
|
chapter-generative-adversarial-networks/PR1358_test_loss_ok.ipynb | ###Markdown
https://github.com/d2l-ai/d2l-en/pull/1358
###Code
%matplotlib inline
from d2l import torch as d2l
import torch
from torch import nn
from torch.utils.data import DataLoader
X = torch.normal(0.0, 1, (1000, 2))
A = torch.tensor([[1, 2], [-0.1, 0.5]])
b = torch.tensor([1, 2])
data = torch.mm(X, A) + b
d2l.set_figsize()
d2l.plt.scatter(data[:100, 0].numpy(), data[:100, 1].numpy());
print(f'The covariance matrix is\n{torch.mm(A.T, A)}')
batch_size = 8
data_iter = DataLoader(data, batch_size=batch_size)
net_G = nn.Sequential(nn.Linear(2, 2))
net_D = nn.Sequential(
nn.Linear(2, 5), nn.Tanh(),
nn.Linear(5, 3), nn.Tanh(),
nn.Linear(3, 1)
)
def update_D(X, Z, net_D, net_G, loss, trainer_D): #@save
"""Update discriminator."""
batch_size = X.shape[0]
ones = torch.ones((batch_size, 1))
zeros = torch.zeros((batch_size, 1))
trainer_D.zero_grad()
real_Y = net_D(X)
fake_X = net_G(Z)
# Do not need to compute gradient for `net_G`, detach it from
# computing gradients.
fake_Y = net_D(fake_X.detach())
loss_D = (loss(real_Y, ones) + loss(fake_Y, zeros)) / 2
loss_D.backward()
trainer_D.step()
return loss_D
def update_G(Z, net_D, net_G, loss, trainer_G): #@save
"""Update generator."""
batch_size = Z.shape[0]
ones = torch.ones((batch_size, 1))
trainer_G.zero_grad()
# We could reuse `fake_X` from `update_D` to save computation
fake_X = net_G(Z)
# Recomputing `fake_Y` is needed since `net_D` is changed
fake_Y = net_D(fake_X)
loss_G=loss(fake_Y,ones)
loss_G.backward()
trainer_G.step()
return loss_G
def train(net_D, net_G, data_iter, num_epochs, lr_D, lr_G, latent_dim, data):
loss = nn.BCEWithLogitsLoss()
for w in net_D.parameters():
nn.init.normal_(w, 0, 0.02)
for w in net_G.parameters():
nn.init.normal_(w, 0, 0.02)
net_D.zero_grad()
net_G.zero_grad()
trainer_D = torch.optim.Adam(net_D.parameters(), lr=lr_D)
trainer_G = torch.optim.Adam(net_G.parameters(), lr=lr_G)
animator = d2l.Animator(xlabel='epoch', ylabel='loss',
xlim=[1, num_epochs], nrows=2, figsize=(5, 5),
legend=['discriminator', 'generator'])
animator.fig.subplots_adjust(hspace=0.3)
for epoch in range(num_epochs):
# Train one epoch
timer = d2l.Timer()
metric = d2l.Accumulator(3) # loss_D, loss_G, num_examples
for X in data_iter:
batch_size = X.shape[0]
Z = torch.normal(0, 1, size=(batch_size, latent_dim))
trainer_D.zero_grad()
trainer_G.zero_grad()
metric.add(update_D(X, Z, net_D, net_G, loss, trainer_D),
update_G(Z, net_D, net_G, loss, trainer_G),
batch_size)
# Visualize generated examples
Z = torch.normal(0, 1, size=(100, latent_dim))
fake_X = net_G(Z).detach().numpy()
animator.axes[1].cla()
animator.axes[1].scatter(data[:, 0], data[:, 1])
animator.axes[1].scatter(fake_X[:, 0], fake_X[:, 1])
animator.axes[1].legend(['real', 'generated'])
# Show the losses
loss_D, loss_G = metric[0]/metric[2], metric[1]/metric[2]
animator.add(epoch + 1, (loss_D, loss_G))
print(f'loss_D {loss_D:.3f}, loss_G {loss_G:.3f}, '
f'{metric[2] / timer.stop():.1f} examples/sec')
lr_D, lr_G, latent_dim, num_epochs = 0.05, 0.005, 2, 20
train(net_D, net_G, data_iter, num_epochs, lr_D, lr_G,
latent_dim, d2l.numpy(data[:100]))
###Output
loss_D 0.087, loss_G 0.087, 1067.1 examples/sec
|
mlcourse.ai assignment/Machine learning from Zero2Hero.ipynb | ###Markdown
Machine Learning: What and Why? Machine learning has been around for decades, but the application that first made it popular is the spam filter, which can be called proper machine learning: it has learned so well that we rarely need to flag an email as spam ourselves. We all have a few questions like:1. What is machine learning?2. How do we start a machine learning project?3. What does it mean for a machine to learn something?4. Why machine learning now?5. Finally, how should we approach machine learning?Before we jump into code, we must answer all the above questions. What is Machine Learning? Machine learning is the art of teaching computers so they can learn from data. For example, a spam filter is a machine learning program that can learn to flag spam from a given training dataset. A few important terms:1. The examples that the system uses to learn are called training instances.2. According to Tom Mitchell's definition of machine learning, the task T is to flag spam for new emails, the experience E is the training data, and the performance measure P remains to be defined.3. The performance measure is called accuracy and can be calculated by taking the ratio of correctly classified emails. Why Machine learning? Before machine learning we used traditional techniques to write any such algorithm. Let us say we need to write a spam filter using the traditional technique:1. First we need to define what spam actually looks like. We need to focus on words like "credit", "free", "offer" and "amazing", which are popular subject lines. Perhaps we can look for other patterns like the sender's name, the email body and other parts of the email.2. We have to write an algorithm to detect these patterns, and the algorithm will flag emails as spam if these patterns are detected.3. We would test and repeat until we find a good model.Since the problem is difficult, the algorithm will likely become a long list of complex rules which is hard to maintain.In contrast, a spam filter based on the machine learning technique will learn automatically which words are indicative of spam. This is much shorter, easier, and more accurate compared to the traditional approach. See the picture below for more details. This machine learning approach automatically finds the unusual patterns and marks emails as spam without any human intervention, whereas in the traditional approach we have to sit and write new rules every time, which is not feasible.So machine learning is for problems that are too complex or for which there is no known traditional algorithm available.Machine learning can also help humans learn from large sets of data, which is not easy for humans to do by inspection.Finally, machine learning is great for:1. Problems which have a lot of rules.2. Complex problems for which there is no traditional solution.3. Fluctuating environments, which traditional systems handle poorly because the rules keep changing.4. Getting insight into complex problems and large amounts of data. Types of Machine learning systems Machine learning systems are classified on the basis of the following criteria:1. Whether they need human supervision (supervised, unsupervised, reinforcement learning)2. Whether or not they can learn incrementally on the fly (online vs batch learning)3. 
Comparing new data points to known data points, or detecting patterns in the data and building a predictive model (instance-based vs model-based learning). We can combine these in any form; for example, a spam filter may learn on the fly using a deep learning model, which is an example of an online, model-based, supervised learning system. Let us discuss them one by one: Supervised learning/Unsupervised learningThese systems can be classified according to the amount of supervision they get during training. There are four major categories: supervised, unsupervised, semi-supervised and reinforcement learning. SupervisedWe feed the training set and labels to the system like this How to approach Visualization
###Code
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sns
df = pd.read_csv("telecom_churn.csv")
df.head()
df.shape
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 3333 entries, 0 to 3332
Data columns (total 20 columns):
State 3333 non-null object
Account length 3333 non-null int64
Area code 3333 non-null int64
International plan 3333 non-null object
Voice mail plan 3333 non-null object
Number vmail messages 3333 non-null int64
Total day minutes 3333 non-null float64
Total day calls 3333 non-null int64
Total day charge 3333 non-null float64
Total eve minutes 3333 non-null float64
Total eve calls 3333 non-null int64
Total eve charge 3333 non-null float64
Total night minutes 3333 non-null float64
Total night calls 3333 non-null int64
Total night charge 3333 non-null float64
Total intl minutes 3333 non-null float64
Total intl calls 3333 non-null int64
Total intl charge 3333 non-null float64
Customer service calls 3333 non-null int64
Churn 3333 non-null bool
dtypes: bool(1), float64(8), int64(8), object(3)
memory usage: 498.1+ KB
###Markdown
Whole data visualization To plot a histogram of the whole dataset at once, we need to make sure that all the columns are numeric, which is why the Yes/No and boolean columns are converted below
###Code
df["International plan"] = df["International plan"].map({"Yes":1, "No":0})
df["Churn"] = df["Churn"].astype("int")
plt.rcParams['figure.figsize'] = (16, 12)
df.drop(['State'], axis=1).hist()
# This will take a lot of time if you have a big dataset with many columns
###Output
_____no_output_____
###Markdown
We can gain a lot of insight from the above picture, such as:1. The percentage of people who churned2. How many service calls have been made
###Code
sns.heatmap(df.corr())
###Output
_____no_output_____
###Markdown
Exploring one feature at a time Numeric feature
###Code
df['Total day minutes'].describe()
###Output
_____no_output_____
###Markdown
We could compute each statistic separately, but this is the best way to see them all at a glance
###Code
sns.boxplot(x="Total day minutes", data=df)
###Output
_____no_output_____
###Markdown
The middle line is the median and the box edges are the 25th and 75th percentiles; the points on the left-hand side are outliers who never used their phone, and those on the right-hand side used their phone often
###Code
plt.rcParams['figure.figsize'] = (8,6)
df['Total day minutes'].hist()
###Output
_____no_output_____
###Markdown
Categorical variable
###Code
df['State'].value_counts().head()
df['State'].nunique()
df['Churn'].value_counts(normalize=True)
sns.countplot(x = "Churn", data = df)
###Output
_____no_output_____
###Markdown
Interaction between features Numeric-Numeric interaction 1. This interaction is useful if the target variable is numeric2. It's reasonable to explore features against the target variable to gain some insight3. For a regression task, this plot is very important
###Code
plt.scatter(df['Total day minutes'],
df['Customer service calls'])
###Output
_____no_output_____
###Markdown
We can calculate the correlation between a Series and a DataFrame to check how strongly one feature correlates with the other features
###Code
df.head()
new_df = df.drop('State', axis=1)
new_df.head()
new_df.corrwith(df["Total day minutes"])
###Output
_____no_output_____
###Markdown
Categorical-Categorical features 1. This can also include binary features2. This is useful if we have a categorical target variable
###Code
pd.crosstab(df['Churn'], df['State'])
sns.countplot(x="Customer service calls", hue="Churn", data=df)
plt.title("Customer service calls for loyal and churned");
###Output
_____no_output_____
###Markdown
We can easily see from the graph that people who made fewer service calls account for a large share of the churn 1. We can customize the plot according to our needs2. We can add labels and a legend to the graph3. We can save the graph Categorical-numerical variable
###Code
import numpy as np
df.groupby('Churn')['Total day minutes', 'Customer service calls'].agg([np.median,
np.std])
sns.boxplot(x="Churn", y="Total day minutes", data=df)
###Output
_____no_output_____
###Markdown
From the above graph, we can easily say that the churned customers use more total day minutes than the loyal customers. https://www.analyticsvidhya.com/blog/2019/09/comprehensive-data-visualization-guide-seaborn-python/ Machine learning We have some dataset called X (holding the whole feature matrix) and y, some target feature. X and y form a training set. The vector y is not easy to come by. Supervised learning ---> classification, regression and ranking1. When we have 0 and 1, this is called a classification task2. When we have to predict some numerical value, this is called a regression task3. Ranking is where we have to predict whether a page is liked or relevant, such as in a recommendation system.There can be multiple classes in a classification task, such as in digit recognition where we have 10 classes.Gradient boosting can handle all three tasks.From the business perspective, it is hard to decide on the y target variable because it has to be relevant. Decision tree This can help us to give a formal structure to our dataset in tree-and-node form, which is easy to visualize. Visualizing in tree format is an easy way to start solving any problem. The main idea is: given a dataset, generate a tree automatically. Rules for creating a tree1. The root should be very specific, as given in the picture below2. The deeper we go, the more specific the questions become Entropy Notes are in my copy. Random forest from scratch
###Code
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sns
df = pd.read_csv("telecom_churn.csv")
df["International plan"] = df["International plan"].map({"Yes":1, "No":0})
df['Voice mail plan'] = df["Voice mail plan"].map({"Yes":1, "No":0})
df["Churn"] = df["Churn"].astype("int")
df.head()
states = df.pop('State')
X, y = df.drop("Churn", axis=1), df['Churn']
from sklearn.model_selection import train_test_split
X_train, X_holdout, y_train, y_holdout = train_test_split(X, y, test_size=0.3, random_state=20)
X_train.shape, X_holdout.shape
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(random_state=20)
tree.fit(X_train, y_train)
from sklearn.metrics import accuracy_score
pred_holdout = tree.predict(X_holdout)
pred_holdout.shape, y_holdout.shape
accuracy_score(y_holdout, pred_holdout)
# baseline for accuracy
y.value_counts(normalize=True)
# baseline is 0.85
###Output
_____no_output_____
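###Markdown
The notes above mention entropy as the criterion for choosing splits; the short sketch below (added for illustration, not the course's own code) shows how entropy and information gain are computed for a candidate split of the training labels.
###Code
import numpy as np

def entropy(labels):
    """Shannon entropy of a vector of class labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(labels, mask):
    """Entropy reduction from splitting `labels` with a boolean mask."""
    n = len(labels)
    left, right = labels[mask], labels[~mask]
    return entropy(labels) - len(left) / n * entropy(left) - len(right) / n * entropy(right)

# Example: gain from splitting on "more than 3 customer service calls"
split = (X_train['Customer service calls'] > 3).values
information_gain(y_train.values, split)
###Output
_____no_output_____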
###Markdown
Let's apply cross validation for parameter tuning
###Code
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
params = {'max_depth': np.arange(2,11), 'min_samples_leaf':np.arange(2,11)}
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=17)
best_tree = GridSearchCV(estimator=tree, param_grid=params, cv=skf,
n_jobs=-1,verbose=1)
best_tree.fit(X_train, y_train) # not efficient for large datasets
best_tree.best_params_
best_tree.best_estimator_
###Output
_____no_output_____
###Markdown
Cross-validation assessment accuracy
###Code
best_tree.best_score_
###Output
_____no_output_____
###Markdown
Holdout assessment
###Code
pred_holdout_better = best_tree.predict(X_holdout)
accuracy_score(y_holdout, pred_holdout_better)
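# "Random forest from scratch" sketch (an added illustration, not the author's code):
# bag several decision trees on bootstrap samples and average their votes.
n_trees = 10
rng = np.random.RandomState(17)
forest = []
for _ in range(n_trees):
    idx = rng.choice(len(X_train), size=len(X_train), replace=True)  # bootstrap sample
    estimator = DecisionTreeClassifier(max_features='sqrt', random_state=17)
    forest.append(estimator.fit(X_train.iloc[idx], y_train.iloc[idx]))
votes = np.mean([estimator.predict(X_holdout) for estimator in forest], axis=0)
accuracy_score(y_holdout, (votes > 0.5).astype(int))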
###Output
_____no_output_____ |
Notebooks/Data Exploration/movie data thru 2016.ipynb | ###Markdown
Importing libraries
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
Reading the datasetSource listed in the github repo
###Code
data = pd.read_csv('movie_metadata.csv')
###Output
_____no_output_____
###Markdown
Exploring the data
###Code
data.head()
data.info()
# we only have movies up to 2016
data['title_year'].value_counts().sort_index().plot(kind='barh', figsize=(15,16))
plt.show()
###Output
_____no_output_____
###Markdown
Data Wrangling Only considering the following features for the recommender system:- director_name- actor_1_name- actor_2_name- actor_3_name- genres- movie_title
###Code
data = data[['director_name', 'actor_1_name', 'actor_2_name', 'actor_3_name', 'genres', 'movie_title']]
data.head()
# we will strip '\xa0' from all the titles
data.movie_title[10]
data.isna().sum().sort_values(ascending=False)
def wrangle(X):
# remove the pipe symbol for seperation in genres
X['genres'] = X['genres'].str.replace('|', ' ')
# strip '\xa0' from all the titles in the dataset
X['movie_title'] = X['movie_title'].str.strip('\xa0')
X['movie_title'] = X['movie_title'].str.lower()
# replace all the Nan values with 'unkownn'
cols = ['director_name', 'actor_3_name', 'actor_2_name', 'actor_1_name',
'movie_title', 'genres']
for col in cols:
X[col] = X[col].replace(np.nan, 'unknown')
return X
wrangle(data)
# we extracted the data we wanted and now we will write it to a csv
data.to_csv('data.csv', index=False)
###Output
_____no_output_____ |
epa1361_open_G21/Week 5-6 - robustness and direct search/assignment 10 - MORO.ipynb | ###Markdown
Multi-objective Robust Optimization (MORO)This exercise demonstrates the application of MORO on the lake model. In contrast to the exercises in previous weeks, we will be using a slightly more sophisticated version of the problem. For details see the MORDM assignment for this week. Setup MOROMany-objective robust optimization aims at finding decisions that are robust with respect to the various deeply uncertain factors. For this, MORO evaluates each candidate decision over a set of scenarios. For each outcome of interest, the robustness over this set is calculated. A MOEA is used to maximize the robustness. For this assignment, we will be using a domain criterion as our robustness metric. The table below lists the rules that you should use for each outcome of interest.|Outcome of interest| threshold ||-------------------|------------|| Maximum pollution | $\leq$ 0.75|| Inertia | $\geq$ 0.6 || Reliability | $\geq$ 0.99| | Utility | $\geq$ 0.75|**1) Implement a function for each outcome that takes a numpy array with results for the outcome of interest, and returns the robustness score**
###Code
import functools
import numpy as np
def robustness(direction, threshold, data):
if direction == SMALLER:
return np.sum(data<=threshold)/data.shape[0]
else:
return np.sum(data>=threshold)/data.shape[0]
def maxp(data):
return np.sum(data<=0.75)/data.shape[0]
SMALLER = 'SMALLER'
LARGER = 'LARGER'
maxp = functools.partial(robustness, SMALLER, 0.75)
inertia = functools.partial(robustness, LARGER, 0.6)
reliability = functools.partial(robustness, LARGER, 0.99)
utility = functools.partial(robustness, LARGER, 0.75)
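# Quick sanity check of the robustness functions on synthetic outcomes
# (illustrative values only; in the assignment these arrays come from evaluating
# a candidate policy over the set of scenarios):
_outcomes = np.random.uniform(0, 1, size=1000)
print('max pollution robustness:', maxp(_outcomes))
print('reliability robustness:', reliability(_outcomes))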
###Output
_____no_output_____ |
Project 4 - non-professional developer population.ipynb | ###Markdown
IntroductionWith this notebook I want to explore the non-professional developer population in the Stack Overflow Annual Survey, to get some idea of who this population is.From the most recent survey results (2019) we will look at this non-professional developer population's profile.We will also take a look at career and job satisfaction, comparing professional and non-professional developers.
###Code
import numpy as np
import pandas as pd
from collections import defaultdict
import matplotlib.pyplot as plt
from matplotlib_venn import venn2, venn3, venn3_circles
import plotly.express as px
import seaborn as sns
%matplotlib inline
df = pd.read_csv('./stackoverflow/2019.csv')
df.describe()
###Output
_____no_output_____
###Markdown
As the population we want to focus on in this data exploration is the non-professional developers, let's take a look at the information that can potentially help us with this filter:* **MainBranch:** Which of the following options best describes you today? Here, by "developer" we mean "someone who writes code."* **Hobbyist:** Do you code as a hobby?
###Code
## uncomment to see all the questions
#df2 = pd.read_csv('./stackoverflow/2019_schema.csv')
#with pd.option_context('display.max_rows', None, 'display.max_columns', None):
# print(df2)
mainbranch = df['MainBranch'].value_counts().reset_index()
#mainbranch.head()
mainbranch.rename(columns={'index': 'What option best describes you today?', 'MainBranch': 'Count'}, inplace=True)
mainbranch['Percent'] = mainbranch.Count / (len(df['MainBranch'])-df['MainBranch'].isnull().sum())
mainbranch.style.format({'Count': "{:,}", 'Percent': '{:.2%}'})
df['MainBranch']
n_answers = len(df['MainBranch'])-df['MainBranch'].isnull().sum()
study_pop = mainbranch.iloc[2, 1]+ mainbranch.iloc[3, 1]
perc_pop = study_pop/n_answers
print("The population we will be looking into will be a total of {a:,} survey answers.\n\
This represents {b:.2f}% of the total population".format(a=study_pop, b=perc_pop*100))
###Output
The population we will be looking into will be a total of 10,879 survey answers.
This represents 12.32% of the total population
###Markdown
I would like to focus on professionals who are developing their programming skills, but who are not professional developers nor were professional developers in the past.I would also like to exclude the "student" population from the dataset.Before simplifying our data set to only the population of interest and only the relevant columns, let's take a look at the "hobbyist" column.
###Code
df_study = df.copy()
df_study = df[['Respondent','MainBranch','Hobbyist','Employment','Country','Student','EdLevel','UndergradMajor',\
'EduOther','OrgSize','YearsCode','Age1stCode','YearsCodePro','CareerSat','JobSat',\
'MgrIdiot','MgrMoney','MgrWant','JobSeek','LastHireDate','JobFactors','CurrencySymbol',\
'CurrencyDesc','CompTotal','CompFreq','ConvertedComp','WorkWeekHrs','WorkPlan','ImpSyn',\
'Age','Gender','Trans','Sexuality','Ethnicity','Dependents']]
ans1 = 'I am not primarily a developer, but I write code sometimes as part of my work'
ans2 = 'I code primarily as a hobby'
df_study = df_study[(df.MainBranch == ans1)| (df.MainBranch == ans2)]
df_study.describe()
#df_study.to_csv(r'study.csv')
###Output
_____no_output_____
###Markdown
Population profile
###Code
hob = df_study['Hobbyist'].value_counts().reset_index()
hob.rename(columns={'index': 'Hobbyist', 'Hobbyist': 'Count'}, inplace=True)
hob['Percent'] = hob.Count / study_pop
hob.style.format({'Count': "{:,}", 'Percent': '{:.2%}'})
###Output
_____no_output_____
###Markdown
The majority of this population, 84.45%, codes as a hobby. I will isolate the population of interest and disregard all the columns that are not important for our study, keeping:* Population characteristic features;* Educational level;* Coding experience;* Job satisfaction related questions.We have some numerical answers within these features of interest, so we will probably have to treat NA answers.Before taking a look at the numerical features, let's get some sense of the categorical features.
###Code
# Which of the following do you currently identify as? Please select all that apply. If you prefer not to answer, you may leave this question blank.
male = len(df_study[(df_study.Gender == 'Man')])
female = len(df_study[(df_study.Gender == 'Woman')])
others = study_pop - female - male
print("Percentage of man in our population study is {:.0f}% and woman, {:.0f}%. Other classifications represents {:.0f}%."\
.format(male/study_pop*100, female/study_pop*100, others/study_pop*100))
gender = df_study['Gender'].value_counts().reset_index()
gender
# In which country do you currently reside?
country = df_study['Country'].value_counts().reset_index()
country.rename(columns={'index': 'Country', 'Country': 'Count'}, inplace=True)
country['Percent'] = country.Count / (study_pop-df_study['Country'].isnull().sum())
country.style.format({'Count': "{:,}", 'Percent': '{:.2%}'})
# Which of the following best describes the highest level of formal education that you've completed?
edu = df_study['EdLevel'].value_counts().reset_index()
edu.rename(columns={'index': 'Educational Level', 'EdLevel': 'Count'}, inplace=True)
edu['Percent'] = edu.Count / (study_pop-df_study['EdLevel'].isnull().sum())
edu.style.format({'Count': "{:,}", 'Percent': '{:.2%}'})
# What was your main or most important field of study?
major = df_study['UndergradMajor'].value_counts().reset_index()
major.rename(columns={'index': 'Undergrad Major', 'UndergradMajor': 'Count'}, inplace=True)
major['Percent'] = major.Count / (study_pop-df_study['UndergradMajor'].isnull().sum()-180)
major.style.format({'Count': "{:,}", 'Percent': '{:.2%}'})
# "Which of the following best describes your current employment status?"
emp = df_study['Employment'].value_counts().reset_index()
emp.rename(columns={'index': 'Employment status', 'Employment': 'Count'}, inplace=True)
emp['Percent'] = emp.Count / (study_pop-df_study['Employment'].isnull().sum())
emp.style.format({'Count': "{:,}", 'Percent': '{:.2%}'})
# dataset = emp
# #layout = go.Layout(yaxis=dict(tickformat=".2%"))
# fig = px.bar(dataset, x='Employment status', y='Percentage')
# fig.show()
# data to plot
height = emp['Percent']
bars = emp['Employment status']
y_pos = np.arange(len(bars))
# Create horizontal bars
ax = plt.barh(y_pos, height)
# Create names on the y-axis
plt.yticks(y_pos, bars)
plt.gca().invert_yaxis()
plt.figtext(.5,.9,'Employment status - non-professional developers', fontsize=15, ha='center')
# Show graphic
plt.show()
# "Approximately how many people are employed by the company or organization you work for?"
size = df_study['OrgSize'].value_counts().reset_index()
size.rename(columns={'index': 'Organization size', 'OrgSize': 'Count'}, inplace=True)
size['Percent'] = size.Count / (study_pop-df_study['OrgSize'].isnull().sum())
size.style.format({'Count': "{:,}", 'Percent': '{:.2%}'})
###Output
_____no_output_____
###Markdown
Coding experience We know that this population only has individuals who responded that they do not work as developers, have not worked as developers in the past and are not students. But how much experience do they have? We will now look at the following questions:* Including any education, how many years have you been coding?* At what age did you write your first line of code or program?Probably some people did not reply to these questions. Let's check how significant this percentage of empty values is.
###Code
nulls = df_study.isnull().sum()/study_pop
print("The percentage of nulls is: {:.2%} for YearsCode, {:.2%} for Age1stCode, and {:.2%} for Age."\
.format(nulls['YearsCode'], nulls['Age1stCode'], nulls['Age']))
# copy df as we are going to replace some values
df_age = df_study.copy()
df_age = df_age.dropna(subset=['YearsCode', 'Age1stCode', 'Age'])
study_age = len(df_age['Age'])
print("By dropping the rows with null value in any of the columns, our new population is {:,},\
which is {:.2%} the of population snippet we started with.".format(study_age,study_age/study_pop))
###Output
By dropping the rows with null value in any of the columns, our new population is 9,284, which is 85.34% of the population snippet we started with.
###Markdown
We still keep a significant amount of responses (>85%) if we remove the null cases. These columns are string type in the survey file because they have non-numerical option answers. To plot the distributions we need all values as numerical. Let's treat "Less than..." and "Younger than..." as the integer immediately below and "More than..." and "Older than..." as the integer immediately above. Then let's convert the whole columns to integer type.
###Code
df_age['YearsCode'].replace(["Less than 1 year","More than 50 years"],[0,50],inplace=True)
df_age['Age1stCode'].replace(['Younger than 5 years','Older than 85'],[4,86],inplace=True)
df_age['YearsCode'] = df_age['YearsCode'].astype(int)
df_age['Age1stCode'] = df_age['Age1stCode'].astype(int)
# density plot with shade
sns.kdeplot(df_age['YearsCode'], shade=True);
p1=sns.kdeplot(df_age['Age1stCode'], shade=True)
p1=sns.kdeplot(df_age['Age'], shade=True)
df_age.describe()
###Output
_____no_output_____
###Markdown
More than 75% of the respondents started coding when they were 18 years old or younger; the median age at the first coding experience is 15 years old. Comparing the median current age (30) with the median age of the first coding experience (15), there is a period of 15 years. The respondents have, at the median, been coding for 8 years, so there is a gap between the time since they first coded and the years of coding they report.
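As a quick illustration of that gap, a sketch (the column name `GapYears` is hypothetical, and it assumes `df_age` has already been converted to integer types as above) that computes the implied number of non-coding years per respondent:
###Code
# Implied years since the first coding experience that were not spent coding
# (negative values indicate inconsistent answers); summarised, not displayed.
df_age['GapYears'] = df_age['Age'] - df_age['Age1stCode'] - df_age['YearsCode']
gap_summary = df_age['GapYears'].describe()
###Output
_____no_output_____
###Markdown
Next, the same experience data split by gender.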
###Code
# sns.boxplot(x="Gender", y="YearsCode", data=df_study, palette="Set1");
# Basic violinplot
ax = sns.violinplot(x="Gender", y="YearsCode", data=df_age)
# Calculate number of obs per group & median to position labels
medians = df_age.groupby(['Gender'])['YearsCode'].median().values
nobs = df_age['Gender'].value_counts().values
nobs = [str(x) for x in nobs.tolist()]
nobs = ["n: " + i for i in nobs]
# Add it to the plot
pos = range(len(nobs))
for tick, label in zip(pos, ax.get_xticklabels()):
    ax.text(pos[tick], medians[tick] + 0.03, nobs[tick], horizontalalignment='center', size='large', color='k', weight='semibold', rotation=45)
ax.set_xticklabels(ax.get_xticklabels(), rotation=45, horizontalalignment='right');
###Output
_____no_output_____
###Markdown
Career / Job satisfaction We would also like to know about the respondents' job satisfaction. Are they dissatisfied with their current careers / jobs? Here are the questions we will be looking into:* Overall, how satisfied are you with your career thus far?* How satisfied are you with your current job? * How confident are you that your manager knows what they're doing?* Which of the following best describes your current job-seeking status?All features in this part are categorical and they all relate to job / career satisfaction. Let's give a score to the answers and try to understand the picture a little better than by looking at each answer individually. Let's give the following scores to the range of responses (a compact mapping sketch follows the tables):

| Career and job satisfaction | Score |
| --- | --- |
| Very satisfied | +2 |
| Slightly satisfied | +1|
| Neither satisfied nor dissatisfied | 0|
| Slightly dissatisfied | -1|
| Very dissatisfied | -2|

| Conf. Manager | Score |
| --- | --- |
| Very confident | +2 |
| Somewhat confident | +1|
| I don't have a manager | 0|
| Not at all confident | -1|

| Job-seeking | Score |
| --- | --- |
| I am not interested in new job opportunities | +1 |
| I'm not actively looking, but I am open to new opportunities | 0|
| I am actively looking for a job | -1|

For null responses, we will assume value zero.
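A compact sketch of this scoring (the dictionary and function names are hypothetical; the `.replace(...)`-based cells below remain the approach actually used) keeps the mappings in dictionaries and applies them with `Series.map`:
###Code
# Hypothetical mapping sketch; the JobSeek key uses the curly apostrophe exactly as it
# appears in the survey answers.
sat_scores = {'Very satisfied': 2, 'Slightly satisfied': 1,
              'Neither satisfied nor dissatisfied': 0,
              'Slightly dissatisfied': -1, 'Very dissatisfied': -2}
mgr_scores = {'Very confident': 2, 'Somewhat confident': 1,
              "I don't have a manager": 0, 'Not at all confident': -1}
seek_scores = {'I am not interested in new job opportunities': 1,
               "I’m not actively looking, but I am open to new opportunities": 0,
               'I am actively looking for a job': -1}

def satisfaction_score(frame):
    # Map each answer to its score and sum the four components; unanswered
    # questions map to NaN and are counted as zero, as stated above.
    return (frame['CareerSat'].map(sat_scores).fillna(0)
            + frame['JobSat'].map(sat_scores).fillna(0)
            + frame['MgrIdiot'].map(mgr_scores).fillna(0)
            + frame['JobSeek'].map(seek_scores).fillna(0))
###Output
_____no_output_____
###Markdown
The cells below compute the same score with explicit `.replace` calls and drop rows with missing answers.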
###Code
# copy df as we are going to replace some values
df_job = df_study.copy()
df_job['CareerSat'].replace(['Very satisfied','Slightly satisfied', 'Neither satisfied nor dissatisfied',\
'Slightly dissatisfied','Very dissatisfied'],[2, 1, 0, -1, -2],inplace=True)
df_job['JobSat'].replace(['Very satisfied','Slightly satisfied', 'Neither satisfied nor dissatisfied',\
'Slightly dissatisfied','Very dissatisfied'],[2, 1, 0, -1, -2],inplace=True)
df_job['MgrIdiot'].replace(['Very confident','Somewhat confident', "I don't have a manager",\
'Not at all confident'],[2, 1, 0, -1],inplace=True)
df_job['JobSeek'].replace(['I am not interested in new job opportunities',"I’m not actively looking, but I am open to new opportunities",\
'I am actively looking for a job'],[1, 0, -1],inplace=True)
# exclude na
df_job = df_job.dropna(subset=['CareerSat', 'JobSat', 'MgrIdiot', 'JobSeek'])
# create new column with the score
df_job['SatScore'] = df_job['CareerSat']+df_job['JobSat']+df_job['MgrIdiot']+df_job['JobSeek']
df_job['SatScore'].describe()
###Output
_____no_output_____
###Markdown
How does our study population compare to the whole survey population?
###Code
df_job_all = df.copy()
df_job_all['CareerSat'].replace(['Very satisfied','Slightly satisfied', 'Neither satisfied nor dissatisfied',\
'Slightly dissatisfied','Very dissatisfied'],[2, 1, 0, -1, -2],inplace=True)
df_job_all['JobSat'].replace(['Very satisfied','Slightly satisfied', 'Neither satisfied nor dissatisfied',\
'Slightly dissatisfied','Very dissatisfied'],[2, 1, 0, -1, -2],inplace=True)
df_job_all['MgrIdiot'].replace(['Very confident','Somewhat confident', "I don't have a manager",\
'Not at all confident'],[2, 1, 0, -1],inplace=True)
df_job_all['JobSeek'].replace(['I am not interested in new job opportunities',"I’m not actively looking, but I am open to new opportunities",\
'I am actively looking for a job'],[1, 0, -1],inplace=True)
# drop na
df_job_all = df_job_all.dropna(subset=['CareerSat', 'JobSat', 'MgrIdiot', 'JobSeek'])
df_job_all['JobSeek'] = df_job_all['JobSeek'].fillna(0)  # no-op here, since rows with missing answers were already dropped above
# create new column with the score
df_job_all['SatScore'] = df_job_all['CareerSat']+df_job_all['JobSat']+df_job_all['MgrIdiot']+df_job_all['JobSeek']
df_job_all['SatScore'].describe()
x = df_job_all['SatScore']  # define x explicitly; it was used below without being assigned
plt.hist(x, bins=14);
plt.figtext(.5,.9,'Job satisfaction score - all 2019 respondents', fontsize=15, ha='center');
plt.axvline(x.median(), color='k', linestyle='dashed', linewidth=1)
min_ylim, max_ylim = plt.ylim()
plt.text(x.median()*1.1, max_ylim*0.95, 'Median: {:.2f}'.format(x.median()))
plt.axvline(x.mean(), color='k', linestyle='dashed', linewidth=1)
min_ylim, max_ylim = plt.ylim()
plt.text(x.mean()*1.1, max_ylim*0.85, 'Mean: {:.2f}'.format(x.mean()))
x1 = df_job['SatScore']
plt.hist(x1, bins=14);
plt.figtext(.5,.9,'Job satisfaction score - non-professional developers 2019 respondents', fontsize=15, ha='center');
plt.axvline(x1.median(), color='k', linestyle='dashed', linewidth=1)
min_ylim, max_ylim = plt.ylim()
l11 = plt.text(x1.median()*1.1, max_ylim*0.95, 'Median: {:.2f}'.format(x1.median()))
l11 = plt.axvline(x1.mean(), color='k', linestyle='dashed', linewidth=1)
min_ylim, max_ylim = plt.ylim()
l12 = plt.text(x1.mean()*1.1, max_ylim*0.85, 'Mean: {:.2f}'.format(x1.mean()))
hist = sns.distplot( a=df_job_all["SatScore"], color='darkblue',hist=True, kde=False,hist_kws={"rwidth":1, 'alpha':0.6});
hist = sns.distplot( a=df_job["SatScore"],color='red', hist=True, kde=False, hist_kws={"edgecolor":'red',"rwidth":0.4, 'alpha':0.6});
hist.set(xlabel='Job Satisfaction score', ylabel='Frequency');
plt.legend(['All','Non-prof. developer'], ncol=1, loc=0);
df_job['SatScore']
df_job.to_csv('satscore.csv')
###Output
_____no_output_____
###Markdown
It seems that the whole population is more satisfied with their jobs and careers than our snippet of non-professional developers. What is the percentage of the respondents who are actively looking for a job or willing to consider new opportunities?
###Code
jobseek = df_study['JobSeek'].value_counts().reset_index()
jobseek.rename(columns={'index': 'Job Seek - non-professional developers', 'JobSeek': 'Count'}, inplace=True)
jobseek['Percent'] = jobseek.Count / (study_pop-df_study['JobSeek'].isnull().sum())
jobseek.style.format({'Count': "{:,}", 'Percent': '{:.2%}'})
jobseek_all = df['JobSeek'].value_counts().reset_index()
jobseek_all.rename(columns={'index': 'Job Seek - All respondents', 'JobSeek': 'Count'}, inplace=True)
jobseek_all['Percent'] = jobseek_all.Count / (n_answers-df['JobSeek'].isnull().sum())
jobseek_all.style.format({'Count': "{:,}", 'Percent': '{:.2%}'})
###Output
_____no_output_____
###Markdown
Compared to the overall population, the job-seeking status for our study population has a similar distribution. How has this population evolved over the years? We have data from 2011 to 2019; however, we can only identify non-professional developers starting from the 2017 survey. So let's use the data for the years 2017 to 2019.
###Code
df_2018 = pd.read_csv('./stackoverflow/2018.csv')
df_2017 = pd.read_csv('./stackoverflow/2017.csv')
###Output
_____no_output_____
###Markdown
For the 2019 survey we have to look at the MainBranch question. Let's take a look at the questions for 2018 and 2017.
###Code
df2_2018 = pd.read_csv('./stackoverflow/2018_schema.csv')
with pd.option_context('display.max_rows', None, 'display.max_columns', None):
print(df2_2018)
###Output
Column \
0 Respondent
1 Hobby
2 OpenSource
3 Country
4 Student
5 Employment
6 FormalEducation
7 UndergradMajor
8 CompanySize
9 DevType
10 YearsCoding
11 YearsCodingProf
12 JobSatisfaction
13 CareerSatisfaction
14 HopeFiveYears
15 JobSearchStatus
16 LastNewJob
17 AssessJob1
18 AssessJob2
19 AssessJob3
20 AssessJob4
21 AssessJob5
22 AssessJob6
23 AssessJob7
24 AssessJob8
25 AssessJob9
26 AssessJob10
27 AssessBenefits1
28 AssessBenefits2
29 AssessBenefits3
30 AssessBenefits4
31 AssessBenefits5
32 AssessBenefits6
33 AssessBenefits7
34 AssessBenefits8
35 AssessBenefits9
36 AssessBenefits10
37 AssessBenefits11
38 JobContactPriorities1
39 JobContactPriorities2
40 JobContactPriorities3
41 JobContactPriorities4
42 JobContactPriorities5
43 JobEmailPriorities1
44 JobEmailPriorities2
45 JobEmailPriorities3
46 JobEmailPriorities4
47 JobEmailPriorities5
48 JobEmailPriorities6
49 JobEmailPriorities7
50 UpdateCV
51 Currency
52 Salary
53 SalaryType
54 ConvertedSalary
55 CurrencySymbol
56 CommunicationTools
57 TimeFullyProductive
58 EducationTypes
59 SelfTaughtTypes
60 TimeAfterBootcamp
61 HackathonReasons
62 AgreeDisagree1
63 AgreeDisagree2
64 AgreeDisagree3
65 LanguageWorkedWith
66 LanguageDesireNextYear
67 DatabaseWorkedWith
68 DatabaseDesireNextYear
69 PlatformWorkedWith
70 PlatformDesireNextYear
71 FrameworkWorkedWith
72 FrameworkDesireNextYear
73 IDE
74 OperatingSystem
75 NumberMonitors
76 Methodology
77 VersionControl
78 CheckInCode
79 AdBlocker
80 AdBlockerDisable
81 AdBlockerReasons
82 AdsAgreeDisagree1
83 AdsAgreeDisagree2
84 AdsAgreeDisagree3
85 AdsActions
86 AdsPriorities1
87 AdsPriorities2
88 AdsPriorities3
89 AdsPriorities4
90 AdsPriorities5
91 AdsPriorities6
92 AdsPriorities7
93 AIDangerous
94 AIInteresting
95 AIResponsible
96 AIFuture
97 EthicsChoice
98 EthicsReport
99 EthicsResponsible
100 EthicalImplications
101 StackOverflowRecommend
102 StackOverflowVisit
103 StackOverflowHasAccount
104 StackOverflowParticipate
105 StackOverflowJobs
106 StackOverflowDevStory
107 StackOverflowJobsRecommend
108 StackOverflowConsiderMember
109 HypotheticalTools1
110 HypotheticalTools2
111 HypotheticalTools3
112 HypotheticalTools4
113 HypotheticalTools5
114 WakeTime
115 HoursComputer
116 HoursOutside
117 SkipMeals
118 ErgonomicDevices
119 Exercise
120 Gender
121 SexualOrientation
122 EducationParents
123 RaceEthnicity
124 Age
125 Dependents
126 MilitaryUS
127 SurveyTooLong
128 SurveyEasy
QuestionText
0 Randomized respondent ID number (not in order ...
1 Do you code as a hobby?
2 Do you contribute to open source projects?
3 In which country do you currently reside?
4 Are you currently enrolled in a formal, degree...
5 Which of the following best describes your cur...
6 Which of the following best describes the high...
7 You previously indicated that you went to a co...
8 Approximately how many people are employed by ...
9 Which of the following describe you? Please se...
10 Including any education, for how many years ha...
11 For how many years have you coded professional...
12 How satisfied are you with your current job? I...
13 Overall, how satisfied are you with your caree...
14 Which of the following best describes what you...
15 Which of the following best describes your cur...
16 When was the last time that you took a job wit...
17 Imagine that you are assessing a potential job...
18 Imagine that you are assessing a potential job...
19 Imagine that you are assessing a potential job...
20 Imagine that you are assessing a potential job...
21 Imagine that you are assessing a potential job...
22 Imagine that you are assessing a potential job...
23 Imagine that you are assessing a potential job...
24 Imagine that you are assessing a potential job...
25 Imagine that you are assessing a potential job...
26 Imagine that you are assessing a potential job...
27 Now, imagine you are assessing a job's benefit...
28 Now, imagine you are assessing a job's benefit...
29 Now, imagine you are assessing a job's benefit...
30 Now, imagine you are assessing a job's benefit...
31 Now, imagine you are assessing a job's benefit...
32 Now, imagine you are assessing a job's benefit...
33 Now, imagine you are assessing a job's benefit...
34 Now, imagine you are assessing a job's benefit...
35 Now, imagine you are assessing a job's benefit...
36 Now, imagine you are assessing a job's benefit...
37 Now, imagine you are assessing a job's benefit...
38 Imagine that a company wanted to contact you a...
39 Imagine that a company wanted to contact you a...
40 Imagine that a company wanted to contact you a...
41 Imagine that a company wanted to contact you a...
42 Imagine that a company wanted to contact you a...
43 Imagine that same company decided to contact y...
44 Imagine that same company decided to contact y...
45 Imagine that same company decided to contact y...
46 Imagine that same company decided to contact y...
47 Imagine that same company decided to contact y...
48 Imagine that same company decided to contact y...
49 Imagine that same company decided to contact y...
50 Think back to the last time you updated your r...
51 Which currency do you use day-to-day? If your ...
52 What is your current gross salary (before taxe...
53 Is that salary weekly, monthly, or yearly?
54 Salary converted to annual USD salaries using ...
55 Three digit currency abbreviation.
56 Which of the following tools do you use to com...
57 Suppose a new developer with four years of exp...
58 Which of the following types of non-degree edu...
59 You indicated that you had taught yourself a p...
60 You indicated previously that you went through...
61 You indicated previously that you had particip...
62 To what extent do you agree or disagree with e...
63 To what extent do you agree or disagree with e...
64 To what extent do you agree or disagree with e...
65 Which of the following programming, scripting,...
66 Which of the following programming, scripting,...
67 Which of the following database environments h...
68 Which of the following database environments h...
69 Which of the following platforms have you done...
70 Which of the following platforms have you done...
71 Which of the following libraries, frameworks, ...
72 Which of the following libraries, frameworks, ...
73 Which development environment(s) do you use re...
74 What is the primary operating system in which ...
75 How many monitors are set up at your workstation?
76 Which of the following methodologies do you ha...
77 What version control systems do you use regula...
78 Over the last year, how often have you checked...
79 Do you have ad-blocking software installed on ...
80 In the past month, have you disabled your ad b...
81 What are the reasons that you have disabled yo...
82 To what extent do you agree or disagree with t...
83 To what extent do you agree or disagree with t...
84 To what extent do you agree or disagree with t...
85 Which of the following actions have you taken ...
86 Please rank the following advertising qualitie...
87 Please rank the following advertising qualitie...
88 Please rank the following advertising qualitie...
89 Please rank the following advertising qualitie...
90 Please rank the following advertising qualitie...
91 Please rank the following advertising qualitie...
92 Please rank the following advertising qualitie...
93 What do you think is the most dangerous aspect...
94 What do you think is the most exciting aspect ...
95 Whose responsibility is it, <u>primarily</u>, ...
96 Overall, what's your take on the future of art...
97 Imagine that you were asked to write code for ...
98 Do you report or otherwise call out the unethi...
99 Who do you believe is ultimately most responsi...
100 Do you believe that you have an obligation to ...
101 How likely is it that you would recommend Stac...
102 How frequently would you say you visit Stack O...
103 Do you have a Stack Overflow account?
104 How frequently would you say you participate i...
105 Have you ever used or visited Stack Overflow J...
106 Do you have an up-to-date Developer Story on S...
107 How likely is it that you would recommend Stac...
108 Do you consider yourself a member of the Stack...
109 Please rate your interest in participating in ...
110 Please rate your interest in participating in ...
111 Please rate your interest in participating in ...
112 Please rate your interest in participating in ...
113 Please rate your interest in participating in ...
114 On days when you work, what time do you typica...
115 On a typical day, how much time do you spend o...
116 On a typical day, how much time do you spend o...
117 In a typical week, how many times do you skip ...
118 What ergonomic furniture or devices do you use...
119 In a typical week, how many times do you exerc...
120 Which of the following do you currently identi...
121 Which of the following do you currently identi...
122 What is the highest level of education receive...
123 Which of the following do you identify as? Ple...
124 What is your age? If you prefer not to answer,...
125 Do you have any children or other dependents t...
126 Are you currently serving or have you ever ser...
127 How do you feel about the length of the survey...
128 How easy or difficult was this survey to compl...
###Markdown
For 2018 we will use the following question:* **DevType**: Which of the following describe you? Please select all that apply.
###Code
df2_2017 = pd.read_csv('./stackoverflow/2017_schema.csv')
with pd.option_context('display.max_rows', None, 'display.max_columns', None):
print(df2_2017)
###Output
Column \
0 Respondent
1 Professional
2 ProgramHobby
3 Country
4 University
5 EmploymentStatus
6 FormalEducation
7 MajorUndergrad
8 HomeRemote
9 CompanySize
10 CompanyType
11 YearsProgram
12 YearsCodedJob
13 YearsCodedJobPast
14 DeveloperType
15 WebDeveloperType
16 MobileDeveloperType
17 NonDeveloperType
18 CareerSatisfaction
19 JobSatisfaction
20 ExCoderReturn
21 ExCoderNotForMe
22 ExCoderBalance
23 ExCoder10Years
24 ExCoderBelonged
25 ExCoderSkills
26 ExCoderWillNotCode
27 ExCoderActive
28 PronounceGIF
29 ProblemSolving
30 BuildingThings
31 LearningNewTech
32 BoringDetails
33 JobSecurity
34 DiversityImportant
35 AnnoyingUI
36 FriendsDevelopers
37 RightWrongWay
38 UnderstandComputers
39 SeriousWork
40 InvestTimeTools
41 WorkPayCare
42 KinshipDevelopers
43 ChallengeMyself
44 CompetePeers
45 ChangeWorld
46 JobSeekingStatus
47 HoursPerWeek
48 LastNewJob
49 AssessJobIndustry
50 AssessJobRole
51 AssessJobExp
52 AssessJobDept
53 AssessJobTech
54 AssessJobProjects
55 AssessJobCompensation
56 AssessJobOffice
57 AssessJobCommute
58 AssessJobRemote
59 AssessJobLeaders
60 AssessJobProfDevel
61 AssessJobDiversity
62 AssessJobProduct
63 AssessJobFinances
64 ImportantBenefits
65 ClickyKeys
66 JobProfile
67 ResumePrompted
68 LearnedHiring
69 ImportantHiringAlgorithms
70 ImportantHiringTechExp
71 ImportantHiringCommunication
72 ImportantHiringOpenSource
73 ImportantHiringPMExp
74 ImportantHiringCompanies
75 ImportantHiringTitles
76 ImportantHiringEducation
77 ImportantHiringRep
78 ImportantHiringGettingThingsDone
79 Currency
80 Overpaid
81 TabsSpaces
82 EducationImportant
83 EducationTypes
84 SelfTaughtTypes
85 TimeAfterBootcamp
86 CousinEducation
87 WorkStart
88 HaveWorkedLanguage
89 WantWorkLanguage
90 HaveWorkedFramework
91 WantWorkFramework
92 HaveWorkedDatabase
93 WantWorkDatabase
94 HaveWorkedPlatform
95 WantWorkPlatform
96 IDE
97 AuditoryEnvironment
98 Methodology
99 VersionControl
100 CheckInCode
101 ShipIt
102 OtherPeoplesCode
103 ProjectManagement
104 EnjoyDebugging
105 InTheZone
106 DifficultCommunication
107 CollaborateRemote
108 MetricAssess
109 EquipmentSatisfiedMonitors
110 EquipmentSatisfiedCPU
111 EquipmentSatisfiedRAM
112 EquipmentSatisfiedStorage
113 EquipmentSatisfiedRW
114 InfluenceInternet
115 InfluenceWorkstation
116 InfluenceHardware
117 InfluenceServers
118 InfluenceTechStack
119 InfluenceDeptTech
120 InfluenceVizTools
121 InfluenceDatabase
122 InfluenceCloud
123 InfluenceConsultants
124 InfluenceRecruitment
125 InfluenceCommunication
126 StackOverflowDescribes
127 StackOverflowSatisfaction
128 StackOverflowDevices
129 StackOverflowFoundAnswer
130 StackOverflowCopiedCode
131 StackOverflowJobListing
132 StackOverflowCompanyPage
133 StackOverflowJobSearch
134 StackOverflowNewQuestion
135 StackOverflowAnswer
136 StackOverflowMetaChat
137 StackOverflowAdsRelevant
138 StackOverflowAdsDistracting
139 StackOverflowModeration
140 StackOverflowCommunity
141 StackOverflowHelpful
142 StackOverflowBetter
143 StackOverflowWhatDo
144 StackOverflowMakeMoney
145 Gender
146 HighestEducationParents
147 Race
148 SurveyLong
149 QuestionsInteresting
150 QuestionsConfusing
151 InterestedAnswers
152 Salary
153 ExpectedSalary
Question
0 Respondent ID number
1 Which of the following best describes you?
2 Do you program as a hobby or contribute to ope...
3 In which country do you currently live?
4 Are you currently enrolled in a formal, degree...
5 Which of the following best describes your cur...
6 Which of the following best describes the high...
7 Which of the following best describes your mai...
8 How often do you work from home or remotely?
9 In terms of the number of employees, how large...
10 Which of the following best describes the type...
11 How long has it been since you first learned h...
12 For how many years have you coded as part of y...
13 For how many years did you code as part of you...
14 Which of the following best describe you?
15 Which of the following best describes you as a...
16 For which of the following platforms do you de...
17 Which of the following describe you?
18 Career satisfaction rating
19 Job satisfaction rating
20 You said before that you used to code as part ...
21 You said before that you used to code as part ...
22 You said before that you used to code as part ...
23 You said before that you used to code as part ...
24 You said before that you used to code as part ...
25 You said before that you used to code as part ...
26 You said before that you used to code as part ...
27 You said before that you used to code as part ...
28 How do you pronounce "GIF"?
29 I love solving problems
30 Building things is very rewarding
31 Learning new technologies is fun
32 I tend to get bored by implementation details
33 Job security is important to me
34 Diversity in the workplace is important
35 It annoys me when software has a poor UI
36 Most of my friends are developers, engineers, ...
37 There's a right and a wrong way to do everything
38 Honestly, there's a lot about computers that I...
39 I take my work very seriously
40 I invest a lot of time into the tools I use
41 I don't really care what I work on, so long as...
42 I feel a sense of kinship to other developers
43 I like to challenge myself
44 I think of myself as competing with my peers
45 I want to change the world
46 Which of the following best describes your cur...
47 During a typical week, approximately how many ...
48 When was the last time that you took a job wit...
49 When you're assessing potential jobs to apply ...
50 When you're assessing potential jobs to apply ...
51 When you're assessing potential jobs to apply ...
52 When you're assessing potential jobs to apply ...
53 When you're assessing potential jobs to apply ...
54 When you're assessing potential jobs to apply ...
55 When you're assessing potential jobs to apply ...
56 When you're assessing potential jobs to apply ...
57 When you're assessing potential jobs to apply ...
58 When you're assessing potential jobs to apply ...
59 When you're assessing potential jobs to apply ...
60 When you're assessing potential jobs to apply ...
61 When you're assessing potential jobs to apply ...
62 When you're assessing potential jobs to apply ...
63 When you're assessing potential jobs to apply ...
64 When it comes to compensation and benefits, ot...
65 If two developers are sharing an office, is it...
66 On which of the following sites do you maintai...
67 Think back to the last time you updated your r...
68 Think back to when you first applied to work f...
69 Congratulations! You've just been put in charg...
70 Congratulations! You've just been put in charg...
71 Congratulations! You've just been put in charg...
72 Congratulations! You've just been put in charg...
73 Congratulations! You've just been put in charg...
74 Congratulations! You've just been put in charg...
75 Congratulations! You've just been put in charg...
76 Congratulations! You've just been put in charg...
77 Congratulations! You've just been put in charg...
78 Congratulations! You've just been put in charg...
79 Which currency do you use day-to-day? If you'r...
80 Compared to your estimate of your own market v...
81 Tabs or spaces?
82 Overall, how important has your formal schooli...
83 Outside of your formal schooling and education...
84 You indicated that you had taught yourself a p...
85 You indicated previously that you went through...
86 Let's pretend you have a distant cousin. They ...
87 Suppose you could choose your own working hour...
88 Which of the following languages have you done...
89 Which of the following languages have you done...
90 Which of the following libraries, frameworks, ...
91 Which of the following libraries, frameworks, ...
92 Which of the following database technologies h...
93 Which of the following database technologies h...
94 Which of the following platforms have you done...
95 Which of the following platforms have you done...
96 Which development environment(s) do you use re...
97 Suppose you're about to start a few hours of c...
98 Which of the following methodologies do you ha...
99 What version control system do you use? If you...
100 Over the last year, how often have you checked...
101 It's better to ship now and optimize later
102 Maintaining other people's code is a form of t...
103 Most project management techniques are useless
104 I enjoy debugging code
105 I often get “into the zone” when I'm coding
106 I have difficulty communicating my ideas to my...
107 It's harder to collaborate with remote peers t...
108 Congratulations! The bosses at your new employ...
109 Thinking about your main coding workstation, h...
110 Thinking about your main coding workstation, h...
111 Thinking about your main coding workstation, h...
112 Thinking about your main coding workstation, h...
113 Thinking about your main coding workstation, h...
114 How much influence do you have on purchasing d...
115 How much influence do you have on purchasing d...
116 How much influence do you have on purchasing d...
117 How much influence do you have on purchasing d...
118 How much influence do you have on purchasing d...
119 How much influence do you have on purchasing d...
120 How much influence do you have on purchasing d...
121 How much influence do you have on purchasing d...
122 How much influence do you have on purchasing d...
123 How much influence do you have on purchasing d...
124 How much influence do you have on purchasing d...
125 How much influence do you have on purchasing d...
126 Which of the following best describes you?
127 Stack Overflow satisfaction
128 Which of the following devices have you used t...
129 Over the last three months, approximately how ...
130 Over the last three months, approximately how ...
131 Over the last three months, approximately how ...
132 Over the last three months, approximately how ...
133 Over the last three months, approximately how ...
134 Over the last three months, approximately how ...
135 Over the last three months, approximately how ...
136 Over the last three months, approximately how ...
137 The ads on Stack Overflow are relevant to me
138 The ads on Stack Overflow are distracting
139 The moderation on Stack Overflow is unfair
140 I feel like a member of the Stack Overflow com...
141 The answers and code examples I get on Stack O...
142 Stack Overflow makes the Internet a better place
143 I don't know what I'd do without Stack Overflow
144 The people who run Stack Overflow are just in ...
145 Which of the following do you currently identi...
146 What is the highest level of education receive...
147 Which of the following do you identify as?
148 This survey was too long
149 The questions were interesting
150 The questions were confusing
151 I'm interested in learning how other developer...
152 What is your current annual base salary, befor...
153 You said before that you are currently learnin...
###Markdown
For 2017 data we can use:* **Professional**: Which of the following best describes you? It will be more difficult to identify our group of interest from the 2018 data because the respondents selected all options that applied. What we will try to do is count only the rows that do not contain professional-developer terms in the answer. From the database we can create a list of the possible values and the list of the values I identify as something like a professional developer:
###Code
possible_vals = ['Full-stack developer','Database administrator','DevOps specialist','System administrator',
'Engineering manager','Data or business analyst','Desktop or enterprise applications developer',
'Game or graphics developer','QA or test developer','Student','Back-end developer',
'Front-end developer','Designer','C-suite executive (CEO, CTO, etc.)','Mobile developer',
'Data scientist or machine learning specialist','Marketing or sales professional',
'Product manager','Embedded applications or devices developer','Educator or academic researcher']
values_prof_dev = ['Full-stack developer','Database administrator','DevOps specialist','System administrator',
'Engineering manager','Data or business analyst','Desktop or enterprise applications developer',
'Game or graphics developer','QA or test developer','Student','Back-end developer',
'Front-end developer','Designer','Mobile developer','Data scientist or machine learning specialist',
'Product manager','Embedded applications or devices developer']
# I want to exclude from the population students and null answers. I will work only with the remaining population
df_2018 = df_2018.dropna(subset=['DevType'])
df_2018 = df_2018[df_2018.columns.drop(list(df_2018.filter(regex='Student')))]  # drops the Student column; respondents who selected 'Student' in DevType are handled via values_prof_dev in the count below
total2018=len(df_2018['Respondent'])
# Now we will create a column to indicate if the respondent is a professional developer or not
prof2018 = 0
for idx in range(len(df_2018['DevType'].values)):
if ';' in df_2018['DevType'].values[idx]:
profs = df_2018['DevType'].values[idx].split(';')
else:
profs = [df_2018['DevType'].values[idx]]
for val in profs:
if val in values_prof_dev:
prof2018+=1
break
1 - prof2018/total2018
###Output
_____no_output_____
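###Markdown
As a side sketch (the names `prof_dev_set`, `is_prof_dev` and `share_non_prof_2018` are hypothetical), the same per-row check can be written without the explicit Python loop by splitting `DevType` and testing for an intersection with `values_prof_dev`:
###Code
# Vectorised variant of the loop above: a respondent counts as a professional
# developer if any of their selected DevType options appears in values_prof_dev.
prof_dev_set = set(values_prof_dev)
is_prof_dev = df_2018['DevType'].str.split(';').apply(
    lambda opts: bool(prof_dev_set.intersection(opts)))
share_non_prof_2018 = 1 - is_prof_dev.mean()
###Output
_____no_output_____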
###Markdown
Less than 1% selected an option that is clearly not related to a programming activity (like C-suite, Marketing/Sales, Educator/academic researcher). The nature of this question is different from 2017 and 2019, so it is difficult to compare the results. Now let's take a look at the 2017 data.
###Code
professional = df_2017['Professional'].value_counts().reset_index()
professional.rename(columns={'index': 'Professional dev', 'Professional': 'Count'}, inplace=True)
professional['Percent'] = professional.Count / len(df_2017['Professional'])
professional.style.format({'Count': "{:,}", 'Percent': '{:.2%}'})
pd.pivot_table(df_2017,index=["Professional","ProgramHobby"], values=["Respondent"], aggfunc=lambda x: len(x.unique()))
###Output
_____no_output_____ |
scripts/FPTU_rals_model_b-noisy.ipynb | ###Markdown
Dynamical System Approximation This notebook aims at learning a functional correlation based on given snapshots. The data is created through the following ODE, which is called the Fermi-Pasta-Ulam (FPU) model:\begin{align}\frac{d^2}{dt^2} x_i = (x_{i+1} - 2x_i + x_{i-1}) + 0.7((x_{i+1} - x_i)^3 - (x_i-x_{i-1})^3) + \mathcal{N}(0,\sigma)\end{align}
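For intuition, a sketch of the right-hand side of this ODE in plain NumPy (the function `fpu_rhs` and the zero-displacement boundary conditions are assumptions for illustration; the samples used below actually come from `hp.fermi_pasta_ulam` in the accompanying `helpers` module):
###Code
import numpy as np

def fpu_rhs(x, beta=0.7):
    # Acceleration of each oscillator for a displacement vector x,
    # with x_0 = x_{d+1} = 0 assumed as boundary conditions.
    xp = np.concatenate(([0.0], x, [0.0]))                       # padded displacements
    lin = xp[2:] - 2 * xp[1:-1] + xp[:-2]                        # linear coupling
    cub = (xp[2:] - xp[1:-1]) ** 3 - (xp[1:-1] - xp[:-2]) ** 3   # cubic coupling
    return lin + beta * cub
###Output
_____no_output_____
###Markdown
In the experiments below, the samples x and labels y come from `hp.fermi_pasta_ulam`, and Gaussian noise is added to y.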
###Code
import numpy as np
import xerus
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
import time
from itertools import chain
import helpers as hp
import pandas as pd
%precision 4
#Construction of the exact solution in the choosen basis.
#Transform implements the basis transformation for given b
#For the exact solution we also factor out the kernel, by use of the Pseudoinverse.
def transform(X):
M = np.zeros([4,4])
M[0,0] = 1 #1
M[1,1] = 1 #1
M[2,2] = 1.5 #1.5
M[3,3] = 2.5 #2.5
M[0,2] = -0.5 #-0.5
M[1,3] = -1.5 #-1.5
t = xerus.Tensor.from_ndarray(np.linalg.inv(M))
a1,a2,a3,a4,b1,b2,b3,b4 = xerus.indices(8)
for eq in range(noo):
tmp = X.get_component(eq)
tmp2 = C2list[eq]
tmp(a1,a2,a3,a4) << tmp(a1,b2,a3,a4)* t(a2,b2)
X.set_component(eq,tmp)
return X
def project(X):
dim = ([(3 if i == 0 or i == noo -1 else 4) for i in range(0,noo)])
dim.extend(dim)
C2T = xerus.TTOperator(dim)
for eq in range(noo):
idx = [0 for i in range(noo)]
if eq == 0:
idx[0] = 2
idx[1] = 3
elif eq == noo -1:
idx[noo-2] = 1
idx[noo-1] = 1
elif eq == noo -2:
idx[eq-1] = 1
idx[eq] = 2
idx[eq+1] = 2
else:
idx[eq-1] = 1
idx[eq] = 2
idx[eq+1] = 3
idx.extend(idx)
C2T += xerus.TTOperator.dirac(dim,idx)
C2T.round(1e-12)
i1,i2,i3,i4,i5,i6,j1,j2,j3,j4,k1,k2,k3 = xerus.indices(13)
X(i1^noo,j1^noo) << X(i1^noo,k1^noo) * C2T(k1^noo,j1^noo)
X.round(1e-12)
return X
def exact(noo,p):
beta = 0.7
C1ex = hp.construct_exact_fermit_pasta(noo,p,beta)
C1ex = transform(C1ex)
C1ex = project(C1ex)
return C1ex
###Output
_____no_output_____
###Markdown
Recovery algorithm We want to recover the exact solution with the help of a regularized ALS (alternating least squares) scheme.
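The actual sweeps are implemented in `hp.run_als`; as a rough sketch of what one regularized ALS micro-step typically does (an assumption about that helper, not its verbatim implementation), each local update solves a ridge-regularized least-squares problem for one core while all other cores are held fixed:
###Code
import numpy as np

def local_update(A, y, lam=1.0):
    # One ALS micro-step (sketch): minimise ||A c - y||^2 + lam * ||c||^2 over the
    # unfolded component c, with the other cores fixed inside A.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
###Output
_____no_output_____
###Markdown
The cells below set up the random initialization and run the full regularized sweeps with `lam = 1`.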
###Code
# initialize simulation
def initialize(p,noo):
rank = 4 #fix rank
dim = [p for i in range(0,noo)]
dim.extend([4 for i in range(0,noo)])
dim[noo] = 3
dim[2*noo-1]=3
C = xerus.TTOperator.random(dim,[rank for i in range(0,noo-1)]) # initalize randomly
C.move_core(0,True)
return C
# We choose different triples of dimension, sample size and noise level to run the algorithm for.
data_noo_nos = [(12,4500,1e-8),(12,4500,1e-7),(12,4500,1e-6),(12,4500,1e-5),(12,4500,1e-4)\
,(12,4500,1e-3),(12,4500,1e-2),(12,4500,1e-1),(12,4500,1)]
# the triples above are the ones used in the simulations in the paper
# running all of them is computationally intensive, so a shorter list is used below
data_noo_nos = [(12,4500,1e-2),(12,4500,1e-1),(12,4500,1)]
#data_noo_nos = [(8,3000)] #specify pairs to simulate for
#runs = 1 #specify number of runs for each pair (10 in the paper)
runs = 3
max_iter = 30 # specify number of sweeps
output = 'data.csv' # specify name of output file
# build data structure to store solution
tuples = []
for data in data_noo_nos:
noo = data[0]
nos = data[1]
sigma = data[2]
for r in range(0,runs):
tuples.append((noo,nos,sigma, r))
index = pd.MultiIndex.from_tuples(tuples, names=['d', 'm','sigma','runs'])
# The results of each optimization are stored in a DataFrame
col = ["data norm", "noise norm"]
col.extend([i for i in range(1,max_iter+1)])
df = pd.DataFrame(np.zeros([len(tuples),max_iter+2]), index=index,columns=col)
print(len(index))
df["data norm"]
np.set_printoptions(precision=4)
#loop over all pairs of samples, calls hp.run_als for the solution
lam = 1 #regularization parameter
#Master iteration
psi = hp.basis(0) # get basis functions, Legendre
p = len(psi)
for data in data_noo_nos:
noo = data[0]
nos = data[1]
sigma = data[2]
print( "(noo,nos,sigma) = (" + str(noo) +',' + str(nos) +',' + str(sigma) + ')' )
C2list = hp.build_choice_tensor2(noo) # build selection tensor as list of pxnos matrices
C1ex = exact(noo,p) # construct exact solution
print("C1ex frob_norm: " +str(C1ex.frob_norm()))
for r in range(runs):
[x, y] = hp.fermi_pasta_ulam(noo, nos) # create samples and labels
df["data norm"].loc[(noo,nos,sigma,r)] = np.linalg.norm(y)
noise = np.random.normal(0,sigma,size=y.shape)
df["noise norm"].loc[(noo,nos,sigma,r)] = np.linalg.norm(noise)
y = y + noise
Alist = hp.build_data_tensor_list2(noo,x,nos,psi,p) # build the dictionary tensor for the given samples x
Y = xerus.Tensor.from_ndarray(y)
C1 = initialize(p,noo) # initialize the als randomly
errors = hp.run_als(noo,nos,C1,C2list,Alist,C1ex,Y,max_iter,lam) # run the regularized ALS iteration
#post processing, store data in dataframe
for i in range(1,len(errors)):
df[i].loc[(noo,nos,sigma,r)] = errors[i-1]
print("Run: " +str(r) + " finished result = " + str(errors) + " data norm = " + str( df["data norm"].loc[(noo,nos,sigma,r)])+ " noise norm = " + str( df["noise norm"].loc[(noo,nos,sigma,r)]))
df.to_csv(output)
###Output
L2(-1,1) orthogonal polynomials
(noo,nos,sigma) = (12,4500,0.01)
C1ex frob_norm: 19.885773809434728
|
Misc/Untitled.ipynb | ###Markdown
Matrics and Gradients
###Code
import numpy as np
import torch
import torchvision
X_train = np.array([[23, 32, 43, 343, 33, 44, 84, 34, 43, 423, 45, 56, 645, 254, 432],
[344, 3454, 32, 656, 783, 645, 23, 57, 23, 775, 232, 757, 864, 85, 33],
[23, 90, 233, 235, 907, 879, 402, 23, 44, 2323, 36, 232, 66, 232, 66],
[239, 845, 323, 664, 64, 42, 98, 903, 886, 332, 64, 892, 34, 240, 89],
[89, 23, 534, 134, 242, 53, 89, 775, 73, 353, 80, 489, 235, 24, 64]])
###Output
_____no_output_____
###Markdown
for i in X_train: squares = np.sum(i) print(squares) for i in X_test: squares = np.sum(i) print(squares)
###Code
y = np.array([1, 2, 3, 2, 1])
y.shape
X_test = np.array([[34, 89, 42, 24, 64, 23, 136, 643, 42, 633, 249, 24, 89, 23, 52],
[224, 646, 324, 89, 535, 64, 42, 64, 42, 646, 89, 909, 42, 53, 9],
[23, 456, 42, 646, 75, 24, 189, 53, 64, 89, 635, 89, 42, 63, 23],
[239, 845, 323, 664, 64, 42, 98, 903, 886, 332, 64, 892, 34, 240, 189]])
num_test = X_test.shape[0]
num_train = X_train.shape[0]
# 3 rows and 5 columns for all the X_train examples
dists = np.zeros((num_test, num_train))
for i in range(X_test.shape[0]):
for j in range(X_train.shape[0]):
dists[i, j] = np.sqrt(np.sum(np.square(X_test[i]-X_train[j])))
dists
np.argsort(dists)
# dists.sort(key=lambda tup: tup[1])
closest_y = []
for i in sorted_dists:
# print(y[k][0])
print(i)
# print('y[k]:', y[k])
# closest_y.ppend(y[k][0])
k = 1
closest_y = []
for i in range(k):
closest_y.append(sorted_dists[i][0])
# closest_y.append(y[sorted_dists[i]])
# closest_y.append(y[sorted_dists][k])
print(closest_y)
closest_y = []
for i in sorted_dists:
# print(i)
closest = i[0]
closest_y.append(y[closest])
print(closest_y)
# most common label in labels:
num_neighbors = X_train.shape[0]
num_neighbors
# closest_y = []
for i in range(num_test):
    closest_y = []
    sorted_dists = np.argsort(dists)
    for idx in sorted_dists[i]:
#         print(idx)
        closest = idx
        print(closest)
#         closest_y.append(y[closest])
#     y_pred[i] = max(set(closest_y), key=closest_y.count)
#     closest_y.append(sorted_dists[i][0])
closest_y
minInRows = np.amin(dists, axis=1)
result = np.where(dists == np.amin(dists, axis=1))
print(result)
dists
for i in dists:
min_i = np.argmin(i, axis=0)
print(min_i)
# predict_labels
# -dists: matrix of distances
# in dists: 1st image has the minimum distance at 812 so it should get the test label at 812.
# the 2nd image has its min distance at 1162 and so it should get the label at 1162
# same is the case for the third image
# y_test = []
labels = []
for i in dists:
# get the minimum
# indices_dists = np.min(i)
# get it's index
min_i = np.argmin(i, axis=0)
labels.append(y[min_i])
# min_index = np.where(dists == np.amin(dists, axis=1))
# y_test.append(min_index[1])
# y_test = list(zip(min_index[1]))
# print(min_index)
# print(type(min_index))
print(labels)
# get the indices first
# then get the value for those form y by doing y[i]
# y_test.append(y[i])
# get the index of the mininum from the dists numpy and compare it with y array
# y_test[i].append(np.min(i[]))
# arr == np.amin(arr)  # scratch note: 'arr' is not defined in this notebook
###Output
_____no_output_____
###Markdown
for j in dists: min_in = np.min(j) print(min_in) print(dists[0][min_in]) for every array in jy_test = []for j in dists: go to the minimum value: min_index = (dists[np.min(j)]) print(min_index) y_test.append(y[min_index])
###Code
y_test
###Output
_____no_output_____
###Markdown
Broadcasting
###Code
A = np.array([[56.0, 0.0, 4.4, 68.0],
[1.2, 104.0, 52.0, 0.0],
[1.8, 135.0,99.0, 0.9]])
print(A)
cal = A.sum(axis=0)
print(cal)
percentage = 100*A/cal.reshape(1,4)
print(percentage)
# if one dimension of the matrix and the vector matches (and the other is 1), NumPy broadcasts
# the vector so it behaves as if it had the same shape as the matrix, then the operation is applied elementwise.
# the row refers to test_point
# the column refers to training point
dists
dists.shape
num_test = dists.shape[0]
y_pred = np.zeros(num_test)
print(y_pred)
sorted_dist = np.argsort(dists)
print(sorted_dist)
for i in range(num_test):
closest_y = []
sorted_dists = np.argsort(dists)
print(sorted_dists)
print(sorted_dists[i][0])
closest_y.append(sorted_dists[i][0])
closest_y
import numpy as np
X = np.array([[2, 3, 1, 2, 3, 2, 1, 3, 4, 4],
[3, 3, 1, 2, 4, 5, 3, 5, 3, 2],
[1, 2, 1, 3, 5, 4, 3, 4, 2, 1]])
W = np.array([[0.02, 0.01, 0.03],
[0.04, 0.04, 0.03],
[0.01, 0.05, 0.02],
[0.05, 0.01, 0.01],
[0.01, 0.09, 0.01],
[0.02, 0.04, 0.02],
[0.03, 0.01, 0.07],
[0.01, 0.05, 0.02],
[0.02, 0.06, 0.02],
[0.01, 0.06, 0.04]])
scores= np.dot(X, W)
scores
scores.shape[0]
reg = 1
# scores for the correct classes
yi_scores = scores[np.arange(scores.shape[0]), y]
margins = np.maximum(0, (scores - np.matrix(yi_scores).T + 1))
margins
margins[np.arange(num_train), y] = 0
reg = 0.000005
loss = np.mean(np.sum(margins, axis=1))
loss += 0.5* reg * np.sum(W * W)
loss
binary = margins
binary[margins > 0] = 1
row_sum = np.sum(binary, axis=1)
binary[np.arange(num_train), y] = -row_sum.T
dW = np.dot(X.T, binary)
# Average
dW /= num_train
dW += reg * W
dW
# since we only calculate the loss where the j is not equal to y_i
margins
y = np.array([1, 2, 1])
dW = np.zeros(W.shape)
dW = np.zeros(W.shape)
loss = 0.0
num_classes = W.shape[1]
num_train = X.shape[0]
scores = np.dot(X, W)
correct_class_score = scores[y]
scores[y]
scores
margin = (scores - correct_class_score + 1)
margin
for j in range(num_classes):
    margin = (scores[j] - correct_class_score + 1)
    if margin.any():
        pass  # exploratory placeholder; the full per-sample version appears below
        # for i in range(X.shape[0]):
dW = np.zeros(W.shape)
loss = 0.0
num_classes = W.shape[1]
num_train = X.shape[0]
for i in range(num_train):
scores = np.dot(X[i], W)
correct_score = scores[y[i]]
for j in range(num_classes):
margin = (scores[j] - correct_score + 1)
if margin > 0:
loss += margin
# for all the classes other than the y[i] classes, += X[i,:]
dW[:, j] += X[i, :]
dW[:, y[i]] -= X[i, :]
scores
X = np.array([[2, 3, 1, 2, 3, 2, 1, 3, 4, 4],
[3, 3, 1, 2, 4, 5, 3, 5, 3, 2],
[1, 2, 1, 3, 5, 4, 3, 4, 2, 1]])
X
dW
dW
correct_class_score = scores[y]
correct_class_score
# that is the score for the correct class
# what we are trying to do is minimize the loss for that class
# so we are going to calculate the loss:
loss = 0.0
for j in range(3):
if j == y:
continue
margin = (scores[j] - correct_class_score + 1)
if margin >0:
loss += margin
dW[:, j] += X
dW[:, y] -= X
dW
margin
margin
print(loss)
dW = np.zeros(W.shape)
dW
###Output
_____no_output_____ |
RegExDragon.ipynb | ###Markdown
Regex Extractor TODO1. define regexes - IBAN - KvK - Amount - Name - Invoice reference - Total2. take input3. match with regex4. get results5. package them as per Rick's specifications REGEXES- Iban: "[a-zA-Z]{2}[0-9]{2}[a-zA-Z0-9]{4}[0-9]{7}([a-zA-Z0-9]?){0,16}"- KVK: ""- Date: ""- Amount: "^(€|$)?\s?(\d{1,10})(\.|\,)(\d{2})(€|$)?$"- Name: ""- Invoice reference: ""- Total: "" Setup
###Code
import re
import os
import glob
###Output
_____no_output_____
###Markdown
define regexes
###Code
# Works
IbanRegex = re.compile(r'[a-zA-Z]{2}[0-9]{2}[a-zA-Z0-9]{4}[0-9]{7}([a-zA-Z0-9]?){0,16}')
# Takes 4 different numbers from an example input file
KvKRegex = re.compile(r'\d{8}')
# Works
AmountRegex = re.compile(r'[€|$]\s?\d{1,10}[\.|\,]\d{2}')
# Pulls out 4 different files
ReferenceRegex = re.compile(r'[A-Z-0-9]{8}')
#
NameRegex = re.compile(r'[a-zA-Z]{12}?\s?([a-zA-Z]{12})')
###Output
_____no_output_____
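###Markdown
A quick sanity check of the IBAN pattern (a sketch: the sample string and the names `sample` and `found_iban` are hypothetical, built around the IBAN that appears in the output near the end of this notebook):
###Code
# Hypothetical sample text, only used to exercise the pattern defined above.
sample = "Betaling naar NL02INGB0681309748 voor factuur F2021-001."
match = IbanRegex.search(sample)
found_iban = match.group(0) if match else None  # expected: 'NL02INGB0681309748'
###Output
_____no_output_____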
###Markdown
definitions
###Code
outputData = []
def importer(filename):
    # Read the invoice text from disk and return it as a single string.
    with open(filename, 'r') as f:
        return f.read()
def IbanMatcher(invoice):
IbanRegex = re.compile(r'[A-Z]{2}\d\d\s?[A-Z0-9]{4}[0-9]{8,20}')
IbanMatcher.result = IbanRegex.findall(invoice)
outputData.append({'IBAN':IbanMatcher.result})
def KvKMatcher(invoice):
KvKRegex = re.compile(r'[0-9]{8}')
KvKMatcher.result = KvKRegex.findall(invoice)
outputData.append({'KvK_Nummer':KvKMatcher.result})
def AmountMatcher(invoice):
    AmountRegex = re.compile(r'TE BETALEN\n\n€\s\d{1,10}\.\d{2}')
AmountMatcher.result = AmountRegex.findall(invoice)
outputData.append({'amounts':AmountMatcher.result})
###Output
_____no_output_____
###Markdown
run
###Code
file = open('Output.txt', 'r')
invoice = file.read()
IbanMatcher(invoice)
KvKMatcher(invoice)
AmountMatcher(invoice)
###Output
_____no_output_____
###Markdown
make tuple
###Code
print(invoice)
print(outputData)
###Output
[{'IBAN': ['NL02INGB0681309748']}, {'KvK_Nummer': ['24269393', '00001926', '06813097', '88163931']}, {'amounts': ['TE BETALEN\n\n€ 14804.57']}]
|
XGBoost/XGBoost_Regression/XGBoost (Regression).ipynb | ###Markdown
Machine Learning in Python XGBoost Seyed Mohammad Sajadi Topics:- [ ] What is XGBoost (Review)- [ ] XGBoost in action (Regression) What is XGBoost? eXtreme Gradient Boosting (XGBoost) is a scalable and improved version of the gradient boosting algorithm (terminology alert) designed for efficacy, computational speed and model performance. It is an open-source library and a part of the Distributed Machine Learning Community. XGBoost is a perfect blend of software and hardware capabilities designed to enhance existing boosting techniques with accuracy in the shortest amount of time. What makes XGBoost a go-to algorithm for winning Machine Learning and Kaggle competitions? XGBoost in Action (Regression) Importing the libraries
###Code
# !pip install xgboost
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score, KFold
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import r2_score
###Output
_____no_output_____
###Markdown
Load and Prepare Data * Dataset We will be using a dataset that encapsulates the carbon dioxide emissions generated from burning coal for producing electric power in the United States of America between 1973 and 2016. Using XGBoost, we will try to predict the carbon dioxide emissions for the next few years.
###Code
#Read the dataset and print the top 5 elements of the dataset
data = pd.read_csv('CO2.csv')
data.head(5)
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 523 entries, 0 to 522
Data columns (total 2 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 YYYYMM 523 non-null int64
1 Value 523 non-null float64
dtypes: float64(1), int64(1)
memory usage: 8.3 KB
###Markdown
We use Pandas to import the CSV file. We notice that the dataframe contains a column 'YYYYMM' that needs to be separated into 'Year' and 'Month' columns. In this step, we will also remove any null values that we may have in the dataframe. Finally, we will retrieve the last five elements of the dataframe to check if our code worked. And it did!
###Code
data['Month'] = data.YYYYMM.astype(str).str[4:6].astype(float)
data['Year'] = data.YYYYMM.astype(str).str[0:4].astype(float)
data.shape
data.drop(['YYYYMM'], axis=1, inplace=True)
data.replace([np.inf, -np.inf], np.nan, inplace=True)
data.tail(5)
# check for data type
print(data.dtypes)
data.isnull().sum()
data.shape
X = data.loc[:,['Month', 'Year']].values
y = data.loc[:,'Value'].values
y
data_dmatrix = xgb.DMatrix(X,label=y)
data_dmatrix
from sklearn.model_selection import train_test_split
# keep a single split; random_state=5 is the split the results below were produced with
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=5)
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
reg_mod = xgb.XGBRegressor(
n_estimators=1000,
learning_rate=0.08,
subsample=0.75,
colsample_bytree=1,
max_depth=7,
gamma=0,
)
reg_mod.fit(X_train, y_train)
# After fitting, check the model with 10-fold cross-validation on the training data.
scores = cross_val_score(reg_mod, X_train, y_train,cv=10)
print("Mean cross-validation score: %.2f" % scores.mean())
reg_mod.fit(X_train,y_train)
predictions = reg_mod.predict(X_test)
rmse = np.sqrt(mean_squared_error(y_test, predictions))
print("RMSE: %f" % (rmse))
from sklearn.metrics import r2_score
r2 = np.sqrt(r2_score(y_test, predictions))
print("R_Squared Score : %f" % (r2))
###Output
R_Squared Score : 0.990117
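###Markdown
As an optional extra check (a sketch; the variable name `mae` is hypothetical), `mean_absolute_error` is already imported above and could be reported alongside RMSE:
###Code
# MAE is less sensitive to large individual errors than RMSE, so comparing the
# two gives a feel for how heavy-tailed the residuals are.
mae = mean_absolute_error(y_test, predictions)
###Output
_____no_output_____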
###Markdown
* As you can see, these statistical metrics reinforce our confidence in this model: RMSE ~ 4.95 and R-Squared Score ~ 98.8%. Now, let's visualize the original data set using the seaborn library.
###Code
plt.figure(figsize=(10, 5), dpi=80)
sns.lineplot(x='Year', y='Value', data=data)
plt.figure(figsize=(10, 5), dpi=80)
x_ax = range(len(y_test))
plt.plot(x_ax, y_test, label="test")
plt.plot(x_ax, predictions, label="predicted")
plt.title("Carbon Dioxide Emissions - Test and Predicted data")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Finally, the last piece of code plots the forecasted carbon dioxide emissions up to around 2025.
###Code
plt.figure(figsize=(10, 5), dpi=80)
df=pd.DataFrame(predictions, columns=['pred'])
df['date'] = pd.date_range(start='8/1/2016', periods=len(df), freq='M')
sns.lineplot(x='date', y='pred', data=df)
plt.title("Carbon Dioxide Emissions - Forecast")
plt.show()
###Output
_____no_output_____ |
kaggle/ml-fraud-detection-master/k-means.ipynb | ###Markdown
K-means
###Code
import numpy as np
import sklearn as sk
import pandas as pd
df = pd.read_csv('creditcard.csv', low_memory=False)
df.head()
from sklearn.cluster import KMeans
from time import time
import matplotlib.pyplot as plt
from sklearn import metrics
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale
from sklearn.model_selection import train_test_split
X = df.iloc[:,:-1]
y = df['Class']
X_scaled = scale(X)
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X_scaled)
X_train, X_test, y_train, y_test = train_test_split(X_reduced, y, test_size = 0.33, random_state=500)
kmeans = KMeans(init='k-means++', n_clusters=2, n_init=10)
kmeans.fit(X_train)
# Step size of the mesh. Decrease to increase the quality of the VQ.
h = .01
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max] x [y_min, y_max].
x_min, x_max = X_reduced[:, 0].min() - 1, X_reduced[:, 0].max() + 1
y_min, y_max = X_reduced[:, 1].min() - 1, X_reduced[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Obtain labels for each point in mesh. Use last trained model.
Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(1)
plt.clf()
plt.imshow(Z, interpolation='nearest',
extent=(xx.min(), xx.max(), yy.min(), yy.max()),
cmap=plt.cm.Paired,
aspect='auto', origin='lower')
plt.plot(X_reduced[:, 0], X_reduced[:, 1], 'k.', markersize=2)
# Plot the centroids as a white X
centroids = kmeans.cluster_centers_
plt.scatter(centroids[:, 0], centroids[:, 1],
marker='x', s=169, linewidths=3,
color='w', zorder=10)
plt.title('K-means clustering on the credit card fraud dataset (PCA-reduced data)\n'
'Centroids are marked with white cross')
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(())
plt.yticks(())
plt.show()
predictions = kmeans.predict(X_test)
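# Note: k-means cluster labels (0/1) are arbitrary, so the evaluation below assumes
# that cluster 1 happens to correspond to the fraud class.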
pred_fraud = np.where(predictions == 1)[0]
real_fraud = np.where(y_test == 1)[0]
false_pos = len(np.setdiff1d(pred_fraud, real_fraud))
pred_good = np.where(predictions == 0)[0]
real_good = np.where(y_test == 0)[0]
false_neg = len(np.setdiff1d(pred_good, real_good))
false_neg_rate = false_neg/(false_pos+false_neg)
accuracy = (len(X_test) - (false_neg + false_pos)) / len(X_test)
print("Accuracy:", accuracy)
print("False negative rate (with respect to misclassifications): ", false_neg_rate)
print("False negative rate (with respect to all the data): ", false_neg / len(predictions))
print("False negatives, false positives, mispredictions:", false_neg, false_pos, false_neg + false_pos)
print("Total test data points:", len(X_test))
###Output
Accuracy: 0.5474799706342367
False negative rate (with respect to misclassifications): 0.0025393242576003386
False negative rate (with respect to all the data): 0.0011490950876185005
False negatives, false positives, mispredictions: 108 42423 42531
Total test data points: 93987
|
section_4/01_sort.ipynb | ###Markdown
Sorting **Sorting** is an algorithm that rearranges data into ascending or descending order. Many sorting algorithms exist, and their computational complexity differs. Here we explain the following four well-known sorting algorithms: * Selection sort * Bubble sort * Merge sort * Quick sort ◎Selection sort **Selection sort** is a sorting algorithm that searches the remaining elements for the minimum (maximum) value and swaps it with the first (last) element. It is intuitive and simple, but its time complexity (both average and worst case) is $\mathcal{O}(n^2)$, so it becomes slow for large data sets. Its space complexity is $\mathcal{O}(1)$. Below is a Python implementation of selection sort.
###Code
data = [3, 5, 2, 1, 4]  # data to be sorted
# ----- Selection sort -----
for i in range(0, len(data)):
min_idx = i
for j in range(i+1, len(data)):
if data[j] < data[min_idx]:
min_idx = j
    # swap the values
min = data[min_idx]
data[min_idx] = data[i]
data[i] = min
print(data)
###Output
_____no_output_____
###Markdown
◎Bubble sort **Bubble sort** is a sorting algorithm that orders data by comparing and swapping adjacent elements. The algorithm is simple and well suited to parallel processing, but like selection sort its worst-case time complexity is $\mathcal{O}(n^2)$, so it becomes slow for large data sets. Its space complexity is $\mathcal{O}(1)$. Below is a Python implementation of bubble sort.
###Code
data = [3, 5, 2, 1, 4]  # data to be sorted
# ----- Bubble sort -----
for i in range(0, len(data)):
for j in range(0, len(data)-i-1):
if data[j] > data[j+1]:
larger = data[j]
data[j] = data[j+1]
data[j+1] = larger
print(data)
###Output
_____no_output_____
###Markdown
◎Quick sort **Quick sort** is an algorithm that sorts data by repeatedly partitioning it around an element called the pivot. It requires relatively few comparisons and swaps, and in practice it is often considered the fastest sorting algorithm. Quick sort can be described by the following steps: 1. If the number of elements is 1 or less, return the data as is. 2. Choose one element as the pivot. 3. Partition the data into a group of elements less than or equal to the pivot and a group of elements greater than the pivot. 4. Apply steps 1-3 recursively to each group, and finally concatenate all the groups. The average time complexity is $\mathcal{O}(n\log n)$, but depending on the size and ordering of the data it can be slow, with a worst-case time complexity of $\mathcal{O}(n^2)$. Its space complexity is $\mathcal{O}(n)$. Below is an example Python implementation of quick sort.
###Code
def quick_sort(data):
if len(data) <= 1:
return data
    pivot = data[0]  # pivot
    less = []  # elements less than or equal to the pivot
    more = []  # elements greater than the pivot
for i in range(1, len(data)):
if data[i] <= pivot:
less.append(data[i])
else:
more.append(data[i])
return quick_sort(less) + [pivot] + quick_sort(more)
data = [3, 5, 6, 7, 2, 1, 4, 5, 1]  # data to be sorted
print(quick_sort(data))
###Output
_____no_output_____
###Markdown
Note that techniques like quick sort that sort by repeatedly splitting the data are called the **divide-and-conquer method**. @ Exercise Let's implement **merge sort**, another divide-and-conquer algorithm. Merge sort splits the data in two, sorts each half, and merges the results into a single sorted list. Its worst-case complexity is lower than quick sort's, but on randomly ordered data quick sort is generally faster. Merge sort can be described by the following steps: 1. Split the data into two halves A and B. 2. Merge-sort A and B individually. 3. Merge A and B. Step 2 is applied recursively. The merge in step 3 is performed as follows: 1. Compare the front elements of A and B, remove the smaller one, and append it to the end of list C. 2. Repeat step 1 until either A or B runs out of elements. 3. Append the remaining elements to the end of C; C is the merged data. The average time complexity is $\mathcal{O}(n\log n)$ and the worst-case time complexity is also $\mathcal{O}(n\log n)$. The space complexity is $\mathcal{O}(n)$. Add Python code to the cell below to implement merge sort.
###Code
def merge_sort(data):
if len(data) <= 1:
return data
    center = len(data) // 2  # middle index
data_a = data[:center] # A
data_b = data[center:] # B
    return  # ← add code here
def merge(data_a, data_b):  # merge the two sorted lists
merged = []
a_idx = 0
b_idx = 0
    while a_idx < len(data_a) and b_idx < len(data_b):  # repeat until either list is exhausted
        if data_a[a_idx] < data_b[b_idx]:  # compare the front elements
            # ← add code here
a_idx += 1
else:
            # ← add code here
b_idx += 1
    return merged + data_a[a_idx:] + data_b[b_idx:]  # append the leftover elements
data = [3, 5, 6, 7, 2, 1, 4, 5, 1]  # data to be sorted
print(merge_sort(data))
###Output
_____no_output_____
###Markdown
@ Solution example
###Code
def merge_sort(data):
if len(data) <= 1:
return data
    center = len(data) // 2  # middle index
data_a = data[:center] # A
data_b = data[center:] # B
    return merge(merge_sort(data_a), merge_sort(data_b))  # ← code added here
def merge(data_a, data_b):  # merge the two sorted lists
merged = []
a_idx = 0
b_idx = 0
    while a_idx < len(data_a) and b_idx < len(data_b):  # repeat until either list is exhausted
        if data_a[a_idx] < data_b[b_idx]:  # compare the front elements
            merged.append(data_a[a_idx])  # ← code added here
a_idx += 1
else:
            merged.append(data_b[b_idx])  # ← code added here
b_idx += 1
    return merged + data_a[a_idx:] + data_b[b_idx:]  # append the leftover elements
data = [3, 5, 6, 7, 2, 1, 4, 5, 1]  # data to be sorted
print(merge_sort(data))
###Output
_____no_output_____ |
Emulator.ipynb | ###Markdown
EmulatorI am using a Gaussian Process to build an emulator of the FOM (Figure of Merit). I predict FOM values for the grid of test points constructed from the given data.
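For reference, the predictive mean returned by `gp.predict` below follows the standard Gaussian Process regression form (a sketch of the underlying math, not of george's exact internals): $$\bar{f}_* = K(X_*, X)\,\big[K(X, X) + \sigma_n^2 I\big]^{-1}\,\mathbf{y},$$ where $X$ are the standardized training inputs, $\mathbf{y}$ the standardized FOM values, $X_*$ the test points, $K$ the chosen kernel, and $\sigma_n^2$ any observation noise (effectively a small jitter here).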
###Code
import george
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from george import kernels
from scipy.optimize import minimize
from george.metrics import Metric
import matplotlib.cm as cm
%matplotlib inline
a = np.loadtxt("parameters_with_FOM.txt")
FOM = a[:,6]
a = a[:,:-1]
# area=14,300, depth=26.35, shear_m=0.003, sigma_z=0.05, sig_delta_z=0.001, sig_sigma_z=0.003
test1 = np.linspace(7000,20000, num=25) #area
test2 = np.linspace(25,27, num=25) #depth
#test2 = np.linspace(0.003,0.02, num=25) #shear_m
#test2 = np.linspace(0.01,0.1, num=25) #sig_z
#test2 = np.linspace(0.001,0.005, num=25) #sig_delta_z
#test2 = np.linspace(0.003,0.006, num=25) #sig_sigma_z
for param1 in test1:
for param2 in test2:
a = np.concatenate((a, [[param1, param2, 0.003, 0.05, 0.001, 0.003]]), axis=0)
Xtest = a[36:, 0]
Ytest = a[36:, 1]
Xtest = Xtest.reshape(25,25)
Ytest = Ytest.reshape(25,25)
#slicing up the array in 7 column array:
area = a[:,0]
depth = a[:,1]
shear_m = a[:,2]
sigma_z = a[:,3]
sig_delta_z = a[:,4]
sig_sigma_z= a[:,5]
#standardizing the data:
Sarea = (area-np.mean(area))/np.std(area)
Sdepth = (depth-np.mean(depth))/np.std(depth)
Sshear_m = (shear_m-np.mean(shear_m))/np.std(shear_m)
Ssigma_z = (sigma_z-np.mean(sigma_z))/np.std(sigma_z)
Ssig_delta_z = (sig_delta_z-np.mean(sig_delta_z))/np.std(sig_delta_z)
Ssig_sigma_z = (sig_sigma_z-np.mean(sig_sigma_z))/np.std(sig_sigma_z)
SFOM = (FOM-np.mean(FOM))/np.std(FOM)
#Putting together standardised parameter array:
x = np.column_stack([Sarea, Sdepth, Sshear_m, Ssigma_z, Ssig_delta_z, Ssig_sigma_z])
Sxtest = x[36:,:]
x = x[:36,:]
#creating a kernel-covariance in 6-D parameter space:
kernel = kernels.Product(kernels.ConstantKernel(log_constant=np.log((((2*np.pi)))**-0.5), ndim=6),
kernels.ExpSquaredKernel(metric= [1,1,1,1,1,1], ndim=6))
gp = george.GP(kernel, mean=np.mean(SFOM))
gp.compute(x)
#maximising likelihood:
def neg_ln_lik(p):
gp.set_parameter_vector(p)
return -gp.log_likelihood(SFOM)
def grad_neg_ln_like(p):
gp.set_parameter_vector(p)
return -gp.grad_log_likelihood(SFOM)
result = minimize(neg_ln_lik, gp.get_parameter_vector(), jac=grad_neg_ln_like)
gp.set_parameter_vector(result.x)
# Predicting the test points (the grid of points appended to the data above):
predSFOM, Svariance = gp.predict(SFOM, Sxtest, return_var=True)
predFOM = predSFOM*np.std(FOM) + np.mean(FOM)
print(max(predFOM), min(predFOM))
predFOM = predFOM.reshape(25,25)
predFOM
fig,ax = plt.subplots(figsize=(10,8))
norm = mpl.colors.LogNorm(vmin=17.699764038062785, vmax=50.694991388128386)
cs = ax.pcolormesh(Xtest, Ytest, predFOM, norm=norm, cmap=cm.Blues)
plt.title('FoM on Depth vs Area \n shear_m=0.003, $\sigma_z$=0.05, $\sigma (\Delta_z)$=0.001, $\sigma (\sigma_z)$=0.003')
plt.xlabel('Area')
plt.ylabel('Depth')
formatter = mpl.ticker.ScalarFormatter()
fig.colorbar(cs, cmap=cm.Blues, norm=norm, format=formatter)
plt.savefig('FOM-DepthvsArea.png',dpi=500)
plt.show()
###Output
_____no_output_____ |
Many-to-one_LSTM.ipynb | ###Markdown
Many-to-one LSTMref: UCB-CS282-John F. CannyIn this notebook we implement Many-to-One Long Short-Term Memory using a modular approach. For each layer we implement a `forward` and a `backward` function. The `forward` function will receive inputs, weights, and other parameters and will return both an output and a `cache` object storing data needed for the backward pass, like this:```pythondef layer_forward(x, w): """ Receive inputs x and weights w """ Do some computations ... z = ... some intermediate value Do some more computations ... out = the output cache = (x, w, z, out) Values we need to compute gradients return out, cache```The backward pass will receive upstream derivatives and the `cache` object, and will return gradients with respect to the inputs and weights, like this:```pythondef layer_backward(dout, cache): """ Receive derivative of loss with respect to outputs and cache, and compute derivative with respect to inputs. """ Unpack cache values x, w, z, out = cache Use values in cache to compute derivatives dx = Derivative of loss with respect to x dw = Derivative of loss with respect to w return dx, dw```After implementing a bunch of layers this way, we will be able to easily combine them to build classifiers with different architectures.In addition to implementing LSTM networks of arbitrary depth, we also explore different update rules for optimization, and introduce Dropout as a regularizer and Batch Normalization as a tool to more efficiently optimize deep networks.
###Code
# A bit of setup
import time
import numpy as np
import matplotlib.pyplot as plt
from batchnormlstm.classifiers.lstm import *
from batchnormlstm.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from batchnormlstm.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load test dataset and process it.
import pandas as pd
raw = pd.read_csv('.\\dataset\\energydata_complete.csv', index_col=0)
T = 4
test_pct = 0.05
data = {'X_val': np.empty([int(test_pct * raw.index.size) - T, T, raw.columns.size]),
'y_val': np.empty([int(test_pct * raw.index.size) - T, raw.columns.size]),
'X_train': np.empty([int((1 - test_pct) * raw.index.size) - T
- int(test_pct * raw.index.size), T, raw.columns.size]),
'y_train': np.empty([int((1 - test_pct) * raw.index.size) - T
- int(test_pct * raw.index.size), raw.columns.size]),
'X_test': np.empty([raw.index.size - T
- int((1 - test_pct) * raw.index.size), T, raw.columns.size]),
'y_test': np.empty([raw.index.size - T
- int((1 - test_pct) * raw.index.size), raw.columns.size])}
for i in range(int(test_pct * raw.index.size) - T):
data['X_val'][i] = raw.iloc[i:(i + T), :].values
data['y_val'][i] = raw.iloc[i + T, :].values
for i in range(int(test_pct * raw.index.size), int((1 - test_pct) * raw.index.size) - T):
data['X_train'][i - int(test_pct * raw.index.size)] = raw.iloc[i:(i + T), :].values
data['y_train'][i - int(test_pct * raw.index.size)] = raw.iloc[i + T, :].values
for i in range(int((1 - test_pct) * raw.index.size), raw.index.size - T):
data['X_test'][i - int((1 - test_pct) * raw.index.size)] = raw.iloc[i:(i + T), :].values
data['y_test'][i - int((1 - test_pct) * raw.index.size)] = raw.iloc[i + T, :].values
for k, v in data.items():
print('%s: ' % k, v.shape)
###Output
X_val: (982, 4, 28)
y_val: (982, 28)
X_train: (17758, 4, 28)
y_train: (17758, 28)
X_test: (983, 4, 28)
y_test: (983, 28)
###Markdown
Affine layer: forwardIn the file `batchnormlstm/layers.py` we implement the `affine_forward` function, then test the implementation by running the following:
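For orientation, a minimal sketch of what `affine_forward` computes, based on the shapes used in the test below (the real function also returns a cache for the backward pass):

```python
import numpy as np

def affine_forward_sketch(x, w, b):
    # flatten each example of shape input_shape into a row of length D = prod(input_shape),
    # then apply a single linear map; out has shape (N, M)
    N = x.shape[0]
    out = x.reshape(N, -1).dot(w) + b
    return out
```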
###Code
# Test the affine_forward function
num_inputs = 2
input_shape = (4, 5, 6)
output_dim = 3
input_size = num_inputs * np.prod(input_shape)
weight_size = output_dim * np.prod(input_shape)
x = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape)
w = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim)
b = np.linspace(-0.3, 0.1, num=output_dim)
out, _ = affine_forward(x, w, b)
correct_out = np.array([[ 1.49834967, 1.70660132, 1.91485297],
[ 3.25553199, 3.5141327, 3.77273342]])
# Compare the outputs. The error should be around 1e-9.
print('Testing affine_forward function:')
print('difference: ', rel_error(out, correct_out))
###Output
Testing affine_forward function:
difference: 9.769847728806635e-10
###Markdown
Affine layer: backwardImplement the `affine_backward` function and test the implementation using numeric gradient checking.
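The gradients being checked have the standard closed forms under the same flattening convention as the forward pass (a sketch, not necessarily the exact code in `layers.py`):

```python
import numpy as np

def affine_backward_sketch(dout, x, w):
    # dout is the upstream gradient of shape (N, M)
    N = x.shape[0]
    dx = dout.dot(w.T).reshape(x.shape)  # push the gradient back through the linear map
    dw = x.reshape(N, -1).T.dot(dout)    # accumulate over the batch
    db = dout.sum(axis=0)                # the bias collects the column sums
    return dx, dw, db
```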
###Code
# Test the affine_backward function
x = np.random.randn(10, 2, 3)
w = np.random.randn(6, 5)
b = np.random.randn(5)
dout = np.random.randn(10, 5)
dx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout)
_, cache = affine_forward(x, w, b)
dx, dw, db = affine_backward(dout, cache)
# The error should be around 1e-10
print('Testing affine_backward function:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
print()
###Output
Testing affine_backward function:
dx error: 1.580274371758088e-10
dw error: 1.6327342070374637e-10
db error: 8.79291994626051e-12
###Markdown
Activation layers: forwardImplement the forward pass for the tanh and sigmoid activation functions in the `tanh_forward` and `sigmoid_forward` functions and test the implementation using the following:
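The element-wise definitions being tested are $\tanh(x)$ and $\sigma(x) = 1/(1+e^{-x})$; for example $\sigma(-0.5) \approx 0.3775$, which matches the first entry of the expected sigmoid output below. A one-line sketch of each (assumed to mirror the real functions up to the returned cache):

```python
import numpy as np

def tanh_forward_sketch(x):
    return np.tanh(x)

def sigmoid_forward_sketch(x):
    return 1.0 / (1.0 + np.exp(-x))
```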
###Code
# Test the tanh_forward and sigmoid_forward function
x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)
out, _ = tanh_forward(x)
correct_out = np.array([[-0.46211716, -0.38770051, -0.30786199, -0.22343882],
[-0.13552465, -0.04542327, 0.04542327, 0.13552465],
[ 0.22343882, 0.30786199, 0.38770051, 0.46211716]])
# Compare the outputs. The error should be around 1e-8
print('Testing tanh_forward function:')
print('difference: ', rel_error(out, correct_out))
out, _ = sigmoid_forward(x)
correct_out = np.array([[0.37754067, 0.39913012, 0.42111892, 0.44342513],
[0.46596182, 0.48863832, 0.51136168, 0.53403818],
[0.55657487, 0.57888108, 0.60086988, 0.62245933]])
# Compare the outputs. The error should be around 1e-8
print('Testing sigmoid_forward function:')
print('difference: ', rel_error(out, correct_out))
###Output
Testing tanh_forward function:
difference: 3.8292287781296644e-08
Testing sigmoid_forward function:
difference: 5.157221295671855e-09
###Markdown
Activation layers: backwardImplement the backward pass for the tanh and sigmoid activation functions in the `tanh_backward` and `sigmoid_backward` functions and test the implementation using numeric gradient checking:
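Both backward passes follow from the local derivatives $\frac{d}{dx}\tanh(x) = 1 - \tanh^2(x)$ and $\frac{d}{dx}\sigma(x) = \sigma(x)\,(1 - \sigma(x))$, so if the cache stores the forward output `out` (an assumption about the implementation), the gradients reduce to `dout * (1 - out**2)` and `dout * out * (1 - out)` respectively.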
###Code
x = np.random.randn(5, 5, 10)
dout = np.random.randn(*x.shape)
dx_num = eval_numerical_gradient_array(lambda x: tanh_forward(x)[0], x, dout)
_, cache = tanh_forward(x)
dx = tanh_backward(dout, cache)
# The error should be around 1e-10
print('Testing tanh_backward function:')
print('dx error: ', rel_error(dx_num, dx))
dx_num = eval_numerical_gradient_array(lambda x: sigmoid_forward(x)[0], x, dout)
_, cache = sigmoid_forward(x)
dx = sigmoid_backward(dout, cache)
# The error should be around 1e-10
print('Testing sigmoid_backward function:')
print('dx error: ', rel_error(dx_num, dx))
print()
###Output
Testing tanh_backward function:
dx error: 1.9348653675634142e-10
Testing sigmoid_backward function:
dx error: 1.0034653863721936e-10
###Markdown
Batch normalization: ForwardIn the file `batchnormlstm/layers.py`, we implement the batch normalization forward pass in the function `batchnorm_forward`, then run the following to test the implementation.
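In training mode the standard formulation, which the checks below assume, normalizes each feature with the batch statistics and then applies a learned scale and shift: $$\mu = \tfrac{1}{N}\textstyle\sum_i x_i,\qquad \sigma^2 = \tfrac{1}{N}\textstyle\sum_i (x_i - \mu)^2,\qquad \hat{x}_i = \frac{x_i - \mu}{\sqrt{\sigma^2 + \epsilon}},\qquad y_i = \gamma\,\hat{x}_i + \beta.$$ Running averages of $\mu$ and $\sigma^2$ are maintained during training (e.g. `running_mean = m * running_mean + (1 - m) * mu` for some momentum `m`, an implementation detail assumed here) and are used in place of the batch statistics at test time.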
###Code
# Check the training-time forward pass by checking means and variances
# of features both before and after batch normalization
# Simulate the forward pass for a two-layer network
N, D1, D2, D3 = 200, 50, 60, 3
X = np.random.randn(N, D1)
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
a = np.maximum(0, X.dot(W1)).dot(W2)
print('Before batch normalization:')
print(' means: ', a.mean(axis=0))
print(' stds: ', a.std(axis=0))
# Means should be close to zero and stds close to one
print('After batch normalization (gamma=1, beta=0)')
a_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'})
print(' mean: ', a_norm.mean(axis=0))
print(' std: ', a_norm.std(axis=0))
# Now means should be close to beta and stds close to gamma
gamma = np.asarray([1.0, 2.0, 3.0])
beta = np.asarray([11.0, 12.0, 13.0])
a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})
print('After batch normalization (nontrivial gamma, beta)')
print(' means: ', a_norm.mean(axis=0))
print(' stds: ', a_norm.std(axis=0))
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
N, D1, D2, D3 = 200, 50, 60, 3
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
bn_param = {'mode': 'train'}
gamma = np.ones(D3)
beta = np.zeros(D3)
for t in range(50):
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
batchnorm_forward(a, gamma, beta, bn_param)
bn_param['mode'] = 'test'
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
a_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print('After batch normalization (test-time):')
print(' means: ', a_norm.mean(axis=0))
print(' stds: ', a_norm.std(axis=0))
###Output
After batch normalization (test-time):
means: [ 0.07693787 -0.02578972 0.05492385]
stds: [0.99651805 1.04399964 0.95703145]
###Markdown
Batch Normalization: backwardImplement the backward pass for batch normalization in the function `batchnorm_backward`.To derive the backward pass we write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; sum gradients across these branches in the backward pass.Run the following to numerically check the backward pass.
###Code
# Gradient check batchnorm backward pass
N, D = 4, 5
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
fx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: batchnorm_forward(x, gamma, beta, bn_param)[0]
fb = lambda b: batchnorm_forward(x, gamma, beta, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)
_, cache = batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = batchnorm_backward(dout, cache)
print('dx error: ', rel_error(dx_num, dx))
print('dgamma error: ', rel_error(da_num, dgamma))
print('dbeta error: ', rel_error(db_num, dbeta))
###Output
dx error: 2.8510779751549673e-10
dgamma error: 5.7137851762636384e-12
dbeta error: 3.3697549129496796e-11
###Markdown
Batch Normalization: alternative backwardWe derive a simple expression for the batch normalization backward pass after working out derivatives on paper and simplifying. Implement the simplified batch normalization backward pass in the function `batchnorm_backward_alt` and compare the two implementations by running the following. Our two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.
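One common simplified expression, written in the notation above with $\hat{x}$ the normalized input and $\delta$ = `dout` (a sketch of the kind of formula `batchnorm_backward_alt` can use, not necessarily the exact code): $$\frac{\partial L}{\partial x_i} = \frac{\gamma}{N\sqrt{\sigma^2 + \epsilon}}\Big(N\,\delta_i - \sum_j \delta_j - \hat{x}_i \sum_j \delta_j\,\hat{x}_j\Big),\qquad \frac{\partial L}{\partial \gamma} = \sum_i \delta_i\,\hat{x}_i,\qquad \frac{\partial L}{\partial \beta} = \sum_i \delta_i.$$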
###Code
N, D = 100, 500
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
out, cache = batchnorm_forward(x, gamma, beta, bn_param)
t1 = time.time()
dx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)
t2 = time.time()
dx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)
t3 = time.time()
print('dx difference: ', rel_error(dx1, dx2))
print('dgamma difference: ', rel_error(dgamma1, dgamma2))
print('dbeta difference: ', rel_error(dbeta1, dbeta2))
print('speedup: %.2fx' % ((t2 - t1) / (t3 - t2)))
print()
###Output
dx difference: 3.14204955235961e-11
dgamma difference: 0.0
dbeta difference: 0.0
speedup: 3.00x
###Markdown
Dropout forward passIn the file `batchnormlstm/layers.py`, implement the forward pass for dropout. Since dropout behaves differently during training and testing, we implement the operation for both modes.Run the cell below to test the implementation.
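A minimal sketch of inverted dropout that is consistent with the statistics printed below, treating `p` as the keep probability (the real implementation also returns a cache and honors an optional seed):

```python
import numpy as np

def dropout_forward_sketch(x, dropout_param):
    p, mode = dropout_param['p'], dropout_param['mode']
    if mode == 'train':
        mask = (np.random.rand(*x.shape) < p) / p  # keep with probability p, rescale to preserve the mean
        return x * mask
    return x  # test time is the identity
```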
###Code
x = np.random.randn(500, 500) + 10
rates = [0.3, 0.6, 0.75]
for i, p in enumerate(rates):
out, _ = dropout_forward(x, {'mode': 'train', 'p': p})
out_test, _ = dropout_forward(x, {'mode': 'test', 'p': p})
print('Running tests with p = ', p)
print('Mean of input: ', x.mean())
print('Mean of train-time output: ', out.mean())
print('Mean of test-time output: ', out_test.mean())
print('Fraction of train-time output set to zero: ', (out == 0).mean())
print('Fraction of test-time output set to zero: ', (out_test == 0).mean())
if i < (len(rates) - 1):
print()
###Output
Running tests with p = 0.3
Mean of input: 10.000280495638942
Mean of train-time output: 10.003757970096457
Mean of test-time output: 10.000280495638942
Fraction of train-time output set to zero: 0.699864
Fraction of test-time output set to zero: 0.0
Running tests with p = 0.6
Mean of input: 10.000280495638942
Mean of train-time output: 10.012958120128776
Mean of test-time output: 10.000280495638942
Fraction of train-time output set to zero: 0.399508
Fraction of test-time output set to zero: 0.0
Running tests with p = 0.75
Mean of input: 10.000280495638942
Mean of train-time output: 10.012202300191028
Mean of test-time output: 10.000280495638942
Fraction of train-time output set to zero: 0.249144
Fraction of test-time output set to zero: 0.0
###Markdown
Dropout backward passIn the file `batchnormlstm/layers.py`, implement the backward pass for dropout. Run the following cell to numerically gradient-check the implementation.
###Code
x = np.random.randn(10, 10) + 10
dout = np.random.randn(*x.shape)
dropout_param = {'mode': 'train', 'p': 0.8, 'seed': 123}
out, cache = dropout_forward(x, dropout_param)
dx = dropout_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda xx: dropout_forward(xx, dropout_param)[0], x, dout)
print('dx relative error: ', rel_error(dx, dx_num))
print()
###Output
dx relative error: 5.4456072614148924e-11
###Markdown
LSTM Unit: forwardImplement an LSTM forward pass for one time step with `lstm_forward_unit`.ref: https://blog.aidangomez.ca/2016/04/17/Backpropogating-an-LSTM-A-Numerical-Example/
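For a single time step the unit is assumed to follow the standard LSTM equations, with the four gates packed side by side in `w`, `u`, and `b` and the input flattened as in the affine layer (the exact packing order is an implementation detail): $$a = \bar{x}W + h_{t-1}U + b = [a_i,\ a_f,\ a_o,\ a_g],$$ $$i = \sigma(a_i),\quad f = \sigma(a_f),\quad o = \sigma(a_o),\quad g = \tanh(a_g),$$ $$c_t = f \odot c_{t-1} + i \odot g,\qquad h_t = o \odot \tanh(c_t).$$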
###Code
# Test the lstm_forward_unit function
num_inputs = 2
input_shape = (4, 5, 6)
output_dim = 3
inputX_size = num_inputs * np.prod(input_shape)
weightW_size = 4 * output_dim * np.prod(input_shape)
inputS_size = num_inputs * output_dim
weightU_size = 4 * output_dim * output_dim
scale_size = 4 * output_dim
x = np.linspace(-0.1, 0.5, num=inputX_size).reshape(num_inputs, *input_shape)
w = np.linspace(-0.2, 0.3, num=weightW_size).reshape(np.prod(input_shape), 4 * output_dim)
b = np.linspace(-0.3, 0.1, num=scale_size)
h_prev = np.linspace(0.1, -0.5, num=inputS_size).reshape(num_inputs, output_dim)
c_prev = np.linspace(0.2, -0.3, num=inputS_size).reshape(num_inputs, output_dim)
u = np.linspace(0.3, -0.1, num=weightU_size).reshape(output_dim, 4 * output_dim)
out, _, _ = lstm_forward_unit(x, w, u, b, h_prev, c_prev)
correct_out = np.array([[0.63273332, 0.60414966, 0.57096795],
[0.67986212, 0.63027987, 0.57357252]])
# Compare the outputs. The error should be around 1e-9.
print('Testing lstm_forward_unit function:')
print('difference: ', rel_error(out, correct_out))
###Output
Testing lstm_forward_unit function:
difference: 3.5742222070529497e-09
###Markdown
LSTM & BatchNorm-LSTM Unit: backwardThen implement the `lstm_backward_unit` and `batchnorm_lstm_backward_unit` functions and numerically check the implementations.
###Code
# Test the lstm_backward_unit function
x = np.random.randn(10, 2, 3)
w = np.random.randn(6, 20)
b = np.random.randn(20)
h_prev = np.random.randn(10, 5)
c_prev = np.random.randn(10, 5)
u = np.random.randn(5, 20)
dout = np.random.randn(10, 5)
dx_num = eval_numerical_gradient_array(lambda x: lstm_forward_unit(x, w, u, b, h_prev, c_prev)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: lstm_forward_unit(x, w, u, b, h_prev, c_prev)[0], w, dout)
du_num = eval_numerical_gradient_array(lambda u: lstm_forward_unit(x, w, u, b, h_prev, c_prev)[0], u, dout)
db_num = eval_numerical_gradient_array(lambda b: lstm_forward_unit(x, w, u, b, h_prev, c_prev)[0], b, dout)
_, _, cache = lstm_forward_unit(x, w, u, b, h_prev, c_prev)
dx, dw, du, db, _, _, _ = lstm_backward_unit(dout, cache)
# The error should be around 1e-9
print('Testing lstm_backward_unit function:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('du error: ', rel_error(du_num, du))
print('db error: ', rel_error(db_num, db))
print()
# Test the batchnorm_lstm_backward_unit function
gamma_x = np.random.randn(20)
gamma_h = np.random.randn(20)
gamma_c = np.random.randn(5)
beta_x = np.random.randn(20)
beta_h = np.random.randn(20)
beta_c = np.random.randn(5)
bn_params = [{'mode': 'train'} for i in range(3)]
dx_num = eval_numerical_gradient_array(lambda x:
batchnorm_lstm_forward_unit(x, w, u, b, (gamma_x, gamma_h, gamma_c),
(beta_x, beta_h, beta_c), bn_params,
h_prev, c_prev)[0],
x, dout)
dw_num = eval_numerical_gradient_array(lambda w:
batchnorm_lstm_forward_unit(x, w, u, b, (gamma_x, gamma_h, gamma_c),
(beta_x, beta_h, beta_c), bn_params,
h_prev, c_prev)[0],
w, dout)
du_num = eval_numerical_gradient_array(lambda u:
batchnorm_lstm_forward_unit(x, w, u, b, (gamma_x, gamma_h, gamma_c),
(beta_x, beta_h, beta_c), bn_params,
h_prev, c_prev)[0],
u, dout)
db_num = eval_numerical_gradient_array(lambda b:
batchnorm_lstm_forward_unit(x, w, u, b, (gamma_x, gamma_h, gamma_c),
(beta_x, beta_h, beta_c), bn_params,
h_prev, c_prev)[0],
b, dout)
dgamma_x_num = eval_numerical_gradient_array(lambda gamma_x:
batchnorm_lstm_forward_unit(x, w, u, b, (gamma_x, gamma_h, gamma_c),
(beta_x, beta_h, beta_c), bn_params,
h_prev, c_prev)[0],
gamma_x, dout)
dbeta_x_num = eval_numerical_gradient_array(lambda beta_x:
batchnorm_lstm_forward_unit(x, w, u, b, (gamma_x, gamma_h, gamma_c),
(beta_x, beta_h, beta_c), bn_params,
h_prev, c_prev)[0],
beta_x, dout)
dgamma_h_num = eval_numerical_gradient_array(lambda gamma_h:
batchnorm_lstm_forward_unit(x, w, u, b, (gamma_x, gamma_h, gamma_c),
(beta_x, beta_h, beta_c), bn_params,
h_prev, c_prev)[0],
gamma_h, dout)
dbeta_h_num = eval_numerical_gradient_array(lambda beta_h:
batchnorm_lstm_forward_unit(x, w, u, b, (gamma_x, gamma_h, gamma_c),
(beta_x, beta_h, beta_c), bn_params,
h_prev, c_prev)[0],
beta_h, dout)
dgamma_c_num = eval_numerical_gradient_array(lambda gamma_c:
batchnorm_lstm_forward_unit(x, w, u, b, (gamma_x, gamma_h, gamma_c),
(beta_x, beta_h, beta_c), bn_params,
h_prev, c_prev)[0],
gamma_c, dout)
dbeta_c_num = eval_numerical_gradient_array(lambda beta_c:
batchnorm_lstm_forward_unit(x, w, u, b, (gamma_x, gamma_h, gamma_c),
(beta_x, beta_h, beta_c), bn_params,
h_prev, c_prev)[0],
beta_c, dout)
_, _, cache = batchnorm_lstm_forward_unit(x, w, u, b, (gamma_x, gamma_h, gamma_c),
(beta_x, beta_h, beta_c), bn_params, h_prev, c_prev)
dx, dw, du, db, dgammas, dbetas, _, _, _ = batchnorm_lstm_backward_unit(dout, cache)
dgamma_x, dgamma_h, dgamma_c = dgammas
dbeta_x, dbeta_h, dbeta_c = dbetas
# The error should be around 1e-9
print('Testing batchnorm_lstm_backward_unit function:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('du error: ', rel_error(du_num, du))
print('dgamma_x error: ', rel_error(dgamma_x_num, dgamma_x))
print('dbeta_x error: ', rel_error(dbeta_x_num, dbeta_x))
print('dgamma_h error: ', rel_error(dgamma_h_num, dgamma_h))
print('dbeta_h error: ', rel_error(dbeta_h_num, dbeta_h))
print('dgamma_c error: ', rel_error(dgamma_c_num, dgamma_c))
print('dbeta_c error: ', rel_error(dbeta_c_num, dbeta_c))
print()
###Output
Testing lstm_backward_unit function:
dx error: 5.676013334485782e-10
dw error: 9.09439438048634e-10
du error: 6.022805979019028e-09
db error: 2.958323064832232e-10
Testing batchnorm_lstm_backward_unit function:
dx error: 8.584173797425142e-09
dw error: 4.5670732371827704e-08
du error: 7.438542676015566e-08
dgamma_x error: 3.725801103979747e-10
dbeta_x error: 9.704859248215657e-10
dgamma_h error: 1.6933380811135852e-08
dbeta_h error: 1.0815590892310194e-09
dgamma_c error: 5.595591165982771e-10
dbeta_c error: 3.837078124809926e-11
###Markdown
Assembled Layer: forwardImplement a batch-normalized LSTM forward pass with `batchnorm_lstm_forward` that loops through `batchnorm_lstm_forward_unit`.
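Conceptually the assembled forward pass just threads the hidden and cell states through the per-step unit; a hypothetical sketch, assuming the unit returns `(h, c, cache)` as the tests above suggest:

```python
import numpy as np

def batchnorm_lstm_forward_sketch(x, w, u, b, gammas, betas, bn_params, h0, c0):
    # x has shape (N, T, ...); run the unit once per time step and stack the hidden states
    h, c = h0, c0
    outs, caches = [], []
    for t in range(x.shape[1]):
        h, c, cache = batchnorm_lstm_forward_unit(x[:, t], w, u, b, gammas, betas, bn_params, h, c)
        outs.append(h)
        caches.append(cache)
    return np.stack(outs, axis=1), caches
```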
###Code
# Test the batchnorm_lstm_forward function
num_inputs = 2
input_shape = (4, 5, 6)
output_dim = 5
time_step = 3
inputX_size = num_inputs * time_step * np.prod(input_shape)
weightW_size = 4 * output_dim * np.prod(input_shape)
inputS_size = num_inputs * output_dim
weightU_size = 4 * output_dim * output_dim
scale_size = 4 * output_dim
x = np.linspace(-0.1, 0.5, num=inputX_size).reshape(num_inputs, time_step, *input_shape)
w = np.linspace(-0.2, 0.3, num=weightW_size).reshape(np.prod(input_shape), 4 * output_dim)
b = np.linspace(-0.3, 0.1, num=scale_size)
h_n1 = np.linspace(0.1, -0.5, num=inputS_size).reshape(num_inputs, output_dim)
c_n1 = np.linspace(0.2, -0.3, num=inputS_size).reshape(num_inputs, output_dim)
u = np.linspace(0.3, -0.1, num=weightU_size).reshape(output_dim, 4 * output_dim)
gamma_x = np.linspace(3., 4., num=scale_size)
gamma_h = np.linspace(-2., -5., num=scale_size)
gamma_c = np.linspace(1., 6., num=output_dim)
beta_x = np.linspace(-0.1, 0.3, num=scale_size)
beta_h = np.linspace(-0.2, 0.2, num=scale_size)
beta_c = np.linspace(-0.3, 0.1, num=output_dim)
bn_params = [{'mode': 'train'} for i in range(3)]
outs, _ = batchnorm_lstm_forward(x, w, u, b, (gamma_x, gamma_h, gamma_c),
(beta_x, beta_h, beta_c), bn_params)
out = outs[:, -1, :]
correct_out = np.array([[-0.61717064, -0.73806868, -0.77826679, -0.80687535, -0.83184207],
[ 0.26743016, 0.41807813, 0.42096102, 0.41164257, 0.40158984]])
# Compare the outputs. The error should be around 1e-7.
print('Testing batchnorm_lstm_forward function:')
print('difference: ', rel_error(out, correct_out))
###Output
Testing batchnorm_lstm_forward function:
difference: 2.505371298335007e-07
###Markdown
Assembled Layer: backwardImplement the `batchnorm_lstm_backward` function and test the implementation using numeric gradient checking.
###Code
# Test the batchnorm_lstm_backward function
x = np.random.randn(10, 4, 2, 3)
w = np.random.randn(6, 20)
b = np.random.randn(20)
h_n1 = np.random.randn(10, 5)
c_n1 = np.random.randn(10, 5)
u = np.random.randn(5, 20)
dout = np.random.randn(10, 4, 5)
gamma_x = np.random.randn(20)
gamma_h = np.random.randn(20)
gamma_c = np.random.randn(5)
beta_x = np.random.randn(20)
beta_h = np.random.randn(20)
beta_c = np.random.randn(5)
bn_params = [{'mode': 'train'} for i in range(3)]
dx_num = eval_numerical_gradient_array(lambda x: batchnorm_lstm_forward(x, w, u, b, (gamma_x, gamma_h, gamma_c),
(beta_x, beta_h, beta_c), bn_params,
h_n1, c_n1)[0],
x, dout)
dw_num = eval_numerical_gradient_array(lambda w: batchnorm_lstm_forward(x, w, u, b, (gamma_x, gamma_h, gamma_c),
(beta_x, beta_h, beta_c), bn_params,
h_n1, c_n1)[0],
w, dout)
du_num = eval_numerical_gradient_array(lambda u: batchnorm_lstm_forward(x, w, u, b, (gamma_x, gamma_h, gamma_c),
(beta_x, beta_h, beta_c), bn_params,
h_n1, c_n1)[0],
u, dout)
db_num = eval_numerical_gradient_array(lambda b: batchnorm_lstm_forward(x, w, u, b, (gamma_x, gamma_h, gamma_c),
(beta_x, beta_h, beta_c), bn_params,
h_n1, c_n1)[0],
b, dout)
dgamma_x_num = eval_numerical_gradient_array(lambda gamma_x: batchnorm_lstm_forward(x, w, u, b,
(gamma_x, gamma_h, gamma_c),
(beta_x, beta_h, beta_c),
bn_params, h_n1, c_n1)[0],
gamma_x, dout)
dbeta_x_num = eval_numerical_gradient_array(lambda beta_x: batchnorm_lstm_forward(x, w, u, b,
(gamma_x, gamma_h, gamma_c),
(beta_x, beta_h, beta_c),
bn_params, h_n1, c_n1)[0],
beta_x, dout)
dgamma_h_num = eval_numerical_gradient_array(lambda gamma_h: batchnorm_lstm_forward(x, w, u, b,
(gamma_x, gamma_h, gamma_c),
(beta_x, beta_h, beta_c),
bn_params, h_n1, c_n1)[0],
gamma_h, dout)
dbeta_h_num = eval_numerical_gradient_array(lambda beta_h: batchnorm_lstm_forward(x, w, u, b,
(gamma_x, gamma_h, gamma_c),
(beta_x, beta_h, beta_c),
bn_params, h_n1, c_n1)[0],
beta_h, dout)
dgamma_c_num = eval_numerical_gradient_array(lambda gamma_c: batchnorm_lstm_forward(x, w, u, b,
(gamma_x, gamma_h, gamma_c),
(beta_x, beta_h, beta_c),
bn_params, h_n1, c_n1)[0],
gamma_c, dout)
dbeta_c_num = eval_numerical_gradient_array(lambda beta_c: batchnorm_lstm_forward(x, w, u, b,
(gamma_x, gamma_h, gamma_c),
(beta_x, beta_h, beta_c),
bn_params, h_n1, c_n1)[0],
beta_c, dout)
_, cache = batchnorm_lstm_forward(x, w, u, b, (gamma_x, gamma_h, gamma_c), (beta_x, beta_h, beta_c), bn_params,
h_n1, c_n1)
dx, dw, du, db, dgammas, dbetas = batchnorm_lstm_backward(dout, cache)
dgamma_x, dgamma_h, dgamma_c = dgammas
dbeta_x, dbeta_h, dbeta_c = dbetas
# The error should be around 1e-7
print('Testing lstm_backward_unit function:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('du error: ', rel_error(du_num, du))
print('db error: ', rel_error(db_num, db))
print('dgamma_x error: ', rel_error(dgamma_x_num, dgamma_x))
print('dbeta_x error: ', rel_error(dbeta_x_num, dbeta_x))
print('dgamma_h error: ', rel_error(dgamma_h_num, dgamma_h))
print('dbeta_h error: ', rel_error(dbeta_h_num, dbeta_h))
print('dgamma_c error: ', rel_error(dgamma_c_num, dgamma_c))
print('dbeta_c error: ', rel_error(dbeta_c_num, dbeta_c))
print()
###Output
Testing lstm_backward_unit function:
dx error: 6.510179435612519e-08
dw error: 4.77448491604468e-07
du error: 2.683146430218781e-07
db error: 7.013542670085222e-09
dgamma_x error: 4.715754374359772e-09
dbeta_x error: 7.013542670085222e-09
dgamma_h error: 3.0510325475552296e-09
dbeta_h error: 7.247845978538795e-09
dgamma_c error: 6.701361988181442e-10
dbeta_c error: 2.216564624855941e-09
###Markdown
Loss layer: MSEImplement the loss function in `batchnormlstm/layers.py`.We make sure that the implementations are correct by running the following:
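The loss being checked is consistent with a mean squared error averaged over every element (for standard-normal `x` and `y` the expected value is about 2, matching the printed loss); a sketch under that assumption:

```python
import numpy as np

def mse_loss_sketch(x, y):
    diff = x - y
    loss = np.mean(diff ** 2)    # average over all N * D elements
    dx = 2.0 * diff / diff.size  # gradient of that element-wise average
    return loss, dx
```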
###Code
dim_outputs, num_inputs = 10, 50
x = np.random.randn(num_inputs, dim_outputs)
y = np.random.randn(num_inputs, dim_outputs)
dx_num = eval_numerical_gradient(lambda x: mse_loss(x, y)[0], x, verbose=False)
loss, dx = mse_loss(x, y)
# Test mse_loss function. Loss should be around 2 and dx error should be 1e-7
print('Testing mse_loss:')
print('loss: ', loss)
print('dx error: ', rel_error(dx_num, dx))
print()
###Output
Testing mse_loss:
loss: 2.052333493108092
dx error: 2.555032546004221e-05
###Markdown
Two-layer LSTM networkIn the file `batchnormlstm/classifiers/lstm.py` we complete the implementation of the `TwoLayerLSTM` class. This class will serve as a model for the other networks we implement. Run the cell below to test the implementation.
###Code
N, D, H, O = 3, 5, 50, 6
T = 4
std = 1e-2
model = TwoLayerLSTM(input_dim=D, hidden_dim=H, output_dim=O, weight_scale=std)
print('Testing initialization ... ')
W1_std = abs(model.params['W1'].std() - std)
b1 = model.params['b1']
W2_std = abs(model.params['W2'].std() - std)
b2 = model.params['b2']
U_std = abs(model.params['U'].std() - std)
assert W1_std < std / 10, 'LSTM layer input weights do not seem right'
assert np.all(b1 == 0), 'LSTM layer biases do not seem right'
assert W2_std < std / 10, 'Affine layer weights do not seem right'
assert np.all(b2 == 0), 'Affine layer biases do not seem right'
assert U_std < std / 10, 'LSTM layer state weights do not seem right'
print('Testing test-time forward pass ... ')
model.params['W1'] = np.linspace(-0.7, 0.3, num=D*4*H).reshape(D, 4*H)
model.params['b1'] = np.linspace(-0.1, 0.9, num=4*H)
model.params['W2'] = np.linspace(-0.3, 0.4, num=H*O).reshape(H, O)
model.params['b2'] = np.linspace(-0.9, 0.1, num=O)
model.params['U'] = np.linspace(-0.5, 0.5, num=H*4*H).reshape(H, 4*H)
X = np.linspace(-5.5, 4.5, num=N*T*D).reshape((D, T, N)).T
h_prev = np.zeros((N, H))
c_prev = np.zeros((N, H))
outs = model.loss(X)
correct_outs = np.asarray(
[[1.30180544, 1.61810112, 1.9343968, 2.25069248, 2.56698816, 2.88328384],
[1.30212866, 1.61834704, 1.93456542, 2.2507838, 2.56700219, 2.88322057],
[1.30257611, 1.61870838, 1.93484065, 2.25097292, 2.56710519, 2.88323746]])
outs_diff = np.abs(outs - correct_outs).sum()
assert outs_diff < 1e-6, 'Problem with test-time forward pass'
print('Testing training loss (no regularization) ...')
Y = np.linspace(2.5, -3.5, num=N*O).reshape(O, N).T
loss, grads = model.loss(X, Y)
correct_loss = 12.319904047979042
assert abs(loss - correct_loss) < 1e-10, 'Problem with training-time loss'
print('Testing training loss (with regularization) ...')
model.reg = 1.0
loss, grads = model.loss(X, Y)
correct_loss = 497.3609656985614
assert abs(loss - correct_loss) < 1e-10, 'Problem with regularization loss'
for reg in [0.0, 0.7]:
print('Running numeric gradient check with reg = ', reg)
model.reg = reg
loss, grads = model.loss(X, Y)
for name in sorted(grads):
f = lambda _: model.loss(X, Y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
print()
###Output
Testing initialization ...
Testing test-time forward pass ...
Testing training loss (no regularization) ...
Testing training loss (with regularization) ...
Running numeric gradient check with reg = 0.0
U relative error: 3.33e-04
W1 relative error: 1.60e-03
W2 relative error: 5.15e-10
b1 relative error: 2.49e-04
b2 relative error: 4.77e-10
Running numeric gradient check with reg = 0.7
U relative error: 1.68e-05
W1 relative error: 2.44e-05
W2 relative error: 4.08e-07
b1 relative error: 3.29e-03
b2 relative error: 1.61e-09
###Markdown
SolverIn the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class.In the file `batchnormlstm/solver.py`, a modularly designed class is implemented to train the models. Below we test a `Solver` instance by using it to train a `TwoLayerLSTM` with the data.
###Code
model = TwoLayerLSTM(input_dim=28, hidden_dim=10)
solver = None
##############################################################################
# Use a Solver instance to train a TwoLayerLSTM #
##############################################################################
solver = Solver(model, data,
update_rule='sgd',
optim_config={
'learning_rate': 1e-3,
},
lr_decay=0.9,
num_epochs=12, batch_size=100,
print_every=100)
solver.train(regress=True)
solver.check_accuracy(data['X_test'], data['y_test'], regress=True)
##############################################################################
# #
##############################################################################
# Run this cell to visualize training loss and train / val accuracy
plt.subplot(2, 1, 1)
plt.title('Training loss')
plt.plot(solver.loss_history, 'o')
plt.xlabel('Iteration')
plt.subplot(2, 1, 2)
plt.title('Accuracy')
plt.plot(solver.train_acc_history, '-o', label='train')
plt.plot(solver.val_acc_history, '-o', label='val')
plt.plot([0.5] * len(solver.val_acc_history), 'k--')
plt.xlabel('Epoch')
plt.legend(loc='lower right')
plt.gcf().set_size_inches(15, 12)
plt.show()
print()
###Output
_____no_output_____
###Markdown
Advanced LSTM networkNext we implement the multi-layer LSTM network with an arbitrary number of hidden layers, with optional batch normalization and dropout, as the `AdvancedLSTM` class in the file `batchnormlstm/classifiers/lstm.py`.ref: https://arxiv.org/pdf/1603.09025.pdf https://arxiv.org/pdf/1409.2329.pdf Initial loss and gradient check As a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization and see if the initial losses seem reasonable.
###Code
N, D, H1, H2, O = 2, 15, 20, 30, 10
T = 4
X = np.random.randn(N, T, D)
Y = np.random.randn(N, O)
for reg in [0, 3.14]:
print('Running check with reg = ', reg)
model = AdvancedLSTM([H1, H2], input_dim=D, output_dim=O,
reg=reg, weight_scale=5e-2, dtype=np.float64)
loss, grads = model.loss(X, Y)
print('Initial loss: ', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, Y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
###Output
Running check with reg = 0
Initial loss: 0.6364185128745645
U1 relative error: 6.23e-04
U2 relative error: 1.06e-03
W1 relative error: 1.24e-04
W2 relative error: 6.81e-04
W3 relative error: 6.66e-05
b1 relative error: 2.69e-05
b2 relative error: 1.44e-05
b3 relative error: 4.11e-11
Running check with reg = 3.14
Initial loss: 36.14152720955063
U1 relative error: 6.91e-07
U2 relative error: 8.19e-07
W1 relative error: 7.27e-07
W2 relative error: 3.58e-07
W3 relative error: 2.79e-08
b1 relative error: 6.86e-05
b2 relative error: 1.16e-03
b3 relative error: 2.35e-09
###Markdown
As another sanity check, train on a small dataset of 50 instances. We use three layers of LSTM and tweak the learning rate and initialization scale.
###Code
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 1e-2
learning_rate = 1e-2
model = AdvancedLSTM([20, 10, 20], input_dim=28,
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train(regress=True)
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
###Output
(Iteration 1 / 40) loss: 25834.386138
(Epoch 0 / 20) train_acc: -24585.993359; val_acc: -22754.584035
(Epoch 1 / 20) train_acc: -24553.678901; val_acc: -22722.841330
(Epoch 2 / 20) train_acc: -24489.199432; val_acc: -22659.478862
(Epoch 3 / 20) train_acc: -24424.312421; val_acc: -22595.890431
(Epoch 4 / 20) train_acc: -24359.229251; val_acc: -22532.153400
(Epoch 5 / 20) train_acc: -24292.794632; val_acc: -22467.386107
(Iteration 11 / 40) loss: 23365.510047
(Epoch 6 / 20) train_acc: -24226.185597; val_acc: -22401.898157
(Epoch 7 / 20) train_acc: -24154.486765; val_acc: -22331.632446
(Epoch 8 / 20) train_acc: -24069.942663; val_acc: -22249.184240
(Epoch 9 / 20) train_acc: -23956.483160; val_acc: -22138.375839
(Epoch 10 / 20) train_acc: -23767.291472; val_acc: -21953.807456
(Iteration 21 / 40) loss: 21446.622670
(Epoch 11 / 20) train_acc: -23340.993891; val_acc: -21535.989780
(Epoch 12 / 20) train_acc: -22434.520582; val_acc: -20649.555723
(Epoch 13 / 20) train_acc: -21410.202070; val_acc: -19648.278429
(Epoch 14 / 20) train_acc: -20369.472620; val_acc: -18633.258228
(Epoch 15 / 20) train_acc: -19333.098594; val_acc: -17627.971436
(Iteration 31 / 40) loss: 20595.584323
(Epoch 16 / 20) train_acc: -18320.673507; val_acc: -16642.917560
(Epoch 17 / 20) train_acc: -17370.295687; val_acc: -15709.193444
(Epoch 18 / 20) train_acc: -16456.554631; val_acc: -14819.026648
(Epoch 19 / 20) train_acc: -15593.876870; val_acc: -13980.018860
(Epoch 20 / 20) train_acc: -14786.478780; val_acc: -13193.151827
###Markdown
Update rulesSo far we have used vanilla stochastic gradient descent (SGD) as our update rule. More sophisticated update rules can make it easier to train deep networks. We implement a few of the most commonly used update rules and compare them to vanilla SGD. SGD+MomentumStochastic gradient descent with momentum is a widely used update rule that tends to make deep networks converge faster than vanilla stochastic gradient descent.In the file `batchnormlstm/optim.py` we implement the SGD+momentum update rule in the function `sgd_momentum`, then run the following to check the implementation. We should see errors less than 1e-8.
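The expected values below are reproduced by the classic momentum update with a default momentum of 0.9, e.g. $0.9 \cdot 0.6 - 10^{-3} \cdot (-0.6) = 0.5406$ for the first velocity entry; a sketch under those assumed defaults:

```python
# a sketch of sgd_momentum, assuming config holds 'learning_rate', an optional
# 'momentum' (default 0.9) and the running 'velocity'
def sgd_momentum_sketch(w, dw, config):
    v = config.get('momentum', 0.9) * config['velocity'] - config['learning_rate'] * dw
    next_w = w + v              # step along the accumulated velocity
    config['velocity'] = v
    return next_w, config
```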
###Code
from batchnormlstm.optim import sgd_momentum
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
v = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-3, 'velocity': v}
next_w, _ = sgd_momentum(w, dw, config=config)
expected_next_w = np.asarray([
[ 0.1406, 0.20738947, 0.27417895, 0.34096842, 0.40775789],
[ 0.47454737, 0.54133684, 0.60812632, 0.67491579, 0.74170526],
[ 0.80849474, 0.87528421, 0.94207368, 1.00886316, 1.07565263],
[ 1.14244211, 1.20923158, 1.27602105, 1.34281053, 1.4096 ]])
expected_velocity = np.asarray([
[ 0.5406, 0.55475789, 0.56891579, 0.58307368, 0.59723158],
[ 0.61138947, 0.62554737, 0.63970526, 0.65386316, 0.66802105],
[ 0.68217895, 0.69633684, 0.71049474, 0.72465263, 0.73881053],
[ 0.75296842, 0.76712632, 0.78128421, 0.79544211, 0.8096 ]])
print('next_w error: ', rel_error(next_w, expected_next_w))
print('velocity error: ', rel_error(expected_velocity, config['velocity']))
###Output
next_w error: 8.882347033505819e-09
velocity error: 4.269287743278663e-09
###Markdown
RMSProp and AdamRMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients.In the file `batchnormlstm/optim.py`, implement the RMSProp update rule in the `rmsprop` function and implement the Adam update rule in the `adam` function, and check the implementations using the tests below.[1] Tijmen Tieleman and Geoffrey Hinton. "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude." COURSERA: Neural Networks for Machine Learning 4 (2012).[2] Diederik Kingma and Jimmy Ba, "Adam: A Method for Stochastic Optimization", ICLR 2015.
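The expected values in the tests below are consistent with the standard formulations using a decay rate of 0.99 and epsilon 1e-8 for RMSProp, and beta1 0.9, beta2 0.999, epsilon 1e-8 with a bias-corrected step at `t + 1` for Adam; a sketch under those assumed defaults:

```python
import numpy as np

def rmsprop_sketch(w, dw, config):
    cache = 0.99 * config['cache'] + 0.01 * dw ** 2   # running average of squared gradients
    next_w = w - config['learning_rate'] * dw / (np.sqrt(cache) + 1e-8)
    config['cache'] = cache
    return next_w, config

def adam_sketch(w, dw, config):
    t = config['t'] + 1
    m = 0.9 * config['m'] + 0.1 * dw                  # first moment estimate
    v = 0.999 * config['v'] + 0.001 * dw ** 2         # second moment estimate
    m_hat = m / (1 - 0.9 ** t)                        # bias corrections
    v_hat = v / (1 - 0.999 ** t)
    next_w = w - config['learning_rate'] * m_hat / (np.sqrt(v_hat) + 1e-8)
    config.update(m=m, v=v, t=t)
    return next_w, config
```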
###Code
# Test RMSProp implementation; we should see errors less than 1e-7
from batchnormlstm.optim import rmsprop
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
cache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'cache': cache}
next_w, _ = rmsprop(w, dw, config=config)
expected_next_w = np.asarray([
[-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247],
[-0.132737, -0.08078555, -0.02881884, 0.02316247, 0.07515774],
[ 0.12716641, 0.17918792, 0.23122175, 0.28326742, 0.33532447],
[ 0.38739248, 0.43947102, 0.49155973, 0.54365823, 0.59576619]])
expected_cache = np.asarray([
[ 0.5976, 0.6126277, 0.6277108, 0.64284931, 0.65804321],
[ 0.67329252, 0.68859723, 0.70395734, 0.71937285, 0.73484377],
[ 0.75037008, 0.7659518, 0.78158892, 0.79728144, 0.81302936],
[ 0.82883269, 0.84469141, 0.86060554, 0.87657507, 0.8926 ]])
print('next_w error: ', rel_error(expected_next_w, next_w))
print('cache error: ', rel_error(expected_cache, config['cache']))
# Test Adam implementation; we should see errors around 1e-7 or less
from batchnormlstm.optim import adam
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
m = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
v = np.linspace(0.7, 0.5, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'm': m, 'v': v, 't': 5}
next_w, _ = adam(w, dw, config=config)
expected_next_w = np.asarray([
[-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977],
[-0.1380274, -0.08544591, -0.03286534, 0.01971428, 0.0722929],
[ 0.1248705, 0.17744702, 0.23002243, 0.28259667, 0.33516969],
[ 0.38774145, 0.44031188, 0.49288093, 0.54544852, 0.59801459]])
expected_v = np.asarray([
[ 0.69966, 0.68908382, 0.67851319, 0.66794809, 0.65738853,],
[ 0.64683452, 0.63628604, 0.6257431, 0.61520571, 0.60467385,],
[ 0.59414753, 0.58362676, 0.57311152, 0.56260183, 0.55209767,],
[ 0.54159906, 0.53110598, 0.52061845, 0.51013645, 0.49966, ]])
expected_m = np.asarray([
[ 0.48, 0.49947368, 0.51894737, 0.53842105, 0.55789474],
[ 0.57736842, 0.59684211, 0.61631579, 0.63578947, 0.65526316],
[ 0.67473684, 0.69421053, 0.71368421, 0.73315789, 0.75263158],
[ 0.77210526, 0.79157895, 0.81105263, 0.83052632, 0.85 ]])
print('next_w error: ', rel_error(expected_next_w, next_w))
print('v error: ', rel_error(expected_v, config['v']))
print('m error: ', rel_error(expected_m, config['m']))
###Output
next_w error: 1.1395691798535431e-07
v error: 4.208314038113071e-09
m error: 4.214963193114416e-09
###Markdown
Final modelBy combining the features above (multiple hidden layers, batch normalization, dropout) and tuning the update rule and its hyperparameters, we can come up with a model that fits the data well. An example is shown below.
###Code
adv_model = None
################################################################################
# Train a AdvancedLSTM with modified features and rules. Store the model in #
# adv_model variable. #
################################################################################
hidden_dims = [20, 15, 20]
adv_model = AdvancedLSTM(hidden_dims, input_dim=28,
weight_scale=1e-2, dropout=0.95, use_batchnorm=True, reg=3e-5)
solver = Solver(adv_model, data,
num_epochs=30, batch_size=100,
update_rule='adam',
optim_config={
'learning_rate': 1e-2,
},
verbose=True, print_every=100,
lr_decay = 0.9)
solver.train(regress=True)
################################################################################
# #
################################################################################
###Output
(Iteration 1 / 5340) loss: 22429.565881
(Epoch 0 / 30) train_acc: -22142.952517; val_acc: -22784.897269
(Iteration 101 / 5340) loss: 20349.013528
(Epoch 1 / 30) train_acc: -18865.138875; val_acc: -19299.204119
(Iteration 201 / 5340) loss: 18602.975270
(Iteration 301 / 5340) loss: 17323.709701
(Epoch 2 / 30) train_acc: -16796.062704; val_acc: -17131.768075
(Iteration 401 / 5340) loss: 16288.072421
(Iteration 501 / 5340) loss: 15502.713236
(Epoch 3 / 30) train_acc: -15196.966181; val_acc: -15526.532841
(Iteration 601 / 5340) loss: 14649.562715
(Iteration 701 / 5340) loss: 14009.695086
(Epoch 4 / 30) train_acc: -14044.081025; val_acc: -14265.049174
(Iteration 801 / 5340) loss: 13185.206727
(Epoch 5 / 30) train_acc: -13031.742671; val_acc: -13231.504870
(Iteration 901 / 5340) loss: 13008.184389
(Iteration 1001 / 5340) loss: 12358.698903
(Epoch 6 / 30) train_acc: -12082.949525; val_acc: -12363.994974
(Iteration 1101 / 5340) loss: 11953.610956
(Iteration 1201 / 5340) loss: 11570.680937
(Epoch 7 / 30) train_acc: -11386.860170; val_acc: -11617.967127
(Iteration 1301 / 5340) loss: 11076.299291
(Iteration 1401 / 5340) loss: 10706.953638
(Epoch 8 / 30) train_acc: -10756.732469; val_acc: -10982.382432
(Iteration 1501 / 5340) loss: 10634.784156
(Iteration 1601 / 5340) loss: 10100.037639
(Epoch 9 / 30) train_acc: -10167.074312; val_acc: -10428.199456
(Iteration 1701 / 5340) loss: 9745.082310
(Epoch 10 / 30) train_acc: -9761.301357; val_acc: -9949.255366
(Iteration 1801 / 5340) loss: 9740.305786
(Iteration 1901 / 5340) loss: 9455.521302
(Epoch 11 / 30) train_acc: -9220.368611; val_acc: -9526.771915
(Iteration 2001 / 5340) loss: 9318.000634
(Iteration 2101 / 5340) loss: 8766.926771
(Epoch 12 / 30) train_acc: -8826.712583; val_acc: -9157.506234
(Iteration 2201 / 5340) loss: 8883.436453
(Iteration 2301 / 5340) loss: 8415.150049
(Epoch 13 / 30) train_acc: -8601.044907; val_acc: -8831.214089
(Iteration 2401 / 5340) loss: 8337.676423
(Epoch 14 / 30) train_acc: -8265.879308; val_acc: -8543.576560
(Iteration 2501 / 5340) loss: 8176.022965
(Iteration 2601 / 5340) loss: 8134.875731
(Epoch 15 / 30) train_acc: -7979.966729; val_acc: -8289.256179
(Iteration 2701 / 5340) loss: 8010.672002
(Iteration 2801 / 5340) loss: 7786.470749
(Epoch 16 / 30) train_acc: -7698.672004; val_acc: -8063.710700
(Iteration 2901 / 5340) loss: 7724.126279
(Iteration 3001 / 5340) loss: 7620.020365
(Epoch 17 / 30) train_acc: -7719.087292; val_acc: -7862.021490
(Iteration 3101 / 5340) loss: 7325.542304
(Iteration 3201 / 5340) loss: 7464.801944
(Epoch 18 / 30) train_acc: -7538.533580; val_acc: -7682.794647
(Iteration 3301 / 5340) loss: 7422.965915
(Epoch 19 / 30) train_acc: -7322.645703; val_acc: -7522.302291
(Iteration 3401 / 5340) loss: 7050.178851
(Iteration 3501 / 5340) loss: 6933.501588
(Epoch 20 / 30) train_acc: -7133.884757; val_acc: -7380.145795
(Iteration 3601 / 5340) loss: 7228.305912
(Iteration 3701 / 5340) loss: 6913.018620
(Epoch 21 / 30) train_acc: -6962.117160; val_acc: -7251.559253
(Iteration 3801 / 5340) loss: 7036.348716
(Iteration 3901 / 5340) loss: 6832.206765
(Epoch 22 / 30) train_acc: -6784.496720; val_acc: -7135.825761
(Iteration 4001 / 5340) loss: 7018.763070
(Epoch 23 / 30) train_acc: -6748.284937; val_acc: -7032.677722
(Iteration 4101 / 5340) loss: 6842.172335
(Iteration 4201 / 5340) loss: 6681.822179
(Epoch 24 / 30) train_acc: -6647.404915; val_acc: -6940.317673
(Iteration 4301 / 5340) loss: 6580.366273
(Iteration 4401 / 5340) loss: 6412.021725
(Epoch 25 / 30) train_acc: -6612.512929; val_acc: -6857.909244
(Iteration 4501 / 5340) loss: 6587.338017
(Iteration 4601 / 5340) loss: 6532.855847
(Epoch 26 / 30) train_acc: -6474.088960; val_acc: -6783.350387
(Iteration 4701 / 5340) loss: 6441.732856
(Iteration 4801 / 5340) loss: 6231.865239
(Epoch 27 / 30) train_acc: -6450.675438; val_acc: -6716.356471
(Iteration 4901 / 5340) loss: 6263.615659
(Epoch 28 / 30) train_acc: -6514.149270; val_acc: -6655.453157
(Iteration 5001 / 5340) loss: 6569.669732
(Iteration 5101 / 5340) loss: 6196.811852
(Epoch 29 / 30) train_acc: -6297.476843; val_acc: -6600.935016
(Iteration 5201 / 5340) loss: 6293.660362
(Iteration 5301 / 5340) loss: 6276.366139
(Epoch 30 / 30) train_acc: -6229.338944; val_acc: -6552.802483
###Markdown
Test the final modelRun the trained model on the held-out test set.
###Code
print('Test set accuracy: ')
solver.check_accuracy(data['X_test'], data['y_test'], regress=True)
###Output
Test set accuracy:
|
rulevetting/projects/csi_pecarn/notebooks/feature_visualization.ipynb | ###Markdown
Correlation of features within groups and their association with the outcome Group1: Consciousness
###Code
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
feat_conscious = ['HxLOC', 'TotalGCSManual', 'TotalGCS', 'AVPUDetails','AlteredMentalStatus', 'LOC','ControlType_x']
dfs_conscious=dfs[0].merge(dfs[3],how='left', on=['SITE', 'CaseID', 'StudySubjectID'])
dfs_conscious=dfs_conscious[feat_conscious]
# dfs_conscious.loc[:, 'ControlType_x'] = (dfs_conscious['ControlType_x'] == 'case').astype(int)
dfs_conscious = dfs_conscious.replace(['Y', 'YES', 'A'], 1)
dfs_conscious = dfs_conscious.replace(['N', 'NO'], 0)
dfs_conscious = dfs_conscious.replace(['15'], 15)
dfs_conscious = dfs_conscious.replace(['10'], 10)
dfs_conscious = dfs_conscious.replace(['14'], 14)
dfs_conscious = dfs_conscious.replace(['6'], 6)
dfs_conscious = dfs_conscious.replace(['8'], 8)
dfs_conscious = dfs_conscious.replace(['12'], 12)
dfs_conscious = dfs_conscious.replace(['5'], 5)
dfs_conscious = dfs_conscious.replace(['13'], 13)
dfs_conscious = dfs_conscious.replace(['11'],11)
dfs_conscious = dfs_conscious.replace(['9'], 9)
dfs_conscious = dfs_conscious.replace(['7','7T'], 7)
dfs_conscious = dfs_conscious.replace(['4'], 4)
dfs_conscious = dfs_conscious.replace(['ND', 'NA', '3'], float("NaN"))
dfs_conscious = dfs_conscious.replace(['U','V','N', 'P'], 0)
# dfs_conscious=dfs_conscious.fillna(dfs_conscious.median())
# print(dfs_conscious['TotalGCS'])
# print(pd.unique(dfs_conscious['TotalGCS']))
dfs_conscious = dfs_conscious.rename(columns={'ControlType_x': 'outcome'})
dfs_conscious_corr=dfs_conscious.corr(method='pearson')
# .style.background_gradient(cmap="Blues")
sns.heatmap(dfs_conscious_corr,cmap="coolwarm")
###Output
_____no_output_____
###Markdown
Group2: Complaint of pain in neck and age
###Code
# feat_pain = ['PtCompPainHead', 'PtCompPainFace', 'PtCompPainNeck', 'PtCompPainNeckMove', 'PtCompPainChest', 'PtCompPainBack', 'PtCompPainFlank', 'PtCompPainAbd', 'PtCompPainPelvis', 'PtCompPainExt']
# demog_df = dfs[4]
clean_key_col_names = lambda df: df.rename(columns={'site': 'SITE',
'caseid': 'CaseID',
'studysubjectid': 'StudySubjectID'})
demog_df = clean_key_col_names(dfs[4])
agegroup_df = pd.get_dummies(pd.cut(demog_df['AgeInYears'], bins=[0, 2, 6, 12, 16],
labels=['infant', 'preschool', 'school_age', 'adolescents'],
include_lowest=True), prefix='age')
agegroup_df=pd.concat([demog_df[['SITE', 'CaseID', 'StudySubjectID']], agegroup_df], axis=1)
feat_pain = ['PtCompPainNeck', 'PtCompPainNeckMove', 'age_infant', 'age_preschool', 'age_school_age', 'age_adolescents','PainNeck','ControlType_x']
dfs_pain=dfs[0].merge(dfs[3],how='left', on=['SITE', 'CaseID', 'StudySubjectID'])
dfs_pain=dfs_pain.merge(agegroup_df,how='left', on=['SITE', 'CaseID', 'StudySubjectID'])
dfs_pain=dfs_pain[feat_pain]
dfs_pain = dfs_pain.replace(['Y', 'YES', 'A'], 1)
dfs_pain = dfs_pain.replace(['N', 'NO'], 0)
dfs_pain = dfs_pain.replace(['ND', 'NA'], float("NaN"))
dfs_pain = dfs_pain.rename(columns={'ControlType_x': 'outcome'})
# dfs_pain=dfs_pain.fillna(dfs_pain.median())
# print(pd.unique(dfs_pain['PtCompPainNeckMove']))
dfs_pain_corr=dfs_pain.corr(method='pearson')
# .style.background_gradient(cmap="Blues")
sns.heatmap(dfs_pain_corr,cmap="coolwarm")
###Output
_____no_output_____
###Markdown
Group 3: Tenderness in neck
###Code
# feat_tender = ['PtTenderHead', 'PtTenderFace', 'PtTenderNeck', 'PtTenderNeckLevel', 'PtTenderNeckLevelC1', 'PtTenderNeckLevelC2', 'PtTenderNeckLevelC3', 'PtTenderNeckLevelC4', 'PtTenderNeckLevelC5', 'PtTenderNeckLevelC6', 'PtTenderNeckLevelC7', 'PtTenderNeckAnt', 'PtTenderNeckPos', 'PtTenderNeckLat', 'PtTenderNeckMid', 'PtTenderNeckOther', 'PtTenderChest', 'PtTenderBack', 'PtTenderFlank', 'PtTenderAbd', 'PtTenderPelvis', 'PtTenderExt']
feat_tender =['PtTenderNeck', 'PtTenderNeckLevel', 'PtTenderNeckLevelC1', 'PtTenderNeckLevelC2', 'PtTenderNeckLevelC3', 'PtTenderNeckLevelC4', 'PtTenderNeckLevelC5', 'PtTenderNeckLevelC6', 'PtTenderNeckLevelC7', 'PtTenderNeckAnt', 'PtTenderNeckPos', 'PtTenderNeckLat', 'PtTenderNeckMid', 'PtTenderNeckOther','PosMidNeckTenderness', 'TenderNeck','ControlType_x']
dfs_tender=dfs[0].merge(dfs[3],how='left', on=['SITE', 'CaseID', 'StudySubjectID'])
dfs_tender=dfs_tender[feat_tender]
dfs_tender = dfs_tender.replace(['Y', 'YES', 'A'], 1)
dfs_tender = dfs_tender.replace(['N', 'NO'], 0)
dfs_tender = dfs_tender.replace(['ND', 'NA'], float("NaN"))
# dfs_tender=dfs_tender.fillna(dfs_tender.median())
# print(dfs_tender)
dfs_tender = dfs_tender.rename(columns={'ControlType_x': 'outcome'})
hide_column_neck=["PtTenderNeck", "PtTenderNeckLevel","PtTenderNeckLevelC1","PtTenderNeckLevelC2", "PtTenderNeckLevelC3","PtTenderNeckLevelC4","PtTenderNeckLevelC5","PtTenderNeckLevelC6","PtTenderNeckLevelC7","PtTenderNeckAnt","PtTenderNeckPos","PtTenderNeckLat","PtTenderNeckMid","PtTenderNeckOther","PosMidNeckTenderness","TenderNeck"]
dfs_tender_corr=dfs_tender.corr(method='pearson')
# .style.background_gradient(cmap="Blues").hide_columns(hide_column_neck)
dfs_tender_corr['outcome']
plt.figure(dpi=250, figsize=(2, 4))
vals = dfs_tender_corr['outcome']
args = np.argsort(vals)
labs = vals.index.values[args]
ax = plt.subplot(111)
plt.barh(labs[:-1], vals[args][:-1])
plt.xlabel('Correlation w/ outcome')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
plt.show()
###Output
_____no_output_____
###Markdown
Group 4: Focal neurological deficits
###Code
feat_focal= ['PtParesthesias', 'PtSensoryLoss', 'PtExtremityWeakness','FocalNeuroFindings','ControlType_x']
dfs_focal=dfs[0].merge(dfs[3],how='left', on=['SITE', 'CaseID', 'StudySubjectID'])
dfs_focal=dfs_focal[feat_focal]
dfs_focal = dfs_focal.replace(['Y', 'YES', 'A'], 1)
dfs_focal = dfs_focal.replace(['N', 'NO'], 0)
dfs_focal = dfs_focal.replace(['3'], 3)
dfs_focal = dfs_focal.replace(['ND', 'NA'], float("NaN"))
dfs_focal = dfs_focal.rename(columns={'ControlType_x': 'outcome'})
# dfs_focal=dfs_focal.fillna(dfs_focal.median())
# print(pd.unique(dfs_focal['PtExtremityWeakness']))
dfs_focal_corr=dfs_focal.corr(method='pearson')
# .style.background_gradient(cmap="Blues")
sns.heatmap(dfs_focal_corr,cmap="coolwarm")
###Output
_____no_output_____
###Markdown
Group 5: Other parts of the body
###Code
feat_otherpain = ['PtCompPainHead', 'PtCompPainFace', 'PtCompPainExt', 'PtTenderHead', 'PtTenderFace', 'PtTenderExt','SubInj_Head', 'SubInj_Face', 'SubInj_Ext', 'SubInj_TorsoTrunk','ControlType_x']
dfs_otherpain=dfs[0].merge(dfs[3],how='left', on=['SITE', 'CaseID', 'StudySubjectID'])
dfs_otherpain=dfs_otherpain[feat_otherpain]
dfs_otherpain = dfs_otherpain.replace(['Y', 'YES', 'A'], 1)
dfs_otherpain = dfs_otherpain.replace(['N', 'NO'], 0)
dfs_otherpain = dfs_otherpain.replace(['ND', 'NA'], float("NaN"))
dfs_otherpain = dfs_otherpain.rename(columns={'ControlType_x': 'outcome'})
# dfs_focal=dfs_focal.fillna(dfs_focal.median())
# print(dfs_focal)
dfs_otherpain_corr=dfs_otherpain.corr(method='pearson')
# .style.background_gradient(cmap="Blues")
sns.heatmap(dfs_otherpain_corr,cmap="coolwarm")
###Output
_____no_output_____
###Markdown
Group 6: Injury mechanism
###Code
# feat_injury= ['InjuryPrimaryMechanism', 'HeadFirst', 'HeadFirstRegion','HighriskDiving', 'HighriskFall', 'HighriskHanging', 'HighriskHitByCar', 'HighriskMVC', 'HighriskOtherMV', 'AxialLoadAnyDoc', 'axialloadtop', 'Clotheslining','ControlType_x']
feat_injury= ['InjuryPrimaryMechanism', 'HeadFirst', 'HeadFirstRegion', 'ControlType_x']
dfs_injury=dfs[0].merge(dfs[6],how='left', on=['SITE', 'CaseID', 'StudySubjectID'])
dfs_injury=dfs_injury[feat_injury]
dfs_injury = dfs_injury.replace(['Y', 'YES', 'A'], 1)
dfs_injury = dfs_injury.replace(['N', 'NO'], 0)
dfs_injury = dfs_injury.replace(['ND', 'NA'], float("NaN"))
# dfs_injury['InjuryPrimaryMechanism'] = pd.to_numeric(dfs_injury['InjuryPrimaryMechanism'])
# dfs_injury['InjuryPrimaryMechanism'] = dfs_injury['InjuryPrimaryMechanism'].astype('Int64')
series_HeadFirstRegion = pd.get_dummies(dfs_injury.HeadFirstRegion, prefix='HeadFirstRegion')
dfs_injury=dfs_injury.drop(columns=['HeadFirstRegion'])
dfs_injury=pd.concat([series_HeadFirstRegion,dfs_injury], axis=1)
series_injury = pd.get_dummies(dfs_injury.InjuryPrimaryMechanism, prefix='InjuryMechanism')
dfs_injury=dfs_injury.drop(columns=['InjuryPrimaryMechanism'])
dfs_injury=pd.concat([series_injury,dfs_injury], axis=1)
# Map the coded injury mechanism categories to readable column names
injury_mechanism_names = {
    'InjuryMechanism_1': 'Motor Vehicle Collision',
    'InjuryMechanism_2': 'Other Motorized Transport Crash',
    'InjuryMechanism_3': 'Bike rider struck by moving vehicle',
    'InjuryMechanism_4': 'Bike collision or fall from bike',
    'InjuryMechanism_5': 'Other non-motorized transport struck by moving vehicle',
    'InjuryMechanism_6': 'Pedestrian struck by moving vehicle',
    'InjuryMechanism_7': 'Blunt injury to head/neck',
    'InjuryMechanism_8': 'Sports injury',
    'InjuryMechanism_9': 'Fall from elevation',
    'InjuryMechanism_10': 'Fall down stairs',
    'InjuryMechanism_11': 'Fall from standing/walking/running',
    'InjuryMechanism_12': 'Diving injury',
    'InjuryMechanism_13': 'Hanging injury',
    'InjuryMechanism_14': 'Other',
    'InjuryMechanism_20': 'Fall from non-motorized transport while riding',
}
dfs_injury=dfs_injury.rename(columns=injury_mechanism_names)
dfs_injury=dfs_injury.rename(columns={'ControlType_x': 'outcome'})
# dfs_focal=dfs_focal.fillna(dfs_focal.median())
# print(pd.unique(dfs_injury['HeadFirstRegion']))
# hide_column_injury=["HeadFirst","HighriskDiving","HighriskFall","HighriskHanging","HighriskHitByCar","HighriskMVC","HighriskOtherMV","AxialLoadAnyDoc","axialloadtop","Clotheslining"]
dfs_injury_corr=dfs_injury.corr(method='pearson')
dfs_injury_corr['outcome']
plt.figure(dpi=250, figsize=(2, 4))
vals = dfs_injury_corr['outcome']
args = np.argsort(vals)
labs = vals.index.values[args]
ax = plt.subplot(111)
plt.barh(labs[:-1], vals[args][:-1])
plt.xlabel('Correlation w/ outcome')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
plt.show()
###Output
_____no_output_____ |
.ipynb_checkpoints/dna_cnn_autoencoder_and_siamese_network-checkpoint.ipynb | ###Markdown
Detection and classification of genetic mutations in the BRCA1 gene on human chromosome 17 Abstract One of the major tasks in clinical genomics is the identification of mutations associated with human genetic diseases. Typically, genome-wide genetic studies identify a large number of variants that have a potential association with a disease. However, to narrow this search and to pinpoint the variants that most likely cause a disease, a number of methods have been developed. These methods use the evolutionary conservation of nucleotide positions and/or the functional consequences of mutations to distinguish disease-associated variants from neutral and benign variants. Breast cancer is a common disease. Each year, approximately 200,000 women in the United States are diagnosed with breast cancer, and one in nine American women will develop breast cancer in her lifetime. In 1994, the first gene associated with breast cancer - BRCA1 (for BReast CAncer 1) - was identified on chromosome 17. When individuals carry a mutated form of BRCA1, they have an increased risk of developing breast or ovarian cancer at some point in their lives. Children of parents with a BRCA1 mutation have a 50 percent chance of inheriting the gene mutation. "The simplest denomination of breast cancer is based upon inherited susceptibility to breast cancer vs sporadic occurrences of breast cancer. Heightened breast cancer risk may be due to a genetic alteration that increases susceptibility based upon an inherited heterozygous gene defect in for example BRCA1, TP53, PTEN or other tumor suppressors" The quote is taken from [DNA damage and breast cancer](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3168783/) by Jennifer D Davis and Shiaw-Yih Lin. Overview The current project is focused on applying deep learning approaches to detecting and classifying potential point mutations in a patient's BRCA1 gene. Triple-negative breast cancers often contain inactivation of the DNA repair gene BRCA1; in fact, as much as 30% of breast cancers are thought to have some degree of BRCA1 inactivation. A key feature of the structure of genes like BRCA1 is that their transcripts are typically subdivided into exon and intron regions. Exon regions are retained in the final mature mRNA molecule, while intron regions are cut out during post-transcriptional processing. My focus is on exon mutations. Point mutations Genetic mutations can be classified based on their effect on the protein structure:
- Missense: these mutations change the coded amino acid, hence they influence the final protein structure. Their effect can be uncertain or pathogenic.
- Nonsense: these mutations cause a shift in the reading frame or the formation of a premature stop codon (truncated protein). Very often these mutations are pathogenic.

But there are also other, rather infrequent but possible, classes of genetic mutations:
- Start-loss: these mutations affect the initiation codon (the very first amino acid of the protein - a Methionine), and their effect on the final protein structure is very often pathogenic.
- Stop-loss: even rarer than start-loss, these mutations affect the last protein amino acid. For these mutations too, the effect isn't easy to understand.

To clarify the strand issue, consider a stretch of double-stranded DNA which encodes a short peptide: the actual biological transcription process works from the template strand of the DNA.
This is the reason why the reference and the patient sequences are taken from the reverse strand (the minus strand, strand −1). The Data Several types of data are used:
- The reference and patient DNA of the BRCA1 gene and the exon boundaries were taken from the [National Library of Medicine](https://www.ncbi.nlm.nih.gov/genome/gdv/browser/genome/?id=GCF_000001405.39).
- All the DNA variants (mutations) were taken from the [Health University of Utah](https://arup.utah.edu/database/BRCA/Variants/BRCA1.php).

Patient data For the current research we have a patient DNA sequence with anomalies in exons 2, 3, 4, 7, 13 and 14. Most of them are `non pathogenic`. Only one (in exon 2) is a pathogenic variant, c.1A>G - mutation type: start loss. "Start-loss: these mutations affect the initiation codon, i.e. the very first amino acid of the protein (which is a Methionine), and their effect on the final protein structure (and therefore on the individual's clinical picture), is anything but easily deducible." The quote was taken from [Breda Genetics](https://bredagenetics.com/start-loss-mutations-in-rare-diseases/).
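Returning to the strand issue above: as a small, self-contained illustration (not part of the analysis pipeline), the sketch below computes the reverse complement that maps one strand onto the other. The example sequence is made up.

```python
# Illustration only: the reverse complement maps the coding strand to the template strand
COMPLEMENT = {'A': 'T', 'T': 'A', 'C': 'G', 'G': 'C', 'N': 'N'}

def reverse_complement(seq):
    """Return the reverse complement of a DNA string."""
    return ''.join(COMPLEMENT[base] for base in reversed(seq))

print(reverse_complement('ATGGCT'))  # -> 'AGCCAT'
```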
###Code
#Reading the patient DNA sequence
patient_data = pd.read_csv('dna_data/BRCA1_gene_sequence_patient.txt',header=None)
#Reading the reference DNA sequence
reference_data = pd.read_csv('dna_data/BRCA1_gene_sequence.txt', header=None)
#Reading the reference exon boundaries
exon_boundaries = pd.read_csv('dna_data/BRCA1_exon_bounderies.csv')
#Loading dna variants dataset
variants_data = pd.read_csv('dna_data/variants/dna_variants.csv')
###Output
_____no_output_____
###Markdown
Implementation After wide research I decided to use a convolutional autoencoder for anomaly detection in the patient DNA. For the anomaly detection part I transformed the DNA sequences into arrays of numbers (1 - 5), one for each nucleotide, with 5 standing for the letter 'N' (unknown). After many experiments I decided to use a different approach for the classification part: there I transformed the DNA sequences into arrays of one-hot encoded nucleotides, which seemed more robust and reliable. Because of the relatively small number of described DNA variants (about 240), I had to find something different from a regular classifier, so I decided to use a siamese network with a contrastive loss function for classifying the anomalies.
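To make the two encodings concrete before the real helper functions are defined below (`nucleotide_to_num` here and `nucleotide_to_vector` further down), here is a tiny sketch on a dummy five-letter sequence; it is an illustration only.

```python
# Illustration only: the two encodings used in this notebook
seq = "ACGTN"

numbers = [{"A": 1, "C": 2, "G": 3, "T": 4, "N": 5}[b] for b in seq]
one_hot = [{"A": [1, 0, 0, 0], "C": [0, 1, 0, 0], "G": [0, 0, 1, 0],
            "T": [0, 0, 0, 1], "N": [0, 0, 0, 0]}[b] for b in seq]

print(numbers)     # [1, 2, 3, 4, 5] -> used (normalized) by the autoencoder
print(one_hot[0])  # [1, 0, 0, 0]    -> used by the siamese network
```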
###Code
def vectorize(seq):
"""
    Vectorize the DNA sequence
"""
str_sequence = ''
arr_sequence = []
#Concatenate lines of strings
for line in seq[0]:
str_sequence += line
#Populate the vector with the nucleotides
for n in str_sequence:
arr_sequence.append(n)
return np.array(arr_sequence)
# Find the length of the longest exon
EXON_MAX_LENGTH = 0
for i in range(len(exon_boundaries)):
start = exon_boundaries.loc[i].start
end = exon_boundaries.loc[i].end
diff = end - start + 1
if diff > EXON_MAX_LENGTH:
EXON_MAX_LENGTH = diff
#Vectorize the referent and patient data
ref_vec = vectorize(reference_data)
pat_vec = vectorize(patient_data)
def normalize(seq):
"""
Normalize the sequence
"""
return seq / np.max(seq)
def nucleotide_to_num(sequence):
"""
    Transform a DNA sequence into a sequence of numbers, one per nucleotide
"""
ref_sequence_to_num = []
nucleotide_to_num = {
'A': 1.,
'C': 2.,
'G': 3.,
'T': 4.,
'N': 5.
}
    for nucleotide in sequence:
        ref_sequence_to_num.append(nucleotide_to_num[nucleotide])
return np.array(normalize(ref_sequence_to_num))
###Output
_____no_output_____
###Markdown
Exon boundaries The exon boundaries were taken from the [National Center for Biotechnology Information](https://www.ncbi.nlm.nih.gov/). They are basically the start and end positions of each exon according to the order of the Human Genome version called `Assembly GRCh38.p13 - Cr17`. For the current project only the exon sequences will be used. The following function extracts only the exons and makes their length an even number; the even length is needed because the autoencoder below uses stride-2 convolutions.
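A minimal shape check of that requirement (illustration only; the length 106 is an arbitrary even number, not an actual exon length): a stride-2 `Conv1D` halves the sequence length and the matching `Conv1DTranspose` doubles it again, so an odd length would not reconstruct to the original size.

```python
import numpy as np
from tensorflow.keras.layers import Conv1D, Conv1DTranspose

x = np.zeros((1, 106, 1), dtype="float32")                   # even-length dummy exon
down = Conv1D(8, 7, strides=2, padding="same")(x)            # length 106 -> 53
up = Conv1DTranspose(8, 7, strides=2, padding="same")(down)  # length 53 -> 106
print(down.shape, up.shape)
```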
###Code
def split_exons(seq):
    """Divide the DNA sequence into exons - discard the non-exon parts"""
    exon_seq = []
    for i in range(len(exon_boundaries)):
        start = exon_boundaries.loc[i].start
        end = exon_boundaries.loc[i].end
        ex = seq[start : end]
        # Pad with a single 'N' so that the exon length is an even number
        if len(ex) % 2 != 0:
            ex = np.hstack((ex, ['N']))
        exon_seq.append(nucleotide_to_num(ex))
    return exon_seq
#Extracting exons for the referent and patient sets
referent_exons = split_exons(ref_vec)
patient_exons = split_exons(pat_vec)
COPY_COUNT = 300
def create_dataset(X):
"""Create an 3D array with multiple copies of given sequence"""
Xs=[]
for i in range(COPY_COUNT):
Xs.append(X)
return np.array(Xs)
###Output
_____no_output_____
###Markdown
Convolutional reconstruction autoencoder The dataset for the anomaly detection part of the project is the DNA sequence of the BRCA1 gene itself. The idea is to train the model with the referent (healthy) DNA sequence; after that the model should be able to detect potential changes in the patient DNA. Since the input data is a 3D array, I decided to use a convolutional reconstruction autoencoder model. The model takes input of shape (batch_size, sequence_length, num_features) and returns output of the same shape. In this case sequence_length is the length of the exon being processed and num_features is 1 (the numerically encoded nucleotide). An autoencoder is a special type of neural network that is trained to copy its input to its output. It first encodes the sequence into a lower-dimensional latent representation, then decodes the latent representation back into a sequence. The autoencoder was trained to minimize the reconstruction error on the referent (normal) DNA only, and was then used to reconstruct the patient data. My hypothesis was that a mutated sequence would have a higher reconstruction error. Two `Conv1D` layers were used for the encoder part. For the "bottleneck" I used a `Flatten` layer followed by a `Dense` latent layer. The decoder part uses two `Conv1DTranspose` layers plus a final `Conv1DTranspose` with a single filter; the purpose of `Conv1DTranspose` is to increase the volume size back to the original spatial dimensions. A special class `MutationDetector` was created to encapsulate the autoencoder; this way it was easier to use it multiple times, once for each exon.
###Code
KRNL_SIZE = 7
latent_dim = 20
class MutationDetector(Model):
"""
    Encapsulate an autoencoder in a class.
    This way it can be used multiple times as a separate object.
"""
def __init__(self, input_a):
super(MutationDetector, self).__init__()
#Encoder part ----------------------------------------------
inputs = Input(shape=(input_a, 1))
conv = Conv1D(
filters=8, kernel_size=KRNL_SIZE, strides=2, padding="same", activation = 'relu')(inputs)
conv = Conv1D(
filters=4, kernel_size=KRNL_SIZE, strides=1, padding="same", activation="relu")(conv)
"""
Storing the shape of the last convolutional layer
to use it in the decoder part
"""
vol_size = K.int_shape(conv)
"""
Flatten the convolutional output to a 1d lattent space
Pass it to a dense layer
"""
flat = Flatten()(conv)
latent = Dense(latent_dim)(flat)
#Decoder part ----------------------------------------------
dense = Dense(np.prod(vol_size[1:]))(latent)
reshape = Reshape((vol_size[1], vol_size[2]))(dense)
conv_trans = Conv1DTranspose(
filters=4, kernel_size=KRNL_SIZE, strides=1, padding="same", activation = 'relu')(reshape)
conv_trans = Conv1DTranspose(
filters=8, kernel_size=KRNL_SIZE, strides=2,padding="same", activation = 'relu')(conv_trans)
outputs = Conv1DTranspose(
filters=1, kernel_size=KRNL_SIZE, padding="same", activation='sigmoid')(conv_trans)
self.autoencoder = Model(inputs, outputs)
def call(self, x):
return self.autoencoder(x)
def show_summary(self):
#Returns the summary of the current object
return self.autoencoder.summary()
def training_plot(hist,title):
"""
Plot training and validation losses
"""
fig, ax = plt.subplots(figsize=(6, 3), dpi=80)
plt.plot(hist.history['loss'], label='Training loss')
plt.plot(hist.history['val_loss'], label='Validation loss')
ax.set_title(title)
ax.set_ylabel('Loss (mae)')
ax.set_xlabel('Epoch')
plt.legend()
plt.show()
def calculate_threshold(predicted, referent):
"""
Calculating train loss and threshold
"""
train_loss = np.mean(np.abs(predicted - referent), axis=1)
threshold = np.max(train_loss)
plt.hist(train_loss, bins=50)
plt.xlabel("Train MAE loss")
plt.ylabel("No of samples")
plt.show()
return threshold
###Output
_____no_output_____
###Markdown
The reference_set was used as both the input and the target since this is a reconstruction model. Anomaly detection
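Before defining the per-exon detection loop, here is the anomaly rule in miniature (made-up numbers, illustration only): a nucleotide position is flagged when its reconstruction error exceeds the maximum error observed on the reference exon.

```python
import numpy as np

recon_error = np.array([0.01, 0.02, 0.35, 0.01])  # per-position |prediction - input|
threshold = 0.05                                   # max error seen on the reference data
print(np.where(recon_error > threshold))           # (array([2]),) -> position 2 is flagged
```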
###Code
anomalous_exon_names = []
def detect_anomalies(predicted, referent,threshold, exon_name):
"""
Detect the samples which are anomalies.
Calculate mean absolute error
"""
test_loss = np.mean(np.abs(predicted - referent), axis=1)
print('Anomaly detection - {}'.format(exon_name))
# Finding the anomalies
anomalies = test_loss > threshold
anomaly_sum = np.sum(anomalies)
print("Count of anomalous nucleotides: ", anomaly_sum)
#Print the anomaly nucleotide position
if anomaly_sum > 0:
anomalous_exon_names.append(exon_name)
print("Position of anomalous nucleotide: ", np.where(anomalies))
print("Test losses {}".format(test_loss[np.where(anomalies)]))
###Output
_____no_output_____
###Markdown
Here, for every exon a training set, a test set and an autoencoder were created. Every model was trained and made its predictions separately from the others.
###Code
for exon_n in range(len(referent_exons)):
#Creating the exon names
exon_name = 'exon_0' + str(exon_n+1) if exon_n+1 < 10 else 'exon_' + str(exon_n+1)
#Converting the sequences into required shape
reference_set = create_dataset(referent_exons[exon_n])
reference_set = reference_set.reshape(COPY_COUNT, len(referent_exons[exon_n]), 1)
#Calling the autoencoder
autoencoder = MutationDetector(reference_set.shape[1])
#Compile the model using optimizer adam and loss: mean absolute error
autoencoder.compile(optimizer=Adam(learning_rate=0.001), loss="mae")
tr_title = 'Training for {}'.format(exon_name)
print( '==========='+ tr_title +'============')
#Fitting the model
history = autoencoder.fit(reference_set, reference_set,
epochs=30,
batch_size=16,
validation_split=0.2,
verbose=2)
print(autoencoder.show_summary())
#Plot training result
training_plot(history, tr_title)
"""
    Predict the referent data to calculate the training loss;
    it is needed to determine the reconstruction error threshold
"""
predict = autoencoder.predict(reference_set)
# Get reconstruction loss threshold.
threshold = calculate_threshold(predict[0], reference_set[0])
print("Reconstruction error threshold:", threshold)
# Loading patient exon sequence
patient_set = patient_exons[exon_n].reshape(1, len(referent_exons[exon_n]), 1)
test_predict = autoencoder.predict(patient_set)
#Searching for anomalies
detect_anomalies(test_predict[0] , patient_set[0], threshold, exon_name)
print("======== End of processing {} ======== \n".format(exon_name))
"Mutations of BRCA1 gene were found in: {}".format(anomalous_exon_names)
###Output
_____no_output_____
###Markdown
All the anomalous exons were found by the autoencoder. The problem is that there is one additional one - exon_5 - which is not in the list of known anomalous exons of the patient DNA. The exons of interest were found, and now I had to classify them. Classifying the anomalous sequences Siamese network During my research I realized that there aren't that many described DNA variants for the different exons of the BRCA1 gene: the overall count of the variants is 241. So I came to the conclusion that I needed something different from a common classifier to determine which anomalous sequence belongs to which class (pathogenic, not pathogenic), mutation type or nucleotide change. I decided to try a `siamese network` with a `contrastive loss function`. Practical, real-world use cases of siamese networks include face recognition, signature verification, prescription pill identification, and many more! Furthermore, siamese networks can be trained with very little data, which is exactly what I was aiming for. The concept of a siamese network: two convolutional networks are merged to output the similarity of the two entries in a pair. DNA Variants The DNA variants were taken from the BRCA1 database of the [Health University of Utah](https://arup.utah.edu/database/BRCA/Variants/BRCA1.php). The data includes all the recorded mutations classified as `Definitely pathogenic`, `likely not pathogenic` and `not pathogenic`. All the nucleotide changes have mutation type `Start loss`, `Splice site`, `Missense` or `Nonsense`.
###Code
variants_data.head()
###Output
_____no_output_____
###Markdown
Only the variants of the anomalous exons were taken for the classification. The next step was to vectorize the DNA sequences of the variants. Because the network requires the sequences to have equal lengths, the shorter sequences were made as long as the longest one by appending a sufficient number of 'N's at their ends. Missing information for variants Unfortunately, during my research I couldn't find any variants for exons 1, 8 and 10, so I didn't consider them for the classification.
###Code
# Vectorize the patient BRCA1 gene sequence
vec_patient_sequence = vectorize(patient_data)
patient_exons = {}
exon_boundaries_df = exon_boundaries.set_index('name')
missing_data = ['exon_01','exon_08','exon_10']
# Create a dictionary with the anomalies detected by the autoencoder
for name in anomalous_exon_names:
if name in missing_data:
continue
address = exon_boundaries_df.loc[name]
patient_exons[name] = vec_patient_sequence[address.start : address.end]
def nucleotide_to_vector(sequence):
"""
    Transform a DNA sequence into an array of one-hot encoded nucleotides
"""
ref_sequence_to_vec = []
nucleotide_to_vec = {
'A': [1., 0., 0., 0.],
'C': [0., 1., 0., 0.],
'G': [0. ,0. ,1., 0.],
'T': [0., 0., 0., 1.],
'N': [0., 0., 0., 0.]
}
    for nucleotide in sequence:
        ref_sequence_to_vec.append(nucleotide_to_vec[nucleotide])
return np.array(ref_sequence_to_vec)
###Output
_____no_output_____
###Markdown
Creating the variants data frame with vectorized sequences, so that all the variants have the same length; the shorter sequences are padded with the letter 'N' at the end.
###Code
EXON_MAX_LENGTH = len(max(variants_data.variant, key=len))
vec_variants_dict = {'variant':[],'location':[],'classification':[],'nucleotide change':[],
'mutation type':[]}
for i in range(len(variants_data)):
row = variants_data.loc[i]
sequence = row.variant
# Make the exons as long as the longest one
sequence = sequence + 'N'* (EXON_MAX_LENGTH-len(sequence))
#Transform the nucleotides sequence to one hot vector
sequence = nucleotide_to_vector(sequence)
#Populate the variant dictionary
vec_variants_dict['variant'].append(sequence)
vec_variants_dict['location'].append(row.location.strip())
vec_variants_dict['classification'].append(row.classification.strip())
vec_variants_dict['nucleotide change'].append(row[' nucleotide change'].strip())
vec_variants_dict['mutation type'].append(row['mutation type'])
#Populate a dataframe with required data
vec_variants_df = pd.DataFrame(vec_variants_dict).sample(frac=1)
vec_variants_df = vec_variants_df.reset_index()
vec_variants_df = vec_variants_df.drop(columns=['index'])
vec_variants_df.head()
"""
Create and populate a dictionary with data, so that after the predictions,
the data of the predicted variant remains accessible
"""
pairs_dict={'current':[], 'candidate':[], 'current nuc change':[],
'current class':[],'paired class':[], 'paired nuc change':[], 'class':[]}
def populate_pairs_data(df, row, is_similar):
"""
Populate a dictionary with pairs of sequences
"""
current_seq = row.variant
current_nuc_change = row['nucleotide change']
for j in range(len(df)):
c_row = df.loc[j]
pairs_dict['current'].append(current_seq)
pairs_dict['candidate'].append(c_row.variant)
sim_class = np.array([1.]) if is_similar else np.array([0.])
pairs_dict['class'].append(sim_class)
pairs_dict['current nuc change'].append(current_nuc_change)
pairs_dict['paired nuc change'].append(c_row['nucleotide change'])
pairs_dict['paired class'].append(c_row.classification)
pairs_dict['current class'].append(row.classification)
###Output
_____no_output_____
###Markdown
Since there are two subnetworks, there must be two inputs to the model. When training siamese networks I need to have positive pairs and negative pairs:
- Positive pairs: two sequences that belong to the same class (pathogenic - pathogenic).
- Negative pairs: two sequences that belong to different classes (non pathogenic - pathogenic).
The next step was to create a dataset that fits the requirements of the siamese network inputs.
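As a toy illustration of the labeling scheme (not the full `generate_pairs` logic defined in the next cell): a pair is positive when both variants come from the same exon and share a classification, and negative when their classifications differ. The exon names and classes below are just examples.

```python
def pair_label(loc_a, cls_a, loc_b, cls_b):
    """Toy version of the pairing rule used below."""
    if loc_a == loc_b and cls_a == cls_b:
        return 1.0   # positive pair
    if cls_a != cls_b:
        return 0.0   # negative pair
    return None      # same class but different exon: not paired in this scheme

print(pair_label("exon_02", "pathogenic", "exon_02", "pathogenic"))      # 1.0
print(pair_label("exon_02", "pathogenic", "exon_13", "non pathogenic"))  # 0.0
```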
###Code
TRAIN_SIZE = 21000
pairs_df = pd.DataFrame()
def generate_pairs(df):
"""
initialize two empty lists to hold the (sequence, sequence) pairs and
labels to indicate if a pair is positive or negative
"""
global pairs_df
pair_sequences = []
pair_classes = []
non_pair_columns = ['class','current nuc change', 'paired nuc change',
'paired class','current class']
# loop over all dataset
for i in range(len(df)):
row = df.loc[i]
        # take the current sequence and its class
current_seq = row.variant
current_class = row.classification
current_location = row.location
#Create a Dataframe with variants with same classes of the current exon
current_similars_df=df[(df.location == current_location)
& (df.classification == current_class)].reset_index()
populate_pairs_data(current_similars_df, row, True)
#Create a Dataframe with variants with different classes of the current exon
current_non_similar_df=df[(df.classification != current_class)].reset_index()
populate_pairs_data(current_non_similar_df, row, False)
#Populate a dataframe and shuffle
pairs_df = pd.DataFrame(pairs_dict).sample(frac=1)
pairs_df = pairs_df.reset_index()
pairs_df = pairs_df.drop(columns=['index'])
#Construct and return the pairs
    pair_sequences = np.array(pairs_df.drop(columns=non_pair_columns, axis=1).values.tolist()).astype(float)
    pair_classes = np.array(pairs_df['class'].values.tolist()).astype(float)
return (pair_sequences[:TRAIN_SIZE],pair_sequences[TRAIN_SIZE:],
pair_classes[:TRAIN_SIZE],pair_classes[TRAIN_SIZE:])
#Defining train and test sets for the siamese network
train_pairs, test_pairs, train_classes, test_classes = generate_pairs(vec_variants_df)
print("Train pairs shape:{} - Train labels shape:{}".format(train_pairs.shape,train_classes.shape))
print("Test pairs shape:{} - Test labels shape:{}".format(test_pairs.shape,test_classes.shape))
###Output
Test pairs shape:(7525, 2, 312, 4) - Test labels shape:(7525, 1)
###Markdown
So the train data has 21,000 pairs, each sequence is 312 nucleotides long, and the last dimension represents the one-hot encoded nucleotide itself.
###Code
seq_input_shape = [train_pairs.shape[2],train_pairs.shape[3]]
###Output
_____no_output_____
###Markdown
The twin prototype I constructed the prototype of the twin model, defining three `Conv1D` layers with `relu` activation. Each convolutional layer has 64 filters of size 7 and is followed by a `MaxPooling1D` with pool size 2. `GlobalAveragePooling1D` performs the same operation as the 1D average pooling layer, except that the pool size is the entire input length: it computes a single average value for each of the input channels. A fully-connected layer was defined with size 48. Finally I normalized the features using L2 normalization before applying the contrastive loss.
###Code
def build_twin_model(input_shape):
inputs = Input(input_shape)
layer = Conv1D(filters=64, kernel_size=7, padding="same", activation="relu")(inputs)
layer = MaxPooling1D(pool_size=2)(layer)
layer = Conv1D(filters=64, kernel_size=7, padding="same", activation="relu")(layer)
layer = MaxPooling1D(pool_size=2)(layer)
layer = Conv1D(filters=64, kernel_size=7, padding="same", activation="relu")(layer)
layer = MaxPooling1D(pool_size=2)(layer)
pooled = GlobalAveragePooling1D()(layer)
dense = Dense(48)(pooled)
    #L2-normalize the embedding features before the distance computation
outputs = Lambda(lambda x: K.l2_normalize(x,axis=1))(dense)
twin = Model(inputs, outputs)
return twin
#Create the separate inputs for the twin nets
pos_input = Input(seq_input_shape)
neg_input = Input(seq_input_shape)
#Construct the twins
prototype_twin = build_twin_model(seq_input_shape)
pos_twin = prototype_twin(pos_input)
neg_twin = prototype_twin(neg_input)
###Output
_____no_output_____
###Markdown
Here a Lambda layer is used to compute the Euclidean distance between the outputs of the twin networks. This distance then becomes the output of the siamese network through a `Dense` layer with a sigmoid activation.
###Code
def euclidean_distance(vectors):
"""Calculating the euclidian distance between twin ouputs"""
# unpack the vectors into separate lists
(vec_A, vec_B) = vectors
# compute the sum of squared distances between the vectors
squared = K.sum(K.square(vec_A - vec_B), axis=1, keepdims=True)
# Return the distances
return K.sqrt(K.maximum(squared, K.epsilon()))
#construct the siamese network
distance = Lambda(euclidean_distance)([pos_twin, neg_twin])
outputs = Dense(1, activation="sigmoid")(distance)
siamese = Model(inputs=[pos_input, neg_input], outputs=outputs)
###Output
_____no_output_____
###Markdown
Contrastive loss For the current task `binary cross entropy` could be a valid loss function, but the goal of a siamese network isn't to classify a set of pairs - it is to differentiate between them. Essentially, contrastive loss evaluates how well the siamese network distinguishes between the pairs. There is a distance-based loss function for exactly this, called `contrastive loss`:
$$\text{Contrastive Loss} = \frac{1}{2} \, Y_{true} \, D^2 + \frac{1}{2} \, (1-Y_{true}) \, \max(margin - D, 0)^2$$
where:
- $Y_{true}$ - the ground-truth labels from the dataset. A value of 1 indicates that the two sequences in the pair are of the same class, while a value of 0 indicates that the sequences belong to two different classes.
- $D$ - the Euclidean distance between the outputs of the siamese network.
- $margin$ - typically this value is set to 1.
- The max function takes the larger of 0 and the margin minus the distance.
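A quick numeric check of the formula (plain Python, illustration only with made-up distances) shows the intended behavior: similar pairs are pushed together, while dissimilar pairs are only penalized while they are closer than the margin.

```python
def contrastive(y_true, d, margin=1.0):
    return 0.5 * y_true * d**2 + 0.5 * (1 - y_true) * max(margin - d, 0)**2

print(contrastive(1, 0.1))  # 0.005 -> similar pair, already close: small loss
print(contrastive(0, 0.1))  # 0.405 -> dissimilar pair, too close: large loss
print(contrastive(0, 1.2))  # 0.0   -> dissimilar pair, beyond the margin: no loss
```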
###Code
def contrastive_loss(true_labels, dist, margin=1):
"""
Calculate the contrastive loss between the true labels and
the predicted distances
"""
squared_dist = K.square(dist)
squared_margin = K.square(K.maximum(margin - dist, 0))
return K.mean(1/2 * true_labels * squared_dist + 1/2 * (1 - true_labels) * squared_margin)
siamese.compile(loss=contrastive_loss, optimizer=Adam(learning_rate=0.005))
siamese.summary()
# Fit the siamese network
history = siamese.fit(
[train_pairs[:,0], train_pairs[:,1]], [train_classes],
validation_data=([test_pairs[:,0], test_pairs[:,1]], [test_classes]),
batch_size=128,
epochs=25)
#Plot siamese loss functions
plt.figure()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'])
plt.show()
predict = siamese.predict([test_pairs[:, 0], test_pairs[:, 1]])
###Output
_____no_output_____
###Markdown
Here I have created a data frame with the ground truth labels and the predicted distances between the pairs. Keep in mind that the smaller the distance, the higher the probability that the sequences belong to the same class. So in fact a prediction of 0.9 should correspond to label 0 (opposite classes) and vice versa.
###Code
#Create a dataframe with predicted distances and ground truth labels
pr_data = {'predicted':predict.reshape( -1).round(2), 'actual':test_classes.reshape(-1)}
comparisson_df = pd.DataFrame(data = pr_data).astype('float32')
comparisson_df.head()
#Sorting the predicted distances
sorted_distances = np.sort(comparisson_df.predicted.unique())
# Getting the train part of the combined_df dataframe
combined_df = pairs_df[TRAIN_SIZE:]
combined_df = combined_df.reset_index()
#Join the columns with predictions
combined_df = combined_df.join(comparisson_df.predicted)
#Drop not needed columns
combined_df = combined_df.drop(columns=['current', 'candidate'])
###Output
_____no_output_____
###Markdown
Here I have constructed a data frame which helps to find the data related to the predicted variants.
###Code
combined_df.head()
###Output
_____no_output_____
###Markdown
The margin Checking the point from which we can say whether a pair is similar or not. Here I looped through the distances and calculated the count of false positive and false negative pairs. Then I found the distance where the false negative and false positive counts were minimal. In this case the distance was 0.58.
###Code
counts = {'dist':[],'false positive':[] ,'false negative':[]}
for d in sorted_distances:
    # Count false positive and false negative pairs for the current distance
false_positive_count = len(combined_df[(combined_df['predicted'] > d) & (combined_df['class'] == 1)])
false_negaitive_count = (len(combined_df[(combined_df['predicted'] < d) & (combined_df['class'] == 0)]))
#Populate a dictionary
counts['dist'].append(d)
counts['false positive'].append(false_positive_count)
counts['false negative'].append(false_negaitive_count)
print('For distance: {} -> False positive count: {} , False negative count: {} '.format(d,false_positive_count,
false_negaitive_count))
print('False positive median: {} - False negative median: {}'.format(np.median(counts['false positive']),
np.median(counts['false negative'])))
counts_df = pd.DataFrame(counts)
best_dist = counts_df.loc[(counts_df.dist > 0.57) & (counts_df.dist < 0.59)]
best_dist
# compute final accuracy on test sets
faults_count = best_dist['false positive'] + best_dist['false negative']
'* Accuracy : %0.2f%%' % (100 - faults_count / (len(test_classes) / 100))
###Output
_____no_output_____
###Markdown
The results It is time to get the final results. Pairs between the variants and the anomalous sequences were created and passed to the model.
###Code
def vec_to_sequence(seq):
"""
Converting the vector back to a string sequence
"""
sequence = ''
for c in range(len(seq)):
sequence+=seq[c]
return sequence
###Output
_____no_output_____
###Markdown
Constructing an array with pairs of anomalous patient data and variants data, and making predictions.
###Code
for exon_name in patient_exons.keys():
x = None
# Converting the vector back to a string sequence
sequence = vec_to_sequence(patient_exons[exon_name])
# Making the patient exon with the same length as the reference
anchor = sequence + 'N'* (EXON_MAX_LENGTH-len(sequence))
# Converting the sequence to one hot vector
anchor = nucleotide_to_vector(anchor)
# Construct a dataframe with variants of the current exon
anchor_variants = vec_variants_df[vec_variants_df.location == exon_name].reset_index()
#Construct pairs for the predictions
for v in range(len(anchor_variants)):
variant = anchor_variants.loc[v].variant
if x is None:
x = [[anchor], [variant]]
else:
x[0].append(anchor)
x[1].append(variant)
x = np.array(x)
"""
    Make a prediction for the current anomaly.
    Get the index of the pair with the smallest predicted distance (most similar variant).
Get the result from the variants dataframe
"""
index = np.argmin(siamese.predict([x[0], x[1]]), axis=0)
predicted_variant = anchor_variants.loc[index].values
    #Printing the result for the current exon
print("Result for {} - mutation type: {}, nucleotide change: {} class: {}"
.format(exon_name,predicted_variant[0][5],
predicted_variant[0][4],
predicted_variant[0][3]))
###Output
Result for exon_02 - mutation type: start loss, nucleotide change: c.1A>G class: pathogenic
Result for exon_03 - mutation type: missense, nucleotide change: c.133A>C class: non pathogenic
Result for exon_04 - mutation type: missense, nucleotide change: c.154C>A class: non pathogenic
Result for exon_05 - mutation type: -, nucleotide change: - class: non pathogenic
Result for exon_07 - mutation type: misssense, nucleotide change: c.469T>C class: non pathogenic
Result for exon_13 - mutation type: misssense, nucleotide change: c.4402A>C class: non pathogenic
Result for exon_14 - mutation type: misssense, nucleotide change: c.4520G>C class: non pathogenic
|
programs/category.ipynb | ###Markdown
BHSA and OSM: comparison on word categories We will investigate how the morphology marked up in the OSM corresponds to and differs from the BHSA linguistic features. In this notebook we investigate the word categories. The [OSM docs](http://openscriptures.github.io/morphhb/parsing/HebrewMorphologyCodes.html) specify a main category for part-of-speech, and additional subtypes for noun, pronoun, adjective, preposition and suffix. The BHSA specifies its categories in the features [sp](https://etcbc.github.io/bhsa/features/hebrew/2017/sp.html), [ls](https://etcbc.github.io/bhsa/features/hebrew/2017/ls.html), and [nametype](https://etcbc.github.io/bhsa/features/hebrew/2017/nametype.html). The purpose of this notebook is to see how they correlate. Mappings We collect the numbers of cooccurrences of OSM types and BHSA types. We do this separately for main words and for suffixes. We give examples where the rare cases occur. A rare case is less than 10% of the total number of cases. That means, if OSM type $t$ compares to BHS types $s_1, \ldots, s_n$, with frequencies $f_1, \ldots, f_n$, then we give cases of those $(t, s_i)$ such that $$f_i \le 0.10 \times \sum_{j=1}^{n} f_j$$ Results
* [categories.txt](categories.txt) overview of cooccurrences of OSM and BHSA categories
* [categoriesCases.txt](categoriesCases.txt) same, but with examples for the rarer combinations
* [allCategoriesCases.tsv](allCategoriesCases.tsv) all rarer cases, in biblical order
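A minimal sketch of the rare-case criterion above (the counts are hypothetical, illustration only): a BHSA type counts as rare for a given OSM type when it covers at most 10% of all co-occurrences of that OSM type.

```python
freqs = {"noun": 950, "adjective": 40, "verb": 10}   # hypothetical co-occurrence counts

total = sum(freqs.values())
rare = {bhs for (bhs, f) in freqs.items() if f <= 0.10 * total}
print(sorted(rare))  # ['adjective', 'verb']
```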
###Code
import operator
from functools import reduce
from tf.app import use
from helpers import show
###Output
_____no_output_____
###Markdown
Load data We load the BHSA data in the standard way, and we add the OSM data as a module with the features `osm` and `osm_sf`. Note that we only need to point TF to the right GitHub org/repo/directory in order to load the OSM features.
###Code
A = use("bhsa", mod="etcbc/bridging/tf", hoist=globals())
###Output
_____no_output_____
###Markdown
Let's quickly survey the values of the relevant BHSA features. We only work on words where the OSM has assigned morphology.
###Code
wordAll = [w for w in F.otype.s("word") if F.g_word_utf8.v(w) != ""]
len(wordAll)
wordOsm = [w for w in wordAll if F.osm.v(w)]
len(wordOsm)
wordBase = [w for w in wordOsm if F.osm.v(w) != "*"]
print(len(wordBase))
F.sp.freqList()
F.ls.freqList()
F.nametype.freqList()
F.prs.freqList()
F.uvf.freqList()
###Output
_____no_output_____
###Markdown
In order to read the results with more ease, we translate the codes to friendly names, found in the docs of OSM and BHSA.
###Code
naValues = {"NA", "N/A", "n/a", "none", "absent"}
NA = ""
missingValues = {None, ""}
MISSING = ""
unknownValues = {"unknown"}
UNKNOWN = "?"
PRS = "p"
noSubTypes = {"C", "D", "V"}
pspOSM = {
"": dict(
A="adjective",
C="conjunction",
D="adverb",
N="noun",
P="pronoun",
R="preposition",
S="suffix",
T="particle",
V="verb",
),
"A": dict(
a="adjective",
c="cardinal number",
g="gentilic",
o="ordinal number",
),
"N": dict(
c="common",
g="gentilic",
p="proper name",
x="unknown",
),
"P": dict(
d="demonstrative",
f="indefinite",
i="interrogative",
p="personal",
r="relative",
),
"R": dict(
d="definite article",
),
"S": dict(
d="directional he",
h="paragogic he",
n="paragogic nun",
p="pronominal",
),
"T": dict(
a="affirmation",
d="definite article",
e="exhortation",
i="interrogative",
j="interjection",
m="demonstrative",
n="negative",
o="direct object marker",
r="relative",
),
}
spBHS = dict(
art="article",
verb="verb",
subs="noun",
nmpr="proper noun",
advb="adverb",
prep="preposition",
conj="conjunction",
prps="personal pronoun",
prde="demonstrative pronoun",
prin="interrogative pronoun",
intj="interjection",
nega="negative particle",
inrg="interrogative particle",
adjv="adjective",
)
lsBHS = dict(
nmdi="distributive noun",
nmcp="copulative noun",
padv="potential adverb",
afad="anaphoric adverb",
ppre="potential preposition",
cjad="conjunctive adverb",
ordn="ordinal",
vbcp="copulative verb",
mult="noun of multitude",
focp="focus particle",
ques="interrogative particle",
gntl="gentilic",
quot="quotation verb",
card="cardinal",
none=MISSING,
)
nametypeBHS = dict(
pers="person",
mens="measurement unit",
gens="people",
topo="place",
ppde="demonstrative personal pronoun",
)
nametypeBHS.update(
{
"pers,gens,topo": "person",
"pers,gens": "person",
"gens,topo": "gentilic",
"pers,god": "person",
"topo,pers": "person",
}
)
def getValueBHS(x, feat=None):
return (
NA
if x in naValues
else MISSING
if x in missingValues
else UNKNOWN
if x in unknownValues
else feat[x]
if feat
else x
)
def getValueOSM(x):
if not x or len(x) < 2:
return UNKNOWN
tp = x[1]
tpName = pspOSM[""][tp]
subTpName = None if tp in noSubTypes or len(x) < 3 else pspOSM[tp][x[2]]
return ":".join((x for x in (tpName, subTpName) if x is not None))
def getTypeBHS(w):
return ":".join(
(
getValueBHS(F.sp.v(w), spBHS),
getValueBHS(F.ls.v(w), lsBHS),
getValueBHS(F.nametype.v(w), nametypeBHS),
)
)
def getTypeOSM(w):
return getValueOSM(F.osm.v(w))
def getSuffixTypeBHS(w):
prs = getValueBHS(F.prs.v(w))
if prs not in {NA, UNKNOWN}:
prs = PRS
return ":".join((prs, getValueBHS(F.uvf.v(w))))
def getSuffixTypeOSM(w):
return getValueOSM(F.osm_sf.v(w))
def getWordBHS(w):
return "T={} S={}".format(getTypeBHS(w), getSuffixTypeBHS(w))
def getWordOSM(w):
return "T={} [{}] S={} [{}]".format(
getTypeOSM(w),
F.osm.v(w),
getSuffixTypeOSM(w),
F.osm_sf.v(w),
)
def showFeatures(base):
cases = set()
categories = []
categoriesCases = []
mappings = {}
def makeMap(key, getBHS, getOSM):
BHSFromOSM = {}
OSMFromBHS = {}
for w in base:
osm = getOSM(w)
bhs = getBHS(w)
BHSFromOSM.setdefault(osm, {}).setdefault(bhs, set()).add(w)
OSMFromBHS.setdefault(bhs, {}).setdefault(osm, set()).add(w)
mappings.setdefault(key, {})[True] = BHSFromOSM
mappings.setdefault(key, {})[False] = OSMFromBHS
def showMap(key, direction):
dirLabel = "OSM ===> BHS" if direction else "BHS ===> OSM"
categories.append(
"""
---------------------------------------------------------------------------------
--- {} {}
---------------------------------------------------------------------------------
""".format(
key, dirLabel
)
)
categoriesCases.append(categories[-1])
cases = set()
for (item, itemData) in sorted(mappings[key][direction].items()):
categories.append("{:<40}".format(item))
categoriesCases.append(categories[-1])
totalCases = reduce(operator.add, (len(d) for d in itemData.values()), 0)
for (itemOther, ws) in sorted(
itemData.items(), key=lambda x: (-len(x[1]), x[0])
):
nws = len(ws)
perc = int(round(100 * nws / totalCases))
categories.append(
"\t{:<40} ({:>3}% = {:>6}x)".format(itemOther, perc, nws)
)
categoriesCases.append(categories[-1])
if nws < 0.1 * totalCases:
for w in sorted(ws)[0:10]:
categoriesCases.append(
show(
T,
F,
[w],
getWordBHS,
getWordOSM,
indent="\t\t\t\t",
asString=True,
)
)
cases.add(w)
if nws > 10:
categoriesCases.append("\t\t\t\tand {} more".format(nws - 10))
categories.append("\n{} ({}): {} cases".format(key, dirLabel, len(cases)))
categoriesCases.append(categories[-1])
return cases
def showFeature(key):
cases = set()
categories.append(
"""
o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o
o-o COMPARING FEATURE {}
o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o
""".format(
key
)
)
categoriesCases.append(categories[-1])
for direction in (True, False):
theseCases = showMap(key, direction)
cases |= theseCases
categories.append("\n{}: {} cases".format(key, len(cases)))
categoriesCases.append(categories[-1])
return cases
for (key, getBHS, getOSM) in (
("main", getTypeBHS, getTypeOSM),
("suffix", getSuffixTypeBHS, getSuffixTypeOSM),
):
makeMap(key, getBHS, getOSM)
cases |= showFeature(key)
categories.append("\n{}: {} cases".format("All features", len(cases)))
categoriesCases.append(categories[-1])
with open("categories.txt", "w") as fh:
fh.write("\n".join(categories))
with open("categoriesCases.txt", "w") as fh:
fh.write("\n".join(categoriesCases))
fields = """
passage
node
occurrence
OSMmorph
OSMtype
BHStype
OSMmorphSuffix
OSMsuffixType
BHSsuffixType
""".strip().split()
lineFormat = ("{}\t" * (len(fields) - 1)) + "{}\n"
with open("allCategoriesCases.tsv", "w") as fh:
fh.write(lineFormat.format(*fields))
for w in sorted(cases):
fh.write(
lineFormat.format(
"{} {}:{}".format(*T.sectionFromNode(w)),
w,
F.g_word_utf8.v(w),
F.osm.v(w),
getTypeOSM(w),
getTypeBHS(w),
F.osm_sf.v(w),
getSuffixTypeOSM(w),
getSuffixTypeBHS(w),
)
)
###Output
_____no_output_____
###Markdown
Feature comparison We are going to compare all features.
###Code
showFeatures(wordBase)
###Output
_____no_output_____ |
MaterialCursoPython/Fase 5 - Modulos externos/Tema 19 - Numpy/Teasers/10 - Filtrado de arrays.ipynb | ###Markdown
Array Filtering In this lesson we are going to review some functions for filtering our arrays. If you want more information about the available functions, don't forget to check out [the official documentation at this link](https://docs.scipy.org/doc/numpy-1.12.0/reference/index.html). The unique filter Returns a one-dimensional array with all duplicate elements removed.
###Code
import numpy as np

arr = np.random.randint(0, 4, 10)
arr
np.unique(arr)
###Output
_____no_output_____
###Markdown
The in1d filter Returns a one-dimensional array indicating whether the elements of a list are present in an array.
###Code
arr = np.random.randint(0, 4, 10)
arr
np.in1d([-1, 3, 2], arr)
###Output
_____no_output_____
###Markdown
The where filter This function is used to generate a filtered array from a condition and a default value.
###Code
import numpy as np
# Generate an array of random numbers
arr_1 = np.random.uniform(-5, 5, size=[3,2])
arr_1
# Create a filter that sets the negative values to 0
arr_2 = np.where(arr_1<0, 0, arr_1)
arr_2
# Add another filter that sets the positive values to 1 (True)
arr_2 = np.where(arr_2>0, True, arr_2)
arr_2
# We can create our own condition arrays
arr_3 = np.array([1, -2, 3, -4, 5])
arr_4 = np.array([-1, 2, -3, 4, -5])
arr_cond = np.array([True, False, True, False, True])
np.where(arr_cond, arr_3, arr_4)
###Output
_____no_output_____
###Markdown
Boolean filters
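Beyond the explicit boolean arrays used in the cells below, these methods are most often combined with a comparison, which itself produces a boolean array. A small extra example with illustrative values:

```python
import numpy as np

values = np.array([3, -1, 4, 1, 5])
print((values > 0).all())  # False -> not every element is positive
print((values > 0).any())  # True  -> at least one element is positive
```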
###Code
# Check whether all elements of an array are True
arr_bool = np.array([True,True,True,False])
arr_bool.all()
# Check whether at least one element of the array is True
arr_bool = np.array([True,True,True,False])
arr_bool.any()
# They can also be applied along a particular axis
arr_bool = np.array([[True,True],[False,True],[True,True]])
arr_bool
# Columns where all values are True
arr_bool.all(0)
# Rows where all values are True
arr_bool.all(1)
###Output
_____no_output_____ |