path (string, 7–265 chars) | concatenated_notebook (string, 46–17M chars)
---|---|
pfriskmgmt/test/test.ipynb | ###Markdown
Test notebook
###Code
import pfriskmgmt
pfriskmgmt.__name__
pfriskmgmt.__version__
pfriskmgmt.__path__
from pfriskmgmt import pfriskmgmt as rm
rm.get_ffme_returns().head()
rm.get_fff_returns().head()
'Test passed!'
###Output
_____no_output_____ |
ashrae_code/ashrae-training-lgbm-by-meter-type.ipynb | ###Markdown
ASHRAE - Great Energy Predictor III

Our aim in this competition is to predict the energy consumption of buildings. There are 4 types of energy to predict:
- 0: electricity
- 1: chilledwater
- 2: steam
- 3: hotwater

Electricity and water consumption may behave differently, so I train and predict a separate model for each meter type. This kernel builds on my previous [ASHRAE: Simple LGBM submission](https://www.kaggle.com/corochann/ashrae-simple-lgbm-submission) kernel.

**[Update] I published an "[Optuna tutorial for hyperparameter optimization](https://www.kaggle.com/corochann/optuna-tutorial-for-hyperparameter-optimization)" notebook. Please also check it :)**
###Code
import gc
import os
from pathlib import Path
import random
import sys
from tqdm import tqdm_notebook as tqdm
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
import seaborn as sns
from IPython.core.display import display, HTML
# --- plotly ---
from plotly import tools, subplots
import plotly.offline as py
py.init_notebook_mode(connected=True)
import plotly.graph_objs as go
import plotly.express as px
import plotly.figure_factory as ff
# --- models ---
from sklearn import preprocessing
from sklearn.model_selection import KFold
import lightgbm as lgb
import xgboost as xgb
import catboost as cb
# Original code from https://www.kaggle.com/gemartin/load-data-reduce-memory-usage by @gemartin
# Modified to support timestamp type, categorical type
# Modified to add option to use float16 or not. feather format does not support float16.
from pandas.api.types import is_datetime64_any_dtype as is_datetime
from pandas.api.types import is_categorical_dtype
def reduce_mem_usage(df, use_float16=False):
""" iterate through all the columns of a dataframe and modify the data type
to reduce memory usage.
"""
start_mem = df.memory_usage().sum() / 1024**2
print('Memory usage of dataframe is {:.2f} MB'.format(start_mem))
for col in df.columns:
if is_datetime(df[col]) or is_categorical_dtype(df[col]):
# skip datetime type or categorical type
continue
col_type = df[col].dtype
if col_type != object:
c_min = df[col].min()
c_max = df[col].max()
if str(col_type)[:3] == 'int':
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
else:
if use_float16 and c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
df[col] = df[col].astype(np.float32)
else:
df[col] = df[col].astype(np.float64)
else:
df[col] = df[col].astype('category')
end_mem = df.memory_usage().sum() / 1024**2
print('Memory usage after optimization is: {:.2f} MB'.format(end_mem))
print('Decreased by {:.1f}%'.format(100 * (start_mem - end_mem) / start_mem))
return df
!ls ../input
###Output
_____no_output_____
###Markdown
Fast data loading

This kernel uses the preprocessed data from my previous kernel, [ASHRAE: feather format for fast loading](https://www.kaggle.com/corochann/ashrae-feather-format-for-fast-loading), to accelerate data loading!
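For reference, a minimal sketch of how such feather files can be produced from the raw competition CSVs (the input paths and file names below are assumptions, not part of this kernel):

```python
import pandas as pd

# Hypothetical input location -- adjust to wherever the raw competition CSVs live.
for name in ['train', 'test', 'weather_train', 'weather_test',
             'building_metadata', 'sample_submission']:
    df = pd.read_csv(f'../input/ashrae-energy-prediction/{name}.csv')
    # Parsing timestamp columns to datetime before saving keeps dtypes intact.
    if 'timestamp' in df.columns:
        df['timestamp'] = pd.to_datetime(df['timestamp'])
    df.to_feather(f'{name}.feather')
```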
###Code
%%time
root = Path('../input/ashrae-feather-format-for-fast-loading')
train_df = pd.read_feather(root/'train.feather')
weather_train_df = pd.read_feather(root/'weather_train.feather')
building_meta_df = pd.read_feather(root/'building_metadata.feather')
train_df['date'] = train_df['timestamp'].dt.date
train_df['meter_reading_log1p'] = np.log1p(train_df['meter_reading'])
def plot_date_usage(train_df, meter=0, building_id=0):
train_temp_df = train_df[train_df['meter'] == meter]
train_temp_df = train_temp_df[train_temp_df['building_id'] == building_id]
train_temp_df_meter = train_temp_df.groupby('date')['meter_reading_log1p'].sum()
train_temp_df_meter = train_temp_df_meter.to_frame().reset_index()
fig = px.line(train_temp_df_meter, x='date', y='meter_reading_log1p')
fig.show()
plot_date_usage(train_df, meter=0, building_id=0)
###Output
_____no_output_____
###Markdown
Removing weird data on site_id 0

As you can see above, this data looks weird until May 20. It is reported in [this discussion](https://www.kaggle.com/c/ashrae-energy-prediction/discussion/113054656588) by @barnwellguy that **all electricity meters are 0 until May 20 for site_id == 0**. I will remove these rows from the training data. This corresponds to `building_id <= 104`.
###Code
building_meta_df[building_meta_df.site_id == 0]
train_df = train_df.query('not (building_id <= 104 & meter == 0 & timestamp <= "2016-05-20")')
###Output
_____no_output_____
###Markdown
Data preprocessing

Now, let's build a GBDT (Gradient Boosted Decision Tree) model to predict `meter_reading_log1p`. I will use LightGBM in this notebook.
###Code
debug = False
###Output
_____no_output_____
###Markdown
Add time features

Some features are introduced in https://www.kaggle.com/ryches/simple-lgbm-solution by @ryches. Features that are likely predictive:

Weather
- time of day
- holiday
- weekend
- cloud_coverage + lags
- dew_temperature + lags
- precip_depth + lags
- sea_level_pressure + lags
- wind_direction + lags
- wind_speed + lags

Train
- max, mean, min, std of the specific building historically

However, we should be careful when adding time features: we have only 1 year of training data, so including `date` makes the model overfit to the training data. How about `month`? It may be better to check its effect by cross validation. I do not use it in this kernel, to keep the model robust.
###Code
def preprocess(df):
df["hour"] = df["timestamp"].dt.hour
# df["day"] = df["timestamp"].dt.day
df["weekend"] = df["timestamp"].dt.weekday
df["month"] = df["timestamp"].dt.month
df["dayofweek"] = df["timestamp"].dt.dayofweek
# hour_rad = df["hour"].values / 24. * 2 * np.pi
# df["hour_sin"] = np.sin(hour_rad)
# df["hour_cos"] = np.cos(hour_rad)
preprocess(train_df)
df_group = train_df.groupby('building_id')['meter_reading_log1p']
building_mean = df_group.mean().astype(np.float16)
building_median = df_group.median().astype(np.float16)
building_min = df_group.min().astype(np.float16)
building_max = df_group.max().astype(np.float16)
building_std = df_group.std().astype(np.float16)
train_df['building_mean'] = train_df['building_id'].map(building_mean)
train_df['building_median'] = train_df['building_id'].map(building_median)
train_df['building_min'] = train_df['building_id'].map(building_min)
train_df['building_max'] = train_df['building_id'].map(building_max)
train_df['building_std'] = train_df['building_id'].map(building_std)
building_mean.head()
###Output
_____no_output_____
###Markdown
Fill NaN values in the weather dataframe by interpolation

The weather data has a lot of NaNs!! I try to fill these values by **interpolating** the data.
###Code
weather_train_df.head()
weather_train_df.describe()
weather_train_df.isna().sum()
weather_train_df.shape
weather_train_df.groupby('site_id').apply(lambda group: group.isna().sum())
weather_train_df = weather_train_df.groupby('site_id').apply(lambda group: group.interpolate(limit_direction='both'))
weather_train_df.groupby('site_id').apply(lambda group: group.isna().sum())
###Output
_____no_output_____
###Markdown
It seems the number of NaNs has been reduced by `interpolate`, but some properties never appear for a specific `site_id`, so NaNs remain for those features.

Lags

Adding some lag features.
###Code
def add_lag_feature(weather_df, window=3):
group_df = weather_df.groupby('site_id')
cols = ['air_temperature', 'cloud_coverage', 'dew_temperature', 'precip_depth_1_hr', 'sea_level_pressure', 'wind_direction', 'wind_speed']
rolled = group_df[cols].rolling(window=window, min_periods=0)
lag_mean = rolled.mean().reset_index().astype(np.float16)
lag_max = rolled.max().reset_index().astype(np.float16)
lag_min = rolled.min().reset_index().astype(np.float16)
lag_std = rolled.std().reset_index().astype(np.float16)
for col in cols:
weather_df[f'{col}_mean_lag{window}'] = lag_mean[col]
weather_df[f'{col}_max_lag{window}'] = lag_max[col]
weather_df[f'{col}_min_lag{window}'] = lag_min[col]
weather_df[f'{col}_std_lag{window}'] = lag_std[col]
add_lag_feature(weather_train_df, window=3)
add_lag_feature(weather_train_df, window=72)
weather_train_df.head()
weather_train_df.columns
# categorize primary_use column to reduce memory on merge...
primary_use_list = building_meta_df['primary_use'].unique()
primary_use_dict = {key: value for value, key in enumerate(primary_use_list)}
print('primary_use_dict: ', primary_use_dict)
building_meta_df['primary_use'] = building_meta_df['primary_use'].map(primary_use_dict)
gc.collect()
reduce_mem_usage(train_df, use_float16=True)
reduce_mem_usage(building_meta_df, use_float16=True)
reduce_mem_usage(weather_train_df, use_float16=True)
building_meta_df.head()
###Output
_____no_output_____
###Markdown
Train model

To win a kaggle competition, how you evaluate your model is important. What kind of cross validation strategy is suitable for this competition? This is time series data, so it would be better to consider a time-based split. However, this notebook is a simple tutorial, so I will proceed with KFold splitting without shuffling, so that at least near-term data is not included in validation.
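For reference, a minimal sketch of the time-based alternative mentioned above, using scikit-learn's `TimeSeriesSplit` (shown only as an illustration; it assumes a feature matrix `X_train` and target `y_train` sorted by timestamp, and the rest of this notebook keeps plain `KFold`):

```python
from sklearn.model_selection import TimeSeriesSplit

# Each fold validates on data that comes strictly after its training portion.
tscv = TimeSeriesSplit(n_splits=5)
for train_idx, valid_idx in tscv.split(X_train):
    X_tr, X_val = X_train.iloc[train_idx], X_train.iloc[valid_idx]
    y_tr, y_val = y_train[train_idx], y_train[valid_idx]
```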
###Code
category_cols = ['building_id', 'site_id', 'primary_use'] # , 'meter'
feature_cols = ['square_feet', 'year_built'] + [
'hour', 'weekend', # 'month' , 'dayofweek'
'building_median'] + [
'air_temperature', 'cloud_coverage',
'dew_temperature', 'precip_depth_1_hr', 'sea_level_pressure',
'wind_direction', 'wind_speed', 'air_temperature_mean_lag72',
'air_temperature_max_lag72', 'air_temperature_min_lag72',
'air_temperature_std_lag72', 'cloud_coverage_mean_lag72',
'dew_temperature_mean_lag72', 'precip_depth_1_hr_mean_lag72',
'sea_level_pressure_mean_lag72', 'wind_direction_mean_lag72',
'wind_speed_mean_lag72', 'air_temperature_mean_lag3',
'air_temperature_max_lag3',
'air_temperature_min_lag3', 'cloud_coverage_mean_lag3',
'dew_temperature_mean_lag3',
'precip_depth_1_hr_mean_lag3', 'sea_level_pressure_mean_lag3',
'wind_direction_mean_lag3', 'wind_speed_mean_lag3']
def create_X_y(train_df, target_meter):
target_train_df = train_df[train_df['meter'] == target_meter]
target_train_df = target_train_df.merge(building_meta_df, on='building_id', how='left')
target_train_df = target_train_df.merge(weather_train_df, on=['site_id', 'timestamp'], how='left')
X_train = target_train_df[feature_cols + category_cols]
y_train = target_train_df['meter_reading_log1p'].values
del target_train_df
return X_train, y_train
def fit_lgbm(train, val, devices=(-1,), seed=None, cat_features=None, num_rounds=1500, lr=0.1, bf=0.1):
"""Train Light GBM model"""
X_train, y_train = train
X_valid, y_valid = val
metric = 'l2'
params = {'num_leaves': 31,
'objective': 'regression',
# 'max_depth': -1,
'learning_rate': lr,
"boosting": "gbdt",
"bagging_freq": 5,
"bagging_fraction": bf,
"feature_fraction": 0.9,
"metric": metric,
# "verbosity": -1,
# 'reg_alpha': 0.1,
# 'reg_lambda': 0.3
}
device = devices[0]
if device == -1:
# use cpu
pass
else:
# use gpu
print(f'using gpu device_id {device}...')
params.update({'device': 'gpu', 'gpu_device_id': device})
params['seed'] = seed
early_stop = 20
verbose_eval = 20
d_train = lgb.Dataset(X_train, label=y_train, categorical_feature=cat_features)
d_valid = lgb.Dataset(X_valid, label=y_valid, categorical_feature=cat_features)
watchlist = [d_train, d_valid]
print('training LGB:')
model = lgb.train(params,
train_set=d_train,
num_boost_round=num_rounds,
valid_sets=watchlist,
verbose_eval=verbose_eval,
early_stopping_rounds=early_stop)
# predictions
y_pred_valid = model.predict(X_valid, num_iteration=model.best_iteration)
print('best_score', model.best_score)
log = {'train/mae': model.best_score['training']['l2'],
'valid/mae': model.best_score['valid_1']['l2']}
return model, y_pred_valid, log
folds = 5
seed = 666
shuffle = False
kf = KFold(n_splits=folds, shuffle=shuffle, random_state=seed if shuffle else None)  # random_state is only valid when shuffle=True
###Output
_____no_output_____
###Markdown
Train a model for each meter type
###Code
target_meter = 0
X_train, y_train = create_X_y(train_df, target_meter=target_meter)
y_valid_pred_total = np.zeros(X_train.shape[0])
gc.collect()
print('target_meter', target_meter, X_train.shape)
cat_features = [X_train.columns.get_loc(cat_col) for cat_col in category_cols]
print('cat_features', cat_features)
models0 = []
for train_idx, valid_idx in kf.split(X_train, y_train):
train_data = X_train.iloc[train_idx,:], y_train[train_idx]
valid_data = X_train.iloc[valid_idx,:], y_train[valid_idx]
print('train', len(train_idx), 'valid', len(valid_idx))
# model, y_pred_valid, log = fit_cb(train_data, valid_data, cat_features=cat_features, devices=[0,])
model, y_pred_valid, log = fit_lgbm(train_data, valid_data, cat_features=category_cols,
num_rounds=1000, lr=0.05, bf=0.7)
y_valid_pred_total[valid_idx] = y_pred_valid
models0.append(model)
gc.collect()
if debug:
break
sns.distplot(y_train)
del X_train, y_train
gc.collect()
def plot_feature_importance(model):
importance_df = pd.DataFrame(model.feature_importance(),
index=feature_cols + category_cols,
columns=['importance']).sort_values('importance')
fig, ax = plt.subplots(figsize=(8, 8))
importance_df.plot.barh(ax=ax)
fig.show()
target_meter = 1
X_train, y_train = create_X_y(train_df, target_meter=target_meter)
y_valid_pred_total = np.zeros(X_train.shape[0])
gc.collect()
print('target_meter', target_meter, X_train.shape)
cat_features = [X_train.columns.get_loc(cat_col) for cat_col in category_cols]
print('cat_features', cat_features)
models1 = []
for train_idx, valid_idx in kf.split(X_train, y_train):
train_data = X_train.iloc[train_idx,:], y_train[train_idx]
valid_data = X_train.iloc[valid_idx,:], y_train[valid_idx]
print('train', len(train_idx), 'valid', len(valid_idx))
# model, y_pred_valid, log = fit_cb(train_data, valid_data, cat_features=cat_features, devices=[0,])
model, y_pred_valid, log = fit_lgbm(train_data, valid_data, cat_features=category_cols, num_rounds=1000,
lr=0.05, bf=0.5)
y_valid_pred_total[valid_idx] = y_pred_valid
models1.append(model)
gc.collect()
if debug:
break
sns.distplot(y_train)
del X_train, y_train
gc.collect()
target_meter = 2
X_train, y_train = create_X_y(train_df, target_meter=target_meter)
y_valid_pred_total = np.zeros(X_train.shape[0])
gc.collect()
print('target_meter', target_meter, X_train.shape)
cat_features = [X_train.columns.get_loc(cat_col) for cat_col in category_cols]
print('cat_features', cat_features)
models2 = []
for train_idx, valid_idx in kf.split(X_train, y_train):
train_data = X_train.iloc[train_idx,:], y_train[train_idx]
valid_data = X_train.iloc[valid_idx,:], y_train[valid_idx]
print('train', len(train_idx), 'valid', len(valid_idx))
# model, y_pred_valid, log = fit_cb(train_data, valid_data, cat_features=cat_features, devices=[0,])
model, y_pred_valid, log = fit_lgbm(train_data, valid_data, cat_features=category_cols,
num_rounds=1000, lr=0.05, bf=0.8)
y_valid_pred_total[valid_idx] = y_pred_valid
models2.append(model)
gc.collect()
if debug:
break
sns.distplot(y_train)
del X_train, y_train
gc.collect()
target_meter = 3
X_train, y_train = create_X_y(train_df, target_meter=target_meter)
y_valid_pred_total = np.zeros(X_train.shape[0])
gc.collect()
print('target_meter', target_meter, X_train.shape)
cat_features = [X_train.columns.get_loc(cat_col) for cat_col in category_cols]
print('cat_features', cat_features)
models3 = []
for train_idx, valid_idx in kf.split(X_train, y_train):
train_data = X_train.iloc[train_idx,:], y_train[train_idx]
valid_data = X_train.iloc[valid_idx,:], y_train[valid_idx]
print('train', len(train_idx), 'valid', len(valid_idx))
# model, y_pred_valid, log = fit_cb(train_data, valid_data, cat_features=cat_features, devices=[0,])
model, y_pred_valid, log = fit_lgbm(train_data, valid_data, cat_features=category_cols, num_rounds=1000,
lr=0.03, bf=0.9)
y_valid_pred_total[valid_idx] = y_pred_valid
models3.append(model)
gc.collect()
if debug:
break
sns.distplot(y_train)
del X_train, y_train
gc.collect()
###Output
_____no_output_____
###Markdown
Prediction on test data
###Code
print('loading...')
test_df = pd.read_feather(root/'test.feather')
weather_test_df = pd.read_feather(root/'weather_test.feather')
print('preprocessing building...')
test_df['date'] = test_df['timestamp'].dt.date
preprocess(test_df)
test_df['building_mean'] = test_df['building_id'].map(building_mean)
test_df['building_median'] = test_df['building_id'].map(building_median)
test_df['building_min'] = test_df['building_id'].map(building_min)
test_df['building_max'] = test_df['building_id'].map(building_max)
test_df['building_std'] = test_df['building_id'].map(building_std)
print('preprocessing weather...')
weather_test_df = weather_test_df.groupby('site_id').apply(lambda group: group.interpolate(limit_direction='both'))
weather_test_df.groupby('site_id').apply(lambda group: group.isna().sum())
add_lag_feature(weather_test_df, window=3)
add_lag_feature(weather_test_df, window=72)
print('reduce mem usage...')
reduce_mem_usage(test_df, use_float16=True)
reduce_mem_usage(weather_test_df, use_float16=True)
gc.collect()
sample_submission = pd.read_feather(os.path.join(root, 'sample_submission.feather'))
reduce_mem_usage(sample_submission)
def create_X(test_df, target_meter):
target_test_df = test_df[test_df['meter'] == target_meter]
target_test_df = target_test_df.merge(building_meta_df, on='building_id', how='left')
target_test_df = target_test_df.merge(weather_test_df, on=['site_id', 'timestamp'], how='left')
X_test = target_test_df[feature_cols + category_cols]
return X_test
def pred(X_test, models, batch_size=1000000):
iterations = (X_test.shape[0] + batch_size -1) // batch_size
print('iterations', iterations)
y_test_pred_total = np.zeros(X_test.shape[0])
for i, model in enumerate(models):
print(f'predicting {i}-th model')
for k in tqdm(range(iterations)):
y_pred_test = model.predict(X_test[k*batch_size:(k+1)*batch_size], num_iteration=model.best_iteration)
y_test_pred_total[k*batch_size:(k+1)*batch_size] += y_pred_test
y_test_pred_total /= len(models)
return y_test_pred_total
%%time
X_test = create_X(test_df, target_meter=0)
gc.collect()
y_test0 = pred(X_test, models0)
sns.distplot(y_test0)
del X_test
gc.collect()
%%time
X_test = create_X(test_df, target_meter=1)
gc.collect()
y_test1 = pred(X_test, models1)
sns.distplot(y_test1)
del X_test
gc.collect()
%%time
X_test = create_X(test_df, target_meter=2)
gc.collect()
y_test2 = pred(X_test, models2)
sns.distplot(y_test2)
del X_test
gc.collect()
X_test = create_X(test_df, target_meter=3)
gc.collect()
y_test3 = pred(X_test, models3)
sns.distplot(y_test3)
del X_test
gc.collect()
sample_submission.loc[test_df['meter'] == 0, 'meter_reading'] = np.expm1(y_test0)
sample_submission.loc[test_df['meter'] == 1, 'meter_reading'] = np.expm1(y_test1)
sample_submission.loc[test_df['meter'] == 2, 'meter_reading'] = np.expm1(y_test2)
sample_submission.loc[test_df['meter'] == 3, 'meter_reading'] = np.expm1(y_test3)
sample_submission.to_csv('submission.csv', index=False, float_format='%.4f')
sample_submission.head()
np.log1p(sample_submission['meter_reading']).hist()
plot_feature_importance(models0[1])
plot_feature_importance(models1[1])
plot_feature_importance(models2[1])
plot_feature_importance(models3[1])
###Output
_____no_output_____ |
Notebooks/Shuffeling.ipynb | ###Markdown
Hand-labeling Negative-Positive Evaluator

Machine Learning Approach:
###Code
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
import time
import pandas as pd
import numpy as np
import re
import math
import statistics
from hatesonar import Sonar
def mostCommon(li):  # return the most frequent element of a list (majority vote)
st = set(li)
mx = -1
for each in st:
temp = li.count(each)
if mx < temp:
mx = temp
h = each
return h
#reading the labeling csv
data = pd.read_csv('../Data/dataset_handlabeling.csv', sep=',')
# analyze
analyzer = SentimentIntensityAnalyzer()
# load learnables
voters = list(data.columns.values)
voters.remove('AD_TEXT')
for index, row in data.iterrows():
decisions = []
for voter in voters:
decisions.append(row[voter].lower())
convergenceValue = mostCommon(decisions)
data.loc[index, 'vote'] = convergenceValue
# repeated random sampling: 20 trials, each drawing a random sample of 12 hand-labeled rows
averageAccuracy = []
averageRecall = []
for x in range(20):
selectedData = data.sample(12)
# dictionary analyzer
selectedData['pos'] = np.nan
selectedData['neg'] = np.nan
selectedData['neu'] = np.nan
selectedData['compound'] = np.nan
selectedData['flag'] = np.nan
matchCount = 0
totalCount = 0
for index, row in selectedData.iterrows():
# skip empty texts
if pd.isnull(row['AD_TEXT']):
continue
# calculate the senti-index for each advert text
tempRespond = analyzer.polarity_scores(row['AD_TEXT'])
selectedData.loc[index, 'pos'] = tempRespond['pos']
selectedData.loc[index, 'neg'] = tempRespond['neg']
selectedData.loc[index, 'neu'] = tempRespond['neu']
selectedData.loc[index, 'compound'] = tempRespond['compound']
# make the flags
if 0.5 <= tempRespond['compound'] and tempRespond['compound'] <= 1:
selectedData.loc[index, 'flag'] = 'positive'
elif -0.5 < tempRespond['compound'] and tempRespond['compound'] < 0.5:
selectedData.loc[index, 'flag'] = 'neutral'
else:
selectedData.loc[index, 'flag'] = 'negative'
# accuracy ignoring neutral votes
if selectedData.loc[index, 'flag'] == 'neutral': continue
if selectedData.loc[index, 'flag'] == row['vote']:
matchCount = matchCount + 1
totalCount = totalCount + 1
accuracy = matchCount / totalCount * 100
print( 'Accuracy Trail', accuracy, ' %' )
averageAccuracy.append(accuracy)
print ('Average Accuracy', statistics.mean(averageAccuracy) , ' %')
###Output
Average Accuracy 75.07936507936508 %
|
Triatomic_Linear_Molecule.ipynb | ###Markdown
Triatomic linear molecule

We will look at a model of a linear triatomic molecule with one central mass $M$ attached to two peripheral masses $m$. If the molecule is linear, we can approximate this as three masses attached by two springs of constants $k_{12}$ and $k_{23}$. (Tim Thomay with CP1, 2021, [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/))
###Code
%pylab notebook
###Output
_____no_output_____
###Markdown
$T_1 = \frac{1}{2}m_1(\dot{x_1})^2$

$U_1 = \frac{1}{2}k_{12} (x_2 - x_1)^2$

$L = \left(\frac{1}{2}m_1(\dot{x_1})^2 + \frac{1}{2}m_2(\dot{x_2})^2 + \frac{1}{2}m_3(\dot{x_3})^2\right) - \left(\frac{1}{2}k_{12} (x_2 - x_1)^2 + \frac{1}{2}k_{23} (x_3 - x_2)^2\right)$
###Code
###Output
_____no_output_____
###Markdown
The Euler–Lagrange equations $\frac{\partial L}{\partial x_i} = \frac{d}{dt}\left(\frac{\partial L}{\partial \dot{x}_i}\right)$ give, for the first mass, $k_{12} (x_2 - x_1) = m_1\ddot{x}_1$, and collecting all three equations of motion in matrix form yields the generalized eigenvalue problem $K \vec{x} = \omega^2 M \vec{x}$.
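A sketch of the explicit matrix form implied by the equations above (with the normal-mode ansatz $x_i(t) = a_i e^{i\omega t}$, so that $M\ddot{\vec{x}} = -K\vec{x}$ becomes $K\vec{a} = \omega^2 M\vec{a}$):

$$K = \begin{pmatrix} k_{12} & -k_{12} & 0 \\ -k_{12} & k_{12}+k_{23} & -k_{23} \\ 0 & -k_{23} & k_{23} \end{pmatrix}, \qquad M = \begin{pmatrix} m_1 & 0 & 0 \\ 0 & m_2 & 0 \\ 0 & 0 & m_3 \end{pmatrix}$$

These are exactly the arrays built in the code below; note that `scipy.linalg.eig(K, M)` returns the eigenvalues $\omega^2$, so the normal-mode frequencies are their square roots.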
###Code
k12 = 1
k23 = 1
K = np.array([
[ k12, -k12, 0 ],
[ -k12, k12 + k23 , -k23 ],
[ 0, -k23, k23 ]
])
m1 = 1
m3 = 1
m2 = 2
M = np.diag([m1,m2,m3])
from scipy.linalg import eig
omega, v = eig(K,M)
v
###Output
_____no_output_____ |
totembionet/notebooks/diapos.ipynb | ###Markdown
Mohamed TotemBionet

Authors: Mohamed Chennouf and Alexandre Clement. Supervisor: Hélène Collavizza. Co-supervisors: Gilles Bernot and Jean-Paul Comet.

Context
* Formal modelling of problems in biology
* Collaboration with a team of bio-informaticians
* Each PhD student in bio-informatics produces their own formalism

Problem statement
* Several prototypes that complement each other
* Several manual steps to launch the prototypes
* Parsing the data by hand
* Compiling code
* Installing several dependencies
* No way to save results
* Different languages

Is there a way to make the bio-informaticians' prototypes easier to use?

Solutions considered

First client presentation: mock-ups of SMBionet and GGEA.

Script approach
* Pros: easy to set up; few changes to the prototypes' code
* Cons: limited from an evolution standpoint; no real added value

Microservices approach
* Pros: highly scalable; can be used on any platform that has Docker
* Cons: requires creating an API for each prototype; requires a graphical interface to use these APIs; requires Docker

Jupyter Notebook approach inspired by CoLoMoTo
* Pros: can be packaged in Docker; interactive demonstrations; PhD students and bio-informaticians can create tutorial notebooks for their successors
* Cons: learning curve for Jupyter; adding libraries is costly; restricted to a single language

Solution chosen by the bio-informaticians: the Jupyter Notebook inspired by CoLoMoTo.

Problems encountered
* Learning and understanding the domain concepts
* The prerequisites and steps needed to create and add libraries
* The network: blocked access to Anaconda
* Code security: proprietary aspect
* The size of the Docker image

Final solution (component architecture of TotemBionet)

Demonstration!

Generating models with SMBionet
###Code
import smb_lib
smb = smb_lib.smbionet() #constructor smbionet
smb.runSmbionet("../resources/mucusOperonV3.smb")
###Output
_____no_output_____
###Markdown
Saving the models with Save Experiences
###Code
import save_experiences
save = save_experiences.save() # constructor save experiences
save.saveFileExperience("../resources/mucusOperonV3.out","myExperienceSmbionet")
save.downloadExperience("myExperienceSmbionet")
###Output
_____no_output_____
###Markdown
Selecting a model at random
###Code
import discrete_model
models = discrete_model.parse_smbionet_output_file('../resources/mucusOperonV3.out')
import model_picker
model = model_picker.pick_a_model_randomly(models)
print(model)
###Output
_____no_output_____
###Markdown
Generating the influence graph
###Code
model.influence_graph.show('circo')
###Output
_____no_output_____
###Markdown
Generating the resource table
###Code
import resource_table
resource_table.ResourceTableWithModel(model)
###Output
_____no_output_____
###Markdown
Saving the resource table in CSV format
###Code
save.saveExperience(resource_table.ResourceTableWithModel(model).as_data_frame().to_csv(), "resource_table.csv")
save.downloadExperience("resource_table.csv")
save.saveExperience(resource_table.ResourceTableWithModel(model).as_data_frame().to_latex(), "resource_table.tex")
save.downloadExperience("resource_table.tex")
###Output
_____no_output_____
###Markdown
Generating the asynchronous state graph
###Code
import ggea
graph = ggea.Graph(model)
graph
###Output
_____no_output_____
###Markdown
Simulations
###Code
import simu_net
for _ in range(2):
simulation = simu_net.Simulation(model)
simulation.steps = 10
result = simulation.run()
result.plot_evolution()
###Output
_____no_output_____ |
5- Sequence Models/Week 1/Dinosaur Island -- Character-level language model/Dinosaur Island RNN.ipynb | ###Markdown
Print training data (used for debugging, you can ignore this)
###Code
def print_ds(ds, num_examples=10):
for i, (x, y) in enumerate(trn_ds, 1):
print('*'*50)
x_str, y_str = '', ''
for idx in y:
y_str += trn_ds.idx_to_ch[idx.item()]
print(repr(y_str))
for t in x[1:]:
x_str += trn_ds.idx_to_ch[t.argmax().item()]
print(repr(x_str))
if i == num_examples:
break
print_ds(trn_ds, 5)
###Output
**************************************************
'aachenosaurus\n'
'aachenosaurus'
**************************************************
'aardonyx\n'
'aardonyx'
**************************************************
'abdallahsaurus\n'
'abdallahsaurus'
**************************************************
'abelisaurus\n'
'abelisaurus'
**************************************************
'abrictosaurus\n'
'abrictosaurus'
|
dev/_downloads/03db2d983950efa77a26beb0ac22b422/plot_20_rejecting_bad_data.ipynb | ###Markdown
Rejecting bad data spans
========================

This tutorial covers manual marking of bad spans of data, and automated rejection of data spans based on signal amplitude.

   :depth: 2

We begin as always by importing the necessary Python modules and loading some `example data `; to save memory we'll use a pre-filtered and downsampled version of the example data, and we'll also load an events array to use when converting the continuous data to epochs:
###Code
import os
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_filt-0-40_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
events_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_filt-0-40_raw-eve.fif')
events = mne.read_events(events_file)
###Output
_____no_output_____
###Markdown
Annotating bad spans of data
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The tutorial `tut-events-vs-annotations` describes how :class:`~mne.Annotations` can be read from embedded events in the raw recording file, and `tut-annotate-raw` describes in detail how to interactively annotate a :class:`~mne.io.Raw` data object. Here, we focus on best practices for annotating *bad* data spans so that they will be excluded from your analysis pipeline.

The ``reject_by_annotation`` parameter
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In the interactive ``raw.plot()`` window, the annotation controls can be opened by pressing :kbd:`a`. Here, new annotation labels can be created or existing annotation labels can be selected for use.
###Code
fig = raw.plot()
fig.canvas.key_press_event('a')
###Output
_____no_output_____
###Markdown
.. sidebar:: Annotating good spans

    The default "BAD\_" prefix for new labels can be removed simply by pressing the backspace key four times before typing your custom annotation label.

You can see that the default annotation label is "BAD\_"; this can be edited prior to pressing the "Add label" button to customize the label. The intent is that users can annotate with as many or as few labels as makes sense for their research needs, but that annotations marking spans that should be excluded from the analysis pipeline should all begin with "BAD" or "bad" (e.g., "bad_cough", "bad-eyes-closed", "bad door slamming", etc). When this practice is followed, many processing steps in MNE-Python will automatically exclude the "bad"-labelled spans of data; this behavior is controlled by a parameter ``reject_by_annotation`` that can be found in many MNE-Python functions or class constructors, including:

- creation of epoched data from continuous data (:class:`mne.Epochs`)
- independent components analysis (:class:`mne.preprocessing.ICA`)
- functions for finding heartbeat and blink artifacts (:func:`~mne.preprocessing.find_ecg_events`, :func:`~mne.preprocessing.find_eog_events`)
- covariance computations (:func:`mne.compute_raw_covariance`)
- power spectral density computation (:meth:`mne.io.Raw.plot_psd`, :func:`mne.time_frequency.psd_welch`)

For example, when creating epochs from continuous data, if ``reject_by_annotation=True`` the :class:`~mne.Epochs` constructor will drop any epoch that partially or fully overlaps with an annotated span that begins with "bad".

Generating annotations programmatically
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The `tut-artifact-overview` tutorial introduced the artifact detection functions :func:`~mne.preprocessing.find_eog_events` and :func:`~mne.preprocessing.find_ecg_events` (although that tutorial mostly relied on their higher-level wrappers :func:`~mne.preprocessing.create_eog_epochs` and :func:`~mne.preprocessing.create_ecg_epochs`). Here, for demonstration purposes, we make use of the lower-level artifact detection function to get an events array telling us where the blinks are, then automatically add "bad_blink" annotations around them (this is not necessary when using :func:`~mne.preprocessing.create_eog_epochs`, it is done here just to show how annotations are added non-interactively). We'll start the annotations 250 ms before the blink and end them 250 ms after it:
###Code
eog_events = mne.preprocessing.find_eog_events(raw)
onsets = eog_events[:, 0] / raw.info['sfreq'] - 0.25
durations = [0.5] * len(eog_events)
descriptions = ['bad blink'] * len(eog_events)
blink_annot = mne.Annotations(onsets, durations, descriptions,
orig_time=raw.info['meas_date'])
raw.set_annotations(blink_annot)
###Output
_____no_output_____
###Markdown
Now we can confirm that the annotations are centered on the EOG events. Since blinks are usually easiest to see in the EEG channels, we'll only plot EEG here:
###Code
eeg_picks = mne.pick_types(raw.info, meg=False, eeg=True)
raw.plot(events=eog_events, order=eeg_picks)
###Output
_____no_output_____
###Markdown
See the section `tut-section-programmatic-annotations` for more details on creating annotations programmatically.

Rejecting Epochs based on channel amplitude
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Besides "bad" annotations, the :class:`mne.Epochs` class constructor has another means of rejecting epochs, based on signal amplitude thresholds for each channel type. In the `overview tutorial` we saw an example of this: setting maximum acceptable peak-to-peak amplitudes for each channel type in an epoch, using the ``reject`` parameter. There is also a related parameter, ``flat``, that can be used to set *minimum* acceptable peak-to-peak amplitudes for each channel type in an epoch:
###Code
reject_criteria = dict(mag=3000e-15, # 3000 fT
grad=3000e-13, # 3000 fT/cm
eeg=100e-6, # 100 μV
eog=200e-6) # 200 μV
flat_criteria = dict(mag=1e-15, # 1 fT
grad=1e-13, # 1 fT/cm
eeg=1e-6) # 1 μV
###Output
_____no_output_____
###Markdown
The values that are appropriate are dataset- and hardware-dependent, so some trial-and-error may be necessary to find the correct balance between data quality and loss of power due to too many dropped epochs. Here, we've set the rejection criteria to be fairly stringent, for illustration purposes.

Two additional parameters, ``reject_tmin`` and ``reject_tmax``, are used to set the temporal window in which to calculate peak-to-peak amplitude for the purposes of epoch rejection. These default to the same ``tmin`` and ``tmax`` of the entire epoch. As one example, if you wanted to only apply the rejection thresholds to the portion of the epoch that occurs *before* the event marker around which the epoch is created, you could set ``reject_tmax=0``. A summary of the causes of rejected epochs can be generated with the :meth:`~mne.Epochs.plot_drop_log` method:
###Code
epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=0.5, reject_tmax=0,
reject=reject_criteria, flat=flat_criteria,
reject_by_annotation=False, preload=True)
epochs.plot_drop_log()
###Output
_____no_output_____
###Markdown
Notice that we've passed ``reject_by_annotation=False`` above, in order to isolate the effects of the rejection thresholds. If we re-run the epoching with ``reject_by_annotation=True`` (the default) we see that the rejections due to EEG and EOG channels have disappeared (suggesting that those channel fluctuations were probably blink-related, and were subsumed by rejections based on the "bad blink" label).
###Code
epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=0.5, reject_tmax=0,
reject=reject_criteria, flat=flat_criteria, preload=True)
epochs.plot_drop_log()
###Output
_____no_output_____
###Markdown
More importantly, note that *many* more epochs are rejected (~20% instead of ~2.5%) when rejecting based on the blink labels, underscoring why it is usually desirable to repair artifacts rather than exclude them.

The :meth:`~mne.Epochs.plot_drop_log` method is a visualization of an :class:`~mne.Epochs` attribute, namely ``epochs.drop_log``, which stores empty lists for retained epochs and lists of strings for dropped epochs, with the strings indicating the reason(s) why the epoch was dropped. For example:
###Code
print(epochs.drop_log)
###Output
_____no_output_____
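As an aside, a minimal sketch (not part of the original tutorial) of how one might tally the rejection reasons stored in ``epochs.drop_log``:

```python
from collections import Counter

# Count how often each channel / reason appears across the dropped epochs.
reason_counts = Counter(reason for entry in epochs.drop_log for reason in entry)
print(reason_counts)
```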
###Markdown
Finally, it should be noted that "dropped" epochs are not necessarily deleted from the :class:`~mne.Epochs` object right away. Above, we forced the dropping to happen when we created the :class:`~mne.Epochs` object by using the ``preload=True`` parameter. If we had not done that, the :class:`~mne.Epochs` object would have been `memory-mapped`_ (not loaded into RAM), in which case the criteria for dropping epochs are stored, and the actual dropping happens when the :class:`~mne.Epochs` data are finally loaded and used. There are several ways this can get triggered, such as:

- explicitly loading the data into RAM with the :meth:`~mne.Epochs.load_data` method
- plotting the data (:meth:`~mne.Epochs.plot`, :meth:`~mne.Epochs.plot_image`, etc)
- using the :meth:`~mne.Epochs.average` method to create an :class:`~mne.Evoked` object

You can also trigger dropping with the :meth:`~mne.Epochs.drop_bad` method; if ``reject`` and/or ``flat`` criteria have already been provided to the epochs constructor, :meth:`~mne.Epochs.drop_bad` can be used without arguments to simply delete the epochs already marked for removal (if the epochs have already been dropped, nothing further will happen):
###Code
epochs.drop_bad()
###Output
_____no_output_____
###Markdown
Alternatively, if rejection thresholds were not originally given to the :class:`~mne.Epochs` constructor, they can be passed to :meth:`~mne.Epochs.drop_bad` later instead; this can also be a way of imposing progressively more stringent rejection criteria:
###Code
stronger_reject_criteria = dict(mag=2000e-15, # 2000 fT
grad=2000e-13, # 2000 fT/cm
eeg=100e-6, # 100 μV
eog=100e-6) # 100 μV
epochs.drop_bad(reject=stronger_reject_criteria)
print(epochs.drop_log)
###Output
_____no_output_____ |
examples/Debug NumpyNetNN.ipynb | ###Markdown
Image classification as just vectors
###Code
images=image.load_images('data/digits')
data=image.images_to_vectors(images,verbose=False)
data.vectors/=255
summary(data)
data_train,data_test=split(data,test_size=0.2)
number_of_features=data.vectors.shape[1]
number_of_classes=len(data.target_names)
C=NumPyNetBackProp({
'input':number_of_features, # number of features
'hidden':[(5,'logistic'),], # sizes here are arbitrary
'output':(number_of_classes,'logistic'), # number of classes
'cost':'mse',
})
C.fit(data_train.vectors,data_train.targets,epochs=1000)
print("On Training Set:",C.percent_correct(data_train.vectors,data_train.targets))
print("On Test Set:",C.percent_correct(data_test.vectors,data_test.targets))
data_train.vectors.shape
data_train.targets
data_train.vectors.shape[1:]
import classy
classy.datasets.save_csv('digits_data.csv',data)
data=load_csv('digits_data.csv')
data_train,data_test=split(data,test_size=0.2)
number_of_features=data.vectors.shape[1]
number_of_classes=len(data.target_names)
C=NumPyNetBackProp({
'input':number_of_features, # number of features
'hidden':[(5,'logistic'),], # sizes here are arbitrary
'output':(number_of_classes,'logistic'), # number of classes
'cost':'mse',
},batch_size=100)
C.fit(data_train.vectors,data_train.targets,epochs=10000)
print("On Training Set:",C.percent_correct(data_train.vectors,data_train.targets))
print("On Test Set:",C.percent_correct(data_test.vectors,data_test.targets))
###Output
On Training Set: 96.79888656924147
On Test Set: 88.88888888888889
|
scripts/Basic Machine Learning Introduction.ipynb | ###Markdown
Importing our modules
###Code
import numpy as np
import pandas as pd
import sklearn as sk
from sklearn import datasets
from sklearn import svm
from sklearn import preprocessing
from sklearn.preprocessing import OneHotEncoder
from sklearn.cross_validation import train_test_split
from sklearn import metrics
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Pandas: Import Data from a CSV File into a Pandas DataFrame
###Code
filename_csv = '../datasets/IRIS.csv'
csv_data = pd.read_csv(filename_csv)
sk_data = datasets.load_iris()
#features = sk_data.data[:, :2] # we only take the first two features.
#targets = sk_data.target
print("Pandas Dataframe Describe method: \n")
print(csv_data.describe())
print("\nPandas Dataframe Mean method: \n")
print(csv_data.mean())
print("\n\nVariable 'sk_data' has a type of: ")
print(type(sk_data))
print("\n sk_data data: ")
print("\nVariable 'csv_type' has a type of: ")
print(type(csv_data))
print("\nCSV Data:")
print(csv_data)
###Output
Pandas Dataframe Describe method:
column1 column2 column3 column4 column5 column6 \
count 99.000000 99.000000 99.000000 100.000000 100.000000 100.000000
mean 5.926263 0.451278 3.040404 0.433849 3.851000 0.483319
std 0.851959 0.237209 0.441685 0.182881 1.785378 0.302445
min 4.300000 0.010000 2.000000 0.010000 1.000000 0.010000
25% 5.200000 0.250000 2.800000 0.333333 1.600000 0.101695
50% 5.900000 0.444444 3.000000 0.416667 4.500000 0.593220
75% 6.500000 0.611111 3.300000 0.541667 5.100000 0.694915
max 7.900000 0.999900 4.400000 0.999900 6.900000 0.999900
column7 column8
count 100.00000 100.000000
mean 1.21300 0.464048
std 0.74558 0.310207
min 0.10000 0.010000
25% 0.37500 0.114583
50% 1.40000 0.541667
75% 1.80000 0.708333
max 2.50000 0.999900
Pandas Dataframe Mean method:
column1 5.926263
column2 0.451278
column3 3.040404
column4 0.433849
column5 3.851000
column6 0.483319
column7 1.213000
column8 0.464048
dtype: float64
Variable 'sk_data' has a type of:
<class 'sklearn.datasets.base.Bunch'>
sk_data data:
Variable 'csv_type' has a type of:
<class 'pandas.core.frame.DataFrame'>
CSV Data:
column1 column2 column3 column4 column5 column6 column7 \
0 5.1 0.222222 3.5 0.625000 1.4 0.067797 0.2
1 4.9 NaN 3.0 0.416667 1.4 0.067797 0.2
2 NaN 0.111111 3.2 0.500000 1.3 0.050847 0.2
3 4.6 0.083333 NaN 0.458333 1.5 0.084746 0.2
4 5.0 0.194444 3.6 0.666667 1.4 0.067797 0.2
5 5.4 0.305556 3.9 0.791667 1.7 0.118644 0.4
6 4.6 0.083333 3.4 0.583333 1.4 0.067797 0.3
7 5.0 0.194444 3.4 0.583333 1.5 0.084746 0.2
8 4.4 0.027778 2.9 0.375000 1.4 0.067797 0.2
9 4.9 0.166667 3.1 0.458333 1.5 0.084746 0.1
10 5.4 0.305556 3.7 0.708333 1.5 0.084746 0.2
11 4.8 0.138889 3.4 0.583333 1.6 0.101695 0.2
12 4.8 0.138889 3.0 0.416667 1.4 0.067797 0.1
13 4.3 0.010000 3.0 0.416667 1.1 0.016949 0.1
14 5.8 0.416667 4.0 0.833333 1.2 0.033898 0.2
15 5.7 0.388889 4.4 0.999900 1.5 0.084746 0.4
16 5.4 0.305556 3.9 0.791667 1.3 0.050847 0.4
17 5.1 0.222222 3.5 0.625000 1.4 0.067797 0.3
18 5.7 0.388889 3.8 0.750000 1.7 0.118644 0.3
19 5.1 0.222222 3.8 0.750000 1.5 0.084746 0.3
20 5.4 0.305556 3.4 0.583333 1.7 0.118644 0.2
21 5.1 0.222222 3.7 0.708333 1.5 0.084746 0.4
22 4.6 0.083333 3.6 0.666667 1.0 0.010000 0.2
23 5.1 0.222222 3.3 0.541667 1.7 0.118644 0.5
24 4.8 0.138889 3.4 0.583333 1.9 0.152542 0.2
25 5.0 0.194444 3.0 0.416667 1.6 0.101695 0.2
26 5.0 0.194444 3.4 0.583333 1.6 0.101695 0.4
27 5.2 0.250000 3.5 0.625000 1.5 0.084746 0.2
28 5.2 0.250000 3.4 0.583333 1.4 0.067797 0.2
29 4.7 0.111111 3.2 0.500000 1.6 0.101695 0.2
.. ... ... ... ... ... ... ...
70 6.5 0.611111 3.0 0.416667 5.8 0.813559 2.2
71 7.6 0.916667 3.0 0.416667 6.6 0.949153 2.1
72 4.9 0.166667 2.5 0.208333 4.5 0.593220 1.7
73 7.3 0.833333 2.9 0.375000 6.3 0.898305 1.8
74 6.7 0.666667 2.5 0.208333 5.8 0.813559 1.8
75 7.2 0.805556 3.6 0.666667 6.1 0.864407 2.5
76 6.5 0.611111 3.2 0.500000 5.1 0.694915 2.0
77 6.4 0.583333 2.7 0.291667 5.3 0.728814 1.9
78 6.8 0.694444 3.0 0.416667 5.5 0.762712 2.1
79 5.7 0.388889 2.5 0.208333 5.0 0.677966 2.0
80 5.8 0.416667 2.8 0.333333 5.1 0.694915 2.4
81 6.4 0.583333 3.2 0.500000 5.3 0.728814 2.3
82 6.5 0.611111 3.0 0.416667 5.5 0.762712 1.8
83 7.7 0.944444 3.8 0.750000 6.7 0.966102 2.2
84 7.7 0.944444 2.6 0.250000 6.9 0.999900 2.3
85 6.0 0.472222 2.2 0.083333 5.0 0.677966 1.5
86 6.9 0.722222 3.2 0.500000 5.7 0.796610 2.3
87 5.6 0.361111 2.8 0.333333 4.9 0.661017 2.0
88 7.7 0.944444 2.8 0.333333 6.7 0.966102 2.0
89 6.3 0.555556 2.7 0.291667 4.9 0.661017 1.8
90 6.7 0.666667 3.3 0.541667 5.7 0.796610 2.1
91 7.2 0.805556 3.2 0.500000 6.0 0.847458 1.8
92 6.2 0.527778 2.8 0.333333 4.8 0.644068 1.8
93 6.1 0.500000 3.0 0.416667 4.9 0.661017 1.8
94 6.4 0.583333 2.8 0.333333 5.6 0.779661 2.1
95 7.2 0.805556 3.0 0.416667 5.8 0.813559 1.6
96 7.4 0.861111 2.8 0.333333 6.1 0.864407 1.9
97 7.9 0.999900 3.8 0.750000 6.4 0.915254 2.0
98 6.4 0.583333 2.8 0.333333 5.6 0.779661 2.2
99 6.3 0.555556 2.8 0.333333 5.1 0.694915 1.5
column8 target
0 0.041667 setosa
1 0.041667 setosa
2 0.041667 setosa
3 0.041667 setosa
4 0.041667 setosa
5 0.125000 setosa
6 0.083333 setosa
7 0.041667 setosa
8 0.041667 setosa
9 0.010000 setosa
10 0.041667 setosa
11 0.041667 setosa
12 0.010000 setosa
13 0.010000 setosa
14 0.041667 setosa
15 0.125000 setosa
16 0.125000 setosa
17 0.083333 setosa
18 0.083333 setosa
19 0.083333 setosa
20 0.041667 setosa
21 0.125000 setosa
22 0.041667 setosa
23 0.166667 setosa
24 0.041667 setosa
25 0.041667 setosa
26 0.125000 setosa
27 0.041667 setosa
28 0.041667 setosa
29 0.041667 setosa
.. ... ...
70 0.875000 virginica
71 0.833333 virginica
72 0.666667 virginica
73 0.708333 virginica
74 0.708333 virginica
75 0.999900 virginica
76 0.791667 virginica
77 0.750000 virginica
78 0.833333 virginica
79 0.791667 virginica
80 0.958333 virginica
81 0.916667 virginica
82 0.708333 virginica
83 0.875000 virginica
84 0.916667 virginica
85 0.583333 virginica
86 0.916667 virginica
87 0.791667 virginica
88 0.791667 virginica
89 0.708333 virginica
90 0.833333 virginica
91 0.708333 virginica
92 0.708333 virginica
93 0.708333 virginica
94 0.833333 virginica
95 0.625000 virginica
96 0.750000 virginica
97 0.791667 virginica
98 0.875000 virginica
99 0.583333 virginica
[100 rows x 9 columns]
###Markdown
Numpy: Simple Numpy NdArray vs Python List
###Code
python_list = [1, 2, 3]
numpy_array = np.array([1, 2, 3])
pandas_dataframe = pd.DataFrame(data=[1,2,3])
print("Python List:")
print(python_list)
print(type(python_list))
print("\nNumpy Array:")
print(numpy_array)
print(type(numpy_array))
print("\nPandas DataFrame:")
print(pandas_dataframe)
print(type(pandas_dataframe))
###Output
Python List:
[1, 2, 3]
<class 'list'>
Numpy Array:
[1 2 3]
<class 'numpy.ndarray'>
Pandas DataFrame:
0
0 1
1 2
2 3
<class 'pandas.core.frame.DataFrame'>
###Markdown
Pandas: Subset of columns
###Code
subset_columns = csv_data[['column1','column3','target']]
column1_cleaned = subset_columns[['column1']].fillna( subset_columns[['column1']].mean() )
column3_cleaned = subset_columns[['column3']].fillna( subset_columns[['column3']].mean() )
subset_columns.column1 = column1_cleaned
subset_columns.column3 = column3_cleaned
print(subset_columns)
###Output
column1 column3 target
0 5.100000 3.500000 setosa
1 4.900000 3.000000 setosa
2 5.926263 3.200000 setosa
3 4.600000 3.040404 setosa
4 5.000000 3.600000 setosa
5 5.400000 3.900000 setosa
6 4.600000 3.400000 setosa
7 5.000000 3.400000 setosa
8 4.400000 2.900000 setosa
9 4.900000 3.100000 setosa
10 5.400000 3.700000 setosa
11 4.800000 3.400000 setosa
12 4.800000 3.000000 setosa
13 4.300000 3.000000 setosa
14 5.800000 4.000000 setosa
15 5.700000 4.400000 setosa
16 5.400000 3.900000 setosa
17 5.100000 3.500000 setosa
18 5.700000 3.800000 setosa
19 5.100000 3.800000 setosa
20 5.400000 3.400000 setosa
21 5.100000 3.700000 setosa
22 4.600000 3.600000 setosa
23 5.100000 3.300000 setosa
24 4.800000 3.400000 setosa
25 5.000000 3.000000 setosa
26 5.000000 3.400000 setosa
27 5.200000 3.500000 setosa
28 5.200000 3.400000 setosa
29 4.700000 3.200000 setosa
.. ... ... ...
70 6.500000 3.000000 virginica
71 7.600000 3.000000 virginica
72 4.900000 2.500000 virginica
73 7.300000 2.900000 virginica
74 6.700000 2.500000 virginica
75 7.200000 3.600000 virginica
76 6.500000 3.200000 virginica
77 6.400000 2.700000 virginica
78 6.800000 3.000000 virginica
79 5.700000 2.500000 virginica
80 5.800000 2.800000 virginica
81 6.400000 3.200000 virginica
82 6.500000 3.000000 virginica
83 7.700000 3.800000 virginica
84 7.700000 2.600000 virginica
85 6.000000 2.200000 virginica
86 6.900000 3.200000 virginica
87 5.600000 2.800000 virginica
88 7.700000 2.800000 virginica
89 6.300000 2.700000 virginica
90 6.700000 3.300000 virginica
91 7.200000 3.200000 virginica
92 6.200000 2.800000 virginica
93 6.100000 3.000000 virginica
94 6.400000 2.800000 virginica
95 7.200000 3.000000 virginica
96 7.400000 2.800000 virginica
97 7.900000 3.800000 virginica
98 6.400000 2.800000 virginica
99 6.300000 2.800000 virginica
[100 rows x 3 columns]
###Markdown
Pandas: Subset of rows
###Code
setosa = subset_columns[csv_data.target == 'setosa']
versicolor = subset_columns[csv_data.target == 'versicolor']
virginica = subset_columns[csv_data.target == 'virginica']
print("Subset-rows created.")
setosa_x = setosa['column3'].values
setosa_y = setosa['column1'].values
versicolor_x = versicolor['column3'].values
versicolor_y = versicolor['column1'].values
virginica_x = virginica['column3'].values
virginica_y = virginica['column1'].values
f, axarr = plt.subplots(3, sharex=True, sharey=True)
axarr[0].plot(setosa_x, setosa_y, "bo")
axarr[0].set_title('Setosa')
axarr[1].set_title('Versicolor')
axarr[2].set_title('Virginica')
axarr[1].scatter(versicolor_x, versicolor_y)
axarr[2].scatter(virginica_x, virginica_y)
plt.show()
###Output
_____no_output_____
###Markdown
Building a SVM (Support Vector Machine) Model/Classifier
###Code
print("Preparing Data...")
# [5.1 , 3.5]
classifier_x = subset_columns[['column1','column3']].values
#[1,0,0] = setosa
#[0,1,0] = versicolor
#[0,0,1] = virginica
labels = subset_columns['target'].values
le = preprocessing.LabelEncoder()
le.fit(labels)
classifier_y = le.transform(labels)
print("Data Splitting:")
print("Shape before Split: ",classifier_x.shape,"-",classifier_y.shape)
X_train, X_test, y_train, y_test = train_test_split(classifier_x,
classifier_y)
print("Shape after Split: ",X_train.shape,"-",X_test.shape)
clf = svm.LinearSVC(max_iter=10)
print("Fitting...")
clf.fit(X=X_train,
y=y_train)
print(clf.coef_)
print("Predicting...")
y_pred = clf.predict(X_test)
print("#"*50)
print(metrics.confusion_matrix(y_test, y_pred))
print(metrics.classification_report(y_test, y_pred))
print(__doc__)
# Code source: Gaël Varoquaux
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from sklearn import datasets
from sklearn.decomposition import PCA
# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2] # we only take the first two features.
y = iris.target
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
plt.figure(2, figsize=(8, 6))
plt.clf()
# Plot the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Set1,
edgecolor='k')
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(())
plt.yticks(())
# To getter a better understanding of interaction of the dimensions
# plot the first three PCA dimensions
fig = plt.figure(1, figsize=(8, 6))
ax = Axes3D(fig, elev=-150, azim=110)
X_reduced = PCA(n_components=3).fit_transform(iris.data)
ax.scatter(X_reduced[:, 0], X_reduced[:, 1], X_reduced[:, 2], c=y,
cmap=plt.cm.Set1, edgecolor='k', s=40)
ax.set_title("First three PCA directions")
ax.set_xlabel("1st eigenvector")
ax.w_xaxis.set_ticklabels([])
ax.set_ylabel("2nd eigenvector")
ax.w_yaxis.set_ticklabels([])
ax.set_zlabel("3rd eigenvector")
ax.w_zaxis.set_ticklabels([])
plt.show()
###Output
Automatically created module for IPython interactive environment
|
ECG_Adversarial_Attacks_pynb.ipynb | ###Markdown
**Processing Data for White-Box Attack**
###Code
from google.colab import drive
drive.mount('/content/drive')
# reinstall updated versions
!pip install tensorflow
!pip install keras
!pip install cleverhans
# create folder called ECGadv-master
# store the downloaded entire repository from https://github.com/codespace123/ECGadv in ECG_advmaster
# indices for all CORRECT classifications done by ECG classifying model
# download the training data (training2017) from https://archive.physionet.org/challenge/2017/training2017.zip.
# store in folder called ECG_v1
# create a folder in ECG_v1 called Processed Data
import csv
import numpy as np
import scipy.io
csvfile_a = np.array(list(csv.reader(open("/content/drive/My Drive/ECGadv-master/data_select_A.csv"))))[:,3]
csvfile_a = [int(i) for i in csvfile_a]
csvfile_n = np.array(list(csv.reader(open("/content/drive/My Drive/ECGadv-master/data_select_N.csv"))))[:,3]
csvfile_n = [int(i) for i in csvfile_n]
csvfile_o = np.array(list(csv.reader(open("/content/drive/My Drive/ECGadv-master/data_select_O.csv"))))[:,3]
csvfile_o = [int(i) for i in csvfile_o]
csvfile_i = np.array(list(csv.reader(open("/content/drive/My Drive/ECGadv-master/data_select_i.csv"))))[:,3]
csvfile_i = [int(i) for i in csvfile_i]
print(csvfile_a)
# PROCESSING ALL THE TRAINING DATA TO BE ATTACKED
import scipy.io
from glob import glob
import numpy as np
from natsort import natsorted
files = natsorted(glob("/content/drive/My Drive/ECG_v1/training2017"+ "/*.mat"))
print(len(files))
data = np.zeros((8529, 9000,1))
num = 1
for path in files:
if path.find("(") == -1:
temp = scipy.io.loadmat(path)
temp = np.array(temp["val"])
temp = np.nan_to_num(temp)
temp = temp[:,:9000]
temp = temp - np.mean(temp)
temp = temp / np.std(temp)
if temp.shape[1] < 9000:
temp = np.pad(temp,((0,0),(0,9000 - temp.shape[1])), 'constant')
temp = np.expand_dims(temp, axis = 2)
data[num] = temp
num += 1
if path.find("(") != -1:
print("Duplicate")
print(path)
print(num-1)
np.save("/content/drive/My Drive/ECGadv-master/ProcessedData/agg_data.npy", data)
# Reload the combined dataset
data = np.load("/content/drive/My Drive/ECGadv-master/ProcessedData/agg_data.npy")
# Separating out the individual ECG data from the aggregate combined dataset into 4 datasets
import numpy as np
from tensorflow.keras.models import load_model
model = load_model("/content/drive/My Drive/ECG_v1/ecg_mit.hdf5")
for i in csvfile_a:
a = np.expand_dims(data[i,:,:],axis=0)
a = np.append(a, [[[0]]], axis = 1)
%cd
%cd /content/drive/My Drive/ECGadv-master/ProcessedData/A
np.save("A_{}".format(i),a)
for i in csvfile_n:
n = np.expand_dims(data[i,:,:],axis=0)
n = np.append(n, [[[1]]], axis = 1)
%cd
%cd /content/drive/My Drive/ECGadv-master/ProcessedData/N
np.save("N_{}".format(i),n)
for i in csvfile_o:
o = np.expand_dims(data[i,:,:],axis=0)
o = np.append(o, [[[2]]], axis = 1)
%cd
%cd /content/drive/My Drive/ECGadv-master/ProcessedData/O
np.save("O_{}".format(i),o)
for i in csvfile_i:
t = np.expand_dims(data[i,:,:],axis=0)
t = np.append(t, [[[3]]], axis = 1)
%cd
%cd /content/drive/My Drive/ECGadv-master/ProcessedData/Tilde
np.save("Tilde_{}".format(i),t)
dicti = {
0 : "/content/drive/My Drive/ECGadv-master/ProcessedData/A",
1 : "/content/drive/My Drive/ECGadv-master/ProcessedData/N",
2 : "/content/drive/My Drive/ECGadv-master/ProcessedData/O",
3 : "/content/drive/My Drive/ECGadv-master/ProcessedData/Tilde",
}
# MAKING RANDOM PAIRS OF ORIGINAL AND TARGET DATA
# create folder in ECG_v1 called PairedDataPhysio
# download Classifying Model from https://github.com/fernandoandreotti/cinc-challenge2017/blob/master/deeplearn-approach/ResNet_30s_34lay_16conv.hdf5
import numpy as np
from glob import glob
from tensorflow.keras.models import load_model
model = load_model("/content/drive/My Drive/ECG_v1/ResNet_30s_34lay_16conv.hdf5")
x = 0
y = 0
for j in range(4):
for k in range(4):
x = j
y = k
if(x == y):
continue
%cd
%cd /content/drive/My Drive/ECG_v1/PairedDataPhysio
p = "ECG_MIT_orig" + "{}".format(x) +"_target"+ "{}".format(y)
!mkdir $p
%cd $p
for z in range(10):
base_o = dicti[x]
base_t = dicti[y]
paths_o = sorted(glob(base_o + "/*.npy"))
paths_t = sorted(glob(base_t + "/*.npy"))
num_o = np.random.randint(len(paths_o))
num_t = np.random.randint(len(paths_t))
o = np.load(paths_o[num_o])
t = np.load(paths_t[num_t])
if model.predict(o[:,:-1,:]).argmax() != o[:,-1,:]:
print("You messed up.")
if model.predict(t[:,:-1,:]).argmax() != t[:,-1,:]:
print("You messed up v2")
%cd
%cd /content/drive/My Drive/ECG_v1/PairedDataPhysio
%cd $p
d = "ECG_Pair{}".format(z)
!mkdir $d
%cd $d
np.save("ECG_orig"+"{}_#{}".format(x,z),o)
np.save("ECG_target"+"{}_#{}".format(y,z),t)
###Output
_____no_output_____
###Markdown
**Boundary Attack Black Box**
###Code
# ATTACKING THE PROCESSED DATA
%cd
%cd /content/drive/My Drive/ECG_v1
for i in range(4):
for j in range(4):
if(i == j):
continue
for k in range(10):
title = "ECG_orig{}_target{}".format(i,j) +"/" + "ECG_Pair{}".format(k)
!python attack.py --input_dir ./PairedDataPhysio/$title/ --store_dir ./results_8000_Physio/$title/ --net ./ecg_mit.hdf5 --max_steps 8000 --s --candidate
###Output
_____no_output_____
###Markdown
**Processing Paired Data to be attacked with PhysioNet White Box**
###Code
# install previous versions only for white box
!pip install https://github.com/mind/wheels/releases/download/tf1.8-cpu/tensorflow-1.8.0-cp36-cp36m-linux_x86_64.whl
!pip install keras==2.2
!pip install cleverhans==2.1
# Creating a list with ALL INDICES OF ALL THE ORIGINAL ATTACK PAIRS FROM THE SOURCE CSV
# each element of the list contains: the name (ECG_orig{}_target{}/ECG_Pair{}), the original ECG index in agg_data, and the target ECG index in agg_data
import scipy.io
from glob import glob
import numpy as np
saved = []
from natsort import natsorted
agg_data = np.load("/content/drive/My Drive/ECGadv-master/ProcessedData/agg_data.npy").reshape((8529,9000))
agg_data = agg_data[:,:1000]
for i in range(4):
for j in range(4):
if(i == j):
continue
for k in range(10):
title = "ECG_orig{}_target{}".format(i,j) +"/" + "ECG_Pair{}".format(k)
files = natsorted(glob("/content/drive/My Drive/ECG_v1/PairedDataPhysio" + "/" + title + "/*.npy"))
individualSaved = [title]
for a in files:
# `files` contains 2 paths: the original ECG and the target ECG
print(a)
arr = np.load(a)
arr = np.squeeze(arr)[:1000]
index = (agg_data == arr).all(axis=1).argmax()
print(index)
individualSaved.append(index)
saved.append(individualSaved)
# Ordered list: saved[0] contains the pair name and the two ECG indices used in the attack
# (the 1st entry is the orig 0 -> target 1, pair 0 case);
# the 2nd element is the original index, the 3rd element is the target index
print(len(saved))
np.save("/content/drive/My Drive/ECG_v1/IndicesData.npy",saved)
import numpy as np
x= np.load("/content/drive/My Drive/ECG_v1/IndicesData.npy")
x= np.array(x)
# print(x)
# create dictionary with title and corresponding original ECG data set
dict_index_title = {}
z = 1
for i in range(0,120):
orig = int(x[i][1])
title = x[i][0]
dict_index_title.update({title:orig})
print(len(dict_index_title))
dicti = {
"0" : "data_select_A.csv",
"1" : "data_select_N.csv",
"2" : "data_select_O.csv",
"3" : "data_select_i.csv",
}
#WHITE BOX ATTACK CODE
import csv
import scipy.io
for i in range(120):
orig = x[i][1]
orig = int(orig)
print(orig)
title = x[i][0]
csvIndex = dicti[title[12:13]]
print(csvIndex)
csvfile = np.array(list(csv.reader(open("/content/drive/My Drive/ECGadv-master/" + csvIndex))))[:,3]
csvfile = [int(i) for i in csvfile]
csvfile = np.array(csvfile)
index = np.where(csvfile==orig)[0][0]
print(index)
print(csvfile[index])
index1 = index
index2 = index + 1
%cd
%cd /content/drive/My Drive/ECGadv-master
#smooth white box attack
!python ./cloud_eval_diff.py $csvIndex $index1 $index2
#l2 white box attack
#!python ./cloud_eval_l2.py $csvIndex $index1 $index2
###Output
_____no_output_____ |
ClusteringNotebook.ipynb | ###Markdown
Clustering Notebook We import some packages and our dataset.
###Code
!pip install --upgrade plotly
!pip install geopandas
!pip install sklearn
import pandas as pd
import numpy as np
from scipy.spatial.distance import cdist
import matplotlib.pyplot as plt
import sklearn
from sklearn import preprocessing
from sklearn.preprocessing import MinMaxScaler
from sklearn.cluster import KMeans
from itertools import product
from geopandas import GeoDataFrame
from shapely.geometry import Point
import plotly.express as px
import plotly.graph_objects as go
import geopy.distance
import seaborn as sns
!gdown https://drive.google.com/uc?id=1Jpz2zP4-gU8aN7hiF0e6GhFhnRMm1zTl
!unzip "/content/BA.zip" #should be changed to path where zip is downloaded (this line works if used on Colab)
###Output
Requirement already satisfied: plotly in /usr/local/lib/python3.7/dist-packages (4.4.1)
Collecting plotly
Downloading plotly-5.4.0-py2.py3-none-any.whl (25.3 MB)
[K |████████████████████████████████| 25.3 MB 1.8 MB/s
[?25hCollecting tenacity>=6.2.0
Downloading tenacity-8.0.1-py3-none-any.whl (24 kB)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from plotly) (1.15.0)
Installing collected packages: tenacity, plotly
Attempting uninstall: plotly
Found existing installation: plotly 4.4.1
Uninstalling plotly-4.4.1:
Successfully uninstalled plotly-4.4.1
Successfully installed plotly-5.4.0 tenacity-8.0.1
Collecting geopandas
Downloading geopandas-0.10.2-py2.py3-none-any.whl (1.0 MB)
[K |████████████████████████████████| 1.0 MB 4.2 MB/s
[?25hCollecting pyproj>=2.2.0
Downloading pyproj-3.2.1-cp37-cp37m-manylinux2010_x86_64.whl (6.3 MB)
[K |████████████████████████████████| 6.3 MB 29.3 MB/s
[?25hRequirement already satisfied: pandas>=0.25.0 in /usr/local/lib/python3.7/dist-packages (from geopandas) (1.1.5)
Collecting fiona>=1.8
Downloading Fiona-1.8.20-cp37-cp37m-manylinux1_x86_64.whl (15.4 MB)
[K |████████████████████████████████| 15.4 MB 33.0 MB/s
[?25hRequirement already satisfied: shapely>=1.6 in /usr/local/lib/python3.7/dist-packages (from geopandas) (1.8.0)
Requirement already satisfied: attrs>=17 in /usr/local/lib/python3.7/dist-packages (from fiona>=1.8->geopandas) (21.2.0)
Collecting click-plugins>=1.0
Downloading click_plugins-1.1.1-py2.py3-none-any.whl (7.5 kB)
Requirement already satisfied: six>=1.7 in /usr/local/lib/python3.7/dist-packages (from fiona>=1.8->geopandas) (1.15.0)
Collecting cligj>=0.5
Downloading cligj-0.7.2-py3-none-any.whl (7.1 kB)
Collecting munch
Downloading munch-2.5.0-py2.py3-none-any.whl (10 kB)
Requirement already satisfied: certifi in /usr/local/lib/python3.7/dist-packages (from fiona>=1.8->geopandas) (2021.10.8)
Requirement already satisfied: click>=4.0 in /usr/local/lib/python3.7/dist-packages (from fiona>=1.8->geopandas) (7.1.2)
Requirement already satisfied: setuptools in /usr/local/lib/python3.7/dist-packages (from fiona>=1.8->geopandas) (57.4.0)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas>=0.25.0->geopandas) (2018.9)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas>=0.25.0->geopandas) (2.8.2)
Requirement already satisfied: numpy>=1.15.4 in /usr/local/lib/python3.7/dist-packages (from pandas>=0.25.0->geopandas) (1.19.5)
Installing collected packages: munch, cligj, click-plugins, pyproj, fiona, geopandas
Successfully installed click-plugins-1.1.1 cligj-0.7.2 fiona-1.8.20 geopandas-0.10.2 munch-2.5.0 pyproj-3.2.1
Requirement already satisfied: sklearn in /usr/local/lib/python3.7/dist-packages (0.0)
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.7/dist-packages (from sklearn) (1.0.1)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->sklearn) (1.1.0)
Requirement already satisfied: numpy>=1.14.6 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->sklearn) (1.19.5)
Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->sklearn) (3.0.0)
Requirement already satisfied: scipy>=1.1.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->sklearn) (1.4.1)
Downloading...
From: https://drive.google.com/uc?id=1Jpz2zP4-gU8aN7hiF0e6GhFhnRMm1zTl
To: /content/BA.zip
100% 90.0M/90.0M [00:02<00:00, 37.1MB/s]
Archive: /content/BA.zip
inflating: 202011-capitalbikeshare-tripdata.csv
inflating: __MACOSX/._202011-capitalbikeshare-tripdata.csv
inflating: 202012-capitalbikeshare-tripdata.csv
inflating: __MACOSX/._202012-capitalbikeshare-tripdata.csv
inflating: 202101-capitalbikeshare-tripdata.csv
inflating: __MACOSX/._202101-capitalbikeshare-tripdata.csv
inflating: 202102-capitalbikeshare-tripdata.csv
inflating: __MACOSX/._202102-capitalbikeshare-tripdata.csv
inflating: 202103-capitalbikeshare-tripdata.csv
inflating: __MACOSX/._202103-capitalbikeshare-tripdata.csv
inflating: 202104-capitalbikeshare-tripdata.csv
inflating: __MACOSX/._202104-capitalbikeshare-tripdata.csv
inflating: 202105-capitalbikeshare-tripdata.csv
inflating: __MACOSX/._202105-capitalbikeshare-tripdata.csv
inflating: 202106-capitalbikeshare-tripdata.csv
inflating: __MACOSX/._202106-capitalbikeshare-tripdata.csv
inflating: 202107-capitalbikeshare-tripdata.csv
inflating: __MACOSX/._202107-capitalbikeshare-tripdata.csv
inflating: 202108-capitalbikeshare-tripdata.csv
inflating: __MACOSX/._202108-capitalbikeshare-tripdata.csv
inflating: 202109-capitalbikeshare-tripdata.csv
inflating: __MACOSX/._202109-capitalbikeshare-tripdata.csv
inflating: 202110-capitalbikeshare-tripdata.csv
inflating: __MACOSX/._202110-capitalbikeshare-tripdata.csv
inflating: history_data.csv
inflating: __MACOSX/._history_data.csv
inflating: locations_withLL_complete.xlsx
inflating: __MACOSX/._locations_withLL_complete.xlsx
###Markdown
We import our data
###Code
weather=pd.read_csv("/content/history_data.csv") #Again path has to be changed if not on Colab
nov = pd.read_csv("/content/202011-capitalbikeshare-tripdata.csv")
dec = pd.read_csv("/content/202012-capitalbikeshare-tripdata.csv")
jan = pd.read_csv("/content/202101-capitalbikeshare-tripdata.csv")
feb = pd.read_csv("/content/202102-capitalbikeshare-tripdata.csv")
mar = pd.read_csv("/content/202103-capitalbikeshare-tripdata.csv")
apr = pd.read_csv("/content/202104-capitalbikeshare-tripdata.csv")
may = pd.read_csv("/content/202105-capitalbikeshare-tripdata.csv")
jun = pd.read_csv("/content/202106-capitalbikeshare-tripdata.csv")
jul = pd.read_csv("/content/202107-capitalbikeshare-tripdata.csv")
aug = pd.read_csv("/content/202108-capitalbikeshare-tripdata.csv")
sep = pd.read_csv("/content/202109-capitalbikeshare-tripdata.csv")
oct = pd.read_csv("/content/202110-capitalbikeshare-tripdata.csv")
###Output
/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py:2718: DtypeWarning: Columns (5,7) have mixed types.Specify dtype option on import or set low_memory=False.
interactivity=interactivity, compiler=compiler, result=result)
###Markdown
Weather dataset prep
###Code
weather.head()
weather.describe()
###Output
_____no_output_____
###Markdown
We convert `date` and `time` from string to a date/time format.
###Code
weather['date'] = pd.to_datetime(weather['Date time']).dt.date
weather['time'] = pd.to_datetime(weather['Date time']).dt.time
###Output
_____no_output_____
###Markdown
We encode the `conditions` variable.
###Code
le = preprocessing.LabelEncoder()
weather["Conditions_enc"] = le.fit_transform(weather["Conditions"])
###Output
_____no_output_____
###Markdown
We remove columns that are not useful.
###Code
weather = weather.drop(columns=["Minimum Temperature","Maximum Temperature","Heat Index"])
###Output
_____no_output_____
###Markdown
All remaining NAs in the dataset represent a phenomenon, e.g. rain, that didn't happen. We thus replace them with 0s.
###Code
weather = weather.fillna(0)
weather.head()
###Output
_____no_output_____
###Markdown
Rides dataset prep We combine all the rides' monthly datasets into a single one.
###Code
rides=pd.concat([nov,dec,jan,feb,mar,apr,may,jun,jul,aug,sep,oct])
rides.head()
###Output
_____no_output_____
###Markdown
We remove values with missing `start_lat`, `start_lng`, `end_lat`, or `end_lng`.
###Code
na_end_coords = rides[(rides['end_lat'].isnull()) | (rides['end_lng'].isnull())]
print(f"Entries with missing end point coordinates: {len(na_end_coords)} ")
na_start_coords = rides[(rides['start_lat'].isnull()) | (rides['start_lng'].isnull())]
print(f"Entries with missing start point coordinates: {len(na_start_coords)} ")
rides = rides[ rides['end_lat'].notnull() & rides['end_lng'].notnull() & rides['start_lat'].notnull() & rides['start_lng'].notnull() ]
print(f"Number of rides after removal of rows with missing coordinates: {len(rides)}")
###Output
Entries with missing end point coordinates: 4809
Entries with missing start point coordinates: 2
Number of rides after removal of rows with missing coordinates: 2593432
###Markdown
We change `start_station_id` or `end_station_id` to `undocked` for rides where they are missing.
###Code
rides.loc[:,'start_station_id'].fillna('undocked',inplace=True)
rides.loc[:,'end_station_id'].fillna('undocked',inplace=True)
###Output
/usr/local/lib/python3.7/dist-packages/pandas/core/series.py:4536: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
downcast=downcast,
###Markdown
We add the `start_date`, `start_time`, `start_hour`, `end_date`, and `end_time` variables to each ride. We then create a `duration` variable which indicates the duration of each ride.
###Code
rides['start_date'] = pd.to_datetime(rides['started_at']).dt.date
rides['start_time'] = pd.to_datetime(rides['started_at']).dt.time
rides.loc[:,'started_at']= pd.to_datetime(rides.loc[:,'started_at'])
rides.loc[:,'start_hour'] = rides.loc[:,'started_at'].dt.hour #to convert it to hour format
rides['end_date'] = pd.to_datetime(rides['ended_at']).dt.date
rides['end_time'] = pd.to_datetime(rides['ended_at']).dt.time
rides['duration'] = pd.to_datetime(rides['ended_at']) - pd.to_datetime(rides['started_at'])
###Output
_____no_output_____
###Markdown
We remove outliers based on `duration` (see report for details).
###Code
quantile_05 = rides['duration'].quantile(0.05)
quantile_999 = rides['duration'].quantile(0.999)
rides = rides[(rides['duration'] > quantile_05) & ((rides['duration'] < quantile_999))]
rides['duration'].describe()
###Output
_____no_output_____
###Markdown
We create `duration_sec` which expresses the duration of the ride in seconds.
###Code
def to_seconds(x):
return x.total_seconds()
rides.loc[:,'duration_sec'] = rides.loc[:,'duration'].apply(to_seconds)
###Output
_____no_output_____
###Markdown
We create a `distance` variable that computes the distance between the starting and the ending point.
###Code
l = []
for i in range(len(rides)):
    # geopy's geodesic() expects (latitude, longitude) ordering
    coords_1 = (rides["start_lat"].iloc[i], rides["start_lng"].iloc[i])
    coords_2 = (rides["end_lat"].iloc[i], rides["end_lng"].iloc[i])
l.append(geopy.distance.geodesic(coords_1, coords_2).km)
rides['distance']=l
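# A vectorised alternative to the row-by-row geopy loop above (a sketch, not used for the
# results in this notebook): the haversine formula approximates the distance on a sphere
# and can be evaluated on whole columns at once.
def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    # convert decimal degrees to radians and apply the haversine formula
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = np.sin((lat2 - lat1) / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2
    return 2 * radius_km * np.arcsin(np.sqrt(a))
# e.g. rides['distance'] = haversine_km(rides['start_lat'], rides['start_lng'],
#                                       rides['end_lat'], rides['end_lng'])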
###Output
_____no_output_____
###Markdown
We set rides that start and end at the same station as having `distance` 0.
###Code
rides.loc [((rides['start_station_id']==rides['end_station_id'])&(rides['start_station_id']!= 'undocked')), 'distance'] = 0
###Output
_____no_output_____
###Markdown
We encode the categorical variables.
###Code
rides["rideable_type_enc"] = le.fit_transform(rides["rideable_type"])
rides["member_casual_enc"] = le.fit_transform(rides["member_casual"])
###Output
_____no_output_____
###Markdown
We create the `start_hour_merge` variable which contains the starting hour of the ride in the appropriate format so as to perform the merge with the weather dataset.
###Code
rides['start_hour_merge']= rides['start_time'].apply(lambda x: x.replace(minute=0, second=0))
###Output
_____no_output_____
###Markdown
We create the `weekday` variable which identifies the day of the week in which the ride was recorded.
###Code
rides.loc[:,"weekday"]=rides.loc[:,"started_at"].dt.dayofweek
###Output
_____no_output_____
###Markdown
We copy the rides dataset to data.
###Code
data=rides.copy()
###Output
_____no_output_____
###Markdown
First Clustering We merge the rides and the weather datasets.
###Code
data_weather=pd.merge(data,weather,how = "left", right_on=["date","time"], left_on=["start_date","start_hour_merge"])
###Output
_____no_output_____
###Markdown
We filter out all the NAs (i.e. we keep only the data referring to February, March, and April 2021).
###Code
data_weather=data_weather.dropna()
###Output
_____no_output_____
###Markdown
We perform and plot the elbow method using distortion to compute the optimal number of clusters.
###Code
distortions = []
mapping1 = {}
K = range(1, 15)
scaler=sklearn.preprocessing.MinMaxScaler()
data_cluster_weather=data_weather.loc[:,["start_hour","weekday","member_casual_enc","Precipitation","Snow","start_lat","start_lng","end_lat","end_lng"]]
data_cluster_weather.loc[:,["start_hour","weekday","Precipitation","Snow","start_lat","start_lng","end_lat","end_lng"]]=scaler.fit_transform(data_cluster_weather.loc[:,["start_hour","weekday","Precipitation","Snow","start_lat","start_lng","end_lat","end_lng"]])
data_cluster_weather.loc[:,["start_lat","start_lng","end_lat","end_lng"]]=data_cluster_weather.loc[:,["start_lat","start_lng","end_lat","end_lng"]]/2
for k in K:
kmeanModel = sklearn.cluster.KMeans(n_clusters=k,random_state=12910).fit(data_cluster_weather)
kmeanModel.fit(data_cluster_weather)
distortions.append(sum(np.min(cdist(data_cluster_weather, kmeanModel.cluster_centers_,
'euclidean'), axis=1)) / data_cluster_weather.shape[0])
mapping1[k] = sum(np.min(cdist(data_cluster_weather, kmeanModel.cluster_centers_,
'euclidean'), axis=1)) / data_cluster_weather.shape[0]
for key, val in mapping1.items():
print(f'{key} : {val}')
plt.plot(K, distortions, 'bx-')
plt.xlabel('Values of K')
plt.ylabel('Distortion')
plt.title('The Elbow Method using Distortion')
plt.show()
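# Note: scikit-learn also exposes the within-cluster sum of squared distances directly as
# kmeanModel.inertia_, a closely related (and cheaper) elbow criterion than the
# cdist-based mean distance computed above.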
###Output
1 : 4.154161741480309
2 : 2.9299357937538075
3 : 2.4878057609363524
4 : 2.2142673809997473
5 : 1.947113231824118
6 : 1.775644843388643
7 : 1.6829157190192687
8 : 1.572089182790981
9 : 1.4934567117475435
10 : 1.4270188241082706
11 : 1.3605263797778107
12 : 1.3179811214182828
13 : 1.2668708121675392
14 : 1.2334191561513348
###Markdown
We copy and scale the data as explained in the report.
###Code
data_cluster_weather=data_weather.loc[:,["start_hour","weekday","member_casual_enc","Precipitation","Snow","start_lat","start_lng","end_lat","end_lng"]].copy()
data_cluster_weather.loc[:,["start_hour","weekday","Precipitation","Snow","start_lat","start_lng","end_lat","end_lng"]]=scaler.fit_transform(data_cluster_weather.loc[:,["start_hour","weekday","Precipitation","Snow","start_lat","start_lng","end_lat","end_lng"]])
data_cluster_weather.loc[:,["start_lat","start_lng","end_lat","end_lng"]]=data_cluster_weather.loc[:,["start_lat","start_lng","end_lat","end_lng"]]/2
###Output
_____no_output_____
###Markdown
We run our clustering and assign clusters to each observation.
###Code
kmeans=sklearn.cluster.KMeans(n_clusters=8, random_state=15)
kmeans.fit(data_cluster_weather)
data_cluster_weather.loc[:,"cluster"]=kmeans.predict(data_cluster_weather)
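# (optional check, not run here) sklearn.metrics.silhouette_score on a random sample of
# data_cluster_weather would give a complementary view of how well separated the 8 clusters are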
###Output
_____no_output_____
###Markdown
We reverse the scaling.
###Code
data_cluster_weather.loc[:,["start_hour","weekday","Precipitation","Snow","start_lat","start_lng","end_lat","end_lng"]]=scaler.inverse_transform(data_cluster_weather.loc[:,["start_hour","weekday","Precipitation","Snow","start_lat","start_lng","end_lat","end_lng"]])
###Output
_____no_output_____
###Markdown
We look at the mean of the variables of interest.
###Code
data_cluster_weather.groupby("cluster").aggregate(["mean", "count"])
###Output
_____no_output_____
###Markdown
Second clustering We again perform and plot the elbow method using distortion to check the optimal number of clusters.
###Code
distortions = []
mapping1 = {}
K = range(1, 15)
data_cluster_noweather=data.loc[:,["weekday","start_hour", "start_lat","start_lng","end_lat","end_lng"]].copy()
data_cluster_noweather.loc[:,["weekday","start_hour","start_lat","start_lng","end_lat","end_lng"]]=scaler.fit_transform(data_cluster_noweather.loc[:,["weekday","start_hour","start_lat","start_lng","end_lat","end_lng"]])
data_cluster_noweather.loc[:,["start_lat","start_lng","end_lat","end_lng"]]=data_cluster_noweather.loc[:,["start_lat","start_lng","end_lat","end_lng"]]/2
for k in K:
kmeanModel = sklearn.cluster.KMeans(n_clusters=k,random_state=12).fit(data_cluster_noweather)
kmeanModel.fit(data_cluster_noweather)
distortions.append(sum(np.min(cdist(data_cluster_noweather, kmeanModel.cluster_centers_,
'euclidean'), axis=1)) / data_cluster_noweather.shape[0])
mapping1[k] = sum(np.min(cdist(data_cluster_noweather, kmeanModel.cluster_centers_,
'euclidean'), axis=1)) / data_cluster_noweather.shape[0]
import matplotlib.pyplot as plt
for key, val in mapping1.items():
print(f'{key} : {val}')
plt.plot(K, distortions, 'bx-')
plt.xlabel('Values of K')
plt.ylabel('Distortion')
plt.title('The Elbow Method using Distortion')
plt.show()
###Output
1 : 4.51058825552743
2 : 3.0345232132076787
3 : 2.55683893601383
4 : 2.381418215590355
5 : 2.078784308352214
6 : 1.843953912118703
7 : 1.6466610516263667
8 : 1.5296930120995937
9 : 1.4366349805104044
10 : 1.3768086631327932
11 : 1.2832359171114354
12 : 1.2381113623715962
13 : 1.1859287304351782
14 : 1.1474633689388296
###Markdown
We follow a similar procedure to before and run our second clustering.
###Code
data_cluster_noweather=data.loc[:,["weekday","start_hour", "start_lat","start_lng","end_lat","end_lng"]].copy()
data_cluster_noweather.loc[:,["weekday","start_hour","start_lat","start_lng","end_lat","end_lng"]]=scaler.fit_transform(data_cluster_noweather.loc[:,["weekday","start_hour","start_lat","start_lng","end_lat","end_lng"]])
kmeans=sklearn.cluster.KMeans(n_clusters=8, random_state=20)
kmeans.fit(data_cluster_noweather)
data_cluster_noweather["cluster"]=kmeans.predict(data_cluster_noweather)
data_cluster_noweather.loc[:,["weekday","start_hour","start_lat","start_lng","end_lat","end_lng"]]=scaler.inverse_transform(data_cluster_noweather.loc[:,["weekday","start_hour","start_lat","start_lng","end_lat","end_lng"]])
###Output
_____no_output_____
###Markdown
We look at the summary statistics, by cluster, of `start_hour` and `weekday`.
###Code
data_cluster_noweather.groupby("cluster").describe()[["start_hour","weekday"]]
###Output
_____no_output_____
###Markdown
We look at the summary statistics, by cluster, of `member_casual_enc`.
###Code
data_cluster_noweather_done=pd.concat([data_cluster_noweather[["cluster"]],data],axis=1)
data_cluster_noweather_done.groupby("cluster").describe()["member_casual_enc"]
###Output
_____no_output_____
###Markdown
Plotting Functions
###Code
def zoom_center(lons: tuple=None, lats: tuple=None, lonlats: tuple=None,
format: str='lonlat', projection: str='mercator',
width_to_height: float=2.0) -> (float, dict):
"""Finds optimal zoom and centering for a plotly mapbox.
Must be passed (lons & lats) or lonlats.
Temporary solution awaiting official implementation, see:
https://github.com/plotly/plotly.js/issues/3434
Parameters
--------
lons: tuple, optional, longitude component of each location
lats: tuple, optional, latitude component of each location
lonlats: tuple, optional, gps locations
format: str, specifying the order of longitud and latitude dimensions,
expected values: 'lonlat' or 'latlon', only used if passed lonlats
projection: str, only accepting 'mercator' at the moment,
raises `NotImplementedError` if other is passed
width_to_height: float, expected ratio of final graph's with to height,
used to select the constrained axis.
Returns
--------
zoom: float, from 1 to 20
center: dict, gps position with 'lon' and 'lat' keys
>>> print(zoom_center((-109.031387, -103.385460),
... (25.587101, 31.784620)))
(5.75, {'lon': -106.208423, 'lat': 28.685861})
"""
if lons is None and lats is None:
if isinstance(lonlats, tuple):
lons, lats = zip(*lonlats)
else:
raise ValueError(
'Must pass lons & lats or lonlats'
)
maxlon, minlon = max(lons), min(lons)
maxlat, minlat = max(lats), min(lats)
center = {
'lon': round((maxlon + minlon) / 2, 6),
'lat': round((maxlat + minlat) / 2, 6)
}
# longitudinal range by zoom level (20 to 1)
# in degrees, if centered at equator
lon_zoom_range = np.array([
0.0007, 0.0014, 0.003, 0.006, 0.012, 0.024, 0.048, 0.096,
0.192, 0.3712, 0.768, 1.536, 3.072, 6.144, 11.8784, 23.7568,
47.5136, 98.304, 190.0544, 360.0
])
if projection == 'mercator':
margin = 1.2
height = (maxlat - minlat) * margin * width_to_height
width = (maxlon - minlon) * margin
lon_zoom = np.interp(width , lon_zoom_range, range(20, 0, -1))
lat_zoom = np.interp(height, lon_zoom_range, range(20, 0, -1))
zoom = round(min(lon_zoom, lat_zoom), 2)
else:
raise NotImplementedError(
f'{projection} projection is not implemented'
)
return zoom, center
def plot_start_points(data):
tstart=list()
for i in list(data["start_station_name"].unique()):
tstart.append([i,np.sum(data["start_station_name"]==i)])
datastat=pd.DataFrame()
datastat["start_long"],datastat["start_lat"], datastat["start_count"], datastat["start_stat_name"]=0,0,0,0
for i in range(len(tstart)):
datastat.loc[i] = [data[data["start_station_name"]==tstart[i][0]]["start_lng"].mean(),
data[data["start_station_name"]==tstart[i][0]]["start_lat"].mean(),
tstart[i][1],
tstart[i][0],
]
datanonstat = pd.DataFrame({'start_long': data[data["start_station_name"].isnull()]["start_lng"],
'start_lat': data[data["start_station_name"].isnull()]["start_lat"],
'start_count': np.repeat(1,len(data[data["start_station_name"].isnull()]["start_lng"])),
"start_stat_name":np.repeat("Undocked",len(data[data["start_station_name"].isnull()]["start_lng"]))})
datastat=datastat.append(datanonstat, ignore_index = True)
geometry = [Point(xy) for xy in zip(datastat['start_long'], datastat['start_lat'])]
gdf = GeoDataFrame(datastat, geometry=geometry)
fig = px.scatter_mapbox(gdf,lat=gdf.geometry.y,lon=gdf.geometry.x, size="start_count",hover_name="start_stat_name", color="start_count")
fig.update_layout(mapbox_style="open-street-map", autosize=False,
width=750,
height=750)
return fig, datastat
def plot_end_points(data):
tend=list()
for i in list(data["end_station_name"].unique()):
tend.append([i,np.sum(data["end_station_name"]==i)])
datastatend=pd.DataFrame()
datastatend["end_long"],datastatend["end_lat"], datastatend["end_count"], datastatend["end_stat_name"]=0,0,0,0
for i in range(len(tend)):
datastatend.loc[i]=[data[data["end_station_name"]==tend[i][0]]["end_lng"].mean(),
data[data["end_station_name"]==tend[i][0]]["end_lat"].mean(),
tend[i][1],
tend[i][0]]
datanonstatend = pd.DataFrame({'end_long': data[data["end_station_name"].isnull()]["end_lng"],
'end_lat': data[data["end_station_name"].isnull()]["end_lat"],
'end_count': np.repeat(1,len(data[data["end_station_name"].isnull()]["end_lng"])),
"end_stat_name":np.repeat("Undocked",len(data[data["end_station_name"].isnull()]["end_lng"]))})
datastatend=datastatend.append(datanonstatend, ignore_index = True)
geometry = [Point(xy) for xy in zip(datastatend['end_long'], datastatend['end_lat'])]
gdf = GeoDataFrame(datastatend, geometry=geometry)
fig = px.scatter_mapbox(gdf,lat=gdf.geometry.y,lon=gdf.geometry.x, size="end_count",hover_name="end_stat_name", color="end_count")
fig.update_layout(mapbox_style="open-street-map", autosize=False,
width=750,
height=750)
return fig, datastatend
def plot_route_heatmap(data,mostpopn=1000,size_lines=50):
#The function returns a plot with the mostpopn (1000 by default) popular rides (note these will only include rides from one station to another)
tstart=list()
for i in list(data["start_station_name"].unique()):
tstart.append([i,np.sum(data["start_station_name"]==i)])
tend=list()
for i in list(data["end_station_name"].unique()):
tend.append([i,np.sum(data["end_station_name"]==i)])
tfinal=list(product(np.array(tstart)[:,0], np.array(tend)[:,0]))
tfinal=pd.DataFrame(tfinal)
tfinalap=list()
for i in tstart:
for j in tend:
tfinalap.append(sum((data["start_station_name"]==i[0]) & (data["end_station_name"]==j[0])))
tfinal["2"]=tfinalap
ttouse=tfinal.sort_values(by="2",ascending=False).iloc[:mostpopn]
ttouse=pd.concat([pd.DataFrame(),ttouse], ignore_index = True)
ttouse.columns=["Start","End","Count"]
datastatfin=pd.DataFrame()
datastatfin["start_lat"], datastatfin["start_long"],datastatfin["start_stat"],datastatfin["end_lat"],datastatfin["end_long"],datastatfin["end_stat"], datastatfin["count"]=0,0,0,0,0,0,0
for i in range(len(ttouse)):
datastatfin.loc[i]=[data[data["start_station_name"]==ttouse.iloc[i][0]]["start_lat"].mean(),
data[data["start_station_name"]==ttouse.iloc[i][0]]["start_lng"].mean(),
ttouse.iloc[i][0],
data[data["end_station_name"]==ttouse.iloc[i][1]]["end_lat"].mean(),
data[data["end_station_name"]==ttouse.iloc[i][1]]["end_lng"].mean(),
ttouse.iloc[i][1],
ttouse.iloc[i][2]]
topstations=np.unique(np.array([ttouse["Start"],ttouse["End"]]).reshape(2*mostpopn))
datastations=pd.DataFrame()
datastations["station"],datastations["long"],datastations["lat"]=0,0,0
for i in range(len(topstations)):
datastations.loc[i]=[topstations[i],
data[data["start_station_name"]==topstations[i]]["start_lng"].mean(),
data[data["start_station_name"]==topstations[i]]["start_lat"].mean()]
zoom, center = zoom_center(
lons=datastations['long'],
lats=datastations['lat']
)
fig= go.Figure()
for i in range(len(datastatfin)):
fig.add_trace(
go.Scattermapbox(
lon = [datastatfin['start_long'][i], datastatfin['end_long'][i]],
lat = [datastatfin['start_lat'][i], datastatfin['end_lat'][i]],
mode = 'lines',
line = dict(width = float(datastatfin['count'][i]) / float(datastatfin['count'].max())*size_lines,color = 'red'),
opacity = float(datastatfin['count'][i]) / float(datastatfin['count'].max())
)
)
fig.add_trace(go.Scattermapbox(
lon = datastations['long'],
lat = datastations['lat'],
hoverinfo = 'text',
text = datastations['station'],
mode = 'markers',
marker = dict(
size = 4,
color = 'rgb(255, 0, 0)'),
line = dict(
width = 3,
color = 'rgba(68, 68, 68, 0)'
)
))
fig.update_layout(mapbox_style="open-street-map", width=1000,height=1000)
fig.update_mapboxes(center=center, zoom=zoom)
return fig,datastations, datastatfin
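# A faster way to build the per-route counts than the nested station loop above (a sketch;
# like that loop, it only covers docked-to-docked rides, and it is not used for the figures
# in this notebook):
def route_counts(data, mostpopn=1000):
    counts = (data.dropna(subset=["start_station_name", "end_station_name"])
                  .groupby(["start_station_name", "end_station_name"])
                  .size()
                  .reset_index(name="Count")
                  .sort_values("Count", ascending=False))
    return counts.head(mostpopn)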
###Output
_____no_output_____
###Markdown
Clustering Plots We prepare the data to plot the results of our clustering.
###Code
plot_data=data_cluster_noweather_done.sample(50000,axis=0,random_state=2679)
plot_data_cluster0=data_cluster_noweather_done[data_cluster_noweather_done["cluster"]==0].sample(50000,random_state=345)
###Output
_____no_output_____
###Markdown
We plot the starting stations in both the whole dataset and our cluster of interest.
###Code
fig1,_=plot_start_points(plot_data)
fig2,_=plot_start_points(plot_data_cluster0)
fig1.update_layout(title_text='Starting stations in the whole dataset')
fig2.update_layout(title_text='Starting stations in cluster 0')
fig1.show()
fig2.show()
###Output
_____no_output_____
###Markdown
We plot the most popular routes in both the whole dataset and our cluster of interest.
###Code
plot_data_reduced=data_cluster_noweather_done.sample(20000,random_state=3111)
plot_data_cluster0_reduced=data_cluster_noweather_done[data_cluster_noweather_done["cluster"]==0].sample(20000,random_state=3111)
fig3,_,_=plot_route_heatmap(plot_data_reduced,500,30)
fig4,_,_=plot_route_heatmap(plot_data_cluster0_reduced,500,30)
fig3.show()
fig4.show()
###Output
_____no_output_____
###Markdown
Nightlife spots clustering We import the dataset on which to perform this clustering and drop all the NAs.
###Code
locations = pd.read_excel("/content/locations_withLL_complete.xlsx")
locations.dropna(inplace=True)
###Output
_____no_output_____
###Markdown
We truncate latitude and longitude so that they have the consistent decimal-degree format (four decimal places) that the rest of the analysis expects.
###Code
import math
locations = locations[["latitude","longitude","Establishment Type","Status"]]
def truncate(x):
a = str(x)
b = a[:6]
n = int(b)/10000
return n
def truncate_long(x):
a = str(x)
b = a[1:7]
n = int(b)/10000
return -n
locations['lat']= locations['latitude'].apply(truncate)
locations['long']= locations['longitude'].apply(truncate_long)
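# Note: the string slicing above assumes the Excel file stores coordinates as digit strings
# without a decimal point (e.g. "389072..." -> 38.9072 and "-770367..." -> -77.0367),
# keeping six leading digits and scaling by 1/10000 to get four decimal places.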
###Output
_____no_output_____
###Markdown
We keep only the locations for which the status of the license is Active, rename Nightclub to Club, and filter out all the types of establishment we won't consider in our analysis.
###Code
locations = locations[locations['Status'] == 'Active']
del locations['Status']
locations['Establishment Type'].unique()
del locations['latitude']
del locations['longitude']
locations["Establishment Type"].replace({"Nightclub":"Club"},inplace=True)
locations_all = locations[locations['Establishment Type'].isin(['Club','Tavern','Nightclub','Hotel','Restaurant'])].copy()
locations = locations[locations['Establishment Type'].isin(['Club','Tavern','Nightclub'])]
###Output
_____no_output_____
###Markdown
We perform and plot the elbow method using distortion.
###Code
distortions = []
mapping1 = {}
K = range(1, 15)
locations[["lat_s","long_s"]] = scaler.fit_transform(locations[["lat","long"]])
data = locations[["lat_s","long_s"]].copy()
for k in K:
kmeanModel = sklearn.cluster.KMeans(n_clusters=k,random_state=12).fit(data)
kmeanModel.fit(data)
distortions.append(sum(np.min(cdist(data, kmeanModel.cluster_centers_,
'euclidean'), axis=1)) / data.shape[0])
mapping1[k] = sum(np.min(cdist(data, kmeanModel.cluster_centers_,
'euclidean'), axis=1)) / data.shape[0]
import matplotlib.pyplot as plt
for key, val in mapping1.items():
print(f'{key} : {val}')
plt.plot(K, distortions, 'bx-')
plt.xlabel('Values of K')
plt.ylabel('Distortion')
plt.title('The Elbow Method using Distortion')
plt.show()
###Output
1 : 0.18867129388607146
2 : 0.13526779318458065
3 : 0.11172448225725791
4 : 0.0987944040673836
5 : 0.08544959480538329
6 : 0.07962745681587761
7 : 0.07318616824837801
8 : 0.06596561841989695
9 : 0.06238003874093469
10 : 0.057916362330781436
11 : 0.056622541652556556
12 : 0.0519398807478062
13 : 0.04794351865128075
14 : 0.04631286044735966
###Markdown
We perform the k-means clustering.
###Code
kmeanModel = sklearn.cluster.KMeans(n_clusters=4,random_state=12).fit(data)
kmeanModel.fit(data)
locations["Cluster"] = kmeanModel.fit_predict(locations[["lat_s","long_s"]])
###Output
_____no_output_____
###Markdown
We look at where the locations are, broken down by establishment type.
###Code
def plot_est(data):
lst_colors = ["#092242","#f7ac2d","#609a92","#ed0f51"]
lst_elements = sorted(list(data["Establishment Type"].unique()))
data["Color"] = data["Establishment Type"].apply(lambda x: lst_colors[lst_elements.index(x)])
fig= go.Figure()
fig.update_layout(mapbox_style="open-street-map", width=1000,height=1000,showlegend=True)
zoom, center = zoom_center(
lons=data['long'],
lats=data['lat']
)
fig.add_trace(go.Scattermapbox(
lon = data[data["Establishment Type"]=="Hotel"]['long'],
lat = data[data["Establishment Type"]=="Hotel"]['lat'],
hoverinfo = 'text',
text = data[data["Establishment Type"]=="Hotel"]["Establishment Type"],
mode = 'markers',
name="Hotel",
marker = dict(
size = 11,
color = data[data["Establishment Type"]=="Hotel"]['Color'])
))
fig.add_trace(go.Scattermapbox(
lon = data[data["Establishment Type"]=="Restaurant"]['long'],
lat = data[data["Establishment Type"]=="Restaurant"]['lat'],
hoverinfo = 'text',
text = data[data["Establishment Type"]=="Restaurant"]["Establishment Type"],
mode = 'markers',
name = "Restaurant",
marker = dict(
size = 11,
color = data[data["Establishment Type"]=="Restaurant"]['Color'])
))
fig.add_trace(go.Scattermapbox(
lon = data[data["Establishment Type"]=="Club"]['long'],
lat = data[data["Establishment Type"]=="Club"]['lat'],
hoverinfo = 'text',
text = data[data["Establishment Type"]=="Club"]["Establishment Type"],
mode = 'markers',
name ="Club",
marker = dict(
size = 11,
color = data[data["Establishment Type"]=="Club"]['Color'])
))
fig.add_trace(go.Scattermapbox(
lon = data[data["Establishment Type"]=="Tavern"]['long'],
lat = data[data["Establishment Type"]=="Tavern"]['lat'],
hoverinfo = 'text',
text = data[data["Establishment Type"]=="Tavern"]["Establishment Type"],
name = "Tavern",
mode = 'markers',
marker = dict(
size = 11,
color = data[data["Establishment Type"]=="Tavern"]['Color'])
))
fig.update_mapboxes(center=center, zoom=zoom)
#color =data["Color"]
return fig
plot_est(locations_all).show()
###Output
_____no_output_____
###Markdown
We look at the results of the clustering.
###Code
def plot(data):
lst_colors = ["#092242","#f7ac2d","#609a92","#ed0f51"]
lst_elements = sorted(list(data["Cluster"].unique()))
data["Color"] = data["Cluster"].apply(lambda x: lst_colors[lst_elements.index(x)])
fig= go.Figure()
fig.update_layout(mapbox_style="open-street-map", width=1000,height=1000)
fig.add_trace(go.Scattermapbox(
lon = data['long'],
lat = data['lat'],
hoverinfo = 'text',
text = data["Cluster"],
mode = 'markers',
marker = dict(
size = 9,
color =data["Color"])
))
zoom, center = zoom_center(
lons=data['long'],
lats=data['lat']
)
fig.update_mapboxes(center=center, zoom=zoom)
return fig
plot(locations).show()
###Output
_____no_output_____
###Markdown
We look at how clubs and taverns are spread among clusters.
###Code
sns_plot = sns.histplot(data=locations, x="Establishment Type", hue="Cluster", multiple="dodge", shrink=.8,palette= ["#092242","#f7ac2d","#609a92","#ed0f51"])
plt.legend(title='Cluster', loc='upper left', labels=['North','East','SouthEast','Center (Main Cluster)'])
plt.show()
print("Center (Main Cluster) composition:")
locations[locations["Cluster"]==0]["Establishment Type"].value_counts(normalize=True)
print("SouthEast cluster composition:")
locations[locations["Cluster"]==1]["Establishment Type"].value_counts(normalize=True)
print("East cluster composition:")
locations[locations["Cluster"]==2]["Establishment Type"].value_counts(normalize=True)
print("North cluster composition:")
locations[locations["Cluster"]==3]["Establishment Type"].value_counts(normalize=True)
###Output
North cluster composition:
|
phd-thesis/nilmtk/.ipynb_checkpoints/comparing_nilm_algorithms-checkpoint.ipynb | ###Markdown
Sample code for Comparing NILM algorithms
###Code
from __future__ import print_function, division
import time
from matplotlib import rcParams
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from six import iteritems
%matplotlib inline
rcParams['figure.figsize'] = (13, 6)
from nilmtk import DataSet, TimeFrame, MeterGroup, HDFDataStore
from nilmtk.disaggregate import CombinatorialOptimisation, FHMM
import nilmtk.utils
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
Dividing data into train and test set
###Code
train = DataSet('../datasets/REDD/low_freq.h5')
test = DataSet('../datasets/REDD/low_freq.h5')
###Output
_____no_output_____
###Markdown
Let us use building 1 for demo purposes
###Code
building = 1
###Output
_____no_output_____
###Markdown
Let's split data at April 30th
###Code
# The dates are interpreted by Pandas, prefer using ISO dates (yyyy-mm-dd)
train.set_window(end="2011-04-30")
test.set_window(start="2011-04-30")
train_elec = train.buildings[1].elec
test_elec = test.buildings[1].elec
###Output
_____no_output_____
###Markdown
Selecting top-5 appliances
###Code
top_5_train_elec = train_elec.submeters().select_top_k(k=5)
###Output
15/16 MeterGroup(meters==19, building=1, dataset='REDD', appliances=[Appliance(type='unknown', instance=2)])
ElecMeter(instance=3, building=1, dataset='REDD', appliances=[Appliance(type='electric oven', instance=1)])
ElecMeter(instance=4, building=1, dataset='REDD', appliances=[Appliance(type='electric oven', instance=1)])
16/16 Calculating total_energy for ElecMeterID(instance=4, building=1, dataset='REDD') ...
ElecMeter(instance=10, building=1, dataset='REDD', appliances=[Appliance(type='washer dryer', instance=1)])
ElecMeter(instance=20, building=1, dataset='REDD', appliances=[Appliance(type='washer dryer', instance=1)])
Calculating total_energy for ElecMeterID(instance=20, building=1, dataset='REDD') ...
###Markdown
Training and disaggregation
###Code
def predict(clf, test_elec, sample_period, timezone):
pred = {}
gt= {}
for i, chunk in enumerate(test_elec.mains().load(sample_period=sample_period)):
chunk_drop_na = chunk.dropna()
pred[i] = clf.disaggregate_chunk(chunk_drop_na)
gt[i]={}
for meter in test_elec.submeters().meters:
# Only use the meters that we trained on (this saves time!)
gt[i][meter] = next(meter.load(sample_period=sample_period))
gt[i] = pd.DataFrame({k:v.squeeze() for k,v in iteritems(gt[i]) if len(v)}, index=next(iter(gt[i].values())).index).dropna()
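# gt[i] lines up every submeter's readings for this chunk on a shared timestamp index
# (meters that returned no data are skipped) so it can later be compared against pred[i]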
# If everything can fit in memory
gt_overall = pd.concat(gt)
gt_overall.index = gt_overall.index.droplevel()
pred_overall = pd.concat(pred)
pred_overall.index = pred_overall.index.droplevel()
# Having the same order of columns
gt_overall = gt_overall[pred_overall.columns]
#Intersection of index
gt_index_utc = gt_overall.index.tz_convert("UTC")
pred_index_utc = pred_overall.index.tz_convert("UTC")
common_index_utc = gt_index_utc.intersection(pred_index_utc)
common_index_local = common_index_utc.tz_convert(timezone)
gt_overall = gt_overall.loc[common_index_local]
pred_overall = pred_overall.loc[common_index_local]
appliance_labels = [m for m in gt_overall.columns.values]
gt_overall.columns = appliance_labels
pred_overall.columns = appliance_labels
return gt_overall, pred_overall
# Since the methods use randomized initialization, let's fix a seed here
# to make this notebook reproducible
import numpy.random
numpy.random.seed(42)
classifiers = {'CO':CombinatorialOptimisation(), 'FHMM':FHMM()}
predictions = {}
sample_period = 120
for clf_name, clf in classifiers.items():
print("*"*20)
print(clf_name)
print("*" *20)
clf.train(top_5_train_elec, sample_period=sample_period)
gt, predictions[clf_name] = predict(clf, test_elec, 120, train.metadata['timezone'])
rmse = {}
for clf_name in classifiers.keys():
rmse[clf_name] = nilmtk.utils.compute_rmse(gt, predictions[clf_name], pretty=True)
rmse = pd.DataFrame(rmse)
rmse
###Output
_____no_output_____ |
part_1_gain_customer_insights_Amazon_Aurora_setup.ipynb | ###Markdown
Gain customer insights, Part 1. Connect to Amazon Aurora MySQL database, data loading and extraction ---- Table of contents Section 1. Setup 1. [Prepare the Amazon Aurora MySQL Database](Prepare-the-Amazon-Aurora-MySQL-Database)2. [Download the Customer Churn Data](Download-the-Customer-Churn-Data)3. [Create Database, Table, Load Data in Amazon Aurora MySQL](Create-Database,-Table,-Load-Data-in-Amazon-Aurora-MySQL)4. [Load Customer Messages to Database](Load-Customer-Messages-to-Database) Section 2. Export data from Amazon Aurora to S31. [Export data from Amazon Aurora to S3](Section-2.-Export-data-from-Amazon-Aurora-to-S3-for-use-in-Machine-Learning)----Begin by upgrading pip. To connect to the database we will use [mysql.connector](https://dev.mysql.com/doc/connector-python/en/) module. MySQL Connector/Python enables Python programs to access MySQL databases.
###Code
import sys
# upgrade pip
!{sys.executable} -m pip install --upgrade pip
# install mysql.connector
!{sys.executable} -m pip install mysql.connector
###Output
_____no_output_____
###Markdown
For this use case, we've created the S3 bucket and appropriate IAM roles for you during the launch of the AWS CloudFormation template. The bucket name was saved in a parameter file called "cloudformation_values.py" during creation of the notebook instance, along with the DB secret name and ML endpoint name.
###Code
# import installed module
import mysql.connector as mysql
import json
import os
import pandas as pd
import numpy as np
import boto3
# to write data stream to S3
from io import StringIO
# import variables with values about the secret, region, s3 bucket, sagemaker endpoint
# this file is generated during the creation of the SageMaker notebook instance
import cloudformation_values as cfvalues
###Output
_____no_output_____
###Markdown
Next, we set up some parameters we'll use in the rest of the notebook.
###Code
s3 = boto3.resource('s3')
# get the session information
session = boto3.Session()
# get the region
region = cfvalues.REGION
# S3 bucket was created during the launch of the CloudFormation stack
bucket_name = cfvalues.S3BUCKET
prefix = 'sagemaker/xgboost-churn'
source_data = 'source_churn_data.csv'
source_data_file_name = prefix + '/' + source_data
ml_data = 'aurora/churn_data'
# AWS Secrets stores our database credentials.
db_secret_name = cfvalues.DBSECRET
###Output
_____no_output_____
###Markdown
Prepare the Amazon Aurora MySQL Database We'll create some customer data in our Amazon Aurora database, for use during the rest of our scenario. To do so, we'll take some publicly available "customer data", and load it into our database. We'll get the data from the Internet, write it out to S3, then load it into Aurora from S3. That will get us to the starting point of our scenario. Here we are using administrative credentials for connecting to the database. The credentials were created during the database creation, and are stored in AWS Secrets Manager. We'll retrieve the secret, extract the credentials and the database endpoint name, and use them to connect to the database.
###Code
# Get the secret from AWS Secrets manager. Extract user, password, host.
from utilities import get_secret
get_secret_value_response = get_secret(db_secret_name, region)
creds = json.loads(get_secret_value_response['SecretString'])
db_user = creds['username']
db_password = creds['password']
# Writer endpoint
db_host = creds['host']
# create connection to the database
cnx = mysql.connect(user = db_user,
password = db_password,
host = db_host)
# create cursor (allows traversal over the rows in the result set)
dbcursor = cnx.cursor()
###Output
_____no_output_____
###Markdown
Demonstrate the connection and functionality by showing the existing databases:
###Code
# send a query to show all existing databases and loop over the results to print:
dbcursor.execute("SHOW DATABASES")
for x in dbcursor:
print(x)
###Output
_____no_output_____
###Markdown
To disconnect from the database:
###Code
cnx.close()
###Output
_____no_output_____
###Markdown
Download the Customer Churn Data The dataset we use is publicly available and is mentioned in the book [Discovering Knowledge in Data](https://www.amazon.com/dp/0470908742/) by Daniel T. Larose. It is attributed by the author to the University of California Irvine Repository of Machine Learning Datasets. The content of each column in the data is described in another notebook [here](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_applying_machine_learning/xgboost_customer_churn/xgboost_customer_churn.ipynb).
###Code
if not os.path.exists("DKD2e_data_sets.zip"):
!wget http://dataminingconsultant.com/DKD2e_data_sets.zip
!unzip -o DKD2e_data_sets.zip
else:
print("File has been already downloaded")
# read the customer churn data to pandas DataFrame
churn = pd.read_csv('./Data sets/churn.txt')
# review the top rows
churn.head()
# get number of rows and columns in the data
churn.shape
###Output
_____no_output_____
###Markdown
We can see that the column names in this source data set have mixed case, spaces and special characters - all items that can easily cause grief in databases and when transferring data between formats and systems. To avoid these challenges, we'll simplify the column names before loading the data to Amazon Aurora.
###Code
new_columns = ["state",
"acc_length",
"area_code",
"phone",
"int_plan",
"vmail_plan",
"vmail_msg",
"day_mins",
"day_calls",
"day_charge",
"eve_mins",
"eve_calls",
"eve_charge",
"night_mins",
"night_calls",
"night_charge",
"int_mins",
"int_calls",
"int_charge",
"cust_service_calls",
"churn"]
# create a dictionary where keys are the old column names and the values are the new column names
renaming_dict = dict(list(zip(list(churn.columns), new_columns)))
# rename the columns
churn = churn.rename(columns = renaming_dict)
churn.head()
###Output
_____no_output_____
###Markdown
The resulting data frame looks much better!Now we'll write our sample data out to S3. We'll then bulk load the data from S3 directly into Amazon Aurora.
###Code
csv_buffer = StringIO()
churn.to_csv(csv_buffer, index = False)
s3.Object(bucket_name, source_data_file_name).put(Body = csv_buffer.getvalue())
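# (StringIO lets us hand the CSV to S3 as an in-memory string, so no local temp file is written)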
print('s3://' + bucket_name + '/' + source_data_file_name)
###Output
_____no_output_____
###Markdown
Create Database, Table, Load Data in Amazon Aurora MySQLNow, we want to create the target database and table in Amazon Aurora, so we can load the data.
###Code
database_name = "telecom_customer_churn"
churn_table = "customers"
customer_msgs_table = "customer_message"
###Output
_____no_output_____
###Markdown
Connect to the database server and create a cursor object to traverse over the fetched results.
###Code
cnx = mysql.connect(user = db_user,
password = db_password,
host = db_host)
dbcursor = cnx.cursor(buffered = True)
###Output
_____no_output_____
###Markdown
Create a database:
###Code
# send a query to create a database
dbcursor.execute("CREATE DATABASE IF NOT EXISTS {}".format(database_name))
# send a query to show all existing databases and fetch all results:
dbcursor.execute("SHOW DATABASES")
databases = dbcursor.fetchall()
print(databases)
# switch to the database 'telecom_customer_churn'
dbcursor.execute("USE {}".format(database_name))
###Output
_____no_output_____
###Markdown
Now we will create a table to hold customer churn data. The column definition was taken from [this blog](https://aws.amazon.com/blogs/aws/new-for-amazon-aurora-use-machine-learning-directly-from-your-databases/).
###Code
# here we delete the table 'customers' if it already exists
dbcursor.execute("DROP TABLE IF EXISTS {}".format(churn_table))
# then, we define a new table:
dbcursor.execute("""CREATE TABLE {}
(state VARCHAR(2048),
acc_length BIGINT(20),
area_code BIGINT(20),
phone VARCHAR(2048),
int_plan VARCHAR(2048),
vmail_plan VARCHAR(2048),
vmail_msg BIGINT(20),
day_mins DOUBLE,
day_calls BIGINT(20),
day_charge DOUBLE,
eve_mins DOUBLE,
eve_calls BIGINT(20),
eve_charge DOUBLE,
night_mins DOUBLE,
night_calls BIGINT(20),
night_charge DOUBLE,
int_mins DOUBLE,
int_calls BIGINT(20),
int_charge DOUBLE,
cust_service_calls BIGINT(20),
churn VARCHAR(2048))""".format(churn_table))
# send a query to show all existing tables
dbcursor.execute("SHOW TABLES")
# fetch all results
tables = dbcursor.fetchall()
# print names of the tables in the database 'telecom_customer_churn'
for table in tables:
print(table)
###Output
_____no_output_____
###Markdown
Here we will print the list of columns that will be updated when inserting the data from the data frame.
###Code
# send a query to retrieve the column names from the table 'customers' and fetch the results
dbcursor.execute("SHOW COLUMNS FROM {}".format(churn_table))
columns = dbcursor.fetchall()
cols = "','".join([x[0] for x in columns])
# print the column names as a comma-separated string created in the previous statement.
print("'" + cols + "'")
###Output
_____no_output_____
###Markdown
Everything looks good so far! Now we're ready to [bulk load our data into Amazon Aurora from S3](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.LoadFromS3.html).
###Code
print(source_data_file_name)
# send a query to load the data into the table 'customers' from the S3 bucket.
dbcursor.execute("""LOAD DATA FROM S3 's3://{bucket}/{filename}' INTO TABLE {tablename}
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n' IGNORE 1 LINES""".format(tablename = database_name + '.' + churn_table,
bucket = bucket_name,
filename = source_data_file_name))
# commit the above transaction for all users
cnx.commit()
###Output
_____no_output_____
###Markdown
Let's check for load errors, and check whether the resulting data looks correct. The output provides us with the name of the source s3 bucket from which the data were loaded, file name and when it was loaded.
###Code
# run a query to check the history of all data loads from S3 to the database
dbcursor.execute("SELECT * from mysql.aurora_s3_load_history WHERE load_prefix = 's3://{bucket}/{filename}'".format(
tablename = churn_table,
bucket = bucket_name,
filename = source_data_file_name))
all_loads = dbcursor.fetchall()
for load in all_loads:
print(load)
# run a query to preview the first 5 rows from the table:
dbcursor.execute("SELECT * FROM `{}` LIMIT 5".format(churn_table))
result = dbcursor.fetchall()
for i in result:
print(i)
###Output
_____no_output_____
###Markdown
We can see that the customer data is now in Aurora. Load Customer Messages to DatabaseNow we'll create a second table, one with some messages from customer service calls. We'll use this table later, to test Amazon Comprehend integration with our database.
###Code
# here we are reusing the same cursor object created above and removing the table 'customer_message'
# if it already exists.
dbcursor.execute("DROP TABLE IF EXISTS `{}`".format(customer_msgs_table))
# next, we define a table with four colums: area code, phone number, text of a message from a customer and
# the time they called.
sql = """CREATE TABLE IF NOT EXISTS `{}` (
area_code BIGINT(20) NOT NULL,
phone VARCHAR(2048) NOT NULL,
message VARCHAR(255) NOT NULL,
calltime TIMESTAMP NOT NULL
);""".format(customer_msgs_table)
dbcursor.execute(sql)
# verify that the table was successfully created by showing all existing tables
# in the database 'telecom_customer_churn'
dbcursor.execute("SHOW TABLES")
tables = dbcursor.fetchall()
for table in tables:
print(table)
# here we request to see the format of the columns in the table 'customer_message'
dbcursor.execute("DESCRIBE `{}`;".format(customer_msgs_table))
dbcursor.fetchall()
# the following SQL statement loads 6 rows in the table 'customer_message' with the area code,
# phone number, generated messages and the date/time of the call.
sql_inserts =["""
INSERT INTO customer_message(area_code, phone, message, calltime)
VALUES (415, "329-6603", "Thank you very much for resolving the issues with my bill!", '2020-01-01 10:10:10');""",
"""INSERT INTO customer_message(area_code, phone, message, calltime)
VALUES (415, "351-7269", "I don't understand how I paid for 100 minutes and got only 90, you are ripping me off!",'2020-01-01 10:10:10');""",
"""INSERT INTO customer_message(area_code, phone, message, calltime)
VALUES (408, "360-1596", "Please fix this issue! I am sick of sitting on a phone every single day with you people!",'2020-01-01 10:10:10');""",
"""INSERT INTO customer_message(area_code, phone, message, calltime)
VALUES (415, "382-4657", "This is a really great feature, thank for helping me store all my phone numbers.", '2020-01-01 10:10:10');""",
"""INSERT INTO customer_message(area_code, phone, message, calltime)
VALUES (415, "371-7191", "Why am I paying so much for my international minutes?", '2020-01-01 10:10:10');""",
"""INSERT INTO customer_message(area_code, phone, message, calltime)
VALUES (415, "358-1921", "Why do I have to wait for the response from the customer service for so long? I don't have time for this.", '2020-01-01 10:10:10');"""
]
try:
for i in range(len(sql_inserts)):
dbcursor.execute(sql_inserts[i])
# NB : you won't get an IntegrityError when reading
except (mysql.Error, mysql.Warning) as e:
print(e)
cnx.commit()
###Output
_____no_output_____
###Markdown
Lastly, let's join the tables and read them to pandas DataFrame to check that we can see customer and complaint data as expected.
###Code
sql = """SELECT cu.state, cu.area_code, cu.phone, cu.int_plan, cu.vmail_plan, cu.churn,
calls.message
FROM {} cu, {} calls
WHERE cu.area_code = calls.area_code AND cu.phone = calls.phone
AND message is not null""".format(churn_table, customer_msgs_table)
df = pd.read_sql(sql, con = cnx)
df.head(5)
# close connection to the database
cnx.close()
###Output
_____no_output_____
###Markdown
Our Amazon Aurora database now contains our "production" data. Now we're finally at the starting point of our scenario! Section 2. Export data from Amazon Aurora to S3 for use in Machine LearningThe DBA has just received the request: "Please export the customer data to S3, so the data scientist can explore the reason for data churn. Thanks!" Luckily, there's a new Amazon Aurora feature that makes it easy: [Saving Data from an Amazon Aurora MySQL DB Cluster into Text Files in an Amazon S3 Bucket](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.SaveIntoS3.html). We'll use this feature to export our customer data to S3.
###Code
# create connection to the database
cnx = mysql.connect(user = db_user,
password = db_password,
host = db_host)
# create cursor
dbcursor = cnx.cursor(buffered = True)
dbcursor.execute("USE {}".format(database_name))
###Output
_____no_output_____
###Markdown
We can also split the data into test, training, validation and upload to s3 separately directly from our database. But for now, we'll let the data scientists deal with that! One of the requirements for performing queries against the Amazon SageMaker endpoint (which will be created shortly) is SQL privileges to invoke Amazon SageMaker and to execute functions, which is described in the documentation [here](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/mysql-ml.htmlaurora-ml-sql-privileges), section "Granting SQL Privileges for Invoking Aurora Machine Learning Services". Let's check that we have the right privileges. We should see 'SELECT INTO S3' listed. We also need to have the [right privileges](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.SaveIntoS3.htmlAuroraMySQL.Integrating.SaveIntoS3.Grant). Since we are using admin as a user to invoke queries, this step isn't needed. However, in normal circumstances, a SQL user should have these privileges granted.
###Code
# this statement displays the privileges and roles that are assigned to a MySQL user account or role
dbcursor.execute("SHOW GRANTS")
dbcursor.fetchall()
###Output
_____no_output_____
###Markdown
There are several ways we can unload the data. We could choose to unload without using headers; this would be appropriate if we're unloading a large amount of data and are using a metadata catalog (such as [AWS Glue](https://aws.amazon.com/glue/)) to store the column information. Here, as it's a small amount of data, to simplify the use case and to avoid introducing errors, we choose to unload the data in CSV format and add a header for use by the ML engineer in Part 2. Otherwise, we could also provide the ML engineer with the column list.
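For comparison, a header-less unload (a sketch only: the same statement as in the next cell minus the `HEADER` keyword, reusing the same placeholder variables) would look like this:
```python
# sketch: unload without a header row, e.g. when an AWS Glue catalog holds the schema;
# point it at a different prefix so it doesn't clash with the export below
dbcursor.execute("""SELECT * FROM `{tablename}` INTO OUTFILE S3 's3://{bucket}/{prefix}/no_header/{mldata}'
                    FORMAT CSV""".format(tablename = churn_table,
                                         bucket = bucket_name,
                                         prefix = prefix,
                                         mldata = ml_data))
```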
###Code
dbcursor.execute("""SELECT * FROM `{tablename}` INTO OUTFILE S3 's3://{bucket}/{prefix}/{mldata}'
FORMAT CSV HEADER""".format(tablename = churn_table,
bucket = bucket_name,
prefix = prefix,
mldata = ml_data))
dbcursor.execute("SHOW COLUMNS FROM {}".format(churn_table))
columns = dbcursor.fetchall()
cols = "','".join([x[0] for x in columns])
print("'" + cols + "'")
###Output
_____no_output_____
###Markdown
Now we have the column names in case we need to pass this information to the ML engineer so he knows what the columns are in our unloaded data.
###Code
cnx.close()
###Output
_____no_output_____ |
tests/test_files/nb1.ipynb | ###Markdown
My Fancy Title
###Code
print('Hello World')
a = 25*5
a
###Output
_____no_output_____ |
examples/07_Exploring_Query.ipynb | ###Markdown
07 - Exploring Query 1. Imports
###Code
import pandas as pd
import pymove as pm
from pymove import folium, MoveDataFrame
from pymove.query import query
###Output
_____no_output_____
###Markdown
2. Load Data DataSet- [Hurricanes and Typhoons](https://www.kaggle.com/noaa/hurricane-database): The NHC publishes the tropical cyclone historical database in a format known as HURDAT, short for HURricane DATabase
###Code
hurricanes_pandas_df = pd.read_csv('atlantic.csv')
hurricanes_pandas_df
#Select hurricanes from 2012 to 2015
hurricanes_pandas_df = hurricanes_pandas_df.loc[hurricanes_pandas_df['Date'] >= 20120000]
hurricanes_pandas_df = hurricanes_pandas_df.loc[hurricanes_pandas_df['Date'] < 20160000]
hurricanes_pandas_df.shape
hurricanes_pandas_df[['ID', 'Name', 'Latitude', 'Longitude', 'Date', 'Time']].head()
hurricanes_pandas_df = pm.conversions.lat_and_lon_decimal_degrees_to_decimal(
hurricanes_pandas_df, latitude='Latitude', longitude='Longitude'
)
def convert_to_datetime(row):
this_date = '{}-{}-{}'.format(str(row['Date'])[0:4], str(row['Date'])[4:6], str(row['Date'])[6:])
this_time = '{:02d}:{:02d}:00'.format(int(row['Time']/100), int(str(row['Time'])[-2:]))
return '{} {}'.format(this_date, this_time)
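# example: Date=20120528, Time=1800 -> '2012-05-28 18:00:00'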
hurricanes_pandas_df['Datetime'] = hurricanes_pandas_df.apply(convert_to_datetime, axis=1)
hurricanes_pandas_df[['ID', 'Name', 'Latitude', 'Longitude', 'Datetime']].head()
#Converting the pandas dataframe to pymove's MoveDataFrame
hurricanes_2012 = MoveDataFrame(
data=hurricanes_pandas_df, latitude='Latitude', longitude='Longitude',datetime='Datetime', traj_id='Name'
)
print(type(hurricanes_2012))
hurricanes_2012.head()
###Output
<class 'pymove.core.pandas.PandasMoveDataFrame'>
###Markdown
Visualization
###Code
folium.plot_trajectories_with_folium(hurricanes_2012, zoom_start=2)
#Total hurricane amount between 2012 and 2015
this_ex = hurricanes_2012
this_ex['id'].unique().shape[0]
#Selecting a hurricane for demonstration
gonzalo = hurricanes_2012.loc[hurricanes_2012['id'] == ' GONZALO']
folium.plot_trajectories_with_folium(
gonzalo, lat_origin=gonzalo['lat'].median(), lon_origin=gonzalo['lon'].median(), zoom_start=2
)
###Output
_____no_output_____
###Markdown
2. Range Query Using distance MEDP (Mean Euclidean Distance Predictive)
###Code
prox_Gonzalo = query.range_query(gonzalo, hurricanes_2012, min_dist=200, distance='MEDP')
folium.plot_trajectories_with_folium(prox_Gonzalo, zoom_start=3)
###Output
_____no_output_____
###Markdown
Using Distance MEDT (Mean Euclidean Distance Trajectory)
###Code
prox_Gonzalo = query.range_query(gonzalo, hurricanes_2012, min_dist=1000, distance='MEDT')
folium.plot_trajectories_with_folium(prox_Gonzalo, zoom_start=3)
###Output
_____no_output_____
###Markdown
3. KNN (K-Nearest-Neighbor) Using distance MEDP (Mean Euclidean Distance Predictive)
###Code
prox_Gonzalo = query.knn_query(gonzalo, hurricanes_2012, id_='id', k=5, distance='MEDP')
folium.plot_trajectories_with_folium(prox_Gonzalo, zoom_start=3)
###Output
_____no_output_____
###Markdown
Using Distance MEDT (Mean Euclidean Distance Trajectory)
###Code
prox_Gonzalo = query.knn_query(gonzalo, hurricanes_2012, id_='id', k=5, distance='MEDT')
folium.plot_trajectories_with_folium(prox_Gonzalo, zoom_start=3)
###Output
_____no_output_____
###Markdown
07 - Exploring Query 1. Imports
###Code
import pandas as pd
import numpy as np
import pymove as pmv
from pymove import folium, MoveDataFrame
from pymove.query import query
from datetime import datetime
from numpy import Inf
###Output
_____no_output_____
###Markdown
2. Load Data DataSet- [Hurricanes and Typhoons](https://www.kaggle.com/noaa/hurricane-database): The NHC publishes the tropical cyclone historical database in a format known as HURDAT, short for HURricane DATabase
###Code
hurricanes_pandas_df = pd.read_csv('atlantic.csv')
hurricanes_pandas_df
#Select hurricanes from 2012 to 2015
hurricanes_pandas_df = hurricanes_pandas_df.loc[hurricanes_pandas_df['Date'] >= 20120000]
hurricanes_pandas_df = hurricanes_pandas_df.loc[hurricanes_pandas_df['Date'] < 20160000]
hurricanes_pandas_df.shape
hurricanes_pandas_df[['ID', 'Name', 'Latitude', 'Longitude', 'Date', 'Time']].head()
hurricanes_pandas_df = pmv.conversions.lat_and_lon_decimal_degrees_to_decimal(
hurricanes_pandas_df, latitude='Latitude', longitude='Longitude'
)
def convert_to_datetime(row):
this_date = '{}-{}-{}'.format(str(row['Date'])[0:4], str(row['Date'])[4:6], str(row['Date'])[6:])
this_time = '{:02d}:{:02d}:00'.format(int(row['Time']/100), int(str(row['Time'])[-2:]))
return '{} {}'.format(this_date, this_time)
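# example: Date=20120528, Time=1800 -> '2012-05-28 18:00:00'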
hurricanes_pandas_df['Datetime'] = hurricanes_pandas_df.apply(convert_to_datetime, axis=1)
hurricanes_pandas_df[['ID', 'Name', 'Latitude', 'Longitude', 'Datetime']].head()
#Converting the pandas dataframe to pymove's MoveDataFrame
hurricanes_2012 = MoveDataFrame(
data=hurricanes_pandas_df, latitude='Latitude', longitude='Longitude',datetime='Datetime', traj_id='Name'
)
print(type(hurricanes_2012))
hurricanes_2012.head()
###Output
<class 'pymove.core.pandas.PandasMoveDataFrame'>
###Markdown
Visualization
###Code
folium.plot_trajectories_with_folium(hurricanes_2012, zoom_start=2)
#Total hurricane amount between 2012 and 2015
this_ex = hurricanes_2012
this_ex['id'].unique().shape[0]
#Selecting a hurricane for demonstration
gonzalo = hurricanes_2012.loc[hurricanes_2012['id'] == ' GONZALO']
folium.plot_trajectories_with_folium(
gonzalo, lat_origin=gonzalo['lat'].median(), lon_origin=gonzalo['lon'].median(), zoom_start=2
)
###Output
_____no_output_____
###Markdown
2. Range Query Using distance MEDP (Mean Euclidean Distance Predictive)
###Code
prox_Gonzalo = query.range_query(gonzalo, hurricanes_2012, min_dist=200, distance='MEDP')
folium.plot_trajectories_with_folium(prox_Gonzalo, zoom_start=3)
###Output
_____no_output_____
###Markdown
Using Distance MEDT (Mean Euclidean Distance Trajectory)
###Code
prox_Gonzalo = query.range_query(gonzalo, hurricanes_2012, min_dist=1000, distance='MEDT')
folium.plot_trajectories_with_folium(prox_Gonzalo, zoom_start=3)
###Output
_____no_output_____
###Markdown
3. KNN (K-Nearest-Neighbor) Using distance MEDP (Mean Euclidean Distance Predictive)
###Code
prox_Gonzalo = query.knn_query(gonzalo, hurricanes_2012, id_='id', k=5, distance='MEDP')
folium.plot_trajectories_with_folium(prox_Gonzalo, zoom_start=3)
###Output
_____no_output_____
###Markdown
Using Distance MEDT (Mean Euclidean Distance Trajectory)
###Code
prox_Gonzalo = query.knn_query(gonzalo, hurricanes_2012, id_='id', k=5, distance='MEDT')
folium.plot_trajectories_with_folium(prox_Gonzalo, zoom_start=3)
###Output
_____no_output_____ |
codewars.ipynb | ###Markdown
###Code
def how_much_water(water, load, clothes):
    # each item of clothing above the standard load needs 10% more water
    double_load = load * 2
    if clothes > double_load:
        return 'Too much clothes'
    elif clothes < load:
        return 'Not enough clothes'
    else:
        return water * 1.1 ** (clothes - load)
#how_much_water(50,15,29)
how_much_water(10,11,20)
#how_much_water(50,15,29), 189.87, ''
###Output
_____no_output_____
###Markdown
2. Create a function (or write a script in Shell) that takes an integer as an argument and returns "Even" for even numbers or "Odd" for odd numbers.
###Code
def even_or_odd(number):
if (number%2==0):
return "Even"
else:
return "Odd"
even_or_odd(-4)
###Output
_____no_output_____
###Markdown
3. Return the number (count) of vowels in the given string.
We will consider a, e, i, o, u as vowels for this Kata (but not y).
The input string will only consist of lower case letters and/or spaces.
###Code
def get_count(input_str):
    input_str = input_str.lower()  # str.lower() returns a new string, so reassign it
    num_vowels = 0
    for char in input_str:
        if char in 'aeiou':
            num_vowels += 1
    return num_vowels
get_count("abracadabra")
###Output
_____no_output_____
###Markdown
4. You get an array of numbers; return the sum of all of the positive ones.
Example
[1,-4,7,12] =>
1 + 7 + 12 = 20
###Code
def positive_sum(arr):
# Your code here
result=0
for i in range (0,len(arr)):
if (arr[i]>0):
result+=arr[i]
return result
positive_sum([1,-2,3,4,5])
###Output
_____no_output_____ |
PyTorch_beginner/download/neural_networks_tutorial.ipynb | ###Markdown
Neural Networks===============Neural networks can be constructed using the ``torch.nn`` package.Now that you had a glimpse of ``autograd``, ``nn`` depends on``autograd`` to define models and differentiate them.An ``nn.Module`` contains layers, and a method ``forward(input)``\ thatreturns the ``output``.For example, look at this network that classifies digit images:.. figure:: /_static/img/mnist.png :alt: convnet convnetIt is a simple feed-forward network. It takes the input, feeds itthrough several layers one after the other, and then finally gives theoutput.A typical training procedure for a neural network is as follows:- Define the neural network that has some learnable parameters (or weights)- Iterate over a dataset of inputs- Process input through the network- Compute the loss (how far is the output from being correct)- Propagate gradients back into the network’s parameters- Update the weights of the network, typically using a simple update rule: ``weight = weight - learning_rate * gradient``Define the network------------------Let’s define this network:
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# 1 input image channel, 6 output channels, 5x5 square convolution
# kernel
self.conv1 = nn.Conv2d(1, 6, 5)
self.conv2 = nn.Conv2d(6, 16, 5)
# an affine operation: y = Wx + b
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
# Max pooling over a (2, 2) window
x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
# If the size is a square you can only specify a single number
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
x = x.view(-1, self.num_flat_features(x))
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
def num_flat_features(self, x):
size = x.size()[1:] # all dimensions except the batch dimension
num_features = 1
for s in size:
num_features *= s
return num_features
net = Net()
print(net)
###Output
_____no_output_____
###Markdown
You just have to define the ``forward`` function, and the ``backward``function (where gradients are computed) is automatically defined for youusing ``autograd``.You can use any of the Tensor operations in the ``forward`` function.The learnable parameters of a model are returned by ``net.parameters()``
###Code
params = list(net.parameters())
print(len(params))
print(params[0].size()) # conv1's .weight
###Output
_____no_output_____
###Markdown
Let's try a random 32x32 input. Note: the expected input size for this net (LeNet) is 32x32. To use this net on the MNIST dataset, please resize the images from the dataset to 32x32.
###Code
input = torch.randn(1, 1, 32, 32)
out = net(input)
print(out)
###Output
_____no_output_____
###Markdown
Zero the gradient buffers of all parameters and backprops with randomgradients:
###Code
net.zero_grad()
out.backward(torch.randn(1, 10))
###Output
_____no_output_____
###Markdown
Note``torch.nn`` only supports mini-batches. The entire ``torch.nn`` package only supports inputs that are a mini-batch of samples, and not a single sample. For example, ``nn.Conv2d`` will take in a 4D Tensor of ``nSamples x nChannels x Height x Width``. If you have a single sample, just use ``input.unsqueeze(0)`` to add a fake batch dimension.Before proceeding further, let's recap all the classes you’ve seen so far.**Recap:** - ``torch.Tensor`` - A *multi-dimensional array* with support for autograd operations like ``backward()``. Also *holds the gradient* w.r.t. the tensor. - ``nn.Module`` - Neural network module. *Convenient way of encapsulating parameters*, with helpers for moving them to GPU, exporting, loading, etc. - ``nn.Parameter`` - A kind of Tensor, that is *automatically registered as a parameter when assigned as an attribute to a* ``Module``. - ``autograd.Function`` - Implements *forward and backward definitions of an autograd operation*. Every ``Tensor`` operation, creates at least a single ``Function`` node, that connects to functions that created a ``Tensor`` and *encodes its history*.**At this point, we covered:** - Defining a neural network - Processing inputs and calling backward**Still Left:** - Computing the loss - Updating the weights of the networkLoss Function-------------A loss function takes the (output, target) pair of inputs, and computes avalue that estimates how far away the output is from the target.There are several different`loss functions `_ under thenn package .A simple loss is: ``nn.MSELoss`` which computes the mean-squared errorbetween the input and the target.For example:
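For instance, a minimal sketch of adding that fake batch dimension to a single 1-channel 32x32 sample (the variable names here are just for illustration):
```python
single_sample = torch.randn(1, 32, 32)   # C x H x W, no batch dimension
batched = single_sample.unsqueeze(0)     # becomes 1 x C x H x W
print(batched.shape)                     # torch.Size([1, 1, 32, 32])
```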
###Code
output = net(input)
target = torch.randn(10) # a dummy target, for example
target = target.view(1, -1) # make it the same shape as output
criterion = nn.MSELoss()
loss = criterion(output, target)
print(loss)
###Output
_____no_output_____
###Markdown
Now, if you follow ``loss`` in the backward direction, using its ``.grad_fn`` attribute, you will see a graph of computations that looks like this:: input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d -> view -> linear -> relu -> linear -> relu -> linear -> MSELoss -> loss. So, when we call ``loss.backward()``, the whole graph is differentiated w.r.t. the loss, and all Tensors in the graph that have ``requires_grad=True`` will have their ``.grad`` Tensor accumulated with the gradient. For illustration, let us follow a few steps backward:
###Code
print(loss.grad_fn) # MSELoss
print(loss.grad_fn.next_functions[0][0]) # Linear
print(loss.grad_fn.next_functions[0][0].next_functions[0][0]) # ReLU
###Output
_____no_output_____
###Markdown
Backprop--------To backpropagate the error all we have to do is to ``loss.backward()``.You need to clear the existing gradients though, else gradients will beaccumulated to existing gradients.Now we shall call ``loss.backward()``, and have a look at conv1's biasgradients before and after the backward.
###Code
net.zero_grad() # zeroes the gradient buffers of all parameters
print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)
loss.backward()
print('conv1.bias.grad after backward')
print(net.conv1.bias.grad)
###Output
_____no_output_____
###Markdown
Now, we have seen how to use loss functions.**Read Later:** The neural network package contains various modules and loss functions that form the building blocks of deep neural networks. A full list with documentation is `here `_.**The only thing left to learn is:** - Updating the weights of the networkUpdate the weights------------------The simplest update rule used in practice is the Stochastic GradientDescent (SGD): ``weight = weight - learning_rate * gradient``We can implement this using simple python code:.. code:: python learning_rate = 0.01 for f in net.parameters(): f.data.sub_(f.grad.data * learning_rate)However, as you use neural networks, you want to use various differentupdate rules such as SGD, Nesterov-SGD, Adam, RMSProp, etc.To enable this, we built a small package: ``torch.optim`` thatimplements all these methods. Using it is very simple:
###Code
import torch.optim as optim
# create your optimizer
optimizer = optim.SGD(net.parameters(), lr=0.01)
# in your training loop:
optimizer.zero_grad() # zero the gradient buffers
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step() # Does the update
###Output
_____no_output_____ |
More-MicroQiskit.ipynb | ###Markdown
A Few More Features. The workshops so far have given a PewPew-based guide to the first few sections of the textbook [Learn Quantum Computation using Qiskit](https://community.qiskit.org). This has introduced most of what MicroQiskit can do, but there are still a few features to mention. First we'll take the basic program from the last section and reduce it down to a single qubit. Since this frees up some buttons, we can use one to implement a new operation. Specifically, ▲ will be used for `rx(pi/4,0)`.
###Code
%matplotlib notebook
import pew # setting up tools for the pewpew
from microqiskit import QuantumCircuit, simulate # setting up tools for quantum
from math import pi
pew.init() # initialize the game engine...
screen = pew.Pix() # ...and the screen
qc = QuantumCircuit(1,1) # create an empty circuit with one qubit and one output bit
# create circuits with the required measurements, so we can add them in easily
meas = {}
meas['Z'] = QuantumCircuit(1,1)
meas['Z'].measure(0,0)
meas['X'] = QuantumCircuit(1,1)
meas['X'].h(0)
meas['X'].measure(0,0)
basis = 'Z' # set the initial measurement basis for each qubit
# loop over the squares centered on (1,2) and (1,4) and make all dim
for (X,Y) in [(1,2),(1,4)]:
for dX in [+1,0,-1]:
for dY in [+1,0,-1]:
screen.pixel(X+dX,Y+dY,2)
pew.show(screen)
for (X,Y) in [(1,2),(1,4)]:
screen.pixel(X,Y,0) # turn off the center pixels of the squares
old_keys = 0
while True: # loop which checks for user input and responds
# look for and act upon key presses
keys = pew.keys() # get current key presses
if keys!=0 and keys!=old_keys:
if keys==pew.K_X:
basis = 'X'*(basis=='Z') + 'Z'*(basis=='X') # toggle basis
if keys==pew.K_LEFT:
qc.x(0) # x when LEFT is pressed
if keys==pew.K_UP:
            qc.rx(pi/4,0) # rx(pi/4) when UP is pressed
if keys==pew.K_DOWN:
qc.h(0) # h when DOWN is pressed
old_keys = keys
# execute the circuit and get a single sample of memory for the given measurement bases
m = simulate(qc+meas[basis],shots=1,get='memory')
# turn the pixels (1,2) and (1,4) (depending on basis) on or off (depending on m[0])
if m[0]=='1':
if basis=='Z':
screen.pixel(1,2,3)
else:
screen.pixel(1,4,3)
else:
if basis=='Z':
screen.pixel(1,2,0)
else:
screen.pixel(1,4,0)
# turn the pixels not used to display m[0] to dim
if basis=='Z':
screen.pixel(1,4,2)
else:
screen.pixel(1,2,2)
pew.show(screen) # update screen to display the above changes
pew.tick(1/6) # pause for a sixth of a second
###Output
_____no_output_____
###Markdown
If you use this new operation a couple of times, you'll notice that the result of z measurements becomes completely random. But you'll also find that the results of x measurements are random too. It looks like we have gotten our qubit to the point that it is not certain about anything. This only appears to be the case because we are missing an important type of measurement. Given the names of the other two measurements, you may not be surprised to learn that this is called a *y measurement*.
```python
# y measurement of qubit j
qc.rx(pi/2,j)
qc.measure(j,j)
```
Note that you'll need to use `from math import pi` so that your program knows what `pi` is. In the program below we add in this measurement, and use the O button to switch between x and y measurements.
###Code
import pew # setting up tools for the pewpew
from microqiskit import QuantumCircuit, simulate # setting up tools for quantum
from math import pi
pew.init() # initialize the game engine...
screen = pew.Pix() # ...and the screen
qc = QuantumCircuit(1,1) # create an empty circuit with one qubit and one output bit
# create circuits with the required measurements, so we can add them in easily
meas = {}
meas['Z'] = QuantumCircuit(1,1)
meas['Z'].measure(0,0)
meas['X'] = QuantumCircuit(1,1)
meas['X'].h(0)
meas['X'].measure(0,0)
meas['Y'] = QuantumCircuit(1,1)
meas['Y'].rx(pi/2,0)
meas['Y'].measure(0,0)
basis = 'Z' # set the initial measurement basis for each qubit
# loop over the squares centered on (1,2), (1,4) and (1,6) and make all dim
for (X,Y) in [(1,2),(1,4),(1,6)]:
for dX in [+1,0,-1]:
for dY in [+1,0,-1]:
screen.pixel(X+dX,Y+dY,2)
pew.show(screen)
for (X,Y) in [(1,2),(1,4),(1,6)]:
screen.pixel(X,Y,0) # turn off the center pixels of the squares
old_keys = 0
while True: # loop which checks for user input and responds
# look for and act upon key presses
keys = pew.keys() # get current key presses
if keys!=0 and keys!=old_keys:
if keys==pew.K_X:
basis = 'X'*(basis=='Z') + 'Z'*(basis=='X') + 'Y'*(basis=='Y') # toggle basis between X and Z
if keys==pew.K_O:
basis = 'X'*(basis=='Y') + 'Y'*(basis=='X') + 'Z'*(basis=='Z') # toggle basis between X and Y
if keys==pew.K_LEFT:
qc.x(0) # x when LEFT is pressed
if keys==pew.K_UP:
            qc.rx(pi/4,0) # rx(pi/4) when UP is pressed
if keys==pew.K_DOWN:
qc.h(0) # h when DOWN is pressed
old_keys = keys
# execute the circuit and get a single sample of memory for the given measurement bases
m = simulate(qc+meas[basis],shots=1,get='memory')
# turn the pixels (1,2) and (1,4) (depending on basis) on or off (depending on m[0])
if m[0]=='1':
if basis=='Z':
screen.pixel(1,2,3)
elif basis=='X':
screen.pixel(1,4,3)
else:
screen.pixel(1,6,3)
else:
if basis=='Z':
screen.pixel(1,2,0)
elif basis=='X':
screen.pixel(1,4,0)
else:
screen.pixel(1,6,0)
# turn the pixels not used to display m[0] to dim
if basis=='Z':
screen.pixel(1,4,2)
screen.pixel(1,6,2)
elif basis=='X':
screen.pixel(1,2,2)
screen.pixel(1,6,2)
else:
screen.pixel(1,2,2)
screen.pixel(1,4,2)
pew.show(screen) # update screen to display the above changes
pew.tick(1/6) # pause for a sixth of a second
###Output
_____no_output_____
###Markdown
After a couple of `qc.rx(pi/4,0)` operations, when both z and x measurement outcomes are random, the y measurement instead has certainty. To see this process in more detail, let's use the space we now have on the right of the screen. We'll use this to display three vertical lines. The first will represent the probability of a z outcome being `1`: fully bright for certainly `1`, fully dim for certainly `0`, and split between the two for other probabilities. The next two lines will be the same for x and y measurements. These probabilities can be calculated by getting the results in the form of a counts dictionary. Specifically, we'll use
```python
p = simulate(qc+meas['Z'],shots=1000,get='counts')['1']/1000
```
Here `simulate(qc+meas['Z'],shots=1000,get='counts')` runs the circuit and gets the results from `shots=1000` samples. This is returned in a dictionary of the form `{'0':435,'1':565}`, which tells us how many samples gave which output. This means that `simulate(qc+meas['Z'],shots=1000,get='counts')['1']` directly accesses the number of samples that output `1`. By dividing this by the number of shots, we get the probability for a `1`.
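One small caveat (an assumption worth guarding against): if a given output never occurs in the samples, its key may be absent from the counts dictionary, so a slightly more defensive version of the same calculation is:
```python
counts = simulate(qc+meas['Z'], shots=1000, get='counts')
p = counts.get('1', 0) / 1000  # fall back to 0 if no sample ever gave '1'
```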
###Code
import pew # setting up tools for the pewpew
from microqiskit import QuantumCircuit, simulate # setting up tools for quantum
from math import pi
pew.init() # initialize the game engine...
screen = pew.Pix() # ...and the screen
qc = QuantumCircuit(1,1) # create an empty circuit with one qubit and one output bit
# create circuits with the required measurements, so we can add them in easily
meas = {}
meas['Z'] = QuantumCircuit(1,1)
meas['Z'].measure(0,0)
meas['X'] = QuantumCircuit(1,1)
meas['X'].h(0)
meas['X'].measure(0,0)
meas['Y'] = QuantumCircuit(1,1)
meas['Y'].rx(pi/2,0)
meas['Y'].measure(0,0)
basis = 'Z' # set the initial measurement basis for each qubit
# loop over the squares centered on (1,2), (1,4) and (1,6) and make all dim
for (X,Y) in [(1,2),(1,4),(1,6)]:
for dX in [+1,0,-1]:
for dY in [+1,0,-1]:
screen.pixel(X+dX,Y+dY,2)
pew.show(screen)
for (X,Y) in [(1,2),(1,4),(1,6)]:
screen.pixel(X,Y,0) # turn off the center pixels of the squares
old_keys = 0
while True: # loop which checks for user input and responds
# look for and act upon key presses
keys = pew.keys() # get current key presses
if keys!=0 and keys!=old_keys:
if keys==pew.K_X:
basis = 'X'*(basis=='Z') + 'Z'*(basis=='X') + 'Y'*(basis=='Y') # toggle basis between X and Z
if keys==pew.K_O:
basis = 'X'*(basis=='Y') + 'Y'*(basis=='X') + 'Z'*(basis=='Z') # toggle basis between X and Y
if keys==pew.K_LEFT:
qc.x(0) # x when LEFT is pressed
if keys==pew.K_UP:
            qc.rx(pi/4,0) # rx(pi/4) when UP is pressed
if keys==pew.K_DOWN:
qc.h(0) # h when DOWN is pressed
old_keys = keys
# execute the circuit and get a single sample of memory for the given measurement bases
m = simulate(qc+meas[basis],shots=1,get='memory')
# turn the pixels (1,2) and (1,4) (depending on basis) on or off (depending on m[0])
if m[0]=='1':
if basis=='Z':
screen.pixel(1,2,3)
elif basis=='X':
screen.pixel(1,4,3)
else:
screen.pixel(1,6,3)
else:
if basis=='Z':
screen.pixel(1,2,0)
elif basis=='X':
screen.pixel(1,4,0)
else:
screen.pixel(1,6,0)
# display probabilities as lines for each basis
p = simulate(qc+meas['Z'],shots=1000,get='counts')['1']/1000
for j in range(0,8):
if p>(j/8):
screen.pixel(5,7-j,3)
else:
screen.pixel(5,7-j,2)
p = simulate(qc+meas['X'],shots=1000,get='counts')['1']/1000
for j in range(0,8):
if p>(j/8):
screen.pixel(6,7-j,3)
else:
screen.pixel(6,7-j,2)
p = simulate(qc+meas['Y'],shots=1000,get='counts')['1']/1000
for j in range(0,8):
if p>(j/8):
screen.pixel(7,7-j,3)
else:
screen.pixel(7,7-j,2)
# turn the pixels not used to display m[0] to dim
if basis=='Z':
screen.pixel(1,4,2)
screen.pixel(1,6,2)
elif basis=='X':
screen.pixel(1,2,2)
screen.pixel(1,6,2)
else:
screen.pixel(1,2,2)
screen.pixel(1,4,2)
pew.show(screen) # update screen to display the above changes
pew.tick(1/6) # pause for a sixth of a second
###Output
_____no_output_____ |
notebooks/NSynth Data Exploration & Queries.ipynb | ###Markdown
NSynth Data Exploration and Queries Imports
###Code
import zipfile
import os
import json
import random
import itertools as it
import random
import numpy as np
import pandas as pd
import librosa as lb
import librosa.display
import matplotlib.pyplot as plt
import IPython.display as ipd
import os
# Feel free to change this
# You can also download the dataset using the bash script (could take a long time)
DATASET_PATH = '../datasets/nsynth-test/'
# Play a sample sound
file = DATASET_PATH+'audio/bass_electronic_018-024-025.wav'
ipd.Audio(file)
###Output
_____no_output_____
###Markdown
Extract metadata
###Code
json_metadata = open(DATASET_PATH+'examples.json').read()
metadata = json.loads(json_metadata)
# List some example data
list(metadata.items())[:2]
###Output
_____no_output_____
###Markdown
Generate query function for instrument and quality
###Code
def query_metadata(instrument=None, quality=None):
if instrument is None or quality is None:
print('Please specify both the desired instrument and quality')
return
return [i for i in metadata.keys()
if metadata[i]['instrument_family_str'] == instrument
and quality in metadata[i]['qualities_str']]
query_metadata()
# Check output
res = query_metadata('guitar', 'bright')
print(res[0], '\n')
print(metadata[res[0]])
def play_sample(instrument=None, quality=None):
if instrument is None or quality is None:
print('Please specify both the desired instrument and quality')
return
    audio_file_names = query_metadata(instrument, quality)
    if not audio_file_names:  # covers both a missing-argument None and an empty result list
        print(f'No sounds found for a {quality} {instrument}')
        return
    chosen = random.choice(audio_file_names)  # choose once so the printed name matches the played file
    print(chosen)
    return ipd.Audio(DATASET_PATH+'audio/'+chosen+'.wav')
# WARNING - might be loud lol
play_sample('organ', 'dark')
###Output
organ_electronic_028-067-075
|
jupyter_notebooks/machine_learning/ebook_mastering_ml_in_6_steps/Chapter_3_Code/Code/SVM.ipynb | ###Markdown
Support Vector Machine. The key objective of an SVM is to draw the hyperplane that separates the two classes optimally, i.e. such that the margin between the hyperplane and the observations is maximized. The figure below illustrates that many separating hyperplanes are possible; the objective of the SVM is to find the one that gives the largest margin. To maximize the margin we need to minimize $\frac{1}{2}\|w\|^2$ subject to $y_i(W^T X_i + b) - 1 \ge 0$ for all $i$. The final (dual) SVM objective can be written mathematically as $L = \sum_i \alpha_i - \frac{1}{2}\sum_{i,j} \alpha_i \alpha_j y_i y_j (X_i \cdot X_j)$ Key parameters* C: This is the penalty parameter and helps in fitting the boundaries smoothly and appropriately, default=1* Kernel: It must be one of rbf/linear/poly/sigmoid/precomputed, default='rbf' (Radial Basis Function). Choosing an appropriate kernel will result in a better model fit.
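As a quick illustration of these two parameters (a minimal sketch using scikit-learn's `SVC`, which is also used later in this notebook; the specific numbers are arbitrary examples):
```python
from sklearn.svm import SVC

# a larger C penalizes misclassified points more heavily (a tighter, less regularized fit),
# while the kernel choice controls the shape of the decision boundary
linear_clf = SVC(kernel='linear', C=1.0)
rbf_clf = SVC(kernel='rbf', C=10.0)
```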
###Code
from IPython.display import Image
Image(filename='../Chapter 3 Figures/SVM.png', width=800)
###Output
_____no_output_____
###Markdown
Multivariate and Multi-class SVM. Loading the Iris dataset from scikit-learn. Here, the third column represents the petal length, and the fourth column the petal width of the flower samples. The classes are already converted to integer labels where 0=Iris-Setosa, 1=Iris-Versicolor, 2=Iris-Virginica.
###Code
import warnings
warnings.filterwarnings('ignore')
from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn import datasets
import numpy as np
import pandas as pd
from sklearn import tree
from sklearn import metrics
iris = datasets.load_iris()
X = iris.data[:, [2, 3]]
# X = iris.data
y = iris.target
print('Class labels:', np.unique(y))
###Output
('Class labels:', array([0, 1, 2]))
###Markdown
Normalize data: the units of measurement might differ, so let's normalize the data before building the model.
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
sc.fit(X)
X = sc.transform(X)
###Output
_____no_output_____
###Markdown
Split data into train and test. Whenever we are using a random function, it's advised to use a seed to ensure the reproducibility of the results.
###Code
# split data into train and test
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
def plot_decision_regions(X, y, classifier):
h = .02 # step size in the mesh
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, h),
np.arange(x2_min, x2_max, h))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1],
alpha=0.8, c=cmap(idx),
marker=markers[idx], label=cl)
from sklearn.svm import SVC
clf = SVC(kernel='linear', C=1.0, random_state=0)
clf.fit(X_train, y_train)
# generate evaluation metrics
print "Train - Accuracy :", metrics.accuracy_score(y_train, clf.predict(X_train))
print "Train - Confusion matrix :",metrics.confusion_matrix(y_train, clf.predict(X_train))
print "Train - classification report :", metrics.classification_report(y_train, clf.predict(X_train))
print "Test - Accuracy :", metrics.accuracy_score(y_test, clf.predict(X_test))
print "Test - Confusion matrix :",metrics.confusion_matrix(y_test, clf.predict(X_test))
print "Test - classification report :", metrics.classification_report(y_test, clf.predict(X_test))
###Output
Train - Accuracy : 0.952380952381
Train - Confusion matrix : [[34 0 0]
[ 0 30 2]
[ 0 3 36]]
Train - classification report : precision recall f1-score support
0 1.00 1.00 1.00 34
1 0.91 0.94 0.92 32
2 0.95 0.92 0.94 39
avg / total 0.95 0.95 0.95 105
Test - Accuracy : 0.977777777778
Test - Confusion matrix : [[16 0 0]
[ 0 17 1]
[ 0 0 11]]
Test - classification report : precision recall f1-score support
0 1.00 1.00 1.00 16
1 1.00 0.94 0.97 18
2 0.92 1.00 0.96 11
avg / total 0.98 0.98 0.98 45
###Markdown
Plot Decision Boundary Let's consider a two class example to keep things simple
###Code
# Let's use sklearn make_classification function to create some test data.
from sklearn.datasets import make_classification
X, y = make_classification(100, 2, 2, 0, weights=[.5, .5], random_state=0)
# build a simple logistic regression model
clf = SVC(kernel='linear', random_state=0)
clf.fit(X, y)
# get the separating hyperplane
w = clf.coef_[0]
a = -w[0] / w[1]
xx = np.linspace(-5, 5)
yy = a * xx - (clf.intercept_[0]) / w[1]
# plot the parallels to the separating hyperplane that pass through the
# support vectors
b = clf.support_vectors_[0]
yy_down = a * xx + (b[1] - a * b[0])
b = clf.support_vectors_[-1]
yy_up = a * xx + (b[1] - a * b[0])
# Plot the decision boundary
plot_decision_regions(X, y, classifier=clf)
# plot the line, the points, and the nearest vectors to the plane
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1], s=80, facecolors='none')
plt.plot(xx, yy_down, 'k--')
plt.plot(xx, yy_up, 'k--')
plt.xlabel('X1')
plt.ylabel('X2')
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
###Output
_____no_output_____ |
Introduction/Figures/Storyline-schematic.ipynb | ###Markdown
Dependencies
###Code
import numpy as np
import scipy as sp
import seaborn as sn
import pandas as pd
import matplotlib
matplotlib.use('agg')
from matplotlib import pyplot as plt
import matplotlib.animation
import time
from IPython.display import HTML, Image, Video
from tqdm import tqdm
import os
import xarray as xr
import gc
import multiprocessing
from mystatsfunctions import OLSE,LMoments
from moarpalettes import get_palette
matplotlib.rcParams['axes.prop_cycle']=matplotlib.cycler('color',list(get_palette.Petroff10().to_sn_palette()))
matplotlib.rcParams['font.family']='Helvetica'
matplotlib.rcParams["animation.html"] = "jshtml"
matplotlib.rcParams['animation.embed_limit'] = 2**30
# %matplotlib inline
###Output
_____no_output_____
###Markdown
IntroductionThis notebook uses the Lorenz '63 system to demonstrate the conceptual differences between conventional "probabilistic" climate-model based attribution and our forecast-based approach. The specific system used is the Palmer '99 variant of the Lorenz dynamical model, which includes a forcing (at an angle $\theta$ in the xy place) to the system, representing some external forcing - eg. anthropogenic greenhouse gas emissions. The equations of this system are as follows:$\begin{align}\dot{x} & = \sigma(y-x) + f_0 \cos\theta \\\dot{y} & = x (\rho - z) - y + f_0 \sin\theta \\\dot{z} & = xy - \beta z\end{align}$Lorenz, E. N. (1963). Deterministic Nonperiodic Flow. Journal of the Atmospheric Sciences. [DOI](https://doi.org/10.1175/1520-0469(1963)0202.0.CO;2)Palmer, T. N. (1999). A Nonlinear Dynamical Perspective on Climate Prediction. Journal of Climate, 12(2), 575–591. [DOI](https://doi.org/10.1175/1520-0442(1999)0122.0.CO;2) Define the system
###Code
def lorenz(xyz, t, rho=28, sigma=10, beta=8/3, F=0, theta=0):
"Defines the lorez63 dynamical system. Standard default values."
x, y, z = xyz
x_dot = sigma * (y - x) + F * np.cos(theta)
y_dot = x * rho - x * z - y + F * np.sin(theta)
z_dot = x * y - beta * z
return [x_dot, y_dot, z_dot]
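# quick sanity check: lorenz([1.0, 1.0, 1.0], 0) gives [0.0, 26.0, 1 - 8/3 ≈ -1.67] with the default parameters and no forcing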
###Output
_____no_output_____
###Markdown
Integrate the system for long times. This is analogous to running a climate model for many years. We will use a timestep of 1/100 units & run the system for 100 000 units x 100 runs. Each run will start from the final point of the previous run.
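The key trick in the (currently commented-out) integration loop below is chaining the initial conditions between branches; a condensed sketch of that pattern, reusing the `lorenz` function and imports defined above but with much shorter branches, is:
```python
r, s, b = 28, 10, 8/3                 # same parameter values as set in the next cell
ic = [0.01, 0, 0]
for branch in range(3):               # a few short branches, just for illustration
    traj = sp.integrate.odeint(lorenz, ic, np.arange(0, 10, 0.01), (r, s, b, 0, 0))
    ic = traj[-1]                     # the final state of one branch seeds the next
```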
###Code
# set solver parameters
# start_time = pd.to_datetime('1750-01-01')
# end_time = pd.to_datetime('2250-01-01')
# timeindex = pd.date_range(start_time,end_time,freq='1h')[:-1]
# runlength = (end_time.to_pydatetime()-start_time.to_pydatetime()).days
# timestep = runlength / timeindex.size
## run over this many ICs to generate branches
branches = 101
# set Lorenz system parameters
s = 10
r = 28
b = 8/3
F = (8,np.deg2rad(-40))
# ## original ICs
# IC0 = [0.01,0,0]
# IC1 = IC0[:]
# ## ICs if restarting mid-integration
# # start_from = 42
# # X = xr.open_dataset('Lorenz63-realisations/00'+str(start_from)+'.nc')
# # IC0 = X.isel(time=0).sel(type='unforced').to_array().values.flatten()
# # IC1 = X.isel(time=0).sel(type='forced').to_array().values.flatten()
# for branch in tqdm(np.arange(branches)[:]):
# runlength = 100000
# timestep = 0.01
# start = branch * runlength
# end = (branch+1) * runlength
# timeindex = np.arange(start,end,timestep)
# # unforced system
# X0 = sp.integrate.odeint(lorenz, IC0, np.arange(0,runlength+timestep,timestep), (r,s,b,0,0)).reshape(-1,1,3)
# IC0 = X0[-1,0,:]
# X0 = xr.DataArray(data=X0[:-1],dims=['time','branch','dim'],coords=dict(time=timeindex,branch=[branch],dim=['x','y','z'])).to_dataset(dim="dim")
# # system with external forcing
# ## define forcing in terms of magnitude + direction
# X1 = sp.integrate.odeint(lorenz, IC1, np.arange(0,runlength+timestep,timestep), (r,s,b,*F)).reshape(-1,1,3)
# IC1 = X1[-1,0,:]
# X1 = xr.DataArray(data=X1[:-1],dims=['time','branch','dim'],coords=dict(time=timeindex,branch=[branch],dim=['x','y','z'])).to_dataset(dim="dim")
# X = xr.concat([X0.expand_dims({'type':['unforced']}),X1.expand_dims({'type':['forced']})],dim='type')
# X.to_netcdf('./Lorenz63-realisations/'+f"{branch:04d}"+'.nc')
###Output
46%|████▌ | 46/101 [3:32:28<4:14:02, 277.14s/it]
###Markdown
Compute the diagnostics required from the system. We'll use $|x+y|$ as our "impact measure".
###Code
## define diagnostic
X['impact'] = xr.ufuncs.fabs(X.x+X.y)
## convert to pandas for easy resampling
X_df = X.to_dataframe().reset_index()
## determine daily max (preserving coord values) & mean values
X_day = X_df.groupby(['type',X_df.time.dt.date]).apply(lambda x: x.iloc[x.impact.argmax()]).drop('type',axis=1)
X_day['impact_mean'] = X_df.groupby(['type',X_df.time.dt.date]).mean().impact
X_day = X_day.rename(dict(time='maxtime',impact='impact_max'),axis=1).to_xarray()
X_day = X_day.assign_coords(time=pd.to_datetime(X_day.time.values))
## determine yearly max of max & max of mean
X_df = X_day.to_dataframe().reset_index()
X_yearmax = X_df.groupby(['type',X_df.time.dt.year]).apply(lambda x: x.iloc[x.impact_max.argmax()]).drop(['type','time'],axis=1)
X_yearmax['impact_mean'] = X_df.groupby(['type',X_df.time.dt.year]).max().impact_mean
X_yearmax = X_yearmax.to_xarray()
# Show what these DataArrays look like.
X
X_day
X_yearmax
## x-y variable, reshape into "days"
X0_xy = (X0[:,0]-1.2*X0[:,1]).reshape(24,-1)
X1_xy = (X1[:,0]-1.2*X1[:,1]).reshape(24,-1)
## daymax of x-y
### X0
X0_xy_argmax = np.argmax(X0_xy,axis=0)
X0_xy_max = np.take_along_axis(X0_xy, X0_xy_argmax[None], axis=0).flatten()
### X1
X1_xy_argmax = np.argmax(X1_xy,axis=0)
X1_xy_max = np.take_along_axis(X1_xy, X1_xy_argmax[None], axis=0).flatten()
## split into "nodes"
### we'll define the nodes using the line x=0
### node1 = x > 0
### node2 = x < 0
#### X0
X0_node1 = (X0[:,0] > 0).reshape(24,-1)
X0_xy_max_node1_select = np.take_along_axis(X0_node1, X0_xy_argmax[None], axis=0).flatten()
X0_xy_max_node1 = X0_xy_max[X0_xy_max_node1_select]
X0_xy_max_node2 = X0_xy_max[~X0_xy_max_node1_select]
#### X1
X1_node1 = (X1[:,0] > 0).reshape(24,-1)
X1_xy_max_node1_select = np.take_along_axis(X1_node1, X1_xy_argmax[None], axis=0).flatten()
X1_xy_max_node1 = X1_xy_max[X1_xy_max_node1_select]
X1_xy_max_node2 = X1_xy_max[~X1_xy_max_node1_select]
### Plot these simulations
# define a color palette
plot_colors = dict(forced=get_palette.Petroff6().to_sn_palette()[0],unforced=get_palette.Petroff6().to_sn_palette()[1])
fig = plt.figure()
# create gridspec
gs = matplotlib.gridspec.GridSpec(3,2,figure=fig)
# generate axes
ax_dist = [fig.add_subplot(gs[i,1]) for i in np.arange(3)]
ax_xy = fig.add_subplot(gs[:,0])
ax_x = ax_xy.twinx()
ax_y = ax_xy.twiny()
# plot daymax(x-y) distribution
dist_bins = np.linspace(40,46,25)
[ax_dist[0].hist(X_yearmax.sel(type=x).impact_max,histtype='step',bins=dist_bins,color=plot_colors[x],label=x) for x in ['unforced','forced']]
## by nodes (right first, left second)
[ax_dist[1].hist(X_yearmax.where(X_yearmax.x>0).sel(type=x).dropna('time').impact_max,histtype='step',bins=dist_bins,color=plot_colors[x]) for x in ['unforced','forced']]
[ax_dist[2].hist(X_yearmax.where(X_yearmax.x<0).sel(type=x).dropna('time').impact_max,histtype='step',bins=dist_bins,color=plot_colors[x]) for x in ['unforced','forced']]
# plot main "butterfly"
[ax_xy.plot(X.sel(type=x).isel(time=slice(1000,6000)).x,X.sel(type=x).isel(time=slice(1000,6000)).y,lw=0.1,color=plot_colors[x]) for x in ['unforced','forced']]
# plot distributions on each axis
## x histogram
[ax_x.hist(X.sel(type=x).x,density=True,histtype='step',bins=100,color=plot_colors[x],label=x) for x in ['unforced','forced']]
## y histogram
[ax_y.hist(X.sel(type=x).y,density=True,histtype='step',bins=100,color=plot_colors[x],orientation='horizontal') for x in ['unforced','forced']]
# axes layouts
## dist
[sn.despine(ax=a) for a in ax_dist]
ax_dist[0].legend(loc='upper right',frameon=False)
ax_dist[2].set_xlabel('annual maximum of $|x+y|$')
[a.set_yticks([]) for a in ax_dist]
[a.set_ylim(0,70) for a in ax_dist]
[a.set_xlim(dist_bins[0],dist_bins[-1]) for a in ax_dist]
ax_dist[0].set_title('all',loc='left')
ax_dist[1].set_title('right node',loc='left')
ax_dist[2].set_title('left node',loc='left')
## xy
ax_xy.set_xlim(-30,30)
ax_xy.set_ylim(-30,30)
ax_xy.set_xlabel('x')
ax_xy.set_ylabel('y')
ax_xy.axvline(0,ls=':',color='xkcd:grey',lw=0.5)
ax_xy.text(0.03,0.97,'left node',transform=ax_xy.transAxes,va='top',ha='left')
ax_xy.text(0.97,0.97,'right node',transform=ax_xy.transAxes,va='top',ha='right')
### add arrow showing direction of forcing
ax_xy.arrow(x=20,y=-20,dx=10*np.cos(F[1]),dy=10*np.sin(F[1]),length_includes_head=True,width=0.1,head_length=2,head_width=1,overhang=1,fc='k',ec='k')
ax_xy.arrow(x=19,y=-20,dx=1+10*np.cos(F[1]),dy=0,length_includes_head=True,width=0.1,head_length=0,head_width=0,fc='k',ec='k')
ax_xy.text(20,-15,'$f_0$',ha='right',fontsize=15)
## x
ax_x.set_ylim(0,0.5)
ax_x.axis('off')
## y
ax_y.set_xlim(0,0.5)
ax_y.axis('off')
# fig layout
fig.suptitle(r'Lorenz system: $\rho=28$, $\sigma=10$, $\beta=8/3$, $f_0=3$, $\theta=75 \degree$',fontweight='bold')
fig.patch.set_facecolor('xkcd:white')
fig.set_size_inches(10,5)
fig.dpi=100
plt.tight_layout()
## Probabilistic attribution framing
# For illustration, let's use a "severe" impacts threshold of |x+y| > 42.
# extract impact events
impact_event_count = (X_day.impact_max>42).sum('time')
# ratio of event counts is RR
FAR = 1 - (impact_event_count.sel(type='unforced')/impact_event_count.sel(type='forced')).values[()]
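# note: the unforced and forced runs have the same length, so event counts are proportional to
# exceedance probabilities; the risk ratio is then RR = P_forced / P_unforced and the line above
# computes FAR = 1 - P_unforced / P_forced = 1 - 1/RR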
# And thus the Fraction of Attributable Risk for all severe impact events is:
print(np.round(FAR,3))
# Extreme event attribution of a specific event
# Here, we are going to look at particular events that occurred in the left hand node of the forced system.
## Forecast-based attribution
# We're going to use xarray for this to keep track of the dimensions.
# Define a function to produce forecasts based on ICs from the long system integration.
def generate_forecasts(event,
leads = [1/8,1/4,1/2,3/4,1,2,3,5,10,20],
ens_size = 51,
ICP_noise_scale = 0.005,
fc_timestep_size='15min'):
# choose leads (current freq "days")
inidates = event.maxtime.values - pd.to_timedelta(leads,unit='d').round('1h')
# extract initial conditions before the event
ICs = X.sel(type='forced',time=inidates).rename(time='inidate').drop('impact').to_array()
# generate perturbed initial conditions
ICPs = xr.DataArray(data=np.random.normal(0,ICP_noise_scale,ens_size*3).reshape(ens_size,3),dims=['number','variable'],coords=dict(number=np.arange(ens_size),variable=['x','y','z']))
## make the first intial condition the "control"
ICPs[0] = 0
# add perturbations to control ICs to get array of ICs
ICs = (ICs + ICPs).transpose('inidate','number','variable')
## integrate each ensemble member until a couple of days after the event
## convert timestep size to units of days
fc_timestep = pd.to_timedelta(fc_timestep_size).total_seconds()/(24*3600)
fc_end = event.maxtime.values + pd.Timedelta('2d')
max_fc_length = (fc_end - inidates).days.max() / fc_timestep
### create results array
fcs0 = np.empty((ens_size,int(max_fc_length),3,inidates.size))+np.nan
fcs1 = np.empty((ens_size,int(max_fc_length),3,inidates.size))+np.nan
for i,inidate in enumerate(ICs.inidate.values):
fc_length = (fc_end - inidate)
fc_timesteps = int(fc_length.total_seconds()/(fc_timestep*(24*60*60)))
ics = ICs.sel(inidate=inidate).values
fc0 = np.array([sp.integrate.odeint(lorenz, ic, np.arange(0,fc_timesteps*fc_timestep-1e-11,fc_timestep), (r,s,b,0,0)) for ic in ics])
fc1 = np.array([sp.integrate.odeint(lorenz, ic, np.arange(0,fc_timesteps*fc_timestep-1e-11,fc_timestep), (r,s,b,*F)) for ic in ics])
fcs0[:,-fc_timesteps:,:,i] = fc0
fcs1[:,-fc_timesteps:,:,i] = fc1
# wrangle into nice forecast DataArray
fc_timeindex = pd.date_range(inidates.min(),fc_end,freq=fc_timestep_size)[:-1]
fcs0 = xr.DataArray(data=fcs0,dims=['number','time','variable','inidate'],coords=dict(number=np.arange(ens_size),time=fc_timeindex,variable=['x','y','z'],inidate=inidates)).to_dataset(dim="variable")
fcs1 = xr.DataArray(data=fcs1,dims=['number','time','variable','inidate'],coords=dict(number=np.arange(ens_size),time=fc_timeindex,variable=['x','y','z'],inidate=inidates)).to_dataset(dim="variable")
## join datasets together
fcs = xr.concat([fcs0.expand_dims({'type':['unforced']}),fcs1.expand_dims({'type':['forced']})],dim='type')
# create impact variable
fcs['impact'] = xr.ufuncs.fabs(fcs.x+fcs.y)
return fcs
### Event 1
# The 1945 annual maximum event (largest right node event between 1900 and 2100).
X_event = X_yearmax.sel(time=1945,type='forced')
fcs = generate_forecasts(X_event, leads=[1/24,1/12,1/8,1/6,1/4,1/2,3/4,1,2,5,10,20], ens_size=51, ICP_noise_scale=1, fc_timestep_size='12min')
g=sn.displot(data=fcs.impact.max('time').to_dataframe().reset_index(),x='impact',hue='type',col='inidate',col_wrap=6,kind='ecdf',palette=plot_colors)
[a.axvline(X_event.impact_max,ls=':',color='grey',lw=1) for a in g.axes]
''
sn.relplot(data=fcs.sel(number=0).to_dataframe().reset_index(),x='x',y='y',hue='type',col='inidate',col_wrap=6,kind='line',palette=plot_colors,sort=False)
### Event 2
The 2065 annual maximum event (largest left node event bewteen 1900 and 2100).
X_event = X_yearmax.sel(time=2065,type='forced')
fcs = generate_forecasts(X_event, leads=[1/24,1/12,1/8,1/6,1/4,1/2,3/4,1,2,5,10,20], ens_size=51, ICP_noise_scale=1, fc_timestep_size='12min')
g=sn.displot(data=fcs.impact.max('time').to_dataframe().reset_index(),x='impact',hue='type',col='inidate',col_wrap=6,kind='ecdf',palette=plot_colors)
[a.axvline(X_event.impact_max,ls=':',color='grey',lw=1) for a in g.axes]
''
sn.relplot(data=fcs.sel(number=9).to_dataframe().reset_index(),x='x',y='y',hue='type',col='inidate',col_wrap=6,kind='line',palette=plot_colors,sort=False)
sn.relplot(data=fcs.to_dataframe().reset_index(),
x='time',
y='impact',
hue='type',
col='inidate',
size='number',
sizes=(0.5,0.5),
col_wrap=6,
kind='line',
palette=plot_colors,
sort=False,
facet_kws=dict(sharey=False,sharex=False))
# Things you can demonstrate here:
# - Situations where the forcing affects the predictability
# - Situations where the forcing impacts the event
## Alternative event definition
# Rather than using $|x+y|$, here we'll try to define a line in phase space above which we'll define impact events.
# We'll use $0.83 \cdot x - y$ and look for events < -10
print('relative occupancy of left node vs right')
((X.x<0).sum('time') / (X.x>0).sum('time')).to_pandas()
X['impact_II'] = np.fabs(X.x)
impact_lim = 12
print('left node')
print('count')
print((X.impact_II>impact_lim).where(X.x<0).sum('time').to_pandas())
print('magnitude')
print((X.impact_II).where((X.impact_II>impact_lim)&(X.x<0)).mean('time').to_pandas())
print('right node')
print('count')
print((X.impact_II>impact_lim).where(X.x>0).sum('time').to_pandas())
print('magnitude')
print((X.impact_II).where((X.impact_II>impact_lim)&(X.x>0)).mean('time').to_pandas())
## define diagnostic
X['impact_II'] = 0.9*X.x-X.y
## convert to pandas for easy resampling
XII_df = X.to_dataframe().reset_index()
## determine daily max (preserving coord values) & mean values
XII_day = XII_df.groupby(['type',XII_df.time.dt.date]).apply(lambda x: x.iloc[x.impact_II.argmax()]).drop('type',axis=1)
XII_day['impact_II_mean'] = XII_df.groupby(['type',XII_df.time.dt.date]).mean().impact_II
XII_day = XII_day.rename(dict(time='maxtime',impact_II='impact_II_max'),axis=1).to_xarray()
XII_day = XII_day.assign_coords(time=pd.to_datetime(XII_day.time.values))
## determine yearly max of max & max of mean
XII_df = XII_day.to_dataframe().reset_index()
XII_yearmax = XII_df.groupby(['type',XII_df.time.dt.year]).apply(lambda x: x.iloc[x.impact_II_max.argmax()]).drop(['type','time'],axis=1)
XII_yearmax['impact_II_mean'] = XII_df.groupby(['type',XII_df.time.dt.year]).min().impact_II_mean
XII_yearmax = XII_yearmax.to_xarray()
fig = plt.figure()
# create gridspec
gs = matplotlib.gridspec.GridSpec(3,2,figure=fig)
# generate axes
ax_dist = [fig.add_subplot(gs[i,1]) for i in np.arange(3)]
ax_xy = fig.add_subplot(gs[:,0])
ax_x = ax_xy.twinx()
ax_y = ax_xy.twiny()
# plot daymax(x-y) distribution
dist_bins = np.linspace(11,15,25)
[ax_dist[0].hist(XII_yearmax.sel(type=x).impact_II_max,histtype='step',bins=dist_bins,color=plot_colors[x],label=x) for x in ['unforced','forced']]
## by nodes (right first, left second)
[ax_dist[1].hist(XII_yearmax.where(XII_yearmax.x>0).sel(type=x).dropna('time').impact_II_max,histtype='step',bins=dist_bins,color=plot_colors[x]) for x in ['unforced','forced']]
[ax_dist[2].hist(XII_yearmax.where(XII_yearmax.x<0).sel(type=x).dropna('time').impact_II_max,histtype='step',bins=dist_bins,color=plot_colors[x]) for x in ['unforced','forced']]
# plot main "butterfly"
[ax_xy.plot(X.sel(type=x).isel(time=slice(1000,6000)).x,X.sel(type=x).isel(time=slice(1000,6000)).y,lw=0.1,color=plot_colors[x]) for x in ['unforced','forced']]
# plot distributions on each axis
## x histogram
[ax_x.hist(X.sel(type=x).x,density=True,histtype='step',bins=100,color=plot_colors[x],label=x) for x in ['unforced','forced']]
## y histogram
[ax_y.hist(X.sel(type=x).y,density=True,histtype='step',bins=100,color=plot_colors[x],orientation='horizontal') for x in ['unforced','forced']]
# axes layouts
## dist
[sn.despine(ax=a) for a in ax_dist]
ax_dist[0].legend(loc='upper right',frameon=False)
ax_dist[2].set_xlabel('annual maximum of $0.85 \cdot x-y$')
[a.set_yticks([]) for a in ax_dist]
[a.set_ylim(0,100) for a in ax_dist]
[a.set_xlim(dist_bins[0],dist_bins[-1]) for a in ax_dist]
ax_dist[0].set_title('all',loc='left')
ax_dist[1].set_title('right node',loc='left')
ax_dist[2].set_title('left node',loc='left')
## xy
ax_xy.set_xlim(-30,30)
ax_xy.set_ylim(-30,30)
ax_xy.set_xlabel('x')
ax_xy.set_ylabel('y')
ax_xy.axvline(0,ls=':',color='xkcd:grey',lw=0.5)
ax_xy.text(0.03,0.95,'left node',transform=ax_xy.transAxes,va='top',ha='left')
ax_xy.text(0.97,0.95,'right node',transform=ax_xy.transAxes,va='top',ha='right')
### add arrow showing direction of forcing
ax_xy.arrow(x=20,y=-20,dx=10*np.cos(F[1]),dy=10*np.sin(F[1]),length_includes_head=True,width=0.1,head_length=2,head_width=1,overhang=1,fc='k',ec='k')
ax_xy.arrow(x=21,y=-20,dx=-1+10*np.cos(F[1]),dy=0,length_includes_head=True,width=0.1,head_length=0,head_width=0,fc='k',ec='k')
ax_xy.text(20,-15,'$f_0$',ha='right',fontsize=15)
## x
ax_x.set_ylim(0,0.5)
ax_x.axis('off')
## y
ax_y.set_xlim(0,0.5)
ax_y.axis('off')
## add in threshold line & shading
xrange = np.arange(-30,30+1e-6,0.01)
yrange = 0.87*xrange-10
ax_xy.plot(xrange,yrange,ls='--',color='xkcd:red',lw=0.5)
ax_xy.fill_between(xrange,yrange,-30,alpha=0.04,color='xkcd:red',lw=0)
# fig layout
fig.suptitle(r'Lorenz system: $\rho=28$, $\sigma=10$, $\beta=8/3$, $f_0=3$, $\theta=75 \degree$',fontweight='bold')
fig.patch.set_facecolor('xkcd:white')
fig.set_size_inches(10,5)
fig.dpi=100
plt.tight_layout()
## event in the left node
event_l = XII_day.sel(time='1989-05-30',type='forced')
fcs_l = generate_forecasts(event_l, leads=[1/24,1/12,1/8,1/6,1/4,1/2,3/4,1,2,5,10,20], ens_size=501, ICP_noise_scale=1, fc_timestep_size='12min')
fcs_l['impact_II'] = 0.87*fcs_l.x-fcs_l.y
## event in the right node
event_r = XII_yearmax.sel(time=2235,type='forced')
fcs_r = generate_forecasts(event_r, leads=[1/24,1/12,1/8,1/6,1/4,1/2,3/4,1,2,5,10,20], ens_size=501, ICP_noise_scale=1, fc_timestep_size='12min')
fcs_r['impact_II'] = 0.87*fcs_r.x-fcs_r.y
g=sn.displot(data=fcs_l.impact_II.sel(time='1989-05-30').max('time').to_dataframe().reset_index(),x='impact_II',hue='type',col='inidate',col_wrap=6,kind='ecdf',palette=plot_colors)
[a.axvline(event_l.impact_II_max,ls=':',color='grey',lw=1) for a in g.axes]
''
sn.relplot(data=fcs_l.sel(number=12).to_dataframe().reset_index(),x='x',y='y',hue='type',col='inidate',col_wrap=6,kind='line',palette=plot_colors,sort=False)
sn.relplot(data=fcs_l.to_dataframe().reset_index(),
x='time',
y='impact_II',
hue='type',
col='inidate',
size='number',
sizes=(0.5,0.5),
col_wrap=6,
kind='line',
palette=plot_colors,
sort=False,
facet_kws=dict(sharey=False,sharex=False))
## Animation
# We'll create an animation with the following elements:
# - Trajectory plot
# - 10 ensemble members up to event (solid)
# - 10 ensemble members post event (solid, transparent)
# - Dots that pulse when event occurs
# - Histograms of event
# - Histograms that build up as events occur
### left node event
## plot options
inidate = '1989-05-29 060000'
event_start_time = '1989-05-30 000000'
event_end_time = '1989-05-30 120000'
choose_mems = np.random.choice(500,10)
fig,axes = plt.subplots(1,2,figsize=(10,5))
ax,ax1=axes
sn.despine()
ax.set_xlabel('x')
ax.set_ylabel('y')
ax1.set_xlabel('x')
ax1.set_xlim(11,16)
ax1.set_ylim(0,75)
fr_no = 150
hist0_points = []
hist1_points = []
for lnum,mem in enumerate(choose_mems):
X0 = fcs_l.sel(number=mem,inidate=inidate,type='unforced',time=slice(inidate,event_end_time))
X1 = fcs_l.sel(number=mem,inidate=inidate,type='forced',time=slice(inidate,event_end_time))
maxtime0 = X0.sel(time=slice(event_start_time,event_end_time)).impact_II.idxmax('time')
maxtime1 = X1.sel(time=slice(event_start_time,event_end_time)).impact_II.idxmax('time')
argtime0 = ((maxtime0 - X0.time.isel(time=0)).dt.seconds//(12*60)).values[0]
argtime1 = ((maxtime1 - X1.time.isel(time=0)).dt.seconds//(12*60)).values[0]
X00 = X0.where(X0.time<=maxtime0).isel(time=slice(0,fr_no)).squeeze()
ax.plot(X00.x,X00.y,lw=1,color=plot_colors['unforced'],zorder=5)
X10 = X1.where(X1.time<=maxtime1).isel(time=slice(0,fr_no)).squeeze()
ax.plot(X10.x,X10.y,lw=1,color=plot_colors['forced'],zorder=5)
if fr_no >= argtime0:
X01 = X0.where(X0.time>=maxtime0).isel(time=slice(0,fr_no)).squeeze()
ax.plot(X01.x,X01.y,lw=1,color=plot_colors['unforced'],alpha=0.1,zorder=3)
X0.sel(time=maxtime0).squeeze().plot.scatter('x','y',color=plot_colors['unforced'],ax=ax,zorder=10,alpha=np.exp(0.1*(argtime0-fr_no)))
if fr_no >= argtime1:
X11 = X1.where(X1.time>=maxtime1).isel(time=slice(0,fr_no)).squeeze()
ax.plot(X11.x,X11.y,lw=1,color=plot_colors['forced'],alpha=0.1,zorder=3)
X1.sel(time=maxtime1).squeeze().plot.scatter('x','y',color=plot_colors['forced'],ax=ax,zorder=10,alpha=np.exp(0.1*(argtime1-fr_no)))
## get histogram points
event_max0 = fcs_l.where(fcs_l.x<0).sel(inidate=inidate,type='unforced',time=slice(event_start_time,event_end_time)).impact_II.max('time').squeeze().values
actual_max0 = fcs_l.where(fcs_l.x<0).sel(inidate=inidate,type='unforced',time=slice(inidate,event_end_time)).isel(time=slice(0,fr_no)).impact_II.max('time').squeeze().values
hist0_points = event_max0[event_max0<=actual_max0]
event_max1 = fcs_l.where(fcs_l.x<0).sel(inidate=inidate,type='forced',time=slice(event_start_time,event_end_time)).impact_II.max('time').squeeze().values
actual_max1 = fcs_l.where(fcs_l.x<0).sel(inidate=inidate,type='forced',time=slice(inidate,event_end_time)).isel(time=slice(0,fr_no)).impact_II.max('time').squeeze().values
hist1_points = event_max1[event_max1<=actual_max1]
## plot max points (already drawn via plot.scatter inside the loop above; xmax0_points/ymax0_points were never defined)
# ax.plot(xmax0_points, ymax0_points)
## plot ICs
fcs_l.sel(number=choose_mems,type='forced',inidate=inidate,time=inidate).plot.scatter('x','y',color='k',ax=ax,marker='.')
fcs_l.sel(type='forced',inidate=inidate,time=inidate).plot.scatter('x','y',color='grey',ax=ax,marker='.',s=1,zorder=-1)
## plot histograms
bins = np.linspace(11,16,41)
ax1.hist(hist0_points, bins=bins, lw=1.5, color=plot_colors['unforced'], histtype='step')
ax1.hist(hist1_points, bins=bins, lw=1.5, color=plot_colors['forced'], histtype='step')
# figure layout
###Output
_____no_output_____
###Markdown
Trying to demo that there's a difference between storyline & probabilistic -> can get different answers. The question is whether the answers from my approach give a more reliable guide to the full ensemble vs. e.g. data assimilation. Limit in which it has to be so -> do we converge to that result or not? One argument we could use: if we back off far enough we sample a bigger range of possible forcing impacts as we sample a range of states, moving from a state with little forcing impact into a region with large forcing impact. Lorenz -> very sensitive to forcing about x,y=0.
Reattempt using |x| as the metric of choice.
Note: to make loading the data considerably quicker, don't decode_times in the xarray open call.
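A minimal sketch of that pattern (illustrative; `xr.decode_cf` is the standard way to decode the times later, if time-aware operations such as `groupby('time.year')` are needed):

```python
ds_raw = xr.open_dataset('./Lorenz63-realisations/0000.nc', decode_times=False)  # fast open
ds = xr.decode_cf(ds_raw)  # decode only when time-based operations are actually required
```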
###Code
def preprocess_to_extremes(ds):
ds = ds.load()
rh_events = ds.groupby('time.year').apply(lambda x: x.sel(time=x.x.idxmax('time')))
lh_events = ds.groupby('time.year').apply(lambda x: x.sel(time=x.x.idxmin('time')))
return xr.concat([rh_events.expand_dims({'event':['max']}),lh_events.expand_dims({'event':['min']})],'event')
X = xr.open_mfdataset('./Lorenz63-realisations/00*.nc',preprocess=preprocess_to_extremes)
test=xr.open_dataset('./Lorenz63-realisations/0000.nc',decode_times=False)
test_event = test.sel(time=test.isel(time=slice(10000,None)).x.idxmax('time'))
np.sqrt(((test.sel(type='forced')-test_event).to_array()**2).sum('variable')).min()
###Output
_____no_output_____
###Markdown
convert to pandas for easy resampling
X_df = X.to_dataframe().reset_index()
determine daily max (preserving coord values) & mean values
X_day = X_df.groupby(['type',X_df.time.dt.date]).apply(lambda x: x.iloc[x.metric.argmax()]).drop('type',axis=1)
X_day['metric_mean'] = X_df.groupby(['type',X_df.time.dt.date]).mean().metric
X_day = X_day.rename(dict(time='maxtime',metric='metric_max'),axis=1).to_xarray()
X_day = X_day.assign_coords(time=pd.to_datetime(X_day.time.values))
determine yearly max of max & max of mean
X_df = X_day.to_dataframe().reset_index()
X_yearmax = X_df.groupby(['type',X_df.time.dt.year]).apply(lambda x: x.iloc[x.metric_max.argmax()]).drop(['type','time'],axis=1)
X_yearmax['metric_mean'] = X_df.groupby(['type',X_df.time.dt.year]).max().metric_mean
X_yearmax = X_yearmax.to_xarray()
###Code
def locate_analogs(X0, n=1000, max_dist=2):
"""Finds analogs for X0 along the attractor defined by X."""
X0 = X0.squeeze()
E = xr.ufuncs.sqrt(((X.sel(type=X0.type.values) - X0).to_array()**2).sum('variable'))
analog_times = E.rename('E').to_dataframe().sort_values('E').iloc[:n].index.values
analogs = X.sel(time=analog_times,type=X0.type.values).drop('metric')
return analogs.rename(time='number').assign_coords(number=np.arange(n))
E = locate_analogs(rh_event)
E_ranked = E.rename('E').to_dataframe().sort_values('E')
E_ranked
def generate_forecasts(event,
leads = [1/8,1/4,1/2,3/4,1,2,3,5,10,20],
ens_size = 51,
ICP_type = 'random',
ran_gen = sp.stats.norm(0,0.005),
fc_timestep_size='15min'):
"""
    Creates a set of initialised Lorenz63 runs.
    Can choose how IC perturbations are generated.
"""
# choose leads (current freq "days")
inidates = event.maxtime.values - pd.to_timedelta(leads,unit='d').round('1h')
# extract initial conditions before the event
ICs = X.sel(type='forced',time=inidates).rename(time='inidate').drop('metric')
# generate perturbed initial conditions
if ICP_type == 'random':
ICPs = xr.DataArray(data=ran_gen.rvs(ens_size*3).reshape(ens_size,3),dims=['number','variable'],coords=dict(number=np.arange(ens_size),variable=['x','y','z']))
        ## make the first initial condition the "control"
ICPs[0] = 0
# add perturbations to control ICs to get array of ICs
ICs = (ICs.to_array() + ICPs).transpose('inidate','number','variable')
elif ICP_type == 'analog':
ICs = ICs.groupby('inidate').apply(locate_analogs,n=ens_size)
ICs = ICs.to_array().transpose('inidate','number','variable')
## integrate each ensemble member until a couple of days after the event
## convert timestep size to units of days
fc_timestep = pd.to_timedelta(fc_timestep_size).total_seconds()/(24*3600)
fc_end = event.maxtime.values + pd.Timedelta('2d')
max_fc_length = ((fc_end - inidates).days+(fc_end - inidates).seconds/(3600*24)).max() / fc_timestep
### create results array
fcs0 = np.empty((ens_size,int(max_fc_length),3,inidates.size))+np.nan
fcs1 = np.empty((ens_size,int(max_fc_length),3,inidates.size))+np.nan
for i,inidate in enumerate(ICs.inidate.values):
fc_length = (fc_end - inidate)
fc_timesteps = int(fc_length.total_seconds()/(fc_timestep*(24*60*60)))
ics = ICs.sel(inidate=inidate).values
fc0 = np.array([sp.integrate.odeint(lorenz, ic, np.arange(0,fc_timesteps*fc_timestep-1e-11,fc_timestep), (r,s,b,0,0)) for ic in ics])
fc1 = np.array([sp.integrate.odeint(lorenz, ic, np.arange(0,fc_timesteps*fc_timestep-1e-11,fc_timestep), (r,s,b,*F)) for ic in ics])
fcs0[:,-fc_timesteps:,:,i] = fc0
fcs1[:,-fc_timesteps:,:,i] = fc1
# wrangle into nice forecast DataArray
fc_timeindex = pd.date_range(inidates.min(),fc_end,freq=fc_timestep_size)[:-1]
fcs0 = xr.DataArray(data=fcs0,dims=['number','time','variable','inidate'],coords=dict(number=np.arange(ens_size),time=fc_timeindex,variable=['x','y','z'],inidate=inidates)).to_dataset(dim="variable")
fcs1 = xr.DataArray(data=fcs1,dims=['number','time','variable','inidate'],coords=dict(number=np.arange(ens_size),time=fc_timeindex,variable=['x','y','z'],inidate=inidates)).to_dataset(dim="variable")
## join datasets together
fcs = xr.concat([fcs0.expand_dims({'type':['unforced']}),fcs1.expand_dims({'type':['forced']})],dim='type')
# create impact variable
# fcs['impact'] = xr.ufuncs.fabs(fcs.x+fcs.y)
return fcs
## event I (right hand lobe): '1926-07-30T19:00:00.000000000'
## event II (left hand lobe): '1897-01-17T04:00:00.000000000'
rh_event = X_day.sel(time='1926-07-30',type='forced')
fcs_r = generate_forecasts(rh_event, leads=[1/4,1,2], ens_size=1001, ICP_type='analog', ran_gen=sp.stats.norm(0,1), fc_timestep_size='1min')
fcs_r['metric'] = np.fabs(fcs_r.x)
lh_event = X_day.sel(time='1897-01-17',type='forced')
fcs_l = generate_forecasts(lh_event, leads=[1/4,1,2], ens_size=1001, ICP_type='analog', ran_gen=sp.stats.norm(0,1), fc_timestep_size='1min')
fcs_l['metric'] = np.fabs(fcs_l.x)
"""Choose whether to plot interactively (keep off for generating anomations)"""
%matplotlib inline
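## note (added for clarity): for interactive pan/zoom, `%matplotlib notebook` (or `%matplotlib widget`
## with ipympl installed) also works; keep `inline` while generating the animation frames below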
g=sn.displot(data=fcs_l.x.min('time').to_dataframe().reset_index(),x='x',hue='type',col='inidate',kind='hist',element='step',bins=np.linspace(-21,-16,51))
g.fig.patch.set_facecolor('xkcd:white')
# g.fig.savefig('./fcs_l.png',dpi=200)
g=sn.displot(data=fcs_r.x.max('time').to_dataframe().reset_index(),x='x',hue='type',col='inidate',kind='hist',element='step',bins=np.linspace(16,21,51))
g.fig.patch.set_facecolor('xkcd:white')
# g.fig.savefig('./fcs_r.png',dpi=200)
###Output
_____no_output_____
###Markdown
Animate the left node event first.
###Code
## left node
event_start_time = '1897-01-17T00:00:00'
event_end_time = '1897-01-17T08:00:00'
choose_mems = np.random.choice(500,15)
plot_colors = dict(forced=get_palette.Petroff6().to_sn_palette()[0],unforced=get_palette.Petroff6().to_sn_palette()[1])
def generate_plot(fr_no,inidate):
fig,axes = plt.subplots(1,2,figsize=(10,5))
ax,ax1=axes
sn.despine()
ax.set_xlabel('x')
ax.set_ylabel('y')
ax1.set_xlabel('x')
ax1.set_xlim(-21,-16)
ax1.set_ylim(0,150)
hist0_points = []
hist1_points = []
for lnum,mem in enumerate(choose_mems):
X0 = fcs_l.sel(number=mem,inidate=inidate,type='unforced',time=slice(inidate,event_end_time))
X1 = fcs_l.sel(number=mem,inidate=inidate,type='forced',time=slice(inidate,event_end_time))
maxtime0 = X0.x.idxmin('time')
maxtime1 = X1.x.idxmin('time')
argtime0 = ((maxtime0 - X0.time.isel(time=0)).astype(float)//(1e9*60)).values[()].astype(int)
argtime1 = ((maxtime1 - X1.time.isel(time=0)).astype(float)//(1e9*60)).values[()].astype(int)
X00 = X0.where(X0.time<=maxtime0).isel(time=slice(0,fr_no)).squeeze()
ax.plot(X00.x,X00.y,lw=1,color=plot_colors['unforced'],zorder=5)
X10 = X1.where(X1.time<=maxtime1).isel(time=slice(0,fr_no)).squeeze()
ax.plot(X10.x,X10.y,lw=1,color=plot_colors['forced'],zorder=5)
if fr_no >= argtime0:
X01 = X0.where(X0.time>=maxtime0).isel(time=slice(0,fr_no)).squeeze()
ax.plot(X01.x,X01.y,lw=1,color=plot_colors['unforced'],alpha=0.1,zorder=3)
X0.sel(time=maxtime0).squeeze().plot.scatter('x','y',color=plot_colors['unforced'],ax=ax,zorder=10,alpha=np.exp(0.02*(argtime0-fr_no)))
if fr_no >= argtime1:
X11 = X1.where(X1.time>=maxtime1).isel(time=slice(0,fr_no)).squeeze()
ax.plot(X11.x,X11.y,lw=1,color=plot_colors['forced'],alpha=0.1,zorder=3)
X1.sel(time=maxtime1).squeeze().plot.scatter('x','y',color=plot_colors['forced'],ax=ax,zorder=10,alpha=np.exp(0.02*(argtime1-fr_no)))
## get histogram points
if fr_no>0:
event_max0 = fcs_l.sel(inidate=inidate,type='unforced',time=slice(inidate,event_end_time)).x.min('time').squeeze().values
actual_max0 = fcs_l.sel(inidate=inidate,type='unforced',time=slice(inidate,event_end_time)).isel(time=slice(0,fr_no)).x.min('time').squeeze().values
hist0_points = event_max0[event_max0==actual_max0]
event_max1 = fcs_l.sel(inidate=inidate,type='forced',time=slice(inidate,event_end_time)).x.min('time').squeeze().values
actual_max1 = fcs_l.sel(inidate=inidate,type='forced',time=slice(inidate,event_end_time)).isel(time=slice(0,fr_no)).x.min('time').squeeze().values
hist1_points = event_max1[event_max1==actual_max1]
## plot ICs
fcs_l.sel(number=choose_mems,type='forced',inidate=inidate,time=inidate).plot.scatter('x','y',color='k',ax=ax,marker='.')
fcs_l.sel(type='forced',inidate=inidate,time=inidate).plot.scatter('x','y',color='grey',ax=ax,marker='.',s=1,zorder=-1)
## plot histograms
bins = np.linspace(-21,-16,51)
ax1.hist(hist0_points, bins=bins, lw=1.5, color=plot_colors['unforced'], histtype='step',label='unforced')
ax1.hist(hist1_points, bins=bins, lw=1.5, color=plot_colors['forced'], histtype='step',label='forced')
# figure layout
## x,y lim
xlim0 = fcs_l.sel(number=0,type='forced',inidate=inidate,time=inidate).x.values[()] + np.array([-4,4])
ylim0 = fcs_l.sel(number=0,type='forced',inidate=inidate,time=inidate).y.values[()] + np.array([-4,4])
xlim1 = np.array([-25,25])
ylim1 = np.array([-25,25])
frames_to_zoom = 120
curr_xlim = ax.set_xlim()
curr_ylim = ax.set_ylim()
xlim_min = np.min([curr_xlim[0],np.max([xlim1[0],xlim0[0]+(fr_no-180)*(xlim1[0]-xlim0[0])/frames_to_zoom])])
xlim_max = np.max([curr_xlim[1],np.min([xlim1[1],xlim0[1]+(fr_no-180)*(xlim1[1]-xlim0[1])/frames_to_zoom])])
ylim_min = np.min([curr_ylim[0],np.max([ylim1[0],ylim0[0]+(fr_no-180)*(ylim1[0]-ylim0[0])/frames_to_zoom])])
ylim_max = np.max([curr_ylim[1],np.min([ylim1[1],ylim0[1]+(fr_no-180)*(ylim1[1]-ylim0[1])/frames_to_zoom])])
ax.set_xlim(xlim_min,xlim_max)
ax.set_ylim(ylim_min,ylim_max)
ax1.legend(frameon=False,loc='upper right')
fig.savefig('../Figures/animation-figs/'+f"{fr_no:04d}"+'_'+str(inidate)+'.png',dpi=200)
fig.clear()
plt.close(fig)
gc.collect()
for inidate in tqdm(fcs_l.inidate.values):
total_frames = int((pd.to_datetime(event_end_time) - pd.to_datetime(inidate)).total_seconds()/60)
P1 = multiprocessing.Pool(processes=4)
P1.starmap(generate_plot,[(fr_no,inidate) for fr_no in np.arange(total_frames)])
P1.close()
###Output
_____no_output_____
###Markdown
Then the right.
###Code
## right node
event_start_time = '1926-07-30T15:00'
event_end_time = '1926-07-30T23:00'
choose_mems = np.random.choice(500,15)
plot_colors = dict(forced=get_palette.Petroff6().to_sn_palette()[0],unforced=get_palette.Petroff6().to_sn_palette()[1])
def generate_plot(fr_no,inidate):
fig,axes = plt.subplots(1,2,figsize=(10,5))
ax,ax1=axes
sn.despine()
ax.set_xlabel('x')
ax.set_ylabel('y')
ax1.set_xlabel('x')
ax1.set_xlim(16,21)
ax1.set_ylim(0,150)
hist0_points = []
hist1_points = []
for lnum,mem in enumerate(choose_mems):
X0 = fcs_r.sel(number=mem,inidate=inidate,type='unforced',time=slice(inidate,event_end_time))
X1 = fcs_r.sel(number=mem,inidate=inidate,type='forced',time=slice(inidate,event_end_time))
maxtime0 = X0.x.idxmax('time')
maxtime1 = X1.x.idxmax('time')
argtime0 = ((maxtime0 - X0.time.isel(time=0)).astype(float)//(1e9*60)).values[()].astype(int)
argtime1 = ((maxtime1 - X1.time.isel(time=0)).astype(float)//(1e9*60)).values[()].astype(int)
X00 = X0.where(X0.time<=maxtime0).isel(time=slice(0,fr_no)).squeeze()
ax.plot(X00.x,X00.y,lw=1,color=plot_colors['unforced'],zorder=5)
X10 = X1.where(X1.time<=maxtime1).isel(time=slice(0,fr_no)).squeeze()
ax.plot(X10.x,X10.y,lw=1,color=plot_colors['forced'],zorder=5)
if fr_no >= argtime0:
X01 = X0.where(X0.time>=maxtime0).isel(time=slice(0,fr_no)).squeeze()
ax.plot(X01.x,X01.y,lw=1,color=plot_colors['unforced'],alpha=0.1,zorder=3)
X0.sel(time=maxtime0).squeeze().plot.scatter('x','y',color=plot_colors['unforced'],ax=ax,zorder=10,alpha=np.exp(0.02*(argtime0-fr_no)))
if fr_no >= argtime1:
X11 = X1.where(X1.time>=maxtime1).isel(time=slice(0,fr_no)).squeeze()
ax.plot(X11.x,X11.y,lw=1,color=plot_colors['forced'],alpha=0.1,zorder=3)
X1.sel(time=maxtime1).squeeze().plot.scatter('x','y',color=plot_colors['forced'],ax=ax,zorder=10,alpha=np.exp(0.02*(argtime1-fr_no)))
## get histogram points
if fr_no>0:
event_max0 = fcs_r.sel(inidate=inidate,type='unforced',time=slice(inidate,event_end_time)).x.max('time').squeeze().values
actual_max0 = fcs_r.sel(inidate=inidate,type='unforced',time=slice(inidate,event_end_time)).isel(time=slice(0,fr_no)).x.max('time').squeeze().values
hist0_points = event_max0[event_max0==actual_max0]
event_max1 = fcs_r.sel(inidate=inidate,type='forced',time=slice(inidate,event_end_time)).x.max('time').squeeze().values
actual_max1 = fcs_r.sel(inidate=inidate,type='forced',time=slice(inidate,event_end_time)).isel(time=slice(0,fr_no)).x.max('time').squeeze().values
hist1_points = event_max1[event_max1==actual_max1]
## plot ICs
fcs_r.sel(number=choose_mems,type='forced',inidate=inidate,time=inidate).plot.scatter('x','y',color='k',ax=ax,marker='.')
fcs_r.sel(type='forced',inidate=inidate,time=inidate).plot.scatter('x','y',color='grey',ax=ax,marker='.',s=1,zorder=-1)
## plot histograms
bins = np.linspace(16,21,51)
ax1.hist(hist0_points, bins=bins, lw=1.5, color=plot_colors['unforced'], histtype='step',label='unforced')
ax1.hist(hist1_points, bins=bins, lw=1.5, color=plot_colors['forced'], histtype='step',label='forced')
# figure layout
## x,y lim
xlim0 = fcs_r.sel(number=0,type='forced',inidate=inidate,time=inidate).x.values[()] + np.array([-4,4])
ylim0 = fcs_r.sel(number=0,type='forced',inidate=inidate,time=inidate).y.values[()] + np.array([-4,4])
xlim1 = np.array([-25,25])
ylim1 = np.array([-25,25])
frames_to_zoom = 120
curr_xlim = ax.set_xlim()
curr_ylim = ax.set_ylim()
xlim_min = np.min([curr_xlim[0],np.max([xlim1[0],xlim0[0]+(fr_no-180)*(xlim1[0]-xlim0[0])/frames_to_zoom])])
xlim_max = np.max([curr_xlim[1],np.min([xlim1[1],xlim0[1]+(fr_no-180)*(xlim1[1]-xlim0[1])/frames_to_zoom])])
ylim_min = np.min([curr_ylim[0],np.max([ylim1[0],ylim0[0]+(fr_no-180)*(ylim1[0]-ylim0[0])/frames_to_zoom])])
ylim_max = np.max([curr_ylim[1],np.min([ylim1[1],ylim0[1]+(fr_no-180)*(ylim1[1]-ylim0[1])/frames_to_zoom])])
ax.set_xlim(xlim_min,xlim_max)
ax.set_ylim(ylim_min,ylim_max)
ax1.legend(frameon=False,loc='upper right')
fig.savefig('../Figures/animation-figs/'+f"{fr_no:04d}"+'_'+str(inidate)+'.png',dpi=200)
fig.clear()
plt.close(fig)
gc.collect()
for inidate in tqdm(fcs_r.inidate.values):
total_frames = int((pd.to_datetime(event_end_time) - pd.to_datetime(inidate)).total_seconds()/60)
P1 = multiprocessing.Pool(processes=4)
P1.starmap(generate_plot,[(fr_no,inidate) for fr_no in np.arange(total_frames)])
P1.close()
for inidate in tqdm(fcs_r.inidate.values):
fig=plt.figure()
hist0_points = fcs_r.sel(inidate=inidate,type='unforced',time=slice(inidate,event_end_time)).x.max('time').squeeze().values
hist1_points = fcs_r.sel(inidate=inidate,type='forced',time=slice(inidate,event_end_time)).x.max('time').squeeze().values
bins = np.linspace(16,21,51)
plt.hist(hist0_points, bins=bins, lw=1.5, color=plot_colors['unforced'], histtype='step')
plt.hist(hist1_points, bins=bins, lw=1.5, color=plot_colors['forced'], histtype='step')
plt.savefig('./test'+str(inidate)+'.png',dpi=200)
fig.clear()
plt.close()
###Output
_____no_output_____ |
arc_to_parquet/examples/airlines.ipynb | ###Markdown
archive to parquet - partitioned data
Airlines data
###Code
MLRUN_COMMIT = "0.4.5"
!mlrun clean -p -r
import mlrun, os
mlrun.mlconf.dbpath = 'http://mlrun-api:8080'
###Output
_____no_output_____
###Markdown
parameters
###Code
ARTIFACT_PATH = os.path.join(os.getcwd(), 'artifacts', '{{run.uid}}')
FUNCTION = 'arc_to_parquet'
DESCRIPTION = 'retrieve archive table and save as partitioned parquet dataset'
BASE_IMAGE = f'mlrun/ml-base:{MLRUN_COMMIT}'
JOB_KIND = 'job'
TASK_NAME = 'user-task-arc-to-part-parq'
FUNCTION_PY = 'https://raw.githubusercontent.com/yjb-ds/functions/master/arc_to_parquet/arc_to_parquet.py'
ARCHIVE_BIG = "https://s3.amazonaws.com/h2o-airlines-unpacked/allyears_10.csv"
ARCHIVE = "https://s3.amazonaws.com/h2o-airlines-unpacked/allyears.csv"
ARCHIVE_SMALL = "https://s3.amazonaws.com/h2o-airlines-unpacked/allyears2k.csv"
USE_ARCHIVE = ARCHIVE_SMALL
FILE_SHAPE = (123_534_969, 21) # (rows, cols)
SMALL_FILE_SHAPE = (43_978, 21) # (rows, cols)
LOCAL_FILE_NAME = 'airlines.pqt'
ARTIFACT_STORE_KEY = 'airlines'
PARTS_DEST_FOLDER = 'partitions'
PARTS_COLS = ['Year', 'Month']
HEADER = ['Year','Month','DayofMonth','DayOfWeek','DepTime','CRSDepTime','ArrTime','CRSArrTime',
'UniqueCarrier','FlightNum','TailNum','ActualElapsedTime','CRSElapsedTime','AirTime',
'ArrDelay','DepDelay','Origin','Dest','Distance','TaxiIn','TaxiOut','Cancelled',
'CancellationCode','Diverted','CarrierDelay','WeatherDelay','NASDelay','SecurityDelay',
'LateAircraftDelay']
INC_COLS = ['Year','Month','DayofMonth','DayOfWeek','DepTime','CRSDepTime','ArrTime','CRSArrTime',
'UniqueCarrier','FlightNum', 'CRSElapsedTime','AirTime',
'Origin','Dest','Distance', 'TaxiIn', 'TaxiOut']
ENCODING = 'latin-1'
DTYPES_COLS = {
'CRSElapsedTime': 'float32',
'TailNum': 'str',
'Distance': 'float32',
'TaxiIn' : 'float32',
'TaxiOut': 'float32',
'ArrTime': 'float32',
'AirTime': 'float32',
'DepTime':'float32',
'CarrierDelay': 'float32',
'WeatherDelay': 'float32',
'NASDelay':'float32',
'SecurityDelay':'float32',
'LateAircraftDelay':'float32'}
LABEL_COLUMN = "IsArrDelayed"
os.makedirs(os.path.join(ARTIFACT_PATH, PARTS_DEST_FOLDER), exist_ok=True)
###Output
_____no_output_____
###Markdown
load function
###Code
func = mlrun.new_function(command=FUNCTION_PY, image=BASE_IMAGE, kind=JOB_KIND)
func.apply(mlrun.mount_v3io())
# create and run the task
arc_to_parq_task = mlrun.NewTask(
TASK_NAME,
handler=FUNCTION,
params={
'archive_url': USE_ARCHIVE,
        'name' : LOCAL_FILE_NAME,
        'key' : ARTIFACT_STORE_KEY,
        'dataset' : PARTS_DEST_FOLDER,
        'part_cols' : PARTS_COLS,
'encoding' : ENCODING,
'inc_cols' : INC_COLS,
'dtype' : DTYPES_COLS},
artifact_path=ARTIFACT_PATH)
# run
run = func.run(arc_to_parq_task)
###Output
[mlrun] 2020-02-02 19:58:43,549 starting run user-task-arc-to-part-parq uid=b6c886203bee44b086753b496299cad2 -> http://mlrun-api:8080
[mlrun] 2020-02-02 19:58:43,628 Job is running in the background, pod: user-task-arc-to-part-parq-mvpxm
[mlrun] 2020-02-02 19:58:52,598 destination file does not exist, downloading
[mlrun] 2020-02-02 19:58:53,165 saved table to /User/mlrun/airlines/dataset-small/partitions
[mlrun] 2020-02-02 19:58:53,187 log artifact airlines at /User/mlrun/airlines/dataset-small/partitions, size: None, db: Y
[mlrun] 2020-02-02 19:58:53,201 run executed, status=completed
final state: succeeded
###Markdown
tests a partitioned parquet table
###Code
import os
import pandas as pd
import pyarrow.parquet as pq
dataset = pq.ParquetDataset(os.path.join(ARTIFACT_PATH, PARTS_DEST_FOLDER))
df = dataset.read().to_pandas()
df.set_index(PARTS_COLS, inplace=True)
df.head()
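# An illustrative alternative (assumes the pyarrow engine is installed): pandas can read the
# partitioned dataset directory directly; the partition columns come back as regular columns.
df_alt = pd.read_parquet(os.path.join(ARTIFACT_PATH, PARTS_DEST_FOLDER))
df_alt.head()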
if USE_ARCHIVE == ARCHIVE:
assert df.shape==FILE_SHAPE
if USE_ARCHIVE == ARCHIVE_SMALL:
assert df.shape==SMALL_FILE_SHAPE, f"{df.shape}"
df.shape
###Output
_____no_output_____
###Markdown
cleanup
###Code
# import shutil
# shutil.rmtree(ARTIFACT_PATH)
###Output
_____no_output_____ |
Notebooks/Step1-Getting_Started.ipynb | ###Markdown
TASK 3 : NLP Submission To MIDAS LAB
File Name : Step1-Getting Started
@Author : Vansh Gupta
Objective : Understanding data and turning it into a functional format
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from collections import Counter
df = pd.read_excel("../Data/flipkart_com-ecommerce_sample.xlsx")
df.head()
###Output
_____no_output_____
###Markdown
Columns that don't add any value to our model are removed here.
###Code
col_to_drop = [
"uniq_id",
"crawl_timestamp",
"product_url",
"product_name",
"pid",
"retail_price",
"discounted_price",
"image",
"is_FK_Advantage_product",
"product_rating",
"overall_rating",
"brand",
"product_specifications",
]
df.drop(col_to_drop, axis=1, inplace=True)
df.describe()
df.info()
df.dropna(inplace=True)
df.head()
###Output
_____no_output_____
###Markdown
Let's take a closer look at the product_category_tree.
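As a quick illustration of the string format the helper functions in the next cell assume (the sample string here is made up, not taken from the dataset):

```python
# a made-up category tree string in the same general format as product_category_tree
sample = '["Home Decor >> Wall Decor >> Paintings"]'
print(sample.count(">>") + 1)  # depth, as computed by calculate_depth
print(sample.replace("[", "").replace("]", "").replace('"', "").split(">>")[0])  # level-1 category
```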
###Code
"""
Function_name: calculate_depth
Input: None
Output: list of depth and count of category tree
Logic: calculates count, i.e. the number of occurrences of ">>"; tree depth is count+1
"""
def calculate_depth():
category_count = []
for tree in df.product_category_tree.values:
count = tree.count(">>")
category_count.append(count + 1)
c = Counter(category_count)
return c.most_common()
print(calculate_depth())
"""
Function_name: split_category_tree
Input: product_category_tree,level
Output: product_category at given level or None
Logic: splits the string by ">>" and return category at given level
"""
def split_category_tree(label_tree, level):
if ">>" in label_tree:
if level < len(label_tree.split(">>")):
return (
label_tree.replace("[", "")
.replace(",", "")
.replace('"', "")
.replace("]", "")
.split(">>")[level - 1]
)
else:
return None
else:
if level == 1:
return (
label_tree.replace("[", "")
.replace(",", "")
.replace('"', "")
.replace("]", "")
)
else:
return None
df["cat_level1"] = df.product_category_tree.apply(
split_category_tree, args=(1,)
)
df["cat_level2"] = df.product_category_tree.apply(
split_category_tree, args=(2,)
)
df["cat_level3"] = df.product_category_tree.apply(
split_category_tree, args=(3,)
)
df["cat_level4"] = df.product_category_tree.apply(
split_category_tree, args=(4,)
)
df["cat_level5"] = df.product_category_tree.apply(
split_category_tree, args=(5,)
)
df["cat_level6"] = df.product_category_tree.apply(
split_category_tree, args=(6,)
)
df.head()
df.describe()
df.to_csv("../Data/Processed_data.csv", index=False)
###Output
_____no_output_____ |
Colab_Notebooks/SMILES_ensemble.ipynb | ###Markdown
Data processing
*We have our image datasets that we use for the CNN. With each one of these batches of images we have a corresponding csv with the image filename, smiles string, and target variable. Because some smiles strings' images didn't make it into the folder, we'll get our datasets for the ensembling task by taking our image datasets and selecting only the smiles strings and target variables for the samples that we got images for (and we can do this by taking the samples from the csv files stored with the images).*
***Loading in the data from the image datasets we made***
###Code
# label csv's removed from image folders so we can traverse through those
# and select the samples we have images for
chris_train_path = '/content/drive/MyDrive/2040_final_project/Molecular Exploration/Data/label_mapping_train.csv'
chris_val_path = '/content/drive/MyDrive/2040_final_project/Molecular Exploration/Data/label_mapping_val.csv'
chris_test_path = '/content/drive/MyDrive/2040_final_project/Molecular Exploration/Data/label_mapping_test.csv'
akshay_train_path = '/content/drive/MyDrive/Molecular Exploration/Data/label_mapping_train.csv'
akshay_val_path = '/content/drive/MyDrive/Molecular Exploration/Data/label_mapping_val.csv'
akshay_test_path = '/content/drive/MyDrive/Molecular Exploration/Data/label_mapping_test.csv'
train_labels = pd.read_csv(akshay_train_path, index_col='Unnamed: 0')
val_labels = pd.read_csv(akshay_val_path, index_col='Unnamed: 0')
test_labels = pd.read_csv(akshay_test_path, index_col = 'Unnamed: 0')
train_labels
# Checking for duplicates in the filename columns of the three datasets --> none found
print(sum([int(b) for b in train_labels.duplicated(subset=['file'])]))
print(sum([int(b) for b in val_labels.duplicated(subset=['file'])]))
print(sum([int(b) for b in test_labels.duplicated(subset=['file'])]))
import os
import tensorflow as tf
# (Chris's paths)
# train_imgs_path = "/content/drive/MyDrive/2040_final_project/Molecular Exploration/Data/SMILES_train_imgs"
# val_imgs_path = "/content/drive/MyDrive/2040_final_project/Molecular Exploration/Data/SMILES_val_imgs"
# test_imgs_path = "/content/drive/MyDrive/2040_final_project/Molecular Exploration/Data/SMILES_test_imgs"
# (Akshay's paths)
train_imgs_path = "/content/drive/MyDrive/Molecular Exploration/Data/SMILES_train_imgs"
val_imgs_path = "/content/drive/MyDrive/Molecular Exploration/Data/SMILES_val_imgs"
test_imgs_path = "/content/drive/MyDrive/Molecular Exploration/Data/SMILES_test_imgs"
# ======= TRAINING DATA =======
# sequence data
x_train_seq = pd.Series()
# image data --> and we really just need the file name here?
x_train_img = pd.Series()
# need to accumulate the targets too because the indexing will be off
# due to the missing images so we can't just select the subset with a big .iloc
y_train = pd.Series()
for filename in os.listdir(train_imgs_path):
if filename.endswith(".png"):
# 1) grab filename row in ____labels dataframe
row = train_labels[train_labels['file'] == filename]
# 2) add the smiles to the x series for the sequence model
x_train_seq = x_train_seq.append(pd.Series(row['smiles']), ignore_index=True)
# 3) add the filename to the x series for the CNN
x_train_img = x_train_img.append(pd.Series(row['file']), ignore_index=True)
# 4) add the target to the y series for the overalll model
y_train = y_train.append(pd.Series(row['target']))
else:
continue
print(len(x_train_img))
print(len(x_train_seq))
print(len(y_train))
# ======= VALIDATION DATA =======
# sequence data
x_val_seq = pd.Series()
# image data --> and we really just need the file name here?
x_val_img = pd.Series()
# need to accumulate the targets too because the indexing will be off
# due to the missing images so we can't just select the subset with a big .iloc
y_val = pd.Series()
for filename in os.listdir(val_imgs_path):
if filename.endswith(".png"):
# 1) grab filename row in ____labels dataframe
row = val_labels[val_labels['file'] == filename]
# 2) add the smiles to the x series for the sequence model
x_val_seq = x_val_seq.append(pd.Series(row['smiles']), ignore_index=True)
# 3) add the filename to the x series for the CNN
x_val_img = x_val_img.append(pd.Series(row['file']), ignore_index=True)
# 4) add the target to the y series for the overalll model
y_val = y_val.append(pd.Series(row['target']))
else:
continue
print(len(x_val_img))
print(len(x_val_seq))
print(len(y_val))
# ======= TESTING DATA =======
# sequence data
x_test_seq = pd.Series()
# image data --> and we really just need the file name here?
x_test_img = pd.Series()
# need to accumulate the targets too because the indexing will be off
# due to the missing images so we can't just select the subset with a big .iloc
y_test = pd.Series()
for filename in os.listdir(test_imgs_path):
if filename.endswith(".png"):
# 1) grab filename row in ____labels dataframe
row = test_labels[test_labels['file'] == filename]
# 2) add the smiles to the x series for the sequence model
x_test_seq = x_test_seq.append(pd.Series(row['smiles']), ignore_index=True)
# 3) add the filename to the x series for the CNN
x_test_img = x_test_img.append(pd.Series(row['file']), ignore_index=True)
# 4) add the target to the y series for the overalll model
y_test = y_test.append(pd.Series(row['target']))
else:
continue
print(len(x_test_img))
print(len(x_test_seq))
print(len(y_test))
###Output
268
268
268
###Markdown
Need to tokenize all of the sequence data
*... and of course we need to use the same tokenizer our sequence model was trained with.*
***In order to use the exact same preprocessing procedure as before, we'll tape together our smiles and target series into a dataframe with those as columns.***
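As a quick illustration of what that shared tokenizer produces (run after the tokenizer cell below; `'CCO'` is just an example SMILES string, not one of our samples):

```python
example = tokenizer('CCO', truncation=True, padding=True)
print(example['input_ids'])
print(tokenizer.convert_ids_to_tokens(example['input_ids']))
```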
###Code
train_df = pd.DataFrame(columns=['smiles','target'])
train_df['smiles'] = x_train_seq
train_df['target'] = y_train
val_df = pd.DataFrame(columns=['smiles','target'])
val_df['smiles'] = x_val_seq
val_df['target'] = y_val
test_df = pd.DataFrame(columns=['smiles','target'])
test_df['smiles'] = x_test_seq
test_df['target'] = y_test
print(train_df.head())
print(len(train_df))
print(val_df.head())
print(len(val_df))
print(test_df.head())
print(len(test_df))
# copying code from our sequence model notebook...
def read_molnet_df(df):
texts = []
labels = []
for index, row in df.iterrows():
texts.append(row['smiles'])
labels.append(row['target'])
return texts, labels
# train_texts, train_labels = read_molnet_df(train_srp53)
train_texts, train_labels = read_molnet_df(train_df)
val_texts, val_labels = read_molnet_df(val_df)
test_texts, test_labels = read_molnet_df(test_df)
!pip install transformers
from transformers import RobertaTokenizerFast, RobertaTokenizer, AutoTokenizer
tokenizer = RobertaTokenizer.from_pretrained('seyonec/PubChem10M_SMILES_BPE_450k')
train_encodings = tokenizer(train_texts, truncation=True, padding=True)
val_encodings = tokenizer(val_texts, truncation=True, padding=True)
test_encodings = tokenizer(test_texts, truncation=True, padding=True)
#storing the encodings before manual padding
tokenizer_before = RobertaTokenizer.from_pretrained('seyonec/PubChem10M_SMILES_BPE_450k')
train_encodings_before = tokenizer(train_texts, truncation=True, padding=True)
val_encodings_before = tokenizer(val_texts, truncation=True, padding=True)
test_encodings_before = tokenizer(test_texts, truncation=True, padding=True)
# manually adding padding to the val and test sets
max_len = len(train_encodings['input_ids'][0])
val_current_len = len(val_encodings['input_ids'][0])
val_pad_len = max_len - val_current_len
val_padding = [1]*val_pad_len
val_encodings['input_ids'] = [pad_list + val_padding for pad_list in val_encodings['input_ids']]
test_current_len = len(test_encodings['input_ids'][0])
test_pad_len = max_len - test_current_len
test_padding = [1]*test_pad_len
test_encodings['input_ids'] = [pad_list + test_padding for pad_list in test_encodings['input_ids']]
print(len(train_encodings['input_ids'][0]))
print(len(val_encodings['input_ids'][0]))
print(len(test_encodings['input_ids'][0]))
import tensorflow as tf
class MolNetDataset():
def __init__(self, encodings, labels):
self.encodings = encodings
self.labels = labels
def __getitem__(self, idx):
        item = {key: tf.convert_to_tensor(val[idx]) for key, val in self.encodings.items()}
        item['labels'] = tf.convert_to_tensor(self.labels[idx])
return item
def __len__(self):
return len(self.labels)
train_dataset_tf = MolNetDataset(train_encodings, train_labels)
val_dataset_tf = MolNetDataset(val_encodings, val_labels)
test_dataset_tf = MolNetDataset(test_encodings, test_labels)
# Store the padded encodings in their own objects and abandoning the attention_mask
train_padded = train_encodings['input_ids']
val_padded = val_encodings['input_ids']
test_padded = test_encodings['input_ids']
padded_x_train = np.expand_dims(np.asarray(train_padded),-1)
padded_x_val = np.expand_dims(np.asarray(val_padded),-1)
padded_x_test = np.expand_dims(np.asarray(test_padded),-1)
# ... and convert to tensor
tf_x_train = tf.convert_to_tensor(padded_x_train, dtype='float32')
tf_x_val = tf.convert_to_tensor(padded_x_val, dtype='float32')
tf_x_test = tf.convert_to_tensor(padded_x_test, dtype='float32')
# ... and need to create the y_test and y_train
y_train = np.asarray(train_labels)
y_val = np.asarray(val_labels)
y_test = np.asarray(test_labels)
# preprocessing the datasets without manual padding
train_dataset_tf_before = MolNetDataset(train_encodings_before, train_labels)
val_dataset_tf_before = MolNetDataset(val_encodings_before, val_labels)
test_dataset_tf_before = MolNetDataset(test_encodings_before, test_labels)
# Store the padded encodings in their own objects and abandoning the attention_mask
train_padded_before = train_encodings_before['input_ids']
val_padded_before = val_encodings_before['input_ids']
test_padded_before = test_encodings_before['input_ids']
padded_x_train_before = np.expand_dims(np.asarray(train_padded_before),-1)
padded_x_val_before = np.expand_dims(np.asarray(val_padded_before),-1)
padded_x_test_before = np.expand_dims(np.asarray(test_padded_before),-1)
tf_x_train_before = tf.convert_to_tensor(padded_x_train_before, dtype='float32')
tf_x_val_before = tf.convert_to_tensor(padded_x_val_before, dtype='float32')
tf_x_test_before = tf.convert_to_tensor(padded_x_test_before, dtype='float32')
# ... and need to create the y_test and y_train
y_train_before = np.asarray(train_labels)
y_val_before = np.asarray(val_labels)
y_test_before = np.asarray(test_labels)
# checking the shapes of the without manual padding data
print(tf_x_train_before.shape)
print(tf_x_val_before.shape)
print(tf_x_test_before.shape)
###Output
(5792, 282, 1)
(1402, 212, 1)
(268, 117, 1)
###Markdown
Storing the images in memory
*traversing through the set of images and casting each one to a numpy array*
###Code
# save path to access the saved loaded images
# chris path
# SAVE_PATH = '/content/drive/MyDrive/2040_final_project/Molecular Exploration/Data/'
# akshay path
SAVE_PATH = '/content/drive/MyDrive/Molecular Exploration/Data/'
# from tqdm import tqdm
# import cv2
# x_train_image_values = []
# image_path = '/content/drive/MyDrive/2040_final_project/Molecular Exploration/Data/SMILES_train_imgs/'
# for file in tqdm(x_train_img):
# image = cv2.imread(image_path+file)
# x_train_image_values.append(image)
# x_train_image_values = np.array(x_train_image_values)
# print(x_train_image_values.shape)
# ... and save this huge numpy array to file (we'll save before casting to a tensor
# because it's easy to save from numpy)
# SAVE_PATH = '/content/drive/MyDrive/2040_final_project/Molecular Exploration/Data/'
# np.save(file=SAVE_PATH+'Train_image_numpy_data.npy', arr=x_train_image_values)
# x_val_image_values = []
# image_path = '/content/drive/MyDrive/2040_final_project/Molecular Exploration/Data/SMILES_val_imgs/'
# for file in tqdm(x_val_img):
# image = cv2.imread(image_path+file)
# x_val_image_values.append(image)
# x_val_image_values = np.array(x_val_image_values)
# print(x_val_image_values.shape)
# # save to file
# np.save(file=SAVE_PATH+'Validation_image_numpy_data.npy', arr=x_val_image_values)
# x_test_image_values = []
# image_path = '/content/drive/MyDrive/2040_final_project/Molecular Exploration/Data/SMILES_test_imgs/'
# for file in tqdm(x_test_img):
# image = cv2.imread(image_path+file)
# x_test_image_values.append(image)
# x_test_image_values = np.array(x_test_image_values)
# print(x_test_image_values.shape)
# # save to file
# np.save(file=SAVE_PATH+'Test_image_numpy_data.npy', arr=x_test_image_values)
## (optionally) LOAD ARRAYS FROM FILE
x_train_image_values = np.load(SAVE_PATH+'Train_image_numpy_data.npy')
x_val_image_values = np.load(SAVE_PATH+'Validation_image_numpy_data.npy')
x_test_image_values = np.load(SAVE_PATH+'Test_image_numpy_data.npy')
# CAST ALL ARRAYS TO TENSORS
x_train_images = tf.convert_to_tensor(x_train_image_values)
x_val_images = tf.convert_to_tensor(x_val_image_values)
x_test_images = tf.convert_to_tensor(x_test_image_values)
###Output
_____no_output_____
###Markdown
Creating image data generators
*By this point the data is stored as follows:*
- `tf_x_[train/val/test]` : tokenized SMILES sequence data (in tf.tensor)
- `x_[train/val/test]_img` : image file names (in pd.Series)
- `y_[train/val/test]` : labels (in numpy array)
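A quick consistency check on those objects might look like this (illustrative; the exact shapes depend on the padding lengths above):

```python
for name, seqs, imgs, labels in [("train", tf_x_train, x_train_img, y_train),
                                 ("val", tf_x_val, x_val_img, y_val),
                                 ("test", tf_x_test, x_test_img, y_test)]:
    print(name, seqs.shape, len(imgs), labels.shape)
```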
###Code
# wandb.init(project="SMILES_CNN")
###Output
_____no_output_____
###Markdown
***Setting up the image data generators***
###Code
# need the image size so we'll open a single image and get it from that
from PIL import Image
# Open the image form working directory
# chris path
# image = Image.open('/content/drive/MyDrive/2040_final_project/Molecular Exploration/Data/SMILES_train_imgs/0.png')
# akshay path
image = Image.open('/content/drive/MyDrive/Molecular Exploration/Data/SMILES_train_imgs/0.png')
# summarize some details about the image
IMG_SIZE = image.size
BATCH_SIZE = 64
# Data generators for the image data
data_gen = tf.keras.preprocessing.image.ImageDataGenerator()
# reloading the dataframes for the image data generators
# chris path
# train_labels_img_df = pd.read_csv('/content/drive/MyDrive/2040_final_project/Molecular Exploration/Data/label_mapping_train.csv', index_col='Unnamed: 0')
# val_labels_img_df = pd.read_csv('/content/drive/MyDrive/2040_final_project/Molecular Exploration/Data/label_mapping_val.csv', index_col='Unnamed: 0')
# test_labels_img_df = pd.read_csv('/content/drive/MyDrive/2040_final_project/Molecular Exploration/Data/label_mapping_test.csv', index_col = 'Unnamed: 0')
# akshay path
train_labels_img_df = pd.read_csv('/content/drive/MyDrive/Molecular Exploration/Data/label_mapping_train.csv', index_col='Unnamed: 0')
val_labels_img_df = pd.read_csv('/content/drive/MyDrive/Molecular Exploration/Data/label_mapping_val.csv', index_col='Unnamed: 0')
test_labels_img_df = pd.read_csv('/content/drive/MyDrive/Molecular Exploration/Data/label_mapping_test.csv', index_col = 'Unnamed: 0')
train_labels_img_df['target'] = train_labels_img_df['target'].astype(str)
train_generator = data_gen.flow_from_dataframe(dataframe=train_labels_img_df,
directory=train_imgs_path,
x_col='file',
y_col='target',
target_size=IMG_SIZE,
batch_size=BATCH_SIZE,
shuffle=False,
class_mode='categorical')
val_labels_img_df['target'] = val_labels_img_df['target'].astype(str)
valid_generator = data_gen.flow_from_dataframe(dataframe=val_labels_img_df,
directory=val_imgs_path,
x_col='file',
y_col='target',
target_size=IMG_SIZE,
batch_size=BATCH_SIZE,
shuffle=False,
class_mode='categorical')
test_labels_img_df['target']=test_labels_img_df['target'].astype(str)
test_generator = data_gen.flow_from_dataframe(dataframe=test_labels_img_df,
directory=test_imgs_path,
x_col='file',
y_col='target',
target_size=IMG_SIZE,
batch_size=BATCH_SIZE,
shuffle=False,
class_mode='categorical')
###Output
Found 5792 validated image filenames belonging to 2 classes.
Found 1402 validated image filenames belonging to 2 classes.
Found 268 validated image filenames belonging to 2 classes.
###Markdown
***Getting tokenized sequences into data generator -- (actually never mind)***
###Code
seq_dset = tf.data.Dataset.from_tensor_slices(tf_x_train)
seq_dset
tf_x_train.shape
# wandb.init('ensemble')
###Output
_____no_output_____
###Markdown
Loading in the models and creating the combined architecture
###Code
# paths to the models we're combining
# chris paths
# seq_model_path = '/content/drive/MyDrive/2040_final_project/Molecular Exploration/model_checkpoints/ChemBERTa_DeepChem/run_3/sequence_checkpoint-08-0.69.hdf5'
# img_model_path = '/content/drive/MyDrive/2040_final_project/Molecular Exploration/model_checkpoints/SMILES_CNN.hdf5'
# akshay path
seq_model_path = '/content/drive/MyDrive/Molecular Exploration/model_checkpoints/ChemBERTa_DeepChem/run_3/sequence_checkpoint-08-0.69.hdf5'
img_model_path = '/content/drive/MyDrive/Molecular Exploration/model_checkpoints/SMILES_CNN.hdf5'
tf.keras.backend.clear_session()
seq_model = tf.keras.models.load_model(seq_model_path)
img_model = tf.keras.models.load_model(img_model_path)
models = [seq_model.output, img_model.output]
seq_model._name = 'sequence_mod'
img_model._name = 'image_mod'
###Output
_____no_output_____
###Markdown
***Defining the input layers***
***Removing the last two layers from both networks***
###Code
from keras.models import Model
seq_model_embedding = Model(inputs=seq_model.input, outputs=seq_model.layers[-2].output)
img_model_embedding = Model(inputs=img_model.input, outputs=img_model.layers[-2].output)
seq_model_embedding.summary()
print(seq_model_embedding.layers[-1])
print(seq_model.layers[-2])
print(img_model_embedding.layers[-1])
print(img_model.layers[-2])
###Output
<tensorflow.python.keras.layers.pooling.GlobalAveragePooling2D object at 0x7f2727aa32d0>
<tensorflow.python.keras.layers.pooling.GlobalAveragePooling2D object at 0x7f2727aa32d0>
###Markdown
***Building the model***
###Code
# checking output dimension of the two base models
print(seq_model_embedding.layers[-1].output_shape)
print(img_model_embedding.layers[-1].output_shape)
from tensorflow import keras
from keras.layers import *
# want to shrink the image output down from 1500 to 64, so will pass through a single dense layer
x = Dense(64)(img_model_embedding.output)
img_model_shrunk = Model(inputs=img_model_embedding.input, outputs=x)
img_model_shrunk.output_shape
# defining input layers for the two submodels
sequence_input = tf.keras.layers.Input(shape=(tf_x_train.shape[1], tf_x_train.shape[2], 1))
sequence_input._name = 'seq_input'
image_input = tf.keras.layers.Input(shape=(x_train_images.shape[1], x_train_images.shape[2], 3))
image_input._name = 'img_input'
# defining composite model
seq_model_base_output = seq_model_embedding(sequence_input)
img_model_base_output = img_model_shrunk(image_input)
ensemble_layer = keras.layers.Concatenate()([seq_model_base_output, img_model_base_output])
intermed_layer = Dense(32, activation='relu', kernel_regularizer='l2')(ensemble_layer)
output = Dense(1, activation='sigmoid')(intermed_layer)
ensemble_model = keras.Model(inputs = [sequence_input, image_input], outputs = output)
ensemble_model.layers[0]._name = 'sequence_input'
ensemble_model.layers[1]._name = 'image_input'
keras.utils.plot_model(ensemble_model, show_shapes=True)
###Output
_____no_output_____
###Markdown
***Feeding the data in, in parallel, with a dictionary passed to the fit method:***
###Code
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau
# save_path = '/content/drive/MyDrive/2040_final_project/Molecular Exploration/model_checkpoints/'
# chris path
# save_path = '/content/drive/MyDrive/DATA_2040/Molecular Exploration/model_checkpoints/ChemBERTa_DeepChem/'
# akshay path
save_path = '/content/drive/MyDrive/Molecular Exploration/model_checkpoints/ensemble/'
save_name = "ensemble_checkpoint-{epoch:02d}-{val_auc:.2f}.hdf5"
checkpoint = ModelCheckpoint(save_path+save_name,
monitor = "val_auc",
mode = "max",
save_best_only = True,
verbose = 1,
save_weights_only = True)
earlystop = EarlyStopping(monitor = 'val_auc',
min_delta = 0.001,
patience = 15,
verbose = 1,
restore_best_weights = True)
reduce_lr = ReduceLROnPlateau(monitor = 'val_auc',
factor = 0.8,
patience = 4,
verbose = 1,
min_delta = 0.0001,
min_lr = 0.00000001)
callbacks = [checkpoint, reduce_lr, earlystop]
auc_metric = tf.keras.metrics.AUC()
opt = tf.keras.optimizers.Adam(learning_rate=0.005)
# class_weight = {0: 1, 1: 15}
ensemble_model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy',auc_metric])
ensemble_history = ensemble_model.fit(
x={"sequence_input": tf_x_train, "image_input": x_train_images},
y=y_train,
validation_data=([tf_x_val, x_val_images],y_val),
steps_per_epoch = 5,
validation_steps = 5,
# class_weight=class_weight,
epochs=10,
batch_size=32,
callbacks=callbacks
)
# wandb.finish()
# save model history using pickle
import pickle
file = open('/content/drive/MyDrive/Molecular Exploration/model_checkpoints/ensemble/results/ensemble_history.save', 'wb')
pickle.dump(ensemble_history, file)
file.close()
###Output
_____no_output_____
###Markdown
Example weight loading
###Code
# ensemble_layer_example = keras.layers.Concatenate()([seq_model_base_output, img_model_base_output])
# intermed_layer_example = Dense(32, activation='relu', kernel_regularizer='l2')(ensemble_layer_example)
# output_example = Dense(1, activation='sigmoid')(intermed_layer_example)
# ensemble_model_example = keras.Model(inputs = [sequence_input, image_input], outputs = output_example)
# ensemble_model_example.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy',auc_metric])
# ensemble_model_example.evaluate({"sequence_input": tf_x_test, "image_input": x_test_images}, y_test)
# ensemble_model_example.load_weights('/content/drive/MyDrive/Molecular Exploration/model_checkpoints/ensemble/sequence_checkpoint-02-0.52.hdf5')
# ensemble_model_example.evaluate({"sequence_input": tf_x_test, "image_input": x_test_images}, y_test)
###Output
_____no_output_____
###Markdown
Checking sequence model
###Code
from keras.models import *
def rnn_bert_model():
model = Sequential()
model.add(Masking(mask_value=0.0))
model.add(Bidirectional(LSTM(256,
return_sequences = True),
# kernel_initializer=GlorotNormal,
input_shape=(tf_x_train.shape[1],tf_x_train.shape[2])))
model.add(Dropout(0.25))
model.add(Bidirectional(LSTM(256, return_sequences = True)))
model.add(Dropout(0.25))
model.add(Bidirectional(LSTM(256, return_sequences = True)))
model.add(Conv1D(64,
kernel_size=3,
padding='valid',
kernel_initializer='glorot_uniform'))
model.add(GlobalAveragePooling1D())
model.add(Dense(1, activation='sigmoid'))
# common metric for the binary classification problem: AUC
auc_metric = tf.keras.metrics.AUC()
opt = tf.keras.optimizers.Adam(learning_rate=0.01)
model.compile(loss='binary_crossentropy',
optimizer=opt,
metrics=['accuracy', auc_metric])
model.build(input_shape=(None,
tf_x_train.shape[1],
tf_x_train.shape[2]))
print(model.summary())
return model
model = rnn_bert_model()
earlystop = EarlyStopping(monitor = 'val_auc',
min_delta = 0.001,
patience = 15,
verbose = 1,
restore_best_weights = True)
reduce_lr = ReduceLROnPlateau(monitor = 'val_auc',
factor = 0.8,
patience = 4,
verbose = 1,
min_delta = 0.0001,
min_lr = 0.00000001)
callbacks = [earlystop, reduce_lr]
model.fit(tf_x_train_before,
y_train,
validation_data=(tf_x_val_before, y_val),
epochs=100,
batch_size=128,
# class_weight={0: negative, 1: positive},
callbacks=[reduce_lr]
)
###Output
_____no_output_____ |
03 Data Visualization/03 Bar Charts & Heatmaps.ipynb | ###Markdown
Bar Charts & Heatmaps
------
Tutorial
---
###Code
from google.colab import drive
drive.mount('/content/drive')
# Importing libraries
import pandas as pd
pd.plotting.register_matplotlib_converters()
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
# Loading Data
flight_data = pd.read_csv("/content/drive/MyDrive/Colab Notebooks/Kaggle_Courses/03 Data Visualization/flight_delays.csv", index_col="Month")
# Print the data
flight_data
###Output
_____no_output_____
###Markdown
Bar chart
###Code
# Set the width and height of the figure
plt.figure(figsize=(10,6))
# Bar chart showing average arrival delay for Spirit Airlines flights by month
sns.barplot(x=flight_data.index, y=flight_data['NK'])
plt.title("Average Arrival Delay for Spirit Airlines Flights, by Month")
plt.ylabel("Arrival delay (in minutes)")
plt.show()
###Output
_____no_output_____
###Markdown
It has three main components:
- `sns.barplot` - This tells the notebook that we want to create a bar chart.
- `x=flight_data.index` - This determines what to use on the horizontal axis. In this case, we have selected the column that **_index_**es the rows (in this case, the column containing the months).
- `y=flight_data['NK']` - This sets the column in the data that will be used to determine the height of each bar. In this case, we select the `'NK'` column.

> **Important Note**: You must select the indexing column with `flight_data.index`, and it is not possible to use `flight_data['Month']` (_which will return an error_). This is because when we loaded the dataset, the `"Month"` column was used to index the rows. **We always have to use this special notation to select the indexing column.**
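If you prefer to refer to the month column by name, one workaround (a sketch, not part of the original tutorial) is to reset the index so that `'Month'` becomes an ordinary column:

```python
sns.barplot(x='Month', y='NK', data=flight_data.reset_index())
```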
###Code
# Set the width and height of the figure
plt.figure(figsize=(14,7))
# Add title
plt.title("Average Arrival Delay for Each Airline, by Month")
# Heatmap showing average arrival delay for each airline by month
sns.heatmap(data=flight_data, annot=True)
# Add label for horizontal axis
plt.xlabel("Airline")
###Output
_____no_output_____
###Markdown
This code has three main components:
- `sns.heatmap` - This tells the notebook that we want to create a heatmap.
- `data=flight_data` - This tells the notebook to use all of the entries in `flight_data` to create the heatmap.
- `annot=True` - This ensures that the values for each cell appear on the chart. (_Leaving this out removes the numbers from each of the cells!_)

Exercise
---
###Code
data = pd.read_csv('/content/drive/MyDrive/Colab Notebooks/Kaggle_Courses/03 Data Visualization/ign_scores.csv', index_col = 'Platform')
data
#What is the highest average score received by PC games, for any platform?
high_score = 7.759930
# On the Playstation Vita platform, which genre has the lowest average score?
worst_genre = 'Simulation'
# Create a bar chart that shows the average score for racing games, for each platform.
# Bar chart showing average score for racing games by platform
plt.figure(figsize = (15,5))
plt.xlabel('Platform')
plt.ylabel('Average Score')
plt.title('Average Score for racing games')
sns.barplot(x = data.index, y = data['Racing'])
###Output
_____no_output_____
###Markdown
Do you expect a racing game for the Wii platform to receive a high rating? If not, what gaming platform seems to be the best alternative?

We should not expect a racing game for the Wii platform to receive a high rating. In fact, on average, racing games for Wii score lower than any other platform. Xbox One seems to be the best alternative, since it has the highest average ratings.
###Code
# Set the width and height of the figure
plt.figure(figsize=(15,10))
plt.title("Average score by genre and platform.")
# Heatmap showing average arrival delay for each airline by month
sns.heatmap(data=data, annot=True)
###Output
_____no_output_____ |
decorator_demo.ipynb | ###Markdown
Demonstration of decorator with `jupyter_display = True`
###Code
@handcalc(jupyter_display = True)
def NBCC2015LC(DL: float = 0, SDL: float = 0, SL: float = 0, LL: float = 0, WL: float= 0, EL: float = 0):
LC1 = 1.4*DL
LC2a = 1.25*DL + 1.5*LL
LC2b = 1.25*DL + 1.5*LL + 0.5*SL
LC3a = 1.25*DL + 1.5*SL
LC3b = 1.25*DL + 1.5*SL + 0.5*LL
return locals()
myfunc_results = NBCC2015LC(LL=1.5,DL=2,SL=3)
ar = np.array([1, 2.67, 3, 4])
myfunc_results = NBCC2015LC(DL=2, LL=ar, SL = 3.5)
###Output
_____no_output_____
###Markdown
You can use the decorator in a `.py` file filled with decorated functions. When the imported, decorated functions are called in Jupyter, the LaTeX is displayed.
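For reference, a minimal sketch of what such a module might look like (the decorator import path is an assumption based on the handcalcs package, and the function body simply mirrors the one defined above):

```python
# eqn_lib.py -- minimal sketch (assumed layout, not the actual file contents)
from handcalcs.decorator import handcalc  # assumed import location

@handcalc(jupyter_display=True)
def NBCC2015LC(DL: float = 0, SDL: float = 0, SL: float = 0, LL: float = 0, WL: float = 0, EL: float = 0):
    LC1 = 1.4*DL
    LC2a = 1.25*DL + 1.5*LL
    LC2b = 1.25*DL + 1.5*LL + 0.5*SL
    LC3a = 1.25*DL + 1.5*SL
    LC3b = 1.25*DL + 1.5*SL + 0.5*LL
    return locals()
```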
###Code
import eqn_lib
results = eqn_lib.NBCC2015LC(DL=2, LL=ar, SL = 3.5)
results
###Output
_____no_output_____ |
docsrc/jupyter_notebooks/Safari/Safari.ipynb | ###Markdown
Safari
**Setting up the environment**
Initialization of the Ontology editor in Jupyter Notebook
###Code
from cognipy.ontology import Ontology #the ontology processing class
from cognipy.ontology import CQL #SPARQL format tailored for Controlled Natural Language
from cognipy.ontology import encode_string_for_graph_label #complex datatypes encoder for the graph labels in graph visualisation
import textwrap
def graph_attribute_formatter(val):
if isinstance(val,list) or isinstance(val,set):
return " | ".join(list(map(lambda i:encode_string_for_graph_label(graph_attribute_formatter(i)),val)))
elif isinstance(val,dict):
return " | ".join(list(map(lambda i:i[0]+" : "+encode_string_for_graph_label(graph_attribute_formatter(i[1])),val.items())))
else:
return encode_string_for_graph_label(textwrap.fill(str(val),40))
###Output
_____no_output_____
###Markdown
African Wildlife
Loading the editor for the basic "safari" ontology. The ontology is inspired by 'A Semantic Web Primer' by Antoniou, G. and van Harmelen, F., MIT Press, 2003 (http://www.csd.uoc.gr/~hy566/SWbook.pdf), tuned to the OWL/RL+SWRL profile.
Part-1: 'simple hierarchy of beings'. Let's set up the first ontology editor for the general knowledge.
###Code
%%writefile part_01.encnl
Namespace: 'http://cognitum.eu/african_wildlife'.
Comment: 'Let's name our instances'.
Comment: 'Let's specify the hierarchy of beings'.
Comment: 'What is what?'.
Every lion is an animal.
Every giraffe is an animal.
Every animal has a face.
Comment: 'Moreover'.
Every impala is an animal.
Every omnivore is an animal.
Every rock-dassie is an animal.
Every warthog is an animal.
Every carnivore is an animal.
Every herbivore is an animal.
Every elephant is a herbivore.
Every lion is carnivore.
Comment: 'There are also plants there:'.
Every tree is a plant.
Every grass is a plant.
Every palm-tree is a plant.
Every branch is a plant-part.
Every leaf is a plant-part.
Every twig is a plant-part.
Every phloem is a plant-part.
Every root is a plant-part.
Every parsnip is a root.
Every stem is a plant-part.
Every xylem is a plant-part.
Every fruiting-body is a plant-part.
Every berry is a fruiting-body.
Every apple is a fruiting-body.
Comment: 'We cannot use adjectives directly. To specify adjectives we need to transform them into sets that have form of buzzy-words'.
Every tasty-plant is a plant.
Every carnivorous-plant is a plant.
###Output
Overwriting part_01.encnl
###Markdown
The ontology object allows you to draw the materialised graph using several layout algorithms. Let's draw our base ontology.
###Code
onto=Ontology("cnl/file","part_01.encnl",
evaluator = lambda e:eval(e,globals(),locals()),
graph_attribute_formatter = graph_attribute_formatter)
onto.draw_graph(layout='hierarchical')
###Output
_____no_output_____
###Markdown
It is fully compatible with the OWL/RDF format (via OWLAPI), so we can always export it as it is (a minimal export sketch follows below).
Part-2: Disjointness
In the OWL/RL+SWRL profile we deal with the Open World Assumption. It means that we need to explicitly specify all the objects that are different from each other. We cannot assume (like e.g. object-oriented programming languages do) that different names mean different things; therefore we need to state explicitly when two things are different.
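A minimal export sketch, reusing the same `as_rdf()` call that appears at the end of this notebook (the output filename is arbitrary):

```python
rdf_xml = onto.as_rdf()
with open("african_wildlife.owl", "w", encoding="utf-8") as f:
    f.write(rdf_xml)
```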
###Code
%%writefile part_02.encnl
Every-single-thing that is a plant and-or is a plant-part is a herb.
No herb is an animal.
Every carnivore eats nothing-but animals.
Every lion eats nothing-but herbivores.
Every herbivore eats nothing-but herb.
Anything either is a carnivore, is a herbivore or is an omnivore or-something-else.
Anything either is a branch, is a leaf or is a twig or-something-else.
No giraffe is a lion.
Every palm-tree is not a tree.
###Output
Overwriting part_02.encnl
###Markdown
Let's test how it works on an example. The example defines several instances that aim to break the consistency of the ontology in a not-so-obvious way.
###Code
%%writefile test_01.encnl
Leo-01 is a lion.
Leo-01 eats Sophie-01.
Sophie-01 is a giraffe.
Leo-01 eats Mary-01.
Mary-01 is a palm-tree.
###Output
Overwriting test_01.encnl
###Markdown
Let's load all the ontology parts together with the testing part into a newly created object.
###Code
onto=Ontology("cnl/string",'\n'.join(open(fn,"rt").read() for fn in ["part_01.encnl","part_02.encnl","test_01.encnl"]),
evaluator = lambda e:eval(e,globals(),locals()),
graph_attribute_formatter = graph_attribute_formatter)
###Output
_____no_output_____
###Markdown
We will also need some way to describe what is wrong (`printReasoningInfo`) and why (`printWhy`).
###Code
import json
def printReasoningInfo(onto):
info=onto.reasoningInfo()
if info == "":
print('all good!')
else:
print(json.dumps(json.loads(info), indent=2, sort_keys=True))
def printWhy(onto,fact):
info=json.loads(onto.why(fact))
print(json.dumps(info, indent=2, sort_keys=True))
###Output
_____no_output_____
###Markdown
So what is wrong here?
###Code
printReasoningInfo(onto)
###Output
{
"errors": [
{
"content": "Complement classes",
"title": "inconsistency",
"vals": {
"concept": "animal",
"instance": "Mary-01"
}
}
]
}
###Markdown
Let's draw it on a diagram, this time using the force-directed layout.
###Code
onto.draw_graph(layout='force directed')
printWhy(onto,"Mary-01 is a animal?")
###Output
{
"by": [
{
"expr": "Every carnivore eats nothing-but animals."
},
{
"expr": "Every carnivore eats nothing-but animals."
},
{
"expr": "Leo-01 eats Mary-01."
},
{
"by": [
{
"by": [
{
"expr": "Every lion is a carnivore."
},
{
"expr": "Every carnivore eats nothing-but animals."
}
]
},
{
"expr": "Leo-01 is a lion."
}
]
}
],
"concluded": "Mary-01 is an animal."
}
###Markdown
Part-3: Modal expressions and part-whole relationships.
###Code
%%writefile part_03.encnl
Every carnivorous-plant must eat an animal.
Every carnivor must eat an animal.
Every omnivore must eat a plant.
Every branch must be-part-of a tree.
Every plant-part must be-part-of a plant.
Comment: 'Role equivalence and inverted roles'.
X has-part Y if-and-only-if Y is-part-of X.
X eats Y if-and-only-if Y is-eaten-by X.
Comment: 'Role subsumptions'.
If X is-part-of Y then X is-part-of Y.
If X has-part something that has-part Y then X has-part Y.
Comment: 'Complex role subsumptions'.
If X is-part-of something that is-part-of Y then X is-part-of Y.
%%writefile test_02.encnl
Leo-01 is a lion.
Leo-01 eats Sophie-01.
Sophie-01 is a giraffe.
Sophie-01 eats Leaf-01.
Mary-01 is a tree.
Leaf-01 is a leaf and is-part-of Branch-02.
Branch-02 is a branch and is-part-of Branch-01.
Branch-01 is a branch and is-part-of Mary-01.
Branch-03 is a branch.
onto=Ontology("cnl/string",'\n'.join(open(fn,"rt").read() for fn in ["part_01.encnl","part_02.encnl","part_03.encnl","test_02.encnl"]),
evaluator = lambda e:eval(e,globals(),locals()),
graph_attribute_formatter = graph_attribute_formatter)
printReasoningInfo(onto)
onto.draw_graph(layout="force directed")
onto.select_instances_of("eaten by Leo-01")
print(onto.why("Leo-01 is an animal?"))
onto.sparql_query(CQL("""select ?a1 ?a2 {
?a1 rdf:type <animal>.
?a2 rdf:type <animal>.
?a1 <eats> ?a2.
}""","http://cognitum.eu/african_wildlife#"))
print(onto.as_rdf())
###Output
<?xml version = '1.0' encoding = 'UTF-8'?>
<rdf:RDF xmlns="http://cognitum.eu/african_wildlife#" xml:base="http://cognitum.eu/african_wildlife" xmlns:owl="http://www.w3.org/2002/07/owl#" xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:xsd="http://www.w3.org/2001/XMLSchema#">
<owl:Ontology rdf:about="http://cognitum.eu/african_wildlife#" />
<!--
///////////////////////////////////////////////////////////////////////////////////////
//
// Object Properties
//
///////////////////////////////////////////////////////////////////////////////////////
-->
<!-- http://cognitum.eu/african_wildlife#eats -->
<owl:ObjectProperty rdf:about="http://cognitum.eu/african_wildlife#eats">
<owl:equivalentProperty>
<rdf:Description>
<owl:inverseOf rdf:resource="http://cognitum.eu/african_wildlife#isEatenBy" />
</rdf:Description>
</owl:equivalentProperty>
</owl:ObjectProperty>
<!-- http://cognitum.eu/african_wildlife#has -->
<owl:ObjectProperty rdf:about="http://cognitum.eu/african_wildlife#has" />
<!-- http://cognitum.eu/african_wildlife#hasPart -->
<owl:ObjectProperty rdf:about="http://cognitum.eu/african_wildlife#hasPart">
<rdf:type rdf:resource="http://www.w3.org/2002/07/owl#TransitiveProperty" />
<owl:equivalentProperty>
<rdf:Description>
<owl:inverseOf rdf:resource="http://cognitum.eu/african_wildlife#isPartOf" />
</rdf:Description>
</owl:equivalentProperty>
</owl:ObjectProperty>
<!-- http://cognitum.eu/african_wildlife#isEatenBy -->
<owl:ObjectProperty rdf:about="http://cognitum.eu/african_wildlife#isEatenBy" />
<!-- http://cognitum.eu/african_wildlife#isPartOf -->
<owl:ObjectProperty rdf:about="http://cognitum.eu/african_wildlife#isPartOf">
<rdf:type rdf:resource="http://www.w3.org/2002/07/owl#TransitiveProperty" />
<rdfs:subPropertyOf rdf:resource="http://cognitum.eu/african_wildlife#isPartOf" />
</owl:ObjectProperty>
<!--
///////////////////////////////////////////////////////////////////////////////////////
//
// Classes
//
///////////////////////////////////////////////////////////////////////////////////////
-->
<!-- http://cognitum.eu/african_wildlife#animal -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#animal">
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="http://cognitum.eu/african_wildlife#has" />
<owl:someValuesFrom rdf:resource="http://cognitum.eu/african_wildlife#face" />
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://cognitum.eu/african_wildlife#apple -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#apple">
<rdfs:subClassOf rdf:resource="http://cognitum.eu/african_wildlife#fruitingBody" />
</owl:Class>
<!-- http://cognitum.eu/african_wildlife#berry -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#berry">
<rdfs:subClassOf rdf:resource="http://cognitum.eu/african_wildlife#fruitingBody" />
</owl:Class>
<!-- http://cognitum.eu/african_wildlife#branch -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#branch">
<rdfs:subClassOf rdf:resource="http://cognitum.eu/african_wildlife#plantPart" />
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="http://cognitum.eu/african_wildlife#isPartOf" />
<owl:someValuesFrom rdf:resource="http://cognitum.eu/african_wildlife#tree" />
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://cognitum.eu/african_wildlife#carnivor -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#carnivor">
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="http://cognitum.eu/african_wildlife#eats" />
<owl:someValuesFrom rdf:resource="http://cognitum.eu/african_wildlife#animal" />
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://cognitum.eu/african_wildlife#carnivore -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#carnivore">
<rdfs:subClassOf rdf:resource="http://cognitum.eu/african_wildlife#animal" />
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="http://cognitum.eu/african_wildlife#eats" />
<owl:allValuesFrom rdf:resource="http://cognitum.eu/african_wildlife#animal" />
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://cognitum.eu/african_wildlife#carnivorousPlant -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#carnivorousPlant">
<rdfs:subClassOf rdf:resource="http://cognitum.eu/african_wildlife#plant" />
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="http://cognitum.eu/african_wildlife#eats" />
<owl:someValuesFrom rdf:resource="http://cognitum.eu/african_wildlife#animal" />
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://cognitum.eu/african_wildlife#elephant -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#elephant">
<rdfs:subClassOf rdf:resource="http://cognitum.eu/african_wildlife#herbivore" />
</owl:Class>
<!-- http://cognitum.eu/african_wildlife#face -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#face" />
<!-- http://cognitum.eu/african_wildlife#fruitingBody -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#fruitingBody">
<rdfs:subClassOf rdf:resource="http://cognitum.eu/african_wildlife#plantPart" />
</owl:Class>
<!-- http://cognitum.eu/african_wildlife#giraffe -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#giraffe">
<rdfs:subClassOf rdf:resource="http://cognitum.eu/african_wildlife#animal" />
<rdfs:subClassOf>
<owl:Class>
<owl:complementOf rdf:resource="http://cognitum.eu/african_wildlife#lion" />
</owl:Class>
</rdfs:subClassOf>
</owl:Class>
<!-- http://cognitum.eu/african_wildlife#grass -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#grass">
<rdfs:subClassOf rdf:resource="http://cognitum.eu/african_wildlife#plant" />
</owl:Class>
<!-- http://cognitum.eu/african_wildlife#herb -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#herb">
<rdfs:subClassOf>
<owl:Class>
<owl:complementOf rdf:resource="http://cognitum.eu/african_wildlife#animal" />
</owl:Class>
</rdfs:subClassOf>
</owl:Class>
<!-- http://cognitum.eu/african_wildlife#herbivore -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#herbivore">
<rdfs:subClassOf rdf:resource="http://cognitum.eu/african_wildlife#animal" />
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="http://cognitum.eu/african_wildlife#eats" />
<owl:allValuesFrom rdf:resource="http://cognitum.eu/african_wildlife#herb" />
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://cognitum.eu/african_wildlife#impala -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#impala">
<rdfs:subClassOf rdf:resource="http://cognitum.eu/african_wildlife#animal" />
</owl:Class>
<!-- http://cognitum.eu/african_wildlife#leaf -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#leaf">
<rdfs:subClassOf rdf:resource="http://cognitum.eu/african_wildlife#plantPart" />
</owl:Class>
<!-- http://cognitum.eu/african_wildlife#lion -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#lion">
<rdfs:subClassOf rdf:resource="http://cognitum.eu/african_wildlife#animal" />
<rdfs:subClassOf rdf:resource="http://cognitum.eu/african_wildlife#carnivore" />
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="http://cognitum.eu/african_wildlife#eats" />
<owl:allValuesFrom rdf:resource="http://cognitum.eu/african_wildlife#herbivore" />
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://cognitum.eu/african_wildlife#man -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#man">
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="http://cognitum.eu/african_wildlife#has" />
<owl:someValuesFrom rdf:resource="http://cognitum.eu/african_wildlife#train" />
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://cognitum.eu/african_wildlife#omnivore -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#omnivore">
<rdfs:subClassOf rdf:resource="http://cognitum.eu/african_wildlife#animal" />
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="http://cognitum.eu/african_wildlife#eats" />
<owl:someValuesFrom rdf:resource="http://cognitum.eu/african_wildlife#plant" />
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://cognitum.eu/african_wildlife#palmTree -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#palmTree">
<rdfs:subClassOf rdf:resource="http://cognitum.eu/african_wildlife#plant" />
<rdfs:subClassOf>
<owl:Class>
<owl:complementOf rdf:resource="http://cognitum.eu/african_wildlife#tree" />
</owl:Class>
</rdfs:subClassOf>
</owl:Class>
<!-- http://cognitum.eu/african_wildlife#parsnip -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#parsnip">
<rdfs:subClassOf rdf:resource="http://cognitum.eu/african_wildlife#root" />
</owl:Class>
<!-- http://cognitum.eu/african_wildlife#phloem -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#phloem">
<rdfs:subClassOf rdf:resource="http://cognitum.eu/african_wildlife#plantPart" />
</owl:Class>
<!-- http://cognitum.eu/african_wildlife#plant -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#plant" />
<!-- http://cognitum.eu/african_wildlife#plantPart -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#plantPart">
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="http://cognitum.eu/african_wildlife#isPartOf" />
<owl:someValuesFrom rdf:resource="http://cognitum.eu/african_wildlife#plant" />
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://cognitum.eu/african_wildlife#rockDassie -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#rockDassie">
<rdfs:subClassOf rdf:resource="http://cognitum.eu/african_wildlife#animal" />
</owl:Class>
<!-- http://cognitum.eu/african_wildlife#root -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#root">
<rdfs:subClassOf rdf:resource="http://cognitum.eu/african_wildlife#plantPart" />
</owl:Class>
<!-- http://cognitum.eu/african_wildlife#stem -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#stem">
<rdfs:subClassOf rdf:resource="http://cognitum.eu/african_wildlife#plantPart" />
</owl:Class>
<!-- http://cognitum.eu/african_wildlife#tastyPlant -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#tastyPlant">
<rdfs:subClassOf rdf:resource="http://cognitum.eu/african_wildlife#plant" />
</owl:Class>
<!-- http://cognitum.eu/african_wildlife#train -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#train" />
<!-- http://cognitum.eu/african_wildlife#tree -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#tree">
<rdfs:subClassOf rdf:resource="http://cognitum.eu/african_wildlife#plant" />
</owl:Class>
<!-- http://cognitum.eu/african_wildlife#twig -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#twig">
<rdfs:subClassOf rdf:resource="http://cognitum.eu/african_wildlife#plantPart" />
</owl:Class>
<!-- http://cognitum.eu/african_wildlife#warthog -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#warthog">
<rdfs:subClassOf rdf:resource="http://cognitum.eu/african_wildlife#animal" />
</owl:Class>
<!-- http://cognitum.eu/african_wildlife#xylem -->
<owl:Class rdf:about="http://cognitum.eu/african_wildlife#xylem">
<rdfs:subClassOf rdf:resource="http://cognitum.eu/african_wildlife#plantPart" />
</owl:Class>
<!--
///////////////////////////////////////////////////////////////////////////////////////
//
// Individuals
//
///////////////////////////////////////////////////////////////////////////////////////
-->
<!-- http://cognitum.eu/african_wildlife#Branch01 -->
<owl:NamedIndividual rdf:about="http://cognitum.eu/african_wildlife#Branch01">
<rdf:type rdf:resource="http://cognitum.eu/african_wildlife#branch" />
<isPartOf rdf:resource="http://cognitum.eu/african_wildlife#Mary01" />
</owl:NamedIndividual>
<!-- http://cognitum.eu/african_wildlife#Branch02 -->
<owl:NamedIndividual rdf:about="http://cognitum.eu/african_wildlife#Branch02">
<rdf:type rdf:resource="http://cognitum.eu/african_wildlife#branch" />
<isPartOf rdf:resource="http://cognitum.eu/african_wildlife#Branch01" />
</owl:NamedIndividual>
<!-- http://cognitum.eu/african_wildlife#Branch03 -->
<owl:NamedIndividual rdf:about="http://cognitum.eu/african_wildlife#Branch03">
<rdf:type rdf:resource="http://cognitum.eu/african_wildlife#branch" />
</owl:NamedIndividual>
<!-- http://cognitum.eu/african_wildlife#Leaf01 -->
<owl:NamedIndividual rdf:about="http://cognitum.eu/african_wildlife#Leaf01">
<rdf:type rdf:resource="http://cognitum.eu/african_wildlife#leaf" />
<isPartOf rdf:resource="http://cognitum.eu/african_wildlife#Branch02" />
</owl:NamedIndividual>
<!-- http://cognitum.eu/african_wildlife#Leo01 -->
<owl:NamedIndividual rdf:about="http://cognitum.eu/african_wildlife#Leo01">
<rdf:type rdf:resource="http://cognitum.eu/african_wildlife#lion" />
<eats rdf:resource="http://cognitum.eu/african_wildlife#Sophie01" />
</owl:NamedIndividual>
<!-- http://cognitum.eu/african_wildlife#Mary01 -->
<owl:NamedIndividual rdf:about="http://cognitum.eu/african_wildlife#Mary01">
<rdf:type rdf:resource="http://cognitum.eu/african_wildlife#tree" />
</owl:NamedIndividual>
<!-- http://cognitum.eu/african_wildlife#Sophie01 -->
<owl:NamedIndividual rdf:about="http://cognitum.eu/african_wildlife#Sophie01">
<rdf:type rdf:resource="http://cognitum.eu/african_wildlife#giraffe" />
<eats rdf:resource="http://cognitum.eu/african_wildlife#Leaf01" />
</owl:NamedIndividual>
<!--
///////////////////////////////////////////////////////////////////////////////////////
//
// General axioms
//
///////////////////////////////////////////////////////////////////////////////////////
-->
<owl:Class>
<rdfs:subClassOf rdf:resource="http://cognitum.eu/african_wildlife#herb" />
<owl:unionOf rdf:parseType="Collection">
<rdf:Description rdf:about="http://cognitum.eu/african_wildlife#plant" />
<rdf:Description rdf:about="http://cognitum.eu/african_wildlife#plantPart" />
</owl:unionOf>
</owl:Class>
<rdf:Description>
<rdf:type rdf:resource="http://www.w3.org/2002/07/owl#AllDisjointClasses" />
<owl:members rdf:parseType="Collection">
<rdf:Description rdf:about="http://cognitum.eu/african_wildlife#carnivore" />
<rdf:Description rdf:about="http://cognitum.eu/african_wildlife#herbivore" />
<rdf:Description rdf:about="http://cognitum.eu/african_wildlife#omnivore" />
</owl:members>
</rdf:Description>
<rdf:Description>
<rdf:type rdf:resource="http://www.w3.org/2002/07/owl#AllDisjointClasses" />
<owl:members rdf:parseType="Collection">
<rdf:Description rdf:about="http://cognitum.eu/african_wildlife#branch" />
<rdf:Description rdf:about="http://cognitum.eu/african_wildlife#leaf" />
<rdf:Description rdf:about="http://cognitum.eu/african_wildlife#twig" />
</owl:members>
</rdf:Description>
</rdf:RDF>
<!-- Generated by the OWL API (version 3.5.1.c) http://owlapi.sourceforge.net -->
|
dnn-h/h2/xor/xor.ipynb | ###Markdown
ECE 194N: Homework 2 Topics: XOR Problem Due: May 14 ------------------------------------------------- 1. XOR: Given the following samples, we will use multi-layer networks to approximate the function defined by the samples. Given samples: - x1 = [1, 1]T, y1 = +1 - x2 = [0, 0]T, y2 = +1 - x3 = [1, 0]T, y3 = −1 - x4 = [0, 1]T, y4 = −1. Stored as follows: - X = [x1, x2, x3, x4] - Y = [y1, y2, y3, y4] (a) Visualize
###Code
import numpy as np
from math import *
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
# x1, x2, x3, x4
X = np.matrix([[1,1], [0,0], [1,0], [0,1]])
# y1, y2, y3, y4
Y = np.array([[1], [1], [-1], [-1]])
print('X: \n',X.T)
print('Y: \n',Y.T)
plt.scatter(np.array(X[:,0]).flatten(), np.array(X[:,1]).flatten(), color = 'r',marker='x', label = 'X Samples')
plt.legend()
plt.title('XOR Samples')
plt.show()
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(np.array(X[:,0]).flatten(), np.array(X[:,1]).flatten(), np.array(Y[:,0]).flatten(), s=int(Y.shape[0]), c='r', marker='^')
ax.set_title('Plot with Y classes (XOR points)')
ax.set_xlabel('x[0]')
ax.set_ylabel('x[1]')
ax.set_zlabel('Y')
plt.show()
###Output
X:
[[1 0 1 0]
[1 0 0 1]]
Y:
[[ 1 1 -1 -1]]
###Markdown
(b) Implement a network to estimate the function that is generating these samples. - Wh & Wz are the weight matrices, of dimension previous layer size * next layer size. - X is the input matrix, of dimension 4 * 2 = all combinations of 2 truth values. - Y is the corresponding target value of XOR for the 4 pairs of values in X. - Z is the vector of learned values for XOR. Comment on how you choose your parameters. - Since the input data comprises 2 operands for the XOR operation, the input layer devotes 1 neuron per operand. - The result of the XOR operation is one truth value, so we have one output node. - The hidden layer can have any number of nodes; 3 seems sufficient. - Initialise the weights randomly. Setting them all to the same value, e.g. zero, would be a poor choice, because identical weights receive identical gradient updates and can never become different from each other; random initialisation breaks this symmetry.
###Code
'''
A numpy based neural network implementation
'''
class NN:
    @staticmethod
    def sigmoid(x): return 1.0/(1.0 + np.exp(-x))
    @staticmethod
    def sigmoid_prime(x): return NN.sigmoid(x)*(1.0 - NN.sigmoid(x))
    @staticmethod
    def tanh(x): return np.tanh(x)
    @staticmethod
    def tanh_prime(x): return 1.0 - x**2  # derivative written in terms of the tanh output
    def __init__(self, layers, activation='tanh'):
        # bind the activation pair through the class so the names resolve at call time
        if activation == 'sigmoid':
            self.activation = NN.sigmoid
            self.activation_prime = NN.sigmoid_prime
        elif activation == 'tanh':
            self.activation = NN.tanh
            self.activation_prime = NN.tanh_prime
# Set weights
self.weights = []
for i in range(1, len(layers) - 1):
r = 2*np.random.random((layers[i-1] + 1, layers[i] + 1)) -1
self.weights.append(r)
# output layer - random((2+1, 1)) : 3 x 1
r = 2*np.random.random( (layers[i] + 1, layers[i+1])) - 1
self.weights.append(r)
def fit(self, X, y, learning_rate=0.2, epochs=100000):
ones = np.atleast_2d(np.ones(X.shape[0]))
X = np.concatenate((ones.T, X), axis=1)
for k in range(epochs):
if k % 10000 == 0:
print('epochs:', k)
i = np.random.randint(X.shape[0])
a = [X[i]]
for l in range(len(self.weights)):
dot_value = np.dot(a[l], self.weights[l])
activation = self.activation(dot_value)
a.append(activation)
# output layer
error = y[i] - a[-1]
deltas = [error * self.activation_prime(a[-1])]
for l in range(len(a) - 2, 0, -1):
deltas.append(deltas[-1].dot(self.weights[l].T)*self.activation_prime(a[l]))
deltas.reverse()
for i in range(len(self.weights)):
layer = np.atleast_2d(a[i])
delta = np.atleast_2d(deltas[i])
self.weights[i] += learning_rate * layer.T.dot(delta)
def predict(self, x):
a = np.concatenate((np.ones(1).T, np.array(x)), axis=0)
for l in range(0, len(self.weights)):
a = self.activation(np.dot(a, self.weights[l]))
return a
out_vector = NN([2,2,1])
X = np.array([[0, 0],
[0, 1],
[1, 0],
[1, 1]])
y = np.array([0, 1, 1, 0])
out_vector.fit(X, y)
print(y)
predicton = np.zeros([1,4])
i=0
for e in X:
if(i<4):
predicton[0][i] = out_vector.predict(e)
print(e,out_vector.predict(e))
i = i+1
else:
i=0
###Output
epochs: 0
epochs: 10000
epochs: 20000
epochs: 30000
epochs: 40000
epochs: 50000
epochs: 60000
epochs: 70000
epochs: 80000
epochs: 90000
[0 1 1 0]
[0 0] [0.0008165]
[0 1] [0.99651282]
[1 0] [0.99667517]
[1 1] [-0.01009606]
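###Markdown
A minimal sketch of the symmetry argument behind the weight-initialisation note in (b): if every weight starts at the same value, the hidden units compute identical activations for every input and receive identical backpropagated updates, so they can never come to represent different features; random initialisation breaks this symmetry.
###Code
# Illustration only: with identical (here zero) initial weights the two hidden units are indistinguishable.
W_same = np.zeros((3, 2))                  # (bias + 2 inputs) -> 2 hidden units, all weights equal
x_demo = np.array([1.0, 1.0, 1.0])         # bias term followed by the input pair (1, 1)
hidden_demo = np.tanh(x_demo.dot(W_same))  # both hidden activations come out exactly the same
units_are_identical = np.allclose(hidden_demo[0], hidden_demo[1])  # True, and stays True after symmetric updates
###Output
_____no_output_____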
###Markdown
(c) Visualize the final classification regions in the 2-dimensional space. - To visualize this clearly we should draw the combination of several linear decision boundaries. - This can be achieved either by drawing contours over a grid of the input space or by drawing the hyperplanes separating the two regions identified by the layers of the classifier (see the grid-based sketch appended at the end of the next cell).
###Code
fig2 = plt.figure()
ax = fig2.add_subplot(111, projection='3d')
ax.scatter(np.array(X[:,0]).flatten(), np.array(X[:,1]).flatten(),np.array(predicton[0,:]).flatten() , s=int(y.shape[0]), c='b', marker='*')
ax.set_title('Plot with Y classes (XOR points)')
ax.set_xlabel('x[0]')
ax.set_ylabel('x[1]')
ax.set_zlabel('predictions')
plt.show()
import scipy
from sklearn import svm
C = 1.0 # SVM regularization parameter
clf = svm.SVC(kernel = 'rbf', gamma=0.7, C=C )
clf.fit(X, Y)
h = .02 # step size in the mesh
# create a mesh to plot in
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, m_max]x[y_min, y_max].
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.contour(xx, yy, Z, cmap=plt.cm.Paired)
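# Hedged sketch: overlay the trained network's own decision region on the same mesh by
# thresholding out_vector.predict (the NN fitted above on the XOR points) at 0.5.
nn_Z = np.array([out_vector.predict(p) for p in np.c_[xx.ravel(), yy.ravel()]])
plt.contourf(xx, yy, (nn_Z.reshape(xx.shape) > 0.5).astype(int), alpha=0.3, cmap=plt.cm.coolwarm)
plt.scatter(X[:, 0], X[:, 1], c=y, edgecolors='k')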
###Output
_____no_output_____
###Markdown
(d) Generate Gaussian random noise centered on these locations: - x1 ∼ N(m1, S1), m1 = [1, 1]T, S1 = S, y1 = +1 - x2 ∼ N(m2, S2), m2 = [0, 0]T, S2 = S, y2 = +1 - x3 ∼ N(m3, S3), m3 = [1, 0]T, S3 = S, y3 = −1 - x4 ∼ N(m4, S4), m4 = [0, 1]T, S4 = S, y4 = −1 - S = [[s, 0], [0, s]], where s is the covariance parameter. Running for covariance = 0.5
###Code
def generateGaussian(sigma):
m0 = (1, 1)
m1 = (0, 0)
m2 = (1, 0)
m3 = (0, 1)
cov = [[sigma, 0], [0, sigma]]
x0 = np.random.multivariate_normal(m0, cov, 1)
x1 = np.random.multivariate_normal(m1, cov, 1)
x2 = np.random.multivariate_normal(m2, cov, 1)
x3 = np.random.multivariate_normal(m3, cov, 1)
print(X)
X_vec = np.zeros([4,2])
X_vec[0,:] = (x0 + X[0,:])
X_vec[1,:] = (x1 + X[1,:])
X_vec[2,:] = (x2 + X[2,:])
X_vec[3,:] = (x3 + X[3,:])
return X_vec
# Generate the data for sigma = 0.5
X_generated = generateGaussian(0.5)
out_vector = NN([2,2,1])
X = X_generated
y = np.array([0, 1, 1, 0])
out_vector.fit(X, y)
print(y)
predicton = np.zeros([1,4])
i=0
for e in X:
if(i<4):
predicton[0][i] = out_vector.predict(e)
print(e,out_vector.predict(e))
i = i+1
else:
i=0
#print(predicton)
fig2 = plt.figure()
ax = fig2.add_subplot(111, projection='3d')
ax.scatter(np.array(X[:,0]).flatten(), np.array(X[:,1]).flatten(),np.array(predicton[0,:]).flatten() , s=int(y.shape[0]), c='b', marker='*')
ax.set_title('Plot with Y classes (XOR points)')
ax.set_xlabel('x[0]')
ax.set_ylabel('x[1]')
ax.set_zlabel('predictions')
plt.show()
###Output
[[0 0]
[0 1]
[1 0]
[1 1]]
epochs: 0
epochs: 10000
epochs: 20000
epochs: 30000
epochs: 40000
epochs: 50000
epochs: 60000
epochs: 70000
epochs: 80000
epochs: 90000
[0 1 1 0]
[1.6104164 1.60372133] [-5.79007532e-05]
[-1.17228981 1.24832816] [0.99568361]
[ 1.96705591 -0.07407664] [0.99545149]
[0.84546539 0.95717647] [9.14296131e-06]
###Markdown
Running for covariance = 1
###Code
# Generate the data for sigma = 1
X_generated = generateGaussian(1)
out_vector = NN([2,2,1])
X = X_generated
y = np.array([0, 1, 1, 0])
out_vector.fit(X, y)
print(y)
predicton = np.zeros([1,4])
i=0
for e in X:
if(i<4):
predicton[0][i] = out_vector.predict(e)
print(e,out_vector.predict(e))
i = i+1
else:
i=0
fig2 = plt.figure()
ax = fig2.add_subplot(111, projection='3d')
ax.scatter(np.array(X[:,0]).flatten(), np.array(X[:,1]).flatten(),np.array(predicton[0,:]).flatten() , s=int(y.shape[0]), c='b', marker='*')
ax.set_title('Plot with Y classes (XOR points)')
ax.set_xlabel('x[0]')
ax.set_ylabel('x[1]')
ax.set_zlabel('predictions')
plt.show()
###Output
[[ 3.24939747 4.15117176]
[ 1.16272598 0.78207415]
[ 2.22959133 0.10182728]
[-0.63952265 2.83357236]]
epochs: 0
epochs: 10000
epochs: 20000
epochs: 30000
epochs: 40000
epochs: 50000
epochs: 60000
epochs: 70000
epochs: 80000
epochs: 90000
[0 1 1 0]
[3.60022722 6.01764778] [-1.05348879e-05]
[-0.9833318 0.38698302] [0.9967276]
[2.96866475 0.77909607] [0.99823464]
[-0.41194481 3.44771688] [4.71436047e-06]
###Markdown
Running for covariance = 2
###Code
# Generate the data for sigma = 2
X_generated = generateGaussian(2)
out_vector = NN([2,2,1])
X = X_generated
y = np.array([0, 1, 1, 0])
out_vector.fit(X, y)
print(y)
predicton = np.zeros([1,4])
i=0
for e in X:
if(i<4):
predicton[0][i] = out_vector.predict(e)
print(e,out_vector.predict(e))
i = i+1
else:
i=0
fig2 = plt.figure()
ax = fig2.add_subplot(111, projection='3d')
ax.scatter(np.array(X[:,0]).flatten(), np.array(X[:,1]).flatten(),np.array(predicton[0,:]).flatten() , s=int(y.shape[0]), c='b', marker='*')
ax.set_title('Plot with Y classes (XOR points)')
ax.set_xlabel('x[0]')
ax.set_ylabel('x[1]')
ax.set_zlabel('predictions')
plt.show()
###Output
[[ 3.60022722 6.01764778]
[-0.9833318 0.38698302]
[ 2.96866475 0.77909607]
[-0.41194481 3.44771688]]
epochs: 0
epochs: 10000
epochs: 20000
epochs: 30000
epochs: 40000
epochs: 50000
epochs: 60000
epochs: 70000
epochs: 80000
epochs: 90000
[0 1 1 0]
[4.60855699 6.91513359] [0.00200687]
[ 1.59583657 -0.36492002] [0.99760703]
[ 1.7836794 -0.2379257] [0.99788448]
[-0.34263323 5.905201 ] [-0.00012823]
|
Grp7_0306_OutLReg.ipynb | ###Markdown
Data loading
###Code
// Load wrangled datasets from Phase 2: part1.csv; part2.csv; part3.csv
val creditRiskdf1 = spark.read.option("header","true").csv("part1.csv")
val creditRiskdf2 = spark.read.option("header","true").csv("part2.csv")
val creditRiskdf3 = spark.read.option("header","true").csv("part3.csv")
// combine the 3 dataframes to 1
val creditRiskdf1and2 = creditRiskdf1.union(creditRiskdf2)
val creditRiskdf = creditRiskdf1and2.union(creditRiskdf3)
// check that the rows have been aggregated correctly
println("Total data rows in loaded dataframes:"+(
creditRiskdf1.count()+creditRiskdf2.count()+creditRiskdf3.count()))
println("Total data rows in combined dataframe:"+creditRiskdf.count())
###Output
Total data rows in loaded dataframes:183875
Total data rows in combined dataframe:183875
###Markdown
Data cleansing and reshaping
###Code
// Drop features with inconsistent values from dataset.
val incon_droplist = List("previous_loans_DAYS_FIRST_DUE_mean",
"previous_loans_SELLERPLACE_AREA_min",
"previous_loans_DAYS_FIRST_DUE_sum")
val df = creditRiskdf.drop(incon_droplist:_*)
val creditRiskdf = df
//string array of features
val creditRiskFeatures = creditRiskdf.columns
creditRiskdf.cache()
//write function to check if there are any outliers in a given domain and show boxplot [Sean]
// Remove Target.
var feat_list = List[String]() //empty list to append features containing outliers in
for (feature <- creditRiskFeatures) {
//summarise column and query the DataFrame output
var firstQ = creditRiskdf.select(feature).summary().where(
$"summary" === "25%").select(feature).first().mkString.toFloat
var thirdQ = creditRiskdf.select(feature).summary().where(
$"summary" === "75%").select(feature).first().mkString.toFloat
//use formulas below to test threshold of outliers
var testValHigh = thirdQ + (1.5 * (thirdQ - firstQ))
var testValLow = firstQ - (1.5 * (thirdQ - firstQ))
//check to see if thresholds are exceeded in the column and count
var outHigh = creditRiskdf.filter(col(feature) > lit(testValHigh)).count()
var outLow = creditRiskdf.filter(col(feature) < lit(testValLow)).count()
//notify us whether or not column contains outliers
if (outHigh > 0 || outLow > 0 ){
println(feature.concat(": Contains outliers, #").concat((
outHigh + outLow).toString))
feat_list = feature :: feat_list
}
//else if (outHigh == 0 || outLow == 0){
// println(feature.concat(": Does not contain outliers"))
//}
}
val newlist = feat_list.filter(_!="TARGET")
val df = creditRiskdf.drop(newlist:_*)
val creditRiskdf = df
creditRiskdf.cache()
creditRiskdf.schema
###Output
_____no_output_____
###Markdown
Machine learning
###Code
var Array(train_df,test_df,validation_df)=creditRiskdf.randomSplit(
Array(0.7,0.2,0.1))
val creditRiskFeatures = creditRiskdf.columns
val features = creditRiskFeatures.filter(!_.contains("TARGET"))
val creditRiskFeatures = creditRiskdf.columns
for (colName<-creditRiskFeatures){
| train_df=train_df.withColumn(colName,col(colName).cast("Float"))
| }
train_df.printSchema()
val assembler = new VectorAssembler().setInputCols(features).setOutputCol("features")
val df2 = assembler.transform(train_df)
df2.printSchema()
val labelIndexer = new StringIndexer().setInputCol("TARGET").setOutputCol("label")
val df3 = labelIndexer.fit(df2).transform(df2)
val model = new LogisticRegression().fit(df3)
val predictions = model.transform(df3)
predictions.select ("features", "label", "prediction").show(3000)
predictions.schema
###Output
_____no_output_____ |
notes/ns/notebooks/review.ipynb | ###Markdown
Manual sanity checks for the formula and text within mmrd.tex*** Check fitting formula by hand, and compare with raw data
###Code
%matplotlib inline
from numpy import exp,sqrt,log,linspace,pi,sin
import kerr
from os import system
import matplotlib as mpl
from matplotlib.pyplot import *
mpl.rcParams['lines.linewidth'] = 2
mpl.rcParams['font.family'] = 'serif'
mpl.rcParams['font.size'] = 12
mpl.rcParams['axes.labelsize'] = 20
mpl.rcParams['font.weight'] = 'normal'
# print mpl.rcParams.keys()
###Output
kerr## Found model name "mmrdns"
###Markdown
Load data file that contains QNM amplitudes from fitting algorithm described in arXiv:1404.3197---
###Code
########################################################################
# NOTE THAT THESE MUST BE CONSISTENT WITH THE HARD CODED EQUATIONS BELOW
########################################################################
# Define QNM indeces of interest
l = 2; m = 2; n = 0;
# Define data storage location and full path of data file
storage_dir = '/Users/book/GARREG/Spectroscopy/Ylm_Depictions/NonPrecessing/MULTI_DATA_6/Misc/data/'
if l==m:
data_file_string = '../bin/data/complex_A_on_eta_2212%i%i%i1_l_eq_m.asc' % (l,m,n)
else:
data_file_string = '../bin/data/complex_A_on_eta_2212%i%i%i1_l_eq_m_minus_1.asc' % (l,m,n)
# Copy the data to the local repository location
system( 'cp %s/*.asc ../bin/data/' % storage_dir )
# Load the ascii data
data = np.loadtxt(data_file_string)
# Raw data values
raw_eta = data[:,0]
raw_A = data[:,2] + 1j*data[:,3]
A_err = data[:,4]
raw_jf = data[:,8]
raw_Mf = data[:,6]
# Domain over which to evaluate fits
eta = linspace(0,0.25,200)
###Output
_____no_output_____
###Markdown
Implement Final Mass and Spin Fits from arXiv:1404.3197 and Plot against data---
###Code
# Implement Final Mass and Spin Fits from arXiv:1404.3197
jfit = lambda ETA: ETA * ( 3.4339 - 3.7988*ETA + 5.7733*ETA**2 - 6.3780*ETA**3 )
Mfit = lambda ETA: 1.0 + ETA * ( -0.046297 + -0.71006*ETA + 1.5028*ETA**2 + -4.0124*ETA**3 + -0.28448*ETA**4 )
# Verify Fits with plot
figure(figsize=1.2*np.array((11, 5)), dpi=120, facecolor='w', edgecolor='k')
subplot(1,2,1)
plot( raw_eta, raw_jf, 'o', alpha=0.6, label=r'$j_f$', color=0.5*np.array([1,1,1]),markersize=8 )
plot( eta, jfit(eta), '-r', label='Fit' )
xlabel(r'$\eta$')
ylabel(r'$j_f$')
legend(loc='upper left',numpoints=1,frameon=False)
a = subplot(1,2,2)
plot( raw_eta[:-2], raw_Mf[:-2], 'o', alpha=0.6, label=r'$M_f$', color=0.5*np.array([1,1,1]),markersize=8 )
plot( raw_eta[-2:], raw_Mf[-2:], 'x', label=r'Outliers', color='k',markersize=8 )
plot( eta, Mfit(eta), '-r', label='Fit' )
xlabel(r'$\eta$')
ylabel(r'$M_f$')
legend(loc='upper right',numpoints=1,frameon=False)
savefig('review_jf_Mf.pdf')
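# Quick numeric sanity check (a sketch): largest fractional residuals of the two fits
# against the raw NR values plotted above; kept as assignments for later inspection.
jf_max_frac_resid = np.max(np.abs((raw_jf - jfit(raw_eta)) / raw_jf))
Mf_max_frac_resid = np.max(np.abs((raw_Mf - Mfit(raw_eta)) / raw_Mf))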
###Output
_____no_output_____
###Markdown
Implement Model for $M_f \omega (\eta)$---
###Code
K = lambda jf: ( log(2.0-jf)/log(3.0) )**(1.0 / (2.0 + l - m) )
Mwfit= { (2,2,0) : lambda JF: 2.0/2 + K(JF) * ( 1.5557*exp(2.9034j) + 1.9311*exp(5.9219j)*K(JF) + 2.0417*exp(2.7627j)*K(JF)**2 + 1.3436*exp(5.9187j)*K(JF)**3 + 0.3835*exp(2.8029j)*K(JF)**4 ),
(3,2,0) : lambda JF: 2.0/2 + K(JF) * ( 0.5182*exp(0.3646j) + 3.1469*exp(3.1371j)*K(JF) + 4.5196*exp(6.2184j)*K(JF)**2 + 3.4336*exp(3.0525j)*K(JF)**3 + 1.0929*exp(6.1713j)*K(JF)**4 ) }
wfit = lambda ETA: Mwfit[(l,m,n)](jfit(ETA)) / Mfit(ETA)
#
figure(figsize=0.8*np.array((8, 5)), dpi=120, facecolor='w', edgecolor='k')
#
jf_test = sin( 0.5*pi*linspace( -1,1, int(1e3) ) )  # int() so linspace receives an integer sample count
plot( K(jf_test), Mwfit[(l,m,n)](jf_test).real, 'k' )
xlabel('$\kappa(j_f)$')
ylabel('$\omega_{%i%i%i}(\kappa)$'%(l,m,n))
###Output
_____no_output_____
###Markdown
Plot Fit on Raw Data as well as residuals: Note that error bars do not take into account NR error, only cross-validation errors
###Code
#
Afit = { (2,2,0) : lambda ETA: (wfit(ETA)**2) * ( 0.9252*ETA + 0.1323*ETA**2 ),
(3,2,0) : lambda ETA: (wfit(ETA)**2) * ( 0.1957*exp(5.8008j)*ETA + 1.5830*exp(3.2194j)*ETA**2 + 5.0338*exp(0.6843j)*ETA**3 + 3.7366*exp(4.1217j)*ETA**4 ) }
#
figure(figsize=1.2*np.array((13, 6)), dpi=120, facecolor='w', edgecolor='k')
# Make Subplot
ax1 = subplot(1,2,1)
errorbar( raw_eta, abs(raw_A),fmt='o', yerr=A_err, alpha=0.9, label=r'$A_{lmn}$', color=0.5*np.array([1,1,1]),markersize=8 )
plot( eta, abs(Afit[(l,m,n)](eta)), '-r' )
# Label Axes
xlabel(r'$\eta$')
xlim( [0,0.251] )
ylabel(r'$|A_{%i%i%i}|$'%(l,m,n))
# Make Subplot
subplot(1,2,2)
plot( raw_eta, 100*(abs(raw_A)-abs(Afit[(l,m,n)](raw_eta)))/abs(raw_A), 'ok', alpha=0.6 )
plot( ax1.get_xlim(), [0,0], '--k', alpha=0.5 )
# Label Axes
xlabel(r'$\eta$')
xlim(ax1.get_xlim())
ylabel(r'$|A_{%i%i%i}|$'%(l,m,n))
# Save the plot
savefig('review_A%i%i%i_Amp.pdf'%(l,m,n))
# What about the phases?
###Output
_____no_output_____ |
Heart_disease_prediction_LogisticRegression.ipynb | ###Markdown
Loading the data and preparing the DataFrame from the csv file
###Code
data_frame = pd.read_csv('framingham.csv')
df = pd.DataFrame(data_frame)
df_pre=pd.DataFrame(data_frame)
df
df.isnull()
df.isnull().sum()
df.sum()
df.drop(['education'], axis = 1, inplace = True)
df.size
df.shape
df.head()
df.isnull().sum()
rcParams['figure.figsize'] = 6,5
plt.bar(df.TenYearCHD.unique(), df.TenYearCHD.value_counts(), color = ['purple', 'blue'])
plt.xticks([0, 1])
plt.xlabel('Target Classes')
plt.ylabel('Count')
plt.title('Count of each Target Class')
print(df.TenYearCHD.value_counts())
###Output
0 3596
1 644
Name: TenYearCHD, dtype: int64
###Markdown
A total of 4240 records with 15 columns; 644 observations are labelled as at risk of heart disease, and 388 records contain missing or invalid values. Data Preparation Dropping the missing data
###Code
df_test=df
df_test.dropna(axis=0,inplace=True)
df_test.shape
rcParams['figure.figsize'] = 6,5
plt.bar(df_test.TenYearCHD.unique(), df_test.TenYearCHD.value_counts(), color = ['purple', 'blue'])
plt.xticks([0, 1])
plt.xlabel('Target Classes')
plt.ylabel('Count')
plt.title('Count of each Target Class after Dropping the missing observations')
print(df_test.TenYearCHD.value_counts())
###Output
0 3179
1 572
Name: TenYearCHD, dtype: int64
###Markdown
Dropping so many observations could discard information that is useful for training the model, so we impute the missing values instead. Imputation and Scaling using Pipeline
###Code
data_frame = pd.read_csv('framingham.csv')
df = pd.DataFrame(data_frame)
df.drop(['education'], axis = 1, inplace = True)
df.shape
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
cols=["male","age","currentSmoker","cigsPerDay","BPMeds","prevalentStroke","prevalentHyp","diabetes","totChol","sysBP","diaBP","BMI","heartRate","glucose"]
X_components=df.columns[:-1]
ddf=df[X_components]
ddf
pipe1=Pipeline([("imputer",SimpleImputer(strategy="mean")),("scaler",StandardScaler())])
df1=pipe1.fit_transform(ddf)
df_mean=pd.DataFrame(data=df1[0:,0:], columns=cols)
pipe2=Pipeline([("imputer",SimpleImputer(strategy="median")),("scaler",StandardScaler())])
df2=pipe1.fit_transform(ddf)
df_median=pd.DataFrame(data=df2[0:,0:], columns=cols)
pipe3=Pipeline([("imputer",SimpleImputer(strategy="most_frequent")),("scaler",StandardScaler())])
df3=pipe1.fit_transform(ddf)
df_most=pd.DataFrame(data=df3[0:,0:], columns=cols)
#imp1=SimpleImputer(strategy="mean")
#imp2=SimpleImputer(strategy="median")
#imp3=SimpleImputer(strategy="most_frequent")
df_mean.shape
df_mean
###Output
_____no_output_____
###Markdown
This is the preprocessed data. Exploratory Analysis Histogram
###Code
from ipywidgets import widgets
feature_desc={'age':'Age of person',
'cigsPerDay':'No of average ciggarete taken per day',
'BPMeds':'BPMeds',
'prevalentStroke':'prevalentStroke',
'prevalentHype':'prevalentHype',
'diabetes':'diabetes',
'totChol':'Total Cholesterol Value Measured',
'sysBP':'sysBP',
'diaBP':'diaBP',
'BMI':'Body Mass Index',
'heartRate':'Heart Rate',
'glucose':'Glucose',
'TenYearCHD':'Ten Year CHD'}
def hist_feature(column):
df[column].hist(bins=20,facecolor='midnightblue')
plt.show()
dropdown_menu = {v:k for k,v in feature_desc.items()}
widgets.interact(hist_feature, column=dropdown_menu)
###Output
_____no_output_____
###Markdown
Correlation Matrix Visualization
###Code
from matplotlib import rcParams
from matplotlib.pyplot import matshow
rcParams['figure.figsize'] = 3,8
plt.matshow(df.corr())
plt.yticks(np.arange(df_mean.shape[1]), df.columns)
plt.xticks(np.arange(df_mean.shape[1]), df.columns)
plt.colorbar()
rcParams['figure.figsize'] = 8,6
plt.bar(df.TenYearCHD.unique(), df.TenYearCHD.value_counts(), color = ['purple', 'blue'])
plt.xticks([0, 1])
plt.xlabel('Target Classes')
plt.ylabel('Count')
plt.title('Count of each Target Class')
df_mean.describe()
###Output
_____no_output_____
###Markdown
Conclusion of Exploratory Analysis: Out of 3751 observations (after dropping rows with missing values), over 500 patients are at risk of heart disease.
###Code
def draw_histograms(dataframe, features, rows, cols):
fig=plt.figure(figsize=(20,20))
for i, feature in enumerate(features):
ax=fig.add_subplot(rows,cols,i+1)
dataframe[feature].hist(bins=20,ax=ax,facecolor='midnightblue')
ax.set_title(feature+" Distribution",color='DarkRed')
fig.tight_layout()
plt.show()
draw_histograms(df,df.columns,6,3)
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import RFE, RFECV, SelectFromModel
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, train_test_split
from sklearn.utils import shuffle
from sklearn.metrics import confusion_matrix,accuracy_score,classification_report
from statsmodels.tools import add_constant
df_constant = add_constant(df)
df_constant.head()
###Output
_____no_output_____
###Markdown
Why add a constant column? It's because you expect your dependent variable to take a nonzero value when all of the included regressors are set to zero. Suppose you want to model the wage as a function of years of secondary schooling. You'd estimate an equation of the form $y_i = \alpha + x_i \beta + \varepsilon_i$, because one can reasonably expect the wage to take, on average, a positive value even if one's secondary schooling is zero. This value shows up as the constant. Note, however, that a constant may take an absurd value while still being relevant for the estimation, or it may be irrelevant altogether. Suppose further that you are interested in estimating the model above with the variables expressed as deviations from their means: $y_i - \bar{y} = (\alpha - \bar{\alpha}) + (x_i - \bar{x})\tilde{\beta} + \nu_i$. Obviously the constant equals its own average value, so the first term on the right-hand side cancels out and you end up with $y_i - \bar{y} = (x_i - \bar{x})\tilde{\beta} + \nu_i$, which is a model without a constant. In practice, including one would probably not be of any concern (for a reasonable number of observations), but it would be theoretically unjustified. Remember that you should always know whether what you estimate makes sense, both from a substantive and a statistical point of view! Feature Selection 1. Backward elimination (P-value approach)
###Code
X1_components=df_mean.columns
X1=df_mean[X1_components]
y1=df.TenYearCHD
X2_components=df_median.columns
X2=df_median[X2_components]
y2=df.TenYearCHD
X1.shape
column_list=['male','age','currentSmoker','cigsPerDay' ,'BPMeds','prevalentStroke','prevalentHyp' ,'diabetes', 'totChol', 'sysBP', 'diaBP', 'BMI','heartRate', 'glucose']
df_mean.isnull().sum()
def feature_selection(data_frame, dependent_variable, column_list):
while len(column_list)>0:
model = sm.Logit(dependent_variable, data_frame[column_list])
result = model.fit(disp = 0)
largest_pvalue = round(result.pvalues, 3).nlargest(1)
if largest_pvalue[0] < (0.05):
return result
break
else:
column_list = column_list.drop(largest_pvalue.index)
cols = df_mean.columns[:-1]
result1 = feature_selection(df_mean, y1, cols)
print("This is the result using the imputation for mean values")
result1.summary()
column_list=["male","age","cigsPerDay","prevalentStroke","diabetes","sysBP"]
new=df_mean[column_list]
new
from matplotlib import rcParams
from matplotlib.pyplot import matshow
column_list=["male","age","cigsPerDay","prevalentStroke","diabetes","sysBP"]
new=df_mean[column_list]
rcParams['figure.figsize'] = 20, 14
plt.matshow(new.corr())
plt.yticks(np.arange(new.shape[1]), df.columns)
plt.xticks(np.arange(new.shape[1]), df.columns)
plt.colorbar()
result2 = feature_selection(df_median, y2, cols)
print("This is the result using the imputation for median values")
result2.summary()
###Output
This is the result using the imputation for median values
###Markdown
Without KFold
###Code
column_list=["male","age","cigsPerDay","prevalentStroke","diabetes","sysBP"]
X=df_mean[column_list]
y=df.TenYearCHD
X_train,X_test,y_train,y_test=train_test_split(*shuffle(X,y), test_size=0.2, random_state=5)
log_model=LogisticRegression()
log_model.fit(X_train,y_train)
log_model.score(X_train,y_train)
log_model.score(X_test,y_test)
results=confusion_matrix(y,log_model.predict(X))
results
classification_report(y_train, log_model.predict(X_train))
classification_report(y_test, log_model.predict(X_test))
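# A sketch of reading the confusion matrix computed above: with an imbalanced target,
# accuracy alone can be misleading, so the class-wise rates are worth checking too.
# sklearn's 2x2 confusion matrix ravels as [tn, fp, fn, tp] with labels ordered 0, 1.
tn, fp, fn, tp = results.ravel()
sensitivity = tp / (tp + fn)   # recall on the TenYearCHD = 1 (at-risk) class
specificity = tn / (tn + fp)   # recall on the TenYearCHD = 0 class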
###Output
_____no_output_____
###Markdown
2. Recursive Feature Elimination with Cross Validation
###Code
df_mean.columns
rfc = RandomForestClassifier()
rfecv = RFECV(estimator = rfc, step = 1, cv = StratifiedKFold(10), scoring = 'r2',verbose=1)
X_components=df_mean.columns
X=df_mean[X_components]
y=df.TenYearCHD
print(X.shape)
print(y.shape)
rfecv.fit(X, y)
X_components=df_mean.columns
X=df[X_components]
X
X.columns
# X_components=df_mean.columns
# X=df[X_components]
# dset = df
# print(dset.shape)
# print(X)
# dset['attr'] = X.columns
# dset['importance'] = rfecv.estimator_.feature_importances_
# dset = dset.sort_values(by='importance', ascending=False)
# plt.figure(figsize=(16, 14))
# plt.barh(y=dset['attr'], width=dset['importance'], color='#1976D2')
# plt.title('RFECV - Feature Importances', fontsize=20, fontweight='bold', pad=20)
# plt.xlabel('Importance', fontsize=14, labelpad=20)
# plt.show()
# rfecv_unscaled = RFECV(estimator = rfc,
# step = 1,
# cv = StratifiedKFold(10),
# scoring = 'accuracy',verbose=1)
# X_components=df_pre.columns
# X_unscaled=df_pre[X_components]
# y_unscaled=df_pre.TenYearCHD
# rfecv_unscaled.fit(X_unscaled, y_unscaled)
X_components=df_mean.columns
X=df_mean[X_components]
rfecv_array = [True, True, False,True,False,False,True,False,True,True,True,True,True,True]
res = [i for i, val in enumerate(rfecv_array) if not val]
X.drop(X.columns[res], axis=1, inplace=True)
X_train,X_test,y_train,y_test=train_test_split(*shuffle(X,y),
test_size=0.2,
random_state=5)
log_model=LogisticRegression()
log_model.fit(X_train,y_train)
log_model.score(X_train,y_train)
log_model.score(X_test,y_test)
rfecv.estimator_.feature_importances_
X_components=df.columns[:-1]
X=df[X_components]
dset = pd.DataFrame()
dset['attr'] = X.columns
dset['importance'] = rfecv.estimator_.feature_importances_
dset = dset.sort_values(by='importance', ascending=False)
plt.figure(figsize=(16, 14))
plt.barh(y=dset['attr'], width=dset['importance'], color='#1976D2')
plt.title('RFECV - Feature Importances', fontsize=20, fontweight='bold', pad=20)
plt.xlabel('Importance', fontsize=14, labelpad=20)
plt.show()
###Output
_____no_output_____
###Markdown
3. Coefficient values
###Code
X_components=df.columns[:-1]
X=df[X_components]
y=df.TenYearCHD
X_train, X_test, y_train, y_test= train_test_split(X, y, random_state = 0)
logreg = LogisticRegression(fit_intercept = False)
logreg.fit(X_train, y_train)
np.round(logreg.coef_, decimals = 2) > 0
# logreg.coef_
# Calculating Accuracy of coefficient values
# print(np.where(rfecv.support_ == False)[0])
coefficient_array = [ True, True, False, True, True, True, True, True, False,True, False, False, False, False]
res = [i for i, val in enumerate(coefficient_array) if not val]
X.drop(X.columns[res], axis=1, inplace=True)
X
X_train,X_test,y_train,y_test=train_test_split(*shuffle(X,y), test_size=0.2, random_state=5)
log_model=LogisticRegression()
log_model.fit(X_train,y_train)
log_model.score(X_train, y_train)
log_model.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
4. Recursive Feature Elimination (RFE)
###Code
X_components=df.columns[:-1]
X=df[X_components]
y=df.TenYearCHD
X_train, X_test, y_train, y_test= train_test_split(X, y, random_state = 0)
predictors = X_train
selector = RFE(logreg, n_features_to_select = 1)
selector = selector.fit(predictors, y_train)
order = selector.ranking_
order
feature_ranks = []
for i in order:
feature_ranks.append(f"{i}.{df.columns[i]}")
feature_ranks
rfe_array = [True,True,True,True,True,True,True,True,False,False, False, True,True,False]
res = [i for i, val in enumerate(rfe_array) if not val]
X.drop(X.columns[res], axis=1, inplace=True)
X
X_train,X_test,y_train,y_test=train_test_split(*shuffle(X,y), test_size=0.2, random_state=5)
log_model=LogisticRegression()
log_model.fit(X_train,y_train)
log_model.score(X_train, y_train)
log_model.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
5. Feature Extraction Using SFM
###Code
X_components=df.columns[:-1]
X=df[X_components]
y=df.TenYearCHD
X_train, X_test, y_train, y_test= train_test_split(X, y, random_state = 0)
smf = SelectFromModel(logreg, threshold = -np.inf, max_features = 8)
smf.fit(X_train, y_train)
feature_idx = smf.get_support()
feature_idx
# feature_name = df.columns[feature_idx]
# feature_name
sfm_array =[ True, True, True, False, True, False, True, True, False,False, True, True, False, False]
res = [i for i, val in enumerate(sfm_array) if not val]
X.drop(X.columns[res], axis=1, inplace=True)
X_train,X_test,y_train,y_test=train_test_split(*shuffle(X,y), test_size=0.2, random_state=5)
log_model=LogisticRegression()
log_model.fit(X_train,y_train)
log_model.score(X_train, y_train)
log_model.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
Naive Bayes
###Code
def separate_by_class(dataset):
separated = dict()
for i in range(len(dataset)):
vector = dataset[i]
class_value = vector[-1]
if (class_value not in separated):
separated[class_value] = list()
separated[class_value].append(vector)
return separated
separate_by_class(df)
###Output
_____no_output_____ |
analysis/results_visualization.ipynb | ###Markdown
english
###Code
path = os.path.join(data_dir, 'en', 'conversations.csv')
conv = pd.read_csv(path)
conv.shape
geo_conv = conv.merge(geo_tweets, on='id', how='left')
geo_conv = geo_conv.groupby('geocountry').size().reset_index()
geo_conv.columns = ['geocountry', 'num_tweets']
geo_conv['num_tweets'] = np.log( geo_conv.num_tweets)
geo_conv['info_level'] = geo_conv.num_tweets / geo_conv.num_tweets.sum()
geo_conv.head(10)
countries_ds = pd.DataFrame(countries_geocodes, columns=['geocountry'])
countries_ds = countries_ds.merge(geo_conv, on='geocountry', how='left')
countries_ds.loc[countries_ds.num_tweets.isnull(), 'info_level'] = 0.00001
#countries_ds['info_level'] = np.random.randint( 1, 10, (countries_ds.shape[0],1))
countries_ds.drop_duplicates('geocountry', inplace=True)
countries_ds.shape
from branca.colormap import linear
colormap = linear.YlOrRd_06.scale(
geo_conv.info_level.min(),
geo_conv.info_level.max())
print(colormap(5.0))
colormap
earthquake_dict = countries_ds.set_index('geocountry')['info_level']
earthquake_dict['USA']
m = folium.Map([43, -100], zoom_start=4)
def whatsup(feature):
#print(feature['id'])
return colormap(earthquake_dict[feature['id']])
folium.GeoJson(
geo_json_data,
name='earthquake',
style_function=lambda feature: {
'fillColor': whatsup(feature),
'color': 'black',
'weight': 1,
'dashArray': '5, 5',
'fillOpacity': 0.7,
}
).add_to(m)
folium.LayerControl().add_to(m)
m.save(os.path.join('../results', 'en_viz.html'))
#m
###Output
_____no_output_____
###Markdown
What are they saying?
###Code
path = os.path.join(data_dir, 'tweets_geocodes.csv')
geo_tweets = pd.read_csv(path,dtype={'id':object})
geo_tweets.shape
path = os.path.join(data_dir, '2016_ecuador_eq_es.csv')
annotated_tweets = pd.read_csv(path, parse_dates=['timestamp'],dtype={'id':object})
annotated_tweets.shape
geo_annotated_tweets = annotated_tweets.merge(geo_tweets, on='id', how='inner')
geo_annotated_tweets.shape
geo_annotated_tweets[geo_annotated_tweets.geocountry.isnull()].shape[0]/annotated_tweets.shape[0]
country_cat=geo_annotated_tweets.groupby(['geocountry','choose_one_category']).size().sort_values(ascending=False)
country_cat=country_cat.reset_index()
country_cat.columns = ['geocountry','cat','num']
country_cat['perc'] = country_cat['num'] / country_cat.num.sum()
#country_cat[country_cat.cat=='donation_needs_or_offers_or_volunteering_services'].perc[1:].sum()
country_cat[(country_cat.cat=='injured_or_dead_people')&(country_cat.geocountry!='ECU')].perc.sum()
#country_cat
###Output
_____no_output_____
###Markdown
english
###Code
path = os.path.join(data_dir, '2016_ecuador_eq_en.csv')
annotated_tweets = pd.read_csv(path, parse_dates=['timestamp'],dtype={'id':object})
annotated_tweets.shape
geo_annotated_tweets = annotated_tweets.merge(geo_tweets, on='id', how='inner')
geo_annotated_tweets.shape
geo_annotated_tweets[geo_annotated_tweets.geocountry.isnull()].shape[0]/annotated_tweets.shape[0]
country_cat=geo_annotated_tweets.groupby(['geocountry','choose_one_category']).size().sort_values(ascending=False)
country_cat=country_cat.reset_index()
country_cat.columns = ['geocountry','cat','num']
country_cat['perc'] = country_cat['num'] / country_cat.num.sum()
#country_cat[(country_cat.cat=='donation_needs_or_offers_or_volunteering_services')&(country_cat.geocountry=='ECU')]#.perc[1:].sum()
#country_cat[(country_cat.cat=='donation_needs_or_offers_or_volunteering_services')&(country_cat.geocountry!='ECU')].perc.sum()
#country_cat[(country_cat.cat=='injured_or_dead_people')&(country_cat.geocountry=='ECU')].perc.sum()
country_cat[(country_cat.cat=='injured_or_dead_people')&(country_cat.geocountry!='ECU')].perc.sum()
#country_cat
###Output
_____no_output_____ |
module_9_statistics_probability/probability_mass_function.ipynb | ###Markdown
Describing discrete random variables using Probability Mass Functions
###Code
import numpy as np
import matplotlib.pyplot as plt
##simulate a die rolling experiment
die_rolls_50 = np.random.randint(1, 7, 50)
die_rolls_50
val, freq = np.unique(die_rolls_50, return_counts=True)
print(val, freq)
plt.bar(val, freq/len(die_rolls_50))
##simulating the same exp 10000 times
die_rolls_10k = np.random.randint(1, 7, 10000)
die_rolls_10k
val, freq = np.unique(die_rolls_10k, return_counts=True)
plt.bar(val, freq/len(die_rolls_10k))
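# For a fair die the theoretical PMF puts probability 1/6 on every face; with 10,000
# rolls the empirical bar heights above should sit close to this reference line.
_ = plt.hlines(1/6, 0.5, 6.5, colors='red', linestyles='dashed')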
###Output
_____no_output_____ |
Final_KNN_Algorithm_for_Breast_Cancer.ipynb | ###Markdown
K Nearest Neighbors is simple and can be used for both regression and classification. For classification problems, the algorithm compares the distance from a new observation to each observation in a training set and returns the closest k neighbors of the new observation.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
print("pandas version:", pd.__version__)
import matplotlib
print("matplotlib version:", matplotlib.__version__)
import matplotlib.pyplot as plt
import numpy as np
print("NumPy version:", np.__version__)
import scipy as sp
print("SciPy version:", sp.__version__)
import IPython
print("IPython version:", IPython.__version__)
from IPython.display import display
import sklearn
print("scikit-learn version:", sklearn.__version__)
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import cross_val_score
from sklearn import preprocessing
from sklearn import metrics
import torch
print("Torch version:", torch.__version__)
import torch.nn as nn
import torchvision
print("TorchVision version:", torchvision.__version__)
import torch.optim as optim
from torch.utils import data
from torch.optim import lr_scheduler
from torch.nn.functional import softmax
from torchvision import datasets, models, transforms
import seaborn as sns
import random
import math
from math import sqrt
import time
import copy
import os
from google.colab import drive
###Output
pandas version: 1.1.5
matplotlib version: 3.2.2
NumPy version: 1.19.5
SciPy version: 1.4.1
IPython version: 5.5.0
scikit-learn version: 0.22.2.post1
Torch version: 1.8.1+cu101
TorchVision version: 0.9.1+cu101
###Markdown
The data ships with scikit-learn, so it can be loaded directly from sklearn.datasets.
###Code
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
print("The keys are \n", cancer.keys(), "\n")
print("Shape of cancer data: ", cancer.data.shape, "\n") #569 data points, 30 features
print("Feature names are \n", cancer.feature_names)
print("Sample counts per class: \n", {n: v for n, v in zip(cancer.target_names, np.bincount(cancer.target))})
!pip install mglearn
import mglearn
###Output
Requirement already satisfied: mglearn in /usr/local/lib/python3.7/dist-packages (0.1.9)
Requirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (from mglearn) (1.1.5)
Requirement already satisfied: cycler in /usr/local/lib/python3.7/dist-packages (from mglearn) (0.10.0)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from mglearn) (1.19.5)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from mglearn) (3.2.2)
Requirement already satisfied: joblib in /usr/local/lib/python3.7/dist-packages (from mglearn) (1.0.1)
Requirement already satisfied: pillow in /usr/local/lib/python3.7/dist-packages (from mglearn) (7.1.2)
Requirement already satisfied: imageio in /usr/local/lib/python3.7/dist-packages (from mglearn) (2.4.1)
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.7/dist-packages (from mglearn) (0.22.2.post1)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas->mglearn) (2.8.1)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas->mglearn) (2018.9)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from cycler->mglearn) (1.15.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->mglearn) (2.4.7)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->mglearn) (1.3.1)
Requirement already satisfied: scipy>=0.17.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->mglearn) (1.4.1)
###Markdown
The "k" in k-neighbors represents the number of neighbors to pull in. Varying k in nearest neighbor models changes which distances count towards the decision and therefore the prediction that is generated.
###Code
mglearn.plots.plot_knn_classification(n_neighbors = 1)
###Output
/usr/local/lib/python3.7/dist-packages/sklearn/utils/deprecation.py:87: FutureWarning: Function make_blobs is deprecated; Please import make_blobs directly from scikit-learn
warnings.warn(msg, category=FutureWarning)
###Markdown
Use this function to find the Euclidean distance, which measures the length of the straight line between 2 points (the Pythagorean theorem generalised to several features): the greater the value, the greater the difference between the two rows!
###Code
def euclidean_distance(row1, row2):
distance = 0.0
#make this a float
for dist in range(len(row1) - 1):
distance += (row1[dist] - row2[dist])**2
return sqrt(distance)
#Measure the square difference of 2 points
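# A quick sketch of the helper on two toy rows (the last element plays the role of the
# class label and is ignored by the distance): expected value is sqrt(1^2 + 1^2) ~ 1.414.
row_a = [1.0, 1.0, +1]
row_b = [0.0, 0.0, +1]
toy_distance = euclidean_distance(row_a, row_b)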
###Output
_____no_output_____
###Markdown
Function to find the neighbors of a new observation. It returns the k entries with the lowest distances, as (index, distance) pairs.
###Code
def get_neighbors(train, new_obs, k):
#Trying to locate similar neighbors or "k neighbors"
#Using euclidian distances
"""
    Parameters:
train: a dataset (array)
new_observation: observation of which neighbors are found
k: k-neighbors (number of neighbors to be found (in int))
"""
distances = []
neighbors = []
if type(train) == pd.core.frame.DataFrame:
for dist, rows in train.iterrows():
length = euclidean_distance(new_obs, list(rows))
distances.append((dist, length))
distances.sort(key = lambda tup: tup[1])
else:
for dist, rows in enumerate(train):
            length = euclidean_distance(new_obs, rows)
distances.append((dist, length))
distances.sort(key = lambda tup: tup[1])
for i in range(k):
neighbors.append(distances[i])
return neighbors
###Output
_____no_output_____
###Markdown
Prediction from the nearest neighbor algorithm. Here the label of the single closest neighbor is used, rather than a majority vote over several neighbors.
###Code
"""
Predict the class from new observation from the provided training data
The parameters:
train: training a pandas dataframe or array
new_observation: observation where the neighbors are found
k-neighbors: the number of neighbors to be found
"""
def predict_classification(train, new_obs, k):
neighbors = get_neighbors(train, new_obs, k) #compile a list of neighbors
n_index = neighbors[0][0]
#the index for the closest neighbors using splicing
if type(train) == pd.core.frame.DataFrame:
loc = train.columns[-1]
pred = train[loc][n_index] #the labels are on the last column of the dataframe
else:
pred = train[n_index][-1]
return pred
###Output
_____no_output_____
###Markdown
Accuracy is the number of correct predictions divided by the total number of predictions.
###Code
def accuracy(x, y):
correct = 0
for i in range(len(x)):
if type(x) == pd.core.series.Series:
if x.iloc[i] == y[i]:
correct += 1
else:
if x[i] == y[i]:
correct += 1
return correct / float(len(x))
from sklearn.model_selection import train_test_split
cols = ["sepal_len", "sepal_wid", "petal_len", "petal_wid", "class"]
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
df = pd.read_csv(url, names = cols)
names = []
for x in df["class"]:
x = x.replace("Iris-", "")
names.append(x)
df["class"] = names
labels = []
for x in df["class"]:
x = x.replace("versicolor","0")
x = x.replace("virginica","1")
x = x.replace("setosa","2")
x = int(x)
labels.append(x)
df["class"] = labels
train, test = train_test_split(df, train_size=0.70, test_size=0.30, random_state=5)
target = "class"
X_test = test.drop(target, axis=1)
y_test = test[target]
# Generate Predictions
predictions = []
###Output
_____no_output_____
###Markdown
KNN Class in OOP
###Code
class KNearestNeighbor:
def __init__(self, n_neighbors):
self.n_neighbors = n_neighbors
self.train = None
def __euclidean_distance(self, row1, row2):
"""
The square root of the sum of the squared differences between two vectors.
The smaller the value, the more similar two records will be.
Value of 0 indicates no difference.
euclidian distance = sqrt(sum i to N (x1_i - x2_i)^2)
"""
# 0.0 so that distance will float
distance = 0.0
# loop for columns
for i in range(len(row1) - 1):
# squared difference between the two vectors
distance += (row1[i] - row2[i])**2
return sqrt(distance)
def fit(self, train):
"""Fits model to training data"""
self.train = train
def __get_neighbors(self, train, new_obs, k):
"""
Locates most similar neighbors via euclidian distance.
Params:
train: a dataset
new_obs: a new observation; observation for which neighbors are to be found
k: k-neighbors; the number of neighbors to be found (int)
"""
distances = []
neighbors = []
# Rules for whether or not train is a pandas.DataFrame
if type(train) == pd.core.frame.DataFrame:
for i,row in train.iterrows():
# calculate distance
d = self.__euclidean_distance(new_obs, list(row))
# fill distances list with tuples of row index and distance
distances.append((i, d))
# sort distances by second value in tuple
distances.sort(key=lambda tup: tup[1])
else:
for i,row in enumerate(train):
# calculate distance
d = self.__euclidean_distance(new_obs, row)
# fill distances list with tuples of row index and distance
distances.append((i, d))
# sort distances by second value in tuple
distances.sort(key=lambda tup: tup[1])
for i in range(k):
# Grabs k-records from distances list
neighbors.append(distances[i])
return neighbors
def predict(self, train, new_obs):
"""
Predicts a class label on a new observation from provided training data.
Params:
new_obs: a new observation; observation for which neighbors are to be found
k: k-neighbors; the number of neighbors to be found (int)
"""
self.train = train #> for some reason, defining the model again with passing
#> in train with method call brought accuracy up to 95%,
#> whereas without this, accuracy was 31%. Not clear why
#> this is the case since self.train is already defined in
#> the `model.fit()` call ...
# Compile list of neighbors
neighbors = self.__get_neighbors(self.train, new_obs, self.n_neighbors)
# Grab index of the closest neighbor
n_index = neighbors[0][0]
# Add rules for if train is a pandas.DataFrame
if type(self.train) == pd.core.frame.DataFrame:
# Assumes labels are in last column of dataframe
loc = self.train.columns[-1]
pred = self.train[loc][n_index]
else:
# Prediction is the label from train record at n_index location. Assumes label
# is at end of record.
pred = self.train[n_index][-1]
return pred
def score(self, x, y):
"""
Calculates accuracy of predictions (on classification problems).
Params:
x: actual, or correct labels
        y: predicted labels
"""
correct = 0
for i in range(len(x)):
# Rules for if `x` is a pandas.Series
if type(x) == pd.core.series.Series:
if x.iloc[i] == y[i]:
correct += 1
else:
if x[i] == y[i]:
correct += 1
return correct / float(len(x))
nn = KNearestNeighbor(n_neighbors=3)
predictions = []
for _, obs in X_test.iterrows():
pred = nn.predict(train, list(obs))
predictions.append(pred)
print(f"Iris KNearestNeighbors Accuracy: {accuracy(y_test, predictions):.2f}")
###Output
Iris KNearestNeighbors Accuracy: 0.02
###Markdown
Comparing the built KNN Algorithm to the built in algorithm!
###Code
from sklearn.neighbors import KNeighborsClassifier
X_train = train.drop(target, axis=1)
y_train = train[target]
scitkit_nn = KNeighborsClassifier(n_neighbors=3)
scitkit_nn.fit(X_train, y_train)
scitkit_preds = scitkit_nn.predict(X_test)
print(f"Scikit-learn KNeighborsClassifier on Iris Accuracy: {scitkit_nn.score(X_test, y_test):.2f}")
###Output
Scikit-learn KNeighborsClassifier on Iris Accuracy: 0.96
###Markdown
As seen here, the Scikit-Learn preinstalled KNeighbors Classifier is so much better than the one built here!
###Code
#Breast Cancer Data
from sklearn.model_selection import train_test_split
cancer_cols = ["Class", "Age", "Menopause", "Tumor-size", "inv-nodes", "node-caps", "deg-malig", "breast", "breast-quad", "irradiat"]
cancer_url = "https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer/breast-cancer.data"
cancer_df = pd.read_csv(cancer_url, names = cancer_cols)
names = []
for x in cancer_df["Class"]:
x = x.replace("-events", "")
names.append(x)
cancer_df["Class"] = names
labels = []
for x in cancer_df["Class"]:
x = x.replace("no-recurrrence","0")
x = x.replace("recurrence","1")
labels.append(x)
cancer_df["Class"] = labels
cancer_train, cancer_test = train_test_split(cancer_df, train_size=0.70, test_size=0.30, random_state=5)
target = "Class"
X_test = cancer_test.drop(target, axis=1)
y_test = cancer_test[target]
# Generate Predictions
predictions = []
nn = KNearestNeighbor(n_neighbors=3)
predictions = []
for _, obs in X_test.iterrows():
pred = nn.predict(cancer_train, list(obs))
predictions.append(pred)
print(f"Cancer KNearestNeighbors Accuracy: {accuracy(y_test, predictions):.2f}")
###Output
_____no_output_____
###Markdown
Preprocessing is important! The error here was that the dataset was not preprocessed: it does not make sense to compute Euclidean distances on strings rather than floats. So download the Wisconsin Breast Cancer data and preprocess it locally!"https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.data"
###Code
drive.mount('/content/drive/', force_remount = True)
os.chdir("/content/drive/My Drive/Final Project_Data/")
matplotlib.style.use('ggplot')
df=pd.read_csv('wdbc.data', names=['ID','Diagnosis','Radius','Texture','Perimeter','Area','Smoothness',
'Compactness','Concavity','Concave_point','Symmetry','Fractal_dimensions',
'RadiusSE','TextureSE','PerimeterSE','AreaSE','SmoothnessSE',
'CompactnessSE','ConcavitySE','Concave_pointSE','SymmetrySE','Fractal_dimensionsSE',
'RadiusW','TextureW','PreimeterW','AreaW','SmoothnessW',
'CompactnessW','ConcavityW','Concave_pointW','SymmetryW','Fractal_dimensionsW'],engine='c')
df.head()
print(df.dtypes)
df.head()
df.describe()
df.info()
print(df.shape)
###Output
(569, 32)
###Markdown
Convert the strings to numbers to avoid another error.
###Code
def diagnosis_value(diagnosis):
if diagnosis == 'M':
return 1
else:
return 0
df['Diagnosis'] = df['Diagnosis'].apply(diagnosis_value)
###Output
_____no_output_____
###Markdown
Normalize the data
###Code
X = np.array(df.iloc[:, 1:])
y = np.array(df['Diagnosis'])
# Standardise the features before any distance-based modelling
X = preprocessing.StandardScaler().fit(X).transform(X.astype(float))
X[0:5]
sns.lmplot(x = 'Radius', y = 'Texture', hue = 'Diagnosis', data = df)
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size = 0.33, random_state = 42)
knn = KNeighborsClassifier(n_neighbors = 1)
knn.fit(X_train, y_train)
knn.score(X_test, y_test)
knn = KNeighborsClassifier(n_neighbors = 2)
knn.fit(X_train, y_train)
knn.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
So changing the value of k can change the accuracy! Let's experiment further with this!
###Code
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2)
print ('Train set:', X_train.shape, y_train.shape)
print ('Test set:', X_test.shape, y_test.shape)
k = 10
#Train Model and Predict
neigh = KNeighborsClassifier(n_neighbors = k).fit(X_train,y_train)
print(neigh)
#predicting part
yhat = neigh.predict(X_test)
yhat[0:5]
mean_acc = np.zeros((k-1))
std_acc = np.zeros((k-1))
for n in range(1,k):
#Train Model and Predict
neigh = KNeighborsClassifier(n_neighbors = n).fit(X_train,y_train)
yhat=neigh.predict(X_test)
mean_acc[n-1] = metrics.accuracy_score(y_test, yhat)
std_acc[n-1]=np.std(yhat==y_test)/np.sqrt(yhat.shape[0])
print("Train set Accuracy: ", metrics.accuracy_score(y_train, neigh.predict(X_train)))
print("Test set Accuracy: ", metrics.accuracy_score(y_test, yhat))
plt.plot(range(1,k),mean_acc,'g')
plt.fill_between(range(1,k),mean_acc - 1 * std_acc,mean_acc + 1 * std_acc, alpha=0.10)
plt.legend(('Accuracy ', '+/- 3 std'))
plt.ylabel('Accuracy ')
plt.xlabel('Number of estimators (K)')
plt.tight_layout()
plt.show()
print( "The best accuracy was with", mean_acc.max(), "with k=", mean_acc.argmax()+1)
###Output
The best accuracy was with 0.973404255319149 with k= 7
###Markdown
But that was only in the range of 1 to 10. What if the optimal k value lies outside that range? Let's find the optimal k value. Cross Validation Now we perform 10-fold cross-validation to find the optimal number of neighbors in order to get a better prediction!
###Code
neighbors = []
cv_scores = []
# perform 10 fold cross validation
for k in range(1, 51, 2):
neighbors.append(k)
knn = KNeighborsClassifier(n_neighbors = k)
scores = cross_val_score(
knn, X_train, y_train, cv = 10, scoring = 'accuracy')
cv_scores.append(scores.mean())
MSE = [1-x for x in cv_scores]
# determining the best k
optimal_k = neighbors[MSE.index(min(MSE))]
print('The optimal number of k neighbors to use is % d ' % optimal_k)
# plot misclassification error versus k
plt.figure(figsize = (15, 6))
plt.plot(neighbors, MSE)
plt.xlabel('Neighbors Number')
plt.ylabel('Misclassification Error')
plt.show()
###Output
The optimal number of k neighbors to use is 13
###Markdown
Now let's check out a model with n_neighbors = 7, the best value from the earlier 1 to 10 sweep!
###Code
knn = KNeighborsClassifier(n_neighbors = 7)
knn.fit(X_train, y_train)
knn.score(X_test, y_test)
###Output
_____no_output_____ |
examples/time_series_autocorrelation.ipynb | ###Markdown
Test Difference of Proportions, based on [this example](https://online.stat.psu.edu/stat415/lesson/9/9.4)
###Code
## non-smoker data
n1 = 605. # total number of participants
y1 = 351. # number who answered "yes"
## smoker data
n2 = 195. # total number of participants
y2 = 41. # number who answered "yes"
# Null hypothesis is that p1 = p2
# proportion of sample 1 is equal to sample 2
# two-tailed test will be required
def test_diff_proportion(ts1, ts2, alpha):
'''
Calculate the test statistic for testing
the difference in two population proportions
Y1 : the number sample 1 that answer 'yes'
Y2 : the number of sample 2 that answer 'yes'
n1 : the size of sample 1
n2 : the size of sample 2
alpha : significance level
return
Z : the test statistic
p : the p-value at the significance level (alpha)
'''
n1 = len(ts1)
n2 = len(ts2)
y1 = ts1.sum()
y2 = ts2.sum()
p1 = y1/n1 # proportion of sample 1 who said yes
p2 = y2/n2 # proportion of sample 2 who said yes
    phat = (y1 + y2)/(n1 + n2)  # pooled proportion under the null hypothesis
print('phat: ', phat)
std_err = np.sqrt(phat*(1-phat)*(1/n1 + 1/n2))
Z = ((p1 - p2) - 0)/(std_err)
print('Z: value: ', Z)
# Calculate the p-value
# based on the standard normal distribution z-test
pvalue = 2*dist.norm.cdf(-np.abs(Z)) # Multiplied by two indicates a two tailed testing.
print("Computed P-value is", pvalue)
if pvalue < alpha:
print('Reject null hypothesis, statistical significance found')
test_statistic(y1, y2, n1, n2, 0.05)
###Output
0.5801652892561984 0.21025641025641026
phat: 0.49
8.985900954503084
Computed P-value is 2.566230446480293e-19
Reject null hypothesis, statistical significance found
Critical t-value: 1.6467653442385173
|
notebooks/.ipynb_checkpoints/Synthesizing Anomalies with Autoencoders-checkpoint.ipynb | ###Markdown
Synthesizing Anomalies with Autoencoders This proposed method utilizes an overfitted autoencoder model to synthesize anomalies. This procedure is in line with the principles discussed in the paper Systematic Construction of Anomaly Detection Benchmarks from Real Data (Emmott et. al.).
###Code
%matplotlib inline
# Import necessary libraries
import sys
sys.path.append("../pyanomaly/lib")
from deep_autoencoder import DeepAutoencoder
import pandas as pd
import random
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
# Utility method to sample values on tail ends of a distribution given mean and standard deviation
# n refers to the number of samples generated
def sample_tails(mu, sig, n=1000):
samples = np.array(sorted(np.random.normal(mu, sig, n)))
q1, q3 = np.percentile(samples, [25, 75])
iqr = q3 - q1
lower_bound = q1 - (iqr * 1.5)
upper_bound = q3 + (iqr * 1.5)
return np.concatenate((samples[samples < lower_bound], samples[samples > upper_bound]))
# The configuration for the autoencoder
config = {
"input_size": 11,
"o_activation": "sigmoid",
"h_activation": "relu",
"optimizer": {
"name": "adam",
"learning_rate": 0.001,
"momentum": 0.0,
"decay": 0.0
},
"encoding_layers": [
{ "size": 9, "activation": "relu", "bias": 1.0 },
{ "size": 5, "activation": "relu", "bias": 1.0 }
],
"decoding_layers": [
{ "size": 9, "activation": "relu", "bias": 1.0 },
{ "size": 11, "activation": "sigmoid", "bias": 1.0 }
],
"epochs": 5,
"loss": "mse",
"bias": 1.0,
"batch_size": 10
}
autoencoder = DeepAutoencoder(config)
autoencoder.compile()
autoencoder.summary()
###Output
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
sequential (Sequential) (None, 5) 60
_________________________________________________________________
sequential_1 (Sequential) (None, 11) 66
=================================================================
Total params: 126
Trainable params: 126
Non-trainable params: 0
_________________________________________________________________
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 5) 60
=================================================================
Total params: 60
Trainable params: 60
Non-trainable params: 0
_________________________________________________________________
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 11) 66
=================================================================
Total params: 66
Trainable params: 66
Non-trainable params: 0
_________________________________________________________________
###Markdown
Training Train the model until it overfits.
###Code
input_file = "../pyanomaly/data/magic04_normalized_gamma.csv"
input_data = pd.read_csv(input_file)
input_data_no_labels = input_data.drop(['y'], axis=1)
autoencoder.train(input_data_no_labels)
###Output
Epoch 1/5
1234/1234 [==============================] - 6s 5ms/step - loss: 0.0184 - acc: 0.9975
Epoch 2/5
1234/1234 [==============================] - 6s 5ms/step - loss: 0.0043 - acc: 0.9997
Epoch 3/5
1234/1234 [==============================] - 6s 5ms/step - loss: 0.0021 - acc: 0.9997
Epoch 4/5
1234/1234 [==============================] - 6s 5ms/step - loss: 0.0014 - acc: 0.9997
Epoch 5/5
1234/1234 [==============================] - 7s 5ms/step - loss: 0.0011 - acc: 0.9997
###Markdown
Encoding and Latent Parameters Create a dataset corresponding to the encoded version of the input data, then create a vector of distribution parameters based on the encoded data.
###Code
encoded_data = autoencoder.encode(input_data_no_labels)
df_encoded_data = pd.DataFrame(data=encoded_data)
df_encoded_data_mean = df_encoded_data.mean(axis=0)
df_encoded_data_std = df_encoded_data.std(axis=0)
predicted_data = autoencoder.predict(input_data_no_labels)
print("Encoded Data Mean")
print(df_encoded_data_mean)
print("Encoded Data Standard Deviation")
print(df_encoded_data_std)
###Output
Encoded Data Mean
0 2.047977
1 1.615716
2 1.492735
3 1.784805
4 0.931840
dtype: float32
Encoded Data Standard Deviation
0 0.513069
1 0.497778
2 0.342301
3 0.706304
4 0.388156
dtype: float32
###Markdown
Sampling Anomalous Data From the mean and std of the encoded data, randomly select x dimensions and sample from their tail ends. Dimensions that do not belong to x will use sampled data from the original encoded set. For example, if we set `stochastic_dimension_count` to `5` it means we randomly select `5` dimensions from the encoded vector's dimensionality and sample tail-end values from a normal distribution of those `5` dimensions. The remaining dimensions' values will be sampled randomly from existing data in `df_encoded_data`. The intuition is to create encoded data that is sampled from latent space based on the "oracle" (in this case, the autoencoder). This captures the relevant information as computed by the neural network. Relative Frequency According to the paper, relative frequency `K` is the ratio of anomalous datapoints existing in a dataset. Example values would be `0.001`, `0.01`, `0.1`. In terms of sampling, we synthesize `K * num_datapoints` anomalous datapoints.
###Code
stochastic_dimension_count = 2
stochastic_dimensions = random.sample(range(len(df_encoded_data.columns)), stochastic_dimension_count)
relative_frequency = 0.01
num_datapoints = len(input_data.index)
num_to_synthesize = round(relative_frequency * num_datapoints)
sampled_original = (df_encoded_data.sample(num_to_synthesize)).reset_index(drop=True)
# Loop through each sampled_original and sample a tail end for each dimension in stochastic_dimensions
synthetic_data = sampled_original.copy()
for index, row in synthetic_data.iterrows():
for d in stochastic_dimensions:
tail_values = sample_tails(df_encoded_data_mean.values[d], df_encoded_data_std.values[d], 10000)
sampled_outlier_feature = random.choice(tail_values)
synthetic_data.at[index, d] = sampled_outlier_feature
print("Stochastic Dimensions:")
print(stochastic_dimensions)
print("Number of datapoints:", num_datapoints)
print("Number of anomalous points to synthesize:", num_to_synthesize)
print("Sampled Original:")
print(sampled_original)
print("Synthetic Data:")
print(synthetic_data)
print(synthetic_data - sampled_original)
###Output
Stochastic Dimensions:
[4, 3]
Number of datapoints: 12332
Number of anomalous points to synthesize: 1233
Sampled Original:
0 1 2 3 4
0 1.837891 2.003318 1.254090 2.140128 1.365250
1 2.113434 1.904563 1.735907 1.531288 0.392458
2 1.773737 1.327538 1.675419 2.223125 1.005364
3 2.114700 1.864477 1.510483 1.727374 0.820564
4 1.099463 0.411052 2.110406 2.755559 0.927806
... ... ... ... ... ...
1228 1.510326 1.114374 1.915824 2.417154 0.729046
1229 2.009846 1.801640 1.355264 1.943054 1.149281
1230 1.959446 1.481402 1.708193 2.060549 0.831175
1231 2.372378 1.305571 1.271079 1.535080 1.261582
1232 1.612499 1.562196 1.711550 2.220245 0.633377
[1233 rows x 5 columns]
Synthetic Data:
0 1 2 3 4
0 1.837891 2.003318 1.254090 3.999917 -0.219636
1 2.113434 1.904563 1.735907 -0.176547 2.005381
2 1.773737 1.327538 1.675419 -0.304861 -0.165976
3 2.114700 1.864477 1.510483 3.662202 2.173570
4 1.099463 0.411052 2.110406 3.769748 2.012355
... ... ... ... ... ...
1228 1.510326 1.114374 1.915824 -0.314298 1.999428
1229 2.009846 1.801640 1.355264 3.747456 2.130700
1230 1.959446 1.481402 1.708193 -0.225299 2.068302
1231 2.372378 1.305571 1.271079 -0.303773 -0.158744
1232 1.612499 1.562196 1.711550 -0.269697 -0.188669
[1233 rows x 5 columns]
0 1 2 3 4
0 0.0 0.0 0.0 1.859789 -1.584885
1 0.0 0.0 0.0 -1.707835 1.612922
2 0.0 0.0 0.0 -2.527987 -1.171340
3 0.0 0.0 0.0 1.934829 1.353006
4 0.0 0.0 0.0 1.014189 1.084549
... ... ... ... ... ...
1228 0.0 0.0 0.0 -2.731452 1.270382
1229 0.0 0.0 0.0 1.804401 0.981419
1230 0.0 0.0 0.0 -2.285848 1.237127
1231 0.0 0.0 0.0 -1.838853 -1.420326
1232 0.0 0.0 0.0 -2.489942 -0.822046
[1233 rows x 5 columns]
###Markdown
Synthesize Anomalous Data Points To synthesize anomalous datapoints, we decode the synthetic data and map it back to its original dimensionality using the trained weights from the original autoencoder.
###Code
original_datapoints = input_data_no_labels.values
reconstructed_datapoints = autoencoder.decode(encoded_data)
synthesized_datapoints = autoencoder.decode(synthetic_data)
original_length = len(original_datapoints)
reconstructed_length = len(reconstructed_datapoints)
synthesized_length = len(synthesized_datapoints)
print("Original Lengt:", original_length)
print("Reconstructed Length:", reconstructed_length)
print("Synthesized Length:", synthesized_length)
###Output
Original Length: 12332
Reconstructed Length: 12332
Synthesized Length: 1233
###Markdown
Visualization Visualize the data using t-SNE.
###Code
# Create a dataframe with labels 0 for normal data and 1 for anomalous data
anomalous_data_labels = np.ones((synthesized_length, 1))
original_data_labels = np.zeros((original_length, 1))
anomalous_data_with_labels = np.append(synthesized_datapoints, anomalous_data_labels, axis=1)
original_data_with_labels = np.append(original_datapoints, original_data_labels, axis=1)
reconstructed_data_with_labels = np.append(reconstructed_datapoints, original_data_labels, axis=1)
X = np.concatenate((anomalous_data_with_labels, original_data_with_labels))
X_r = np.concatenate((anomalous_data_with_labels, reconstructed_data_with_labels))
tsne = TSNE(n_components=2, random_state=0)
X_2d = tsne.fit_transform(np.delete(X, np.s_[-1:], axis=1))
X_2d_r = tsne.fit_transform(np.delete(X_r, np.s_[-1:], axis=1))
plt.figure(figsize=(6,5))
plt.title("Gamma Dataset")
plt.scatter(X_2d[0:(synthesized_length - 1),0], X_2d[0:(synthesized_length - 1),1], c='r', label='Anomalous', alpha=0.8)
plt.scatter(X_2d[(synthesized_length):-1,0], X_2d[(synthesized_length):-1,1], c='b', label='Normal', alpha=0.8)
plt.figure(figsize=(6,5))
plt.title("Gamma Dataset (reconstructed)")
plt.scatter(X_2d_r[0:(synthesized_length - 1),0], X_2d_r[0:(synthesized_length - 1),1], c='r', label='Anomalous', alpha=0.8)
plt.scatter(X_2d_r[(synthesized_length):-1,0], X_2d_r[(synthesized_length):-1,1], c='b', label='Normal', alpha=0.8)
###Output
_____no_output_____ |
notebooks/weekly_test_summary_fast.ipynb | ###Markdown
Fast generation of positive and negative test result counts by period
###Code
%matplotlib inline
import math
import numpy as np
from numba import njit
import matplotlib.pyplot as plt
from exetera.core.session import Session
from exetera.core.utils import Timer
from exetera.processing.date_time_helpers import\
get_periods, generate_period_offset_map, get_days, get_period_offsets
###Output
_____no_output_____
###Markdown
Helper functions
###Code
def human_readable_date(date):
'''
    Convert a float timestamp into a human-readable date string.
'''
if isinstance(date, float):
date = datetime.fromtimestamp(date)
return date.strftime("%Y-%m-%d")
###Output
_____no_output_____
###Markdown
Fill in these parameters
###Code
from datetime import datetime, timedelta
filename = # filename
start_dt = # the starting datetime
end_dt = # the ending datetime
###Output
_____no_output_____
###Markdown
Generate the summaries by seven day period Generate the seven day periods corresponding to the start and end dates
###Code
# Split the date range into seven-day periods
start_ts = start_dt.timestamp()
end_ts = end_dt.timestamp()
periods = get_periods(end_dt, start_dt, 'week', -1)
periods.reverse()
print("Weekly periods from {} to {}".format(human_readable_date(periods[0]),
human_readable_date(periods[-1])))
###Output
_____no_output_____
###Markdown
Create the Session object Note: you can also use `with Session() as s:` if you don't mind opening the session in each cell
###Code
s = Session() # Open the ExeTera session
src = s.open_dataset(filename, 'r', 'src') # Open the dataset with read-only 'r' mode
test_df = src['tests'] # Get the dataframe named 'tests'
###Output
_____no_output_____
###Markdown
Get the timestamp of each test
###Code
with Timer("Fetching test 'date_taken_specific' values"): # Record the time usage
test_dates = test_df['date_taken_specific'].data[:] # Load all the data into memory
###Output
_____no_output_____
###Markdown
Calculate on what day (relative to the start of the first period) each test was taken. `get_days` also returns a filter indicating whether a given record is within the date range of interest.
###Code
with Timer("Calculating day offsets for tests"): # Record the time usage
# Converts a field of timestamps into a field of relative elapsed days
test_days, inrange = get_days(test_dates,
start_date=periods[0].timestamp(),
end_date=periods[-1].timestamp())
###Output
_____no_output_____
###Markdown
Clear the days that fall outside of the specified range
###Code
with Timer("Filter out days that fall outside of the specified range"):
test_days = test_days[inrange]
###Output
_____no_output_____
###Markdown
Map the days to their corresponding periods We generate the map using `generate_period_offset_map` and then pass it to `get_period_offsets`
###Code
with Timer("Convert from days to periods"):
test_periods = get_period_offsets(generate_period_offset_map(periods),
test_days)
# cat_counts = np.unique(cat_period, return_counts=True)
###Output
_____no_output_____
###Markdown
Generate 'positive' and 'negative' test filters Ignore all other test results
###Code
with Timer("Generate positive and negative status arrays"):
positive = test_df['result'].apply_filter(inrange) == 4 # Filter created according to data value defined in scheme
negative = test_df['result'].apply_filter(inrange) == 3
###Output
_____no_output_____
###Markdown
Summarise positive and negative by period
###Code
with Timer("Summarise positive and negative test counts by period"):
negative_counts = np.unique(test_periods[negative.data[:]], return_counts=True) # Count number of negative tests in each period
all_negative_counts = np.zeros(len(periods), dtype=np.int32)
for k, v in zip(negative_counts[0], negative_counts[1]):
all_negative_counts[k] = v # Assign the counts to an array
positive_counts = np.unique(test_periods[positive.data[:]], return_counts=True) # Similar to positive tests
all_positive_counts = np.zeros(len(periods), dtype=np.int32)
for k, v in zip(positive_counts[0], positive_counts[1]):
all_positive_counts[k] = v
###Output
_____no_output_____
###Markdown
Generate the charts for positive / (positive + negative) test results
###Code
width = 1
widths = [width * d for d in range(len(periods))]
fig, ax = plt.subplots(2, 1, figsize=(10, 10))
negtests = ax[0].bar(widths, all_negative_counts)
postests = ax[0].bar(widths, all_positive_counts, bottom=all_negative_counts)
ax[0].set_title("Negative and positive test counts by week")
ax[0].set_xticks(np.arange(len(periods)-1))
ax[0].set_xticklabels([human_readable_date(d) for d in periods[:-1]], rotation=270)
ax[0].legend((negtests, postests), ("'Negative'", "'Positive'"))
ax[0].set_xlabel("Week starting")
ax[0].set_ylabel("Tests per week")
all_counts = all_negative_counts + all_positive_counts
all_counts = np.where(all_counts == 0, 1, all_counts)
pos_fraction = all_positive_counts / all_counts
pfbar = ax[1].bar(widths, pos_fraction, color="#ff7f0e")
ax[1].set_title("Positive tests by fraction of all definite results by week")
ax[1].set_xticks(np.arange(len(periods)-1))
ax[1].set_xticklabels([human_readable_date(d) for d in periods[:-1]], rotation=270)
ax[1].legend((pfbar,), ("Positive test fraction",))
ax[1].set_xlabel("Week starting")
ax[1].set_ylabel("Positive test fraction")
fig.tight_layout(h_pad=2.5)
plt.show()
# Close the session manually; not needed if opening the session using 'with' statement.
s.close()
###Output
_____no_output_____ |
openCV_tutorials/mini projedct (Shape identification).ipynb | ###Markdown
Shape identification Mini project > various basic shapes are loaded from an image file
###Code
def im(image,name="image"):
cv2.imshow(name,image)
cv2.waitKey(0)
cv2.destroyAllWindows()
import cv2
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
shape=cv2.imread("shapes_5.jpg")
im(shape)
def find_contours(shape):
temp_shape=cv2.cvtColor(shape,cv2.COLOR_BGR2GRAY)
can=cv2.Canny(temp_shape,30,200)
con_img,contours,hierarchy=cv2.findContours(can,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
return contours
# cv2.drawContours(shape,contours,-1,(0,255,0),5)
# plt.imshow(shape)
def center_of_contour(x_c):
M=cv2.moments(x_c)
return (int(M['m10']/M['m00']),int(M['m01']/M['m00']))
def x_cord_contour(x_c):
if cv2.contourArea(x_c) >9:
return center_of_contour(x_c)[0]
def shape_id(shape,contour,name,color=(0,255,0),magnifier=0.2):
    # Fill the given contour with the chosen colour and label it at its centroid
    cv2.drawContours(shape,[contour],0,color,-1)
    cv2.putText(shape,name,center_of_contour(contour),cv2.FONT_HERSHEY_COMPLEX,magnifier,(255,255,255),1)
contours=find_contours(shape)
for c in contours:
accuracy=0.01*cv2.arcLength(c,True)
approx=cv2.approxPolyDP(c,accuracy,True)
if len(approx)==3:
shape_name="triangle"
cv2.drawContours(shape,[c],0,(0,255,0),-1)
cv2.putText(shape,shape_name,center_of_contour(c),cv2.FONT_HERSHEY_COMPLEX,0.2,(255,255,255),1)
elif(len(approx)==4):
x,y,w,h=cv2.boundingRect(c)
if w==h:
shape_id(shape,c,'square',color=(255,0,0))
else:
shape_id(shape,c,'rectangle',color=(2,100,50))
elif(len(approx)==10):
shape_id(shape,c,'star',color=(2,25,70))
elif(len(approx)>20):
shape_id(shape,c,'circle',color=(25,255,0))
print(len(approx))
im(shape)
cv2.waitKey()
cv2.destroyAllWindows()
###Output
10
15
3
4
4
|
instance_classification/simple_knn_eda.ipynb | ###Markdown
Simple classification ---_You are currently looking at **version 1.0** of this notebook._--- Import
###Code
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
###Output
_____no_output_____
###Markdown
Data
###Code
path = !find ../.. | grep -i fruit_data_with_colors
path
fruits = pd.read_table(path[0])
fruits.head()
###Output
_____no_output_____
###Markdown
Map labels to name in dictionary
###Code
lookup_fruit_name = dict(fruits.loc[:, ['fruit_label', 'fruit_name']].sort_values('fruit_label').values)
lookup_fruit_name
###Output
_____no_output_____
###Markdown
Exploring the data Train-test split for selected features- default is 75% / 25% train-test split
###Code
X, y = fruits[['height', 'width', 'mass', 'color_score']], fruits['fruit_label']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
###Output
_____no_output_____
###Markdown
Plot Scatter matrix Set colormap
###Code
from matplotlib import cm
cmap = cm.get_cmap('gnuplot')
figsize = (10, 10)
###Output
_____no_output_____
###Markdown
Plot
###Code
scatter = pd.plotting.scatter_matrix(X_train, c=y_train, marker='o', s=40, hist_kwds={'bins':15}, figsize=figsize, cmap=cmap)
###Output
_____no_output_____
###Markdown
3D scatter plot
###Code
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=figsize)
ax = fig.add_subplot(111, projection = '3d')
ax.scatter(X_train['width'], X_train['height'], X_train['color_score'], c=y_train, marker='o', s=100)
ax.set_xlabel('width')
ax.set_ylabel('height')
ax.set_zlabel('color_score')
plt.show();
###Output
_____no_output_____
###Markdown
Train-test split for selected features
###Code
X, y = fruits[['mass', 'width', 'height']], fruits['fruit_label']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
###Output
_____no_output_____
###Markdown
k-NN Classifier - KNeighborsClassifier
###Code
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Estimate the accuracy of the classifier on test data
###Code
knn.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
Use the trained model to classify new(unseen) objects
###Code
# first example: a small fruit with mass 20g, width 4.3 cm, height 5.5 cm
fruit_prediction = knn.predict([[20, 4.3, 5.5]])
fruit_prediction[0], lookup_fruit_name[fruit_prediction[0]]
# second example: a larger, elongated fruit with mass 100g, width 6.3 cm, height 8.5 cm
fruit_prediction = knn.predict([[100, 6.3, 8.5]])
fruit_prediction[0], lookup_fruit_name[fruit_prediction[0]]
###Output
_____no_output_____
###Markdown
Plot the decision boundaries of the k-NN classifier
###Code
def plot_fruit_knn(X, y, n_neighbors, weights):
from matplotlib.colors import ListedColormap, BoundaryNorm
import matplotlib.patches as mpatches
X_mat = X[['height', 'width']].as_matrix()
y_mat = y.as_matrix()
# Create color maps
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF','#AFAFAF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF','#AFAFAF'])
clf = KNeighborsClassifier(n_neighbors, weights=weights)
clf.fit(X_mat, y_mat)
# Plot the decision boundary by assigning a color in the color map
# to each mesh point.
mesh_step_size = .01 # step size in the mesh
plot_symbol_size = 50
x_min, x_max = X_mat[:, 0].min() - 1, X_mat[:, 0].max() + 1
y_min, y_max = X_mat[:, 1].min() - 1, X_mat[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, mesh_step_size),
np.arange(y_min, y_max, mesh_step_size))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot training points
plt.scatter(X_mat[:, 0], X_mat[:, 1], s=plot_symbol_size, c=y, cmap=cmap_bold, edgecolor = 'black')
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
patch0 = mpatches.Patch(color='#FF0000', label='apple')
patch1 = mpatches.Patch(color='#00FF00', label='mandarin')
patch2 = mpatches.Patch(color='#0000FF', label='orange')
patch3 = mpatches.Patch(color='#AFAFAF', label='lemon')
plt.legend(handles=[patch0, patch1, patch2, patch3])
plt.xlabel('height (cm)')
plt.ylabel('width (cm)')
plt.show()
plot_fruit_knn(X_train, y_train, 5, 'uniform') # we choose 5 nearest neighbors
###Output
_____no_output_____
###Markdown
k-NN classification accuracy with respect to the 'k' parameter
###Code
k_range = range(1, 20)
scores = []
for k in k_range:
knn = KNeighborsClassifier(n_neighbors = k)
knn.fit(X_train, y_train)
scores.append(knn.score(X_test, y_test))
plt.figure()
plt.xlabel('k')
plt.ylabel('accuracy')
plt.plot(k_range, scores, 'o-')
plt.xticks([0,5,10,15,20]);
###Output
_____no_output_____
###Markdown
k-NN classification (test) accuracy with respect to the train/test split
###Code
train_proportions = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
knn = KNeighborsClassifier(n_neighbors=5)
plt.figure()
mean_scores = []
for perc in train_proportions:
scores = []
for _ in range(1, 100):
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1-perc)
knn.fit(X_train, y_train)
scores.append(knn.score(X_test, y_test))
mean_scores.append(np.mean(scores))
plt.plot(train_proportions, mean_scores, 'o-')
plt.xlabel('Training set proportion')
plt.ylabel('accuracy');
###Output
_____no_output_____ |
nxt/sensors/index.ipynb | ###Markdown
Sensors There are four different sensors mounted on and connected to the robot:[](http://www.legoengineering.com/nxt-sensors/) Let's check how each of them works. First, we need some functions and, as always, to connect to the robot.
###Code
from functions import connect, touch, light, sound, ultrasonic, disconnect
connect(12)
###Output
_____no_output_____
###Markdown
Touch sensor It is a push button which, depending on whether it is pressed or not, returns a true (`True`) or false (`False`) value. To check it, try running the following function several times, with the sensor pressed and without pressing it.
###Code
touch() # To run repeatedly, use Ctrl + Enter
###Output
_____no_output_____
###Markdown
Light sensor It is made up of a transistor that emits light and a diode that detects the light reflected by the surface. It returns a numeric value that is higher the more light there is, that is, low values (close to 0) for dark surfaces and high values (close to 100) for light ones.
###Code
light() # To run repeatedly, use Ctrl + Enter
###Output
_____no_output_____
###Markdown
Sound sensor (microphone) It measures the ambient sound in decibels, returning a percentage value that is higher the louder the sound. For example:* 4-5% is like a silent living room* 5-10% would be someone talking at some distance* 10-30% is a normal conversation close to the sensor or music played at a normal level* 30-100% is people shouting or music playing at a high volume
###Code
sound() # To run repeatedly, use Ctrl + Enter
###Output
_____no_output_____
###Markdown
Ultrasonic sensor This sensor works by emitting ultrasound and measuring the time the echo of the signal takes to return to the sensor. That way it can calculate the distance (in cm) to an obstacle in front of it. It is the same [principle that bats use](https://ca.wikipedia.org/wiki/RatpenatEcolocalitzaci.C3.B3).
###Code
ultrasonic() # To run repeatedly, use Ctrl + Enter
###Output
_____no_output_____
###Markdown
Sensor check To finish, the following function repeatedly shows the values of all the sensors on screen, so you can easily check that all of them work correctly. To stop the execution, you have to press the `interrupt kernel` button in the panel above.
###Code
from functions import test_sensors
test_sensors()
###Output
_____no_output_____
###Markdown
It is time to write new programs with the sensors, but first the robot must be disconnected from this page.
###Code
disconnect()
###Output
_____no_output_____ |
day80/predict-house-prices/Multivariable_Regression_and_Valuation_Model_(start).ipynb | ###Markdown
Setup and Context Introduction Welcome to Boston Massachusetts in the 1970s! Imagine you're working for a real estate development company. Your company wants to value any residential project before they start. You are tasked with building a model that can provide a price estimate based on a home's characteristics like:* The number of rooms* The distance to employment centres* How rich or poor the area is* How many students there are per teacher in local schools etc. To accomplish your task you will:1. Analyse and explore the Boston house price data 2. Split your data for training and testing 3. Run a Multivariable Regression 4. Evaluate your model's coefficients and residuals 5. Use data transformation to improve your model performance 6. Use your model to estimate a property price Upgrade plotly (only Google Colab Notebook) Google Colab may not be running the latest version of plotly. If you're working in Google Colab, uncomment the line below, run the cell, and restart your notebook server.
###Code
# %pip install --upgrade plotly
###Output
_____no_output_____
###Markdown
Import Statements
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import plotly.express as px
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
# TODO: Add missing import statements
###Output
_____no_output_____
###Markdown
Notebook Presentation
###Code
pd.options.display.float_format = '{:,.2f}'.format
###Output
_____no_output_____
###Markdown
Load the DataThe first column in the .csv file just has the row numbers, so it will be used as the index.
###Code
data = pd.read_csv('boston.csv', index_col=0)
###Output
_____no_output_____
###Markdown
Understand the Boston House Price Dataset---------------------------**Characteristics:** :Number of Instances: 506 :Number of Attributes: 13 numeric/categorical predictive. The Median Value (attribute 14) is the target. :Attribute Information (in order): 1. CRIM per capita crime rate by town 2. ZN proportion of residential land zoned for lots over 25,000 sq.ft. 3. INDUS proportion of non-retail business acres per town 4. CHAS Charles River dummy variable (= 1 if tract bounds river; 0 otherwise) 5. NOX nitric oxides concentration (parts per 10 million) 6. RM average number of rooms per dwelling 7. AGE proportion of owner-occupied units built prior to 1940 8. DIS weighted distances to five Boston employment centres 9. RAD index of accessibility to radial highways 10. TAX full-value property-tax rate per $10,000 11. PTRATIO pupil-teacher ratio by town 12. B 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town 13. LSTAT % lower status of the population 14. PRICE Median value of owner-occupied homes in $1000's :Missing Attribute Values: None :Creator: Harrison, D. and Rubinfeld, D.L.This is a copy of [UCI ML housing dataset](https://archive.ics.uci.edu/ml/machine-learning-databases/housing/). This dataset was taken from the StatLib library which is maintained at Carnegie Mellon University. You can find the [original research paper here](https://deepblue.lib.umich.edu/bitstream/handle/2027.42/22636/0000186.pdf?sequence=1&isAllowed=y). Preliminary Data Exploration 🔎**Challenge*** What is the shape of `data`? * How many rows and columns does it have?* What are the column names?* Are there any NaN values or duplicates?
###Code
###Output
_____no_output_____
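###Markdown
A minimal sketch for this challenge and the data-cleaning check below (not the official course solution; it assumes the `data` DataFrame loaded above):
###Code
# Shape, column names, missing values and duplicates
print(data.shape)                    # (rows, columns)
print(data.columns)                  # feature names
print(f'NaN values: {data.isna().sum().sum()}')
print(f'Duplicated rows: {data.duplicated().sum()}')
data.head()
###Output
_____no_output_____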
###Markdown
Data Cleaning - Check for Missing Values and Duplicates
###Code
###Output
_____no_output_____
###Markdown
Descriptive Statistics**Challenge*** How many students are there per teacher on average?* What is the average price of a home in the dataset?* What is the `CHAS` feature? * What are the minimum and the maximum value of the `CHAS` and why?* What is the maximum and the minimum number of rooms per dwelling in the dataset?
###Code
###Output
_____no_output_____
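###Markdown
A hedged sketch of how the descriptive-statistics questions could be answered (again assuming the `data` DataFrame from above):
###Code
# Students per teacher, average price, and the CHAS / RM extremes
print(f'Average students per teacher: {data.PTRATIO.mean():.2f}')
print(f'Average home price: ${data.PRICE.mean() * 1000:,.0f}')
print(f'CHAS min/max: {data.CHAS.min()} / {data.CHAS.max()}')   # dummy variable, so 0 or 1
print(f'Rooms min/max: {data.RM.min()} / {data.RM.max()}')
data.describe()
###Output
_____no_output_____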
###Markdown
Visualise the Features**Challenge**: Having looked at some descriptive statistics, visualise the data for your model. Use [Seaborn's `.displot()`](https://seaborn.pydata.org/generated/seaborn.displot.htmlseaborn.displot) to create a bar chart and superimpose the Kernel Density Estimate (KDE) for the following variables: * PRICE: The home price in thousands.* RM: the average number of rooms per owner unit.* DIS: the weighted distance to the 5 Boston employment centres i.e., the estimated length of the commute.* RAD: the index of accessibility to highways. Try setting the `aspect` parameter to `2` for a better picture. What do you notice in the distributions of the data? House Prices 💰
###Code
###Output
_____no_output_____
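###Markdown
A possible sketch for the PRICE histogram (assumes `data`, `sns` and `plt` from above). The same pattern applies to the RM, DIS and RAD cells that follow.
###Code
sns.displot(data['PRICE'], bins=50, aspect=2, kde=True)
plt.title('Home prices in $1000s')
plt.show()
###Output
_____no_output_____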
###Markdown
Distance to Employment - Length of Commute 🚗
###Code
###Output
_____no_output_____
###Markdown
Number of Rooms
###Code
###Output
_____no_output_____
###Markdown
Access to Highways 🛣
###Code
###Output
_____no_output_____
###Markdown
Next to the River? ⛵️**Challenge**Create a bar chart with plotly for CHAS to show many more homes are away from the river versus next to it. The bar chart should look something like this:You can make your life easier by providing a list of values for the x-axis (e.g., `x=['No', 'Yes']`)
###Code
###Output
_____no_output_____
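###Markdown
One way the CHAS bar chart might be built with plotly (a sketch, assuming `data` and `px` from above):
###Code
chas_counts = data['CHAS'].value_counts().sort_index()   # index 0 = away from river, 1 = next to it
fig = px.bar(x=['No', 'Yes'], y=chas_counts.values,
             title='Next to Charles River?')
fig.update_layout(xaxis_title='Property Located Next to the River?',
                  yaxis_title='Number of Homes')
fig.show()
###Output
_____no_output_____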
###Markdown
Understand the Relationships in the Data Run a Pair Plot**Challenge**There might be some relationships in the data that we should know about. Before you run the code, make some predictions:* What would you expect the relationship to be between pollution (NOX) and the distance to employment (DIS)? * What kind of relationship do you expect between the number of rooms (RM) and the home value (PRICE)?* What about the amount of poverty in an area (LSTAT) and home prices? Run a [Seaborn `.pairplot()`](https://seaborn.pydata.org/generated/seaborn.pairplot.html?highlight=pairplotseaborn.pairplot) to visualise all the relationships at the same time. Note, this is a big task and can take 1-2 minutes! After it's finished check your intuition regarding the questions above on the `pairplot`.
###Code
###Output
_____no_output_____
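###Markdown
A minimal sketch of the pair plot (assumes `data` and `sns`; this can take a minute or two to render):
###Code
sns.pairplot(data)
plt.show()
###Output
_____no_output_____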
###Markdown
**Challenge**Use [Seaborn's `.jointplot()`](https://seaborn.pydata.org/generated/seaborn.jointplot.html) to look at some of the relationships in more detail. Create a jointplot for:* DIS and NOX* INDUS vs NOX* LSTAT vs RM* LSTAT vs PRICE* RM vs PRICETry adding some opacity or `alpha` to the scatter plots using keyword arguments under `joint_kws`. Distance from Employment vs. Pollution**Challenge**: Compare DIS (Distance from employment) with NOX (Nitric Oxide Pollution) using Seaborn's `.jointplot()`. Does pollution go up or down as the distance increases?
###Code
###Output
_____no_output_____
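###Markdown
A sketch of the DIS vs NOX jointplot (assumes `data` and `sns`); the same pattern, with different column pairs, covers the jointplot challenges below.
###Code
sns.jointplot(x=data['DIS'], y=data['NOX'], kind='scatter',
              joint_kws={'alpha': 0.5})
plt.show()
###Output
_____no_output_____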
###Markdown
Proportion of Non-Retail Industry 🏭🏭🏭 versus Pollution **Challenge**: Compare INDUS (the proportion of non-retail industry i.e., factories) with NOX (Nitric Oxide Pollution) using Seaborn's `.jointplot()`. Does pollution go up or down as there is a higher proportion of industry?
###Code
###Output
_____no_output_____
###Markdown
% of Lower Income Population vs Average Number of Rooms**Challenge** Compare LSTAT (proportion of lower-income population) with RM (number of rooms) using Seaborn's `.jointplot()`. How does the number of rooms per dwelling vary with the poverty of area? Do homes have more or fewer rooms when LSTAT is low?
###Code
###Output
_____no_output_____
###Markdown
% of Lower Income Population versus Home Price**Challenge**Compare LSTAT with PRICE using Seaborn's `.jointplot()`. How does the proportion of the lower-income population in an area affect home prices?
###Code
###Output
_____no_output_____
###Markdown
Number of Rooms versus Home Value**Challenge** Compare RM (number of rooms) with PRICE using Seaborn's `.jointplot()`. You can probably guess how the number of rooms affects home prices. 😊
###Code
###Output
_____no_output_____
###Markdown
Split Training & Test DatasetWe *can't* use all 506 entries in our dataset to train our model. The reason is that we want to evaluate our model on data that it hasn't seen yet (i.e., out-of-sample data). That way we can get a better idea of its performance in the real world. **Challenge*** Import the [`train_test_split()` function](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) from sklearn* Create 4 subsets: X_train, X_test, y_train, y_test* Split the training and testing data roughly 80/20. * To get the same random split every time you run your notebook use `random_state=10`. This helps us get the same results every time and avoid confusion while we're learning. Hint: Remember, your **target** is your home PRICE, and your **features** are all the other columns you'll use to predict the price.
###Code
###Output
_____no_output_____
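###Markdown
A hedged sketch of the 80/20 split (the `random_state=10` value comes from the challenge text above):
###Code
from sklearn.model_selection import train_test_split

target = data['PRICE']
features = data.drop('PRICE', axis=1)

X_train, X_test, y_train, y_test = train_test_split(features, target,
                                                    test_size=0.2,
                                                    random_state=10)
print(f'Training share: {len(X_train) / len(features):.0%}')
###Output
_____no_output_____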
###Markdown
Multivariable RegressionIn a previous lesson, we had a linear model with only a single feature (our movie budgets). This time we have a total of 13 features. Therefore, our Linear Regression model will have the following form:$$ PR \hat ICE = \theta _0 + \theta _1 RM + \theta _2 NOX + \theta _3 DIS + \theta _4 CHAS ... + \theta _{13} LSTAT$$ Run Your First Regression**Challenge**Use sklearn to run the regression on the training dataset. How high is the r-squared for the regression on the training data?
###Code
###Output
_____no_output_____
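###Markdown
One way to run the first regression (a sketch that assumes the `X_train`/`y_train` split from the sketch above):
###Code
regr = LinearRegression()
regr.fit(X_train, y_train)
print(f'Training data r-squared: {regr.score(X_train, y_train):.2f}')
###Output
_____no_output_____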
###Markdown
Evaluate the Coefficients of the ModelHere we do a sense check on our regression coefficients. The first thing to look for is if the coefficients have the expected sign (positive or negative). **Challenge** Print out the coefficients (the thetas in the equation above) for the features. Hint: You'll see a nice table if you stick the coefficients in a DataFrame. * We already saw that RM on its own had a positive relation to PRICE based on the scatter plot. Is RM's coefficient also positive?* What is the sign on the LSAT coefficient? Does it match your intuition and the scatter plot above?* Check the other coefficients. Do they have the expected sign?* Based on the coefficients, how much more expensive is a room with 6 rooms compared to a room with 5 rooms? According to the model, what is the premium you would have to pay for an extra room?
###Code
###Output
_____no_output_____
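###Markdown
A sketch for inspecting the coefficients (assumes the fitted `regr` from the sketch above). Multiplying the RM coefficient by $1000 gives a rough premium per extra room.
###Code
regr_coef = pd.DataFrame(data=regr.coef_, index=X_train.columns, columns=['Coefficient'])
print(f'Intercept: {regr.intercept_:.2f}')
print(f'Premium for an extra room: ${regr_coef.loc["RM", "Coefficient"] * 1000:,.0f}')
regr_coef
###Output
_____no_output_____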
###Markdown
Analyse the Estimated Values & Regression ResidualsThe next step is to evaluate our regression. How good our regression is depends not only on the r-squared. It also depends on the **residuals** - the difference between the model's predictions ($\hat y_i$) and the true values ($y_i$) inside `y_train`. ```predicted_values = regr.predict(X_train)residuals = (y_train - predicted_values)```**Challenge**: Create two scatter plots.The first plot should be actual values (`y_train`) against the predicted value values: The cyan line in the middle shows `y_train` against `y_train`. If the predictions had been 100% accurate then all the dots would be on this line. The further away the dots are from the line, the worse the prediction was. That makes the distance to the cyan line, you guessed it, our residuals 😊The second plot should be the residuals against the predicted prices. Here's what we're looking for:
###Code
###Output
_____no_output_____
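###Markdown
A sketch of the two residual plots (assumes `regr`, `X_train` and `y_train` from above); it also prints the residual mean and skew used in the next challenge.
###Code
predicted_values = regr.predict(X_train)
residuals = (y_train - predicted_values)

# Actual vs predicted prices
plt.figure(dpi=100)
plt.scatter(x=y_train, y=predicted_values, c='indigo', alpha=0.6)
plt.plot(y_train, y_train, color='cyan')
plt.xlabel('Actual prices ($1000s)')
plt.ylabel('Predicted prices ($1000s)')
plt.show()

# Residuals vs predicted prices
plt.figure(dpi=100)
plt.scatter(x=predicted_values, y=residuals, c='indigo', alpha=0.6)
plt.xlabel('Predicted prices ($1000s)')
plt.ylabel('Residuals')
plt.show()

print(f'Residual mean: {residuals.mean():.2f}, skew: {residuals.skew():.2f}')
###Output
_____no_output_____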
###Markdown
Why do we want to look at the residuals? We want to check that they look random. Why? The residuals represent the errors of our model. If there's a pattern in our errors, then our model has a systematic bias.We can analyse the distribution of the residuals. In particular, we're interested in the **skew** and the **mean**.In an ideal case, what we want is something close to a normal distribution. A normal distribution has a skewness of 0 and a mean of 0. A skew of 0 means that the distribution is symmetrical - the bell curve is not lopsided or biased to one side. Here's what a normal distribution looks like: **Challenge*** Calculate the mean and the skewness of the residuals. * Again, use Seaborn's `.displot()` to create a histogram and superimpose the Kernel Density Estimate (KDE)* Is the skewness different from zero? If so, by how much? * Is the mean different from zero?
###Code
###Output
_____no_output_____
###Markdown
Data Transformations for a Better FitWe have two options at this point: 1. Change our model entirely. Perhaps a linear model is not appropriate. 2. Transform our data to make it fit better with our linear model. Let's try a data transformation approach. **Challenge**Investigate if the target `data['PRICE']` could be a suitable candidate for a log transformation. * Use Seaborn's `.displot()` to show a histogram and KDE of the price data. * Calculate the skew of that distribution.* Use [NumPy's `log()` function](https://numpy.org/doc/stable/reference/generated/numpy.log.html) to create a Series that has the log prices* Plot the log prices using Seaborn's `.displot()` and calculate the skew. * Which distribution has a skew that's closer to zero?
###Code
###Output
_____no_output_____
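###Markdown
A sketch comparing the skew of the raw and log-transformed prices (assumes `data`, `np` and `sns`):
###Code
price_skew = data['PRICE'].skew()
sns.displot(data['PRICE'], kde=True)
plt.title(f'Price skew: {price_skew:.2f}')
plt.show()

y_log = np.log(data['PRICE'])
sns.displot(y_log, kde=True)
plt.title(f'Log price skew: {y_log.skew():.2f}')
plt.show()
###Output
_____no_output_____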
###Markdown
How does the log transformation work?Using a log transformation does not affect every price equally. Large prices are affected more than smaller prices in the dataset. Here's how the prices are "compressed" by the log transformation:We can see this when we plot the actual prices against the (transformed) log prices.
###Code
plt.figure(dpi=150)
plt.scatter(data.PRICE, np.log(data.PRICE))
plt.title('Mapping the Original Price to a Log Price')
plt.ylabel('Log Price')
plt.xlabel('Actual $ Price in 000s')
plt.show()
###Output
_____no_output_____
###Markdown
Regression using Log PricesUsing log prices instead, our model has changed to:$$ \log (PR \hat ICE) = \theta _0 + \theta _1 RM + \theta _2 NOX + \theta_3 DIS + \theta _4 CHAS + ... + \theta _{13} LSTAT $$**Challenge**: * Use `train_test_split()` with the same random state as before to make the results comparable. * Run a second regression, but this time use the transformed target data. * What is the r-squared of the regression on the training data? * Have we improved the fit of our model compared to before based on this measure?
###Code
###Output
_____no_output_____
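###Markdown
A hedged sketch of the regression on log prices; it repeats the split with the same `random_state` so the results stay comparable with the first model.
###Code
new_target = np.log(data['PRICE'])
features = data.drop('PRICE', axis=1)

X_train, X_test, log_y_train, log_y_test = train_test_split(features, new_target,
                                                            test_size=0.2,
                                                            random_state=10)
log_regr = LinearRegression()
log_regr.fit(X_train, log_y_train)
print(f'Training data r-squared (log prices): {log_regr.score(X_train, log_y_train):.2f}')
###Output
_____no_output_____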
###Markdown
Evaluating Coefficients with Log Prices**Challenge**: Print out the coefficients of the new regression model. * Do the coefficients still have the expected sign? * Is being next to the river a positive based on the data?* How does the quality of the schools affect property prices? What happens to prices as there are more students per teacher? Hint: Use a DataFrame to make the output look pretty.
###Code
###Output
_____no_output_____
###Markdown
Regression with Log Prices & Residual Plots**Challenge**: * Copy-paste the cell where you've created scatter plots of the actual versus the predicted home prices as well as the residuals versus the predicted values. * Add 2 more plots to the cell so that you can compare the regression outcomes with the log prices side by side. * Use `indigo` as the colour for the original regression and `navy` for the color using log prices.
###Code
###Output
_____no_output_____
###Markdown
**Challenge**: Calculate the mean and the skew for the residuals using log prices. Are the mean and skew closer to 0 for the regression using log prices?
###Code
###Output
_____no_output_____
###Markdown
Compare Out of Sample PerformanceThe *real* test is how our model performs on data that it has not "seen" yet. This is where our `X_test` comes in. **Challenge**Compare the r-squared of the two models on the test dataset. Which model does better? Is the r-squared higher or lower than for the training dataset? Why?
###Code
###Output
_____no_output_____
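###Markdown
A sketch of the out-of-sample comparison (assumes the two fitted models and splits from the sketches above):
###Code
print(f'Original model test r-squared: {regr.score(X_test, y_test):.2f}')
print(f'Log model test r-squared: {log_regr.score(X_test, log_y_test):.2f}')
###Output
_____no_output_____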
###Markdown
Predict a Property's Value using the Regression CoefficientsOur preferred model now has an equation that looks like this:$$ \log (PR \hat ICE) = \theta _0 + \theta _1 RM + \theta _2 NOX + \theta_3 DIS + \theta _4 CHAS + ... + \theta _{13} LSTAT $$The average property has the mean value for all its charactistics:
###Code
# Starting Point: Average Values in the Dataset
features = data.drop(['PRICE'], axis=1)
average_vals = features.mean().values
property_stats = pd.DataFrame(data=average_vals.reshape(1, len(features.columns)),
columns=features.columns)
property_stats
###Output
_____no_output_____
###Markdown
**Challenge**Predict how much the average property is worth using the stats above. What is the log price estimate and what is the dollar estimate? You'll have to [reverse the log transformation with `.exp()`](https://numpy.org/doc/stable/reference/generated/numpy.exp.html?highlight=expnumpy.exp) to find the dollar value.
###Code
###Output
_____no_output_____
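###Markdown
One possible sketch for valuing the average property (assumes `log_regr` from the log-price sketch and `property_stats` from the cell above):
###Code
log_estimate = log_regr.predict(property_stats)[0]
dollar_estimate = np.exp(log_estimate) * 1000   # PRICE is in $1000s, so scale up
print(f'Estimated log price: {log_estimate:.3f}')
print(f'Estimated property value: ${dollar_estimate:,.0f}')
###Output
_____no_output_____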
###Markdown
**Challenge**Keeping the average values for CRIM, RAD, INDUS and others, value a property with the following characteristics:
###Code
# Define Property Characteristics
next_to_river = True
nr_rooms = 8
students_per_classroom = 20
distance_to_town = 5
pollution = data.NOX.quantile(q=0.75) # high
amount_of_poverty = data.LSTAT.quantile(q=0.25) # low
# Solution:
###Output
_____no_output_____ |
Coronovirus_Data_Analysis.ipynb | ###Markdown
What is Coronavirus2019 Novel Coronavirus (2019-nCoV) is a virus (more specifically, a coronavirus) identified as the cause of an outbreak of respiratory illness first detected in Wuhan, China. Early on, many of the patients in the outbreak in Wuhan, China reportedly had some link to a large seafood and animal market, suggesting animal-to-person spread. However, a growing number of patients reportedly have not had exposure to animal markets, indicating person-to-person spread is occurring. At this time, it’s unclear how easily or sustainably this virus is spreading between people - CDCThis dataset has daily level information on the number of affected cases, deaths and recovery from 2019 novel coronavirus.The data is available from 22 Jan 2020. Define the ProblemCoronaviruses are a large family of viruses that are common in many different species of animals, including camels, cattle, cats, and bats. Rarely, animal coronaviruses can infect people and then spread between people such as with MERS, SARS, and now with 2019-nCoV.Outbreaks of novel virus infections among people are always of public health concern. The risk from these outbreaks depends on characteristics of the virus, including whether and how well it spreads between people, the severity of resulting illness, and the medical or other measures available to control the impact of the virus (for example, vaccine or treatment medications).This is a very serious public health threat. The fact that this virus has caused severe illness and sustained person-to-person spread in China is concerning, but it’s unclear how the situation in the United States will unfold at this time.The risk to individuals is dependent on exposure. At this time, some people will have an increased risk of infection, for example healthcare workers caring for 2019-nCoV patients and other close contacts. For the general American public, who are unlikely to be exposed to this virus, the immediate health risk from 2019-nCoV is considered low. The goal of the ongoing U.S. public health response is to prevent sustained spread of 2019-nCov in this country. PrecautionsHealth authorities and scientists say the same precautions against other viral illnesses can be used: wash your hands frequently, cover up your coughs, try not to touch your face. And anyone who does come down with the virus should be placed in isolation. "Considering that substantial numbers of patients with SARS and MERS were infected in health-care settings", precautions need to be taken to prevent that happening again, the Chinese team warned in The Lancet. Coronovirus Exploratory Data AnalysisWe can explore the analysis of the corono virus affected stats
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
The data source is from https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset
###Code
nCov_df = pd.read_csv('2019_nCoV_data.csv')
nCov_df.columns
###Output
_____no_output_____
###Markdown
Column Description (2019_ncov_data.csv)

- Sno - Serial number
- Date - Date and time of the observation in MM/DD/YYYY HH:MM:SS
- Province/State - Province or state of the observation (can be empty when missing)
- Country - Country of observation
- Last Update - Time in UTC at which the row was updated for the given province or country (not standardised currently, so please clean it before using)
- Confirmed - Number of confirmed cases
- Deaths - Number of deaths
- Recovered - Number of recovered cases

The sample data are given below.
###Code
nCov_df.head()
nCov_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1199 entries, 0 to 1198
Data columns (total 8 columns):
Sno 1199 non-null int64
Date 1199 non-null object
Province/State 888 non-null object
Country 1199 non-null object
Last Update 1199 non-null object
Confirmed 1199 non-null float64
Deaths 1199 non-null float64
Recovered 1199 non-null float64
dtypes: float64(3), int64(1), object(4)
memory usage: 75.0+ KB
###Markdown
Based on the information above, the Province/State column has some missing values.
###Code
nCov_df.describe()
nCov_df[['Confirmed', 'Deaths', 'Recovered']].sum().plot(kind='bar')
###Output
_____no_output_____
###Markdown
Observations

1. The dataset contains many countries, such as China, Japan, the US, and India.
2. Comparing Confirmed with Recovered shows that recovery from the virus is very slow.
3. The data clearly indicate that the virus is spreading very fast and is not yet under control.
###Code
nCov_df.columns
###Output
_____no_output_____
###Markdown
Data Clean up
###Code
nCov_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1199 entries, 0 to 1198
Data columns (total 8 columns):
Sno 1199 non-null int64
Date 1199 non-null object
Province/State 888 non-null object
Country 1199 non-null object
Last Update 1199 non-null object
Confirmed 1199 non-null float64
Deaths 1199 non-null float64
Recovered 1199 non-null float64
dtypes: float64(3), int64(1), object(4)
memory usage: 75.0+ KB
###Markdown
Remove the unwanted columns from the data
###Code
nCov_df.drop(['Sno', 'Last Update'], axis=1, inplace=True)
nCov_df.columns
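# Added sketch: one way to handle the missing Province/State values noted above is to
# fall back to the country name (shown here without modifying the DataFrame).
province_filled = nCov_df['Province/State'].fillna(nCov_df['Country'])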
###Output
_____no_output_____
###Markdown
Convert the Date column from object to datetime
###Code
nCov_df['Date'] = nCov_df['Date'].apply(pd.to_datetime)
nCov_df.info()
nCov_df['Date'].head()
nCov_df[nCov_df['Province/State'] == 'Hong Kong']
###Output
_____no_output_____
###Markdown
Fix country values that were wrongly mapped for some provinces/states
###Code
# Use .loc so the assignment actually modifies the DataFrame (chained indexing would not).
nCov_df.loc[nCov_df['Province/State'] == 'Taiwan', 'Country'] = 'Taiwan'
nCov_df.loc[nCov_df['Province/State'] == 'Hong Kong', 'Country'] = 'Hong Kong'
nCov_df['Country'].unique()
nCov_df.replace({'Country': 'Mainland China'}, 'China', inplace=True)
###Output
_____no_output_____
###Markdown
Listing all the countries which is affected with corono virus
###Code
nCov_df['Country'].unique()
###Output
_____no_output_____
###Markdown
Country-wise information on affected cases
###Code
nCov_df.columns
nCov_df.groupby(['Country']).Confirmed.count().reset_index().sort_values(['Country'], ascending = True)
###Output
_____no_output_____
###Markdown
Most severely affected countries (top 10)
###Code
nCov_df.groupby(['Country']).Confirmed.count().reset_index().sort_values(['Confirmed'], ascending=False).head(10)
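# Added sketch: .count() above counts the number of records per country, not cases.
# One way to rank by cases instead: sum the cumulative counts across provinces per date,
# then take each country's maximum over time.
cases_by_country = (nCov_df.groupby(['Country', 'Date']).Confirmed.sum()
                    .groupby(level='Country').max()
                    .sort_values(ascending=False))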
###Output
_____no_output_____
###Markdown
List all the Provinces/States that were affected with Virus
###Code
nCov_df.columns
nCov_df['Province/State'].unique()
###Output
_____no_output_____
###Markdown
Impact in india
###Code
nCov_df[nCov_df.Country == 'India']
###Output
_____no_output_____
###Markdown
Exploratory Analysis Country most affected
###Code
nCov_df.groupby(['Country']).Confirmed.max().reset_index().sort_values(['Confirmed'], ascending=False).head(20).plot(x='Country',
kind='bar', figsize=(12,6))
###Output
_____no_output_____
###Markdown
Country most recovered
###Code
nCov_df.groupby(['Country']).Recovered.max().reset_index().sort_values(['Recovered'], ascending=False).head(20).plot(x='Country',
kind='bar', figsize=(12,6))
###Output
_____no_output_____
###Markdown
Country faced more deaths over the world
###Code
nCov_df.groupby(['Country']).Deaths.max().reset_index().sort_values(['Deaths'], ascending=False).head(20).plot(x='Country',
kind='bar', figsize=(12,6))
###Output
_____no_output_____
###Markdown
Recovery vs Deaths in world wide
###Code
nCov_df[['Country', 'Deaths', 'Recovered']].groupby('Country').max().plot(kind='bar', figsize=(12, 7))
###Output
_____no_output_____
###Markdown
Recovery vs Deaths in world wide other than China
###Code
nCov_df[nCov_df['Country'] != 'China'][['Country', 'Deaths', 'Recovered']].groupby('Country').max().plot(kind='bar', figsize=(12, 7))
nCov_df['Country'].unique()
###Output
_____no_output_____
###Markdown
The Philippines clearly shows confirmed cases and deaths but no recoveries
###Code
nCov_df[nCov_df['Country'] == 'Philippines'][['Country', 'Confirmed', 'Deaths', 'Recovered']].groupby('Country').max().plot(kind='bar')
###Output
_____no_output_____
###Markdown
When was the virus first confirmed?
###Code
nCov_df['Date'].min()
###Output
_____no_output_____
###Markdown
When was the virus most recently confirmed?
###Code
nCov_df['Date'].max()
###Output
_____no_output_____
###Markdown
How many persons in total were identified with the virus on each day?
###Code
nCov_df.groupby('Date')[['Confirmed', 'Deaths', 'Recovered']].max().reset_index()
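# Added sketch: daily totals summed over all locations, and day-over-day new confirmed cases.
daily_totals = nCov_df.groupby(nCov_df['Date'].dt.date)[['Confirmed', 'Deaths', 'Recovered']].sum()
daily_new_confirmed = daily_totals['Confirmed'].diff()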
###Output
_____no_output_____
###Markdown
Confirmed cases for each country
###Code
nCov_df.groupby(['Country']).Confirmed.max().reset_index().plot(x='Country', kind='bar', figsize=(10,6))
###Output
_____no_output_____
###Markdown
Confirmed cases outside China
###Code
nCov_df[nCov_df['Country'] != 'China'].groupby(['Country']).Confirmed.max().reset_index().plot(x='Country', kind='bar', figsize=(10,6))
###Output
_____no_output_____
###Markdown
Global trend of confirmed cases, deaths, and recoveries over time
###Code
nCov_df.groupby('Date')[['Confirmed', 'Deaths', 'Recovered']].max().reset_index().plot(x='Date',
y=['Confirmed', 'Deaths', 'Recovered'],
figsize=(12, 7))
###Output
_____no_output_____
###Markdown
Confirmed cases, deaths, and recoveries over time, excluding China
###Code
nCov_df[nCov_df['Country'] != 'China'].groupby('Date')[['Confirmed', 'Deaths', 'Recovered']].max().reset_index().plot(x='Date',
y=['Confirmed', 'Deaths', 'Recovered'],
figsize=(12, 7))
###Output
_____no_output_____
###Markdown
List the States in China which were affected
###Code
nCov_df.columns
nCov_df[nCov_df['Country'] == 'China'].groupby('Province/State')[['Confirmed']].count().reset_index().plot(x='Province/State',
y=['Confirmed'],kind='bar',
figsize=(12, 7))
nCov_df[nCov_df.Country == 'China'][['Province/State', 'Deaths', 'Recovered']].groupby('Province/State').max().plot(kind='bar',
figsize=(12, 7))
nCov_df[nCov_df['Country'] == 'China'].groupby('Province/State')[['Confirmed', 'Deaths', 'Recovered']].max().reset_index().plot(x='Province/State',
y=['Confirmed', 'Deaths', 'Recovered'],
figsize=(12, 7))
nCov_df.columns
###Output
_____no_output_____
###Markdown
Countries with the worst recovery (more deaths than recoveries)
###Code
nCov_df[nCov_df['Recovered'] < nCov_df['Deaths']][['Country', 'Confirmed', 'Deaths', 'Recovered']].groupby('Country').max().plot(kind='bar')
###Output
_____no_output_____
###Markdown
Countries with deaths but zero recoveries
###Code
nCov_df[(nCov_df['Recovered'] < nCov_df['Deaths'])&(nCov_df['Country'] != 'China')][['Country', 'Confirmed', 'Deaths', 'Recovered']].groupby('Country').max().plot(kind='bar',
figsize=(12,7))
nCov_df[(nCov_df['Recovered'] == 0 )&( nCov_df['Deaths'] != 0)][['Country', 'Confirmed', 'Deaths', 'Recovered']].groupby('Country').max().plot(kind='bar')
###Output
_____no_output_____
###Markdown
Very slow recovery in China (provinces with deaths but no recoveries yet)
###Code
nCov_df[(nCov_df['Country'] == 'China') & (nCov_df['Recovered'] == 0 )&( nCov_df['Deaths'] != 0)][['Province/State', 'Confirmed', 'Deaths', 'Recovered']].groupby('Province/State').max().plot(kind='bar',
figsize=(12, 7))
###Output
_____no_output_____ |
1_image_classification/test_vgg.ipynb | ###Markdown
Testing with a different image (borrowed a photo of Dr. Ueda).

Learning goals
1. Be able to load a model pretrained on the ImageNet dataset with PyTorch
2. Understand the VGG model
3. Be able to convert the size and color of input images

Preparation
1. Following the book's instructions, run make_folders_and_data_downloads.ipynb to download the data used in this chapter
2. Install PyTorch 1.0 by following the PyTorch installation page ( https://pytorch.org/get-started/locally/ ): conda install pytorch-cpu torchvision-cpu -c pytorch (for a CPU-only conda environment on Windows)
3. Install Matplotlib: conda install -c conda-forge matplotlib

Import the packages and check the PyTorch version
###Code
# Import packages
import numpy as np
import json
from PIL import Image
import matplotlib.pyplot as plt
%matplotlib inline
import torch
import torchvision
from torchvision import models, transforms
# Check the PyTorch version
print("PyTorch Version: ",torch.__version__)
print("Torchvision Version: ",torchvision.__version__)
###Output
PyTorch Version: 1.6.0.dev20200610+cpu
Torchvision Version: 0.7.0.dev20200610+cpu
###Markdown
Load the pretrained VGG-16 model
###Code
# Load the pretrained VGG-16 model
# The first run downloads the pretrained parameters, so it may take a while
# Create an instance of the VGG-16 model
use_pretrained = True  # use the pretrained parameters
net = models.vgg16(pretrained=use_pretrained)
net.eval()  # set to inference (eval) mode
# Print the network architecture
print(net)
###Output
VGG(
(features): Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU(inplace=True)
(2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU(inplace=True)
(4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(6): ReLU(inplace=True)
(7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(8): ReLU(inplace=True)
(9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(11): ReLU(inplace=True)
(12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(13): ReLU(inplace=True)
(14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(15): ReLU(inplace=True)
(16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(18): ReLU(inplace=True)
(19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(20): ReLU(inplace=True)
(21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(22): ReLU(inplace=True)
(23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(25): ReLU(inplace=True)
(26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(27): ReLU(inplace=True)
(28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(29): ReLU(inplace=True)
(30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
(classifier): Sequential(
(0): Linear(in_features=25088, out_features=4096, bias=True)
(1): ReLU(inplace=True)
(2): Dropout(p=0.5, inplace=False)
(3): Linear(in_features=4096, out_features=4096, bias=True)
(4): ReLU(inplace=True)
(5): Dropout(p=0.5, inplace=False)
(6): Linear(in_features=4096, out_features=1000, bias=True)
)
)
###Markdown
Create a preprocessing class for input images
###Code
# Preprocessing class for input images
class BaseTransform():
    """
    Resize the image and standardize its colors.

    Attributes
    ----------
    resize : int
        Target size of the resized image.
    mean : (R, G, B)
        Mean of each color channel.
    std : (R, G, B)
        Standard deviation of each color channel.
    """

    def __init__(self, resize, mean, std):
        self.base_transform = transforms.Compose([
            transforms.Resize(resize),  # resize so the shorter side equals `resize`
            transforms.CenterCrop(resize),  # crop the center to resize x resize
            transforms.ToTensor(),  # convert to a Torch tensor
            transforms.Normalize(mean, std)  # standardize the color channels
        ])

    def __call__(self, img):
        return self.base_transform(img)
# Check that the image preprocessing works
# 1. Load the image
image_file_path = './data/test_Dr.Ueda.jpeg'
img = Image.open(image_file_path)   # [height][width][color RGB]
# 2. Show the original image
plt.imshow(img)
plt.show()
# 3. Preprocess the image and show the processed image
resize = 224
mean = (0.485, 0.456, 0.406)
std = (0.229, 0.224, 0.225)
transform = BaseTransform(resize, mean, std)
img_transformed = transform(img)  # torch.Size([3, 224, 224])
# Convert (color, height, width) to (height, width, color), clip values to 0-1, and display
img_transformed = img_transformed.numpy().transpose((1, 2, 0))
img_transformed = np.clip(img_transformed, 0, 1)
plt.imshow(img_transformed)
plt.show()
###Output
_____no_output_____
###Markdown
Create a postprocessing class that predicts the label from the model output
###Code
# Load the ILSVRC label information and create a dictionary
ILSVRC_class_index = json.load(open('./data/imagenet_class_index.json', 'r'))
ILSVRC_class_index
# Postprocessing class that predicts the label from the model output
class ILSVRCPredictor():
    """
    Obtain the label from the model output for ILSVRC data.

    Attributes
    ----------
    class_index : dictionary
        Dictionary mapping class indices to label names.
    """

    def __init__(self, class_index):
        self.class_index = class_index

    def predict_max(self, out):
        """
        Get the ILSVRC label name with the highest probability.

        Parameters
        ----------
        out : torch.Size([1, 1000])
            Output from the network.

        Returns
        -------
        predicted_label_name : str
            Name of the label with the highest predicted probability.
        """
        maxid = np.argmax(out.detach().numpy())
        predicted_label_name = self.class_index[str(maxid)][1]

        return predicted_label_name
###Output
_____no_output_____
###Markdown
Predict a local image with the pretrained VGG model
###Code
# Load the ILSVRC label information and create a dictionary
# (Reference: "Advanced Deep Learning with PyTorch",
#  https://github.com/HayatoKitaura/pytorch_advanced/blob/master/1_image_classification/test_vgg.ipynb )
ILSVRC_class_index = json.load(open('./data/imagenet_class_index.json', 'r'))
# Create an ILSVRCPredictor instance
predictor = ILSVRCPredictor(ILSVRC_class_index)
# Load the input image
image_file_path = './data/test_Dr.Ueda.jpeg'
img = Image.open(image_file_path)   # [height][width][color RGB]
# Preprocess, then add a batch-size dimension
transform = BaseTransform(resize, mean, std)  # create the preprocessing transform
img_transformed = transform(img)  # torch.Size([3, 224, 224])
inputs = img_transformed.unsqueeze_(0)  # torch.Size([1, 3, 224, 224])
# Feed the model and convert the model output into a label
out = net(inputs)  # torch.Size([1, 1000])
result = predictor.predict_max(out)
# Print the prediction result
print("入力画像の予測結果:", result)
###Output
入力画像の予測結果: laptop
|
multi_epoch-max-altitude.ipynb | ###Markdown
Determine the observable time of Canopus at the vernal and autumnal equinoxes between -2000 B.C.E. and 0
===================================================================
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from astropy.visualization import astropy_mpl_style
plt.style.use(astropy_mpl_style)
import astropy.units as u
from astropy.time import Time
from astropy.coordinates import SkyCoord, EarthLocation, AltAz, ICRS
###Output
_____no_output_____
###Markdown
The observing period is the whole year of -2000 B.C.E. ~ 0 B.C.To represent the epoch before the common era, I use the Julian date.
###Code
# Note: if we transform the dates into UTC, they don't exactly correspond to March 21
# or September 23. This is expected, since UTC is only defined after 1960-01-01.
# In my opinion, this won't affect our results.
###Output
_____no_output_____
###Markdown
I calculate the altitude and azimuth of Sun and Canopus among 4:00~8:00 in autumnal equinox and 16:00~20:00 in vernal equinox for every year.
###Code
def observable_altitude(obs_time):
"""
"""
# Assume we have an observer in Tai Mountain.
taishan = EarthLocation(lat=36.2*u.deg, lon=117.1*u.deg, height=1500*u.m)
utcoffset = +8 * u.hour # Daylight Time
midnight = obs_time - utcoffset
# Position of the Canopus with the proper motion correction at the beginning of the year.
# This effect is very small.
dt_jyear = obs_time.jyear - 2000.0
ra = 95.98787790 * u.deg + 19.93 * u.mas * dt_jyear
dec = -52.69571787 * u.deg + 23.24 * u.mas * dt_jyear
hip30438 = SkyCoord(ra=ra, dec=dec, frame="icrs")
delta_midnight = np.arange(0, 24, 1./30) * u.hour # Interval of 2 minutes
obser_time = midnight + delta_midnight
local_frame = AltAz(obstime=obser_time,
location=taishan)
hip30438altazs = hip30438.transform_to(local_frame)
return hip30438altazs.alt.max().value
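# Added usage sketch: maximum altitude (deg) of Canopus for one example local date.
example_max_alt = observable_altitude(Time("0000-03-21 00:00:00"))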
# Vernal equinox
year_arr = np.arange(0, 2000, 1)
# Number of days for every year
date_nb = np.ones_like(year_arr)
date_nb = np.where(year_arr % 4 == 0, 366, 365)
date_nb = np.where((year_arr % 100 == 0) & (
year_arr % 400 != 0), 365, date_nb)
total_date_nb = np.zeros_like(year_arr)
for i in range(year_arr.size):
total_date_nb[i] = np.sum(date_nb[:i+1])
# Autumnal equinox of every year
obs_time_ver = Time("0000-03-21 00:00:00") - total_date_nb * u.day
# Calculate the highest altitude
max_alt_ver = np.zeros_like(obs_time_ver)
for i, obs_timei in enumerate(obs_time_ver):
# we calculate the 30 days before and after the equinox
delta_date = np.arange(-5, 5, 1) * u.day
obs_time0 = obs_timei + delta_date
max_alt_ver0 = np.zeros_like(obs_time0)
for j, obs_time0j in enumerate(obs_time0):
# Vernal equninox
max_alt_ver0[j] = observable_altitude(obs_time0j)
max_alt_ver[i] = np.max(max_alt_ver0)
# Autumnal equinox
year_arr = np.arange(0, 2000, 1)
# Number of days for every year
date_nb = np.ones_like(year_arr)
date_nb = np.where(year_arr % 4 == 0, 366, 365)
date_nb = np.where((year_arr % 100 == 0) & (
year_arr % 400 != 0), 365, date_nb)
total_date_nb = np.zeros_like(year_arr)
for i in range(year_arr.size):
total_date_nb[i] = np.sum(date_nb[:i+1])
# Autumnal equinox of every year
obs_time_aut = Time("0000-09-23 00:00:00") - total_date_nb * u.day
# Calculate the highest altitude
max_alt_aut = np.zeros_like(obs_time_aut)
for i, obs_timei in enumerate(obs_time_aut):
# we calculate the 30 days before and after the equinox
delta_date = np.arange(-5, 5, 1) * u.day
obs_time0 = obs_timei + delta_date
max_alt_aut0 = np.zeros_like(obs_time0)
for j, obs_time0j in enumerate(obs_time0):
# Vernal equninox
max_alt_aut0[j] = observable_altitude(obs_time0j)
max_alt_aut[i] = np.max(max_alt_aut0)
###Output
_____no_output_____
###Markdown
I assume that Canopus can be observed by the local observer only when the observable duration in one day is longer than 10 minutes. With such an assumption, I determine the observable period of Canopus.
###Code
# Save data
np.save("multi_epoch-max-altitude-output",
[autumnal_equinox.jyear, max_alt_aut, vernal_equinox.jyear, max_alt_ver])
autumnal_equinox.jyear, max_alt_aut, vernal_equinox.jyear, max_alt_ver = np.load(
"multi_epoch-max-altitude-output.npy")
fig, ax = plt.subplots(figsize=(12, 8))
ax.plot(vernal_equinox.jyear, max_alt_ver,
"b.", ms=2, label="Vernal")
ax.plot(autumnal_equinox.jyear, max_alt_aut,
"r.", ms=2, label="Autumnal")
# ax.fill_between(obs_time_aut.jyear, 0, 24,
# (obs_dur1 >= 1./6) & (obs_dur2 >= 1./6), color="0.8", zorder=0)
ax.set_xlabel("Year", fontsize=15)
ax.set_xlim([-2000, 0])
ax.set_xticks(np.arange(-2000, 1, 100))
ax.set_ylim([0, 2.0])
ax.set_ylabel("Time (hour)", fontsize=15)
ax.set_title("Maximum altitude of Canopus among $-2000$ B.C.E and 0")
ax.legend(fontsize=15)
fig.tight_layout()
plt.savefig("multi_epoch-max-duration.eps", dpi=100)
plt.savefig("multi_epoch-max-duration.png", dpi=100)
###Output
_____no_output_____ |
DAY 001 ~ 100/DAY038_[HackerRank] Diagonal Difference (Python).ipynb | ###Markdown
Sunday, March 15, 2020. HackerRank - Diagonal Difference. Problem: https://www.hackerrank.com/challenges/diagonal-difference/problem Blog (Korean): https://somjang.tistory.com/entry/HackerRank-Diagonal-Difference-Python First attempt
###Code
#!/bin/python3
import math
import os
import random
import re
import sys
#
# Complete the 'diagonalDifference' function below.
#
# The function is expected to return an INTEGER.
# The function accepts 2D_INTEGER_ARRAY arr as parameter.
#
def diagonalDifference(arr):
# Write your code here
left_diagonal = 0
right_diagonal = 0
for i in range(len(arr[0])):
left_diagonal = left_diagonal + arr[i][i]
right_diagonal = right_diagonal + arr[i][len(arr[0])-i-1]
answer = abs(left_diagonal - right_diagonal)
return answer
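# Added usage sketch: HackerRank sample case (left diagonal sums to 4, right diagonal to 19).
sample = [[11, 2, 4], [4, 5, 6], [10, 8, -12]]
assert diagonalDifference(sample) == 15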
###Output
_____no_output_____ |
notebook/CNN/.ipynb_checkpoints/CNN-train-4-checkpoint.ipynb | ###Markdown
If you have already saved the data, you don't have to preprocess and save it again
###Code
c_drone_path = '../../../inner10m/*.wav'
m_drone_path = '../../../20m/*.wav'
f_drone_path = '../../../50m/*.wav'
background_path = '../../data/background/*.wav'
c_drone_files = glob.glob(c_drone_path)
m_drone_files = glob.glob(m_drone_path)
f_drone_files = glob.glob(f_drone_path)
background_files = glob.glob(background_path)
CHUNK_SIZE = 8192
SR = 22050
N_MFCC = 16
def load(files, sr=22050):
[raw, sr] = librosa.load(files[0], sr=sr)
for f in files[1:]:
[array, sr] = librosa.load(f, sr=sr)
raw = np.hstack((raw, array))
print(raw.shape)
return raw
c_drone_raw = load(c_drone_files)
m_drone_raw = load(m_drone_files)
f_drone_raw = load(f_drone_files)
background_raw = load(background_files)
###Output
(4464640,)
(2232320,)
(2232320,)
(23317637,)
###Markdown
Data Processing
###Code
def mfcc4(raw, label, chunk_size=8192, window_size=4096, sr=44100, n_mfcc=16, n_frame=16):
mfcc = np.empty((0, n_mfcc, n_frame))
y = []
print(raw.shape)
for i in range(0, len(raw), chunk_size//2):
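        # Step by half a chunk so consecutive MFCC windows overlap by 50%.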
mfcc_slice = librosa.feature.mfcc(raw[i:i+chunk_size], sr=sr, n_mfcc=n_mfcc) #n_mfcc,17
if mfcc_slice.shape[1] < 17:
print("small end:", mfcc_slice.shape)
continue
mfcc_slice = mfcc_slice[:,:-1]
mfcc_slice = mfcc_slice.reshape((1, mfcc_slice.shape[0], mfcc_slice.shape[1]))
mfcc = np.vstack((mfcc, mfcc_slice))
y.append(label)
y = np.array(y)
return mfcc, y
c_mfcc_drone, c_y_drone = mfcc4(c_drone_raw, 3)
m_mfcc_drone, m_y_drone = mfcc4(m_drone_raw, 2)
f_mfcc_drone, f_y_drone = mfcc4(f_drone_raw, 1)
mfcc_background, y_background = mfcc4(background_raw, 0)
print(c_mfcc_drone.shape, c_y_drone.shape)
print(m_mfcc_drone.shape, m_y_drone.shape)
print(f_mfcc_drone.shape, f_y_drone.shape)
print(mfcc_background.shape, y_background.shape)
X = np.concatenate((c_mfcc_drone,m_mfcc_drone,f_mfcc_drone, mfcc_background), axis=0)
#X = np.concatenate((mfcc_drone), axis=0)
#X = X.reshape(-1, 16,16,1)
y = np.hstack((c_y_drone, m_y_drone, f_y_drone, y_background))
#y = np.hstack(y_drone)
print(X.shape, y.shape)
n_labels = y.shape[0]
n_unique_labels = 4
y_encoded = np.zeros((n_labels, n_unique_labels))
y_encoded[np.arange(n_labels), y] = 1
print(y_encoded.shape)
# Split data
from sklearn import model_selection
X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y_encoded, test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = model_selection.train_test_split(X_train, y_train, test_size=0.2, random_state=42)
print(X_train.shape, X_test.shape)
print(X_val.shape, y_val.shape)
print(y_train.shape, y_test.shape)
# Save Data
np.save('../../data/X_train_cnn', X_train)
np.save('../../data/X_test_cnn', X_test)
np.save('../../data/X_val_cnn', X_val)
np.save('../../data/y_val_cnn', y_val)
np.save('../../data/y_train_cnn', y_train)
np.save('../../data/y_test_cnn', y_test)
###Output
_____no_output_____
###Markdown
The preprocessing above only needs to be run once; if the data is already saved, start from here
###Code
# Load Data
X_train = np.load('../../data/X_train_cnn.npy')
X_test = np.load('../../data/X_test_cnn.npy')
X_val = np.load('../../data/X_val_cnn.npy')
y_val = np.load('../../data/y_val_cnn.npy')
y_train = np.load('../../data/y_train_cnn.npy')
y_test = np.load('../../data/y_test_cnn.npy')
###Output
_____no_output_____
###Markdown
Experiment 3 - One convolutional layer /w no dropout Experiment 3-2- learning rate 0.005- pooling stride 1x1- filter 1- best result among every other settings- cost kept fluctuated during training. (0.8 -> 1.3) -- why is that?
###Code
tf.reset_default_graph()
n_mfcc = 16
n_frame = 16
n_classes = 4
n_channels = 1
learning_rate = 0.0002 # 0.005
training_epochs = 500 # try adjusting this
###Output
_____no_output_____
###Markdown
Layer
###Code
X = tf.placeholder(tf.float32, shape=[None,n_mfcc*n_frame*n_channels])
X = tf.reshape(X, [-1, n_mfcc, n_frame, n_channels])
Y = tf.placeholder(tf.float32, shape=[None,n_classes])
conv1 = tf.layers.conv2d(inputs=X, filters=1, kernel_size=[3, 3],
padding="SAME", activation=tf.nn.relu)
pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2],
padding="SAME", strides=1)
# should dropout be added here?
conv2 = tf.layers.conv2d(inputs=pool1, filters=1, kernel_size=[3, 3],
padding="SAME", activation=tf.nn.relu)
pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2],
padding="SAME", strides=1)
# and here too?
flat = tf.reshape(pool2, [-1, 16*16*1])
dense2 = tf.layers.dense(inputs=flat, units=625, activation=tf.nn.relu)
logits = tf.layers.dense(inputs=dense2, units=4)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=Y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
Y_pred = tf.contrib.layers.fully_connected(logits,n_classes,activation_fn = None)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
X_train2 = X_train.reshape(X_train.shape[0], X_train.shape[1], X_train.shape[2], 1)
X_test2 = X_test.reshape(X_test.shape[0], X_test.shape[1], X_test.shape[2], 1)
X_val2 = X_val.reshape(X_val.shape[0], X_val.shape[1], X_val.shape[2], 1)
# model save
model_path = '../../model/CNN/4_cnn_model'
saver = tf.train.Saver()
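# Added note: to reuse the trained weights later, rebuild this graph in a new session and
# call `saver.restore(sess, model_path)` instead of re-initializing and re-training.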
###Output
_____no_output_____
###Markdown
Training
###Code
from sklearn.metrics import precision_recall_fscore_support
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
import itertools as it
###########################
batch_size = 32
cost_history = np.empty(shape=[1], dtype=float)
with tf.device("/gpu:0"):
for epoch in range(training_epochs):#training epoch 500 / batch_size 128 --> acc 90%
avg_cost = 0
val_avg_cost =0
total_batch = int(y_train.shape[0] / batch_size)
for i in range(0, y_train.shape[0], batch_size):
feed_dict={X:X_train2[i:i+batch_size,:,:,:], Y:y_train[i:i+batch_size,:]}
c, _ = sess.run([cost, optimizer], feed_dict=feed_dict)
            cost_history = np.append(cost_history, c)
avg_cost += c/total_batch
y_pred = sess.run(logits, feed_dict={X:X_val2})
y_pred = sess.run(tf.argmax(y_pred,1))
y_true = y_val
y_true = sess.run(tf.argmax(y_true,1))
print(len(y_pred),end=' ')
print('Epoch:', '%04d' % (epoch+1), 'cost = ', '{:.9f}'.format(avg_cost), 'val = ','%f' %(accuracy_score(y_true, y_pred)) )
saver.save(sess, model_path)
###Output
1259 Epoch: 0001 cost = 0.716306003 val = 0.818904
1259 Epoch: 0002 cost = 0.499342425 val = 0.830818
1259 Epoch: 0003 cost = 0.455866381 val = 0.834790
1259 Epoch: 0004 cost = 0.431440004 val = 0.841144
1259 Epoch: 0005 cost = 0.405369158 val = 0.850675
1259 Epoch: 0006 cost = 0.381440111 val = 0.859412
1259 Epoch: 0007 cost = 0.357561896 val = 0.861795
1259 Epoch: 0008 cost = 0.339701771 val = 0.867355
1259 Epoch: 0009 cost = 0.320948987 val = 0.872915
1259 Epoch: 0010 cost = 0.305717982 val = 0.874504
1259 Epoch: 0011 cost = 0.290599016 val = 0.878475
1259 Epoch: 0012 cost = 0.278286371 val = 0.879269
1259 Epoch: 0013 cost = 0.267124090 val = 0.881652
1259 Epoch: 0014 cost = 0.256110789 val = 0.880858
1259 Epoch: 0015 cost = 0.247010396 val = 0.880064
1259 Epoch: 0016 cost = 0.238223732 val = 0.881652
1259 Epoch: 0017 cost = 0.229676822 val = 0.880858
1259 Epoch: 0018 cost = 0.221253199 val = 0.878475
1259 Epoch: 0019 cost = 0.216025308 val = 0.880064
1259 Epoch: 0020 cost = 0.208847653 val = 0.878475
1259 Epoch: 0021 cost = 0.202297026 val = 0.882446
1259 Epoch: 0022 cost = 0.197112381 val = 0.882446
1259 Epoch: 0023 cost = 0.192932297 val = 0.878475
1259 Epoch: 0024 cost = 0.185635419 val = 0.880858
1259 Epoch: 0025 cost = 0.179641343 val = 0.881652
1259 Epoch: 0026 cost = 0.171791657 val = 0.885624
1259 Epoch: 0027 cost = 0.161710824 val = 0.889595
1259 Epoch: 0028 cost = 0.155071682 val = 0.892772
1259 Epoch: 0029 cost = 0.147370172 val = 0.891978
1259 Epoch: 0030 cost = 0.139658855 val = 0.895155
1259 Epoch: 0031 cost = 0.133814893 val = 0.897538
1259 Epoch: 0032 cost = 0.129939044 val = 0.895949
1259 Epoch: 0033 cost = 0.124528488 val = 0.895155
1259 Epoch: 0034 cost = 0.119994864 val = 0.894361
1259 Epoch: 0035 cost = 0.116643832 val = 0.896743
1259 Epoch: 0036 cost = 0.113636386 val = 0.898332
1259 Epoch: 0037 cost = 0.111713318 val = 0.897538
1259 Epoch: 0038 cost = 0.110347354 val = 0.901509
1259 Epoch: 0039 cost = 0.110249776 val = 0.905481
1259 Epoch: 0040 cost = 0.109699103 val = 0.904686
1259 Epoch: 0041 cost = 0.108928336 val = 0.907863
1259 Epoch: 0042 cost = 0.109907495 val = 0.903892
1259 Epoch: 0043 cost = 0.112930716 val = 0.907863
1259 Epoch: 0044 cost = 0.115576998 val = 0.912629
1259 Epoch: 0045 cost = 0.119421972 val = 0.916600
1259 Epoch: 0046 cost = 0.121534267 val = 0.922160
1259 Epoch: 0047 cost = 0.121104320 val = 0.913423
1259 Epoch: 0048 cost = 0.120793405 val = 0.910246
1259 Epoch: 0049 cost = 0.109659388 val = 0.911835
1259 Epoch: 0050 cost = 0.097185618 val = 0.914218
1259 Epoch: 0051 cost = 0.090437695 val = 0.914218
1259 Epoch: 0052 cost = 0.084128044 val = 0.915806
1259 Epoch: 0053 cost = 0.080326529 val = 0.919778
1259 Epoch: 0054 cost = 0.075570809 val = 0.922160
1259 Epoch: 0055 cost = 0.070229901 val = 0.923749
1259 Epoch: 0056 cost = 0.063202103 val = 0.923749
1259 Epoch: 0057 cost = 0.058383598 val = 0.926132
1259 Epoch: 0058 cost = 0.054184634 val = 0.927720
1259 Epoch: 0059 cost = 0.050842688 val = 0.928515
1259 Epoch: 0060 cost = 0.047859962 val = 0.928515
1259 Epoch: 0061 cost = 0.045545858 val = 0.929309
1259 Epoch: 0062 cost = 0.042846851 val = 0.930898
1259 Epoch: 0063 cost = 0.040665250 val = 0.930103
1259 Epoch: 0064 cost = 0.038238896 val = 0.930103
1259 Epoch: 0065 cost = 0.035592644 val = 0.926926
1259 Epoch: 0066 cost = 0.032582981 val = 0.927720
1259 Epoch: 0067 cost = 0.030934615 val = 0.928515
1259 Epoch: 0068 cost = 0.028390405 val = 0.925338
1259 Epoch: 0069 cost = 0.026209846 val = 0.917395
1259 Epoch: 0070 cost = 0.024753890 val = 0.917395
1259 Epoch: 0071 cost = 0.024350947 val = 0.913423
1259 Epoch: 0072 cost = 0.024397778 val = 0.915012
1259 Epoch: 0073 cost = 0.022429494 val = 0.913423
1259 Epoch: 0074 cost = 0.022474858 val = 0.907863
1259 Epoch: 0075 cost = 0.022170620 val = 0.907863
1259 Epoch: 0076 cost = 0.021902976 val = 0.911041
1259 Epoch: 0077 cost = 0.023129732 val = 0.907069
1259 Epoch: 0078 cost = 0.029165863 val = 0.902303
1259 Epoch: 0079 cost = 0.027451113 val = 0.906275
1259 Epoch: 0080 cost = 0.022869320 val = 0.925338
1259 Epoch: 0081 cost = 0.022239029 val = 0.920572
1259 Epoch: 0082 cost = 0.020004063 val = 0.922160
1259 Epoch: 0083 cost = 0.024394760 val = 0.928515
1259 Epoch: 0084 cost = 0.031357869 val = 0.926926
1259 Epoch: 0085 cost = 0.042474821 val = 0.927720
1259 Epoch: 0086 cost = 0.034384340 val = 0.926132
1259 Epoch: 0087 cost = 0.031764961 val = 0.911835
1259 Epoch: 0088 cost = 0.023586254 val = 0.915012
1259 Epoch: 0089 cost = 0.018651719 val = 0.914218
1259 Epoch: 0090 cost = 0.018723781 val = 0.913423
1259 Epoch: 0091 cost = 0.016954439 val = 0.913423
1259 Epoch: 0092 cost = 0.014589099 val = 0.918189
1259 Epoch: 0093 cost = 0.012962762 val = 0.912629
1259 Epoch: 0094 cost = 0.009772269 val = 0.925338
1259 Epoch: 0095 cost = 0.008934532 val = 0.928515
1259 Epoch: 0096 cost = 0.006238874 val = 0.931692
1259 Epoch: 0097 cost = 0.005448082 val = 0.927720
1259 Epoch: 0098 cost = 0.004454965 val = 0.927720
1259 Epoch: 0099 cost = 0.003943611 val = 0.927720
1259 Epoch: 0100 cost = 0.003461903 val = 0.927720
1259 Epoch: 0101 cost = 0.003317764 val = 0.928515
1259 Epoch: 0102 cost = 0.003129642 val = 0.927720
1259 Epoch: 0103 cost = 0.003734729 val = 0.918983
1259 Epoch: 0104 cost = 0.003723289 val = 0.928515
1259 Epoch: 0105 cost = 0.002830130 val = 0.930898
1259 Epoch: 0106 cost = 0.002556305 val = 0.928515
1259 Epoch: 0107 cost = 0.002645375 val = 0.926132
1259 Epoch: 0108 cost = 0.001969402 val = 0.928515
1259 Epoch: 0109 cost = 0.001774388 val = 0.926132
1259 Epoch: 0110 cost = 0.001544930 val = 0.926926
1259 Epoch: 0111 cost = 0.001382663 val = 0.927720
1259 Epoch: 0112 cost = 0.001249745 val = 0.925338
1259 Epoch: 0113 cost = 0.001120547 val = 0.927720
1259 Epoch: 0114 cost = 0.001012786 val = 0.927720
1259 Epoch: 0115 cost = 0.000924845 val = 0.924543
1259 Epoch: 0116 cost = 0.000836689 val = 0.929309
1259 Epoch: 0117 cost = 0.000756642 val = 0.930103
1259 Epoch: 0118 cost = 0.000691021 val = 0.930103
1259 Epoch: 0119 cost = 0.000621290 val = 0.930103
1259 Epoch: 0120 cost = 0.000577615 val = 0.930103
1259 Epoch: 0121 cost = 0.000531286 val = 0.927720
1259 Epoch: 0122 cost = 0.000496429 val = 0.929309
1259 Epoch: 0123 cost = 0.000456870 val = 0.929309
1259 Epoch: 0124 cost = 0.000425704 val = 0.930103
1259 Epoch: 0125 cost = 0.000395590 val = 0.929309
1259 Epoch: 0126 cost = 0.000368482 val = 0.929309
1259 Epoch: 0127 cost = 0.000343942 val = 0.929309
1259 Epoch: 0128 cost = 0.000319539 val = 0.929309
1259 Epoch: 0129 cost = 0.000297449 val = 0.928515
1259 Epoch: 0130 cost = 0.000275055 val = 0.929309
1259 Epoch: 0131 cost = 0.000256469 val = 0.929309
1259 Epoch: 0132 cost = 0.000238581 val = 0.928515
1259 Epoch: 0133 cost = 0.000220524 val = 0.929309
1259 Epoch: 0134 cost = 0.000204134 val = 0.928515
1259 Epoch: 0135 cost = 0.000184775 val = 0.930103
1259 Epoch: 0136 cost = 0.000175481 val = 0.929309
1259 Epoch: 0137 cost = 0.000161781 val = 0.929309
1259 Epoch: 0138 cost = 0.000148463 val = 0.927720
1259 Epoch: 0139 cost = 0.000138675 val = 0.930103
1259 Epoch: 0140 cost = 0.000127483 val = 0.928515
1259 Epoch: 0141 cost = 0.000117201 val = 0.929309
1259 Epoch: 0142 cost = 0.000108376 val = 0.928515
1259 Epoch: 0143 cost = 0.000100547 val = 0.929309
1259 Epoch: 0144 cost = 0.000091870 val = 0.929309
1259 Epoch: 0145 cost = 0.000084846 val = 0.929309
1259 Epoch: 0146 cost = 0.000078198 val = 0.929309
1259 Epoch: 0147 cost = 0.000072051 val = 0.929309
1259 Epoch: 0148 cost = 0.000066134 val = 0.928515
1259 Epoch: 0149 cost = 0.000061507 val = 0.928515
1259 Epoch: 0150 cost = 0.000056625 val = 0.929309
1259 Epoch: 0151 cost = 0.000052020 val = 0.928515
1259 Epoch: 0152 cost = 0.000047949 val = 0.928515
1259 Epoch: 0153 cost = 0.000044252 val = 0.929309
1259 Epoch: 0154 cost = 0.000041079 val = 0.929309
1259 Epoch: 0155 cost = 0.000037695 val = 0.926926
###Markdown
Prediction
###Code
y_pred = sess.run(tf.argmax(logits,1),feed_dict={X: X_test2})
y_true = sess.run(tf.argmax(y_test,1))
# Print Result
from sklearn.metrics import precision_recall_fscore_support
p,r,f,s = precision_recall_fscore_support(y_true, y_pred, average='micro')
print("F-Score:", round(f,3))
from sklearn.metrics import accuracy_score
print("Accuracy: ", accuracy_score(y_true, y_pred))
from sklearn.metrics import classification_report
print(classification_report(y_true, y_pred))
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_true, y_pred))
###Output
F-Score: 0.908
Accuracy: 0.9078780177890724
precision recall f1-score support
0 0.98 0.97 0.97 1118
1 0.66 0.82 0.73 104
2 0.63 0.55 0.59 117
3 0.84 0.83 0.84 235
micro avg 0.91 0.91 0.91 1574
macro avg 0.78 0.79 0.78 1574
weighted avg 0.91 0.91 0.91 1574
[[1085 12 4 17]
[ 1 85 14 4]
[ 12 25 64 16]
[ 14 7 19 195]]
|
Distributed-Resource/2020_11/1117_getSumPow.ipynb | ###Markdown
Total pow sum -> extract data for soiling analysis
###Code
#---- to do list -----
# automate err_data_list (load it from a file)
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import os
from tensorflow.python.keras.optimizer_v2.rmsprop import RMSProp
from math import sqrt
from numpy import concatenate
from matplotlib import pyplot
from pandas import read_csv, DataFrame, concat
from sklearn.preprocessing import MinMaxScaler, LabelEncoder
from sklearn.metrics import mean_squared_error
from keras.models import Sequential
from keras.layers import Dense, RepeatVector, LSTM, Input, TimeDistributed, Activation, Dropout
from keras.optimizers import SGD
np.set_printoptions(suppress=True)
#pow 낮값만 추출 test
#pow = 0인 구간 : 0~4, 21-23시
powhr_start = 5
powhr_end = 20
shift_days = 3
hoursteps = powhr_end-powhr_start+1 #(16)
timesteps = shift_days*hoursteps #hours step
data_dim = 7
out_dim = 1
n_model = 10
data_dir = 'C:/Users/VISLAB_PHY/Desktop/Workspace/Data'
season_mod = 'all_1102_f7'
date_start = '10190901'
date_end = '30191201'
err_date_list = ['20190912',
'20191122',
'20191130',
'20191217',
'20200501',
'20200502',
'20191028',
'20191107',
'20191108',
'20191109',
'20191110',
'20191111',
'20191112',
'20200214',
'20200307',
'20200308',
'20200309',
'20200310',
'20200328',
'20200329',
'20200625',
'20200809']
###Output
_____no_output_____
###Markdown
Get pow data
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import os
import fnmatch
from pandas import read_csv
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import PowerTransformer
#from feature_engine import variable_transformers as vt
from scipy.stats import yeojohnson
#############################################
# Solar (PV) power
#############################################
#def get_pow():
# Load the pow files
dir_path = data_dir+"/pow_24/UR00000126_csv"
file_list = os.listdir(dir_path)
print(len(file_list))
hrPow = []
hrDate = []
# Drop dates whose pow measurements have large errors
for filename in file_list:
#if (filename[:-4] not in err_date_list):
if ((filename[:-4]>=date_start) & (filename<date_end)):
filedata = pd.read_csv(dir_path+'/'+filename).values[:,0]
hrPow.append(filedata.sum())
hrDate.append(filename[:-4])
#hrPow = list(zip(hrDate, hrPow))
#print(hrPow)
#print("filedata type : ", type(filedata))
# scale
#sc_pow = MinMaxScaler(feature_range = (0, 1))
#scaled_pow = sc_pow.fit_transform(pow_dataset.values)
#df_pow = pd.DataFrame(hrPow)
df_pow = pd.DataFrame(hrPow, index=hrDate, columns=['pow'])
print(df_pow)
# return df_pow, sc_pow
df_pow.to_csv("C:/Users/VISLAB_PHY/Desktop/WORKSPACE/Origin/data/pow_test22.csv",mode='w',index=False)
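# Added sketch: a 7-day rolling mean of the daily sums can make a gradual, soiling-related
# decline easier to see than the raw daily bars plotted below.
pow_7d_mean = df_pow['pow'].rolling(7, min_periods=1).mean()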
# 5. Display the results
%matplotlib inline
import matplotlib.pyplot as plt
ax = df_pow.plot.bar(figsize=(20, 15))#(bins=50, figsize=(20, 15))
ax.set_ylabel("sum(pow)")
ax.set_xlabel("date")
ax.legend()
plt.show()
'''
fig, loss_ax = plt.subplots()
loss_ax.plot(hrPow, 'y', label='train loss')
loss_ax.set_xlabel('epoch')
loss_ax.set_ylabel('loss')
loss_ax.legend(loc='upper right')
plt.show()
print('result : ', results)
'''
# 5. Display the results
%matplotlib inline
import matplotlib.pyplot as plt
step = 100
for i in range(0, df_pow.shape[0]-step, step):
ax = df_pow[i:i+step].plot.bar(figsize=(10, 3))#(bins=50, figsize=(20, 15))
ax.set_ylabel("sum(pow)")
ax.set_xlabel("date")
ax.legend()
plt.ylim(0,1300000)
plt.show()
# 5. Display the results
%matplotlib inline
import matplotlib.pyplot as plt
ax = df_pow[195:].plot.bar(figsize=(10, 5))#(bins=50, figsize=(20, 15))
ax.set_ylabel("sum(pow)")
ax.set_xlabel("date")
ax.legend()
plt.ylim(0,1300000)
plt.show()
###Output
_____no_output_____ |
source_nbs/12_8_problem_type_vector_fit.ipynb | ###Markdown
VectorFit (vector_fit)

This module includes the necessary parts to register the vector_fit problem type.

Imports and utils
###Code
# export
from typing import Dict, Tuple
import numpy as np
import tensorflow as tf
from m3tl.base_params import BaseParams
from m3tl.problem_types.utils import (empty_tensor_handling_loss,
nan_loss_handling)
from m3tl.special_tokens import PREDICT
from m3tl.utils import get_phase
###Output
_____no_output_____
###Markdown
Top Layer
###Code
# export
def cosine_wrapper(labels, logits, from_logits=True):
return tf.keras.losses.cosine_similarity(labels, logits)
class VectorFit(tf.keras.Model):
def __init__(self, params: BaseParams, problem_name: str) -> None:
super(VectorFit, self).__init__(name=problem_name)
self.params = params
self.problem_name = problem_name
self.num_classes = self.params.get_problem_info(problem=problem_name, info_name='num_classes')
self.dense = tf.keras.layers.Dense(self.num_classes)
def call(self, inputs: Tuple[Dict]):
mode = get_phase()
feature, hidden_feature = inputs
pooled_hidden = hidden_feature['pooled']
logits = self.dense(pooled_hidden)
if mode != PREDICT:
# this is actually a vector
label = feature['{}_label_ids'.format(self.problem_name)]
loss = empty_tensor_handling_loss(label, logits, cosine_wrapper)
loss = nan_loss_handling(loss)
self.add_loss(loss)
self.add_metric(tf.math.negative(
loss), name='{}_cos_sim'.format(self.problem_name), aggregation='mean')
return logits
test_top_layer(VectorFit, problem='weibo_fake_vector_fit', params=params, sample_features=one_batch, hidden_dim=hidden_dim)
###Output
Testing VectorFit
###Markdown
Get or make label encoder function
###Code
# export
def vector_fit_get_or_make_label_encoder_fn(params: BaseParams, problem, mode, label_list, *args, **kwargs):
if label_list:
# set params num_classes for this problem
label_array = np.array(label_list)
params.set_problem_info(problem=problem, info_name='num_classes', info=label_array.shape[-1])
return None
###Output
_____no_output_____
###Markdown
Label handing function
###Code
# export
def vector_fit_label_handling_fn(target, label_encoder=None, tokenizer=None, decoding_length=None, *args, **kwargs):
# return label_id and label mask
label_id = np.array(target, dtype='float32')
return label_id, None
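# Added usage sketch: a raw target vector becomes a float32 label array with no label mask.
example_label, example_mask = vector_fit_label_handling_fn([0.1, 0.2, 0.3])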
###Output
_____no_output_____ |
Notebook 1 Nonlinear prediction of chaotic dynamical systems.ipynb | ###Markdown
Nonlinear prediction of chaotic dynamical systems Assume you observe a time series $(y_1, y_2, \dots, y_T)$ that represents a variable of a high dimensional and possibly chaotic dynamical system.The method proposed by Sugihara and May (Nature, 1990) entails first choosing an embedding dimension, $n$, and then predicting $y_t$ by using past observations $\vec y_p(t) = (y_{t-1}, y_{t-2}, \dots, y_{t-n}) \in \mathbb{R}^n$. Intuitively, the prediction is obtained by finding a set of vectors $\{ \vec y^1_p, \dots, \vec y^n_p \}$ in the past, which are nearest neighbors of $\vec y_p$, and basing future predictions on what occurred immediately after these past events. Let's begin with an example.
###Code
# simulate Lorenz attractor
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from mpl_toolkits.mplot3d import Axes3D
# standard parameters
rho=28;sigma=10;beta=8/3
def f(X, t):
x, y, z = X
return sigma * (y - x), x * (rho - z) - y, x * y - beta * z
# initial value
x0 = np.array([-11.40057002, -14.01987468, 27.49928125])
t = np.arange(0.0, 100, 0.01)
lorenz = odeint(f, x0, t)
# 3D plot of Lorenz attractor - beautiful!
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot(lorenz[:,0], lorenz[:,1], lorenz[:,2])
plt.axis('off')
plt.xlim([-13,14])
plt.ylim([-20,25])
ax.set_zlim([15,38])
plt.show()
# plot only first component
plt.plot(lorenz[:,0])
###Output
_____no_output_____
###Markdown
How can we predict the future of such a chaotic system, given its past?? Many methods have been proposed that are based on Takens (1981) and Sauer (1991) theorems (see: http://www.scholarpedia.org/article/Attractor_reconstruction)In detail, the method proposed by Sugihara and May (https://www.nature.com/articles/344734a0) works in three steps: ( 1 ) divide your time series $y_1,\dots, y_T$ into train and test. Use train time series as a library of past patterns, i.e., by computing past vectors $\vec y_p(t) = (y_{t-1}, y_{t-2}, \dots, y_{t-n}) \in \mathbb{R}^n$ for each point $t>n$, with associated future value $y_t$. ( 2 ) for each test time point $y^*$ compute its past vector $\vec y_p^* = (y_{t^*-1}, y_{t^*-2}, \dots, y_{t^*-n})$, and find the $n+1$ nearest neighbors in the library of past patterns: $\{y_p^1, \dots, y_p^{n+1}\}$ and compute their distance to $y_p^*$: $d_i = || y_p^* - y_p^i||$. ( 3 ) predict test time point $y^*$ by taking a weighted average of future values of the $n+1$ nearest neighbors found in the library of past patterns: \begin{equation} \hat y^* = \frac{\sum_i^{n+1} y^i e^{-d_i}}{\sum_i^{n+1} e^{-d_i}}. \end{equation}
###Code
from sklearn.neighbors import NearestNeighbors
from scipy import stats as stats
# split dataset into train/test
def train_test_split(X, fraction_train=.75):
split = int(len(X)*fraction_train)
return X[:split], X[split:]
# exponential weights: w_i = exp(-d_i) / sum_i exp(-d_i)
def weights(distances):
num = np.exp(-distances) # numerator: e^{-d_i}
den = np.sum(num,axis=1,keepdims=True) # denominator: sum_i e^{-d_i}
return num/den
# embed vectors into n-dimensional past values (the last element is the one to be predicted)
def embed_vectors_1d(X, n_embed):
size = len(X)
leng = size-n_embed
out_vects = np.zeros((leng,n_embed + 1))
for i in range(leng):
out_vects[i,:] = X[i:i+n_embed+1]
return out_vects
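# Added example: embed_vectors_1d(np.arange(5), n_embed=2) returns the rows
# [0, 1, 2], [1, 2, 3], [2, 3, 4]; the first n_embed entries of each row are the
# "past" vector and the last entry is the value to be predicted.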
# implement the Sugihara nonlinear prediction
def nonlinear_prediction(X_train, X_test, n_embed):
# initialize nearest neighbors from sklearn
knn = NearestNeighbors(n_neighbors=n_embed+1)
# Nearest neigbors is fit on the train data (only on the past vectors - i.e. till [:-1])
knn.fit(X_train[:,:-1])
# find the nearest neighbors for each test vector input
dist,ind = knn.kneighbors(X_test[:,:-1])
# compute exponential weights given distances
W = weights(dist)
# predict test using train (weighted average)
x_pred = np.sum(X_train[ind][:,-1] * W, axis=1)
return x_pred
###Output
_____no_output_____
###Markdown
Find the best embedding dimension by cross-validation - i.e., find best reconstruction
###Code
nonlinear_reconstr_cor = []
for n_embed in np.arange(1,5):
X = embed_vectors_1d(lorenz[:,0],n_embed)
# split train/test
X_train, X_test = train_test_split(X,fraction_train=0.7)
# nonlinear prediction on individual time series
x_p = nonlinear_prediction(X_train, X_test, n_embed)
# simply check correlation of real vs predicted
nonlinear_reconstr_cor.append(np.corrcoef(X_test[:,-1], x_p)[0,1])
plt.plot(np.arange(1,5),nonlinear_reconstr_cor[:5])
plt.xticks(np.arange(1,5))
plt.xlabel('Embedding dimension (n_embed)')
plt.ylabel('Pearson correlation')
###Output
_____no_output_____
###Markdown
The best embedding dimension is 2! Now visualize the results and check where the highest errors are.
###Code
n_embed = 2
# prediction using optimal n_embed
X = embed_vectors_1d(lorenz[:,0],n_embed)
# split train/test
X_train, X_test = train_test_split(X,fraction_train=0.7)
# nonlinear prediction on individual time series
x_p = nonlinear_prediction(X_train, X_test, n_embed)
# simply check correlation of
nonlinear_reconstr_cor.append(np.corrcoef(X_test[:,-1], x_p)[0,1])
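# Added sketch: RMSE as a complementary accuracy measure to the correlation above.
rmse = np.sqrt(np.mean((X_test[:, -1] - x_p) ** 2))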
# scatter plot real vs predicted
plt.scatter(X_test[:,-1], x_p)
plt.xlabel('Test Data')
plt.ylabel('Prediction Data')
###Output
_____no_output_____
###Markdown
Are errors higher where the gradient is higher? Yes!
###Code
plt.scatter(np.abs(np.gradient(X_test[:,-1])), np.abs(X_test[:,-1]- x_p))
plt.xlabel('Abs. Gradient')
plt.ylabel('Abs. Prediction Error')
###Output
_____no_output_____ |
notebooks/vega/Vega.ipynb | ###Markdown
Basic Vega Visualization
###Code
from vega import Vega
Vega({
"width": 400,
"height": 200,
"padding": {"top": 10, "left": 30, "bottom": 30, "right": 10},
"data": [
{
"name": "table",
"values": [
{"x": 1, "y": 28}, {"x": 2, "y": 55},
{"x": 3, "y": 43}, {"x": 4, "y": 91},
{"x": 5, "y": 81}, {"x": 6, "y": 53},
{"x": 7, "y": 19}, {"x": 8, "y": 87},
{"x": 9, "y": 52}, {"x": 10, "y": 48},
{"x": 11, "y": 24}, {"x": 12, "y": 49},
{"x": 13, "y": 87}, {"x": 14, "y": 66},
{"x": 15, "y": 17}, {"x": 16, "y": 27},
{"x": 17, "y": 68}, {"x": 18, "y": 16},
{"x": 19, "y": 49}, {"x": 20, "y": 15}
]
}
],
"scales": [
{
"name": "x",
"type": "ordinal",
"range": "width",
"domain": {"data": "table", "field": "x"}
},
{
"name": "y",
"type": "linear",
"range": "height",
"domain": {"data": "table", "field": "y"},
"nice": True
}
],
"axes": [
{"type": "x", "scale": "x"},
{"type": "y", "scale": "y"}
],
"marks": [
{
"type": "rect",
"from": {"data": "table"},
"properties": {
"enter": {
"x": {"scale": "x", "field": "x"},
"width": {"scale": "x", "band": True, "offset": -1},
"y": {"scale": "y", "field": "y"},
"y2": {"scale": "y", "value": 0}
},
"update": {
"fill": {"value": "steelblue"}
},
"hover": {
"fill": {"value": "red"}
}
}
}
]
})
import pandas as pd
df = pd.read_json('./data/iris.json')
Vega({
"width": 600,
"height": 600,
"data": [
{
"name": "iris",
"url": "./data/iris.json"
},
{
"name": "fields",
"values": ["petalWidth", "petalLength", "sepalWidth", "sepalLength"]
}
],
"scales": [
{
"name": "gx",
"type": "ordinal",
"range": "width",
"round": True,
"domain": {"data": "fields", "field": "data"}
},
{
"name": "gy",
"type": "ordinal",
"range": "height",
"round": True,
"reverse": True,
"domain": {"data": "fields", "field": "data"}
},
{
"name": "c",
"type": "ordinal",
"domain": {"data": "iris", "field": "species"},
"range": "category10"
}
],
"legends": [
{
"fill": "c",
"title": "Species",
"offset": 10,
"properties": {
"symbols": {
"fillOpacity": {"value": 0.5},
"stroke": {"value": "transparent"}
}
}
}
],
"marks": [
{
"type": "group",
"from": {
"data": "fields",
"transform": [{"type": "cross"}]
},
"properties": {
"enter": {
"x": {"scale": "gx", "field": "a.data"},
"y": {"scale": "gy", "field": "b.data"},
"width": {"scale": "gx", "band": True, "offset":-35},
"height": {"scale": "gy", "band": True, "offset":-35},
"fill": {"value": "#fff"},
"stroke": {"value": "#ddd"}
}
},
"scales": [
{
"name": "x",
"range": "width",
"zero": False,
"round": True,
"domain": {"data": "iris", "field": {"parent": "a.data"}}
},
{
"name": "y",
"range": "height",
"zero": False,
"round": True,
"domain": {"data": "iris", "field": {"parent": "b.data"}}
}
],
"axes": [
{"type": "x", "scale": "x", "ticks": 5},
{"type": "y", "scale": "y", "ticks": 5}
],
"marks": [
{
"type": "symbol",
"from": {"data": "iris"},
"properties": {
"enter": {
"x": {"scale": "x", "field": {"datum": {"parent": "a.data"}}},
"y": {"scale": "y", "field": {"datum": {"parent": "b.data"}}},
"fill": {"scale": "c", "field": "species"},
"fillOpacity": {"value": 0.5}
},
"update": {
"size": {"value": 36},
"stroke": {"value": "transparent"}
},
"hover": {
"size": {"value": 100},
"stroke": {"value": "white"}
}
}
}
]
}
]
}, df)
###Output
_____no_output_____ |
sandbox/mi_scene_parsing_enumerated_max_entropy.ipynb | ###Markdown
Minimal grammar definition: each node has:- a pose x- a type nameA node class defines:- its child type (by name or None)- max of children- the geometric stop prob p (1. = always 1 child, 0. = infinite children)- the region in which children will be produced (uniformly at random), in the frame of the node, in the form of an axis-aligned bounding boxE.g. object groups in plane:- Root node produces object clusters and uniform random locations inside [0, 1]^2.- Each cluster produces up points uniformly in a 0.1-length box centered at the cluster center.
###Code
# Grammar implementing clusters of points in 2D.
cluster_grammar = {
"point": NodeDefinition(child_type=None, p=None, max_children=None, bounds=None),
#"cluster_cluster": NodeDefinition(child_type="cluster", p=0.5, max_children=3, bounds=[-np.ones(2)*0.2, np.ones(2)*0.22]),
"cluster": NodeDefinition(child_type="point", p=0.25, max_children=4, bounds=[-np.ones(2)*0.2, np.ones(2)*0.2]),
"root": NodeDefinition(child_type="cluster", p=0.5, max_children=4, bounds=[np.ones(2)*0.2, np.ones(2)*0.8])
}
cluster_grammar_observed_types = ["point"]
np.random.seed(7)
example_tree = sample_tree(cluster_grammar)
observed_nodes = deepcopy(get_observed_nodes(example_tree, cluster_grammar_observed_types))
plt.figure().set_size_inches(8, 4)
plt.subplot(1, 2, 1)
draw_tree(example_tree)
plt.title("Full tree")
plt.subplot(1, 2, 2)
draw_observed_nodes(observed_nodes)
plt.title("Observed nodes")
def make_super_tree(grammar):
# Same logic as sampling a tree, but instead takes *all*
# choices (but doesn't bother with sampling x).
tree = nx.DiGraph()
root = Node("root", x=np.array([0., 0.]))
tree.add_node(root)
node_queue = [root]
while len(node_queue) > 0:
parent = node_queue.pop(0)
assert parent.type in grammar.keys()
parent_def = grammar[parent.type]
if parent_def.child_type is None:
continue
n_children = parent_def.max_children
for k in range(n_children):
child = Node(parent_def.child_type, np.zeros(2))
tree.add_node(child)
tree.add_edge(parent, child)
node_queue.append(child)
return tree
super_tree = make_super_tree(cluster_grammar)
draw_tree(super_tree, draw_pos=False)
plt.xlim([-2., 2.])
plt.ylim([-2., 2.])
plt.title("Super tree for cluster grammar")
###Output
_____no_output_____
###Markdown
Posterior estimation on the super tree using full enumeration of trees

Now look at the full Bayesian form of scene parsing: given an observed scene $obs$ + a grammar description, find a representation of the posterior over parse trees $T$ for a given scene $obs$ that maximizes the (log) evidence:

$$ \log p(obs) = \log \left[ \sum_T p(obs | T) p(T)\right] $$

Now we need to worry about distributions over *whole trees*. Using the super-tree idea, we can describe any given tree as a binary vector $B = [b_1, ..., b_N]$ and a continuous vector $C = [xy_1, ..., xy_N]$, concatenating the activation and xy variables for the $N$ nodes in the super tree. Distributions over trees are distributions over the joint space $p(B, C)$.

That space is nasty to represent, but we can try approximations -- here's a pretty exhaustive but expressive one:

1) For every distinct tree in the $2^N$ options (i.e. for each binary vector $B$), create decision variables as the parameters for a distribution over $C$ (i.e., fit a distribution for $C$ for every distinct possible tree).
2) Create $2^N$ continuous decision variables for the Categorical weights for a given tree / binary vector $B$ being chosen.

What distribution we use for $C$ probably depends on the grammar; maybe I can relate it to the variational approximation in VI. The optimization becomes optimizing those parameters w.r.t.

$$\begin{align}\max \log p(obs) &= \log \left[ \sum_T p(obs | T) p(T) \right] \\&\geq \sum_{T \, | \, p(obs | T) = 1 \, \text{and} \, p(T) > 0} \log p(T)\end{align}$$

by dropping zero-probability terms from the sum and then applying Jensen's inequality. (In this situation, $p(obs | T)$ is still either 0 or 1, depending on whether the tree does or does not induce $obs$.)

Evaluation of $\log p(T)$ should be doable; the topology is fixed from the value for $B$, but the exact form of the parameter space for $C$ will decide how this is done.

Evaluation of $p(obs | T) = 1$ is a feasibility check: can the current tree produce the observation set? Importantly, this is a set-to-set evaluation: for each observed node, is the distance to the closest active node zero? (This requires a set of binary correspondence variables for each $T$...) (I think that's going to create an explosion of variables, but we can try it.)

TODO(gizatt) IMPLEMENTATION BELOW IS TRASHY ABOUT COST: MIXES REGULAR PROB, LOG PROB, ETC.

KL divergence derivation: Split $T$ into its discrete latent structure $B$, its continuous latent node positions $X$, and the latent-to-observed correspondences $C$. Find $q(T)$ close to $p(T | o)$. Roughly following (6.1) in Ritchie's PhD thesis (but with a different setup) -- importantly, keeping the KL divergence ordering the same:

$$\begin{align}\min_\theta D_{KL}&(p(T | o) \,\|\, q(T)) \\=& \min_\theta \sum_{T} P(T | o) \log \dfrac{p(T | o)}{q(T)} \\=& \min_\theta \sum_{T \, s.t. \, P(T | o) > 0} P(T | o) \log p(T | o) - P(T | o) \log q_\theta(T) \\=& \max_\theta \sum_{T \, s.t. \, P(T | o) > 0} P(T | o) \log q_\theta(T) \\=& \max_\theta \sum_{\text{feasible } B} P(B) \left[ \sum_{\text{feasible } X, C} P(X, C | o, B) \log q_{\theta}(B, X, C) \right]\end{align}$$

$q_\theta(T)$ is parameterized by a set of weights $\theta^w$ over the feasible $B$, plus separate parameter sets $\theta^{XC}_i$ for variational posteriors over $X$ and $C$ for each feasible $B$.

In this strategy, we're taking the outer sum explicitly; but to estimate $P(X, C | o, B) \log q_\theta(B, X, C)$, we assume that posterior is concentrated at one optimal $X, C$ for each $o, B$, which we find and optimize jointly with $\theta$ in a MIP that looks like our oneshot parsing problem.
###Code
# Make a super tree and observed node set, which we'll modify to organize our optimization variables.
def infer_tree_dist_by_tree_enum(grammar, base_observed_nodes, observed_node_types, verbose=False):
setup_start_time = time.time()
base_super_tree = make_super_tree(grammar)
base_observed_nodes = deepcopy(base_observed_nodes)
prog = MathematicalProgram()
# Assumption here that super_tree.nodes is consistently ordered before;after deepcopy.
# Testing real quick...
copied_nodes = deepcopy(base_super_tree).nodes
assert([n1.type == n2.type and np.allclose(n1.x, n2.x)
for n1, n2 in zip(base_super_tree.nodes, deepcopy(base_super_tree).nodes)])
# Give nodes consistent and easy-to-get indices.
for k, node in enumerate(base_super_tree.nodes):
node.ind = k
n_nodes = len(base_super_tree.nodes)
n_combinations = 2**n_nodes
all_super_trees = []
all_observed_nodes = []
# Create categorical weights.
#categorical_weights = prog.NewContinuousVariables(n_combinations, "categorical_weights")
#prog.AddLinearConstraint(sum(categorical_weights) == 1.)
#prog.AddBoundingBoxConstraint(0, 1, categorical_weights)
for combo_k in range(n_combinations):
# Convert int to binary with zero padding on the front.
node_activations = [int(i) for i in format(combo_k, '#0%db' % (n_nodes+2))[2:]]
assert(len(node_activations) == n_nodes)
# Pre-check basic tree feasibility, and reject & set weight
# to zero & add no additional variables if it fails.
def active(node):
return node_activations[node.ind]
def is_feasible(node_activations):
for parent in list(base_super_tree.nodes):
children = list(base_super_tree.successors(parent))
# Enforce > 0 children for active nonterminals to have no hanging nonterminals.
if active(parent) and grammar[parent.type].child_type is not None and sum([active(c) for c in children]) == 0:
return False
# Child active -> parent active.
for child in children:
if active(child) and not active(parent):
return False
# Child ordering legal (always take earliest child).
for child, next_child in zip(children[:-1], children[1:]):
if active(next_child) and not active(child):
return False
# Number of active observed node types matches.
for obs_type in observed_node_types:
n_observed = len([o for o in base_observed_nodes if o.type == obs_type])
n_active = len([n for n in base_super_tree if n.type == obs_type and active(n)])
if n_observed != n_active:
return False
return True
if not is_feasible(node_activations):
#prog.AddBoundingBoxConstraint(0., 0., categorical_weights[combo_k])
all_super_trees.append(None)
all_observed_nodes.append(None)
continue
# Now we know this tree *could* create the desired output, and we don't
# need to enforce the above constraints.
print("Got good tree config!")
print("Node activations: ", node_activations)
# Create a supertree and observed not set copy for data storage.
super_tree = deepcopy(base_super_tree)
# Record node active/not for convenience / reference later.
for node in super_tree:
node.active = active(node)
all_super_trees.append(super_tree)
observed_nodes = deepcopy(base_observed_nodes)
all_observed_nodes.append(observed_nodes)
# For each node, if it's not an observed type, create a uniform distribution on its x position.
# Otherwise, create a Delta.
for node_k, node in enumerate(super_tree.nodes):
if node.type in observed_node_types:
node.x_optim = prog.NewContinuousVariables(2, "%d_%d_x_delta" % (combo_k, node_k))
prog.AddBoundingBoxConstraint(np.zeros(2), np.ones(2), node.x_optim)
else:
node.x_optim_lb = prog.NewContinuousVariables(2, "%d_%d_x_lb" % (combo_k, node_k))
                node.x_optim_ub = prog.NewContinuousVariables(2, "%d_%d_x_ub" % (combo_k, node_k))
# Finite support, please...
for k in range(2):
prog.AddLinearConstraint(node.x_optim_lb[k] + 1E-6 <= node.x_optim_ub[k])
prog.AddBoundingBoxConstraint(np.zeros(2), np.ones(2), node.x_optim_lb)
prog.AddBoundingBoxConstraint(np.zeros(2), np.ones(2), node.x_optim_ub)
                if node.type == "root":
# Force to origin
prog.AddBoundingBoxConstraint(np.zeros(2), np.zeros(2), node.x_optim_lb)
prog.AddBoundingBoxConstraint(np.zeros(2), np.zeros(2), node.x_optim_ub)
# We still aren't *guaranteed* feasibility; it's possible that there's no
# physically valid arrangement of these nodes to match the observation. So
# use a slack sort of arrangement to detect infeasibility.
feasibility_indicator = prog.NewBinaryVariables(1, "%d_feas_slack" % combo_k)[0]
#prog.AddLinearConstraint(categorical_weights[combo_k] <= feasibility_indicator)
for parent_node in super_tree.nodes:
children = list(super_tree.successors(parent_node))
# The child support needs to be feasible no matter where the parent
# is drawn in its own support.
# TODO(gizatt) This is overly restrictive? It's like an inner approx
            # of the support region?
            # This constraint is deactivated if the feasibility indicator is off.
if len(children) > 0:
lb, ub = grammar[parent_node.type].bounds
for child_node in children:
if parent_node.type in observed_node_types:
raise NotImplementedError("Observed non-terminal not handled yet. Doable though?")
if child_node.type in observed_node_types:
# | Child location - parent location| <= bounds.
M = 1. # Max position error in an axis
for k in range(2):
prog.AddLinearConstraint(
child_node.x_optim[k] >= parent_node.x_optim_ub[k] + lb[k]
- M * (1. - feasibility_indicator)
)
prog.AddLinearConstraint(
child_node.x_optim[k] <= parent_node.x_optim_lb[k] + ub[k]
+ M * (1. - feasibility_indicator)
)
else:
# Child lb and ub both need to be possible to be
# generated from the parent anywhere the parent can be drawn.
M = 1. # Max position error in an axis
for bound in [child_node.x_optim_lb, child_node.x_optim_ub]:
for k in range(2):
prog.AddLinearConstraint(
bound[k] >= parent_node.x_optim_ub[k] + lb[k]
- M * (1. - feasibility_indicator)
)
prog.AddLinearConstraint(
bound[k] <= parent_node.x_optim_lb[k] + ub[k]
+ M * (1. - feasibility_indicator)
)
# Child supports should be ordered, to break symmetries.
for child, next_child in zip(children[:-1], children[1:]):
if child.type in observed_node_types:
prog.AddLinearConstraint(next_child.x_optim[0] >= child.x_optim[0])
else:
# TODO(gizatt) Upper bounds too? Worried about being too restrictive.
prog.AddLinearConstraint(next_child.x_optim_lb[0] >= child.x_optim_lb[0])
        # Finally, feasibility depends on the existence of a legal correspondence between
        # the active super-tree nodes (of matching type) and the observed nodes.
for n in super_tree:
# (first prep some bookkeeping)
n.outgoings = []
for observed_node in observed_nodes:
possible_sources = [n for n in super_tree if n.type == observed_node.type and n.active]
source_actives = prog.NewBinaryVariables(len(possible_sources), "%d_%s_sources" % (combo_k, observed_node.type))
# Store these variables
observed_node.source_actives = source_actives
for k, n in enumerate(possible_sources):
n.outgoings.append(source_actives[k])
# Each observed node needs exactly one explaining input.
            # (This is relaxed if feasibility is impossible, which allows the
# trivial solution that the observed node is unexplained and supplies no
# constraints to the dead / zero-prob supertree.)
prog.AddLinearEqualityConstraint(sum(source_actives) == feasibility_indicator)
for k, node in enumerate(possible_sources):
M = 1. # Should upper bound positional error in any single dimension
# When correspondence is active, force the node to match the observed node.
# Otherwise, it can vary within a big M of the observed node.
for i in range(2):
prog.AddLinearConstraint(node.x_optim[i] <= observed_node.x[i] + 1E-6 + (1. - source_actives[k]) * M)
prog.AddLinearConstraint(node.x_optim[i] >= observed_node.x[i] - 1E-6 - (1. - source_actives[k]) * M)
# Go back and make sure no node in the super tree is being used
# to explain more than one observed node.
for node in super_tree:
if node.type in observed_node_types:
if len(node.outgoings) > 0:
prog.AddLinearConstraint(sum(node.outgoings) <= 1)
# Finally, pull out probabilities from the tree.
        # Apply a massive negative penalty if this tree arrangement is deactivated,
        # since that would be an admission that P(T) = 0, and deactivate the penalty
# if not.
M = 100.
total_log_prob = -M * (1. - feasibility_indicator) # 0. # = categorical_weights[combo_k]
log_prob_of_tree_structure = 0.
# And sum of log probs of discrete and continuous choices in tree.
for parent_node in super_tree.nodes:
children = list(super_tree.successors(parent_node))
# Geometric value of having this # of children.
active_children = [c for c in children if c.active]
if len(active_children) > 0:
p = grammar[parent_node.type].p
log_prob_of_tree_structure += np.log((1. - p) ** len(active_children) * p)
for child in active_children:
# Density of child support region
if child.type in observed_node_types:
# Delta, no density
pass
else:
# TODO(gizatt) This isn't quite right -- need log() this difference...
# TODO Full maximum entropy derivation, gonna be rough...
total_log_prob += sum(child.x_optim_ub - child.x_optim_lb)
super_tree.total_log_prob = total_log_prob
super_tree.log_prob_of_tree_structure = log_prob_of_tree_structure
super_tree.feasibility_indicator = feasibility_indicator
# Maximize log prob, so min -ll.
prog.AddLinearCost(-total_log_prob)
solver = GurobiSolver()
options = SolverOptions()
logfile = "/tmp/gurobi.log"
os.system("rm %s" % logfile)
options.SetOption(solver.id(), "LogFile", logfile)
if verbose:
print("Num vars: ", prog.num_vars())
print("Num constraints: ", sum([c.evaluator().num_constraints() for c in prog.GetAllConstraints()]))
print("Setup time: ", time.time() - setup_start_time)
solve_start_time = time.time()
result = solver.Solve(prog, None, options)
if verbose:
print("Optimization success?: ", result.is_success())
print("Solve time: ", time.time() - solve_start_time)
print("Logfile: ")
with open(logfile) as f:
print(f.read())
# Post-process a bit: for each feasible tree, grab the computed log-density
# of that tree to build the categorical weights.
categorical_weights_ll = []
for super_tree in all_super_trees:
if super_tree is None or not result.GetSolution(super_tree.feasibility_indicator):
categorical_weights_ll.append(-1E10)
else:
            # log_prob_of_tree_structure is a plain float computed during setup.
            ll = super_tree.log_prob_of_tree_structure
print(result.GetSolution(super_tree.feasibility_indicator), ll)
categorical_weights_ll.append(ll)
# Normalize in log space since some of the values will be nasty.
    categorical_weights_ll = np.array(categorical_weights_ll) - sp.special.logsumexp(categorical_weights_ll)
categorical_weights = np.exp(categorical_weights_ll)
TreeDistInferenceResults = namedtuple(
"TreeDistInferenceResults",
["optim_result", "categorical_weights", "all_super_trees", "all_observed_nodes", "base_observed_nodes", "grammar", "observed_node_types"]
)
return TreeDistInferenceResults(
result, categorical_weights, all_super_trees, all_observed_nodes,
base_observed_nodes, grammar, observed_node_types
)
observed_nodes = deepcopy(get_observed_nodes(example_tree, cluster_grammar_observed_types))
print("Starting with %d observed nodes" % len(observed_nodes))
start_time = time.time()
inference_results = infer_tree_dist_by_tree_enum(
cluster_grammar, observed_nodes, cluster_grammar_observed_types,
verbose=True
)
elapsed = time.time() - start_time
print("Took %f secs" % elapsed)
print("Nonzero categorical weights: ", [c for c in inference_results.categorical_weights if c > 0])
print("Sparsity: %d/%d weights are nonzero." % (
len([c for c in inference_results.categorical_weights if c > 0]),
len(inference_results.categorical_weights)
))
# Draw all ways of explaining the given scene, with support boxes for each node type.
# Sample trees from the results:
from matplotlib.patches import Rectangle
def draw_all_explanations(inference_results, width=None, draw_as_supertree=False):
nonzero_weights_and_combos = [(k, w) for k, w in enumerate(inference_results.categorical_weights) if w > 1E-4]
if width:
        height = int(np.ceil(len(nonzero_weights_and_combos) / width))
plt.figure().set_size_inches(5*width, 5*height)
for k, (combo_k, weight) in enumerate(nonzero_weights_and_combos):
if width:
            plt.subplot(height, width, k+1)
else:
plt.figure().set_size_inches(5, 5)
# Grab that supertree from optimization
optim_result = inference_results.optim_result
super_tree = inference_results.all_super_trees[combo_k]
assert optim_result.GetSolution(super_tree.feasibility_indicator)
observed_nodes = inference_results.all_observed_nodes[combo_k]
observed_node_types = inference_results.observed_node_types
# Sanity-check observed nodes are explained properly.
for observed_node in observed_nodes:
if not np.isclose(np.sum(optim_result.GetSolution(observed_node.source_actives)), 1.):
print("WARN: observed node at %s not explained by MLE sol." % observed_node.x)
illustration_tree = nx.DiGraph()
colors_by_type = {}
node_colors = {}
for node in super_tree:
if node.active:
illustration_tree.add_node(node)
color = plt.get_cmap("jet")(np.random.random())
node_colors[node] = color
# Put node xy at center of support region
# and draw support regions
if node.type in observed_node_types:
# Node xy distribution is delta -> copy it over
node.x = optim_result.GetSolution(node.x_optim)
else:
# Node xy distribution is Uniform -> sample
lb = optim_result.GetSolution(node.x_optim_lb)
ub = optim_result.GetSolution(node.x_optim_ub)
node.x = (lb + ub) / 2.
if not draw_as_supertree:
plt.gca().add_patch(
Rectangle(lb, ub[0]-lb[0], ub[1]-lb[1],
color=color,
alpha=0.5))
parents = list(super_tree.predecessors(node))
assert len(parents) <= 1
if len(parents) == 1:
parent = parents[0]
assert parent.active
illustration_tree.add_edge(parent, node)
# Draw
if draw_as_supertree:
draw_tree(illustration_tree, alpha=0.9, ax=plt.gca(), draw_pos=False)
plt.gca().set_xlim(-2, 2)
plt.gca().set_ylim(-2, 2)
else:
draw_tree(illustration_tree, alpha=0.5, ax=plt.gca(), node_size=50)
binary_string = format(combo_k, '#0%db' % (len(super_tree.nodes)+2))[2:]
plt.title("Combo %0s, weight %f" % (binary_string, weight))
draw_all_explanations(inference_results, draw_as_supertree=True, width=2)
# Sample trees from the results:
def sample_tree_from_results(inference_results):
# Pick which tree structure
weights = inference_results.categorical_weights
combo_k = np.random.choice(len(weights),p=weights)
# Grab that supertree from optimization
optim_result = inference_results.optim_result
super_tree = inference_results.all_super_trees[combo_k]
assert optim_result.GetSolution(super_tree.feasibility_indicator)
observed_nodes = inference_results.all_observed_nodes[combo_k]
observed_node_types = inference_results.observed_node_types
# Sanity-check observed nodes are explained properly.
for observed_node in observed_nodes:
if not np.isclose(np.sum(optim_result.GetSolution(observed_node.source_actives)), 1.):
print("WARN: observed node at %s not explained by MLE sol." % observed_node.x)
sampled_tree = nx.DiGraph()
super_tree = deepcopy(super_tree)
for node in super_tree:
if node.active:
sampled_tree.add_node(node)
# Sample node xy
if node.type in observed_node_types:
# Node xy distribution is delta -> copy it over
node.x = optim_result.GetSolution(node.x_optim)
else:
# Node xy distribution is Uniform -> sample
lb = optim_result.GetSolution(node.x_optim_lb)
ub = optim_result.GetSolution(node.x_optim_ub)
node.x = np.random.uniform(lb, ub)
parents = list(super_tree.predecessors(node))
assert len(parents) <= 1
if len(parents) == 1:
parent = parents[0]
assert parent.active
sampled_tree.add_edge(parent, node)
return sampled_tree, weights[combo_k]
width = 3
height = 3
plt.figure().set_size_inches(5*width, 5*height)
for k in range(width*height):
plt.subplot(width, height, k+1)
sampled_tree, weight = sample_tree_from_results(inference_results)
draw_tree(sampled_tree, alpha=0.75, ax=plt.gca())
plt.title("Combo weight %f" % weight)
# Plot intermediate node density as a heatmap, overlaid on the observed nodes.
from scipy.stats.kde import gaussian_kde
def draw_intermediate_node_heatmaps(inference_results, n_samples=1000):
grammar = inference_results.grammar
observed_node_types = inference_results.observed_node_types
    non_root_non_observed_types = [type for type in grammar.keys() if type not in observed_node_types and type != "root"]
# Sample a bunch of trees
trees = [sample_tree_from_results(inference_results) for k in range(n_samples)]
# Pick out nodes of each type
by_type = {type: [] for type in non_root_non_observed_types}
for tree, weight in trees:
for type in by_type.keys():
by_type[type] += [(n, weight) for n in tree.nodes if n.type == type]
plt.figure().set_size_inches(10, 10)
for k, type in enumerate(by_type.keys()):
ax = plt.subplot(1, len(non_root_non_observed_types), k+1)
x = [n.x[0] for n, _ in by_type[type]]
y = [n.x[1] for n, _ in by_type[type]]
weights = [w for _, w in by_type[type]]
#k = gaussian_kde(np.vstack([x, y]), weights=weights)
#xi, yi = np.mgrid[0:1:0.01,0:1:0.01]
#zi = k(np.vstack([xi.flatten(), yi.flatten()]))
#plt.gca().pcolormesh(xi, yi, zi.reshape(xi.shape), alpha=0.75, cmap=plt.get_cmap("GnBu"))
#plt.gca().contourf(xi, yi, zi.reshape(xi.shape), alpha=1.0, cmap=plt.get_cmap("GnBu"))
plt.scatter(x, y, alpha=0.1)
draw_observed_nodes(inference_results.base_observed_nodes)
plt.title("Intermediate %s positions over %d samples" % (type, n_samples))
plt.gca().add_patch(Rectangle([0, 0], 1, 1, facecolor='none'))
draw_intermediate_node_heatmaps(inference_results, n_samples=1000)
# Try to sample scenes with matching observation
sampled_scenes = []
EPS = 0.1
our_x = np.stack([n.x for n in observed_nodes], axis=0)
print(our_x)
print(our_x.shape)
for k in range(1000000):
if k % 25000 == 0:
print(k)
guess_tree = sample_tree(cluster_grammar)
guess_observed_nodes = deepcopy(get_observed_nodes(guess_tree, cluster_grammar_observed_types))
if len(guess_observed_nodes) == len(observed_nodes):
good = True
for n in guess_observed_nodes:
if np.min(np.sum(np.abs(our_x - n.x), axis=-1)) >= EPS:
good = False
break
if good:
sampled_scenes.append(guess_tree)
print("Got %d scenes" % len(sampled_scenes))
# Plot intermediate node density as a heatmap, overlaid on the observed nodes.
from scipy.stats.kde import gaussian_kde
def draw_intermediate_node_heatmaps_for_sampled_scenes(trees, grammar, observed_node_types):
# Pick out nodes of each type
    non_root_non_observed_types = [type for type in grammar.keys() if type not in observed_node_types and type != "root"]
by_type = {type: [] for type in non_root_non_observed_types}
for tree in trees:
for type in by_type.keys():
by_type[type] += [n for n in tree.nodes if n.type == type]
plt.figure().set_size_inches(10, 10)
for k, type in enumerate(by_type.keys()):
ax = plt.subplot(1, len(non_root_non_observed_types), k+1)
x = [n.x[0] for n in by_type[type]]
y = [n.x[1] for n in by_type[type]]
#k = gaussian_kde(np.vstack([x, y]), weights=weights)
#xi, yi = np.mgrid[0:1:0.01,0:1:0.01]
#zi = k(np.vstack([xi.flatten(), yi.flatten()]))
#plt.gca().pcolormesh(xi, yi, zi.reshape(xi.shape), alpha=0.75, cmap=plt.get_cmap("GnBu"))
#plt.gca().contourf(xi, yi, zi.reshape(xi.shape), alpha=1.0, cmap=plt.get_cmap("GnBu"))
plt.scatter(x, y, alpha=0.1)
draw_observed_nodes(inference_results.base_observed_nodes)
plt.title("Intermediate %s positions over %d samples" % (type, len(trees)))
plt.gca().add_patch(Rectangle([0, 0], 1, 1, facecolor='none'))
draw_intermediate_node_heatmaps_for_sampled_scenes(sampled_scenes, cluster_grammar, cluster_grammar_observed_types)
###Output
_____no_output_____ |
ipynb/fft.ipynb | ###Markdown
Effect of Zero-padding and Windowing on FFT
###Code
# import necessary packages and modules
from scipy import signal
import numpy as np
import matplotlib.pyplot as plt
# create helper function to print out expected and calculated frequency of signal
def print_freq(expected, freq, mag):
print("Expected Frequency: ", expected)
calculated_freq = freq[np.argmax(mag)]
print("Calculated Frequency: ", calculated_freq)
###Output
_____no_output_____
###Markdown
Create a sine wave
###Code
%matplotlib inline
freq = 4
pts = 230
x = np.linspace(0,1,pts)
sine = np.sin(x*freq*2*np.pi)
plt.plot(x,sine)
plt.show()
###Output
_____no_output_____
###Markdown
Here we will see the result of the FFT of this signal without windowing or zero padding. The real FFT will be better for inspection since this signal is real and the negative frequencies are redundant.
###Code
timestep = x[1] - x[0]
magnitude = abs(np.fft.rfft(sine))
frequencies = np.fft.rfftfreq(sine.size,d=timestep)
plt.plot(frequencies,magnitude)
plt.show()
print_freq(freq, frequencies, magnitude)
###Output
_____no_output_____
###Markdown
Let's take the FFT of the signal with zero-padding to the next power of two.
###Code
N = int(np.ceil(np.log2(sine.size)))
pad_len = 2 ** N
magnitude = abs(np.fft.rfft(sine, n=pad_len))
frequencies = np.fft.rfftfreq(pad_len,d=timestep)
plt.plot(frequencies,magnitude)
plt.show()
print_freq(freq, frequencies, magnitude)
###Output
_____no_output_____
###Markdown
Now let's zero-pad to the next-plus-one power of two.
###Code
N = int(np.ceil(np.log2(sine.size)))
pad_len = 2 ** (N + 1)
magnitude = abs(np.fft.rfft(sine, n=pad_len))
frequencies = np.fft.rfftfreq(pad_len,d=timestep)
plt.plot(frequencies,magnitude)
plt.show()
print_freq(freq, frequencies, magnitude)
###Output
_____no_output_____
###Markdown
Here we will see the effect on the FFT when you multiply the signal by a window function (Hann in this case).
###Code
window = signal.hann(sine.size)
magnitude = abs(np.fft.rfft(sine * window))
frequencies = np.fft.rfftfreq(sine.size,d=timestep)
plt.plot(frequencies,magnitude)
plt.show()
print_freq(freq, frequencies, magnitude)
###Output
_____no_output_____
###Markdown
Now let's use a window with zero-padding. We will zero-pad to the next-plus-one power of two to avoid circular convolution.
###Code
N = int(np.ceil(np.log2(sine.size)))
pad_len = 2 ** (N + 1)
window = signal.hann(sine.size)
magnitude = abs(np.fft.rfft(sine * window, n=pad_len))
frequencies = np.fft.rfftfreq(pad_len,d=timestep)
plt.plot(frequencies,magnitude)
plt.show()
print_freq(freq, frequencies, magnitude)
###Output
_____no_output_____
###Markdown
It is more interesting to see the effect of windowing and zero-padding when random Gaussian noise is added to the signal.
###Code
freq = 4
pts = 230
x = np.linspace(0,1,pts)
noise = np.random.randn(pts)
sine = np.sin(x*freq*2*np.pi) + noise
plt.plot(x,sine)
plt.show()
###Output
_____no_output_____
###Markdown
No zero-padding or windowing FFT
###Code
timestep = x[1] - x[0]
magnitude = abs(np.fft.rfft(sine))
frequencies = np.fft.rfftfreq(sine.size,d=timestep)
plt.plot(frequencies,magnitude)
plt.show()
print_freq(freq, frequencies, magnitude)
###Output
_____no_output_____
###Markdown
With zero-padding and windowing
###Code
N = int(np.ceil(np.log2(sine.size)))
pad_len = 2 ** (N + 1)
window = signal.hann(sine.size)
magnitude = abs(np.fft.rfft(sine * window, n=pad_len))
frequencies = np.fft.rfftfreq(pad_len,d=timestep)
plt.plot(frequencies,magnitude)
plt.show()
print_freq(freq, frequencies, magnitude)
###Output
_____no_output_____
###Markdown
Let's see the effect on the FFT of a linear combination of two sinusoids with noise. First, let's make a new print function that will find the two frequencies with the largest magnitude.
###Code
def print_two_freqs(exptd, freq, mag):
rel_max = signal.argrelmax(mag)[0]
sort_mag_rel_max = np.argsort(mag[rel_max])[::-1]
calc_f1, calc_f2 = freq[rel_max[sort_mag_rel_max]][:2]
print("Expected Frequencies: {}, {}".format(exptd[0],exptd[1]))
print("Calculated Frequencies: {}, {}".format(calc_f1, calc_f2))
freq_1 = 4
freq_2 = 20
pts = 230
x = np.linspace(0,1,pts)
noise = np.random.randn(pts)
sine = np.sin(x*freq_1*2*np.pi) + np.sin(x*freq_2*2*np.pi) + noise
plt.plot(x,sine)
plt.show()
###Output
_____no_output_____
###Markdown
No zero-padding or windowing FFT
###Code
magnitude = abs(np.fft.rfft(sine))
frequencies = np.fft.rfftfreq(sine.size,d=timestep)
plt.plot(frequencies,magnitude)
plt.show()
print_two_freqs((freq_1,freq_2), frequencies, magnitude)
###Output
_____no_output_____
###Markdown
With zero-padding and windowing
###Code
N = int(np.ceil(np.log2(sine.size)))
pad_len = 2 ** (N + 1)
window = signal.hann(sine.size)
magnitude = abs(np.fft.rfft(sine * window, n=pad_len))
frequencies = np.fft.rfftfreq(pad_len,d=timestep)
plt.plot(frequencies,magnitude)
plt.show()
print_two_freqs((freq_1,freq_2), frequencies, magnitude)
###Output
_____no_output_____
###Markdown
Let's see the Power Spectral Density (PSD) of the above random signal with no zero-padding or windowing. Since it is a real signal, we can just take the squared magnitude of the FFT to get the PSD.
###Code
magnitude = abs(np.fft.rfft(sine))**2
frequencies = np.fft.rfftfreq(sine.size,d=timestep)
plt.plot(frequencies,magnitude)
plt.show()
print_two_freqs((freq_1,freq_2), frequencies, magnitude)
###Output
_____no_output_____
###Markdown
Zero-padding and windowing before calculating the PSD
###Code
N = int(np.ceil(np.log2(sine.size)))
pad_len = 2 ** (N + 1)
window = signal.hann(sine.size)
magnitude = abs(np.fft.rfft(sine * window, n=pad_len))**2
frequencies = np.fft.rfftfreq(pad_len,d=timestep)
plt.plot(frequencies,magnitude)
plt.show()
print_two_freqs((freq_1,freq_2), frequencies, magnitude)
###Output
_____no_output_____ |
_old_/docker/all/work/connector-examples/Elasticsearch.ipynb | ###Markdown
Example to Read / Write to Elasticsearch with Spark

Documentation: https://www.elastic.co/guide/en/elasticsearch/hadoop/current/spark.html#spark-python
###Code
import pyspark
from pyspark.sql import SparkSession
# ELASTICSEARCH CONFIGURATION
elastic_host = "elasticsearch"
elastic_port = "9200"
# Spark init
spark = SparkSession.builder \
.master("local") \
.appName('jupyter-pyspark') \
.config("spark.jars.packages","org.elasticsearch:elasticsearch-spark-20_2.12:7.15.0")\
.config("spark.es.nodes", elastic_host) \
.config("spark.es.port",elastic_port) \
.getOrCreate()
sc = spark.sparkContext
sc.setLogLevel("ERROR")
# read local data
df = spark.read.option("multiline","true").json("/home/jovyan/datasets/json-samples/stocks.json")
df.toPandas()
# Write to Elastic Under index stocks with default type (_doc)
df.write.mode("Overwrite").format("es").save("stocks/_doc")
# read back from Elasticsearch
df1 = spark.read.format("es").load("stocks/_doc")
df1.toPandas()
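# Optional extra (not in the original example): the elasticsearch-hadoop
# connector can also push a query down to Elasticsearch at read time via the
# "es.query" option; the match_all body below is just an illustration.
df2 = spark.read.format("es") \
    .option("es.query", '{"query": {"match_all": {}}}') \
    .load("stocks/_doc")
df2.toPandas()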
###Output
_____no_output_____ |
Baseline - GRU-GAN.ipynb | ###Markdown
Yahoo S5
###Code
total_scores = {'dataset': [], 'f1': [], 'pr_auc': [], 'roc_auc': []}
for loader in [load_yahoo_A1, load_yahoo_A2, load_yahoo_A3, load_yahoo_A4]:
datasets = loader(8, 4)
x_trains, x_tests, y_tests = datasets['x_train'], datasets['x_test'], datasets['y_test']
for i in tqdm(range(len(x_trains))):
tf.keras.backend.clear_session()
X_train = x_trains[i]
X_test = x_tests[i]
gan = get_gan(X_train)
dataset = tf.data.Dataset.from_tensor_slices(X_train)
dataset = dataset.batch(128, drop_remainder=True).prefetch(1)
train_gan(gan, dataset, 128, X_train.shape[1], X_train.shape[2])
X_test_rec = gan.layers[0].predict(X_test)
scores = evaluate(X_test, X_test_rec, y_tests[i], is_reconstructed=True)
total_scores['dataset'].append(loader.__name__.replace('load_', ''))
total_scores['f1'].append(np.max(scores['f1']))
total_scores['pr_auc'].append(scores['pr_auc'])
total_scores['roc_auc'].append(scores['roc_auc'])
print(loader.__name__.replace('load_', ''), np.max(scores['f1']), scores['pr_auc'], scores['roc_auc'])
yahoo_results = pd.DataFrame(total_scores)
yahoo_results.groupby('dataset').mean()
###Output
_____no_output_____
###Markdown
NASA
###Code
total_scores = {'dataset': [], 'f1': [], 'pr_auc': [], 'roc_auc': []}
for loader in [load_nasa]:
datasets = loader(8, 4)
x_trains, x_tests, y_tests = datasets['x_train'], datasets['x_test'], datasets['y_test']
for i in tqdm(range(len(x_trains))):
tf.keras.backend.clear_session()
X_train = x_trains[i]
X_test = x_tests[i]
gan = get_gan(X_train)
dataset = tf.data.Dataset.from_tensor_slices(X_train)
dataset = dataset.batch(128, drop_remainder=True).prefetch(1)
train_gan(gan, dataset, 128, X_train.shape[1], X_train.shape[2])
X_test_rec = gan.layers[0].predict(X_test)
scores = evaluate(X_test, X_test_rec, y_tests[i], is_reconstructed=True)
total_scores['dataset'].append(f'D{i+1}')
total_scores['f1'].append(np.max(scores['f1']))
total_scores['pr_auc'].append(scores['pr_auc'])
total_scores['roc_auc'].append(scores['roc_auc'])
print(f'D{i+1}', np.max(scores['f1']), scores['pr_auc'], scores['roc_auc'])
nasa_results = pd.DataFrame(total_scores)
nasa_results.groupby('dataset').mean()
###Output
_____no_output_____
###Markdown
SMD
###Code
total_scores = {'dataset': [], 'f1': [], 'pr_auc': [], 'roc_auc': []}
for loader in [load_smd]:
datasets = loader(8, 4)
x_trains, x_tests, y_tests = datasets['x_train'], datasets['x_test'], datasets['y_test']
for i in tqdm(range(len(x_trains))):
tf.keras.backend.clear_session()
X_train = x_trains[i]
X_test = x_tests[i]
gan = get_gan(X_train)
dataset = tf.data.Dataset.from_tensor_slices(X_train)
dataset = dataset.batch(128, drop_remainder=True).prefetch(1)
train_gan(gan, dataset, 128, X_train.shape[1], X_train.shape[2])
X_test_rec = gan.layers[0].predict(X_test)
scores = evaluate(X_test, X_test_rec, y_tests[i], is_reconstructed=True)
total_scores['dataset'].append(loader.__name__.replace('load_', ''))
total_scores['f1'].append(np.max(scores['f1']))
total_scores['pr_auc'].append(scores['pr_auc'])
total_scores['roc_auc'].append(scores['roc_auc'])
print(loader.__name__.replace('load_', ''), np.max(scores['f1']), scores['pr_auc'], scores['roc_auc'])
smd_results = pd.DataFrame(total_scores)
smd_results.groupby('dataset').mean()
###Output
_____no_output_____
###Markdown
ECG
###Code
total_scores = {'dataset': [], 'f1': [], 'pr_auc': [], 'roc_auc': []}
for loader in [load_ecg]:
datasets = loader(4, 2)
x_trains, x_tests, y_tests = datasets['x_train'], datasets['x_test'], datasets['y_test']
for i in tqdm(range(len(x_trains))):
tf.keras.backend.clear_session()
X_train = x_trains[i]
X_test = x_tests[i]
gan = get_gan(X_train)
dataset = tf.data.Dataset.from_tensor_slices(X_train)
dataset = dataset.batch(128, drop_remainder=True).prefetch(1)
train_gan(gan, dataset, 128, X_train.shape[1], X_train.shape[2])
X_test_rec = gan.layers[0].predict(X_test)
scores = evaluate(X_test, X_test_rec, y_tests[i], is_reconstructed=True)
total_scores['dataset'].append(f'D{i+1}')
total_scores['f1'].append(np.max(scores['f1']))
total_scores['pr_auc'].append(scores['pr_auc'])
total_scores['roc_auc'].append(scores['roc_auc'])
print(f'D{i+1}', np.max(scores['f1']), scores['pr_auc'], scores['roc_auc'])
ecg_results = pd.DataFrame(total_scores)
ecg_results.groupby('dataset').mean()
###Output
_____no_output_____
###Markdown
Power Demand
###Code
total_scores = {'dataset': [], 'f1': [], 'pr_auc': [], 'roc_auc': []}
for loader in [load_power_demand]:
datasets = loader(16, 8)
x_trains, x_tests, y_tests = datasets['x_train'], datasets['x_test'], datasets['y_test']
for i in tqdm(range(len(x_trains))):
tf.keras.backend.clear_session()
X_train = x_trains[i]
X_test = x_tests[i]
gan = get_gan(X_train)
dataset = tf.data.Dataset.from_tensor_slices(X_train)
dataset = dataset.batch(128, drop_remainder=True).prefetch(1)
train_gan(gan, dataset, 128, X_train.shape[1], X_train.shape[2])
X_test_rec = gan.layers[0].predict(X_test)
scores = evaluate(X_test, X_test_rec, y_tests[i], is_reconstructed=True)
total_scores['dataset'].append(loader.__name__.replace('load_', ''))
total_scores['f1'].append(np.max(scores['f1']))
total_scores['pr_auc'].append(scores['pr_auc'])
total_scores['roc_auc'].append(scores['roc_auc'])
print(loader.__name__.replace('load_', ''), np.max(scores['f1']), scores['pr_auc'], scores['roc_auc'])
power_results = pd.DataFrame(total_scores)
power_results.groupby('dataset').mean()
###Output
_____no_output_____
###Markdown
2D Gesture
###Code
total_scores = {'dataset': [], 'f1': [], 'pr_auc': [], 'roc_auc': []}
for loader in [load_gesture]:
datasets = loader(4, 2)
x_trains, x_tests, y_tests = datasets['x_train'], datasets['x_test'], datasets['y_test']
for i in tqdm(range(len(x_trains))):
tf.keras.backend.clear_session()
X_train = x_trains[i]
X_test = x_tests[i]
gan = get_gan(X_train)
dataset = tf.data.Dataset.from_tensor_slices(X_train)
dataset = dataset.batch(128, drop_remainder=True).prefetch(1)
train_gan(gan, dataset, 128, X_train.shape[1], X_train.shape[2])
X_test_rec = gan.layers[0].predict(X_test)
scores = evaluate(X_test, X_test_rec, y_tests[i], is_reconstructed=True)
total_scores['dataset'].append(loader.__name__.replace('load_', ''))
total_scores['f1'].append(np.max(scores['f1']))
total_scores['pr_auc'].append(scores['pr_auc'])
total_scores['roc_auc'].append(scores['roc_auc'])
print(loader.__name__.replace('load_', ''), np.max(scores['f1']), scores['pr_auc'], scores['roc_auc'])
gesture_results = pd.DataFrame(total_scores)
gesture_results.groupby('dataset').mean()
###Output
_____no_output_____ |
notebooks/simple-model.ipynb | ###Markdown
Bag of Words Model for the OHSUmed corpus: Establishing a baseline in Keras

The OHSUmed test collection is a subset of the MEDLINE database, a bibliographic database of important, peer-reviewed medical literature maintained by the National Library of Medicine. The subset we consider consists of the first 20,000 documents from the 50,216 medical abstracts of the year 1991. The classification scheme consists of the 23 Medical Subject Headings (MeSH) categories of the cardiovascular diseases group. After selecting this category subset, there are 13,924 documents (6,285 for training and 7,649 for testing). Of the 23 categories of the cardiovascular diseases group, we keep only the frequent labels (those with more than 200 documents) for this baseline.
###Code
# OPTIONAL: Load the "autoreload" extension so that code can change
%load_ext autoreload
# OPTIONAL: always reload modules so that as you change code in src, it gets loaded
%autoreload 2
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
from src.models import metrics
from sklearn.preprocessing import LabelBinarizer, LabelEncoder
from sklearn.metrics import confusion_matrix
from tensorflow import keras
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout
from keras.preprocessing import text, sequence
from keras import utils
# incorporate only frequent labels
def isolate_frequent_labels(X, label_column, threshold_count):
# returns: dataframe with only infrequent labels
df = X.groupby(label_column).size()[X.groupby(label_column).size() > threshold_count].reset_index()
frequent_labels = df.iloc[:,0]
return X[X.label.isin(frequent_labels)]
OHSUcsv = pd.read_csv("../data/processed/ohsumed_abstracts.csv", index_col ="Unnamed: 0")
data = isolate_frequent_labels(OHSUcsv, 'label', 200)
#data = OHSUcsv[OHSUcsv.label.isin(frequent_labels)]
train_posts = data.loc[data.split == 'train', 'doc']
train_tags = data.loc[data.split == 'train', 'label']
test_posts = data.loc[data.split == 'test', 'doc']
test_tags = data.loc[data.split == 'test', 'label']
###Output
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.
###Markdown
Preprocessing with Keras Text Input

Keras has some built-in methods for preprocessing text that make preprocessing simple.

Tokenizer class
The Tokenizer class provides methods to count the unique words in our vocabulary and assign each of those words to indices. We'll create an instance of the Tokenizer class and then pass it the Pandas dataframe of text we want to train on. The Tokenizer takes a 'num_words' argument to limit the text to a certain vocabulary size; here we cap it at the 10,000 most frequent words (the `vocab_size` used below). Note that stopwords were already removed.

fit_on_texts
Calling fit_on_texts() automatically creates a word index lookup of our vocabulary, thereby associating each word with a unique number.

texts_to_matrix
With our Tokenizer, we can now use the texts_to_matrix method to create the training data we'll pass our model. This will take each document's text and turn it into a vocab_size "bag" array, with 1s indicating the indices where words from that document are present in the vocabulary.
###Code
vocab_size = 10000
tokenize = text.Tokenizer(num_words = vocab_size)
tokenize.fit_on_texts(train_posts)
x_train = tokenize.texts_to_matrix(train_posts)
x_test = tokenize.texts_to_matrix(test_posts)
###Output
_____no_output_____
###Markdown
Preprocessing output labels

The tag for each document is a category label. Instead of using a single int as the label for each input, we'll turn it into a one-hot vector. **We feed a one-hot vector to our model instead of a single integer because the model will output a vector of probabilities for each document.** scikit-learn has a **LabelBinarizer class** which makes it easy to build these one-hot vectors. We can pass it the labels column from our Pandas DataFrame and then call fit() and transform() on it:
###Code
encoder = LabelBinarizer()
encoder.fit(train_tags)
y_train = encoder.transform(train_tags)
y_test = encoder.transform(test_tags)
###Output
_____no_output_____
###Markdown
Building a simple multilayer perceptron: Using the Sequential Model API in Keras

To define the layers of our model we'll use the Keras **Sequential model API**, which composes a linear stack of layers.

**1st layer**: The input layer will take the vocab_size array for each abstract. We'll specify this as a Dense layer in Keras, which means each neuron in this layer will be fully connected to all neurons in the next layer. We pass the Dense layer two parameters: the dimensionality of the layer's output (number of neurons) and the shape of our input data. Choosing the number of dimensions requires some experimentation, but most use a power of 2, so we'll start with 512.

**2nd layer**: The final layer will use the Softmax activation function, which normalizes the evidence for each possible label into a probability (from 0 to 1). For each category, there is a True and False label, so we have 2 units. If we were to allow our model to model the probability of all N (mutually-exclusive) categories, this final layer would have N+1 units.

**Training protocol**: We call the compile method with the loss function we want to use, the type of optimizer, and the metrics our model should evaluate during training and testing. We'll use the cross-entropy loss function, since each of our abstracts can only have one label. The optimizer is the function our model uses to minimize loss; in this example we'll use the Adam optimizer. There are many optimizers available, all of which are different implementations of gradient descent. For metrics we'll evaluate accuracy, which will tell us the percentage of abstracts assigned the correct label.
###Code
model = Sequential()
model.add(Dense(512, input_shape=(vocab_size,)))
model.add(Activation('relu'))
model.add(Dense(units=2, activation='sigmoid'))
model.compile(loss='sparse_categorical_crossentropy',
optimizer='adam',
metrics=['binary_accuracy'])
###Output
_____no_output_____
###Markdown
Training the Model: Assigning a Cost Function and Optimizer

To train our model, we'll call the fit() method, pass it our training data and labels, the number of examples to process in each batch (batch size), how many times the model should train on our entire dataset (epochs), and the validation split. validation_split tells Keras what percentage of our training data to reserve for validation.

**Try tweaking these hyperparameters when using this model on your own data!**
###Code
# try to predict only the first label
column_index = 1
label_train = y_train[:,column_index]
history = model.fit(x_train, label_train,
batch_size=32,
epochs=1,
verbose=1,
validation_split=0.1)
y_pred = model.predict(x_test, verbose=1)
print(metrics.get_binary_results(y_pred, y_test, 1))
###Output
returns: dataframe with precision, recall, and F
precision recall f1
0 0.001735 0.021739 0.003213
###Markdown
Evaluation
###Code
y_test[:,0]
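# Not part of the original notebook: a minimal evaluation sketch. Assumes
# y_pred holds the two-unit outputs of model.predict above and column_index
# selects the binary label the model was trained on.
predicted_labels = np.argmax(y_pred, axis=1)
print(confusion_matrix(y_test[:, column_index], predicted_labels))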
###Output
_____no_output_____ |
speed_test/multitread.ipynb | ###Markdown
Multi Processing for librosa
###Code
# NOTE: the imports below (and the `stft` / `waveforms` definitions) do not
# appear in this notebook as saved; these are assumed from how they are used.
import multiprocessing as mp
from multiprocessing import Process, Manager
from time import time
import numpy as np

n_workers = 40
def foo(i):  # compute the spectrogram for one input; results are gathered via apply_async callbacks below
spec = stft(i)
return spec
###Output
_____no_output_____
###Markdown
With appending function (free order)
###Code
start = time()
n_workers = mp.cpu_count()
pool = mp.Pool(n_workers)
result = []
for filename in waveforms:
pool.apply_async(foo, args=(filename, ), callback=result.append)
pool.close() # no more jobs will be added
pool.join()
print('{:2f} seconds are used to finish the conversion'.format(time()-start))
###Output
11.595662 seconds are used to finish the conversion
###Markdown
With appending function (ensure in order)
###Code
start = time()
n_workers = mp.cpu_count()
pool = mp.Pool(n_workers)
result = []
for filename in waveforms:
r = pool.apply_async(foo, args=(filename, ), callback=result.append)
r.wait()
pool.close() # no more jobs will be added
pool.join()
print('{:2f} seconds are used to finish the conversion'.format(time()-start))
###Output
28.850948 seconds are used to finish the conversion
###Markdown
Multi Processing approach
###Code
start = time()
manager = Manager()
L = manager.list() # <-- can be shared between processes.
# `dothing` was not defined in this notebook as saved; a minimal assumed
# implementation that appends each result to the shared managed list:
def dothing(L, i):
    L.append(stft(i))

processes = []
for i in waveforms:
p = Process(target=dothing, args=(L,i)) # Passing the list
p.start()
processes.append(p)
for p in processes:
p.join()
# print(L)
print('{:2f} seconds are used to finish the conversion'.format(time()-start))
###Output
42.939029 seconds are used to finish the conversion
42.939029 seconds are used to finish the conversion
42.939029 seconds are used to finish the conversion
###Markdown
Multi Processing Pool approach
###Code
from multiprocessing import Pool
def f(x):
return stft(x)
start = time()
p = Pool(n_workers)
p.map(f, waveforms)
print('{:2f} seconds are used to finish the conversion'.format(time()-start))
###Output
13.009506 seconds are used to finish the conversion
13.009506 seconds are used to finish the conversion
13.009506 seconds are used to finish the conversion
###Markdown
For loop approach
###Code
L2 = []
start = time()
for i in waveforms:
L2.append(stft(i))
print('{:2f} seconds are used to finish the conversion'.format(time()-start))
###Output
9.738493 seconds are used to finish the conversion
###Markdown
Comparing the results
###Code
counter = 0
for i in range(len(result)):
if np.array_equal(result[i], L2[i]) == True:
counter += 1
counter
###Output
_____no_output_____ |
WEEKS/wk17/d4/Copy_of_ArraysAndStrings.ipynb | ###Markdown
Arrays and Strings

16 Bits = 2 Bytes
8 Bit, 16 Bit, 32 Bit, 64 Bit, 128 Bit

```
[123, "hello"]
A = ["Hello", 232, 100]
A[0] -> @A + offset 0 * 2
A[1] -> @A + offset 1 * 2 => 100 + 1 * 2
offset = index * size of data in bucket

128 64 32 16 8 4 2 1
  0  0  0  0 0 0 0 0
  0  1  1  1 1 0 1 1
```

```
[0x100: 00000000
 0x101: 01111011
 0x102:
 0x103:
 ...
 0x123: "H
 0x124   e
 0x125   l
 0x126   l
 0x127]  o"
```

```python
a = [("bob", (1, 2, 3, "dave"), [{"bob": [1, 2, "Hello"]}], (123, 22)), (2.7), "bob"]
a[0][1][3][1] -> () -> 1 2 3
```

()  123

**CODE**: 3672
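A tiny sketch of the address arithmetic described above (illustrative values only, not from the lecture):

```python
# address of A[index] = base address + index * element size (in bytes)
base_addr = 0x100
element_size = 2  # e.g. 2-byte buckets
for index in range(4):
    print(hex(base_addr + index * element_size))
# 0x100, 0x102, 0x104, 0x106
```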
###Code
[12, 23, 34, 44]
###Output
_____no_output_____
###Markdown
Demo

[1, 2, 3]

 H T U
 1 2 3  + 8

TH H T U
 1  0 0 0  + 1

if number is 9 change it to zero
move to the next power of ten
repeat
###Code
"""
You are given a non-empty array that represents the digits of a non-negative integer.
Write a function that increments the number by 1.
The digits are stored so that the digit representing the most significant place value
is at the beginning of the array. Each element in the array only contains a single digit.
You will not receive a leading 0 in your input array (except for the number 0 itself).
Example 1:
Input: [1,3,2]
Output: [1,3,3]
Explanation: The input array represents the integer 132. 132 + 1 = 133.
Example 2:
Input: [3,2,1,9]
Output: [3,2,2,0]
Explanation: The input array represents the integer 3219. 3219 + 1 = 3220.
Example 3:
Input: [9,9,9]
Output: [1,0,0,0]
Explanation: The input array represents the integer 999. 999 + 1 = 1000.
[0, 0, 0]
n = 3
idx = 3 - 1 - 2
"""
def plus_one(digits):
# Your code here
n = len(digits)
# iterate over the list from right to left
for i in range(n - 1, -1, -1):
# idx = n - 1 - i
idx = i
# if the current digit is a 9 then set it to a 0
if digits[idx] == 9:
digits[idx] = 0
# otherwise increment the current digit and return digits
else:
digits[idx] += 1
return digits
# return digits we only get here if the list was all 9's
return [1] + digits
print(plus_one([1, 3, 2])) # [1, 3, 3]
print(plus_one([3, 2, 1, 9])) # [3, 2, 2, 0]
print(plus_one([9, 9, 9])) # [1, 0, 0, 0]
###Output
[1, 3, 3]
[3, 2, 2, 0]
[1, 0, 0, 0]
|
content/downloads/notebooks/Bayesian Machine Learning I: Bajo la capa de Bayes.ipynb | ###Markdown
Introduction

The goal of this post is to demonstrate the main differences between approaching data analysis from frequentism versus Bayesianism: two statistical schools that share methods but that, at bottom, interpret data in completely different ways. This post will be the first in a series of posts in which I will illustrate the more practical side of probabilistic programming and Bayesian inference applied to machine learning and data analysis.

Philosophical differences

The main differentiating feature between the two theories is how each treats the concept of probability.

For frequentists, probability represents solely and exclusively the case of repeated measurements over a random sample of data. If we want to measure a quantity $X$ of a random (or a priori random) process, we repeat the measurement over and over again, and each time we repeat it we obtain different results. These differences are mainly due to statistical or measurement error (biases). If we take this repetition of measurements to its limit, how often any particular value of the quantity $X$ occurs is what we call its frequency, which is closely related to probability, since a probability is a count of the number of observed events. The frequentist therefore speaks of $X$ as a fixed value.

For Bayesians, probability is a way of quantifying the degree of belief about facts; $X$ can be measured with a probability $P(X)$. While it is true that this probability can be measured from frequencies given a large amount of data, that is precisely what we do not want to do. Big data has blurred the distinction between these two strands of knowledge, but what happens when the data are small and we cannot get the frequencies? Bayesians impose constraints on the model, and probabilities become a way of relating our knowledge (the constraints) to an event. What does this buy us? Quite simply, $X$ is no longer a single value: the probability encodes our prior knowledge given the information we have available (a lot or a little), and we talk about a range of values for $X$, which can be very useful in the "tiny data" case mentioned above.
###Code
import numpy as np
from scipy import stats
import statsmodels.api as sm
from statsmodels.base.model import GenericLikelihoodModel
%matplotlib inline
import matplotlib.pyplot as plt
true_val = 2500
N = 50 # number of measurements
F = stats.poisson(true_val).rvs(N)
e = np.sqrt(F) # errors on Poisson counts estimated via square root
fig, ax = plt.subplots()
ax.errorbar(F, np.arange(N), xerr=e, fmt='ok', ecolor='blue', alpha=0.4)
ax.vlines([true_val], 0, N, linewidth=5, alpha=0.2)
ax.set_xlabel("value");ax.set_ylabel("# measures");
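# Not part of the original draft: a quick frequentist point estimate for
# comparison. For Poisson counts, the maximum-likelihood estimate of the rate
# is the sample mean, with approximate standard error sqrt(mean / N).
mle_rate = F.mean()
std_err = np.sqrt(mle_rate / N)
print("MLE rate estimate: {:.1f} +/- {:.1f}".format(mle_rate, std_err))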
from scipy.stats import rv_continuous
sm_probit_manual = MaxLikelihood(F, exog).fit()
print(sm_probit_manual.summary())
###Output
_____no_output_____ |
src/notebooks/01/3-Pandas-Reference.ipynb | ###Markdown
Introduction to Pandas

Having explored NumPy, it is time to get to know the other workhorse of data science in Python: pandas. The pandas library in Python really does a lot to make working with data--and importing, cleaning, and organizing it--so much easier that it is hard to imagine doing data science in Python without it.

But it was not always this way. Wes McKinney developed the library out of necessity in 2008 while at AQR Capital Management in order to have a better tool for dealing with data analysis. The library has since taken off as an open-source software project that has become a mature and integral part of the data science ecosystem. (In fact, some examples in this section will be drawn from McKinney's book, *Python for Data Analysis*.)

The name 'pandas' actually has nothing to do with Chinese bears but rather comes from the term *panel data*, a form of multi-dimensional data involving measurements over time that comes out of the econometrics and statistics community. Ironically, while panel data is a usable data structure in pandas, it is not generally used today and we will not examine it in this course. Instead, we will focus on the two most widely used data structures in pandas: `Series` and `DataFrame`s.

Reminders about importing and documentation

Just as you imported NumPy under the alias ``np``, we will import Pandas under the alias ``pd``:
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
As with the NumPy convention, `pd` is an important and widely used convention in the data science world; we will use it here and we advise you to use it in your own coding.

As we progress through Section 5, don't forget that IPython provides a tab-completion feature and function documentation with the ``?`` character. If you don't understand anything about a function you see in this section, take a moment and read the documentation; it can help a great deal. As a reminder, to display the built-in pandas documentation, use this code:

```ipython
In [4]: pd?
```

Because it can be useful to learn about `Series` and `DataFrame`s in pandas as extensions of `ndarray`s in NumPy, go ahead and also import NumPy; you will want it for some of the examples later on:
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Now, on to pandas!

Fundamental panda data structures

Both `Series` and `DataFrame`s are a lot like the `ndarray`s you encountered in the last section. They provide clean, efficient data storage and handling at the scales necessary for data science. What both of them provide that `ndarray`s lack, however, are essential data-science features like flexibility when dealing with missing data and the ability to label data. These capabilities (along with others) help make `Series` and `DataFrame`s essential to the "data munging" that makes up so much of data science.

`Series` objects in pandas

A pandas `Series` is a lot like an `ndarray` in NumPy: a one-dimensional array of indexed data. You can create a simple Series from an array of data like this:
###Code
series_example = pd.Series([-0.5, 0.75, 1.0, -2])
series_example
###Output
_____no_output_____
###Markdown
Similar to an `ndarray`, a `Series` upcasts entries to be of the same type of data (that `-2` integer in the original array became a `-2.00` float in the `Series`). What is different from an `ndarray` is that the ``Series`` automatically wraps both a sequence of values and a sequence of indices. These are two separate objects within the `Series` object that you can access with the ``values`` and ``index`` attributes. Try accessing the ``values`` first; they are just a familiar NumPy array:
###Code
series_example.values
###Output
_____no_output_____
###Markdown
The ``index`` is also an array-like object:
###Code
series_example.index
###Output
_____no_output_____
###Markdown
Just as with `ndarray`s, you can access specific data elements in a `Series` via the familiar Python square-bracket index notation and slicing:
###Code
series_example[1]
series_example[1:3]
###Output
_____no_output_____
###Markdown
Despite a lot of similarities, pandas `Series` have an important distinction from NumPy `ndarrays`: whereas `ndarrays` have *implicitly defined* integer indices (as do Python lists), pandas `Series` have *explicitly defined* indices. The best part is that you can set the index:
###Code
series_example2 = pd.Series([-0.5, 0.75, 1.0, -2], index=['a', 'b', 'c', 'd'])
series_example2
###Output
_____no_output_____
###Markdown
These explicit indices work exactly the way you would expect them to:
###Code
series_example2['b']
###Output
_____no_output_____
###Markdown
Exercise:
###Code
# Do explicit Series indices work *exactly* the way you might expect?
# Try slicing series_example2 using its explicit index and find out.
###Output
_____no_output_____
###Markdown
With explicit indices in the mix, a `Series` is basically a fixed-length, ordered dictionary in that it maps arbitrarily typed index values to arbitrarily typed data values. But like `ndarray`s, these data are all of the same type, which is important. Just as the type-specific compiled code behind `ndarray` makes them more efficient than Python lists for certain operations, the type information of pandas ``Series`` makes them much more efficient than Python dictionaries for certain operations. But the connection between `Series` and dictionaries is nevertheless very real: you can construct a ``Series`` object directly from a Python dictionary:
###Code
population_dict = {'France': 65429495,
'Germany': 82408706,
'Russia': 143910127,
'Japan': 126922333}
population = pd.Series(population_dict)
population
###Output
_____no_output_____
###Markdown
Did you see what happened there? The keys `Russia` and `Japan` switched places between the order in which they were entered in `population_dict` and the order in which they ended up in the `population` `Series` object. While Python dictionary keys have no guaranteed order, `Series` keys are ordered. So, at one level, you can interact with `Series` as you would with dictionaries:
###Code
population['Russia']
###Output
_____no_output_____
###Markdown
But you can also do powerful array-like operations with `Series` like slicing:
###Code
# Try slicing on the population Series on your own.
# Would slicing be possible if Series keys were not ordered?
###Output
_____no_output_____
###Markdown
You can also add new elements to a `Series` using index assignment, much as you would add a new key to a dictionary. Try it in the code cell below:
###Code
# Try running population['Albania'] = 2937590 (or another country of your choice)
# What order do the keys appear in when you run population? Is it what you expected?
###Output
_____no_output_____
###Markdown
Another useful `Series` feature (and definitely a difference from dictionaries) is that `Series` automatically aligns differently indexed data in arithmetic operations:
###Code
pop2 = pd.Series({'Spain': 46432074, 'France': 102321, 'Albania': 50532})
population + pop2
###Output
_____no_output_____
###Markdown
Notice that in the case of Germany, Japan, Russia, and Spain (and Albania, depending on what you did in the previous exercise), the addition operation produced `NaN` (not a number) values. pandas does not treat missing values as `0`, but as `NaN` (and it can be helpful to think of arithmetic operations involving `NaN` as essentially `NaN`$ + x=$ `NaN`). `DataFrame` object in pandasThe other crucial data structure in pandas to get to know for data science is the `DataFrame`.Like the ``Series`` object, ``DataFrame``s can be thought of either as generalizations of `ndarray`s (or as specializations of Python dictionaries).Just as a ``Series`` is like a one-dimensional array with flexible indices, a ``DataFrame`` is like a two-dimensional array with both flexible row indices and flexible column names. Essentially, a `DataFrame` represents a rectangular table of data and contains an ordered collection of labeled columns, each of which can be a different value type (`string`, `int`, `float`, etc.).The DataFrame has both a row and column index; in this way you can think of it as a dictionary of `Series`, all of which share the same index.Let's take a look at how this works in practice. We will start by creating a `Series` called `area`:
###Code
area_dict = {'Albania': 28748,
'France': 643801,
'Germany': 357386,
'Japan': 377972,
'Russia': 17125200}
area = pd.Series(area_dict)
area
###Output
_____no_output_____
###Markdown
Now you can combine this with the `population` `Series` you created earlier by using a dictionary to construct a single two-dimensional table containing data from both `Series`:
###Code
countries = pd.DataFrame({'Population': population, 'Area': area})
countries
###Output
_____no_output_____
###Markdown
As with `Series`, note that `DataFrame`s also automatically order indices (in this case, the column indices `Area` and `Population`).So far we have combined dictionaries together to compose a `DataFrame` (which has given our `DataFrame` a row-centric feel), but you can also create `DataFrame`s in a column-wise fashion. Consider adding a `Capital` column using our reliable old array-analog, a list:
###Code
countries['Capital'] = ['Tirana', 'Paris', 'Berlin', 'Tokyo', 'Moscow']
countries
###Output
_____no_output_____
###Markdown
As with `Series`, even though initial indices are ordered in `DataFrame`s, subsequent additions to a `DataFrame` stay in the order added. However, you can explicitly change the order of `DataFrame` column indices this way:
###Code
countries = countries[['Capital', 'Area', 'Population']]
countries
###Output
_____no_output_____
###Markdown
Commonly in a data science context, it is necessary to generate new columns of data from existing data sets. Because `DataFrame` columns behave like `Series`, you can do this by performing operations on them as you would with `Series`:
###Code
countries['Population Density'] = countries['Population'] / countries['Area']
countries
###Output
/home/nbuser/anaconda3_420/lib/python3.5/site-packages/ipykernel/__main__.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
if __name__ == '__main__':
###Markdown
Note: don't worry if IPython gives you a warning over this. The warning is IPython trying to be a little too helpful. The new column you created is an actual part of the `DataFrame` and not a copy of a slice. We have stated before that `DataFrame`s are like dictionaries, and it's true. You can retrieve the contents of a column just as you would the value for a specific key in an ordinary dictionary:
###Code
countries['Area']
###Output
_____no_output_____
###Markdown
What about using the row indices?
###Code
# Now try accessing row data with a command like countries['Japan']
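# For example, uncommenting the next line raises a KeyError, as explained below:
# countries['Japan']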
###Output
_____no_output_____
###Markdown
This returns an error: `DataFrame`s are dictionaries of `Series`, which are the columns. `DataFrame` rows often have heterogeneous data types, so different methods are necessary to access row data. For that, we use the `.loc` method:
###Code
countries.loc['Japan']
###Output
_____no_output_____
###Markdown
Note that what `.loc` returns is an indexed object in its own right and you can access elements within it using familiar index syntax:
###Code
countries.loc['Japan']['Area']
# Can you think of a way to return the area of Japan without using .loc?
# Hint: Try putting the column index first.
# Can you slice along these indices as well?
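# One possible answer: put the column index first, then the row label.
countries['Area']['Japan']
# Slicing works here too, e.g. countries['Area']['France':'Japan']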
###Output
_____no_output_____
###Markdown
Sometimes it is helpful in data science projects to add a column to a `DataFrame` without assigning values to it:
###Code
countries['Debt-to-GDP Ratio'] = np.nan
countries
###Output
/home/nbuser/anaconda3_420/lib/python3.5/site-packages/ipykernel/__main__.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
if __name__ == '__main__':
###Markdown
Again, you can disregard the warning (if it triggers) about adding the column this way.You can also add columns to a `DataFrame` that do not have the same number of rows as the `DataFrame`:
###Code
debt = pd.Series([0.19, 2.36], index=['Russia', 'Japan'])
countries['Debt-to-GDP Ratio'] = debt
countries
###Output
/home/nbuser/anaconda3_420/lib/python3.5/site-packages/ipykernel/__main__.py:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
from ipykernel import kernelapp as app
###Markdown
You can use the `del` command to delete a column from a `DataFrame`:
###Code
del countries['Capital']
countries
###Output
_____no_output_____
###Markdown
In addition to their dictionary-like behavior, `DataFrames` also behave like two-dimensional arrays. For example, it can be useful at times when working with a `DataFrame` to transpose it:
###Code
countries.T
###Output
_____no_output_____
###Markdown
Again, note that `DataFrame` columns are `Series` and thus the data types must be consistent, hence the upcasting to floating-point numbers. **If there had been strings in this `DataFrame`, everything would have been upcast to strings.** Use caution when transposing `DataFrame`s. From a two-dimensional NumPy arrayGiven a two-dimensional array of data, we can create a ``DataFrame`` with any specified column and index names.If omitted, an integer index will be used for each:
###Code
pd.DataFrame(np.random.rand(3, 2),
columns=['foo', 'bar'],
index=['a', 'b', 'c'])
###Output
_____no_output_____
###Markdown
Manipulating data in pandasA huge part of data science is manipulating data in order to analyze it. (One rule of thumb is that 80% of any data science project will be concerned with cleaning and organizing the data for the project.) So it makes sense to learn the tools that pandas provides for handling data in `Series` and especially `DataFrame`s. Because both of those data structures are ordered, let's first start by taking a closer look at what gives them their structure: the `Index`. Index objects in pandasBoth ``Series`` and ``DataFrame``s in pandas have explicit indices that enable you to reference and modify data in them. These indices are actually objects themselves. The ``Index`` object can be thought of as both an immutable array and a fixed-size set. It's worth the time to get to know the properties of the `Index` object. Let's return to an example from earlier in the section to examine these properties.
###Code
series_example = pd.Series([-0.5, 0.75, 1.0, -2], index=['a', 'b', 'c', 'd'])
ind = series_example.index
ind
###Output
_____no_output_____
###Markdown
The ``Index`` works a lot like an array. We have already seen how to use standard Python indexing notation to retrieve values or slices:
###Code
ind[1]
ind[::2]
###Output
_____no_output_____
###Markdown
But ``Index`` objects are immutable; they cannot be modified via the normal means:
###Code
ind[1] = 0
###Output
_____no_output_____
###Markdown
This immutability is a good thing: it makes it safer to share indices between multiple ``Series`` or ``DataFrame``s without the potential for problems arising from inadvertent index modification. In addition to being array-like, an Index also behaves like a fixed-size set, including following many of the conventions used by Python's built-in ``set`` data structure, so that unions, intersections, differences, and other combinations can be computed in a familiar way. Let's play around with this to see it in action.
###Code
ind_odd = pd.Index([1, 3, 5, 7, 9])
ind_prime = pd.Index([2, 3, 5, 7, 11])
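# The set-style operations described in the text, shown as a worked example:
print(ind_odd & ind_prime)   # intersection
print(ind_odd | ind_prime)   # union
print(ind_odd ^ ind_prime)   # symmetric difference
print(ind_odd.intersection(ind_prime))  # equivalent object-method form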
###Output
_____no_output_____
###Markdown
In the code cell below, try out the intersection (`ind_odd & ind_prime`), union (`ind_odd | ind_prime`), and the symmetric difference (`ind_odd ^ ind_prime`) of `ind_odd` and `ind_prime`. These operations may also be accessed via object methods, for example ``ind_odd.intersection(ind_prime)``. Below is a table listing some useful `Index` methods and properties. | **Method** | **Description** ||:---------------|:------------------------------------------------------------------------------------------|| [`append`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.append.html) | Concatenate with additional `Index` objects, producing a new `Index` || [`diff`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.diff.html) | Compute set difference as an Index || [`drop`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop.html) | Compute new `Index` by deleting passed values || [`insert`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.insert.html) | Compute new `Index` by inserting element at index `i` || [`is_monotonic`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.is_monotonic.html) | Returns `True` if each element is greater than or equal to the previous element || [`is_unique`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.is_unique.html) | Returns `True` if the Index has no duplicate values || [`isin`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.isin.html) | Compute boolean array indicating whether each value is contained in the passed collection || [`unique`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.unique.html) | Compute the array of unique values in order of appearance | Data Selection in SeriesAs a refresher, a ``Series`` object acts in many ways like both a one-dimensional `ndarray` and a standard Python dictionary.Like a dictionary, the ``Series`` object provides a mapping from a collection of arbitrary keys to a collection of arbitrary values. Back to an old example:
###Code
series_example2 = pd.Series([-0.5, 0.75, 1.0, -2], index=['a', 'b', 'c', 'd'])
series_example2
series_example2['b']
###Output
_____no_output_____
###Markdown
You can also examine the keys/indices and values using dictionary-like Python tools:
###Code
'a' in series_example2
series_example2.keys()
list(series_example2.items())
###Output
_____no_output_____
###Markdown
Just as you can extend a dictionary by assigning to a new key, you can extend a ``Series`` by assigning to a new index value:
###Code
series_example2['e'] = 1.25
series_example2
###Output
_____no_output_____
###Markdown
Series as one-dimensional arrayBecause ``Series`` also provide array-style functionality, you can use the NumPy techniques we looked at in Section 3 like slices, masking, and fancy indexing:
###Code
# Slicing using the explicit index
series_example2['a':'c']
# Slicing using the implicit integer index
series_example2[0:2]
# Masking
series_example2[(series_example2 > -1) & (series_example2 < 0.8)]
# Fancy indexing
series_example2[['a', 'e']]
###Output
_____no_output_____
###Markdown
One note to avoid confusion. When slicing with an explicit index (i.e., ``series_example2['a':'c']``), the final index is **included** in the slice; when slicing with an implicit index (i.e., ``series_example2[0:2]``), the final index is **excluded** from the slice. Indexers: `loc` and `iloc`A great thing about pandas is that you can use a lot of different things for your explicit indices. A potentially confusing thing about pandas is that you can use a lot of different things for your explicit indices, including integers. To avoid confusion between integer indices that you might supply and those implicit integer indices that pandas generates, pandas provides special *indexer* attributes that explicitly expose certain indexing schemes.(A technical note: These are not functional methods; they are attributes that expose a particular slicing interface to the data in the ``Series``.)The ``loc`` attribute allows indexing and slicing that always references the explicit index:
###Code
series_example2.loc['a']
series_example2.loc['a':'c']
###Output
_____no_output_____
###Markdown
The ``iloc`` attribute enables indexing and slicing using the implicit, Python-style index:
###Code
series_example2.iloc[0]
series_example2.iloc[0:2]
###Output
_____no_output_____
###Markdown
A guiding principle of the Python language is the idea that "explicit is better than implicit." Professional code will generally use explicit indexing with ``loc`` and ``iloc`` and you should as well in order to make your code clean and readable. Data selection in DataFrames``DataFrame``s also exhibit dual behavior, acting both like a two-dimensional `ndarray` and like a dictionary of ``Series`` sharing the same index. DataFrame as dictionary of SeriesLet's return to our earlier example of countries' areas and populations in order to examine `DataFrame`s as a dictionary of `Series`.
###Code
area = pd.Series({'Albania': 28748,
'France': 643801,
'Germany': 357386,
'Japan': 377972,
'Russia': 17125200})
population = pd.Series ({'Albania': 2937590,
'France': 65429495,
'Germany': 82408706,
'Russia': 143910127,
'Japan': 126922333})
countries = pd.DataFrame({'Area': area, 'Population': population})
countries
###Output
_____no_output_____
###Markdown
You can access the individual ``Series`` that make up the columns of a ``DataFrame`` via dictionary-style indexing of the column name:
###Code
countries['Area']
###Output
_____no_output_____
###Markdown
And dictionary-style syntax can also be used to modify `DataFrame`s, such as by adding a new column:
###Code
countries['Population Density'] = countries['Population'] / countries['Area']
countries
###Output
_____no_output_____
###Markdown
DataFrame as two-dimensional arrayYou can also think of ``DataFrame``s as two-dimensional arrays. You can examine the raw data in the `DataFrame`/data array using the ``values`` attribute:
###Code
countries.values
###Output
_____no_output_____
###Markdown
Viewed this way, it makes sense that we can transpose the rows and columns of a `DataFrame` the same way we would an array:
###Code
countries.T
###Output
_____no_output_____
###Markdown
`DataFrame`s also use the ``loc`` and ``iloc`` indexers. With ``iloc``, you can index the underlying array as if it were an `ndarray` but with the ``DataFrame`` index and column labels maintained in the result:
###Code
countries.iloc[:3, :2]
###Output
_____no_output_____
###Markdown
``loc`` also permits array-like slicing but using the explicit index and column names:
###Code
countries.loc[:'Germany', :'Population']
###Output
_____no_output_____
###Markdown
You can also use array-like techniques such as masking and fancy indexing with `loc`.
###Code
# Can you think of how to combine masking and fancy indexing in one line?
# Your masking could be something like countries['Population Density'] > 200
# Your fancy indexing could be something like ['Population', 'Population Density']
# Be sure to put the masking and fancy indexing inside the square brackets: countries.loc[]
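# One possible answer, combining the suggested mask and fancy indexing:
countries.loc[countries['Population Density'] > 200, ['Population', 'Population Density']]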
###Output
_____no_output_____
###Markdown
Indexing conventionsIn practice in the world of data science (and pandas more generally), *indexing* refers to columns while *slicing* refers to rows:
###Code
countries['France':'Japan']
###Output
_____no_output_____
###Markdown
Such slices can also refer to rows by number rather than by index:
###Code
countries[1:3]
###Output
_____no_output_____
###Markdown
Similarly, direct masking operations are also interpreted row-wise rather than column-wise:
###Code
countries[countries['Population Density'] > 200]
###Output
_____no_output_____
###Markdown
These two conventions are syntactically similar to those on a NumPy array, and while these may not precisely fit the mold of the Pandas conventions, they are nevertheless quite useful in practice. Operating on Data in PandasAs you begin to work in data science, operating on data is imperative. It is the very heart of data science. Another aspect of pandas that makes it a compelling tool for many data scientists is pandas' capability to perform efficient element-wise operations on data. pandas builds on ufuncs from NumPy to supply these capabilities and then extends them to provide additional power for data manipulation: - For unary operations (such as negation and trigonometric functions), ufuncs in pandas **preserve index and column labels** in the output. - For binary operations (such as addition and multiplication), pandas automatically **aligns indices** when passing objects to ufuncs.These critical features of ufuncs in pandas mean that data retains its context when operated on; more importantly still, they drastically help reduce errors when you combine data from multiple sources. Index Preservationpandas is explicitly designed to work with NumPy. As a result, all NumPy ufuncs will work on Pandas ``Series`` and ``DataFrame`` objects.We can see this more clearly if we create a simple ``Series`` and ``DataFrame`` of random numbers on which to operate.
###Code
rng = np.random.RandomState(42)
ser_example = pd.Series(rng.randint(0, 10, 4))
ser_example
###Output
_____no_output_____
###Markdown
Did you notice the NumPy function we used with the variable `rng`? By specifying a seed for the random-number generator, you get the same result each time. This can be a useful trick when you need to produce pseudo-random output that also needs to be replicable by others. (Go ahead and re-run the code cell above a couple of times to convince yourself that it produces the same output each time.)
###Code
df_example = pd.DataFrame(rng.randint(0, 10, (3, 4)),
columns=['A', 'B', 'C', 'D'])
df_example
###Output
_____no_output_____
###Markdown
Let's apply a ufunc to our example `Series`:
###Code
np.exp(ser_example)
###Output
_____no_output_____
###Markdown
The same thing happens with a slightly more complex operation on our example `DataFrame`:
###Code
np.cos(df_example * np.pi / 4)
###Output
_____no_output_____
###Markdown
Note that you can use all of the ufuncs we discussed in Section 3 the same way. Index alignmentAs mentioned above, when you perform a binary operation on two ``Series`` or ``DataFrame`` objects, pandas will align indices in the process of performing the operation. This is essential when working with incomplete data (and data is usually incomplete), but it is helpful to see this in action to better understand it. Index alignment with SeriesFor our first example, suppose we are combining two different data sources and find only the top five countries by *area* and the top five countries by *population*:
###Code
area = pd.Series({'Russia': 17075400, 'Canada': 9984670,
'USA': 9826675, 'China': 9598094,
'Brazil': 8514877}, name='area')
population = pd.Series({'China': 1409517397, 'India': 1339180127,
'USA': 324459463, 'Indonesia': 322179605,
'Brazil': 207652865}, name='population')
# Now divide these to compute the population density
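# Dividing the two Series aligns them on the union of their indices:
population / area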
###Output
_____no_output_____
###Markdown
Your resulting array contains the **union** of indices of the two input arrays: seven countries in total. All of the countries in the array without an entry (because they lacked either area data or population data) are marked with the now familiar ``NaN``, or "Not a Number," designation.Index matching works the same way for built-in Python arithmetic expressions, and missing values are filled in with `NaN`s. You can see this clearly by adding two `Series` that are slightly misaligned in their indices:
###Code
series1 = pd.Series([2, 4, 6], index=[0, 1, 2])
series2 = pd.Series([3, 5, 7], index=[1, 2, 3])
series1 + series2
###Output
_____no_output_____
###Markdown
`NaN` values are not always convenient to work with; `NaN` combined with any other values results in `NaN`, which can be a pain, particularly if you are combining multiple data sources with missing values. To help with this, pandas allows you to specify a default value to use for missing values in the operation. For example, calling `series1.add(series2)` is equivalent to calling `series1 + series2`, but you can supply the fill value:
###Code
series1.add(series2, fill_value=0)
###Output
_____no_output_____
###Markdown
Much better! Index alignment with DataFramesThe same kind of alignment takes place in both dimensions (columns and indices) when you perform operations on ``DataFrame``s.
###Code
df1 = pd.DataFrame(rng.randint(0, 20, (2, 2)),
columns=list('AB'))
df1
df2 = pd.DataFrame(rng.randint(0, 10, (3, 3)),
columns=list('BAC'))
df2
# Add df1 and df2. Is the output what you expected?
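# For example:
df1 + df2  # unmatched rows and columns show up as NaN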
###Output
_____no_output_____
###Markdown
Even though we passed the columns in a different order in `df2` than in `df1`, the indices were aligned and correctly sorted in the resulting union of columns.You can also use fill values for missing values with `DataFrame`s. In this example, let's fill the missing values with the mean of all values in `df1` (computed by first stacking the rows of `df1`):
###Code
fill = df1.stack().mean()
df1.add(df2, fill_value=fill)
###Output
_____no_output_____
###Markdown
This table lists Python operators and their equivalent pandas object methods:| Python Operator | Pandas Method(s) ||-----------------|---------------------------------------|| ``+`` | ``add()`` || ``-`` | ``sub()``, ``subtract()`` || ``*`` | ``mul()``, ``multiply()`` || ``/`` | ``truediv()``, ``div()``, ``divide()``|| ``//`` | ``floordiv()`` || ``%`` | ``mod()`` || ``**`` | ``pow()`` | Operations between DataFrames and SeriesIndex and column alignment gets maintained in operations between a `DataFrame` and a `Series` as well. To see this, consider a common operation in data science, wherein we find the difference of a `DataFrame` and one of its rows. Because pandas inherits ufuncs from NumPy, pandas will compute the difference row-wise by default:
###Code
df3 = pd.DataFrame(rng.randint(10, size=(3, 4)), columns=list('WXYZ'))
df3
df3 - df3.iloc[0]
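# The same row-wise difference written with the object method from the table above:
df3.sub(df3.iloc[0])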
###Output
_____no_output_____
###Markdown
But what if you need to operate column-wise? You can do this by using object methods and specifying the ``axis`` keyword.
###Code
df3.subtract(df3['X'], axis=0)
###Output
_____no_output_____
###Markdown
And when you do operations between `DataFrame`s and `Series`, you still get automatic index alignment:
###Code
halfrow = df3.iloc[0, ::2]
halfrow
###Output
_____no_output_____
###Markdown
Note that the output from that operation was transposed. That was so that we can subtract it from the `DataFrame`:
###Code
df3 - halfrow
###Output
_____no_output_____ |
spinup/algos/uncertainty_estimate/Comparison_of_Uncertainty_Estimation_on_Toy_Example-Copy3.ipynb | ###Markdown
Comparison of Uncertainty Estimation on Toy Example
###Code
import numpy as np
import numpy.matlib
import seaborn as sns
import matplotlib.pyplot as plt
import tensorflow as tf
from spinup.algos.uncertainty_estimate.core import MLP, BeroulliDropoutMLP, BootstrappedEnsemble, get_vars, ReplayBuffer
###Output
WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
* https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
* https://github.com/tensorflow/addons
If you depend on functionality not listed there, please file an issue.
###Markdown
Generate Training Data
###Code
# Target from "Deep Exploration via Bootstrapped DQN"
# y = x + sin(alpha*(x+w)) + sin(beta*(x+w)) + w
# w ~ N(mean=0, var=0.03**2)
# Training set: x in (0, 0.6) or (0.8, 1), alpha=4, beta=13
def generate_label(x, noisy=True):
num = len(x)
alpha, beta = 4, 13
if noisy:
sigma = 0.03
else:
sigma = 0
omega = np.random.normal(0, sigma, num)
y = x + np.sin(alpha*(x+omega)) + np.sin(beta*(x+omega)) + omega
return y
def plot_training_data_and_underlying_function(train_size=20, train_s1=0, train_e1=0.6, train_s2=0.8, train_e2=1.4):
x_f = np.arange(-1, 2, 0.005)
# True function
y_f = generate_label(x_f, noisy=False)
# Noisy data
y_noisy = generate_label(x_f, noisy=True)
# Training data
x_train = np.concatenate((np.random.uniform(train_s1, train_e1, int(train_size/2)), np.random.uniform(train_s2, train_e2, int(train_size/2))))
y_train = generate_label(x_train)
plt.figure()
plt.plot(x_f, y_f, color='k')
plt.plot(x_f, y_noisy, '.', color='r', alpha=0.3)
plt.plot(x_train, y_train, '.', color='b')
plt.legend(['underlying function', 'noisy data', '{} training data'.format(train_size)])
plt.tight_layout()
plt.savefig('./underlying_function_for_generating_data.jpg', dpi=300)
plt.show()
return x_train, y_train, x_f, y_f
# sns.set(style="darkgrid", font_scale=1.5)
training_data_size = 200#20#50
x_train, y_train, x_f, y_f = plot_training_data_and_underlying_function(train_size=training_data_size,
train_s1=0, train_e1=0.6, train_s2=0.8, train_e2=1.4)
x_train = x_train.reshape(-1,1)
X_train = np.concatenate([x_train, x_train**2, x_train**3], axis=1)
# X_train = x_train
# X_train = np.concatenate([x_train, x_train, x_train], axis=1)
X_train.shape
###Output
_____no_output_____
###Markdown
Build Neural Networks
###Code
seed=0
x_dim=X_train.shape[1]
y_dim = 1
hidden_sizes = [300, 300]
x_low = -10
x_high = 10
max_steps=int(1e6)
learning_rate=1e-3
batch_size=100
replay_size=int(1e6)
BerDrop_n_post=50#100
dropout_rate = 0.05
bootstrapp_p = 0.75
tf.set_random_seed(seed)
np.random.seed(seed)
# Define input placeholder
x_ph = tf.placeholder(dtype=tf.float32, shape=(None, x_dim))
y_ph = tf.placeholder(dtype=tf.float32, shape=(None, y_dim))
layer_sizes = hidden_sizes + [y_dim]
hidden_activation=tf.keras.activations.relu
output_activation = tf.keras.activations.linear
# 1. Create MLP to learn RTN:
# which is only used for generating target value.
mlp_replay_buffer = ReplayBuffer(x_dim=x_dim, y_dim=y_dim, size=replay_size)
with tf.variable_scope('MLP'):
mlp = MLP(layer_sizes, hidden_activation=hidden_activation, output_activation=output_activation)
mlp_y = mlp(x_ph)
mlp_loss = tf.reduce_mean((y_ph - mlp_y)**2) # mean-square-error
mlp_optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
mlp_train_op = mlp_optimizer.minimize(mlp_loss, var_list=mlp.variables)
# 2. Create BernoulliDropoutMLP:
# which is trained with dropout masks and regularization term
with tf.variable_scope('BernoulliDropoutUncertaintyTrain'):
bernoulli_dropout_mlp = BeroulliDropoutMLP(layer_sizes, weight_regularizer=1e-6, dropout_rate=dropout_rate,
hidden_activation = hidden_activation,
output_activation = output_activation)
ber_drop_mlp_y = bernoulli_dropout_mlp(x_ph, training=True) # Must set training=True to use dropout mask
ber_drop_mlp_reg_losses = tf.reduce_sum(
tf.losses.get_regularization_losses(scope='BernoulliDropoutUncertaintyTrain'))
ber_drop_mlp_loss = tf.reduce_sum(
(y_ph - ber_drop_mlp_y) ** 2 + ber_drop_mlp_reg_losses) # TODO: heteroscedastic loss
ber_drop_mlp_optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
ber_drop_mlp_train_op = ber_drop_mlp_optimizer.minimize(ber_drop_mlp_loss,
var_list=bernoulli_dropout_mlp.variables)
# 3. Create lazy BernoulliDropoutMLP:
# which copys weights from MLP by:
# lazy_bernoulli_dropout_mlp_sample.set_weights(mlp.get_weights())
# then post sample predictions with dropout masks.
with tf.variable_scope('LazyBernoulliDropoutUncertaintySample'):
lazy_bernoulli_dropout_mlp = BeroulliDropoutMLP(layer_sizes, weight_regularizer=1e-6, dropout_rate=dropout_rate,
hidden_activation=hidden_activation,
output_activation=output_activation)
lazy_ber_drop_mlp_y = lazy_bernoulli_dropout_mlp(x_ph, training=True) # Set training=True to sample with dropout masks
lazy_ber_drop_mlp_update = tf.group([tf.assign(v_lazy_ber_drop_mlp, v_mlp)
for v_mlp, v_lazy_ber_drop_mlp in zip(mlp.variables, lazy_bernoulli_dropout_mlp.variables)])
# Create BootstrappedEnsembleNN
with tf.variable_scope('BootstrappedEnsembleUncertainty'):
boots_ensemble = BootstrappedEnsemble(ensemble_size=BerDrop_n_post, x_dim=x_dim, y_dim=y_dim, replay_size=replay_size,
x_ph=x_ph, y_ph=y_ph, layer_sizes=layer_sizes,
hidden_activation=hidden_activation,
output_activation=output_activation,
learning_rate=learning_rate)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
###Output
_____no_output_____
###Markdown
Training
###Code
# Add training set to bootstrapped_ensemble
for i in range(X_train.shape[0]):
boots_ensemble.add_to_replay_buffer(X_train[i], y_train[i], bootstrapp_p=bootstrapp_p)
training_epoches = 500#1000#500
ber_drop_mlp_train_std = np.zeros((training_epoches,))
ber_drop_mlp_train_loss = np.zeros((training_epoches,))
lazy_ber_drop_mlp_train_std = np.zeros((training_epoches,))
lazy_ber_drop_mlp_train_loss = np.zeros((training_epoches,))
boots_ensemble_train_std = np.zeros((training_epoches,))
boots_ensemble_train_loss = np.zeros((training_epoches,))
for ep_i in range(training_epoches):
if ep_i%100==0:
print('epoch {}'.format(ep_i))
# TODO: uncertainty on training set
# repmat X_train for post sampling: N x BerDrop_n_post x x_dim
ber_drop_mlp_post = np.zeros([X_train.shape[0], BerDrop_n_post, y_dim])
lazy_ber_drop_mlp_post = np.zeros([X_train.shape[0], BerDrop_n_post, y_dim])
boots_ensemble_post = np.zeros([X_train.shape[0], BerDrop_n_post, y_dim])
for x_i in range(X_train.shape[0]):
x_post = np.matlib.repmat(X_train[x_i,:], BerDrop_n_post, 1) # repmat x for post sampling
# BernoulliDropoutMLP
ber_drop_mlp_post[x_i,:,:] = sess.run(ber_drop_mlp_y, feed_dict={x_ph: x_post})
# LazyBernoulliDropoutMLP
lazy_ber_drop_mlp_post[x_i,:,:] = sess.run(lazy_ber_drop_mlp_y, feed_dict={x_ph: x_post})
# BootstrappedEnsemble
boots_ensemble_post[x_i,:,:] = boots_ensemble.prediction(sess, X_train[x_i,:])
    # Average std on training set
ber_drop_mlp_train_std[ep_i] = np.mean(np.std(ber_drop_mlp_post,axis=1))
lazy_ber_drop_mlp_train_std[ep_i] = np.mean(np.std(lazy_ber_drop_mlp_post,axis=1))
boots_ensemble_train_std[ep_i] = np.mean(np.std(boots_ensemble_post,axis=1))
# Train MLP
mlp_outs = sess.run([mlp_loss, mlp_train_op], feed_dict={x_ph: X_train, y_ph: y_train.reshape(-1,y_dim)})
lazy_ber_drop_mlp_train_loss[ep_i] = mlp_outs[0]
sess.run(lazy_ber_drop_mlp_update) # copy weights
# Train BernoulliDropoutMLP on the same batch with MLP
ber_drop_outs = sess.run([ber_drop_mlp_loss, ber_drop_mlp_train_op], feed_dict={x_ph:X_train, y_ph: y_train.reshape(-1,y_dim)})
ber_drop_mlp_train_loss[ep_i] = ber_drop_outs[0]
# Train BootstrappedEnsemble
boots_ensemble_loss = boots_ensemble.train(sess, batch_size)
boots_ensemble_train_loss[ep_i] = np.mean(boots_ensemble_loss)
marker = '.'
markersize = 1
# Loss
f, axes = plt.subplots(1, 3)
f.set_figwidth(18)
f.set_figheight(3.5)
axes[0].plot(lazy_ber_drop_mlp_train_loss, marker, markersize=markersize)
axes[0].set_title('LazyBernoulliDropout (MLP) Average Training Loss')
axes[0].set_xlabel('Training Epochs')
axes[0].set_ylabel('Loss Value on Training Data')
axes[1].plot(ber_drop_mlp_train_loss, marker, markersize=markersize)
axes[1].set_title('BernoulliDropout Average Training Loss')
axes[1].set_xlabel('Training Epochs')
axes[2].plot(boots_ensemble_train_loss, marker, markersize=markersize)
axes[2].set_title('BootsEnsemble Average Training Loss')
axes[2].set_xlabel('Training Epochs')
f.savefig('./toy_example_loss_on_training_data.jpg', dpi=300)
# Uncertainty
f, axes = plt.subplots(1, 3, sharey=True)
f.set_figwidth(18)
f.set_figheight(3.5)
axes[0].plot(lazy_ber_drop_mlp_train_std, markersize=markersize)
axes[0].set_title('Lazy Bernoulli Dropout Average Uncertainty')
axes[0].set_xlabel('Training Epochs')
axes[0].set_ylabel('Average Uncertainty on Trainig Data')
axes[1].plot(ber_drop_mlp_train_std,marker, markersize=markersize)
axes[1].set_title('Bernoulli Dropout Average Uncertainty')
axes[1].set_xlabel('Training Epochs')
axes[2].plot(boots_ensemble_train_std, marker, markersize=markersize)
axes[2].set_title('Bootstrapped Ensemble Average Uncertainty')
axes[2].set_xlabel('Training Epochs')
f.savefig('./toy_example_uncertainty_on_training_data.jpg', dpi=300)
###Output
_____no_output_____
###Markdown
Post Sampling to Estimate Uncertainty
###Code
x_test = np.arange(-1, 2, 0.005)
x_test = x_test.reshape(-1,1)
X_test = np.concatenate([x_test, x_test**2, x_test**3], axis=1)
# X_test = x_test
# X_test = np.concatenate([x_test, x_test, x_test], axis=1)
X_test.shape
# post sampling
mlp_postSamples = np.zeros([X_test.shape[0], BerDrop_n_post, y_dim])
ber_drop_mlp_postSamples = np.zeros([X_test.shape[0], BerDrop_n_post, y_dim])
lazy_ber_drop_mlp_postSamples = np.zeros([X_test.shape[0], BerDrop_n_post, y_dim])
boots_ensemble_postSamples = np.zeros([X_test.shape[0], BerDrop_n_post, y_dim])
for i in range(X_test.shape[0]):
x = X_test[i,:]
x_postSampling = np.matlib.repmat(x, BerDrop_n_post, 1) # repmat x for post sampling
# MLP
mlp_postSamples[i,:,:] = sess.run(mlp_y, feed_dict={x_ph: x_postSampling})
# BernoulliDropoutMLP
ber_drop_mlp_postSamples[i,:,:] = sess.run(ber_drop_mlp_y, feed_dict={x_ph: x_postSampling})
# LazyBernoulliDropoutMLP
sess.run(lazy_ber_drop_mlp_update) # copy weights
lazy_ber_drop_mlp_postSamples[i,:,:] = sess.run(lazy_ber_drop_mlp_y, feed_dict={x_ph: x_postSampling})
# BootstrappedEnsemble
boots_ensemble_postSamples[i,:,:] = boots_ensemble.prediction(sess, x)
mlp_mean = np.mean(mlp_postSamples,axis=1)
mlp_std = np.std(mlp_postSamples,axis=1)
ber_drop_mlp_mean = np.mean(ber_drop_mlp_postSamples,axis=1)
ber_drop_mlp_std = np.std(ber_drop_mlp_postSamples,axis=1)
lazy_ber_drop_mlp_mean = np.mean(lazy_ber_drop_mlp_postSamples,axis=1)
lazy_ber_drop_mlp_std = np.std(lazy_ber_drop_mlp_postSamples,axis=1)
boots_ensemble_mean = np.mean(boots_ensemble_postSamples,axis=1)
boots_ensemble_std = np.std(boots_ensemble_postSamples,axis=1)
markersize = 5
f, axes = plt.subplots(1,4,sharey=True)
# f.suptitle('n_training_data={}, n_post_samples={}, dropout_rate={}, n_trainig_epochs={}, bootstrapp_p={}'.format(training_data_size,
# BerDrop_n_post,
# dropout_rate,
# training_epoches,
# bootstrapp_p),
# fontsize=20)
f.set_figwidth(20)
f.set_figheight(4)
axes[0].plot(x_test, mlp_mean, 'k')
axes[0].plot(x_train, y_train, 'r.', markersize=markersize)
axes[0].plot(x_f, y_f,'m', alpha=0.5)
axes[0].fill_between(x_test.flatten(),
(mlp_mean+mlp_std).flatten(),
(mlp_mean-mlp_std).flatten())
axes[0].set_title('MLP', fontsize=15)
axes[1].plot(x_test, lazy_ber_drop_mlp_mean, 'k')
axes[1].plot(x_train, y_train, 'r.', markersize=markersize)
axes[1].plot(x_f, y_f,'m', alpha=0.5)
axes[1].fill_between(x_test.flatten(),
(lazy_ber_drop_mlp_mean+lazy_ber_drop_mlp_std).flatten(),
(lazy_ber_drop_mlp_mean-lazy_ber_drop_mlp_std).flatten())
axes[1].set_title('LazyBernoulliDropoutMLP', fontsize=15)
axes[2].plot(x_test, ber_drop_mlp_mean, 'k')
axes[2].plot(x_train, y_train, 'r.', markersize=markersize)
axes[2].plot(x_f, y_f,'m', alpha=0.5)
axes[2].fill_between(x_test.flatten(),
(ber_drop_mlp_mean+ber_drop_mlp_std).flatten(),
(ber_drop_mlp_mean-ber_drop_mlp_std).flatten())
axes[2].set_title('BernoulliDropoutMLP', fontsize=15)
prediction_mean_h, = axes[3].plot(x_test, boots_ensemble_mean, 'k')
training_data_h, = axes[3].plot(x_train, y_train, 'r.', markersize=markersize)
underlying_function_h, = axes[3].plot(x_f, y_f,'m', alpha=0.5)
prediction_std_h = axes[3].fill_between(x_test.flatten(),
(boots_ensemble_mean+boots_ensemble_std).flatten(),
(boots_ensemble_mean-boots_ensemble_std).flatten())
axes[3].set_title('BootstrappedEnsemble', fontsize=15)
axes[3].set_ylim(-6, 9)
axes[0].legend(handles=[underlying_function_h, training_data_h, prediction_mean_h, prediction_std_h],
labels=['underlying function', '{} training data'.format(training_data_size), 'prediction mean', 'prediction mean $\pm$ standard deviation'])
plt.tight_layout()
f.subplots_adjust(top=0.8)
plt.savefig('./toy_example_comparison_of_uncertainty_estimation.jpg', dpi=300)
###Output
_____no_output_____ |
AppendixIV/Training_OpenAI_GPT_2.ipynb | ###Markdown
Training OpenAI GPT-2Copyright 2020, Denis Rothman MIT License. Denis Rothman created the Colab notebook using the OpenAI repository, adding title steps for educational purposes only.It is important to note that we are running a low-level GPT-2 model and not a one-line call to obtain a result. We are also avoiding pre-packaged versions. We are getting our hands dirty to understand the architecture of a GPT-2 from scratch. You might get some deprecation messages. However, the effort is worthwhile.***Code References***[Reference: OpenAI Repository](https://github.com/openai/gpt-2)The repository was cloned and adapted to N Shepperd's repository.[Reference: N Shepperd Repository](https://github.com/nshepperd/gpt-2)The repository was not cloned. N Shepperd's training programs were inserted into the OpenAI Repository. The list of N Shepperd's programs is cited in the 'N Shepperd' section of the notebook. Some programs were modified for educational purposes only to work with this notebook.***Model Reference Paper***[Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever,2019,'Language Models are Unsupervised Multitask Learners'](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf)***Step 1: Pre-requisites:***a) activate GPU in the notebook settings runTime menu b) Upload the following program files and dset.txt (dataset) with the file manager: train.py,load_dataset.py,encode.py,accumulate.py,memory_saving_gradients.py,dset.txt
###Code
#@title Step 2: Cloning the OpenAI GPT-2 Repository
#!git clone https://github.com/nshepperd/gpt-2.git
!git clone https://github.com/openai/gpt-2.git
#@title Step 3: Installing the requirements
import os # when the VM restarts import os necessary
os.chdir("/content/gpt-2")
!pip3 install -r requirements.txt
!pip install toposort
#@title Step 4: Checking TensorFlow version
#Colab has tf 1.x and tf 2.x installed
#Restart runtime using 'Runtime' -> 'Restart runtime...'
%tensorflow_version 1.x
import tensorflow as tf
print(tf.__version__)
#@title Step 5: Downloading 117M parameter GPT-2 Model
# run code and send argument
import os # after runtime is restarted
os.chdir("/content/gpt-2")
!python3 download_model.py '117M' #creates model directory
#@title Step 6: Copying the Project Resources to scr
!cp /content/dset.txt /content/gpt-2/src/
!cp -r /content/gpt-2/models/ /content/gpt-2/src/
#@title Step 7: Copying the N Shepperd Training Files
#Referfence GitHub repository: https://github.com/nshepperd/gpt-2
import os # import after runtime is restarted
!cp /content/train.py /content/gpt-2/src/
!cp /content/load_dataset.py /content/gpt-2/src/
!cp /content/encode.py /content/gpt-2/src/
!cp /content/accumulate.py /content/gpt-2/src/
!cp /content/memory_saving_gradients.py /content/gpt-2/src/
#@title Step 8:Encoding dataset
import os # import after runtime is restarted
os.chdir("/content/gpt-2/src/")
model_name="117M"
!python /content/gpt-2/src/encode.py dset.txt out.npz
###Output
Reading files
100% 1/1 [00:10<00:00, 10.06s/it]
Writing out.npz
###Markdown
Step 9 trains the GPT-2 117M model on the encoded dataset. The training loop saves a model checkpoint after 1,000 steps, which is used in the following steps.
###Code
#@title Step 9:Training the Model
#Model saved after 1000 steps
import os # import after runtime is restarted
os.chdir("/content/gpt-2/src/")
!python train.py --dataset out.npz
#@title Step 10: Creating a Training Model directory
#Creating a Training Model directory named 'tgmodel'
import os
run_dir = '/content/gpt-2/models/tgmodel'
if not os.path.exists(run_dir):
os.makedirs(run_dir)
#@title Step 10A: Copying training Files
!cp /content/gpt-2/src/checkpoint/run1/model-1000.data-00000-of-00001 /content/gpt-2/models/tgmodel
!cp /content/gpt-2/src/checkpoint/run1/checkpoint /content/gpt-2/models/tgmodel
!cp /content/gpt-2/src/checkpoint/run1/model-1000.index /content/gpt-2/models/tgmodel
!cp /content/gpt-2/src/checkpoint/run1/model-1000.meta /content/gpt-2/models/tgmodel
#@title Step 10B: Copying the OpenAI GPT-2 117M Model files
!cp /content/gpt-2/models/117M/encoder.json /content/gpt-2/models/tgmodel
!cp /content/gpt-2/models/117M/hparams.json /content/gpt-2/models/tgmodel
!cp /content/gpt-2/models/117M/vocab.bpe /content/gpt-2/models/tgmodel
#@title Step 10C: Renaming the model directories
import os
!mv /content/gpt-2/models/117M /content/gpt-2/models/117M_OpenAI
!mv /content/gpt-2/models/tgmodel /content/gpt-2/models/117M
###Output
_____no_output_____
###Markdown
Step 11 generates unconditional samples from the retrained 117M model.
###Code
#@title Step 11: Generating Unconditional Samples
import os # import after runtime is restarted
os.chdir("/content/gpt-2/src")
!python generate_unconditional_samples.py --model_name '117M'
#@title Step 12: Interactive Context and Completion Examples
import os # import after runtime is restarted
os.chdir("/content/gpt-2/src")
!python interactive_conditional_samples.py --temperature 0.8 --top_k 40 --model_name '117M'
###Output
WARNING:tensorflow:From interactive_conditional_samples.py:57: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.
2021-06-17 10:13:24.531482: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2021-06-17 10:13:24.558429: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-17 10:13:24.559022: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 0 with properties:
name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59
pciBusID: 0000:00:04.0
2021-06-17 10:13:24.559380: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2021-06-17 10:13:24.561035: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2021-06-17 10:13:24.562597: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2021-06-17 10:13:24.562925: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2021-06-17 10:13:24.564438: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2021-06-17 10:13:24.565137: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2021-06-17 10:13:24.567984: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2021-06-17 10:13:24.568105: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-17 10:13:24.568709: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-17 10:13:24.569216: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1767] Adding visible gpu devices: 0
2021-06-17 10:13:24.569564: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX512F
2021-06-17 10:13:24.573608: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2000194999 Hz
2021-06-17 10:13:24.573793: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x559017c26d80 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-06-17 10:13:24.573820: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2021-06-17 10:13:24.672175: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-17 10:13:24.672893: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x559017c26bc0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2021-06-17 10:13:24.672925: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla T4, Compute Capability 7.5
2021-06-17 10:13:24.673089: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-17 10:13:24.673652: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 0 with properties:
name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59
pciBusID: 0000:00:04.0
2021-06-17 10:13:24.673721: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2021-06-17 10:13:24.673745: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2021-06-17 10:13:24.673765: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2021-06-17 10:13:24.673784: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2021-06-17 10:13:24.673807: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2021-06-17 10:13:24.673826: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2021-06-17 10:13:24.673845: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2021-06-17 10:13:24.673917: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-17 10:13:24.674470: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-17 10:13:24.674969: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1767] Adding visible gpu devices: 0
2021-06-17 10:13:24.675034: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2021-06-17 10:13:24.676149: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1180] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-06-17 10:13:24.676176: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1186] 0
2021-06-17 10:13:24.676190: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 0: N
2021-06-17 10:13:24.676305: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-17 10:13:24.676886: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-17 10:13:24.677388: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
2021-06-17 10:13:24.677428: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14257 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5)
WARNING:tensorflow:From interactive_conditional_samples.py:58: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
WARNING:tensorflow:From interactive_conditional_samples.py:60: The name tf.set_random_seed is deprecated. Please use tf.compat.v1.set_random_seed instead.
WARNING:tensorflow:From /content/gpt-2/src/sample.py:51: The name tf.AUTO_REUSE is deprecated. Please use tf.compat.v1.AUTO_REUSE instead.
WARNING:tensorflow:From /content/gpt-2/src/model.py:148: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.
WARNING:tensorflow:From /content/gpt-2/src/model.py:152: The name tf.get_variable is deprecated. Please use tf.compat.v1.get_variable instead.
WARNING:tensorflow:From /content/gpt-2/src/model.py:36: The name tf.rsqrt is deprecated. Please use tf.math.rsqrt instead.
WARNING:tensorflow:From /content/gpt-2/src/sample.py:64: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
WARNING:tensorflow:From /content/gpt-2/src/sample.py:16: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
WARNING:tensorflow:From /content/gpt-2/src/sample.py:67: multinomial (from tensorflow.python.ops.random_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.random.categorical` instead.
WARNING:tensorflow:From interactive_conditional_samples.py:68: The name tf.train.Saver is deprecated. Please use tf.compat.v1.train.Saver instead.
Model prompt >>> Human reason, in one sphere of its cognition, is called upon to consider questions, which it cannot decline, as they are presented by its own nature, but which it cannot answer, as they transcend every faculty of the mind.
2021-06-17 10:13:41.056992: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
======================================== SAMPLE 1 ========================================
The first question, to which it is obliged to respond, is, how can it be true that a mind which is ignorant of the truth of any proposition can, or ought, to, accept any proposition which it has not given it? And this question has been so well answered that it is impossible for any man to believe in the existence of any mind which has never received any information. How can a man who is ignorant of the truth of any proposition accept a proposition which he does not understand? And this is the very question which has been so frequently answered, that it is the most difficult to believe in the existence of any mind which does not receive it.
The philosophers have, in their great study of the subject, to deal with the question of the mind at all. The philosophers have not done so well, because they have not dealt with the subject with an impartial spirit. They have not engaged in a great deal of investigation, because they have not yet come to an end. But they have in their philosophy an important object, which is, that every man may be persuaded to consider the existence of any mind which he does not understand. They have, therefore, so much as they have to say on this subject, and so little to say on the other. And this very matter is the subject as a whole, and not only on the ground, that, no matter how much they speak of the subject, they have not treated it with a moral, and have not even tried to determine the subject by it."
"It is not the mind of any man, as far as we can judge, which is the subject of any philosophical inquiry. It is the mind of the minds of men, in their opinion, which they consider the most to be their most important subject. And if they can see through this, they will see it, and they will understand it, and they will understand it."
"You see, then, that the mind of any man is not the object of any philosophical inquiry. You are, indeed, able to see through all the world, and even all the facts which come up before you. And if you can see through the world of things, you will know how very much they are; and, if anything happens to the world, you will understand what it is."
"Then you do not know, then, that the mind of any man is not the object of any philosophical inquiry. That is the mind of a man, that is, the mind of a man
================================================================================
Model prompt >>> Traceback (most recent call last):
File "/usr/lib/python3.7/contextlib.py", line 130, in __exit__
self.gen.throw(type, value, traceback)
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/framework/ops.py", line 5480, in get_controller
yield g
File "interactive_conditional_samples.py", line 73, in interact_model
raw_text = input("Model prompt >>> ")
KeyboardInterrupt
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "interactive_conditional_samples.py", line 91, in <module>
fire.Fire(interact_model)
File "/usr/local/lib/python3.7/dist-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/usr/local/lib/python3.7/dist-packages/fire/core.py", line 471, in _Fire
target=component.__name__)
File "/usr/local/lib/python3.7/dist-packages/fire/core.py", line 681, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "interactive_conditional_samples.py", line 88, in interact_model
print("=" * 80)
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/client/session.py", line 1633, in __exit__
close_thread.start()
File "/usr/lib/python3.7/threading.py", line 857, in start
self._started.wait()
File "/usr/lib/python3.7/threading.py", line 552, in wait
signaled = self._cond.wait(timeout)
File "/usr/lib/python3.7/threading.py", line 296, in wait
waiter.acquire()
KeyboardInterrupt
^C
|
project/distribution_tests/exact_multivariate_ampl_distributions_python.ipynb | ###Markdown
Exact Multivariate Amplitude Distributions Implementation Test Juan Camilo Henao Londono
###Code
# Modules
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
# Gamma function
from scipy.special import gamma
# Modified Bessel function of the second kind of real order v
from scipy.special import kv
# Gauss hypergeometric function 2F1(a, b; c; z)
from scipy.special import hyp2f1
# Confluent hypergeometric function U
from scipy.special import hyperu
# Parameters
returns = np.arange(-10, 11, 0.01)
N = 5
Lambda = 1
K = 100
L = 55
l = 55
###Output
_____no_output_____
###Markdown
Gaussian Probability Density Function
###Code
def gaussian_distribution(
mean: float, variance: float, x_values: np.ndarray
) -> np.ndarray:
"""Compute the Gaussian distribution values.
:param mean: mean of the Gaussian distribution.
:param variance: variance of the Gaussian distribution.
:param x_values: array of the values to compute the Gaussian
distribution
"""
return (1 / (2 * np.pi * variance) ** 0.5) * np.exp(
-((x_values - mean) ** 2) / (2 * variance)
)
pdf_g = gaussian_distribution(0, 1, returns)
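# Quick sanity check (an addition, not part of the original script): the density
# should integrate to approximately 1 over this wide range.
print(np.trapz(pdf_g, returns))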
###Output
_____no_output_____
###Markdown
Algebraic Probability Density Function
###Code
def algebraic_distribution(
K_value: int, l_value: int, x_values: np.ndarray
) -> np.ndarray:
"""Compute the algebraic distribution values.
    :param K_value: number of companies analyzed.
    :param l_value: shape parameter.
    :param x_values: array of the values to compute the algebraic
    distribution
"""
m = 2 * l_value - K_value - 2
assert m > 0
return (
(1 / np.sqrt(2 * np.pi))
* (np.sqrt(2 / m))
* (gamma(l_value - (K_value - 1) / 2) / gamma(l_value - K_value / 2))
* (1 / (1 + (1 / m) * x_values * x_values) ** (l_value - (K_value - 1) / 2))
)
x_vals = np.arange(-10, 11)
print(algebraic_distribution(1, 2, x_vals))
pdf_a = algebraic_distribution(1, 2, returns)
###Output
_____no_output_____
###Markdown
Gaussian-Gaussian Probability Density Function
###Code
def pdf_gaussian_gaussian(returns: np.ndarray, N: float, Lambda: float) -> np.ndarray:
"""Computes the one dimensional Gaussian-Gaussian PDF.
:param returns: numpy array with the returns values.
:param N: strength of the fluctuations around the mean.
:param Lambda: variance of the returns.
:return: numpy array with the pdf values.
"""
first_part: np.float = 1 / (
2 ** ((N - 1) / 2) * gamma(N / 2) * np.sqrt((np.pi * Lambda) / N)
)
second_part: np.ndarray = np.sqrt((N * returns ** 2) / Lambda) ** ((N - 1) / 2)
third_part: np.ndarray = kv((1 - N) / 2, np.sqrt(N * returns ** 2) / Lambda)
return first_part * second_part * third_part
pdf_gg = pdf_gaussian_gaussian(returns, N, Lambda)
###Output
_____no_output_____
###Markdown
Gaussian-Algebraic Probability Density Function
###Code
def pdf_gaussian_algebraic(
returns: np.ndarray, K: float, L: float, N: float, Lambda: float
) -> np.ndarray:
"""Computes de one dimensional Gaussian-Algebraic PDF.
:param returns: numpy array with the returns values.
:param K: number of companies.
:param L: shape parameter.
:param N: strength of the fluctuations around the mean.
:param Lambda: variance of the returns.
:return: numpy array with the pdf values.
"""
M = 2 * L - K - N - 1
numerator: np.float = gamma(L - (K + N) / 2 + 1) * gamma(L - (K - 1) / 2)
denominator: np.float = (
gamma(L - (K + N - 1) / 2) * gamma(N / 2) * np.sqrt(2 * np.pi * Lambda * M / N)
)
frac: np.float = numerator / denominator
function: np.ndarray = hyperu(
L - (K + N) / 2 + 1, (1 - N) / 2 + 1, (N * returns ** 2) / (2 * M * Lambda)
)
return frac * function
pdf_ga = pdf_gaussian_algebraic(returns, K, L, N, Lambda)
###Output
_____no_output_____
###Markdown
Algebraic-Gaussian Probability Density Function
###Code
def pdf_algebraic_gaussian(
returns: np.ndarray, K: float, l: float, N: float, Lambda: float
) -> np.ndarray:
"""Computes de one dimensional Algebraic-Gaussian PDF.
:param returns: numpy array with the returns values.
:param K: number of companies.
:param l: shape parameter.
:param N: strength of the fluctuations around the mean.
:param Lambda: variance of the returns.
:return: numpy array with the pdf values.
"""
m = 2 * l - K - 2
numerator: np.float = gamma(l - (K - 1) / 2) * gamma(l - (K - N) / 2)
denominator: np.float = (
gamma(l - K / 2) * gamma(N / 2) * np.sqrt(2 * np.pi * Lambda * m / N)
)
frac: np.float = numerator / denominator
function: np.ndarray = hyperu(
l - (K - 1) / 2, (1 - N) / 2 + 1, (N * returns ** 2) / (2 * m * Lambda)
)
return frac * function
pdf_ag = pdf_algebraic_gaussian(returns, K, l, N, Lambda)
###Output
_____no_output_____
###Markdown
Algebraic-Algebraic Probability Density Function
###Code
def pdf_algebraic_algebraic(
returns: np.ndarray, K: float, L: float, l: float, N: float, Lambda: float
) -> np.ndarray:
"""Computes de one dimensional Algebraic-Algebraic PDF.
:param returns: numpy array with the returns values.
:param K: number of companies.
:param L: shape parameter.
:param l: shape parameter.
:param N: strength of the fluctuations around the mean.
:param Lambda: variance of the returns.
:return: numpy array with the pdf values.
"""
M = 2 * L - K - N - 1
m = 2 * l - K - 2
numerator: np.float = (
gamma(l - (K - 1) / 2)
* gamma(l - (K - N) / 2)
* gamma(L - (K - 1) / 2)
* gamma(L - (K + N) / 2 + 1)
)
denominator: np.float = (
np.sqrt(np.pi * Lambda * M * m / N)
* gamma(l - K / 2)
* gamma(L + l - (K - 1))
* gamma(L - (K + N - 1) / 2)
* gamma(N / 2)
)
frac: np.float = numerator / denominator
function: np.ndarray = hyp2f1(
l - (K - 1) / 2,
L - (K + N) / 2 + 1,
L + l - (K - 1),
1 - (N * returns ** 2) / (M * m * Lambda),
)
return frac * function
pdf_aa = pdf_algebraic_algebraic(returns, K, L, l, N, Lambda)
###Output
_____no_output_____
###Markdown
Plots
###Code
plt.figure(figsize=(16, 9))
plt.plot(returns, pdf_g, "-", label="Gaussian")
plt.plot(returns, pdf_a, "-", label="Algebraic")
plt.legend(fontsize=15)
plt.xlim(-2, 2)
plt.xlabel(r"$\tilde{r}$", fontsize=15)
plt.ylabel("pdf", fontsize=15)
plt.grid(True)
plt.tight_layout()
plt.figure(figsize=(16, 9))
plt.semilogy(returns, pdf_g, "-", label="Gaussian")
plt.semilogy(returns, pdf_a, "-", label="Algebraic")
plt.legend(fontsize=15)
plt.xlim(-8, 8)
plt.ylim(10 ** -7, 1)
plt.xlabel(r"$\tilde{r}$", fontsize=15)
plt.ylabel("pdf", fontsize=15)
plt.grid(True)
plt.tight_layout()
plt.figure(figsize=(16, 9))
plt.plot(returns, pdf_gg, "-", label="GG")
plt.plot(returns, pdf_ga, "-", label="GA")
plt.plot(returns, pdf_ag, "-", label="AG")
plt.plot(returns, pdf_aa, "-", label="AA")
plt.legend(fontsize=15)
plt.xlim(-2, 2)
plt.xlabel(r"$\tilde{r}$", fontsize=15)
plt.ylabel("pdf", fontsize=15)
plt.grid(True)
plt.tight_layout()
plt.figure(figsize=(16, 9))
plt.semilogy(returns, pdf_gg, "-", label="GG")
plt.semilogy(returns, pdf_ga, "-", label="GA")
plt.semilogy(returns, pdf_ag, "-", label="AG")
plt.semilogy(returns, pdf_aa, "-", label="AA")
plt.legend(fontsize=15)
plt.xlim(-8, 8)
plt.ylim(10 ** -7, 1)
plt.xlabel(r"$\tilde{r}$", fontsize=15)
plt.ylabel("pdf", fontsize=15)
plt.grid(True)
plt.tight_layout()
###Output
_____no_output_____ |
nbs/dl1/datablock-playground.ipynb | ###Markdown
Applying the data_block API
###Code
from fastai import *
from fastai.vision import *
path = Path('data/aircrafts')
path.ls()
# (path/'train').ls()
# (path/'test').ls()
###Output
_____no_output_____
###Markdown
**Transformations** We can get a lot more specific with this. For example, for satellite images you can use `planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.)`
###Code
tfms = get_transforms(do_flip=False)
data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders
.split_by_folder() #How to split in train/valid? -> use the folders
.label_from_folder() #How to label? -> depending on the folder of the filenames
.add_test_folder() #Optionally add a test set (here default name is test)
.transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64
.databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch
###Output
_____no_output_____
###Markdown
Alternatively, split by random instead of by folder using split_by_rand_pct()
###Code
data = (ImageList.from_folder(path) #Where to find the data? -> in planet 'train' folder
.split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid
.label_from_folder() #How to label?
.transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 128
.databunch()) #Finally -> use the defaults for conversion to databunch
###Output
_____no_output_____
###Markdown
We can also split up the source and the data itself and normalize using these `imagenet_stats`
###Code
np.random.seed(42)
src = (ImageList.from_folder(path)
.split_by_folder()
.label_from_folder())
data = (src.transform(tfms, size=64)
.databunch().normalize(imagenet_stats))
###Output
_____no_output_____
###Markdown
View data Look at the data from the created databunch
###Code
data.show_batch(3, figsize=(6,6), hide_axis=False)
data.c
###Output
_____no_output_____
###Markdown
Classes are inferred from folder names
###Code
data.classes, data.c, len(data.train_ds), len(data.valid_ds)
# data.valid_ds.classes
# data.train_ds.classes
data.classes, data.c, len(data.train_ds), len(data.valid_ds)
###Output
_____no_output_____
###Markdown
Train model
###Code
arch = models.resnet50
learn = cnn_learner(data, arch, metrics=accuracy)
learn.lr_find()
lr_find(learn)
learn.recorder.plot()
lr = 0.01
learn.fit_one_cycle(5, slice(lr))
learn.save('stage-1-rn50')
learn.unfreeze()
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(5, slice(1e-3, lr/5))
learn.save('stage-2-rn50')
learn.unfreeze()
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(5, slice(1e-4, lr/5))
learn.show_results(rows=3, figsize=(9,9))
###Output
_____no_output_____
###Markdown
Interpretation
###Code
learn.load('stage-2-rn50');
interp = ClassificationInterpretation.from_learner(learn)
losses,idxs = interp.top_losses()
len(data.valid_ds)==len(losses)==len(idxs)
interp.plot_top_losses(9, figsize=(15,11))
interp.most_confused()
###Output
_____no_output_____
###Markdown
Try with `learn.predict()` using an actual image. Steps taken from https://github.com/npatta01/web-deep-learning-classifier/blob/master/notebooks/1_train.ipynb
###Code
plane_url = "https://upload.wikimedia.org/wikipedia/commons/5/5e/ANA_777-300_Taking_off_from_JFK.jpg"
url = plane_url
import requests
from io import BytesIO

def fetch_image(url):
    response = requests.get(url)
    img = open_image(BytesIO(response.content))
    return img
img = fetch_image(plane_url)
pred_class, pred_idx, outputs = learn.predict(img)
pred_class, pred_idx, outputs
import pprint
def predict(url):
img = fetch_image(url)
pred_class,pred_idx,outputs = learn.predict(img)
res = zip (learn.data.classes, outputs.tolist())
predictions = sorted(res, key=lambda x:x[1], reverse=True)
top_predictions = predictions[0:5]
pprint.pprint(top_predictions)
return img.resize(500)
predict(plane_url)
###Output
[('airplane', 0.9839030504226685),
('space_shuttle', 0.012163753621280193),
('rocket', 0.003933195490390062)]
|
src/TrainTwoYearsData_Open-High-Low-Close-Volume_GADF_GridSearch_15PND.ipynb | ###Markdown
32, 32, 128, pooling=false, dropout=true
###Code
cnn = create_model(activation='relu', dropout=False, pooling=True)
cnn.summary()
kf = StratifiedKFold(n_splits=3)
history = []
confusions= []
classifReports= []
fold = 0
for train, test in kf.split(X_dataArr, Y_dataBinary):
print('Running fold [%d]'.ljust(100,'*') %fold)
fold +=1
cnn = create_model(activation='relu', dropout=False, pooling=True)
x_train, x_test = X_dataArr[train], X_dataArr[test]
y_train, y_test = Y_dataBinary[train], Y_dataBinary[test]
hist = cnn.fit(x=x_train, y=y_train, validation_split=0.2, epochs=40, batch_size=100, verbose=0)
history.append(hist)
y_pred = cnn.predict(x_test)
y_pred_R = np.round(y_pred)
conf = confusion_matrix(y_test, y_pred_R)
confusions.append(conf)
clfr = classification_report(y_test, y_pred_R, output_dict=True)
print(clfr)
classifReports.append(clfr)
j=2
plt.plot(history[j].history['acc'])
plt.plot(history[j].history['val_acc'])
plt.legend(['acc','val_acc'])
plt.plot(history[j].history['loss'])
plt.plot(history[j].history['val_loss'])
plt.legend(['loss','val_loss'])
#re running for epoch 22
kf = StratifiedKFold(n_splits=3)
history = []
confusions= []
classifReports= []
fold = 0
for train, test in kf.split(X_dataArr, Y_dataBinary):
print('Running fold [%d]'.ljust(100,'*') %fold)
fold +=1
cnn = create_model(activation='relu', dropout=False, pooling=True)
x_train, x_test = X_dataArr[train], X_dataArr[test]
y_train, y_test = Y_dataBinary[train], Y_dataBinary[test]
hist = cnn.fit(x=x_train, y=y_train, validation_split=0.2, epochs=22, batch_size=100, verbose=0)
history.append(hist)
y_pred = cnn.predict(x_test)
y_pred_R = np.round(y_pred)
conf = confusion_matrix(y_test, y_pred_R)
confusions.append(conf)
clfr = classification_report(y_test, y_pred_R, output_dict=True)
print(clfr)
classifReports.append(clfr)
f1=[ rep['1']['f1-score'] for rep in classifReports ]
recal=[ rep['1']['recall'] for rep in classifReports ]
prec=[ rep['1']['precision'] for rep in classifReports ]
print(mean(f1))
print(mean(recal))
print(mean(prec))
finConf=np.zeros((2,2), dtype=int)
for elem in confusions:
for i in range(2):
for j in range(2):
finConf[i][j] += elem[i][j]
labels = ['True Neg','False Pos','False Neg','True Pos']
labels = np.asarray(labels).reshape(2,2)
sns.heatmap(finConf/np.sum(finConf), annot=True, fmt='.2%', cmap='Blues')
macroPrec=[]
macroRecall=[]
macrof1=[]
for elem in classifReports:
macroPrec.append(elem['macro avg']['precision'])
macroRecall.append(elem['macro avg']['recall'])
macrof1.append(elem['macro avg']['f1-score'])
print(np.mean(macroPrec))
print(np.mean(macroRecall))
print(np.mean(macrof1))
weighPrec=[]
weighRecall=[]
weighf1=[]
for elem in classifReports:
weighPrec.append(elem['weighted avg']['precision'])
weighRecall.append(elem['weighted avg']['recall'])
weighf1.append(elem['weighted avg']['f1-score'])
print(np.mean(weighPrec))
print(np.mean(weighRecall))
print(np.mean(weighf1))
###Output
_____no_output_____
###Markdown
Grid search 64,64,128
###Code
def create_model_64(dropout=True, pooling=True, dropoutp1=0.25, dropoutp2=0.5):
cnn=Sequential()
cnn.add(Conv2D(filters=64, kernel_size=(2,2), padding='same', activation='relu', input_shape=(INPUT_MATRIX_WIDTH, INPUT_MATRIX_WIDTH, 5)))
if pooling==True:
cnn.add(MaxPooling2D(pool_size=(2,2)))
cnn.add(Conv2D(filters=64, kernel_size=(2,2), padding='same', activation='relu'))
if pooling==True:
cnn.add(MaxPooling2D(pool_size=(2,2)))
if dropout==True:
cnn.add(Dropout(dropoutp1))
cnn.add(Flatten())
cnn.add(Dense(128, activation='relu'))
if dropout==True:
cnn.add(Dropout(dropoutp2))
cnn.add(Dense(1, activation='sigmoid'))
cnn.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
return cnn
model = KerasClassifier(build_fn=create_model_64, epochs=25, verbose=0)
dropout=[True, False]
pooling=[True, False]
batch_size=[50, 100]
dropoutp1 =[0.25,0.5]
dropoutp2 =[0.25,0.5]
param_grid=dict(dropout=dropout, pooling=pooling, batch_size=batch_size, dropoutp1=dropoutp1, dropoutp2=dropoutp2)
grid=GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1, cv=3)
grid_result = grid.fit(X_dataArr, Y_dataBinary)
print("best %f using %s" %(grid_result.best_score_, grid_result.best_params_))
kf = StratifiedKFold(n_splits=3)
history = []
confusions= []
classifReports= []
fold = 0
for train, test in kf.split(X_dataArr, Y_dataBinary):
print('Running fold [%d]'.ljust(100,'*') %fold)
fold +=1
cnn = create_model_64(dropout=True, pooling=True, dropoutp1=0.25, dropoutp2=0.5)
x_train, x_test = X_dataArr[train], X_dataArr[test]
y_train, y_test = Y_dataBinary[train], Y_dataBinary[test]
hist = cnn.fit(x=x_train, y=y_train, validation_split=0.2, epochs=40, batch_size=50, verbose=0)
history.append(hist)
y_pred = cnn.predict(x_test)
y_pred_R = np.round(y_pred)
conf = confusion_matrix(y_test, y_pred_R)
confusions.append(conf)
clfr = classification_report(y_test, y_pred_R, output_dict=True)
print(clfr)
classifReports.append(clfr)
cnn = create_model_64(dropout=True, pooling=True, dropoutp1=0.25, dropoutp2=0.5)
cnn.summary()
j=2
plt.plot(history[j].history['acc'])
plt.plot(history[j].history['val_acc'])
plt.legend(['acc','val_acc'])
f1=[ rep['1']['f1-score'] for rep in classifReports ]
recal=[ rep['1']['recall'] for rep in classifReports ]
prec=[ rep['1']['precision'] for rep in classifReports ]
print(mean(f1))
print(mean(recal))
print(mean(prec))
plt.plot(history[j].history['loss'])
plt.plot(history[j].history['val_loss'])
plt.legend(['loss','val_loss'])
#re run for epoch 14
kf = StratifiedKFold(n_splits=3)
history = []
confusions= []
classifReports= []
fold = 0
for train, test in kf.split(X_dataArr, Y_dataBinary):
print('Running fold [%d]'.ljust(100,'*') %fold)
fold +=1
cnn = create_model_64(dropout=True, pooling=True, dropoutp1=0.25, dropoutp2=0.5)
x_train, x_test = X_dataArr[train], X_dataArr[test]
y_train, y_test = Y_dataBinary[train], Y_dataBinary[test]
hist = cnn.fit(x=x_train, y=y_train, validation_split=0.2, epochs=14, batch_size=50, verbose=0)
history.append(hist)
y_pred = cnn.predict(x_test)
y_pred_R = np.round(y_pred)
conf = confusion_matrix(y_test, y_pred_R)
confusions.append(conf)
clfr = classification_report(y_test, y_pred_R, output_dict=True)
print(clfr)
classifReports.append(clfr)
finConf=np.zeros((2,2), dtype=int)
for elem in confusions:
for i in range(2):
for j in range(2):
finConf[i][j] += elem[i][j]
labels = ['True Neg','False Pos','False Neg','True Pos']
labels = np.asarray(labels).reshape(2,2)
sns.heatmap(finConf/np.sum(finConf), annot=True, fmt='.2%', cmap='Blues')
macroPrec=[]
macroRecall=[]
macrof1=[]
for elem in classifReports:
macroPrec.append(elem['macro avg']['precision'])
macroRecall.append(elem['macro avg']['recall'])
macrof1.append(elem['macro avg']['f1-score'])
print(np.mean(macroPrec))
print(np.mean(macroRecall))
print(np.mean(macrof1))
weighPrec=[]
weighRecall=[]
weighf1=[]
for elem in classifReports:
weighPrec.append(elem['weighted avg']['precision'])
weighRecall.append(elem['weighted avg']['recall'])
weighf1.append(elem['weighted avg']['f1-score'])
print(np.mean(weighPrec))
print(np.mean(weighRecall))
print(np.mean(weighf1))
###Output
_____no_output_____
###Markdown
64, 64, 128, pooling=false, dropout=true
###Code
kf = StratifiedKFold(n_splits=3)
history = []
confusions= []
classifReports= []
fold = 0
for train, test in kf.split(X_dataArr, Y_dataBinary):
print('Running fold [%d]'.ljust(100,'*') %fold)
fold +=1
cnn=Sequential()
cnn.add(Conv2D(filters=64, kernel_size=(2,2), padding='same', activation='relu', input_shape=(INPUT_MATRIX_WIDTH, INPUT_MATRIX_WIDTH, 5)))
cnn.add(Conv2D(filters=64, kernel_size=(2,2), padding='same', activation='relu'))
cnn.add(Dropout(0.25))
cnn.add(Flatten())
cnn.add(Dense(128, activation='relu'))
cnn.add(Dropout(0.5))
cnn.add(Dense(1, activation='sigmoid'))
cnn.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
x_train, x_test = X_dataArr[train], X_dataArr[test]
y_train, y_test = Y_dataBinary[train], Y_dataBinary[test]
hist = cnn.fit(x=x_train, y=y_train, validation_split=0.2, epochs=40, batch_size=50, verbose=0)
history.append(hist)
y_pred = cnn.predict(x_test)
y_pred_R = np.round(y_pred)
conf = confusion_matrix(y_test, y_pred_R)
confusions.append(conf)
clfr = classification_report(y_test, y_pred_R, output_dict=True)
print(clfr)
classifReports.append(clfr)
j=2
plt.plot(history[j].history['acc'])
plt.plot(history[j].history['val_acc'])
plt.plot(history[j].history['loss'])
plt.plot(history[j].history['val_loss'])
plt.legend(['acc','val_acc','loss','val_loss'])
f1=[ rep['1']['f1-score'] for rep in classifReports ]
recal=[ rep['1']['recall'] for rep in classifReports ]
prec=[ rep['1']['precision'] for rep in classifReports ]
print(mean(f1))
print(mean(recal))
print(mean(prec))
###Output
_____no_output_____
###Markdown
64, 64, 256, pooling=false, dropout=true
###Code
kf = StratifiedKFold(n_splits=3)
history = []
confusions= []
classifReports= []
fold = 0
for train, test in kf.split(X_dataArr, Y_dataBinary):
print('Running fold [%d]'.ljust(100,'*') %fold)
fold +=1
cnn=Sequential()
cnn.add(Conv2D(filters=64, kernel_size=(2,2), padding='same', activation='relu', input_shape=(INPUT_MATRIX_WIDTH, INPUT_MATRIX_WIDTH, 5)))
cnn.add(Conv2D(filters=64, kernel_size=(2,2), padding='same', activation='relu'))
cnn.add(Dropout(0.25))
cnn.add(Flatten())
cnn.add(Dense(256, activation='relu'))
cnn.add(Dropout(0.5))
cnn.add(Dense(1, activation='sigmoid'))
cnn.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
x_train, x_test = X_dataArr[train], X_dataArr[test]
y_train, y_test = Y_dataBinary[train], Y_dataBinary[test]
hist = cnn.fit(x=x_train, y=y_train, validation_split=0.2, epochs=20, batch_size=50, verbose=0)
history.append(hist)
y_pred = cnn.predict(x_test)
y_pred_R = np.round(y_pred)
conf = confusion_matrix(y_test, y_pred_R)
confusions.append(conf)
clfr = classification_report(y_test, y_pred_R, output_dict=True)
print(clfr)
classifReports.append(clfr)
j=2
plt.plot(history[j].history['acc'])
plt.plot(history[j].history['val_acc'])
plt.plot(history[j].history['loss'])
plt.plot(history[j].history['val_loss'])
plt.legend(['acc','val_acc','loss','val_loss'])
f1=[ rep['1']['f1-score'] for rep in classifReports ]
recal=[ rep['1']['recall'] for rep in classifReports ]
prec=[ rep['1']['precision'] for rep in classifReports ]
print(mean(f1))
print(mean(recal))
print(mean(prec))
###Output
_____no_output_____ |
experiments/create_embeddings.ipynb | ###Markdown
**Read Data Stored Locally**
###Code
# All have order preserved across them
titles = pd.read_pickle('data/titles.pkl')
ids = pd.read_pickle('data/ids.pkl')
texts = pd.read_pickle('data/texts.pkl')
# create df
df = pd.DataFrame()
df['ids'] = ids
df['titles'] = titles
df['texts'] = texts
df.isna().sum()
# drop all entries which have a null value in any of id, titles, texts
df = df.dropna().copy()
df = df.reset_index(drop=True).copy()
df.head()
df['texts'].value_counts()
df['titles'].value_counts()
#remove entries that have deleted or removed text
df = df[(df['texts'] != '[removed]') & (df['texts'] != '[deleted]')].copy()
df.shape
# df.to_pickle('data/clean_data.pkl')
NUM_DOCS = len(df)
# load & cache tensorflow model
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder-large/5")
embed(['testing'])
print('model cached')
from tensorflow.python.client import device_lib
def get_available_gpus():
local_device_protos = device_lib.list_local_devices()
return [x.name for x in local_device_protos if x.device_type == 'GPU']
get_available_gpus()
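# Note: TF2 also exposes this check through the public API,
# which avoids reaching into tensorflow.python internals.
import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))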
###Output
_____no_output_____
###Markdown
**Converting Each Post From Text To Embedding**- Start off with using the title of the post only
###Code
# calculates end index for a particular iteration for looping through documents in batches
def calcEndIdx(start_idx, batch_size, ndocs):
end_idx = start_idx + batch_size
end_idx = ndocs if end_idx > ndocs - 1 else end_idx
return end_idx
# convert text to embeddings in batches (model can handle multiple texts at once)
# batch size depends on compute power
text_data = df['texts'].values # text data - can be texts col or title col
embeddings = [] # empty array to store embeddings as we iterate through docs
BATCH_SIZE = 16
for start_idx in tqdm.tqdm_notebook(range(232800, NUM_DOCS, BATCH_SIZE)):
end_idx = calcEndIdx(start_idx, BATCH_SIZE, NUM_DOCS)
curr_embeddings = embed(text_data[start_idx:end_idx]).numpy()
embeddings.append(curr_embeddings)
#embeddings = np.concatenate(embeddings) # convert batched arrays to shape (N, Vector Size)
len(embeddings)
np.savez('data/texts_embeddings_until_232800_end_preconcat.npz', a=embeddings)
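# Sketch of how the batched arrays saved above could later be reloaded and stacked
# into a single 2-D matrix of embeddings; allow_pickle=True is only needed if the
# batches were stored as a ragged object array.
loaded = np.load('data/texts_embeddings_until_232800_end_preconcat.npz', allow_pickle=True)['a']
stacked = np.concatenate(loaded)
print(stacked.shape)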
###Output
_____no_output_____ |
MLaaS.ipynb | ###Markdown
How to save and load machine learning models using Pickle Machine learning models can take days to train. Pickle save and Pickle load allow you to save them to share or re-run them later, without the need for re-training.
###Code
'''
Loading required librairies
'''
import pandas as pd
import pickle
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
###Output
_____no_output_____
###Markdown
Load the data
###Code
X, y = load_diabetes(return_X_y=True, as_frame=True)
X.head()
###Output
_____no_output_____
###Markdown
Train the model
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = XGBClassifier(random_state=0)
model.fit(X_train, y_train)
XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
colsample_bynode=1, colsample_bytree=1, gamma=0, gpu_id=-1,
importance_type='gain', interaction_constraints='',
learning_rate=0.300000012, max_delta_step=0, max_depth=6,
min_child_weight=1, monotone_constraints='()',
n_estimators=100, n_jobs=0, num_parallel_tree=1,
objective='multi:softprob', random_state=0, reg_alpha=0,
reg_lambda=1, scale_pos_weight=None, subsample=1,
tree_method='exact', validate_parameters=1, verbosity=None)
###Output
_____no_output_____
###Markdown
Save the model
###Code
pickle.dump(model, open('model.pkl', 'wb'))
###Output
_____no_output_____
###Markdown
Load the model
###Code
pickled_model = pickle.load(open('model.pkl', 'rb'))
pickled_model.predict(X_test)
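# Optional sanity check: the reloaded model should reproduce the predictions
# of the original in-memory model exactly.
import numpy as np
print(np.array_equal(pickled_model.predict(X_test), model.predict(X_test)))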
###Output
_____no_output_____
###Markdown
By Reda Mastouri @2022``` https://www.algoexpert.io/team```
###Code
###Output
_____no_output_____ |
tutorials/template.ipynb | ###Markdown
[short title reflecting purpose of tutorial]This Jupyter Notebook tutorial uses Python to show a user ... There are many comments throughout this template, be sure to view them! ---<!-- Replace these contents with something meaningful to the tutorialIf your titles are single words, you can just use the title (in lowercase) as the linkMore complex titles will need to be anchored by adding an HTML anchor tag with the title: The anchor is then used as the link.NOTE: once you use an anchor you will need to anchor all titles to use as a link--> Contents- [Overview](overview)- [Audience](audience)- [What to expect](expect)- [Prerequisites](prerequisites)- [Cautions](cautions)- [Step 1 - Python Imports and Setup](step1)- [Step 2 - the next step](step2)- [Completed](completed)- [Feedback](feedback)- [References](references)- [Acknowledgements](acknowledgements)--- OverviewThis tutorial covers ... .Completing this tutorial will ... . AudienceThis tutorial is geared towards people that ... .It will also be helpful, but not required, to have ... .This notebook might also be of interest if you have a background in ... . Time to complete You can expect to complete this tutorial in ... What to expect We will be using a ... .Each step of this tutorial contains text describing what needs to be done and then presents code that performs those actions . Prerequisites To successfully complete this tutorial you will need to have ... .Additionally, the ... . Cautions There are two main files in the Clowder dataset to be processed that, if they are in the dataset, will be overwritten.These files are the *experiment.yaml* file and the *extractors-opendronemap.txt* file.If you have placed these files in the dataset this tutorial will process, you should download them to preserve them. --- Step 1 - Python Imports and Setup <!-- For each step:- start with why we're performing the step- add in some reference information if that's helpful- list things that are needed for that section (as related to the code step)- add subsection on what somethings needed are- add subsection on why something is done/needed- how to read the code that follows: for example, what to replace, logic flows, etc.-->The first step is to let Python know which libraries you will be needing for your commands.We are also going to define the Clowder URL so the calls we make know which instance to access.You will need to replace the endpoint with the URL of your Clowder instance.
###Code
# Importing the libraries we will need
import pipelineutils
clowder_url="https://my.clowder.path" # Replace this value with your Clowder URL
###Output
_____no_output_____
###Markdown
--- Step 2 - the next step <!-- The next step: - start with why we're performing the step- add in some reference information if that's helpful- list things that are needed for that section (as related to the code step)- add subsection on what somethings needed are- add subsection on why something is done/needed- how to read the code that follows: for example, what to replace, logic flows, etc.-->
###Code
# Provide experiment information for the extractor
experiment = pipelineutils.prepare_experiment("<study name>", # Replace <study name> with your study name
"<season name>", # Replace <season name> with your season name
"<timestamp>" # Replace <timestamp> with your timestamp
)
###Output
_____no_output_____ |
_build/jupyter_execute/notebooks/03/machine-scheduling.ipynb | ###Markdown
Machine Scheduling"Which job should be done next?" is a questiona one face in modern life, whetherfor a busy student working on course assignments, a courier delivering packages, a server waiting on tables in a busy restaurant, a computer processing threads, or a machine on a complex assembly line. There are empirical answers to this question, among them "first in, first out", "last in, first out", or "shortest job first". What we consider in this notebook is the modeling finding solutions to this class of problem using optimiziation techniques. This notebook demonstrates the formulation of a model for scheduling a single machine scheduling using disjuctive programming in Pyomo. The problem is to schedule a set of jobs on a single machine given the release time, duration, and due time for each job. Date for the example problem is from Christelle Gueret, Christian Prins, Marc Sevaux, "Applications of Optimization with Xpress-MP," Chapter 5, Dash Optimization, 2000.
###Code
# Import Pyomo and solvers for Google Colab
import sys
if "google.colab" in sys.modules:
!wget -N -q https://raw.githubusercontent.com/jckantor/MO-book/main/tools/install_on_colab.py
%run install_on_colab.py
###Output
_____no_output_____
###Markdown
Learning Goals* Optimal scheduling for a single machine* Disjunctive programming in Pyomo ExampleThe problem is to schedule a sequence of jobs for a single machine. The problem data is given as a nested Python dictionary of jobs. Each job is labeled by a key. For each key there is an associated data dictionary giving the time at which the job is released for machine processing, the expected duration of the job, and the due date. The optimization objective is to find a sequence of the jobs on the machine that meets the due dates. If no such schedule exists, then the objective is to find a schedule minimizing some measure of "badness".
###Code
import pandas as pd
jobs = pd.DataFrame({
'A': {'release': 2, 'duration': 5, 'due': 10},
'B': {'release': 5, 'duration': 6, 'due': 21},
'C': {'release': 4, 'duration': 8, 'due': 15},
'D': {'release': 0, 'duration': 4, 'due': 10},
'E': {'release': 0, 'duration': 2, 'due': 5},
'F': {'release': 8, 'duration': 3, 'due': 15},
'G': {'release': 9, 'duration': 2, 'due': 22},
}).T
jobs
###Output
_____no_output_____
###Markdown
Gantt chartA traditional means of visualizing scheduling data is the Gantt chart. The next cell presents a function `gantt` that plots a Gantt chart given jobs and schedule information. If no schedule information is given, the jobs are performed in the order listed in the jobs dataframe.
###Code
import matplotlib.pyplot as plt
def gantt(jobs, schedule=None):
bw = 0.25 # bar_width
fig, ax = plt.subplots(1, 1, figsize=(12, 0.7*len(jobs.index)))
# plot release/due windows
for k, job in enumerate(jobs.index):
x = jobs.loc[job, "release"]
y = jobs.loc[job, "due"]
ax.fill_between([x, y], [k-bw, k-bw], [k+bw, k+bw], color="cyan", alpha=0.6)
# if no schedule, perform jobs in order given in jobs
if schedule is None:
schedule = pd.DataFrame(index=jobs.index)
t = 0
for job in jobs.index:
t = max(t, jobs.loc[job]["release"])
schedule.loc[job, "start"] = t
t += jobs.loc[job, "duration"]
schedule.loc[job, "finish"] = t
schedule.loc[job, "past"] = max(0, t - jobs.loc[job, "due"])
# plot job schedule
for k, job in enumerate(schedule.index):
x = schedule.loc[job, "start"]
y = schedule.loc[job, "finish"]
ax.fill_between([x, y], [k-bw, k-bw], [k+bw, k+bw], color="red", alpha=0.5)
ax.text((schedule.loc[job, "start"] + schedule.loc[job, "finish"])/2.0, k,
"Job " + job, color="white", weight="bold",
ha="center", va="center")
if schedule.loc[job, "past"] > 0:
ax.text(schedule.loc[job, "finish"] + 1, k,
f"{schedule.loc[job, 'past']} past due", va="center")
total_past_due = schedule["past"].sum()
ax.set_ylim(-0.5, len(jobs.index)-0.5)
ax.set_title(f'Job Schedule total past due = {total_past_due}')
ax.set_xlabel('Time')
ax.set_ylabel('Jobs')
ax.set_yticks(range(len(jobs.index)), jobs.index)
ax.grid(True)
gantt(jobs)
###Output
_____no_output_____
###Markdown
ModelingThe modeling starts by defining the problem data.| Symbol | Description ||:---- | :--- || $\text{release}_j$ | when job $j$ is available|| $\text{duration}_j$ | how long job $j$ takes || $\text{due}_j$ | when job $j$ is due |The essential decision variable is the time at which the job starts processing.| Symbol | Description ||:---- | :--- || $\text{start}_j$ | when job $j$ starts || $\text{finish}_j$ | when job $j$ finishes || $\text{past}_j$ | how long job $j$ is past due |Depending on application and circumstances, one could entertain many different choices for objective function. Minimizing the number of past due jobs, or minimizing the maximum past due, or the total amount of time past due would all be appropriate objectives. The following Pyomo model minimizes the total time past due, that is$$\min \sum_j \text{past}_j$$The constraints describe the relationships among the decision variables. For example, a job cannot start until it is released for processing$$\begin{align*}\text{start}_{j} & \geq \text{release}_{j}\\\end{align*}$$Once started the processing continues until the job is finished. The finish time is compared to the due time, and the result is stored in the $\text{past}_j$ decision variable. These decision variables are needed to handle cases where it might not be possible to complete all jobs by the time they are due.$$\begin{align*}\text{finish}_j & = \text{start}_j + \text{duration}_j \\\text{past}_{j} & \geq \text{finish}_j - \text{due}_{j} \\\text{past}_{j} & \geq 0\end{align*}$$The final set of constraints requires that no pair of jobs be operating on the same machine at the same time. For this purpose, we consider each unique pair ($i$, $j$) where the constraint $i < j$ is imposed to avoid considering the same pair twice. Then for any unique pair $i$ and $j$, either $i$ finishes before $j$ starts, or $j$ finishes before $i$ starts. This is expressed as the family of disjunctions $$\begin{align*}\begin{bmatrix}\text{finish}_i \leq \text{start}_j\end{bmatrix}& \veebar\begin{bmatrix}\text{finish}_j \leq \text{start}_i\end{bmatrix}& \forall i < j\end{align*}$$This model and constraints can be directly translated to Pyomo.
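As an aside, here is a sketch of what the `gdp.bigm` transformation applied in the code below does: each two-term disjunction is replaced by a pair of big-$M$ constraints controlled by a binary variable $y_{ij}$,$$\begin{align*}\text{finish}_i & \leq \text{start}_j + M(1 - y_{ij}) \\\text{finish}_j & \leq \text{start}_i + M y_{ij}\end{align*}$$where $M$ is a constant large enough (derived here from the variable bounds) that whichever constraint is not selected by $y_{ij}$ becomes non-binding.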
###Code
import pyomo.environ as pyo
import pyomo.gdp as gdp
def build_model(jobs):
m = pyo.ConcreteModel()
m.JOBS = pyo.Set(initialize=jobs.index)
m.PAIRS = pyo.Set(initialize=m.JOBS * m.JOBS, filter = lambda m, i, j: i < j)
m.start = pyo.Var(m.JOBS, domain=pyo.NonNegativeReals, bounds=(0, 300))
m.finish = pyo.Var(m.JOBS, domain=pyo.NonNegativeReals, bounds=(0, 300))
m.past = pyo.Var(m.JOBS, domain=pyo.NonNegativeReals, bounds=(0, 300))
@m.Constraint(m.JOBS)
def job_release(m, job):
return m.start[job] >= jobs.loc[job, "release"]
@m.Constraint(m.JOBS)
def job_duration(m, job):
return m.finish[job] == m.start[job] + jobs.loc[job, "duration"]
@m.Constraint(m.JOBS)
def past_due_constraint(m, job):
return m.past[job] >= m.finish[job] - jobs.loc[job, "due"]
@m.Disjunction(m.PAIRS, xor=True)
def machine_deconflict(m, job_a, job_b):
return [m.finish[job_a] <= m.start[job_b],
m.finish[job_b] <= m.start[job_a]]
@m.Objective(sense=pyo.minimize)
def minimize_past(m):
return sum(m.past[job] for job in m.JOBS)
pyo.TransformationFactory("gdp.bigm").apply_to(m)
return m
def solve_model(m, solver_name="cbc"):
solver = pyo.SolverFactory(solver_name)
solver.solve(m)
schedule = pd.DataFrame({
"start" : {job: m.start[job]() for job in m.JOBS},
"finish" : {job: m.finish[job]() for job in m.JOBS},
"past" : {job: m.past[job]() for job in m.JOBS},
})
return schedule
model = build_model(jobs)
schedule = solve_model(model)
display(jobs)
display(schedule)
gantt(jobs, schedule)
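# The modeling notes above mention minimizing the *maximum* past-due time as
# another reasonable objective. A minimal sketch reusing the model built above;
# build_minimax_model and max_past are illustrative names, not from the original.
def build_minimax_model(jobs):
    m = build_model(jobs)
    m.max_past = pyo.Var(domain=pyo.NonNegativeReals)
    @m.Constraint(m.JOBS)
    def max_past_bound(m, job):
        return m.max_past >= m.past[job]
    m.minimize_past.deactivate()
    @m.Objective(sense=pyo.minimize)
    def minimize_max_past(m):
        return m.max_past
    return m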
###Output
_____no_output_____ |
notebooks/NB1 - Data Loading & Data Cleaning.ipynb | ###Markdown
Table of Contents1 Introduction1.1 Notebook Introduction2 Setup2.1 Importing the libraries2.2 Read in the dataset3 Dataset Quick Exploration3.1 Quick Summary3.2 Column Definitions3.3 Missing Values3.3.1 Missing Values Matrix3.3.2 Percentage Missing3.3.3 Handling Missing Data : Description3.3.4 Handling Missing Data : CustomerID3.4 Data Quality / Validation Tests3.4.1 Validation Test 1: Unique Invoices3.4.2 Validation Test 2 : Non-Negative Quantity and Unit Price3.4.3 Validation Test 3 : Unique Stock Codes3.4.4 Validation Test 4 : Customer Country4 Data Preparation4.1 Stock Codes Exploration4.1.1 Non-Digit Codes4.1.2 Discount Codes4.2 Dealing with Canceled Orders4.2.1 Summary of Canceled Orders4.2.2 Identify Order Pairs4.2.3 Remove Cancelled Transactions4.3 Adding the Price Column4.4 Rename and Rearranging5 Data Aggregation and Output5.1 Transaction Dataset5.2 Aggregated Data5.2.1 Customer Data5.2.2 Product Data5.2.3 Invoice Data5.2.4 Main Data6 Conclusion E-Commerce **Notebook 1 - Data Loading & Date Cleaning**This project will explore an E-commerce dataset of transactions from a UK registered online store. The dataset covers the period of 01/12/2010 - 09/12/2011. To access the dataset and read more about please refer to its [UCI repo](http://archive.ics.uci.edu/ml/datasets/Online+Retail). Introduction This project will go through the following stages using this data. There is a separate notebook for each process.- NB1: Data loading & Data Cleaning- NB2: Exploratory Data Analysis (EDA)- NB3: Customer Segmentation- NB4: Attrition Prevention Strategies - NB5: Product Recommendation (WIP)The project is using the cookiecutter [data science template](https://github.com/drivendata/cookiecutter-data-science). More about this can be found [in this article](https://medium.com/@rrfd/cookiecutter-data-science-organize-your-projects-atom-and-jupyter-2be7862f487e). Notebook IntroductionThis notebook goes through the initial data loading, high level exploration of the structure of the data and data cleaning. This is one of the most important stages in a data science project as failing to do efficient cleaning and run the correct validations tests will result in inaccuracies in your analysis. The objectives of these notebook are:1. Successfully load the data and get key characteristics (dimensions, dtypes etc.).2. Understand the degree of missing values in the dataset.3. Run any data quality / validation tests to ensure our data behaves the way we expect it.4. Clean and manipulate some of the columns to create new views of the data that will be useful for EDA.By the end of this notebook we should have a set of separate datasets that will be handy in the next notebooks. SetupThis section will setup our notebook by importing the right libraries, setting paths and reading the data. Importing the libraries The following libraries and paths that will be used through out the project.
###Code
# This allows us to syncronise our IDE with
# the notebook for efficient function storage.
%load_ext autoreload
%autoreload 2
# Generic libraries
import os
import sys
from pathlib import Path
import warnings
from tqdm import tqdm
# Data manipulation
import numpy as np
import pandas as pd
import missingno as mn
# Visualisation
import matplotlib.pyplot as plt
%matplotlib inline
# Import our helpers module
import src
from src.data import utils
from src.features import build_features
# Ensure that we are operating from our base dir
os.chdir(Path(src.__file__).resolve().parents[1])
# Define a function that saves the data
# to the corresponding folder
data_folder = "data"
raw_path = os.path.join(data_folder, "raw")
int_path = os.path.join(data_folder, "interim")
processed_path = os.path.join(data_folder, "processed")
###Output
_____no_output_____
###Markdown
You should be able to see your project directory if you run the below command (e.g. `C:\Users\username\Desktop\ecom_project`). Failing to do so, will result in file loading problems.
###Code
# print(os.getcwd())
###Output
_____no_output_____
###Markdown
Read in the datasetThe dataset has been downloaded from [this website](http://archive.ics.uci.edu/ml/datasets/Online+Retail). The data is in the `data/raw` folder.
###Code
# Read the data
fn = "Online Retail.xlsx"
df_raw = pd.read_excel(os.path.join(raw_path, fn))
###Output
_____no_output_____
###Markdown
Dataset Quick ExplorationThis section will look at some general characteristics of our data. Quick SummaryWe define a function that gives us a summary of the dataframe characteristics. We store this function in the `utils` module for later usage. ```pythondef quick_summary(df, title, row_num=5, show_summary=True): """ Returns a quick summary of a given dataset Parameters: ----------- df : dataframe The dataframe to return the summary title : string The title of the dataframe row_num : int (default = 5) The number of rows to return of the dataframe show_summary : bool (default = True) Print the summary of dtypes and null values as well as a preview Return: ------- None """ Print the title print("\n") print_bold(f"{title.upper()}") print("-" * (len(title) + 1)) print("\n") print(f"Number of rows: {df.shape[0]} \t Number of Columns: {df.shape[1]}") Print the dataframe display(df.head(row_num)) if show_summary: Get the overall summary print("\n\n") print_bold("OVERALL SUMMARY") print("-" * 15) print(df.info()) return```
###Code
utils.quick_summary(df_raw,
title="E-Commerce Raw Dataset",
row_num=10)
###Output
[1mE-COMMERCE RAW DATASET[0m
-----------------------
Number of rows: 541909 Number of Columns: 8
###Markdown
**Comments**From the above we can make the following comments and plan our actions:- We have around 540k individual transactions that can be grouped by Invoice and Customer ID.- The data preview suggests that we are looking at the "Transactions" table of a standard relational database evident from the user of various IDs which are probably used to establish the relationships. - From the Overall Summary we can see that the CustomerID and Description have some null values. We will explore this in the next section.- Pandas have automatically picked up the right data types for most columns with the exception of "CustomerID" which has been identified as "Float". Column DefinitionsThe columns are pretty intuitive but the [UCI website](http://archive.ics.uci.edu/ml/datasets/Online+Retail) also includes a dictionary that helps us get some more insights on what we are looking at. This is summarised in the table below.*Dataset Dictionary***InvoiceNo:** Invoice number. Nominal, a 6-digit integral number uniquely assigned to each transaction. If this code starts with letter 'c', it indicates a cancellation.**StockCode:** Product (item) code. Nominal, a 5-digit integral number uniquely assigned to each distinct product.**Description:** Product (item) name. Nominal.**Quantity:** The quantities of each product (item) per transaction. Numeric.**InvoiceDate:** Invice Date and time. Numeric, the day and time when each transaction was generated.**UnitPrice:** Unit price. Numeric, Product price per unit in sterling.**CustomerID:** Customer number. Nominal, a 5-digit integral number uniquely assigned to each customer.Country: Country name. Nominal, the name of the country where each customer resides. **Comments**One thing that we will need to explore is the "Cancelled" invoices which can be found by filtering for invoice numbers that start with the letter C.We can achieve this quickly by creating a Boolean column that checks for "C" in the invoice number. We will later on process this column further to understand more about the nature of cancelled transactions.
###Code
# Get all cancelled transactions by finding the
# invoices that start with "C"
df_raw['Cancelled'] = df_raw['InvoiceNo'].astype(str).str.upper().str.startswith("C").astype(int)
###Output
_____no_output_____
###Markdown
Missing ValuesWe already know that some columns have missing values. Using the library [missigno](https://github.com/ResidentMario/missingno) we will explore these further in this section. Missing Values Matrix
###Code
mn.matrix(df_raw, figsize=(15, 3), fontsize=10);
plt.title("Missing Values");
###Output
_____no_output_____
###Markdown
Percentage Missing From the above we can see that a significant amount of customer IDs are missing. Let's checkout what the percentage is. We define the following function that provides a summary of all missing values. We also add this in our `utils` module. ```pythondef missing_summary(df): """ Takes in a dataframe and returns a summary of all missing values. Parameters: ----------- df : dataframe Dataframe to calculate the missing summary from. Returns: -------- df_miss : dataframe Missing values summary """ Copy for output df_out = df.copy() Create a new summary dataframe for each column. df_miss = df_out.notnull().sum().reset_index() df_miss['Missing'] = df_out.isnull().sum().values df_miss['Percentage Missing'] = ((df_miss['Missing'] / df_out.shape[0]) * 100).round(1) Rename all the columns df_miss.columns = ['Column', 'Not-Null', 'Missing', 'Perc Missing (%)'] return df_miss```
###Code
utils.missing_summary(df_raw)
###Output
_____no_output_____
###Markdown
**Comments**We can see that we have a very small number of Descriptions missing and around 25% of customerIDs. We will deal with these separately. Handling Missing Data : Description The description data has a very small number of values that are missing so we will start by just examining these first rows.
###Code
# We print the data that has missing
# description to understand whether
# there is anything unique about it
df_raw.loc[df_raw['Description'].isnull()].head(20)
# Check whether all UnitPrice for these columns is zero
df_raw.loc[df_raw['Description'].isnull()]['UnitPrice'].sum() == 0
###Output
_____no_output_____
###Markdown
As observed all these transactions have no "UnitPrice". These could be either False transactions or some other kind of 0 charge. As most of our analysis will be centered around "Price" we don't need these so we will just remove them.
###Code
# Remove the null values for Description and
# get the missing summary
df_raw = df_raw.dropna(subset=['Description']).copy()
utils.missing_summary(df_raw)
###Output
_____no_output_____
###Markdown
Handling Missing Data : CustomerIDFrom the above we can see that almost 25% of the transactions have no customerID. This is a significant amount of data that is missing. We don't have any information as to why this is happening. As we can still tie these transactions to an invoice number we can still get some value from that data. First we need to have a quick look at those particular transactions and see if there is anything that catches our attention.
###Code
# We will print a preview of the data that has
# missing CustomerIDs
df_tmp = df_raw[df_raw['CustomerID'].isnull()]
utils.quick_summary(df_tmp, title="transactions with missing customerID",
row_num=20, show_summary=False)
###Output
[1mTRANSACTIONS WITH MISSING CUSTOMERID[0m
-------------------------------------
Number of rows: 133626 Number of Columns: 9
###Markdown
These look like genuine transactions but for some reason have no customerID. The next thing to check is whether we can infer the customerIDs using Invoice. To do that we need to check whether any of the invoice numbers that have missing customerIDs have any records without a missing one.
###Code
# Get all invoice numbers that have no customer ID
# and check if there are any records of them without a missing ID.
inv_nums = df_tmp['InvoiceNo'].unique().tolist()
not_nulls = df_raw.loc[df_raw['InvoiceNo'].isin(inv_nums)]['CustomerID'].notnull().sum()
utils.print_bold(f"Total of {len(inv_nums)} invoices without customerID")
print(f"Total of {not_nulls} of those invoices have transactions with at least one non-missing CustomerID")
###Output
[1mTotal of 2256 invoices without customerID[0m
Total of 0 of those invoices have transactions with at least one non-missing CustomerID
###Markdown
Unfortunately that isn't the case. For now, we will keep this data as we can still carry out some analysis aggregated at the Product or Invoice Number. For simplicity, we will replace the missing values with "00000". This will also allow us to do some targeted analysis on them later on if needed.
###Code
# Ensure there are no codes with "00000" already
assert df_raw.loc[df_raw['CustomerID'] == "00000"].shape[0] == 0, "CustomerID 00000 already exists."
df_raw['CustomerID'] = df_raw['CustomerID'].fillna("00000")
# Check the missing summary
utils.missing_summary(df_raw)
###Output
_____no_output_____
###Markdown
As confirmed we have now dealt with all missing values. Data Quality / Validation TestsThis section will test a list of some fairly "intuitive hypothesis" to ensure we have the data quality we expect. As we are not familiar with the detailed data architecture of this store we will use intuition to come up with what we need to test. Below is a list of validation tests we will run through:1. All invoice numbers are unique both from a customer and *InvoiceDate* perspective. That means there should be no records of the same invoice number that have either a different *CustomerID* or a different *InvoiceDate*. Violation of this rule will make some of our analysis later on much more complex.2. *UnitPrice* and *Quantity* can never be negative with the exception of Cancellations. 3. All stock codes have unique descriptions. In other words stock code has a one-to-one relationship to descriptions. This will ensure that we can use those two interchangably. 4. All customerIDs have the same Country information. As the country information relates to the customer (according to the column definitions rather than the transaction) it means that a customer should have the same country. There could be rare exceptions when the customer moved to a new country. Validation Test 1: Unique Invoices The first test will check for unique invoices for each customer and invoice date. Instead of using assertions we will use warnings as this is something for the user to know rather than something that should interrupt the program.
###Code
# Identify any cases that we have an invoice number with multiple InvoiceDates
df_test = df_raw.groupby(['InvoiceNo'])['InvoiceDate'].nunique().sort_values(ascending=False) > 1
if df_test.sum() != 0:
warnings.warn(f"There are {df_test.sum()} invoice numbers with more than one Invoice Date.")
###Output
<ipython-input-52-d823f040d0a6>:4: UserWarning: There are 43 invoice numbers with more than one Invoice Date.
warnings.warn(f"There are {df_test.sum()} invoice numbers with more than one Invoice Date.")
###Markdown
From above we can see that there are 43 invoices that have multiple invoice dates. Let's investigate those.
###Code
# Extract all invoices that violate the rule
df_test = df_test.reset_index().copy()
df_test = df_test.loc[df_test['InvoiceDate'] == True]
invalid_invs = df_test['InvoiceNo'].unique()
df_raw.loc[df_raw['InvoiceNo'].isin(invalid_invs)].groupby(['InvoiceNo', 'InvoiceDate'])['InvoiceDate'].count().head(10)
###Output
_____no_output_____
###Markdown
It appears that these just differ by 1 minute which could be down to processing errors. We will now loop through all invoices and check the difference between the dates. If the difference is more than one hour we will print those cases for further investigation. Otherwise we just replace with the minimum date to keep it consistent.
###Code
df_temp = df_raw.loc[df_raw['InvoiceNo'].isin(invalid_invs)].copy()
for invoice in invalid_invs:
# Get all records of that invoice
# and find the difference between
# the dates. If the difference is
# less than 1 hour then discard
df_filt = df_temp.loc[df_temp['InvoiceNo'] == invoice, ['InvoiceNo','InvoiceDate']].copy()
df_filt['Shift'] = df_filt['InvoiceDate'].shift(1)
df_filt['Diff'] = (df_filt['InvoiceDate'] - df_filt['Shift']).astype('timedelta64[h]').fillna(0)
# Check if we are above 1 hour
# if not just replace with the first
# date otherwise print
if (df_filt['Diff'] > 0).sum() > 0:
print(f"Dates above 1 hr interval found for invoice : {inv}")
else:
df_raw.loc[df_raw['InvoiceNo'] == invoice, "InvoiceDate"] = df_filt['InvoiceDate'].min()
###Output
_____no_output_____
###Markdown
As we have no print messages we can assume that all differences were less that 1 hour apart. Let's re-run the check.
###Code
# Identify any cases that we have an invoice number with multiple InvoiceDates
df_test = df_raw.groupby(['InvoiceNo'])['InvoiceDate'].nunique().sort_values(ascending=False) > 1
if df_test.sum() != 0:
warnings.warn(f"There are {df_test.sum()} invoice numbers with more than one Invoice Date.")
###Output
_____no_output_____
###Markdown
Good, it appears that the problem has been solved. Validation Test 2 : Non-Negative Quantity and Unit PriceThis section will ensure that Unit Prices and Quantities are more than zero with the exception of cancellations.
###Code
df_test = (df_raw.loc[df_raw['Cancelled'] == 0, "Quantity"] <= 0)
if df_test.sum() != 0:
warnings.warn(f"There are {df_test.sum()} non-cancelled transactions with quantity less than or equal to zero.")
df_test = (df_raw.loc[df_raw['Cancelled'] == 0, "UnitPrice"] <= 0)
if df_test.sum() != 0:
warnings.warn(f"There are {df_test.sum()} non-cancelled transactions with unit price less than or equal to zero.")
###Output
<ipython-input-56-5f3cc5a765ed>:3: UserWarning: There are 474 non-cancelled transactions with quantity less than or equal to zero.
warnings.warn(f"There are {df_test.sum()} non-cancelled transactions with quantity less than or equal to zero.")
<ipython-input-56-5f3cc5a765ed>:7: UserWarning: There are 1063 non-cancelled transactions with unit price less than or equal to zero.
warnings.warn(f"There are {df_test.sum()} non-cancelled transactions with unit price less than or equal to zero.")
###Markdown
We can see that we have 474 cases of transactions with negative or zero quantities and 1063 transactions with negative or zero unit prices. We will start by investigating the unit prices as they are more.
###Code
# Get a preview of the the data
df_test = df_raw.loc[(df_raw['Cancelled'] == 0) & (df_raw['UnitPrice'] < 0)].copy()
utils.quick_summary(df_test, "transactions with negative unit prices", row_num=20, show_summary=False)
# Get a preview of the the data
df_test = df_raw.loc[(df_raw['Cancelled'] == 0) & (df_raw['UnitPrice'] == 0)].copy()
utils.quick_summary(df_test, "transactions with zero unit prices", row_num=20, show_summary=False)
###Output
[1mTRANSACTIONS WITH NEGATIVE UNIT PRICES[0m
---------------------------------------
Number of rows: 2 Number of Columns: 9
###Markdown
There are only two transactions with a negative unit price and multiple with zeros. The ones with negative prices have the description "Adjust bad debt", which we can't get much information about, so we will just ignore them for now. The ones with a zero price could be anything, from free orders (e.g. delayed orders) to damaged goods to just a mistake in the dataset. Therefore, although these are likely to hold some interesting insights, given their small proportion (<0.1%) we will ignore them.
###Code
# Filter out all non-cancelled orders with negative or
# zero unit prices
df_raw = df_raw.loc[((df_raw['Cancelled'] == 0) & (df_raw['UnitPrice'] > 0)) |
(df_raw['Cancelled'] == 1)]
###Output
_____no_output_____
###Markdown
We now rerun the tests to check if the *Quantity* issues is still there.
###Code
df_test = (df_raw.loc[df_raw['Cancelled'] == 0, "Quantity"] <= 0)
if df_test.sum() != 0:
warnings.warn(f"There are {df_test.sum()} non-cancelled transactions with quantity less than or equal to zero.")
df_test = (df_raw.loc[df_raw['Cancelled'] == 0, "UnitPrice"] <= 0)
if df_test.sum() != 0:
warnings.warn(f"There are {df_test.sum()} non-cancelled transactions with unit price less than or equal to zero.")
###Output
_____no_output_____
###Markdown
It appears that sorting out the Unit Price problem solved the Quantity issue. Validation Test 3 : Unique Stock CodesThis section will ensure that stock codes have a 1-to-1 relationship with Descriptions.
###Code
# Identify any cases that we have an invoice number with multiple InvoiceDates
df_test = df_raw.groupby(['StockCode'])['Description'].nunique().sort_values(ascending=False) > 1
if df_test.sum() != 0:
warnings.warn(f"There are {df_test.sum()} stock codes with more than one product description.")
###Output
<ipython-input-60-db681dc77651>:4: UserWarning: There are 220 stock codes with more than one product description.
warnings.warn(f"There are {df_test.sum()} stock codes with more than one product description.")
###Markdown
There are 220 cases with non unique descriptions. This can cause some confusion down the line. Let's investigate those.
###Code
# Extract all invoices that violate the rule
df_test = df_test.reset_index().copy()
df_test = df_test.loc[df_test['Description'] == True]
invalid_codes = df_test['StockCode'].unique()
df_raw.loc[df_raw['StockCode'].isin(invalid_codes)].groupby(['StockCode', 'Description'])['Description'].count().head(20)
###Output
_____no_output_____
###Markdown
From the above it appears that they are just simple typos in the database. The simplest solution is to loop through all cases, find the most frequently occurring description, and replace them all with that.
###Code
df_temp = df_raw.loc[df_raw['StockCode'].isin(invalid_codes)].copy()
# Loop through all codes, find the most
# common description for each code and
# assign it to every row with that code
for code in tqdm(invalid_codes):
df_filt = df_temp.loc[df_temp['StockCode'] == code].copy()
# get the most common description
top_desc = df_filt['Description'].value_counts().sort_values(ascending=False).index[0]
# assign this to all the codes
df_raw.loc[df_raw['StockCode'] == code, "Description"] = top_desc.strip()
###Output
100%|██████████████████████████████████████████████████████████████████████████████| 220/220 [00:10<00:00, 21.15it/s]
###Markdown
This appears to have worked let's rerun the test.
###Code
# Identify any cases that we have an invoice number with multiple InvoiceDates
df_test = df_raw.groupby(['StockCode'])['Description'].nunique().sort_values(ascending=False) > 1
if df_test.sum() != 0:
warnings.warn(f"There are {df_test.sum()} stock codes with more than one product description.")
###Output
_____no_output_____
###Markdown
Good, we have no warnings so the solution must have worked. Validation Test 4 : Customer CountryThis part will test whether all unique customers have one country entry.
###Code
# Group the customer by country
# we expect to only see 1s here
df_grp = df_raw.groupby(['CustomerID'])['Country'].nunique()
dupl_cust = df_grp.loc[df_grp > 1].index.tolist()
dupl_cust.remove("00000")
print(dupl_cust)
###Output
[12370.0, 12394.0, 12417.0, 12422.0, 12429.0, 12431.0, 12455.0, 12457.0]
###Markdown
Ignoring the 00000 as we already know that it's a code to mark missing customers, it appears that we have a few cases of customers with multiple country entries. Let's test this by aggregating only those customers and taking the sum of quantity.
###Code
# Groupby for only those customers
df_grp = df_raw.loc[df_raw['CustomerID'].isin(dupl_cust)]
df_grp = df_grp.groupby(["CustomerID", "Country"])['Quantity'].sum()
df_grp
###Output
_____no_output_____
###Markdown
We see that there are only a few cases. This could be due to an error in the dataset or just simply the customer has moved. For simplicity we will keep the record with the most sales.
###Code
df_grp = df_grp.reset_index()
max_idx = df_grp.groupby(["CustomerID"])['Quantity'].transform(max) == df_grp['Quantity']
df_grp = df_grp[max_idx].reset_index(drop=True)
df_grp
# Loop through all customers and make the replacements
values_dict = df_grp[['CustomerID', 'Country']].to_dict()
country_dict = values_dict['Country']
customer_dict = values_dict['CustomerID']
for idx, cust_id in customer_dict.items():
country = country_dict[idx]
# Make the replacement
df_raw.loc[df_raw['CustomerID'] == cust_id, "Country"] = country
###Output
_____no_output_____
###Markdown
Let's rerun the check.
###Code
df_grp = df_raw.groupby(['CustomerID'])['Country'].nunique()
dupl_cust = df_grp.loc[df_grp > 1].index.tolist()
dupl_cust.remove("00000")
utils.print_bold(f"There are {len(dupl_cust)} customers with duplicate countries")
###Output
[1mThere are 0 customers with duplicate countries[0m
###Markdown
Good we are now ready to move to data preparation. Data PreparationThis section will prepare the dataset for exploratory data analysis. This section includes the following:1. Cleanning of columns2. Creation of new Features / Columns3. Aggregation of the dataset in different formatsThe easiest way to approach this, is to split into areas of interest. The following will be explored:- Cancellation orders- Stock codes- UnitPricesWe start by creating a copy of the current dataframe.
###Code
df_clean = df_raw.copy()
###Output
_____no_output_____
###Markdown
Stock Codes Exploration

While previewing the dataset, it was observed that the stock code is not always a simple numeric code. There are generally three different types of stock codes:

- A simple numeric code; these are likely to be genuine codes.
- A numeric code with one or more letters in it; these could indicate something, but given the limited information we have, we will treat them as genuine codes.
- Codes that don't include any numbers; we need to investigate these further.

Non-Digit Codes

We start by filtering for codes that don't contain any numeric digit.
###Code
# Extract the list of codes that don't contain
# any digits
non_digit_codes = df_clean[~(df_clean['StockCode'].astype(str).str.contains("\d", regex=True))]['StockCode'].unique()
print(non_digit_codes)
###Output
['POST' 'D' 'DOT' 'M' 'BANK CHARGES' 'S' 'AMAZONFEE' 'm' 'DCGSSBOY'
'DCGSSGIRL' 'PADS' 'B' 'CRUK']
###Markdown
We observe that these are codes that could relate to some sort of "service" or additional charge. We need to understand how they are represented in the dataset by checking a couple of things:

- What they relate to and how often they occur, by looking at their description.
- Whether they have their own InvoiceNo or are part of other invoices.
###Code
# Filter for only transactions with non-digit codes
df_non_digit = df_clean.loc[df_clean['StockCode'].isin(non_digit_codes)].copy()
df_non_digit.groupby(['StockCode', 'Description'])['Description'].count().sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
The above confirms that all these codes relate to services. We now need to test how they are recorded, i.e. whether they appear on separate invoices or not. The following code loops through all non-digit codes and checks whether their invoices contain any StockCodes other than the non-digit ones. Invoices containing only non-digit codes are removed, as we have no easy way of tying them to the actual purchases they relate to. In reality we would ask for more information about those; since we don't have it, we exclude them from the analysis because they would interfere with the results once we start aggregating. The exception to this rule is discounts (code D), which we want to keep.
###Code
removed_invs = []
for code in non_digit_codes:
# Filter the dataframe for that code
# and get the invoices.
tmp_df = df_non_digit.loc[df_non_digit['StockCode'] == code].copy()
inv_list = tmp_df['InvoiceNo'].tolist()
total_inv = len(inv_list)
# Filter the original dataset and get all
# invoices that have at least one non-digit code in them
df_code = df_clean.loc[df_clean['InvoiceNo'].isin(inv_list)].copy()
digit_invs = df_code.loc[~(df_code['StockCode'].isin(non_digit_codes))]['InvoiceNo'].unique()
non_digit_only = df_code.loc[~(df_code['InvoiceNo'].isin(digit_invs))]
only_non_digit_invs = non_digit_only['InvoiceNo'].unique().tolist()
utils.quick_summary(non_digit_only, f"transactions for {code} with InvoiceNo that have non-digit only StockCodes", row_num=5,
show_summary=False)
print(f"{code} contains {len(digit_invs)} / {total_inv} invoices without at least one digit stock code")
print()
# Get all invoices that have only non-digit
# codes and remove them from the original
# dataframe. For discounts just save them
# separately
if code != "D":
removed_invs += only_non_digit_invs
else:
discount_invs = only_non_digit_invs
###Output
[1mTRANSACTIONS FOR POST WITH INVOICENO THAT HAVE NON-DIGIT ONLY STOCKCODES[0m
-------------------------------------------------------------------------
Number of rows: 197 Number of Columns: 9
###Markdown
First we remove all invoices that contain only non-digit codes.
###Code
df_clean = df_clean.loc[~(df_clean['InvoiceNo'].isin(removed_invs))].copy()
# Print the summary
print(f"Total of {len(removed_invs)} invoices removed. The remaining transactions correspond to {df_clean.shape[0]} / {df_raw.shape[0]}.")
###Output
Total of 560 invoices removed. The remaining transactions correspond to 538752 / 539392.
###Markdown
As expected, quite a large number of invoices was removed, but the overall effect on the transaction data was minimal.

Discount Codes

Recall that we retained the discount codes. This is because knowing when a discount was applied is useful (a discount can relate to the motivation behind a purchase).
###Code
utils.quick_summary(df_clean.loc[df_clean['InvoiceNo'].isin(discount_invs)], "Discount invoices information",
row_num=15, show_summary=False)
###Output
[1mDISCOUNT INVOICES INFORMATION[0m
------------------------------
Number of rows: 76 Number of Columns: 9
###Markdown
**Comments**

A couple of observations:

- Discounts are all marked as cancelled invoices. This will probably make it challenging to relate them back to the actual purchase; we can investigate that once we start aggregating at the invoice level.
- There are only 76 such invoices, which is probably insignificant given that we have more than 3000 invoices. Nevertheless we will keep them. For ease of analysis we remove the "Cancelled" flag from these to avoid confusion.
###Code
df_clean.loc[df_clean['InvoiceNo'].isin(discount_invs), "Cancelled"] = 0
###Output
_____no_output_____
###Markdown
Dealing with Cancelled Orders

As we saw before, part of our invoices have been cancelled. We created a boolean column that indicates when that happened. The first thing we do is understand the percentage of cancelled orders. To do that we need to aggregate at the invoice level.

Summary of Cancelled Orders
###Code
# Indicate which invoices have been cancelled
df_canc = df_clean.groupby(["InvoiceNo"])['Cancelled'].max().reset_index(name="Cancelled")
utils.quick_summary(df_canc, "all invoices with cancellation flag",
row_num=5, show_summary=False)
# Get the summary stats
total_orders = df_canc.shape[0]
total_canc = df_canc['Cancelled'].sum()
perc_canc = round((total_canc / total_orders * 100), 1)
utils.print_bold(f"There are {total_canc} / {total_orders} cancelled orders. This is approx. {perc_canc}% of all orders.")
###Output
[1mThere are 3422 / 23262 cancelled orders. This is approx. 14.7% of all orders.[0m
###Markdown
As we can see, a significant share of orders is cancelled.

Identify Order Pairs

As this dataset is a simple list of orders, we can expect that for most cancellations there should be an earlier record of the order that was cancelled. We start by obtaining all cancelled transactions.
###Code
# Get all canceled transactions and their
# corresponding invoices
df_cancel = df_clean.loc[df_clean['Cancelled'] == 1]
canc_inv_list = df_cancel['InvoiceNo'].unique().tolist()
df_cancel.head(10)
###Output
_____no_output_____
###Markdown
The following function will identify cancellation pairs of transactions and only keep the original one (i.e. the purchase). This will allow us to avoid duplication when we are aggregating. The steps are as follows:

1. Loop through all cancellation transactions and check whether there is a transaction which happened earlier, from the same CustomerID and same StockCode, with the same or less quantity.
2. If there is at least one transaction that matches the above, take the latest one and update its *Quantity Canceled* and *Cancel Date* metrics. Ensure that we only cancel as much quantity as in the cancellation, rather than the entire purchase.
3. For cases where there are multiple matches we follow these rules: if there is at least one transaction with the exact quantity of the cancelled one, take the most recent of those; otherwise keep eliminating transactions (most recent first) until the total cancellation quantity is covered.

**Please note:** As we only have a snapshot of the transaction data, it is very likely that for more recent cancellations we won't be able to map them onto their original purchases. In addition, as we saw above, for this dataset cancelled invoices are not always triggered by an existing purchase. We save this function in the `features\build_features.py` module.

```python
def process_cancellations(df, limit_rows=None):
    """
    Takes in the dataframe of transactions and identifies all cancellations.
    It then runs through the following logic to identify matches for those
    cancellations. For each cancellation it identifies all transactions that
    have the same CustomerID and StockCode, are in the past and have the same
    or less Quantity. It excludes cancellations with no CustomerID.

    For cancellations with no matches it just takes a note of the index.
    For single matches it adds the canceled quantity to the original dataframe.
    For multi-matches it either picks up the transaction with an exact match
    on Quantity or keeps eliminating transactions until it covers all
    cancellation quantities.

    Parameters:
    -----------
    df : dataframe
        A dataframe of transactions that has the "Cancelled" column
    limit_rows : int (default : None)
        Limits the number of cancellations to look through. This is useful
        for testing. If None looks through all of them.

    Returns:
    --------
    df_clean : dataframe
        A dataframe with the paired cancellations marked down on their
        original purchases.
    match_dict : dictionary
        A dictionary of all indices of the cancellation transactions split
        by their matched category
    """
    # Create the main dataframes
    df_clean = df.copy()
    df_cancel = df_clean.loc[(df_clean['Cancelled'] == 1) &
                             (df_clean['CustomerID'] != "00000")]
    incomplete_cancelations = []

    # Initialize the dictionary and the columns
    match_dict = {"no_match": [], "one_match": [], "mult_match": []}
    df_clean['Quantity_Canc'] = 0
    df_clean['Cancel_Date'] = np.nan

    if limit_rows is not None:
        df_cancel = df_cancel.iloc[:limit_rows]

    for index, row in tqdm(df_cancel.iterrows(), total=df_cancel.shape[0]):
        # Extract all useful information
        customer_id = row['CustomerID']
        stock_code = row['StockCode']
        canc_quantity = row["Quantity"]
        canc_date = row['InvoiceDate']

        # Get all transactions that have the same CustomerID and StockCode
        # but happened earlier than the cancellation
        df_tmp = df_clean.loc[(df_clean['CustomerID'] == customer_id) &
                              (df_clean['StockCode'] == stock_code) &
                              (df_clean['InvoiceDate'] <= canc_date) &
                              (df_clean['Cancelled'] != 1)]

        # If we have no matches just record that cancelation as unmatched
        if df_tmp.shape[0] == 0:
            match_dict['no_match'].append(index)

        # If we have only one match then take that as its match. Ensure we get
        # the minimum between the quantity matched and the available cancelations
        elif df_tmp.shape[0] == 1:
            matched = df_tmp.iloc[0]
            quantity_bought = matched['Quantity']
            already_canc = matched['Quantity_Canc']

            # If we don't find enough purchases to match the cancelations
            # then keep track of them
            if quantity_bought < (canc_quantity * -1):
                incomplete_cancelations.append(index)

            if (quantity_bought - already_canc) >= (canc_quantity * -1):
                match_dict['one_match'].append(index)
                # Take the minimum between remainder and total bought
                actual_cancel = min(quantity_bought, (canc_quantity * -1))
                # Update the original dataframe
                df_clean.loc[matched.name, "Quantity_Canc"] += actual_cancel
                df_clean.loc[matched.name, "Cancel_Date"] = canc_date
                # Debug output (commented out):
                # print(index); display(df_cancel.loc[index:index, :]); display(df_tmp)
                # print(f"{matched.name} was chosen with {actual_cancel} taken out of it.")
                # display(df_clean.loc[matched.name:matched.name, :])
            else:
                match_dict['no_match'].append(index)

        # In the case that we have more than one match the following rules apply.
        # If there is an exact match to the quantity take the most recent one.
        # Otherwise keep taking recent transactions until you cover the total
        # cancelation quantity.
        elif df_tmp.shape[0] > 1:
            match_dict['mult_match'].append(index)
            # Debug output (commented out):
            # print(index); display(df_cancel.loc[index:index, :]); display(df_tmp)

            # Check if there are any exact matches or greater matches of Quantity
            exact_matches = df_tmp.loc[(df_tmp['Quantity'] == (canc_quantity * -1)) &
                                       (df_tmp['Quantity'] >= (df_tmp['Quantity_Canc'] +
                                                               (canc_quantity * -1)))]

            if len(exact_matches) == 0:
                # Loop through the array from bottom up and only mark
                # transactions until you match the total quantity canceled
                cum_quant = 0
                for idx, r in df_tmp[::-1].iterrows():
                    quantity_bought = r['Quantity'] - r['Quantity_Canc']
                    quantity_left = quantity_bought - r['Quantity_Canc']
                    if quantity_left <= (canc_quantity * -1):
                        continue
                    elif cum_quant < (canc_quantity * -1):
                        # Ensure we are only assigning as much quantity as available
                        remainder = (canc_quantity * -1) - cum_quant
                        actual_cancel = min(quantity_bought, remainder)
                        cum_quant += actual_cancel
                        # print(f"Cancelled {actual_cancel} / {quantity_bought} of order {idx}")
                        # print(f"Added transaction {idx} and cum_quant is now: {cum_quant} / {canc_quantity * -1}")
                        # Update the original dataframe
                        df_clean.loc[idx, "Quantity_Canc"] += actual_cancel
                        df_clean.loc[idx, "Cancel_Date"] = canc_date
            else:
                # Take the latest exact match as the correct transaction
                matched = exact_matches.iloc[-1]
                idx = matched.name
                actual_cancel = canc_quantity * -1
                # Update the original dataframe
                df_clean.loc[idx, "Quantity_Canc"] += actual_cancel
                df_clean.loc[idx, "Cancel_Date"] = canc_date
                # print(f"{idx} was chosen."); display(df_clean.loc[idx:idx, :])

    # Print the summary
    print(f"Total Cancelation Summary")
    print(f"Total Cancelations: {df_cancel.shape[0]}")
    print(f"No-Matches: {len(match_dict['no_match'])} "
          f"({round((len(match_dict['no_match']) / df_cancel.shape[0] * 100), 1)}%)")
    print(f"Single-Matches: {len(match_dict['one_match'])} "
          f"({round((len(match_dict['one_match']) / df_cancel.shape[0] * 100), 1)}%)")
    print(f"Multi-Matches: {len(match_dict['mult_match'])} "
          f"({round((len(match_dict['mult_match']) / df_cancel.shape[0] * 100), 1)}%)")

    # At the end ensure that we don't have any canceled quantities above the
    # actual quantity, except for Discounts
    df_test = df_clean[(df_clean["Cancelled"] != 1) & (df_clean['StockCode'] != "D")].copy()
    assert (df_test['Quantity'] >= df_test['Quantity_Canc']).all(), \
        "There are canceled quantities > bought quantities"

    return df_clean, match_dict
```

We run the above function to get the final summary.
###Code
df_clean, match_dict = build_features.process_cancellations(df=df_clean)
###Output
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8604/8604 [11:11<00:00, 12.82it/s]
###Markdown
From the above results we can see that most of the cancelled orders were matched, but a portion of them had no match at all. If you were working on this project as part of a new strategy, this is where you would extract those 13.3% of cancellations (or 0.2% of all transactions) and talk with the SMEs to understand what they could mean. For the purposes of this analysis we will ignore them.

Remove Cancelled Transactions

We are almost done with the basic dataframe. Now that we have taken care of the matched transactions, we can remove the ones that didn't have a match. We can also drop the "Cancelled" column as we no longer need it.
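Before dropping them, the unmatched cancellations could be exported for review with the SMEs. A minimal sketch is shown below; it assumes `match_dict['no_match']` holds the row indices of the unmatched cancellations, as described by the function's summary, and the column subset is purely illustrative.

```python
# Pull out the unmatched cancellations for later review (sketch)
df_unmatched = df_clean.loc[match_dict['no_match'],
                            ['InvoiceNo', 'CustomerID', 'StockCode',
                             'Description', 'Quantity', 'InvoiceDate']]
df_unmatched.head()
```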
###Code
# Drop all the canceled transactions
# and report the losses
all_canc = df_clean[df_clean['Cancelled'] == 1].shape[0]
perc_all = round((all_canc / df_clean.shape[0] * 100), 1)
utils.print_bold(f"A total of {all_canc} ({perc_all}%) cancellations will be dropped from the dataset. We have succesfully matched {len(match_dict['one_match']) + len(match_dict['mult_match'])} / {all_canc} of them.")
df_clean = df_clean[df_clean['Cancelled'] == 0].copy()
df_clean = df_clean.drop("Cancelled", axis=1)
# Recreate the cancelled column as a flag for
# transactions that had all of their
# quantity cancelled
df_clean['Full_Canc'] = 0
df_clean.loc[df_clean['Quantity'] == df_clean['Quantity_Canc'], "Full_Canc"] = 1
###Output
[1mA total of 8774 (1.6%) cancellations will be dropped from the dataset. We have succesfully matched 7523 / 8774 of them.[0m
###Markdown
Adding the Price Column

Now that we have processed our dataset, we are ready to add the *Total Price* column, taking the cancellations into consideration. This corresponds to the total price of the purchase, defined as

$$\text{Total Price} = \text{Actual Quantity} \times \text{UnitPrice}$$

This column will be used later on to understand the spending of our customers.
###Code
# Define the price columns
df_clean['Actual_Quantity'] = df_clean['Quantity'] - df_clean['Quantity_Canc']
df_clean['Total_Price'] = df_clean['Actual_Quantity'] * df_clean['UnitPrice']
###Output
_____no_output_____
###Markdown
Rename and Rearrange

Now that we have finished with the data preparation, it's time to rename our columns to something easier to use when we plot graphs. We also want to rearrange the columns so they make more sense. Let's recall what the current columns are.
###Code
df_clean.columns
###Output
_____no_output_____
###Markdown
Below we define a function that rearranges the columns of a dataframe based on an input and then renames them. We want to rename the columns to something shorter, lower case and more Pythonic.

```python
def rearrange_and_rename(df, col_order, rename_dict=None):
    """
    Takes in a dataframe, a column order and optionally a rename dictionary.
    It rearranges the column order by putting the ones defined in "col_order"
    first. It retains the order of the remaining ones.

    Parameters:
    -----------
    df : dataframe
        Dataframe to rearrange
    col_order : list
        List of the columns to go at the beginning of the dataframe
    rename_dict : dictionary
        Dictionary of {old_name: new_name} pairs used to rename the columns
    """
    df_out = df.copy()

    # Rearrange the columns based on the order provided
    other_cols = [col for col in df_out.columns if col not in col_order]
    df_out = df_out[col_order + other_cols]

    if rename_dict is not None:
        # Rename using the rename dictionary
        df_out = df_out.rename(columns=rename_dict)

    return df_out
```
###Code
# Define a new column order and a rename dictionary
col_order = ["CustomerID", "Country", "InvoiceNo", "InvoiceDate", "StockCode", "Description",
"Actual_Quantity", "UnitPrice", "Total_Price"]
rename_dict = {"InvoiceNo" : "invc_num",
"StockCode" : "stock_code",
"Description" : "prod_desc",
"Quantity" : "qty_all",
"InvoiceDate" : "invc_date",
"UnitPrice" : "unit_price",
"CustomerID" : "customer_id",
"Country" : "country",
"Quantity_Canc" : "qty_canc",
"Cancel_Date" : "canc_date",
"Full_Canc" : "full_canc",
"Actual_Quantity": "qty",
"Total_Price" : "total_price"}
df_clean = utils.rearrange_and_rename(df_clean, col_order = col_order, rename_dict=rename_dict)
# Display the final result
utils.quick_summary(df_clean, "final processed dataset", row_num=10,
show_summary=False)
###Output
[1mFINAL PROCESSED DATASET[0m
------------------------
Number of rows: 529978 Number of Columns: 13
###Markdown
Data Aggregation and Output

There are various formats of this dataset that will be useful. We produce two different formats of the data:

1. The data as it is, which breaks down to the transaction level.
2. An aggregated version of the data at the CustomerID and invoice level. This will allow us to get a good summary of our data.

Transaction Dataset

This is the dataset we have been working with for the entire notebook, and it is ready for output now. We use the `data/interim` folder for it.
###Code
df_clean.to_csv(os.path.join(int_path, "data_cleanned.csv"), index=False)
###Output
_____no_output_____
###Markdown
Aggregated Data

This version of the dataset will be aggregated at different levels to suit our analysis. These are:

- Customer Data
- Product Data
- Invoice Data
- Main Data

Customer Data

This dataset will contain all information for each customer. We collect the following:

1. Number of invoices
2. First and last purchase
3. Total quantity of products
4. Total spend
5. Cancellation rate and total amount cancelled

This dataset will primarily be used in the 3rd notebook, where we explore *customer segmentation*. We also add a new column called "canc_loss", which indicates the total revenue lost due to cancellations.
###Code
df_clean['canc_loss'] = (df_clean['qty_all'] * df_clean['unit_price']) - df_clean['total_price']
cust_cols = ["customer_id", "country"]
df_cust = df_clean[cust_cols].drop_duplicates()
# Create a dataframe for
# number of purchases and first purchase
# of the customer and sort by our oldest customer
df_grp = df_clean.groupby(['customer_id','country']).agg({"invc_num" : "nunique",
"invc_date" : ["min", "max"],
"qty" : "sum",
"stock_code" : "nunique",
"total_price" : "sum",
"full_canc" : "mean",
"canc_loss" : "sum"})
df_grp.columns = ["invc_num", "invc_date", "last_purchase", "qty",
"stock_code", "total_price", "full_canc", "canc_loss"]
df_grp = df_grp.reset_index()
df_grp = df_grp.rename(columns={"invc_num" : "orders",
"qty" : "quantity",
"stock_code" : "unq_products",
"total_price" : "total_spend",
"invc_date" : "first_purchase",
"full_canc" : "cancel_rate",
"canc_loss" : "total_loss"})
df_cust = df_cust.merge(df_grp, how="left", on=["customer_id", "country"]).sort_values(by="total_spend", ascending=False)
# Output the dataframe
df_cust.to_csv(os.path.join(int_path, "customer_data.csv"), index=False)
utils.quick_summary(df_cust, "Customer Dataset sorted by first purchase", show_summary=False)
###Output
[1mCUSTOMER DATASET SORTED BY FIRST PURCHASE[0m
------------------------------------------
Number of rows: 4343 Number of Columns: 10
###Markdown
We can see that we have 4343 unique customers. We will explore this later on.

Product Data

This view of the data looks at the properties of each unique product. For each product we get:

- Product description
- Total quantity
- Total revenue
- Median unit price (the median is used because the unit price tends to vary)
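To see that the unit price really does vary within a product, a quick check like the following can be run (a small sketch using the cleaned column names defined earlier):

```python
# Number of distinct unit prices per stock code; many products have more than one
price_variation = df_clean.groupby('stock_code')['unit_price'].nunique()
print(price_variation.describe())
print(price_variation.sort_values(ascending=False).head())
```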
###Code
# Extract all relevant columns and drop
# duplicates
prod_cols = ['stock_code', 'prod_desc']
df_prod = df_clean[prod_cols].drop_duplicates()
# Get all the relevant aggregated metrics
# and merge onto the product dataframe
df_grp = df_clean.groupby(["stock_code"]).agg({"qty" : "sum",
"unit_price" : "median",
"total_price" : "sum"}).reset_index()
df_grp = df_grp.rename(columns={"qty" : "sales",
"unit_price" : "med_unit_price",
"total_price" : "revenue"})
# Add the "perc of total revenue"
df_grp['sales_perc'] = (df_grp['sales'] / df_grp['sales'].sum()).round(3)
df_grp['revenue_perc'] = (df_grp['revenue'] / df_grp['revenue'].sum()).round(3)
df_prod = df_prod.merge(df_grp, how="left", on="stock_code")
df_prod = df_prod.sort_values(by="revenue_perc", ascending=False)
# Output the dataframe
df_prod.to_csv(os.path.join(int_path, "product_data.csv"), index=False)
# Preview the final dataframe
utils.quick_summary(df_prod, "Product dataframe sorted by revenue (%)", show_summary=False)
###Output
[1mPRODUCT DATAFRAME SORTED BY REVENUE (%)[0m
----------------------------------------
Number of rows: 3919 Number of Columns: 7
###Markdown
Invoice Data

This dataset will contain information specific to each invoice. Specifically:

- Invoice date
- Customer ID
- Number of unique products bought
- Total quantity
- Total price
- Percent cancelled
- Item list
###Code
# Extract all the relevant columns for the invoices
invc_cols = ['invc_num', 'customer_id', 'invc_date']
df_invc = df_clean[invc_cols].drop_duplicates()
# Get the aggregate of all invoices metrics
# and merge with the above
df_grp = df_clean.groupby(['invc_num']).agg({"qty" : "sum",
"stock_code" : "nunique",
"total_price" : "sum",
"full_canc" : "mean",
"prod_desc" : list}).reset_index()
df_grp = df_grp.rename(columns={"qty" : "total_qty",
"stock_code" : "unq_products",
"full_canc" : "perc_canc",
"total_price" : "revenue",
"prod_desc" : "item_list"})
# Add the cancellation and discount flag
df_grp['cancelled'] = (df_grp['perc_canc'] == 1).astype(int)
df_grp['is_discount'] = (df_grp['total_qty'] < 0).astype(int)
df_invc = df_invc.merge(df_grp, on=['invc_num'], how="outer").sort_values(by="revenue", ascending=False)
# Output into the Interim file
df_invc.to_csv(os.path.join(int_path, "invoice_data.csv"), index=False)
# Display the final dataframe
utils.quick_summary(df_invc, "invoice dataframe sorted by revenue", show_summary=False)
###Output
[1mINVOICE DATAFRAME SORTED BY REVENUE[0m
------------------------------------
Number of rows: 19840 Number of Columns: 10
###Markdown
Main Data

This dataset is a combination of the invoice data and the customer data. This will be the primary dataset we will use during the exploratory data analysis part.
###Code
# We start by aggregating all possible columns to avoid duplication
agg_cols = ['customer_id', 'invc_num', 'country']
df_main = df_clean.groupby(agg_cols)['invc_date'].min().reset_index()
# Merge the invoice-level data onto the customer/invoice combinations
df_main = df_main.merge(df_invc, on=['customer_id', 'invc_num', 'invc_date'], how="left")
###Output
_____no_output_____
###Markdown
We now want to add various date features that will be handy for our analysis. To do that, we define the following function, which extracts common seasonal features from a date column. This will go in our `features\build_features.py` module.

```python
def get_df_date_features(date_df, date_column):
    """
    Takes in a dataframe and the corresponding date_column. From that it
    extracts the following information:
    - Month
    - Month Name
    - Day
    - Week Num
    - Season
    - Year
    - Is_Weekend

    Parameters
    ----------
    date_df: dataframe
        A timeseries dataset that contains a date column where features can
        be extracted from.
    date_column: str
        Column name of where the dates are in the dataframe

    Returns
    -------
    edited_df: dataframe
        Dataframe with the features mentioned above added as columns
    """
    # Copy the dataframe
    df_edited = date_df.copy()
    df_edited[date_column] = pd.to_datetime(df_edited[date_column])

    # Get the Year / Date / Month / Day / Week
    dates = list(df_edited[date_column].dt.strftime("%d/%m/%Y"))
    years = list(df_edited[date_column].dt.year)
    months = list(df_edited[date_column].dt.month)
    month_names = list(df_edited[date_column].dt.month_name().apply(lambda x: x[:3]))
    days = list(df_edited[date_column].dt.day_name().apply(lambda x: x[:3]))
    day_num = list(df_edited[date_column].dt.day)
    weeks = list(df_edited[date_column].dt.week)

    # Add the seasons
    d = {
        "Winter": [12, 1, 2],
        "Spring": [3, 4, 5],
        "Summer": [6, 7, 8],
        "Autumn": [9, 10, 11],
    }
    seasons = []
    # Go through all months and find out which season they belong to
    for x in months:
        for key, value in d.items():
            if x in value:
                seasons.append(key)
                continue
            continue

    # Add to dataset
    df_edited["month"] = month_names
    df_edited["day"] = days
    df_edited['day_num'] = day_num
    df_edited["date"] = dates
    df_edited["date"] = pd.to_datetime(df_edited["date"], format="%d/%m/%Y")
    df_edited["week_in_year"] = weeks
    df_edited["year"] = years
    df_edited["season"] = seasons

    # Create the christmas_hol column
    df_edited['is_christmas'] = ((df_edited['week_in_year'] >= 49) |
                                 (df_edited['week_in_year'] <= 2)).astype(int)

    # Create the is_weekend column
    df_edited["is_weekend"] = (
        (df_edited["day"] == "Sun") | (df_edited["day"] == "Sat")
    ).astype(int)

    # Create the year + month col
    df_edited["month_n_year"] = df_edited["month"] + " " + df_edited["year"].astype(str)

    return df_edited
```
###Code
df_main = build_features.get_df_date_features(df_main, date_column="invc_date")
# Output the main dataframe
df_main.to_csv(os.path.join(int_path, "main_data.csv"), index=False)
# Print the final dataframe
utils.quick_summary(df_main, "overall aggregated dataframe with date features",
show_summary=False)
###Output
[1mOVERALL AGGREGATED DATAFRAME WITH DATE FEATURES[0m
------------------------------------------------
Number of rows: 19840 Number of Columns: 21
|
ml_optimalization.ipynb | ###Markdown
Optimization - find the minimum of a function using SciPy

Optimization is the problem of numerically finding the minima (or maxima, or zeros) of a function. The function is called the cost function, objective function, or energy.

Sources:
- https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html
- https://scipy-lectures.org/advanced/mathematical_optimization/index.html
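As a tiny warm-up before the examples below, the basic calling pattern looks like this (a minimal sketch with a made-up quadratic cost; to maximize a function, minimize its negative instead):

```python
from scipy.optimize import minimize

# Minimize f(x) = (x - 3)^2, starting the search at x0 = 0
result = minimize(lambda x: (x - 3.0) ** 2, x0=[0.0])
print(result.success, result.x, result.fun)  # x should be close to [3.], f(x) close to 0
```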
###Code
import numpy as np
import matplotlib.pylab as plt
import math
from scipy.optimize import minimize, minimize_scalar
def func_sin(x):
return math.sin(x)
def func_sin_array(x):
return np.array([func_sin(xi) for xi in x])
x = np.linspace(0, 2*np.pi, 100)
plt.plot(x, func_sin_array(x))
plt.xlabel('Angle [rad]')
plt.ylabel('sin(x)')
plt.axis('tight')
plt.show()
for method in ['Brent', 'Golden']:
    # it is useful to specify a bracketing interval to search within
result = minimize_scalar(func_sin, [4, 5], method=method)
print ('Scalar method:', method, ', success:', result.success, ', minimum: [', result.x, ',', result.fun, ']')
# More info: https://scipy-lectures.org/advanced/mathematical_optimization/index.html
def func_exp(x):
return -np.exp(-(x - 0.7)**2)
def func_exp_array(x):
return np.array([func_exp(xi) for xi in x])
x2 = np.linspace(-5, 5, 100)
plt.plot(x2, func_exp_array(x2))
plt.xlabel('x')
plt.ylabel('custom(x)')
plt.axis('tight')
plt.show()
for method in ['Brent', 'Golden']:
result = minimize_scalar(func_exp, [0, 1], method=method)
print ('Scalar method:', method, ', success:', result.success, ', minimum: [', result.x, ',', result.fun, ']')
# Show 3D representation of Rosenbrock function
# - adapted from https://www.cc.gatech.edu/classes/AY2015/cs2316_fall/codesamples/rosenbrock_demo.py
import matplotlib.pyplot as plot
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
from mpl_toolkits.mplot3d import Axes3D
fig = plot.figure(figsize=(10,10))
ax = fig.gca(projection='3d')
s = 0.05 # Try s=1, 0.25, 0.1, or 0.05
X = np.arange(-2, 2.+s, s) #Could use linspace instead if dividing
Y = np.arange(-2, 3.+s, s) #evenly instead of stepping...
# Create the mesh grid(s) for all X/Y combos.
X, Y = np.meshgrid(X, Y)
# Rosenbrock function w/ two parameters using numpy Arrays
Z = 0.5*(1.-X)**2 + 1.0*(Y-X*X)**2
# try cmap=cm.coolwarm vs cmap=cm.jet
surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.coolwarm, linewidth=0, antialiased=False)
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
fig.colorbar(surf, shrink=0.5, aspect=5)
plot.show()
# More info:
# - https://docs.scipy.org/doc/scipy/reference/optimize.html
# - https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html
def func_rosenbrock(x):
return .5*(1 - x[0])**2 + (x[1] - x[0]**2)**2
for method in [
'Nelder-Mead',
'Powell',
'CG',
'BFGS',
'L-BFGS-B',
'TNC',
'COBYLA',
'SLSQP']:
result = minimize(func_rosenbrock, [2, -1], method=method)
print ('Method:', method, ', success:', result.success, ', minimum:', result.fun, 'x:', result.x)
###Output
Method: Nelder-Mead , success: True , minimum: 1.11527915993744e-10 x: [1.00001481 1.00002828]
Method: Powell , success: True , minimum: 2.391234618951192e-30 x: [1. 1.]
Method: CG , success: True , minimum: 1.6486281937428067e-11 x: [0.99999426 0.99998864]
Method: BFGS , success: True , minimum: 4.9931043527025166e-15 x: [0.99999991 0.99999979]
Method: L-BFGS-B , success: True , minimum: 3.7328852106324297e-13 x: [1.00000053 1.00000057]
Method: TNC , success: True , minimum: 7.014903400494414e-08 x: [0.99962707 0.99922957]
Method: COBYLA , success: True , minimum: 1.7339665207898726e-06 x: [0.99835934 0.99609841]
Method: SLSQP , success: True , minimum: 9.926858237392154e-08 x: [1.00003255 0.99975088]
|
content/post/yt-internal-and-external-ecosystems/yt-internal-and-external-ecosystems.ipynb | ###Markdown
I think I've talked myself into proposing a big change in yt. I'm not the "boss" of yt, so it might not happen, but I've kind of worked up my courage to make a serious suggestion.This last week I have been at [SciPy 2019](https://scipy2019.scipy.org/) and I had the opportunity to see a lot of talks.There were a few that really stuck with me, but for the purposes of this rather technically-focused blog post, I'm going to stick to just one in particular.[Matt Rocklin](http://matthewrocklin.com/) gave a talk about [refactoring the ecosystem to prepare for heterogeneous computing](https://www.youtube.com/watch?v=Q0DsdiY-jiw) (you should go watch it!). More specifically, though, what it seemed to me was that it was a talk more about an opportunity to avoid fragmentation and think more carefully about how arrays and APIs are thought of and used. That got me thinking about something I've kind of touched on in previous posts ([here](https://matthewturk.github.io/post/refactoring-yt-frontends-part1/), [here](https://matthewturk.github.io/post/refactoring-yt-frontends-part2/) and [here](https://matthewturk.github.io/post/refactoring-yt-frontends-part3/)) -- basically, that yt is pretty monolithic, and that's not really the best way to evolve with the ecosystem.I'll be using [findimports](https://github.com/mgedmin/findimports) for exploring how monolithic it *is* versus how monolithic it *appears to be*. Basically, I want to see: is it one repo with lots of interconnections, or is it essentially a couple repos?(Also at the end I'll give a pitch for why this is relevant, so if you're even remotely intrigued, at *least* scroll down to the section labeled "OK, the boring stuff is over.")
###Code
import pickle
import findimports
yt_imports = pickle.load(open("yt/yt/import_output.pkl", "rb"))
###Output
_____no_output_____
###Markdown
The structure of this is a set of keys that are strings of the filename/modulename, with values that are the objects in question. The `findimports` objects have an attribute `imports` which is what we're going to look at first, but they also have an `imported_names` attribute which is the list of names that get imported, in the form of `ImportInfo` objects. These have `name`, `filename`, `level` and `lineno` to show where and what they are.
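For instance, a quick way to peek at one of those records looks roughly like this (a sketch; the attribute names are just the ones listed above):

```python
# Inspect a single ImportInfo entry from a module's imported_names list
info = yt_imports['yt.visualization.plot_window'].imported_names[0]
print(info.name, info.filename, info.level, info.lineno)
```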
###Code
yt_imports['yt.visualization.plot_window'].imports
###Output
_____no_output_____
###Markdown
There happen to be a fair number of things in here that are external to yt! So, let's set up a filtering process for those. We'll filter the `name` that is imported.One thing I should note is that yt does many, but not *all*, of its imports in absolute form, which maybe isn't ... so great ... but which lets us do this more easily.
###Code
filter_imports = lambda a: [_ for _ in sorted(a, key=lambda b: b.name) if _.name.startswith("yt.")]
###Output
_____no_output_____
###Markdown
We'll apply it to the `imported_names` attribute, since we're interested in characterizing how things are related and interwoven.
###Code
import_lists = {_ : filter_imports(yt_imports[_].imported_names) for _ in yt_imports}
import_lists['yt.visualization.plot_window']
###Output
_____no_output_____
###Markdown
This still isn't *incredibly* useful, since we kind of want to look at imports at a higher level. For instance, I want to know what `yt.visualization.plot_window` imports from in the broad cross-section of the code base. So let's write something to collapse the package *under* yt that we import from. We used `startswith(".yt")` earlier, so it'll be safe to do a split here.
###Code
collapse_subpackage = lambda a: set(_.name.split(".")[1] for _ in a)
collapse_subpackage(import_lists['yt.visualization.plot_window'])
###Output
_____no_output_____
###Markdown
Interesting. We import from frontends?! I guess I kind of missed that earlier. Let's see if we can figure out the connections between different modules to see if anything stands out.
###Code
from collections import defaultdict
subpackage_imports = defaultdict(set)
for fn, v in import_lists.items():
if not fn.startswith("yt."): continue # Get rid of our tests, etc.
subpackage = fn.split(".")[1]
subpackage_imports[subpackage].update(collapse_subpackage(v))
###Output
_____no_output_____
###Markdown
Let's break this down before we go any further -- for starters, not *everything* is an absolute import. So that makes things a bit tricky! But we can deal with that later. Let's first see what all we have:
###Code
subpackage_imports.keys()
###Output
_____no_output_____
###Markdown
A few things stand out right away. Some of these we can immediately get rid of and not consider. For instance, `pmods` is an MPI-aware importer, `mods` is a pretty old-school approach to yt importing, and we will just ignore `testing`, `analysis_modules`, `extensions` and `extern` since they're (in order) testing utilities, gone, a fake hook system, and "vendored" libraries that we should probably get rid of and just make requirements anyway. `units` is now part of [`unyt`](https://github.com/yt-project/unyt) and some of the others are by-design grabbing lots of stuff.
###Code
blacklist = ["testing", "analysis_modules", "extensions", "extern", "pmods",
"mods", "__init__", "api", "arraytypes", "config", "convenience",
"exthook", "funcs", "tests", "units", "startup_tasks"]
list(subpackage_imports.pop(_, None) for _ in blacklist);
###Output
_____no_output_____
###Markdown
We just want to see the interrelationships, so we'll look for N-by-N collisions, where N is just the values that show up as keys.
###Code
collide_with = set(subpackage_imports.keys())
collisions = {_: collide_with.intersection(subpackage_imports[_]) for _ in subpackage_imports}
###Output
_____no_output_____
###Markdown
And here we have it, the moment of truth! What do we see ...
###Code
print({_:len(__) for _, __ in collisions.items()})
###Output
{'data_objects': 6, 'fields': 5, 'frontends': 6, 'geometry': 5, 'utilities': 6, 'visualization': 6}
###Markdown
Huh. Well, that was not the dramatic, amazing reveal I'd hoped for.
###Code
subpackage_imports = defaultdict(set)
for fn, v in import_lists.items():
if not fn.startswith("yt.") or "tests" in fn: continue # Get rid of our tests, etc.
subpackage = fn.split(".")[1]
subpackage_imports[subpackage].update(collapse_subpackage(v))
list(subpackage_imports.pop(_, None) for _ in blacklist);
collisions = {_: collide_with.intersection(subpackage_imports[_]) for _ in subpackage_imports}
print({_:len(__) for _, __ in collisions.items()})
###Output
{'data_objects': 6, 'fields': 4, 'frontends': 6, 'geometry': 4, 'utilities': 5, 'visualization': 6}
###Markdown
It gets a little bit better, but honestly, not much. Our most isolated packages -- by this (likely flawed) method -- are the `geometry` and `fields` packages. So let's break down a bit more what we're seeing, by not filtering quite as much, and by setting up a reverse mapping. And let's do it for both the collapsed name and the non-collapsed name.
###Code
subpackage_imports = defaultdict(set)
imported_by = defaultdict(list)
for fn, v in import_lists.items():
if not fn.startswith("yt.") or "tests" in fn: continue # Get rid of our tests, etc.
subpackage = fn.split(".")[1]
subpackage_imports[subpackage].update(set(_.name for _ in v))
[imported_by[_.name].append(fn) for _ in v]
[imported_by[_].append(fn) for _ in collapse_subpackage(v)]
###Output
_____no_output_____
###Markdown
And now we might be getting somewhere. We can now look up, for any given import, which files have imported it. Let's see what imports the progress bar:
###Code
imported_by["yt.funcs.get_pbar"]
###Output
_____no_output_____
###Markdown
Nice. Now, let's look at visualization.
###Code
imported_by["yt.visualization.api.SlicePlot"], imported_by["yt.visualization.plot_window.SlicePlot"]
###Output
_____no_output_____
###Markdown
We're starting to see that things might not be quite as clear-cut as we thought. Let's look at geometry. And I'm going to set up a filtering method so that we can avoid lots of redundant pieces of info -- for instance, I don't care about things importing themselves.
###Code
filter_self_imports = lambda a: [_ for _ in imported_by[a] if not _.startswith("yt.{}".format(a))]
###Output
_____no_output_____
###Markdown
We'll only look at the first ten, because it's really long...
###Code
filter_self_imports("geometry")[:10]
###Output
_____no_output_____
###Markdown
Here things are *much* clearer. We import geometry *once* in the visualization subsystem, under `plot_modifications`. I looked it up, and here's what it is:

```python
if not issubclass(type(index), UnstructuredIndex):
    raise RuntimeError("Mesh line annotations only work for "
                       "unstructured or semi-structured mesh data.")
```

This is probably an anti-pattern, but even if we wanted to retain this specific behavior, we could remedy it without too much trouble by having an attribute check, or some kind of string-key check (a rough sketch of such a check appears just before the next code cell). As for all the `frontends` imports, those are all because they subclass `Index`! And many of the places importing it in `data_objects` are just due to a lack of organization in the geometry/utilities/indexing code.

**Historical Sidenote**: As I was doing this, I read the header for `grid_patch.py` and it reads: `"Python-based grid handler, not to be confused with the SWIG-handler"`. I am reasonably certain that it has been *years* since I thought about the proto-SWIG system I'd written to wrap around the Enzo C++ code. Kinda supports the point I intend to make when I end this post, I think.

Back to the task at hand, let's look at how some of the other top-level packages are related. I'm now specifically interested in the `visualization` and `data_objects` ones.
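Here is that sketch. It is purely illustrative and not yt's actual code; it only reuses the `index` variable and the error message from the snippet quoted above:

```python
# Name-based check that avoids importing UnstructuredIndex at module level
index_bases = {cls.__name__ for cls in type(index).__mro__}
if "UnstructuredIndex" not in index_bases:
    raise RuntimeError("Mesh line annotations only work for "
                       "unstructured or semi-structured mesh data.")
```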
###Code
filter_self_imports("visualization")
###Output
_____no_output_____ |
ResultsAnalytics/PlotDownloadFrequency.ipynb | ###Markdown
Get all images flagged as downloaded from the database
###Code
import sqlite3
import pandas as pd
connection = sqlite3.connect('/Users/Peterg/code/IgScrapper/Database_img/IGdata.db')
cursor = connection.cursor()
df = pd.read_sql_query("SELECT downloaded FROM photo", connection)
###Output
_____no_output_____
###Markdown
Count how often each download date occurs in the dataframe
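As an aside, the same daily tally could be produced directly with pandas (a minimal sketch, assuming each `downloaded` string holds a date followed by a space and a time); the cell below does it explicitly with `collections.Counter`:

```python
# Equivalent pandas approach: split off the date part and count occurrences per day
date_counts = df['downloaded'].str.split(" ").str[0].value_counts().sort_index()
print(date_counts.head())
```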
###Code
from collections import Counter
i = 0
datestr = []
for date in df['downloaded']:
datestr.append(date.split(" ")[0])
datestr.sort()
x = Counter(datestr).keys()
y = Counter(datestr).values()
###Output
_____no_output_____
###Markdown
Plot result
###Code
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(22,6))
ax = fig.add_axes([0,0,1,1])
ax.bar(x,y, edgecolor='#E6E6E6', color='#EE6666')
ax.set_ylabel('downloaded images', fontsize=25)
ax.set_xlabel('date', fontsize=25)
plt.xticks(rotation=70)
ax.set_facecolor('#E6E6E6')
plt.grid(color='w', linestyle='solid')
ax.tick_params(colors='gray', direction='out')
for tick in ax.get_xticklabels():
tick.set_color('gray')
tick.set_fontsize(14)
for tick in ax.get_yticklabels():
tick.set_color('gray')
tick.set_fontsize(14)
plt.show()
###Output
_____no_output_____ |
001-Jupyter/001-Tutorials/006-Bokeh/tutorial/A3 - High-Level Charting with Holoviews.ipynb | ###Markdown
<img src="assets/bokeh-transparent.png" style="width:50px" > Bokeh Tutorial A3. High-Level Charting with Holoviews Bokeh is designed to make it possible to construct rich, deeply interactive browser-based visualizations from Python source code. It has a syntax more compact and natural than older libraries like Matplotlib, particularly when using the Charts API, but it still requires a good bit of code to do relatively common data-science tasks like complex multi-figure layouts, animations, and widgets for parameter space exploration.To make it feasible to generate complex interactive visualizations "on the fly" in Jupyter notebooks while exploring data, we have created the new [HoloViews](http://holoviews.org) library built on top of Bokeh. HoloViews allows you to annotate your data with a small amount of metadata that makes it instantly visualizable, usually without writing any plotting code. HoloViews makes it practical to explore datasets and visualize them from every angle interactively, wrapping up Bokeh code for common tasks into a set of configurable and composable components. HoloViews installs separately from Bokeh, e.g. using `conda install holoviews`, and also works with matplotlib.
###Code
import holoviews as hv
import numpy as np
hv.notebook_extension('bokeh')
###Output
_____no_output_____
###Markdown
A simple function First, let us define a mathematical function to explore, using the Numpy array library:
###Code
def sine(x, phase=0, freq=100):
return np.sin((freq * x + phase))
###Output
_____no_output_____
###Markdown
We will examine the effect of varying phase and frequency:
###Code
phases = np.linspace(0,2*np.pi,7) # Explored phases
freqs = np.linspace(50,150,5) # Explored frequencies
###Output
_____no_output_____
###Markdown
Over a specific spatial area, sampled on a grid:
###Code
dist = np.linspace(-0.5,0.5,81) # Linear spatial sampling
x,y = np.meshgrid(dist, dist)
grid = (x**2+y**2) # 2D spatial sampling
###Output
_____no_output_____
###Markdown
Succinct data visualization With HoloViews, we can immediately view our simple function as an image in a Bokeh plot in the Jupyter notebook, without any coding:
###Code
hv.__version__
hv.Image(sine(grid, freq=20))
###Output
_____no_output_____
###Markdown
But we can just as easily use ``+`` to combine ``Image`` and ``Curve`` objects, visualizing both the 2D array (with associated histogram) and a 1D cross-section:
###Code
grating = hv.Image(sine(grid, freq=20), label="Sine Grating")
((grating * hv.HLine(y=0)).hist() + grating.sample(y=0).relabel("Sine Wave"))
###Output
_____no_output_____
###Markdown
Here you can see that a HoloViews object isn't really a plot (though it generates a Bokeh Plot when requested for display by the Jupyter notebook); it is just a wrapper around your data, and the data can be processed directly (as when taking the cross-section using `sample()` here). In fact, your raw data is *always* still available,allowing you to go back and forth between visualizations and numerical analysis easily and flexibly:
###Code
grating[0,0]
type(grating.data)
###Output
_____no_output_____
###Markdown
Here the underlying data is the original Numpy array, but Python dictionaries as well as Pandas and other data formats can also be supplied. The underlying objects and data can always be retrieved, even in complex multi-figure objects, if you look at the `repr` of the object to find the indexes needed to address that data:
###Code
layout = ((grating * hv.HLine(y=0)) + grating.sample(y=0))
print(repr(layout))
layout.Overlay.Sine_Grating.Image.Sine_Grating[0,0]
###Output
_____no_output_____
###Markdown
Here `layout` is the name of the full complex object, and `Overlay.Sine_Grating` selects the first item (an HLine overlaid on a grating), and `Image.Sine_Grating` selects the grating within the overlay. The grating itself is then indexed by 'x' and 'y' as shown in the repr, and the return value from such indexing is 'z' (nearly zero in this case, which you can also see by examining the curve plot above). Interactive explorationHoloViews is designed to explore complicated datasets, where there can often be much more data than can be shown on screen at once. If there are dimensions to your data that have not been laid out as adjacent plots or overlaid plots, then HoloViews will automatically generate sliders covering the remaining range of the data. For instance, if we add an additional dimension `Y` indicating the location of the cross-section, we'll get a slider for `Y`:
###Code
positions = np.linspace(-0.3, 0.3, 17)
hv.HoloMap({y: (grating * hv.HLine(y)) for y in positions}, kdims='Y') + \
hv.HoloMap({y: (grating.sample(y=y)) for y in positions}, kdims='Y')
###Output
_____no_output_____
###Markdown
By default the data will be embedded fully into the output, allowing export to static HTML/JavaScript for distribution; for parameter spaces that are too large, or for dynamic data, a callback that generates the data on the fly can be used instead via a [DynamicMap](http://holoviews.org/Tutorials/Dynamic_Map.html).

Setting display options

HoloViews objects like `grating` above directly contain only your data and associated metadata, not any plotting details. Metadata like titles and units can be set on the objects either when created or subsequently, as shown using `label` and `relabel` above. Other properties of the visualization that are just about the view of it, not the actual data, are not stored on the HoloViews objects, but in a separate data structure. To make it easy to control such options in the notebook, a special syntax is provided:
###Code
%%opts Image (cmap='RdYlGn') Curve (color='b' line_dash="dotted") HLine (line_color='white' line_width=9)
((grating * hv.HLine(y=0)).hist() + grating.sample(y=0))
###Output
_____no_output_____
###Markdown
One advantage of this special "magic" syntax is that the names and values tab complete in the Jupyter notebook (try it!). Here the regular parentheses '(' indicate options that are backend-specific; these are generally passed directly to Bokeh. Options processed by HoloViews itself are specified using square brackets '['. The `%%opts` command above applies only to the object in that cell, while the `%opts` form below will apply throughout the rest of the document.
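For example, a cell-level customization that mixes the two kinds of options might look like the following (a small sketch reusing the `grating` element from above; `width` and `height` are plot options handled by HoloViews, while `cmap` is a Bokeh style option):

```python
%%opts Image [width=350 height=300] (cmap='viridis')
grating
```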
###Code
%opts Points (size=3)
### EXERCISE: Try changing various parameters in the above plot, using tab completion to discover the names and values
###Output
_____no_output_____
###Markdown
Of course, you can express any option setting using standard Python syntax instead. However, for technical reasons that syntax is much less succinct, and more importantly it mixes up display options with the actual data objects:
###Code
grating(options={'Image':{'style':{'cmap':'RdGy'}}})
###Output
_____no_output_____
###Markdown
Using the `%opts`/`%%opts` syntax above is both less verbose and helps keep the styling information separated from the much more important data and metadata, so that you can ignore styling when working directly with your data. Normalizing your data HoloViews is designed to make it easy to understand your data. For instance, consider two circular waves with very different amplitudes:
###Code
comparison = hv.Image(sine(grid)) + hv.Image(sine(grid, phase=np.pi)*0.02)
###Output
_____no_output_____
###Markdown
HoloViews ensures that these differences are visible by default, by normalizing across any elements of the same type that are displayed together, and even across the frames of an animation:
###Code
%%opts Image (cmap='gray')
comparison = hv.Image(sine(grid)) + hv.Image(sine(grid, phase=np.pi)*0.02)
comparison
###Output
_____no_output_____
###Markdown
This default visualization makes it clear that the two patterns differ greatly in amplitude. However, it is difficult to see the structure of the low-amplitude wave in **B**. If you wish to focus on the spatial structure rather than the amplitude, you can instruct HoloViews to normalize data in different axes separately:
###Code
%%opts Image {+axiswise} (cmap='gray')
comparison
###Output
_____no_output_____
###Markdown
Similarly, you could supply ``+framewise`` to tell it to normalize data per frame of an animation, not across all frames as it does by default. As with any other customization, you can always specify which specific element you want the customization to apply to, even in a complex multiple-subfigure layout.

External data sources

To show how HoloViews differs from the standard Bokeh API, let's revisit the `iris` example from tutorial 1.
###Code
from bokeh.sampledata.iris import flowers
flowers.head()
###Output
_____no_output_____
###Markdown
Plotting this data using the usual Bokeh Charts API can quickly give a visualization, e.g. by typing:

```
show(Scatter(flowers, x='petal_length', y='petal_width'))
```

However, the results are limited to a few standard configurations, and you have to use the full Bokeh API for more complex visualizations. With HoloViews, it's just as simple as in the Charts API to make a simple plot:
###Code
hv.Points(flowers, kdims=['petal_length','petal_width'], vdims=[])
###Output
_____no_output_____
###Markdown
Or a somewhat more complicated plot:
###Code
%opts NdOverlay [legend_position='top_left']
irises = hv.Dataset(flowers).to(hv.Points, kdims=['petal_length','petal_width'], groupby=['species'])
irises.overlay()
###Output
_____no_output_____
###Markdown
But now you can very easily generate widgets, animations, and layouts. E.g. if you don't use `.overlay()` to tell HoloViews what to do with the species, it will become a widget automatically, without having to redefine anything that made up this plot:
###Code
irises.overlay() + irises
###Output
_____no_output_____
###Markdown
Here the previous plot has been added on the left to demonstrate that laying out data is just as simple as always in HoloViews. You can instead tell HoloViews to lay out the species data side by side, just as easily:
###Code
irises.layout()
###Output
_____no_output_____ |
lesson_11_NLP/NLP_lecture.ipynb | ###Markdown
**Chapter 16 – Natural Language Processing with RNNs and Attention** _This notebook contains all the sample code in chapter 16._ Run in Google Colab Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
!pip install -q -U tensorflow-addons
IS_COLAB = True
except Exception:
IS_COLAB = False
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
if not tf.config.list_physical_devices('GPU'):
print("No GPU was detected. LSTMs and CNNs can be very slow without a GPU.")
if IS_COLAB:
print("Go to Runtime > Change runtime and select a GPU hardware accelerator.")
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
tf.random.set_seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "nlp"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
No GPU was detected. LSTMs and CNNs can be very slow without a GPU.
###Markdown
NLP is a classical goal of computer science, going back to Alan Turing. Humans can sometimes be fooled by chatbots with hard-coded rules: if the human says "How are you?", answer "I am fine". But truly mastering a language is a very difficult task. Another important reason to study NLP is that most human knowledge is presented in the form of text, and working with it requires mastering a language.

A common method for NLP is the RNN, because a text/dialog is a series of letters, words or sentences connected to each other. After the RNN model is trained on a corpus (a large collection of texts), it can be used to generate new texts, or at least predict the next character/word/sentence.

First we will cover the *stateless* RNN, which treats each batch of text separately, like we treated a random draw of 20 days of stock market data. This model does not connect a sample with the rest of the series/text. Next we will study the *stateful* RNN, which runs through the whole dataset, preserving the hidden states of the RNN as it moves from one batch to the next. We will use this to predict what Shakespeare would write next. Then we will use RNNs for sentiment analysis of movie reviews (the rater's emotional attitude toward the movie). Then we will use an Encoder–Decoder architecture capable of performing neural machine translation (NMT). Then we look at a very successful attention-only architecture called the Transformer. Finally, we will take a look at some of the most important advances in NLP in recent years, including incredibly powerful language models such as GPT-2, GPT-3, and BERT, all based on Transformers.

Char-RNN

A Char-RNN can be used to generate novel text, one character at a time. We will train it on the full corpus of Shakespeare. Example of Shakespeare-like text generated by a model:

Alas, I think he shall be come approached and the day When little srain would be attain'd into being never fed, And who is but a chain and subjects of his death, I should not sleep.

Splitting a sequence into batches of shuffled windows

For example, let's split the sequence 0 to 14 into windows of length 5, each shifted by 2 (e.g., `[0, 1, 2, 3, 4]`, `[2, 3, 4, 5, 6]`, etc.), then shuffle them, and split them into inputs (the first 4 steps) and targets (the last 4 steps) (e.g., `[2, 3, 4, 5, 6]` would be split into `[[2, 3, 4, 5], [3, 4, 5, 6]]`), then create batches of 3 such input/target pairs:
###Code
np.random.seed(42)
tf.random.set_seed(42)
n_steps = 5
# get a list from 0 to 14
dataset = tf.data.Dataset.from_tensor_slices(tf.range(15))
dataset = dataset.window(n_steps, shift=2, drop_remainder=True)
# each window has 5 steps and starts 2 steps after the previous one (windows start at 0, 2, 4, ...)
dataset = dataset.flat_map(lambda window: window.batch(n_steps))
# flat_map above flattens the nested dataset of windows into a flat dataset of window tensors
# shuffle(10) keeps a buffer of 10 windows in memory, shuffles them, and refills the buffer as items are drawn;
# the buffer size limits memory requirements for very large datasets. Here it is just for illustration.
dataset = dataset.shuffle(10).map(lambda window: (window[:-1], window[1:]))
# get batches with 3 observations in each. # Then the whole dataset will be covered in 2 batches.
# prefetch(1) prepares the next batch while the current one is being processed. It improves speed
# and has no effect on the results: different prefetch values give identical results.
dataset = dataset.batch(3).prefetch(1)
# inspect the resulting batches
for index, (X_batch, Y_batch) in enumerate(dataset):
print("_" * 20, "Batch", index, "\nX_batch")
print(X_batch.numpy())
print("=" * 5, "\nY_batch")
print(Y_batch.numpy())
###Output
____________________ Batch 0
X_batch
[[6 7 8 9]
[2 3 4 5]
[4 5 6 7]]
=====
Y_batch
[[ 7 8 9 10]
[ 3 4 5 6]
[ 5 6 7 8]]
____________________ Batch 1
X_batch
[[ 0 1 2 3]
[ 8 9 10 11]
[10 11 12 13]]
=====
Y_batch
[[ 1 2 3 4]
[ 9 10 11 12]
[11 12 13 14]]
###Markdown
Loading the Data and Preparing the Dataset
###Code
#Load Shakespeare
shakespeare_url = "https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt"
filepath = keras.utils.get_file("shakespeare.txt", shakespeare_url)
with open(filepath) as f:
shakespeare_text = f.read()
print(shakespeare_text[:148])
# Hint: this is the start of The Tragedy of Coriolanus
###Output
First Citizen:
Before we proceed any further, hear me speak.
All:
Speak, speak.
First Citizen:
You are all resolved rather to die than to famish?
###Markdown
Let's look at the set of all characters used in the text:
###Code
char_list = "".join(sorted(set(shakespeare_text.lower())))
print(char_list)
print(f"The list has {len(char_list)} elements")
###Output
!$&',-.3:;?abcdefghijklmnopqrstuvwxyz
The list has 39 elements
###Markdown
Next, we encode every character as an integer for prediction. We will use Keras’s Tokenizer class that will do the conversion for us. The tokenizer will find all the characters used in the text and map each of them to a different character ID, from 1 to the number of distinct characters (it starts from 1, rather than from 0).
###Code
# load tokenizer
tokenizer = keras.preprocessing.text.Tokenizer(char_level=True)
# Fit tokenizer for our corpus:
tokenizer.fit_on_texts(shakespeare_text)
tokenizer.texts_to_sequences(["First"])
# codes for the letters
tokenizer.sequences_to_texts([[20, 6, 9, 8, 3]])
# going back we get "first". By default the tokenizer converts all characters to lowercase; if you want to keep
# uppercase characters separate, use: keras.preprocessing.text.Tokenizer(char_level=True, lower=False)
max_id = len(tokenizer.word_index) # number of distinct characters
dataset_size = tokenizer.document_count # total number of characters
print(max_id, dataset_size)
# 39 distinct characters, and about 1 million characters
# Encode characters as integers to save space. In Python a small integer takes 28 bytes, a one-character string 58 bytes:
print(sys.getsizeof((10)))
print(sys.getsizeof(('f')))
# Arrays start from 0, so we subtract 1: token 1 becomes token 0.
[encoded] = np.array(tokenizer.texts_to_sequences([shakespeare_text])) - 1
# 'f' becomes 19, rather than the 20 it was before
print([encoded][0:5])
# get training data, first 90% of the corpus
train_size = dataset_size * 90 // 100
dataset = tf.data.Dataset.from_tensor_slices(encoded[:train_size])
###Output
28
58
[array([19, 5, 8, ..., 20, 26, 10])]
###Markdown
- We need to split the dataset into a training, validation, and test set. We cannot shuffle all the characters -- the text would become meaningless -- so we need to select chunks of text. The sets need to be kept separate, so the same sentences/paragraphs don't appear in different sets. - Splitting a time series is a difficult task: splitting 2012-2015 vs 2016-2018 can introduce bias, as can splitting Hamlet from Romeo and Juliet, since the works may be structurally different. The trade-off: how to preserve enough structure for training/testing without introducing bias from the training and test sets having different structure. A high-quality split may take a lot of trial and error. - We will simply take the first 90% of the text for training and the rest for testing/validation. Our training set now has about 1 million characters. Training over it in one go would amount to unrolling the network over a million time steps, which would take forever and may result in overfitting. Instead we will use the window() method to convert this long sequence of characters into many smaller windows of text. Every training instance will be a short substring of the whole text, and the RNN will be unrolled only over the length of these substrings. This is called truncated backpropagation through time.
###Code
# Take a window of 101 characters: X will be 100 characters and 101st character will be predicted by RNN.
n_steps = 100
window_length = n_steps + 1 # target = input shifted 1 character ahead
# Get windows from the text, shifting by 1, so we predict each of the ~1M characters based on the 100 preceding
# characters
dataset = dataset.repeat().window(window_length, shift=1, drop_remainder=True)
# Flatten the vectors
dataset = dataset.flat_map(lambda window: window.batch(window_length))
np.random.seed(42)
tf.random.set_seed(42)
batch_size = 32
# Get batches of 32 windows, shuffling with a buffer of 10,000 windows at a time
dataset = dataset.shuffle(10000).batch(batch_size)
# break data into X and Y
dataset = dataset.map(lambda windows: (windows[:, :-1], windows[:, 1:]))
###Output
_____no_output_____
###Markdown
 Categorical input features should be encoded as one-hot vectors or as embeddings.Here, we will encode each character using a one-hot vector because there are few distinct characters (only 39):
###Code
dataset = dataset.map(
# one-hot vector
lambda X_batch, Y_batch: (tf.one_hot(X_batch, depth=max_id), Y_batch))
# add prefetching for speed
dataset = dataset.prefetch(1)
# show the first batch
for X_batch, Y_batch in dataset.take(1):
print(X_batch[0], Y_batch[0], X_batch[0][0])
print("Shapes")
print(X_batch.shape, Y_batch.shape)
# categorical input features should generally be encoded, usually as one-hot vectors or as embeddings.
#There are only 39, so we use one-hot vector.
###Output
tf.Tensor(
[[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
...
[0. 1. 0. ... 0. 0. 0.]
[1. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]], shape=(100, 39), dtype=float32) tf.Tensor(
[ 5 7 0 7 4 15 7 0 2 6 1 0 21 1 11 11 15 17 0 14 4 8 24 0
14 1 17 31 31 10 10 19 5 8 7 2 0 18 5 2 5 35 1 9 23 10 4 15
17 0 7 5 8 28 0 16 1 11 11 17 0 16 1 11 11 26 10 10 14 1 9 1
9 5 13 7 23 10 27 2 6 3 13 20 6 0 4 11 11 0 4 2 0 3 9 18
1 0 18 4], shape=(100,), dtype=int64) tf.Tensor(
[0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.], shape=(39,), dtype=float32)
Shapes
(32, 100, 39) (32, 100)
###Markdown
Creating and Training the Model We predict the next character based on the previous 100 characters, using an RNN with 2 GRU layers of 128 units each and 20% dropout on both the inputs and the hidden states. The output layer is a time-distributed Dense layer with 39 neurons. We apply the softmax activation function to pick the character with the highest probability. On my fast computer one epoch takes a bit over one hour, so training for 10 epochs takes about 10 hours.
###Code
model = keras.models.Sequential([
keras.layers.GRU(128, return_sequences=True, input_shape=[None, max_id],
dropout=0.2, recurrent_dropout=0.2),
keras.layers.GRU(128, return_sequences=True,
dropout=0.2, recurrent_dropout=0.2),
keras.layers.TimeDistributed(keras.layers.Dense(max_id,
activation="softmax"))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
history = model.fit(dataset, steps_per_epoch=train_size // batch_size,
epochs=10)
###Output
Train for 31370 steps
Epoch 1/10
31370/31370 [==============================] - 7150s 228ms/step - loss: 1.4671
Epoch 2/10
31370/31370 [==============================] - 7094s 226ms/step - loss: 1.3614
Epoch 3/10
31370/31370 [==============================] - 7063s 225ms/step - loss: 1.3404
Epoch 4/10
31370/31370 [==============================] - 7039s 224ms/step - loss: 1.3311
Epoch 5/10
31370/31370 [==============================] - 7056s 225ms/step - loss: 1.3256
Epoch 6/10
31370/31370 [==============================] - 7049s 225ms/step - loss: 1.3209
Epoch 7/10
31370/31370 [==============================] - 7068s 225ms/step - loss: 1.3166
Epoch 8/10
31370/31370 [==============================] - 7030s 224ms/step - loss: 1.3138
Epoch 9/10
31370/31370 [==============================] - 7061s 225ms/step - loss: 1.3120
Epoch 10/10
31370/31370 [==============================] - 7177s 229ms/step - loss: 1.3105
###Markdown
Using the Model to Generate Text
###Code
def preprocess(texts):
X = np.array(tokenizer.texts_to_sequences(texts)) - 1
return tf.one_hot(X, max_id)
# convert sentence to vector
X_new = preprocess(["How are yo"])
# predict next character
Y_pred = model.predict_classes(X_new)
# show prediction
tokenizer.sequences_to_texts(Y_pred + 1)[0][-1] # 1st sentence, last char
###Output
_____no_output_____
###Markdown
We could generate new text using the Char-RNN model by feeding it some text, making the model predict the most likely next letter, adding it at the end of the text, then giving the extended text to the model to guess the next letter, and so on. In practice this often leads to the same words being repeated over and over again. So, instead, we pick the next character randomly, with a probability equal to the estimated probability, using TensorFlow's tf.random.categorical() function. This will generate more diverse and interesting text. The categorical() function samples random class indices, given the class log probabilities (logits). For more control over the diversity of the generated text, we can divide the logits by a number called the temperature: a temperature close to 0 will favor the high-probability characters, while a very high temperature will give all characters an equal probability.
###Code
# Example: sampling class indices from log probabilities (logits)
tf.random.set_seed(42)
tf.random.categorical([[np.log(0.5), np.log(0.4), np.log(0.1)]], num_samples=40).numpy()
def next_char(text, temperature=1):
X_new = preprocess([text])
y_proba = model.predict(X_new)[0, -1:, :]
rescaled_logits = tf.math.log(y_proba) / temperature
char_id = tf.random.categorical(rescaled_logits, num_samples=1) + 1
return tokenizer.sequences_to_texts(char_id.numpy())[0]
tf.random.set_seed(42)
next_char("How are yo", temperature=1)
def complete_text(text, n_chars=50, temperature=1):
for _ in range(n_chars):
text += next_char(text, temperature)
return text
tf.random.set_seed(42)
print(complete_text("t", temperature=0.2))
print(complete_text("t", temperature=0.5))
print(complete_text("Truth", temperature=0.7))
tf.random.set_seed(42)
#You can see how temperature increases randomness
print(next_char("How are yo", temperature=1))
print(next_char("How are yo", temperature=3))
print(next_char("How are yo", temperature=5))
print(next_char("How are yo", temperature=10))
print(next_char("How are yo", temperature=20))
###Output
u
u
-
x
m
###Markdown
If you want to see state-of-the-art GPT-3 text generation, check this out: https://arr.am/2020/07/14/elon-musk-by-dr-seuss-gpt-3/ "Once there was a man who really was a Musk. He liked to build robots and rocket ships and such. He said, 'I'm building a car that's electric and cool. I'll bet it outsells those Gasoline-burning clunkers soon!'" Apparently our Shakespeare model works best at a temperature close to 1. But the results are not great. What can we do? 1. To generate more convincing text, you could try using more GRU layers and more neurons per layer, train for longer, and add some regularization. 2. The major drawback of the model is that it is incapable of learning patterns longer than 100 characters. We could make the window larger, but that would also make training harder, and even LSTM and GRU cells cannot handle very long sequences. 3. Alternatively, you could use a stateful RNN. Stateful RNN Until now, we have used only stateless RNNs: - at each training iteration the model starts with a hidden state full of zeros, - then it updates this state at each time step, - and after the last time step, it throws it away, as it is not needed anymore. A stateful RNN can preserve its final state after processing one training batch and use it as the initial state for the next training batch. This way the model can learn long-term patterns despite only backpropagating through short sequences. A stateful RNN needs each input sequence in a batch to start exactly where the corresponding sequence in the previous batch left off. So all we need to do to build a stateful RNN is to use sequential and non-overlapping input sequences (rather than the shuffled and overlapping sequences we used to train stateless RNNs). We will use shift=n_steps (instead of shift=1) when calling the window() method, and we will not shuffle. Batching is much harder with a stateful RNN: batch(32) would put 32 consecutive windows in the same batch, and the following batch would not continue each of these windows where it left off. The first batch would contain windows 1 to 32 and the second batch would contain windows 33 to 64, so if you consider, say, the first window of each batch (i.e., windows 1 and 33), you can see that they are not consecutive. The simplest solution to this problem is to just use "batches" containing a single window. Alternatively, we could chop Shakespeare's text into 32 texts of equal length, create one dataset of consecutive input sequences for each of them, and finally use tf.data.Dataset.zip(datasets).map(lambda *windows: tf.stack(windows)) to create proper consecutive batches, where the nth input sequence in a batch starts off exactly where the nth input sequence ended in the previous batch (see the notebook for the full code).
###Code
tf.random.set_seed(42)
# get TF dataset from training data
dataset = tf.data.Dataset.from_tensor_slices(encoded[:train_size])
# break it by window with shift=n_steps (100)
dataset = dataset.window(window_length, shift=n_steps, drop_remainder=True)
# flatten the data
dataset = dataset.flat_map(lambda window: window.batch(window_length))
# repeat() makes the dataset loop indefinitely; each "batch" contains a single window
dataset = dataset.repeat().batch(1)
# split each window into inputs and targets (targets are shifted one character ahead)
dataset = dataset.map(lambda windows: (windows[:, :-1], windows[:, 1:]))
# one-hot encode the inputs
dataset = dataset.map(
lambda X_batch, Y_batch: (tf.one_hot(X_batch, depth=max_id), Y_batch))
dataset = dataset.prefetch(1)
# now build proper batches: split the text into 32 parts, one consecutive window stream per part
batch_size = 32
# split training data into 32 pieces
encoded_parts = np.array_split(encoded[:train_size], batch_size)
datasets = []
# for each of 32 batches:
for encoded_part in encoded_parts:
# convert data to tensors
dataset = tf.data.Dataset.from_tensor_slices(encoded_part)
# get windows
dataset = dataset.window(window_length, shift=n_steps, drop_remainder=True)
# flatten the arrays
dataset = dataset.flat_map(lambda window: window.batch(window_length))
# append data
datasets.append(dataset)
# put the 32 as tuples back together
dataset = tf.data.Dataset.zip(tuple(datasets)).map(lambda *windows: tf.stack(windows))
# repeat indefinitely and split each window into inputs and targets
dataset = dataset.repeat().map(lambda windows: (windows[:, :-1], windows[:, 1:]))
# one-hot encode the inputs
dataset = dataset.map(
lambda X_batch, Y_batch: (tf.one_hot(X_batch, depth=max_id), Y_batch))
dataset = dataset.prefetch(1)
model = keras.models.Sequential([
# Pay attention to stateful = True
keras.layers.GRU(128, return_sequences=True, stateful=True,
dropout=0.2, recurrent_dropout=0.2,
# RNN needs to know batch size to preserve the states in a correct way
batch_input_shape=[batch_size, None, max_id]),
keras.layers.GRU(128, return_sequences=True, stateful=True,
dropout=0.2, recurrent_dropout=0.2),
keras.layers.TimeDistributed(keras.layers.Dense(max_id,
activation="softmax"))
])
#At the end of each epoch, we need to reset the states before we go back to the beginning
#of the text.
class ResetStatesCallback(keras.callbacks.Callback):
def on_epoch_begin(self, epoch, logs):
self.model.reset_states()
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
steps_per_epoch = train_size // batch_size // n_steps
history = model.fit(dataset, steps_per_epoch=steps_per_epoch, epochs=50,
callbacks=[ResetStatesCallback()])
###Output
Epoch 1/50
313/313 [==============================] - 37s 117ms/step - loss: 2.6141
Epoch 2/50
313/313 [==============================] - 34s 109ms/step - loss: 2.2166
Epoch 3/50
313/313 [==============================] - 34s 108ms/step - loss: 2.4933
Epoch 4/50
313/313 [==============================] - 33s 106ms/step - loss: 2.4647
Epoch 5/50
313/313 [==============================] - 40s 129ms/step - loss: 2.1566
Epoch 6/50
313/313 [==============================] - 42s 135ms/step - loss: 2.1538
Epoch 7/50
313/313 [==============================] - 41s 131ms/step - loss: 2.0766
Epoch 8/50
313/313 [==============================] - 47s 150ms/step - loss: 1.9987
Epoch 9/50
313/313 [==============================] - 40s 127ms/step - loss: 1.9450
Epoch 10/50
313/313 [==============================] - 43s 137ms/step - loss: 1.9253
Epoch 11/50
313/313 [==============================] - 45s 142ms/step - loss: 1.8333
Epoch 12/50
313/313 [==============================] - 51s 163ms/step - loss: 1.7943
Epoch 13/50
313/313 [==============================] - 81s 260ms/step - loss: 1.7648
Epoch 14/50
313/313 [==============================] - 77s 246ms/step - loss: 1.7442
Epoch 15/50
313/313 [==============================] - 77s 247ms/step - loss: 1.7258
Epoch 16/50
313/313 [==============================] - 78s 250ms/step - loss: 1.7109
Epoch 17/50
313/313 [==============================] - 78s 250ms/step - loss: 1.7007
Epoch 18/50
313/313 [==============================] - 81s 259ms/step - loss: 1.6864
Epoch 19/50
313/313 [==============================] - 86s 273ms/step - loss: 1.6775
Epoch 20/50
313/313 [==============================] - 82s 261ms/step - loss: 1.6689
Epoch 21/50
313/313 [==============================] - 78s 249ms/step - loss: 1.6606
Epoch 22/50
313/313 [==============================] - 82s 263ms/step - loss: 1.6529
Epoch 23/50
313/313 [==============================] - 82s 261ms/step - loss: 1.6480
Epoch 24/50
313/313 [==============================] - 76s 244ms/step - loss: 1.6419
Epoch 25/50
313/313 [==============================] - 76s 242ms/step - loss: 1.6351
Epoch 26/50
313/313 [==============================] - 75s 241ms/step - loss: 1.6301
Epoch 27/50
313/313 [==============================] - 75s 241ms/step - loss: 1.6250
Epoch 28/50
313/313 [==============================] - 76s 243ms/step - loss: 1.6212
Epoch 29/50
313/313 [==============================] - 75s 240ms/step - loss: 1.6162
Epoch 30/50
313/313 [==============================] - 76s 243ms/step - loss: 1.6116
Epoch 31/50
313/313 [==============================] - 78s 248ms/step - loss: 1.6076
Epoch 32/50
313/313 [==============================] - 63s 201ms/step - loss: 1.6042
Epoch 33/50
313/313 [==============================] - 37s 119ms/step - loss: 1.6007
Epoch 34/50
313/313 [==============================] - 35s 110ms/step - loss: 1.5974
Epoch 35/50
313/313 [==============================] - 33s 104ms/step - loss: 1.5947
Epoch 36/50
313/313 [==============================] - 32s 103ms/step - loss: 1.5912
Epoch 37/50
313/313 [==============================] - 33s 106ms/step - loss: 1.5884
Epoch 38/50
313/313 [==============================] - 33s 107ms/step - loss: 1.5867
Epoch 39/50
313/313 [==============================] - 32s 104ms/step - loss: 1.5839
Epoch 40/50
313/313 [==============================] - 33s 104ms/step - loss: 1.5819
Epoch 41/50
313/313 [==============================] - 33s 105ms/step - loss: 1.5785
Epoch 42/50
313/313 [==============================] - 33s 105ms/step - loss: 1.5773
Epoch 43/50
313/313 [==============================] - 32s 102ms/step - loss: 1.5740
Epoch 44/50
313/313 [==============================] - 32s 101ms/step - loss: 1.5718
Epoch 45/50
313/313 [==============================] - 32s 102ms/step - loss: 1.5704
Epoch 46/50
313/313 [==============================] - 34s 107ms/step - loss: 1.5690
Epoch 47/50
313/313 [==============================] - 39s 126ms/step - loss: 1.5676
Epoch 48/50
313/313 [==============================] - 35s 111ms/step - loss: 1.5654
Epoch 49/50
313/313 [==============================] - 34s 107ms/step - loss: 1.5638
Epoch 50/50
313/313 [==============================] - 32s 103ms/step - loss: 1.5621
###Markdown
To use the model with different batch sizes, we need to create a stateless copy. We can get rid of dropout since it is only used during training:
###Code
# Create a stateless copy of this model, so it can be used with any batch size.
stateless_model = keras.models.Sequential([
keras.layers.GRU(128, return_sequences=True, input_shape=[None, max_id]),
keras.layers.GRU(128, return_sequences=True),
keras.layers.TimeDistributed(keras.layers.Dense(max_id,
activation="softmax"))
])
###Output
_____no_output_____
###Markdown
To set the weights, we first need to build the model (so the weights get created):
###Code
# build a model without estimation
stateless_model.build(tf.TensorShape([None, None, max_id]))
# copy the weights from the stateful model to stateless model
stateless_model.set_weights(model.get_weights())
model = stateless_model
tf.random.set_seed(42)
# run model
print(complete_text("t"))
###Output
ting;
do desire.
escalus:
no mouth, and fly very w
###Markdown
Sentiment Analysis The IMDb reviews dataset is the “hello world” of natural language processing, like MNIST for image recognition: - 50,000 movie reviews in English (25,000 for training, 25,000 for testing) - a binary target for whether the review is negative (0) or positive (1).
###Code
tf.random.set_seed(42)
###Output
_____no_output_____
###Markdown
You can load the IMDB dataset easily:
###Code
(X_train, y_train), (X_test, y_test) = keras.datasets.imdb.load_data()
X_train[0][:10]
###Output
_____no_output_____
###Markdown
X_train is a preprocessed list of lists of integers, where each integer represents a word. All punctuation was removed, the words were converted to lowercase, split by spaces, and finally indexed by frequency (so low integers correspond to frequent words). The integers 0, 1, and 2 are special: they represent the padding token, the start-of-sequence (SOS) token, and unknown words, respectively. If you want to visualize a review, you can decode it like this:
###Code
word_index = keras.datasets.imdb.get_word_index()
# get word dictionary
id_to_word = {id_ + 3: word for word, id_ in word_index.items()}
for id_, token in enumerate(("<pad>", "<sos>", "<unk>")):
#add these special symbols to the word dictionary
id_to_word[id_] = token
" ".join([id_to_word[id_] for id_ in X_train[0][:10]])
#!pip install tensorflow-datasets
import tensorflow_datasets as tfds
datasets, info = tfds.load("imdb_reviews", as_supervised=True, with_info=True)
###Output
_____no_output_____
###Markdown
In a real project, you will have to preprocess the text yourself. You can do that using the same Tokenizer class we used earlier, but this time setting char_level=False (see the quick sketch at the end of this cell). It will filter out a lot of characters, including most punctuation, line breaks, and tabs (you can change that). The tokenizer identifies spaces as word boundaries. This works for English, but not for every language: Chinese does not use spaces between words, and Vietnamese uses spaces even within words. Even in English, some terms like “San Francisco” or “#ILoveDeepLearning” are hard to tokenize properly. Google’s SentencePiece project provides an unsupervised learning technique to tokenize and detokenize text at the subword level in a language-independent way, treating spaces like any other character. With this approach, even if your model encounters a word it has never seen before, it can still guess what it means. For example, it may never have seen the word “smartest” during training, but perhaps it learned the word “smart” and also learned that the suffix “est” means “the most,” so it can infer the meaning of “smartest.” Another option was proposed in an earlier paper by Rico Sennrich et al. that explored other ways of creating subword encodings (e.g., using byte pair encoding). Last but not least, the TensorFlow team released the TF.Text library in June 2019, which implements various tokenization strategies, including WordPiece (a variant of byte pair encoding). If you want to deploy your model to a mobile device or a web browser, and you don’t want to have to write a different preprocessing function every time, then you will want to handle preprocessing using only TensorFlow operations, so it can be included in the model itself. Let’s see how. First, let’s load the original IMDb reviews, as text (byte strings), using TensorFlow Datasets (introduced in Chapter 13):
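(A quick aside before loading the data: the cell below is a minimal, hypothetical sketch of word-level tokenization with the Tokenizer class mentioned above. The sample sentences are made up and this snippet is not part of the original notebook.)

```python
from tensorflow import keras

# Made-up sample sentences, just to show the mechanics of word-level tokenization.
sample_texts = ["It was great, great fun!", "It was terribly boring."]
word_tokenizer = keras.preprocessing.text.Tokenizer(char_level=False)
word_tokenizer.fit_on_texts(sample_texts)
print(word_tokenizer.word_index)                          # word -> ID, most frequent words first
print(word_tokenizer.texts_to_sequences(["It was fun"]))  # sentence -> list of word IDs
print(word_tokenizer.sequences_to_texts([[1, 2, 3]]))     # IDs -> words
```

Now, back to loading the reviews: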
###Code
# subsets
datasets.keys()
# get training and testing
train_size = info.splits["train"].num_examples
test_size = info.splits["test"].num_examples
train_size, test_size
for item in datasets["train"].take(3):
print(item)
# take one batch of two reviews and show each review with its label
for X_batch, y_batch in datasets["train"].batch(2).take(1):
for review, label in zip(X_batch.numpy(), y_batch.numpy()):
print("Review:", review.decode("utf-8")[:200], "...")
print("Label:", label, "= Positive" if label else "= Negative")
print()
# We need to clean this
def preprocess(X_batch, y_batch):
# keep only the first 300 characters of each review
X_batch = tf.strings.substr(X_batch, 0, 300)
X_batch = tf.strings.regex_replace(X_batch, rb"<br\s*/?>", b" ")
X_batch = tf.strings.regex_replace(X_batch, b"[^a-zA-Z']", b" ")
X_batch = tf.strings.split(X_batch)
return X_batch.to_tensor(default_value=b"<pad>"), y_batch
###Output
_____no_output_____
###Markdown
* We keep only the first 300 characters of each review to speed up training; that is usually enough to figure out the sentiment. * We use regular expressions to replace `<br />` tags with spaces. * We replace any characters other than letters and quotes with spaces. Finally, the preprocess() function splits the reviews on spaces, which returns a ragged tensor, converts it to a dense tensor, and pads all reviews with the padding token `<pad>` so that they all have the same length (see the small sketch below).
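(A minimal illustration of the ragged-to-padded step just described; the two example reviews are made up and this snippet is not part of the original notebook.)

```python
import tensorflow as tf

# Splitting on spaces yields a RaggedTensor (rows of unequal length);
# to_tensor() pads the shorter rows with the chosen default value.
ragged = tf.strings.split([b"great movie", b"utterly boring waste of time"])
print(ragged)
print(ragged.to_tensor(default_value=b"<pad>"))
```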
###Code
preprocess(X_batch, y_batch)
###Output
_____no_output_____
###Markdown
We will construct the vocabulary by going through the whole training set once, applying our preprocess() function, and using a Counter to count the number of occurrences of each word:
###Code
from collections import Counter
# get frequency counter
vocabulary = Counter()
for X_batch, y_batch in datasets["train"].batch(32).map(preprocess):
for review in X_batch:
vocabulary.update(list(review.numpy()))
print(f"Most common {vocabulary.most_common()[:3]}, the least common {vocabulary.most_common()[-3:]}")
# number of distinct words in the training set
len(vocabulary)
###Output
_____no_output_____
###Markdown
We don't need all words, let's keep 10,000 most common:
###Code
vocab_size = 10000
truncated_vocabulary = [
word for word, count in vocabulary.most_common()[:vocab_size]]
word_to_id = {word: index for index, word in enumerate(truncated_vocabulary)}
for word in b"This movie was faaaaaantastic".split():
print(word_to_id.get(word) or vocab_size)
###Output
22
12
11
10000
###Markdown
Now we need to add a preprocessing step to replace each word with its ID (i.e., its index in the vocabulary). We will create a lookup table for this, using 1,000 out-of-vocabulary (oov) buckets. The oov buckets account for differences in vocabulary between the training and test data: we expect at most about 1,000 words in the test data that are not in the training vocabulary, and each of them gets mapped to one of the oov buckets. If the actual number of new words is higher, different words will end up sharing the same bucket, which will decrease the model's accuracy.
###Code
# define the vocabulary: the list of all possible categories (words)
words = tf.constant(truncated_vocabulary)
# tensor corresponding to indexes of word IDs
word_ids = tf.range(len(truncated_vocabulary), dtype=tf.int64)
#Create an initializer for the lookup table, passing it the list of categories and their corresponding indices.
# This is basically a TensorFlow dictionary
vocab_init = tf.lookup.KeyValueTensorInitializer(words, word_ids)
num_oov_buckets = 1000
table = tf.lookup.StaticVocabularyTable(vocab_init, num_oov_buckets)
###Output
_____no_output_____
###Markdown
We can then use this table to look up the IDs of a few words:
###Code
table.lookup(tf.constant([b"This movie was faaaaaantastic".split()]))
###Output
_____no_output_____
###Markdown
The words “this,” “movie,” and “was” were found in the table, so their IDs are lower than 10,000, while the word “faaaaaantastic” was not found, so it was mapped to one of the oov buckets, with an ID greater than 10,000.
###Code
# Function that returns the IDs for the words in X_batch
def encode_words(X_batch, y_batch):
return table.lookup(X_batch), y_batch
#batch the reviews and then convert them to short sequences of words using the preprocess() function
train_set = datasets["train"].repeat().batch(32).map(preprocess)
# Encode words with numeric codes
train_set = train_set.map(encode_words).prefetch(1)
# Look at the data shape
for X_batch, y_batch in train_set.take(1):
print(X_batch)
print(y_batch)
###Output
tf.Tensor(
[[ 22 11 28 ... 0 0 0]
[ 6 21 70 ... 0 0 0]
[4099 6881 1 ... 0 0 0]
...
[ 22 12 118 ... 331 1047 0]
[1757 4101 451 ... 0 0 0]
[3365 4392 6 ... 0 0 0]], shape=(32, 60), dtype=int64)
tf.Tensor([0 0 0 1 1 1 0 0 0 0 0 1 1 0 1 0 1 1 1 0 1 1 1 1 1 0 0 0 1 0 0 0], shape=(32,), dtype=int64)
###Markdown
Embeddings The first layer is an Embedding layer, which will convert word IDs into embeddings -- dense vectors that capture relationships between words (see Chapter 13 for more details). The embedding matrix needs to have one row per word ID (vocab_size + num_oov_buckets) and one column per embedding dimension (this example uses 128 dimensions, but this is a hyperparameter you could tune). The inputs of the model will be 2D tensors of shape [batch size, time steps]; the output of the Embedding layer will be a 3D tensor of shape [batch size, time steps, embedding size]. The Embedding layer can be understood as a lookup table that maps integer indices (which stand for specific words) to dense vectors (their embeddings). The dimensionality (or width) of the embedding is a hyperparameter that plays a role similar to the number of neurons in a Dense layer. The idea of using vectors to represent words dates back to the 1960s: the goal is to reduce the dimensionality of language while preserving its ability to describe key concepts. For example, a neural network can be trained to predict the words near any given word, and in doing so it learns relationships between words: synonyms end up with very close embeddings, and semantically related words such as France, Spain, and Italy end up clustered together. It’s not just about proximity, though: word embeddings are also organized along meaningful axes in the embedding space. Here is a famous example: $\text{King} - \text{Man} + \text{Woman} \simeq \text{Queen}$ (a “female king”) $\text{Madrid} - \text{Spain} + \text{France} \simeq \text{Paris}$ (the French capital) The word embeddings encode the concepts of gender and of capital cities. Next we add two GRU layers, with the second one returning only the output of the last time step. The output layer is a single neuron using the sigmoid activation function to output the estimated probability that the review expresses a positive sentiment regarding the movie.
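Before building that model, here is a tiny illustrative sketch (not part of the original notebook; the IDs and sizes are made up) of the Embedding layer acting as a trainable lookup table:

```python
import tensorflow as tf
from tensorflow import keras

# 5 possible word IDs, each mapped to a trainable 3-dimensional vector.
embed = keras.layers.Embedding(input_dim=5, output_dim=3)
ids = tf.constant([[0, 2, 2, 4]])   # shape [batch size, time steps]
vectors = embed(ids)                # shape [batch size, time steps, embedding size]
print(vectors.shape)                # (1, 4, 3)
# The word-analogy arithmetic above is plain vector arithmetic on such (pretrained) vectors,
# e.g. vec("king") - vec("man") + vec("woman") lands close to vec("queen").
```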
###Code
# Run model
embed_size = 128
model = keras.models.Sequential([
keras.layers.Embedding(vocab_size + num_oov_buckets, embed_size,
mask_zero=True, # not shown in the book
input_shape=[None]),
keras.layers.GRU(128, return_sequences=True),
keras.layers.GRU(128),
keras.layers.Dense(1, activation="sigmoid")
])
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
history = model.fit(train_set, steps_per_epoch=train_size // 32, epochs=5)
###Output
Epoch 1/5
781/781 [==============================] - 59s 75ms/step - loss: 0.5305 - accuracy: 0.7282
Epoch 2/5
781/781 [==============================] - 59s 75ms/step - loss: 0.3459 - accuracy: 0.8554
Epoch 3/5
781/781 [==============================] - 62s 80ms/step - loss: 0.1913 - accuracy: 0.9319
Epoch 4/5
781/781 [==============================] - 58s 75ms/step - loss: 0.1341 - accuracy: 0.9535
Epoch 5/5
781/781 [==============================] - 59s 76ms/step - loss: 0.1011 - accuracy: 0.9624
###Markdown
The model needs to ignore padding tokens (reviews shorter than 300 characters are padded); processing these meaningless tokens wastes time and can hurt accuracy. If we add *mask_zero=True*, they will be ignored by all downstream layers. Under the hood, the Embedding layer creates a mask tensor equal to K.not_equal(inputs, 0) (where K = keras.backend): it is a Boolean tensor with the same shape as the inputs, equal to False wherever the word ID is 0 and True otherwise. This mask tensor is then automatically propagated by the model to all subsequent layers. Both GRU layers receive this mask automatically, but since the second GRU layer does not return sequences (it only returns the output of the last time step), the mask is not transmitted to the Dense layer. Each layer may handle the mask differently, but in general layers simply ignore masked time steps (i.e., time steps for which the mask is False). For example, when a recurrent layer encounters a masked time step, it simply copies the output from the previous time step. If the mask propagates all the way to the output (in models that output sequences, which is not the case in this example), then it is applied to the losses as well, so the masked time steps do not contribute to the loss (their loss is 0). All layers that receive the mask must support masking (or else an exception will be raised). This includes all recurrent layers, as well as the TimeDistributed layer and a few other layers. Any layer that supports masking must have a supports_masking attribute equal to True. Using masking layers and automatic mask propagation works best for simple Sequential models. It will not always work for more complex models, such as when you need to mix Conv1D layers with recurrent layers. After training for a few epochs, this model becomes quite good at judging whether a review is positive or not. It’s impressive that the model is able to learn useful word embeddings based on just 25,000 movie reviews. Imagine how good the embeddings would be if we had billions of reviews to train on! Unfortunately we don’t, but perhaps we can reuse word embeddings trained on some other large text corpus (e.g., Wikipedia articles), even if it is not composed of movie reviews? After all, the word “amazing” generally has the same meaning whether you use it to talk about movies or anything else. Moreover, embeddings may be useful for sentiment analysis even if they were trained on another task: since words like “awesome” and “amazing” have a similar meaning, they will likely cluster in the embedding space even for other tasks (e.g., predicting the next word in a sentence). If all positive words and all negative words form clusters, then this will be helpful for sentiment analysis. So instead of using so many parameters to learn word embeddings, let’s see if we can’t just reuse pretrained embeddings. Reusing Pretrained Embeddings We reuse a sentence encoder: it takes strings as input and encodes each one as a single vector (in this case, a 50-dimensional vector). It parses the string (splitting words on spaces) and embeds each word using an embedding matrix that was pretrained on a huge corpus: the Google News 7B corpus (seven billion words long!). Then it computes the mean of all the word embeddings, and the result is the sentence embedding. We can then add two simple Dense layers to create a good sentiment analysis model. By default, a hub.KerasLayer is not trainable, but you can set trainable=True when creating it to change that, so that you can fine-tune it for your task.
###Code
tf.random.set_seed(42)
TFHUB_CACHE_DIR = os.path.join(os.curdir, "my_tfhub_cache")
os.environ["TFHUB_CACHE_DIR"] = TFHUB_CACHE_DIR
!pip install tensorflow_hub
import tensorflow_hub as hub
model = keras.Sequential([
hub.KerasLayer("https://tfhub.dev/google/tf2-preview/nnlm-en-dim50/1",
dtype=tf.string, input_shape=[], output_shape=[50]),
keras.layers.Dense(128, activation="relu"),
keras.layers.Dense(1, activation="sigmoid")
])
model.compile(loss="binary_crossentropy", optimizer="adam",
metrics=["accuracy"])
for dirpath, dirnames, filenames in os.walk(TFHUB_CACHE_DIR):
for filename in filenames:
print(os.path.join(dirpath, filename))
import tensorflow_datasets as tfds
datasets, info = tfds.load("imdb_reviews", as_supervised=True, with_info=True)
train_size = info.splits["train"].num_examples
batch_size = 32
train_set = datasets["train"].repeat().batch(batch_size).prefetch(1)
history = model.fit(train_set, steps_per_epoch=train_size // batch_size, epochs=5)
###Output
Epoch 1/5
781/781 [==============================] - 34s 43ms/step - loss: 0.5460 - accuracy: 0.7267
Epoch 2/5
781/781 [==============================] - 35s 44ms/step - loss: 0.5129 - accuracy: 0.7495
Epoch 3/5
781/781 [==============================] - 35s 45ms/step - loss: 0.5082 - accuracy: 0.7530
Epoch 4/5
781/781 [==============================] - 35s 45ms/step - loss: 0.5047 - accuracy: 0.7533
Epoch 5/5
781/781 [==============================] - 35s 45ms/step - loss: 0.5015 - accuracy: 0.7560
###Markdown
Automatic Translation Let's try to translate English sentences to French. The English sentences are fed to the encoder, and the decoder outputs the French translations. The French translations are also used as inputs to the decoder, but shifted back by one step: during training the decoder not only translates the sentence, it also learns to predict the next fitting French word given the previous ones. The first token is the start-of-sequence (SOS) token and the last is the end-of-sequence (EOS) token. The English sentences are reversed before they are fed to the encoder: "I drink milk" -> "milk drink I." This way the beginning of the sentence is fed last to the encoder, which is useful because that is generally the first thing the decoder needs to translate. Each word is represented by its ID (e.g., 288 for the word "milk"). Next, an embedding layer returns the word embedding. These word embeddings are what is actually fed to the encoder and the decoder. At each step, the decoder outputs a score for each word in the output vocabulary (i.e., French), and then the softmax layer turns these scores into probabilities. For example, at the first step the word "Je" may have a probability of 20%, "Tu" may have a probability of 1%, and so on. The word with the highest probability is output. A few details: * The same sentence uses different numbers of words in different languages, so we group sentences into buckets of similar lengths (e.g., a bucket for the 1- to 6-word sentences, another for the 7- to 12-word sentences, and so on), using padding for the shorter sequences to ensure all sentences in a bucket have the same length: for example, "I drink milk" becomes "`<pad> <pad> <pad> milk drink I`". * We want to ignore any output past the EOS token, so these tokens should not contribute to the loss (they must be masked out). For example, if the model outputs "Je bois du lait oui," the loss for the last word should be ignored. * When the output vocabulary is large, outputting a probability for each and every possible word would be terribly slow: 50,000 words means a 50,000-dimensional output vector of probabilities. One way to speed this up is to compute the loss from the logit of the correct word plus a random sample of incorrect words only (sampled softmax; see the sketch below).
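A hedged sketch of that last idea, using TensorFlow's tf.nn.sampled_softmax_loss with made-up sizes; this is an illustration only, not part of the original notebook or of the model built below:

```python
import tensorflow as tf

vocab_size, hidden_size, batch_size = 50_000, 512, 4            # made-up sizes
output_weights = tf.Variable(tf.random.normal([vocab_size, hidden_size]))
output_biases = tf.Variable(tf.zeros([vocab_size]))
decoder_outputs = tf.random.normal([batch_size, hidden_size])   # one decoder step per example
target_word_ids = tf.constant([[3], [42], [7], [19_999]], dtype=tf.int64)

# The loss is estimated from the logit of the correct word plus a random sample of
# 64 "negative" words, instead of computing all 50,000 output logits.
loss = tf.nn.sampled_softmax_loss(
    weights=output_weights, biases=output_biases,
    labels=target_word_ids, inputs=decoder_outputs,
    num_sampled=64, num_classes=vocab_size)
print(loss.shape)   # one loss value per example in the batch
```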
###Code
tf.random.set_seed(42)
vocab_size = 100
embed_size = 10
import tensorflow_addons as tfa
# flexible model for any lengths
encoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)
decoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)
# lengths of the target sequences, passed to the decoder
sequence_lengths = keras.layers.Input(shape=[], dtype=np.int32)
# First layer is word embeddings
embeddings = keras.layers.Embedding(vocab_size, embed_size)
# embed the encoder and decoder inputs
encoder_embeddings = embeddings(encoder_inputs)
decoder_embeddings = embeddings(decoder_inputs)
# hidden LSTM layers
encoder = keras.layers.LSTM(512, return_state=True)
# run the encoder and keep its final short-term (h) and long-term (c) states
encoder_outputs, state_h, state_c = encoder(encoder_embeddings)
encoder_state = [state_h, state_c]
sampler = tfa.seq2seq.sampler.TrainingSampler()
# Next decode from embeddings to French words translations
decoder_cell = keras.layers.LSTMCell(512)
# output layer over the French vocabulary: at each step it scores every possible next French word
output_layer = keras.layers.Dense(vocab_size)
decoder = tfa.seq2seq.basic_decoder.BasicDecoder(decoder_cell, sampler,
output_layer=output_layer)
# run the decoder RNN and save its outputs
final_outputs, final_state, final_sequence_lengths = decoder(
decoder_embeddings, initial_state=encoder_state,
sequence_length=sequence_lengths)
# turn the decoder's output scores into probabilities with softmax
Y_proba = tf.nn.softmax(final_outputs.rnn_output)
# construct a model
model = keras.models.Model(
inputs=[encoder_inputs, decoder_inputs, sequence_lengths],
outputs=[Y_proba])
###Output
_____no_output_____
###Markdown
Set return_state=True when creating the LSTM layer to get its final hidden state and pass it to the decoder (an LSTM returns two states: the short-term state and the long-term state). The TrainingSampler is one of several samplers available in TensorFlow Addons: their role is to tell the decoder at each step what it should pretend the previous output was. During training this is the previous target token, so the decoder learns to predict the next word given the correct previous words.
###Code
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
# Test the model on random word IDs: 1,000 input sentences of 10 words and target sentences of 15 words
X = np.random.randint(100, size=10*1000).reshape(1000, 10)
Y = np.random.randint(100, size=15*1000).reshape(1000, 15)
X_decoder = np.c_[np.zeros((1000, 1)), Y[:, :-1]]
seq_lengths = np.full([1000], 15)
history = model.fit([X, X_decoder, seq_lengths], Y, epochs=2)
###Output
Epoch 1/2
32/32 [==============================] - 4s 130ms/step - loss: 4.6052
Epoch 2/2
32/32 [==============================] - 3s 90ms/step - loss: 4.6027
###Markdown
Bidirectional Recurrent Layers A regular RNN only looks at past and present inputs when generating its output. This works well for time-series forecasting, but not so well for translation, where it helps to look ahead at the words that come later in the sentence. For example, consider “the Queen of the United Kingdom,” “the queen of hearts,” and “the queen bee”: to properly encode the word “queen,” you need to look ahead. To implement this, run two recurrent layers on the same inputs, one reading the words from left to right and the other reading them from right to left, then concatenate their outputs at each time step. To implement a bidirectional recurrent layer in Keras, wrap a recurrent layer in a keras.layers.Bidirectional layer.
###Code
model = keras.models.Sequential([
keras.layers.GRU(10, return_sequences=True, input_shape=[None, 10]),
keras.layers.Bidirectional(keras.layers.GRU(10, return_sequences=True))
])
model.summary()
###Output
Model: "sequential_6"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
gru_16 (GRU) (None, None, 10) 660
_________________________________________________________________
bidirectional (Bidirectional (None, None, 20) 1320
=================================================================
Total params: 1,980
Trainable params: 1,980
Non-trainable params: 0
_________________________________________________________________
###Markdown
Exercise solutions 1. to 7. See Appendix A. 8._Exercise:_ Embedded Reber grammars _were used by Hochreiter and Schmidhuber in [their paper](https://homl.info/93) about LSTMs. They are artificial grammars that produce strings such as "BPBTSXXVPSEPE." Check out Jenny Orr's [nice introduction](https://homl.info/108) to this topic. Choose a particular embedded Reber grammar (such as the one represented on Jenny Orr's page), then train an RNN to identify whether a string respects that grammar or not. You will first need to write a function capable of generating a training batch containing about 50% strings that respect the grammar, and 50% that don't._ First we need to build a function that generates strings based on a grammar. The grammar will be represented as a list of possible transitions for each state. A transition specifies the string to output (or a grammar to generate it) and the next state.
###Code
default_reber_grammar = [
[("B", 1)], # (state 0) =B=>(state 1)
[("T", 2), ("P", 3)], # (state 1) =T=>(state 2) or =P=>(state 3)
[("S", 2), ("X", 4)], # (state 2) =S=>(state 2) or =X=>(state 4)
[("T", 3), ("V", 5)], # and so on...
[("X", 3), ("S", 6)],
[("P", 4), ("V", 6)],
[("E", None)]] # (state 6) =E=>(terminal state)
embedded_reber_grammar = [
[("B", 1)],
[("T", 2), ("P", 3)],
[(default_reber_grammar, 4)],
[(default_reber_grammar, 5)],
[("T", 6)],
[("P", 6)],
[("E", None)]]
def generate_string(grammar):
state = 0
output = []
while state is not None:
index = np.random.randint(len(grammar[state]))
production, state = grammar[state][index]
if isinstance(production, list):
production = generate_string(grammar=production)
output.append(production)
return "".join(output)
###Output
_____no_output_____
###Markdown
Let's generate a few strings based on the default Reber grammar:
###Code
np.random.seed(42)
for _ in range(25):
print(generate_string(default_reber_grammar), end=" ")
###Output
_____no_output_____
###Markdown
Looks good. Now let's generate a few strings based on the embedded Reber grammar:
###Code
np.random.seed(42)
for _ in range(25):
print(generate_string(embedded_reber_grammar), end=" ")
###Output
_____no_output_____
###Markdown
Okay, now we need a function to generate strings that do not respect the grammar. We could generate a random string, but the task would be a bit too easy, so instead we will generate a string that respects the grammar, and we will corrupt it by changing just one character:
###Code
POSSIBLE_CHARS = "BEPSTVX"
def generate_corrupted_string(grammar, chars=POSSIBLE_CHARS):
good_string = generate_string(grammar)
index = np.random.randint(len(good_string))
good_char = good_string[index]
bad_char = np.random.choice(sorted(set(chars) - set(good_char)))
return good_string[:index] + bad_char + good_string[index + 1:]
###Output
_____no_output_____
###Markdown
Let's look at a few corrupted strings:
###Code
np.random.seed(42)
for _ in range(25):
print(generate_corrupted_string(embedded_reber_grammar), end=" ")
###Output
_____no_output_____
###Markdown
We cannot feed strings directly to an RNN, so we need to encode them somehow. One option would be to one-hot encode each character. Another option is to use embeddings. Let's go for the second option (but since there are just a handful of characters, one-hot encoding would probably be a good option as well). For embeddings to work, we need to convert each string into a sequence of character IDs. Let's write a function for that, using each character's index in the string of possible characters "BEPSTVX":
###Code
def string_to_ids(s, chars=POSSIBLE_CHARS):
return [POSSIBLE_CHARS.index(c) for c in s]
string_to_ids("BTTTXXVVETE")
###Output
_____no_output_____
###Markdown
We can now generate the dataset, with 50% good strings, and 50% bad strings:
###Code
def generate_dataset(size):
good_strings = [string_to_ids(generate_string(embedded_reber_grammar))
for _ in range(size // 2)]
bad_strings = [string_to_ids(generate_corrupted_string(embedded_reber_grammar))
for _ in range(size - size // 2)]
all_strings = good_strings + bad_strings
X = tf.ragged.constant(all_strings, ragged_rank=1)
y = np.array([[1.] for _ in range(len(good_strings))] +
[[0.] for _ in range(len(bad_strings))])
return X, y
np.random.seed(42)
X_train, y_train = generate_dataset(10000)
X_valid, y_valid = generate_dataset(2000)
###Output
_____no_output_____
###Markdown
Let's take a look at the first training sequence:
###Code
X_train[0]
###Output
_____no_output_____
###Markdown
What classes does it belong to?
###Code
y_train[0]
###Output
_____no_output_____
###Markdown
Perfect! We are ready to create the RNN to identify good strings. We build a simple sequence binary classifier:
###Code
np.random.seed(42)
tf.random.set_seed(42)
embedding_size = 5
model = keras.models.Sequential([
keras.layers.InputLayer(input_shape=[None], dtype=tf.int32, ragged=True),
keras.layers.Embedding(input_dim=len(POSSIBLE_CHARS), output_dim=embedding_size),
keras.layers.GRU(30),
keras.layers.Dense(1, activation="sigmoid")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum = 0.95, nesterov=True)
model.compile(loss="binary_crossentropy", optimizer=optimizer, metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=20, validation_data=(X_valid, y_valid))
###Output
_____no_output_____
###Markdown
Now let's test our RNN on two tricky strings: the first one is bad while the second one is good. They only differ by the second to last character. If the RNN gets this right, it shows that it managed to notice the pattern that the second letter should always be equal to the second to last letter. That requires a fairly long short-term memory (which is the reason why we used a GRU cell).
###Code
test_strings = ["BPBTSSSSSSSXXTTVPXVPXTTTTTVVETE",
"BPBTSSSSSSSXXTTVPXVPXTTTTTVVEPE"]
X_test = tf.ragged.constant([string_to_ids(s) for s in test_strings], ragged_rank=1)
y_proba = model.predict(X_test)
print()
print("Estimated probability that these are Reber strings:")
for index, string in enumerate(test_strings):
print("{}: {:.2f}%".format(string, 100 * y_proba[index][0]))
###Output
_____no_output_____
###Markdown
Ta-da! It worked fine. The RNN found the correct answers with very high confidence. :) 9._Exercise: Train an Encoder–Decoder model that can convert a date string from one format to another (e.g., from "April 22, 2019" to "2019-04-22")._ Let's start by creating the dataset. We will use random days between 1000-01-01 and 9999-12-31:
###Code
from datetime import date
# cannot use strftime()'s %B format since it depends on the locale
MONTHS = ["January", "February", "March", "April", "May", "June",
"July", "August", "September", "October", "November", "December"]
def random_dates(n_dates):
min_date = date(1000, 1, 1).toordinal()
max_date = date(9999, 12, 31).toordinal()
ordinals = np.random.randint(max_date - min_date, size=n_dates) + min_date
dates = [date.fromordinal(ordinal) for ordinal in ordinals]
x = [MONTHS[dt.month - 1] + " " + dt.strftime("%d, %Y") for dt in dates]
y = [dt.isoformat() for dt in dates]
return x, y
###Output
_____no_output_____
###Markdown
Here are a few random dates, displayed in both the input format and the target format:
###Code
np.random.seed(42)
n_dates = 3
x_example, y_example = random_dates(n_dates)
print("{:25s}{:25s}".format("Input", "Target"))
print("-" * 50)
for idx in range(n_dates):
print("{:25s}{:25s}".format(x_example[idx], y_example[idx]))
###Output
_____no_output_____
###Markdown
Let's get the list of all possible characters in the inputs:
###Code
INPUT_CHARS = "".join(sorted(set("".join(MONTHS)))) + "01234567890, "
INPUT_CHARS
###Output
_____no_output_____
###Markdown
And here's the list of possible characters in the outputs:
###Code
OUTPUT_CHARS = "0123456789-"
###Output
_____no_output_____
###Markdown
Let's write a function to convert a string to a list of character IDs, as we did in the previous exercise:
###Code
def date_str_to_ids(date_str, chars=INPUT_CHARS):
return [chars.index(c) for c in date_str]
date_str_to_ids(x_example[0], INPUT_CHARS)
date_str_to_ids(y_example[0], OUTPUT_CHARS)
def prepare_date_strs(date_strs, chars=INPUT_CHARS):
X_ids = [date_str_to_ids(dt, chars) for dt in date_strs]
X = tf.ragged.constant(X_ids, ragged_rank=1)
return (X + 1).to_tensor() # using 0 as the padding token ID
def create_dataset(n_dates):
x, y = random_dates(n_dates)
return prepare_date_strs(x, INPUT_CHARS), prepare_date_strs(y, OUTPUT_CHARS)
np.random.seed(42)
X_train, Y_train = create_dataset(10000)
X_valid, Y_valid = create_dataset(2000)
X_test, Y_test = create_dataset(2000)
Y_train[0]
###Output
_____no_output_____
###Markdown
First version: a very basic seq2seq model Let's first try the simplest possible model: we feed in the input sequence, which first goes through the encoder (an embedding layer followed by a single LSTM layer), which outputs a vector; then it goes through a decoder (a single LSTM layer, followed by a dense output layer), which outputs a sequence of vectors, each representing the estimated probabilities for all possible output characters. Since the decoder expects a sequence as input, we repeat the vector (which is output by the encoder) as many times as the longest possible output sequence.
###Code
embedding_size = 32
max_output_length = Y_train.shape[1]
np.random.seed(42)
tf.random.set_seed(42)
encoder = keras.models.Sequential([
keras.layers.Embedding(input_dim=len(INPUT_CHARS) + 1,
output_dim=embedding_size,
input_shape=[None]),
keras.layers.LSTM(128)
])
decoder = keras.models.Sequential([
keras.layers.LSTM(128, return_sequences=True),
keras.layers.Dense(len(OUTPUT_CHARS) + 1, activation="softmax")
])
model = keras.models.Sequential([
encoder,
keras.layers.RepeatVector(max_output_length),
decoder
])
optimizer = keras.optimizers.Nadam()
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
history = model.fit(X_train, Y_train, epochs=20,
validation_data=(X_valid, Y_valid))
###Output
_____no_output_____
###Markdown
Looks great, we reach 100% validation accuracy! Let's use the model to make some predictions. We will need to be able to convert a sequence of character IDs to a readable string:
###Code
def ids_to_date_strs(ids, chars=OUTPUT_CHARS):
return ["".join([("?" + chars)[index] for index in sequence])
for sequence in ids]
###Output
_____no_output_____
###Markdown
Now we can use the model to convert some dates
###Code
X_new = prepare_date_strs(["September 17, 2009", "July 14, 1789"])
ids = model.predict_classes(X_new)
for date_str in ids_to_date_strs(ids):
print(date_str)
###Output
_____no_output_____
###Markdown
Perfect! :) However, since the model was only trained on input strings of length 18 (which is the length of the longest date), it does not perform well if we try to use it to make predictions on shorter sequences:
###Code
X_new = prepare_date_strs(["May 02, 2020", "July 14, 1789"])
ids = model.predict_classes(X_new)
for date_str in ids_to_date_strs(ids):
print(date_str)
###Output
_____no_output_____
###Markdown
Oops! We need to ensure that we always pass sequences of the same length as during training, using padding if necessary. Let's write a little helper function for that:
###Code
max_input_length = X_train.shape[1]
def prepare_date_strs_padded(date_strs):
X = prepare_date_strs(date_strs)
if X.shape[1] < max_input_length:
X = tf.pad(X, [[0, 0], [0, max_input_length - X.shape[1]]])
return X
def convert_date_strs(date_strs):
X = prepare_date_strs_padded(date_strs)
ids = model.predict_classes(X)
return ids_to_date_strs(ids)
convert_date_strs(["May 02, 2020", "July 14, 1789"])
###Output
_____no_output_____
###Markdown
Cool! Granted, there are certainly much easier ways to write a date conversion tool (e.g., using regular expressions or even basic string manipulation), but you have to admit that using neural networks is way cooler. ;-) However, real-life sequence-to-sequence problems will usually be harder, so for the sake of completeness, let's build a more powerful model. Second version: feeding the shifted targets to the decoder (teacher forcing) Instead of feeding the decoder a simple repetition of the encoder's output vector, we can feed it the target sequence, shifted by one time step to the right. This way, at each time step the decoder will know what the previous target character was. This should help us tackle more complex sequence-to-sequence problems. Since the first output character of each target sequence has no previous character, we will need a new token to represent the start-of-sequence (sos). During inference, we won't know the target, so what will we feed the decoder? We can just predict one character at a time, starting with an sos token, then feeding the decoder all the characters that were predicted so far (we will look at this in more detail later in this notebook). But if the decoder's LSTM expects to get the previous target as input at each step, how shall we pass it the vector output by the encoder? Well, one option is to ignore the output vector, and instead use the encoder's LSTM state as the initial state of the decoder's LSTM (which requires that the encoder's LSTM have the same number of units as the decoder's LSTM). Now let's create the decoder's inputs (for training, validation and testing). The sos token will be represented using the last possible output character's ID + 1.
###Code
sos_id = len(OUTPUT_CHARS) + 1
def shifted_output_sequences(Y):
sos_tokens = tf.fill(dims=(len(Y), 1), value=sos_id)
return tf.concat([sos_tokens, Y[:, :-1]], axis=1)
X_train_decoder = shifted_output_sequences(Y_train)
X_valid_decoder = shifted_output_sequences(Y_valid)
X_test_decoder = shifted_output_sequences(Y_test)
###Output
_____no_output_____
###Markdown
Let's take a look at the decoder's training inputs:
###Code
X_train_decoder
###Output
_____no_output_____
###Markdown
Now let's build the model. It's not a simple sequential model anymore, so let's use the functional API:
###Code
encoder_embedding_size = 32
decoder_embedding_size = 32
lstm_units = 128
np.random.seed(42)
tf.random.set_seed(42)
encoder_input = keras.layers.Input(shape=[None], dtype=tf.int32)
encoder_embedding = keras.layers.Embedding(
input_dim=len(INPUT_CHARS) + 1,
output_dim=encoder_embedding_size)(encoder_input)
_, encoder_state_h, encoder_state_c = keras.layers.LSTM(
lstm_units, return_state=True)(encoder_embedding)
encoder_state = [encoder_state_h, encoder_state_c]
decoder_input = keras.layers.Input(shape=[None], dtype=tf.int32)
decoder_embedding = keras.layers.Embedding(
input_dim=len(OUTPUT_CHARS) + 2,
output_dim=decoder_embedding_size)(decoder_input)
decoder_lstm_output = keras.layers.LSTM(lstm_units, return_sequences=True)(
decoder_embedding, initial_state=encoder_state)
decoder_output = keras.layers.Dense(len(OUTPUT_CHARS) + 1,
activation="softmax")(decoder_lstm_output)
model = keras.models.Model(inputs=[encoder_input, decoder_input],
outputs=[decoder_output])
optimizer = keras.optimizers.Nadam()
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
history = model.fit([X_train, X_train_decoder], Y_train, epochs=10,
validation_data=([X_valid, X_valid_decoder], Y_valid))
###Output
_____no_output_____
###Markdown
This model also reaches 100% validation accuracy, but it does so even faster. Let's once again use the model to make some predictions. This time we need to predict characters one by one.
###Code
sos_id = len(OUTPUT_CHARS) + 1
def predict_date_strs(date_strs):
X = prepare_date_strs_padded(date_strs)
Y_pred = tf.fill(dims=(len(X), 1), value=sos_id)
for index in range(max_output_length):
pad_size = max_output_length - Y_pred.shape[1]
X_decoder = tf.pad(Y_pred, [[0, 0], [0, pad_size]])
Y_probas_next = model.predict([X, X_decoder])[:, index:index+1]
Y_pred_next = tf.argmax(Y_probas_next, axis=-1, output_type=tf.int32)
Y_pred = tf.concat([Y_pred, Y_pred_next], axis=1)
return ids_to_date_strs(Y_pred[:, 1:])
predict_date_strs(["July 14, 1789", "May 01, 2020"])
###Output
_____no_output_____
###Markdown
Works fine! :) Third version: using TF-Addons's seq2seq implementation Let's build exactly the same model, but using TF-Addons' seq2seq API. The implementation below is very similar to the TFA example earlier in this notebook, except without the model input to specify the output sequence length, for simplicity (but you can easily add it back in if you need it for your projects, when the output sequences have very different lengths).
###Code
import tensorflow_addons as tfa
np.random.seed(42)
tf.random.set_seed(42)
encoder_embedding_size = 32
decoder_embedding_size = 32
units = 128
encoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)
decoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)
sequence_lengths = keras.layers.Input(shape=[], dtype=np.int32)
encoder_embeddings = keras.layers.Embedding(
len(INPUT_CHARS) + 1, encoder_embedding_size)(encoder_inputs)
decoder_embedding_layer = keras.layers.Embedding(
len(INPUT_CHARS) + 2, decoder_embedding_size)
decoder_embeddings = decoder_embedding_layer(decoder_inputs)
encoder = keras.layers.LSTM(units, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_embeddings)
encoder_state = [state_h, state_c]
sampler = tfa.seq2seq.sampler.TrainingSampler()
decoder_cell = keras.layers.LSTMCell(units)
output_layer = keras.layers.Dense(len(OUTPUT_CHARS) + 1)
decoder = tfa.seq2seq.basic_decoder.BasicDecoder(decoder_cell,
sampler,
output_layer=output_layer)
final_outputs, final_state, final_sequence_lengths = decoder(
decoder_embeddings,
initial_state=encoder_state)
Y_proba = keras.layers.Activation("softmax")(final_outputs.rnn_output)
model = keras.models.Model(inputs=[encoder_inputs, decoder_inputs],
outputs=[Y_proba])
optimizer = keras.optimizers.Nadam()
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
history = model.fit([X_train, X_train_decoder], Y_train, epochs=15,
validation_data=([X_valid, X_valid_decoder], Y_valid))
###Output
_____no_output_____
###Markdown
And once again, 100% validation accuracy! To use the model, we can just reuse the `predict_date_strs()` function:
###Code
predict_date_strs(["July 14, 1789", "May 01, 2020"])
###Output
_____no_output_____
###Markdown
However, there's a much more efficient way to perform inference. Until now, during inference, we've run the model once for each new character. Instead, we can create a new decoder, based on the previously trained layers, but using a `GreedyEmbeddingSampler` instead of a `TrainingSampler`.At each time step, the `GreedyEmbeddingSampler` will compute the argmax of the decoder's outputs, and run the resulting token IDs through the decoder's embedding layer. Then it will feed the resulting embeddings to the decoder's LSTM cell at the next time step. This way, we only need to run the decoder once to get the full prediction.
###Code
inference_sampler = tfa.seq2seq.sampler.GreedyEmbeddingSampler(
embedding_fn=decoder_embedding_layer)
inference_decoder = tfa.seq2seq.basic_decoder.BasicDecoder(
decoder_cell, inference_sampler, output_layer=output_layer,
maximum_iterations=max_output_length)
batch_size = tf.shape(encoder_inputs)[:1]
start_tokens = tf.fill(dims=batch_size, value=sos_id)
final_outputs, final_state, final_sequence_lengths = inference_decoder(
start_tokens,
initial_state=encoder_state,
start_tokens=start_tokens,
end_token=0)
inference_model = keras.models.Model(inputs=[encoder_inputs],
outputs=[final_outputs.sample_id])
###Output
_____no_output_____
###Markdown
A few notes:* The `GreedyEmbeddingSampler` needs the `start_tokens` (a vector containing the start-of-sequence ID for each decoder sequence), and the `end_token` (the decoder will stop decoding a sequence once the model outputs this token).* We must set `maximum_iterations` when creating the `BasicDecoder`, or else it may run into an infinite loop (if the model never outputs the end token for at least one of the sequences). This would force you to restart the Jupyter kernel.* The decoder inputs are not needed anymore, since all the decoder inputs are generated dynamically based on the outputs from the previous time step.* The model's outputs are `final_outputs.sample_id` instead of the softmax of `final_outputs.rnn_output`. This allows us to directly get the argmax of the model's outputs. If you prefer to have access to the logits, you can replace `final_outputs.sample_id` with `final_outputs.rnn_output`. Now we can write a simple function that uses the model to perform the date format conversion:
###Code
def fast_predict_date_strs(date_strs):
X = prepare_date_strs_padded(date_strs)
Y_pred = inference_model.predict(X)
return ids_to_date_strs(Y_pred)
fast_predict_date_strs(["July 14, 1789", "May 01, 2020"])
###Output
_____no_output_____
###Markdown
Let's check that it really is faster:
###Code
%timeit predict_date_strs(["July 14, 1789", "May 01, 2020"])
%timeit fast_predict_date_strs(["July 14, 1789", "May 01, 2020"])
###Output
_____no_output_____
###Markdown
That's more than a 10x speedup! And it would be even more if we were handling longer sequences. Fourth version: using TF-Addons's seq2seq implementation with a scheduled sampler **Warning**: due to a TF bug, this version only works using TensorFlow 2.2. When we trained the previous model, at each time step _t_ we gave the model the target token for time step _t_ - 1. However, at inference time, the model did not get the previous target at each time step. Instead, it got the previous prediction. So there is a discrepancy between training and inference, which may lead to disappointing performance. To alleviate this, we can gradually replace the targets with the predictions, during training. For this, we just need to replace the `TrainingSampler` with a `ScheduledEmbeddingTrainingSampler`, and use a Keras callback to gradually increase the `sampling_probability` (i.e., the probability that the decoder will use the prediction from the previous time step rather than the target for the previous time step).
###Code
import tensorflow_addons as tfa
np.random.seed(42)
tf.random.set_seed(42)
n_epochs = 20
encoder_embedding_size = 32
decoder_embedding_size = 32
units = 128
encoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)
decoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)
sequence_lengths = keras.layers.Input(shape=[], dtype=np.int32)
encoder_embeddings = keras.layers.Embedding(
len(INPUT_CHARS) + 1, encoder_embedding_size)(encoder_inputs)
decoder_embedding_layer = keras.layers.Embedding(
len(INPUT_CHARS) + 2, decoder_embedding_size)
decoder_embeddings = decoder_embedding_layer(decoder_inputs)
encoder = keras.layers.LSTM(units, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_embeddings)
encoder_state = [state_h, state_c]
sampler = tfa.seq2seq.sampler.ScheduledEmbeddingTrainingSampler(
sampling_probability=0.,
embedding_fn=decoder_embedding_layer)
# we must set the sampling_probability after creating the sampler
# (see https://github.com/tensorflow/addons/pull/1714)
sampler.sampling_probability = tf.Variable(0.)
decoder_cell = keras.layers.LSTMCell(units)
output_layer = keras.layers.Dense(len(OUTPUT_CHARS) + 1)
decoder = tfa.seq2seq.basic_decoder.BasicDecoder(decoder_cell,
sampler,
output_layer=output_layer)
final_outputs, final_state, final_sequence_lengths = decoder(
decoder_embeddings,
initial_state=encoder_state)
Y_proba = keras.layers.Activation("softmax")(final_outputs.rnn_output)
model = keras.models.Model(inputs=[encoder_inputs, decoder_inputs],
outputs=[Y_proba])
optimizer = keras.optimizers.Nadam()
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
def update_sampling_probability(epoch, logs):
proba = min(1.0, epoch / (n_epochs - 10))
sampler.sampling_probability.assign(proba)
sampling_probability_cb = keras.callbacks.LambdaCallback(
on_epoch_begin=update_sampling_probability)
history = model.fit([X_train, X_train_decoder], Y_train, epochs=n_epochs,
validation_data=([X_valid, X_valid_decoder], Y_valid),
callbacks=[sampling_probability_cb])
###Output
_____no_output_____
###Markdown
Not quite 100% validation accuracy, but close enough! For inference, we could do the exact same thing as earlier, using a `GreedyEmbeddingSampler`. However, just for the sake of completeness, let's use a `SampleEmbeddingSampler` instead. It's almost the same thing, except that instead of using the argmax of the model's output to find the token ID, it treats the outputs as logits and uses them to sample a token ID randomly. This can be useful when you want to generate text. The `softmax_temperature` argument serves the same purpose as when we generated Shakespeare-like text (the higher this argument, the more random the generated text will be).
###Code
softmax_temperature = tf.Variable(1.)
inference_sampler = tfa.seq2seq.sampler.SampleEmbeddingSampler(
embedding_fn=decoder_embedding_layer,
softmax_temperature=softmax_temperature)
inference_decoder = tfa.seq2seq.basic_decoder.BasicDecoder(
decoder_cell, inference_sampler, output_layer=output_layer,
maximum_iterations=max_output_length)
batch_size = tf.shape(encoder_inputs)[:1]
start_tokens = tf.fill(dims=batch_size, value=sos_id)
final_outputs, final_state, final_sequence_lengths = inference_decoder(
start_tokens,
initial_state=encoder_state,
start_tokens=start_tokens,
end_token=0)
inference_model = keras.models.Model(inputs=[encoder_inputs],
outputs=[final_outputs.sample_id])
def creative_predict_date_strs(date_strs, temperature=1.0):
softmax_temperature.assign(temperature)
X = prepare_date_strs_padded(date_strs)
Y_pred = inference_model.predict(X)
return ids_to_date_strs(Y_pred)
tf.random.set_seed(42)
creative_predict_date_strs(["July 14, 1789", "May 01, 2020"])
###Output
_____no_output_____
###Markdown
Dates look good at room temperature. Now let's heat things up a bit:
###Code
tf.random.set_seed(42)
creative_predict_date_strs(["July 14, 1789", "May 01, 2020"],
temperature=5.)
###Output
_____no_output_____
###Markdown
Oops, the dates are overcooked, now. Let's call them "creative" dates. Fifth version: using TFA seq2seq, the Keras subclassing API and attention mechanisms The sequences in this problem are pretty short, but if we wanted to tackle longer sequences, we would probably have to use attention mechanisms. While it's possible to code our own implementation, it's simpler and more efficient to use TF-Addons's implementation instead. Let's do that now, this time using Keras' subclassing API.**Warning**: due to a TensorFlow bug (see [this issue](https://github.com/tensorflow/addons/issues/1153) for details), the `get_initial_state()` method fails in eager mode, so for now we have to use the subclassing API, as Keras automatically calls `tf.function()` on the `call()` method (so it runs in graph mode). In this implementation, we've reverted back to using the `TrainingSampler`, for simplicity (but you can easily tweak it to use a `ScheduledEmbeddingTrainingSampler` instead). We also use a `GreedyEmbeddingSampler` during inference, so this class is pretty easy to use:
###Code
class DateTranslation(keras.models.Model):
def __init__(self, units=128, encoder_embedding_size=32,
decoder_embedding_size=32, **kwargs):
super().__init__(**kwargs)
self.encoder_embedding = keras.layers.Embedding(
input_dim=len(INPUT_CHARS) + 1,
output_dim=encoder_embedding_size)
self.encoder = keras.layers.LSTM(units,
return_sequences=True,
return_state=True)
self.decoder_embedding = keras.layers.Embedding(
input_dim=len(OUTPUT_CHARS) + 2,
output_dim=decoder_embedding_size)
self.attention = tfa.seq2seq.LuongAttention(units)
decoder_inner_cell = keras.layers.LSTMCell(units)
self.decoder_cell = tfa.seq2seq.AttentionWrapper(
cell=decoder_inner_cell,
attention_mechanism=self.attention)
output_layer = keras.layers.Dense(len(OUTPUT_CHARS) + 1)
self.decoder = tfa.seq2seq.BasicDecoder(
cell=self.decoder_cell,
sampler=tfa.seq2seq.sampler.TrainingSampler(),
output_layer=output_layer)
self.inference_decoder = tfa.seq2seq.BasicDecoder(
cell=self.decoder_cell,
sampler=tfa.seq2seq.sampler.GreedyEmbeddingSampler(
embedding_fn=self.decoder_embedding),
output_layer=output_layer,
maximum_iterations=max_output_length)
def call(self, inputs, training=None):
encoder_input, decoder_input = inputs
encoder_embeddings = self.encoder_embedding(encoder_input)
encoder_outputs, encoder_state_h, encoder_state_c = self.encoder(
encoder_embeddings,
training=training)
encoder_state = [encoder_state_h, encoder_state_c]
self.attention(encoder_outputs,
setup_memory=True)
decoder_embeddings = self.decoder_embedding(decoder_input)
decoder_initial_state = self.decoder_cell.get_initial_state(
decoder_embeddings)
decoder_initial_state = decoder_initial_state.clone(
cell_state=encoder_state)
if training:
decoder_outputs, _, _ = self.decoder(
decoder_embeddings,
initial_state=decoder_initial_state,
training=training)
else:
start_tokens = tf.zeros_like(encoder_input[:, 0]) + sos_id
decoder_outputs, _, _ = self.inference_decoder(
decoder_embeddings,
initial_state=decoder_initial_state,
start_tokens=start_tokens,
end_token=0)
return tf.nn.softmax(decoder_outputs.rnn_output)
np.random.seed(42)
tf.random.set_seed(42)
model = DateTranslation()
optimizer = keras.optimizers.Nadam()
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
history = model.fit([X_train, X_train_decoder], Y_train, epochs=25,
validation_data=([X_valid, X_valid_decoder], Y_valid))
###Output
_____no_output_____
###Markdown
Not quite 100% validation accuracy, but close. It took a bit longer to converge this time, but there were also more parameters and more computations per iteration. And we did not use a scheduled sampler.To use the model, we can write yet another little function:
###Code
def fast_predict_date_strs_v2(date_strs):
X = prepare_date_strs_padded(date_strs)
X_decoder = tf.zeros(shape=(len(X), max_output_length), dtype=tf.int32)
Y_probas = model.predict([X, X_decoder])
Y_pred = tf.argmax(Y_probas, axis=-1)
return ids_to_date_strs(Y_pred)
fast_predict_date_strs_v2(["July 14, 1789", "May 01, 2020"])
###Output
_____no_output_____
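###Markdown
Before wrapping up, here is a hedged sketch of beam-search inference with TF-Addons, relating to the `BeamSearchDecoder` note in the next cell. It is only a sketch under stated assumptions: it reuses the module-level `decoder_cell`, `output_layer`, `encoder_state`, `decoder_embedding_layer` and `start_tokens` defined in the earlier versions above (not the attention-based model), and it has not been wired into a full Keras model or validated here.
###Code
# Hedged sketch only — the names reused from earlier cells are assumptions.
beam_width = 10
beam_decoder = tfa.seq2seq.BeamSearchDecoder(
    cell=decoder_cell, beam_width=beam_width, output_layer=output_layer,
    maximum_iterations=max_output_length)
# Each encoder state must be tiled beam_width times:
tiled_encoder_state = tfa.seq2seq.tile_batch(encoder_state,
                                             multiplier=beam_width)
outputs, _, _ = beam_decoder(
    decoder_embedding_layer.embeddings,  # the decoder's embedding matrix
    start_tokens=start_tokens,           # one sos_id per input sequence
    end_token=0,
    initial_state=tiled_encoder_state)
# outputs.predicted_ids has shape [batch size, time steps, beam_width];
# [:, :, 0] holds the most likely output sequence for each input.
###Output
_____no_output_____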
###Markdown
There are still a few interesting features from TF-Addons that you may want to look at:* Using a `BeamSearchDecoder` rather than a `BasicDecoder` for inference. Instead of outputting the character with the highest probability, this decoder keeps track of several candidates, and keeps only the most likely sequences of candidates (see chapter 16 in the book for more details; a hedged sketch appears in the cell above).* Setting masks or specifying `sequence_length` if the input or target sequences may have very different lengths.* Using a `ScheduledOutputTrainingSampler`, which gives you more flexibility than the `ScheduledEmbeddingTrainingSampler` to decide how to feed the output at time _t_ to the cell at time _t_+1. By default it feeds the outputs directly to the cell, without computing the argmax ID and passing it through an embedding layer. Alternatively, you can specify a `next_inputs_fn` function that will be used to convert the cell outputs to inputs at the next step. 10._Exercise: Go through TensorFlow's [Neural Machine Translation with Attention tutorial](https://homl.info/nmttuto)._ Simply open the Colab and follow its instructions. Alternatively, if you want a simpler example of using TF-Addons's seq2seq implementation for Neural Machine Translation (NMT), look at the solution to the previous question. The last model implementation will give you a simpler example of using TF-Addons to build an NMT model using attention mechanisms. 11._Exercise: Use one of the recent language models (e.g., GPT) to generate more convincing Shakespearean text._ The simplest way to use recent language models is to use the excellent [transformers library](https://huggingface.co/transformers/), open sourced by Hugging Face. It provides many modern neural net architectures (including BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet and more) for Natural Language Processing (NLP), including many pretrained models. It relies on either TensorFlow or PyTorch. Best of all: it's amazingly simple to use. First, let's load a pretrained model. In this example, we will use OpenAI's GPT model, with an additional Language Model on top (just a linear layer with weights tied to the input embeddings). Let's import it and load the pretrained weights (this will download about 445MB of data to `~/.cache/torch/transformers`):
###Code
from transformers import TFOpenAIGPTLMHeadModel
model = TFOpenAIGPTLMHeadModel.from_pretrained("openai-gpt")
###Output
_____no_output_____
###Markdown
Next we will need a specialized tokenizer for this model. This one will try to use the [spaCy](https://spacy.io/) and [ftfy](https://pypi.org/project/ftfy/) libraries if they are installed, or else it will fall back to BERT's `BasicTokenizer` followed by Byte-Pair Encoding (which should be fine for most use cases).
###Code
from transformers import OpenAIGPTTokenizer
tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
###Output
_____no_output_____
###Markdown
Now let's use the tokenizer to tokenize and encode the prompt text:
###Code
prompt_text = "This royal throne of kings, this sceptred isle"
encoded_prompt = tokenizer.encode(prompt_text,
add_special_tokens=False,
return_tensors="tf")
encoded_prompt
###Output
_____no_output_____
###Markdown
Easy! Next, let's use the model to generate text after the prompt. We will generate 5 different sentences, each starting with the prompt text, followed by 40 additional tokens. For an explanation of what all the hyperparameters do, make sure to check out this great [blog post](https://huggingface.co/blog/how-to-generate) by Patrick von Platen (from Hugging Face). You can play around with the hyperparameters to try to obtain better results.
###Code
num_sequences = 5
length = 40
generated_sequences = model.generate(
input_ids=encoded_prompt,
do_sample=True,
max_length=length + len(encoded_prompt[0]),
temperature=1.0,
top_k=0,
top_p=0.9,
repetition_penalty=1.0,
num_return_sequences=num_sequences,
)
generated_sequences
###Output
_____no_output_____
###Markdown
Now let's decode the generated sequences and print them:
###Code
for sequence in generated_sequences:
text = tokenizer.decode(sequence, clean_up_tokenization_spaces=True)
print(text)
print("-" * 80)
###Output
_____no_output_____ |
raspberrypi_growlight.ipynb | ###Markdown
This is a simple bare-bones webserver run on my raspberry pi to turn my dwarf citrus tree's grow light on at sunrise and off at sunset. Imports
###Code
# export
import datetime
import sys
import time
import apscheduler.schedulers.background
import astral
import astral.sun
import pytz
import RPi.GPIO as GPIO
from flask import Flask
###Output
_____no_output_____
###Markdown
Config Raspberry Pi
###Code
# export
PIN_GPIO = 17
GPIO.setwarnings(False)
GPIO.setmode(GPIO.BCM) # Broadcom Chip
GPIO.setup(PIN_GPIO, GPIO.OUT) # Output
###Output
_____no_output_____
###Markdown
Test out gpio stuff manually
###Code
GPIO.output(PIN_GPIO, True)
GPIO.output(PIN_GPIO, False)
###Output
_____no_output_____
###Markdown
Timezone Needed to localize `datetime.now()`
###Code
# export
TIMEZONE = pytz.timezone('US/Eastern')
###Output
_____no_output_____
###Markdown
Server Set interval in seconds to check schedule
###Code
# export
INTERVAL = 1
###Output
_____no_output_____
###Markdown
`Growlight` Growlight `on()` will turn on grow light and `off()` will turn off grow light
###Code
# export
class Growlight:
def on(self):
GPIO.output(PIN_GPIO, True)
def off(self):
GPIO.output(PIN_GPIO, False)
###Output
_____no_output_____
###Markdown
Test to see if growlight turns on and off
###Code
growlight = Growlight()
growlight.on()
time.sleep(1)
growlight.off()
###Output
_____no_output_____
###Markdown
`Schedule` Given an input time, `Schedule` will return `'ON'` if the grow light should be on and `'OFF'` if it should be off
###Code
# export
class Schedule:
def get_status(T):
return 'OFF'
###Output
_____no_output_____
###Markdown
The sunlight schedule turns the grow light on between sunrise and sunset
###Code
# export
class SunlightSchedule(Schedule):
def __init__(self, city):
self.city = city
def get_status(self, T):
s = astral.sun.sun(self.city.observer) # Get current status of sun
if s['sunrise'] < T < s['sunset']: return 'ON'
else: return 'OFF'
def __repr__(self):
return f'sunlight schedule for {self.city.name}.'
###Output
_____no_output_____
###Markdown
Make a schedule based on lowell
###Code
# export
class LowellSunlightSchedule(SunlightSchedule):
def __init__(self):
super().__init__(city=astral.LocationInfo(name='Lowell',
region='USA',
timezone='Eastern',
latitude=42.640999,
longitude=-71.316711))
###Output
_____no_output_____
###Markdown
Test out schedule
###Code
schedule = LowellSunlightSchedule()
schedule
T = TIMEZONE.localize(datetime.datetime.now())
schedule.get_status(T)
###Output
_____no_output_____
###Markdown
Add some time to see if scheduler works
###Code
schedule.get_status(T + datetime.timedelta(hours=10))
###Output
_____no_output_____
###Markdown
`GrowlightScheduler` Given a `Schedule` and a `Growlight`, `GrowlightScheduler` will turn the grow light on or off based on the schedule, checking every `interval` seconds.
###Code
# export
class GrowlightScheduler:
def __init__(self, schedule, growlight, interval=INTERVAL):
self.schedule = schedule
self.growlight = growlight
self.interval = interval
self.scheduler = apscheduler.schedulers.background.BackgroundScheduler()
self.scheduler.add_job(self.job, 'interval', seconds=self.interval)
def job(self):
T = TIMEZONE.localize(datetime.datetime.now())
status = self.schedule.get_status(T)
if status == 'ON': self.growlight.on()
elif status == 'OFF': self.growlight.off()
else: raise RuntimeError(f'Unknown status: {status}')
def start(self):
if self.scheduler.running: self.scheduler.resume()
else: self.scheduler.start()
def stop(self):
if self.scheduler.running: self.scheduler.pause()
schedule = LowellSunlightSchedule()
growlight = Growlight()
scheduler_growlight = GrowlightScheduler(schedule, growlight)
scheduler_growlight.start()
scheduler_growlight.stop()
growlight.off()
###Output
_____no_output_____
###Markdown
Flask app Create flask app with the following API:* `ON` - stops scheduler and turns growlight on* `OFF` - stops scheduler and turns growlight off* `START` - starts scheduler* `STOP` - stops scheduler* `STATUS` - returns current status
###Code
# export
app = Flask(__name__)
schedule = LowellSunlightSchedule()
growlight = Growlight()
scheduler_growlight = GrowlightScheduler(schedule, growlight)
STATUS = None
# export
def _status():
return f'Growlight status: {STATUS}'
# export
@app.route('/ON/', methods=['GET'])
def ON():
global STATUS
scheduler_growlight.stop()
growlight.on()
STATUS = 'ON'
return _status()
# export
@app.route('/OFF/', methods=['GET'])
def OFF():
global STATUS
scheduler_growlight.stop()
growlight.off()
STATUS = 'OFF'
return _status()
# export
@app.route('/START/', methods=['GET'])
def START():
global STATUS
scheduler_growlight.start()
STATUS = str(scheduler_growlight.schedule)
return _status()
# export
@app.route('/STOP/', methods=['GET'])
def PAUSE():
global STATUS
scheduler_growlight.stop()
STATUS = 'stopped ' + str(scheduler_growlight.schedule)
return _status()
# export
@app.route('/STATUS/', methods=['GET'])
def GET_STATUS():  # renamed so this view function does not shadow the global STATUS variable
return _status()
###Output
_____no_output_____
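###Markdown
A hypothetical usage sketch: once the server is running (it is started on port 8080 at the bottom of this notebook), the endpoints can be exercised from another machine on the network. The hostname `raspberrypi.local` is a placeholder assumption.
###Code
# Illustrative only — requires the server to be up; adjust the hostname as needed.
import requests
base_url = 'http://raspberrypi.local:8080'
for endpoint in ['STATUS', 'ON', 'OFF', 'START']:
    print(endpoint, '->', requests.get(f'{base_url}/{endpoint}/').text)
###Output
_____no_output_____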
###Markdown
For testing purposes change name
###Code
__name__ = '__notebook__'
###Output
_____no_output_____
###Markdown
By default, start the growlight scheduler when the server starts up
###Code
# export
if __name__ == '__main__':
START()
app.run(host='0.0.0.0', port=8080)
###Output
_____no_output_____
###Markdown
Build
###Code
!nbdev_build_lib --fname raspberrypi_growlight.ipynb
!jupyter nbconvert --to markdown --output README raspberrypi_growlight.ipynb
###Output
[NbConvertApp] Converting notebook raspberrypi_growlight.ipynb to markdown
[NbConvertApp] Writing 6151 bytes to README.md
|
Jupyter_Notebooks/NamedEntities/JQA_GeoTagger-Spacy.ipynb | ###Markdown
GeoTagger - Spacy
###Code
# Import necessary libraries.
import re, warnings, urllib, requests, spacy, geopy, folium, os, sys, glob
import pandas as pd
import numpy as np
from collections import Counter
from geopy.extra.rate_limiter import RateLimiter
# Import project-specific functions.
# Python files (.py) have to be in same folder to work.
lib_path = os.path.abspath(os.path.join(os.path.dirname('JQA_XML_parser.py'), '../Scripts'))
sys.path.append(lib_path)
from JQA_XML_parser import *
nlp = spacy.load('en_core_web_sm')
# Ignore warnings related to deprecated functions.
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Get XML Files
###Code
%%time
# Declare directory location to shorten filepaths later.
abs_dir = "/Users/quinn.wi/Documents/Data"
files = glob.glob(abs_dir + "/PSC/JQA/*/*.xml")
len(files)
###Output
CPU times: user 3.52 ms, sys: 4.52 ms, total: 8.05 ms
Wall time: 14.3 ms
###Markdown
Build Dataframe
###Code
%%time
# Build dataframe from XML files.
# build_dataframe() imported from JQA_XML_parser
df = build_dataframe(files)
df.head(3)
###Output
CPU times: user 3.38 ms, sys: 3.83 ms, total: 7.2 ms
Wall time: 5.92 ms
###Markdown
Get Place Names
###Code
%%time
def get_placenames(text):
doc = nlp(text)
places = [ent.text for ent in doc.ents if ent.label_ in ['LOC', 'GPE']]
return places
df['places'] = df['text'].apply(lambda x: get_placenames(x))
df = df[['entry', 'date', 'places']]
df = df.explode('places')
df.head(3)
###Output
CPU times: user 1min 7s, sys: 1.56 s, total: 1min 9s
Wall time: 1min 9s
###Markdown
GeoCode Places
###Code
%%time
# https://www.natasshaselvaraj.com/a-step-by-step-guide-on-geocoding-in-python/
def geocode(place):
# url = 'https://nominatim.openstreetmap.org/search/' + urllib.parse.quote(place) +'?format=json'
url = 'https://nominatim.openstreetmap.org/search/' + str(place) + '?format=json'
response = requests.get(url).json()
if (len(response) != 0):
# Default (response[0]): select first search hit in OpenStreetMap.
return (float(response[0]['lat']), float(response[0]['lon']))
else:
return None
df['coordinates'] = df['places'].apply(geocode)
df[['lat', 'lon']] = pd.DataFrame(df['coordinates'].tolist(), index = df.index)
# Convert to floats.
df['lat'] = df['lat'].apply(lambda x: float(x))
df['lon'] = df['lon'].apply(lambda x: float(x))
df = df.dropna()
df.head(3)
###Output
CPU times: user 2min 23s, sys: 10.4 s, total: 2min 33s
Wall time: 1h 52min 5s
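###Markdown
An alternative hedged sketch: the `RateLimiter` imported above is never used, but Nominatim's usage policy expects roughly one request per second, so the lookups could be throttled through geopy instead of raw requests. The `user_agent` string and function names below are illustrative assumptions.
###Code
# Hedged sketch — not the geocoder used above.
from geopy.geocoders import Nominatim
geolocator = Nominatim(user_agent = 'jqa_geotagger')
geocode_limited = RateLimiter(geolocator.geocode, min_delay_seconds = 1)
def geocode_throttled(place):
    location = geocode_limited(place)
    return (location.latitude, location.longitude) if location else None
###Output
_____no_output_____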
###Markdown
Save Results
###Code
%%time
df.to_csv(os.path.abspath('../../lab_space/projects/jqa/jqa-geoReference.csv'), sep = ',', index = False)
###Output
CPU times: user 75.3 ms, sys: 4.56 ms, total: 79.8 ms
Wall time: 83.4 ms
|
build_data/Build_Yield_Carry_Data_WRDS.ipynb | ###Markdown
Save to Excel
###Code
with pd.ExcelWriter('treasury_data.xlsx') as writer:
prices.to_excel(writer, sheet_name= 'prices')
yields.to_excel(writer, sheet_name='yields')
###Output
_____no_output_____ |
testing_scan.ipynb | ###Markdown
Reproducing stored results This code generates csv files containing the results of the analysis of time series obtained from scanning over connectance, noise level, etc., as needed to produce the figures of the article https://doi.org/10.1101/2021.02.19.431657. The code actually used for the article is at https://github.com/lanadescheemaeker/rank_abundance/blob/master/scan_heavytails.py; here we provide code that generates and saves the statistics of a few time series:
###Code
import numpy as np
from scipy import stats
import itertools
import functools
import python_codes.Timeseries_glv as glv #glv is a .py file in which you can find definitions
import python_codes.Timeseries_ibm as ibm #idem
from python_codes.noise_parameters import NOISE
from python_codes.models import MODEL
from python_codes.heavytails import fit_heavytail
from python_codes.neutrality_analysis import BrayCurtis, BrayCurtis_neutrality, KullbackLeibler, KullbackLeibler_neutrality, JensenShannon
from python_codes.variation import variation_coefficient, JS
import warnings
#import python_codes.powerlaw
debug = True
noise_implementation = NOISE.LANGEVIN_LINEAR # CONSTANT
if debug:
import matplotlib.pyplot as plt
from python_codes.timeseries_plotting import PlotTimeseries
from python_codes.heavytails import plot_heavytail
def random_parameter_set(S, connectance=0.3, minint=-0.5, maxint=0.5,
minmigration=0.4, maxmigration=0.4,
minextinction=0.5, maxextinction=0.5, growth_rate=1.5):
""" Return a set of random parameters to generate glv time series """
# Interaction matrix
interaction = np.random.uniform(minint, maxint, [S, S])
# Impose connectance: set interaction matrix elements to zero such that the percentage of non-zero elements
# is equal to the connectance
interaction *= np.random.choice([0, 1], interaction.shape, p=[1 - connectance, connectance])
# Self-interaction is -1 for all species.
np.fill_diagonal(interaction, -1.)
# Growth rate is equal for all species (value is growth_rate).
growth_rate = np.full([S, 1], growth_rate)
# Uniform immigration and extinction rates.
immigration = np.random.uniform(minmigration, maxmigration, [S, 1])
extinction = np.random.uniform(minextinction, maxextinction, [S, 1])
return {'interaction_matrix': interaction, 'immigration_rate': immigration,
'extinction_rate': extinction, 'growth_rate': growth_rate}
def random_parameter_set_ibm(S, connectance=0.3, minint=-0.5, maxint=0.5,
minmigration=0.4, maxmigration=0.4,
minextinction=0.5, maxextinction=0.5, growth_rate=1.5, SIS=[], SISfactor=200):
params = random_parameter_set(S, connectance, minint, maxint,
minmigration, maxmigration, minextinction, maxextinction, growth_rate)
# Generate strongly-interacting-species (SIS) vector.
SISvector = np.ones(S, dtype=int)
SISvector[SIS] *= SISfactor
params['SISvector'] = SISvector
return params
def random_parameter_set_logistic(S, width_growth=1):
# Set growth rates.
if width_growth == 0:
growth_rate = np.ones([S, 1])
else:
growth_rate = stats.lognorm.rvs(loc=0, s=width_growth, size=[S, 1])
# No interactions becauce logistic model
interaction = np.zeros([S, S])
# Calculate and set self-interactions.
if width_growth == 2:
self_int = np.ones(S)
else:
self_int = stats.lognorm.rvs(loc=0, s=np.sqrt(4 - width_growth ** 2), size=S)
np.fill_diagonal(interaction, -self_int)
# No immigration or extinction.
immigration = np.zeros([S, 1])
extinction = np.zeros([S, 1])
return {'interaction_matrix': interaction, 'immigration_rate': immigration,
'extinction_rate': extinction, 'growth_rate': growth_rate}
def add_SIS(interaction, SISvector):
interaction_SIS = interaction * SISvector
np.fill_diagonal(interaction_SIS, np.diag(interaction))
return interaction_SIS
def line_statistics(params, model=MODEL.GLV):
""" Generates a time series with the given parameters and returns a string with all statistical parameters
of this time series."""
# Initiate empty line
line = ''
# First simulate without noise to allow system to go to steady state
params_nonoise = params.copy() # parameters without noise
for noise in ['noise_linear', 'noise_constant']:
if noise in params_nonoise:
params_nonoise[noise] = 0
if model in [MODEL.GLV, MODEL.MAX, MODEL.MAX_IMMI]:
discrete = False
# Find steady state without noise
ts = glv.Timeseries(params_nonoise, T=250, dt=0.01, tskip=99, model=model)
if debug:
PlotTimeseries(ts.timeseries)
# Determine deterministic stability: stable if less than 10% change for last 50 time points.
deterministic_stability = (np.max(np.abs((ts.timeseries.iloc[-50, 1:] - ts.timeseries.iloc[-1, 1:]) / ts.timeseries.iloc[-50,
1:])) < 0.1)
line += ',%d' % deterministic_stability
# Find steady state with noise
# Set steady state to deterministic steady state
params['initial_condition'] = ts.endpoint.values.astype('float')
ts = glv.Timeseries(params, T=500, dt=0.01, tskip=99, model=model, noise_implementation=noise_implementation)
if debug:
PlotTimeseries(ts.timeseries)
elif model == MODEL.IBM:
discrete = True
# Time series to find "steady state", transient dynamics
params['initial_condition'] = ibm.Timeseries(params, T=50).endpoint.values.astype('int').flatten()
# Time series for IBM.
ts = ibm.Timeseries(params, T=250)
else:
raise ValueError("Unknown model: %s" % model.__name__)
endpoint = ts.endpoint
# Remove species that are "extinct", by definition: smaller than 6 orders of magnitude smaller than maximal abundance
col_to_drop = endpoint.index[endpoint.endpoint < 1e-6 * np.max(endpoint.endpoint)]
with warnings.catch_warnings(): # Ignore the NAN-warnings when removing species.
warnings.filterwarnings('ignore', r'All-NaN (slice|axis) encountered')
endpoint = endpoint.values.astype('float').flatten()
endpoint = endpoint[endpoint > 1e-6 * np.nanmax(endpoint)]
if model in [MODEL.GLV, MODEL.MAX, MODEL.MAX_IMMI]:
ts_trimmed = ts.timeseries.drop(columns=col_to_drop)
elif model == MODEL.IBM:
ts_trimmed = ts.timeseries
else:
raise ValueError("Unknown model: %s" % model.__name__)
# Diversity is number of remaining species.
diversity = len(endpoint)
line += ',%d' % diversity
# Normalized time series.
ts_norm = ts.timeseries.div(
ts.timeseries.loc[:, [col for col in ts.timeseries.columns if col.startswith('species')]].sum(axis=1), axis=0)
ts_norm.time = ts.timeseries.time
# Calculate variation coefficient for time series, and normalized time series.
for tsi in [ts_trimmed, ts_norm]:
params = variation_coefficient(tsi)
for par in params:
line += ',%.3E' % par
# Calculate Jensen Shannon distance.
params_JS = JS(ts_trimmed)
for par in params_JS:
line += ',%.3E' % par
# Calculate parameters for fitting heavy tailed distributions.
for func in ['lognorm', 'pareto', 'powerlaw', 'trunc_powerlaw', 'expon', 'norm']:
params = fit_heavytail(endpoint, func=func, discrete=discrete)
for par in params:
line += ',%.3E' % par
if debug:
fig = plt.figure()
ax = fig.add_subplot(111)
params = fit_heavytail(endpoint, func='lognorm', discrete=discrete)
plot_heavytail(endpoint, params, func='lognorm', ax=ax, discrete=discrete)
print("Width lognorm:", params[0])
print("Stat lognorm:", params[-2])
for f in ['expon', 'norm', 'powerlaw', 'pareto']:
params = fit_heavytail(endpoint, func=f, discrete=discrete)
plot_heavytail(endpoint, params, func=f, ax=ax, discrete=discrete)
print("Stat %s:" % f, params[-2])
params = fit_heavytail(endpoint, func='trunc_powerlaw', discrete=discrete)
plot_heavytail(endpoint, params, func='trunc_powerlaw', ax=ax, discrete=discrete)
print("Stat trunc powerlaw:", params[-2])
print("R powerlaw (negative -> lognormal):", params[2])
plt.show()
return line
def initial_condition(S_, model, max_cap, absent_init):
initcond = np.random.uniform(0, 1, [S_, 1])
if 'MAX' in model.name:
# Rescale initial condition with maximum capacity.
initcond *= min(1., 1. * max_cap / S_)
if absent_init:
# Set random species to zero, they may enter the system through immigration.
initcond *= np.random.choice([0, 1], size=initcond.shape, p=[0.2, 0.8])
return initcond
def one_set_glv(input_pars, file='', N=1, S=None, model=MODEL.GLV, absent_init=False, use_lognormal_params=False):
""" Generates N time series of glv systems according to input parameters and writes summary
of the statistics of time series to file."""
connectance, immigration, noise, int_strength, max_cap = input_pars
if 'MAX' in model.name and np.isinf(max_cap):
model = MODEL.GLV
# Reduce number of species (S_) until more than half of the solutions are good (not all 0 or NaN abundances).
S_ = S
Ngood_solutions = 0
while Ngood_solutions < N / 2 and S_ > 1:
Ngood_solutions = 0
line_stat = ''
for k in range(N):
# Set parameters.
params = random_parameter_set(S=S_,
minmigration=immigration, maxmigration=immigration, connectance=connectance,
minint=-int_strength, maxint=int_strength)
# Maximum capacity parameter.
if 'MAX' in model.name:
params['maximum_capacity'] = max_cap
# Noise parameters.
if noise_implementation == NOISE.LANGEVIN_LINEAR:
params['noise_linear'] = noise
elif noise_implementation == NOISE.LANGEVIN_CONSTANT:
params['noise_constant'] = noise
if use_lognormal_params:
# Set growth rate and self-interaction parameters to lognormally distributed parameters.
np.fill_diagonal(params['interaction_matrix'], -stats.lognorm.rvs(loc=0, s=1, size=S_))
params['growth_rate'] = stats.lognorm.rvs(loc=0, s=1, size=[S_, 1])
# Set initial condition
params['initial_condition'] = initial_condition(S_, model, max_cap, absent_init)
# Generate the time series and do the statistics
line_stat += line_statistics(params, model)
# Check whether solution is good (not all abundances 0 or NAN)
if np.any([number not in ['NAN', '0', ''] for number in line_stat.split(',')]):
Ngood_solutions += 1
# Reduce S_ for next iteration if there were not enough 'good' solutions
S_ = int(0.95 * S_) if S_ > 10 else (S_ - 1)
# Write results to file.
line = '%.3E,%.3E,%.3E,%.3E,%3E' % input_pars + line_stat + '\n'
with open(file, 'a') as f:
f.write(line)
def one_set_logistic(input_pars, file='', N=10, S=None):
""" Generates N time series of logistic systems according to input parameters and writes summary
of the statistics of time series to file."""
width_growth, noise = input_pars
line_stat = ''
for k in range(N):
# Set parameters.
params = random_parameter_set_logistic(S=S, width_growth=width_growth)
# Set initial condition.
params['initial_condition'] = np.random.uniform(0, 1, [S, 1])
# Set noise paramters.
if noise_implementation == NOISE.LANGEVIN_CONSTANT:
params['noise_constant'] = noise
elif noise_implementation == NOISE.LANGEVIN_LINEAR:
params['noise_linear'] = noise
line_stat += line_statistics(params, model=MODEL.GLV)
# Write data to file.
line = '%.3E,%.3E' % input_pars + line_stat + '\n'
with open(file, 'a') as f:
f.write(line)
def one_set_ibm(input_pars, file='', N=10, S=None):
""" Generates N time series of IBM according to input parameters and writes summary
of the statistics of time series to file."""
connectance, immigration, int_strength, sites = input_pars
line_stat = ''
for k in range(N):
# Set parameters.
params = random_parameter_set_ibm(S=S, minmigration=immigration, maxmigration=immigration,
connectance=connectance,
minint=-int_strength, maxint=int_strength)
# Set initial condition, assert it does not exceed the number of sites.
initcond = np.random.randint(0, int(0.66 * sites / S), S)
assert np.sum(initcond) <= sites
params['initial_condition'] = initcond
params['sites'] = sites
# Generate time series and its statistics.
line_stat += line_statistics(params, model=MODEL.IBM)
# Write data to file.
line = '%.3E,%.3E,%.3E,%.3E' % input_pars + line_stat + '\n'
with open(file, 'a') as f:
f.write(line)
def header_time_series(N, ibm=False):
""" Return string for header of statistical parameters of N time series."""
line = ""
# Statistical parameters for one time series.
subline = 'number_%d,' \
'variation_mean_%d,variation_std_%d,variation_min_%d,variation_max_%d,' \
'variationnorm_mean_%d,variationnorm_std_%d,variationnorm_min_%d,variationnorm_max_%d,' \
'JS_mean_%d,JS_std_%d,JS_min_%d,JS_max_%d,JS_stab_%d,' \
'log_width_%d,log_loc_%d,log_scale_%d,log_stat_%d,log_pval_%d,' \
'pareto_a_%d,pareto_loc_%d,pareto_scale_%d,pareto_stat_%d,pareto_pval_%d,' \
'pow_a_%d,pow_loc_%d,pow_scale_%d,pow_stat_%d,pow_pval_%d,' \
'tpow_a_%d,tpow_scale_%d,tpow_R_%d,tpow_p_%d,tpow_stat_%d,tpow_pval_%d,' \
'exp_loc_%d,exp_scale_%d,exp_stat_%d,exp_pval_%d,' \
'norm_loc_%d,norm_scale_%d,norm_stat_%d,norm_pval_%d'
Npars = 43 # number of paramters in line
# Add stability paramter if not for ibm.
if not ibm:
subline = ',stability_%d,' + subline
Npars += 1
# Add statistical parameters for number of time series N.
for i in range(1, N + 1):
line += subline % ((i,) * Npars)
return line
def setup_glv(file, N=10):
""" Adds header line for gLV time series."""
# Fixed parameters for line.
line = 'connectance,immigration,noise,interaction,max_cap' + header_time_series(N) + '\n'
# Write header to file.
with open(file, 'w') as f:
f.write(line)
def setup_logistic(file, N=10):
""" Adds header line for logistic time series."""
# Fixed parameters for line.
line = 'width_growth,noise' + header_time_series(N) + '\n'
# Write header to file.
with open(file, 'w') as f:
f.write(line)
def setup_ibm(file, N=10):
""" Adds header line for ibm time series."""
# Fixed parameters for line.
line = 'connectance,immigration,interaction,sites' + header_time_series(N, ibm=True) + '\n'
# Write header to file.
with open(file, 'w') as f:
f.write(line)
# Test functions
def test_absent_species_initial_condition(file):
connectance = 0.5
immigration = 0.1
int_strength = 0.5
noise = 0.01
max_cap = np.inf
setup_glv(file, N=2)
model = MODEL.MAX_IMMI
one_set_glv((connectance, immigration, noise, int_strength, max_cap), file=file,
N=2, S=200, model=model, absent_init=True)
def test_glv(file):
connectance = 0.2
immigration = 0.01
int_strength = 0.1
noise = 0.01
max_cap = np.inf
setup_glv(file, N=4)
one_set_glv((connectance, immigration, noise, int_strength, max_cap), file=file, N=4, S=10, model=MODEL.MAX_IMMI)
def test_ibm(file):
setup_ibm(file, N=1)
S = 200
sites = 1000
connectance = 0.3
immigration = 0.01
interaction_strength = 0.1
setup_ibm(file, N=3)
one_set_ibm((connectance, immigration, interaction_strength, sites), file=file, N=3, S=S)
# Main function performs different tests
#
def main():
# test_absent_species_initial_condition('test_absent_species_initial_condition.csv')
test_glv('test_glv.csv')
#test_ibm('test_ibm.csv')
if __name__ == "__main__":
main()
###Output
species_1 1.158024
species_2 1.007564
species_3 1.008717
species_4 0.846207
species_5 0.864310
species_6 1.205211
species_7 1.103887
species_8 1.074144
species_9 0.737011
species_10 1.088146
dtype: float64
species_1 0.114733
species_2 0.099826
species_3 0.099940
species_4 0.083839
species_5 0.085633
species_6 0.119408
species_7 0.109369
species_8 0.106422
species_9 0.073020
species_10 0.107810
dtype: float64
|
scratch work/Nonlinear Modifications by Hao/.ipynb_checkpoints/Scenario2 linear with reduced feature set without feature selection -checkpoint.ipynb | ###Markdown
Gradient descent algorithm for Scenario 2 In this part, we implement a gradient descent algorithm to optimize the objective loss function in Scenario 2:$$\min F := \min \frac{1}{2(n-1000)} \sum_{i=1000}^n (fbpredic(i) + a*tby(i) +b*ffr(i) + c*fta(i) - asp(i))^2$$Gradient descent: $$ \beta_k = \beta_{k-1} - \delta \nabla F, $$where the step size $\delta$ controls how far each iteration goes. Detailed plan First, split the data as train and test with 80% and 20% respectively. For the training part, we need the prophet() predicted price, which raises a couple of issues. One is that prophet() cannot predict too far into the future. The other is that we cannot call prophet() too many times, since this takes a lot of time. So we will use a sliding window strategy:1, Split the train data as train_1 and train_2, where train_1 is used as a sliding window to fit prophet() and give predictions on train_2. Train_2 is used to train the model we proposed above.2, After we have full-size (size of train_2) predictions from prophet(), we use gradient descent to fit the above model, extracting the coefficients of the features to make predictions on the testing data.
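###Markdown
Before the data work below, here is a minimal standalone sketch of the gradient descent update described above. It is not the code used later in this notebook (which fits the coefficients with scikit-learn's LinearRegression instead); `X` is an assumed feature matrix and `r` the assumed residual target `asp - fbpredic`.
###Code
import numpy as np
def gradient_descent(X, r, delta = 1e-6, n_iter = 1000):
    # beta holds the feature coefficients (a, b, c, ... in the loss F above)
    beta = np.zeros(X.shape[1])
    n = len(r)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - r) / n   # gradient of the squared-error loss
        beta -= delta * grad              # step along the negative gradient
    return beta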
###Code
import pandas as pd
import numpy as np
## For plotting
import matplotlib.pyplot as plt
from matplotlib import style
import datetime as dt
import seaborn as sns
from datetime import datetime
sns.set_style("whitegrid")
df= pd.read_csv('dff5.csv', parse_dates = True)
df = df[['ds', 'y', 'fbsp','diff', 'tby', 'ffr', 'fta', 'eps', 'div', 'per', 'une',
'wti', 'ppi', 'rfs']]
df
p = 0.9
# Train around 90% of dataset
cutoff = int((p*len(df)//100)*100)
df_train = df[:cutoff].copy()
df_test = df[cutoff:].copy()
#changing to datetime
df_train['ds'] = pd.to_datetime(df_train['ds'])
df_test['ds'] = pd.to_datetime(df_test['ds'])
df.columns
possible_features = ['ds', 'y', 'fbsp','diff', 'tby', 'ffr', 'fta', 'eps', 'div', 'per', 'une',
'wti', 'ppi', 'rfs']
from sklearn.linear_model import LinearRegression
reg = LinearRegression(fit_intercept=False, normalize=True, copy_X = True)
reg.fit(df_train[possible_features], df_train['y'] - df_train.fbsp)
coef = []
for i in range(len(possible_features)):
coef.append(np.round(reg.coef_[i],5))
print(coef)
pp_test = df_test.fbsp.copy() # predicted price on testing data
pp_train = df_train.fbsp.copy() # predicted price on training data
df_test1 = df_test[possible_features].copy()
df_train1 = df_train[possible_features].copy()
for i in range(len(possible_features)):
pp_test += coef[i] * df_test1[df_test1.columns[i]].ravel()
pp_train += coef[i] * df_train1[df_train1.columns[i]].ravel()
from sklearn.metrics import mean_squared_error as MSE
# MSE for test data
# Actual close price: df_test[:test_time].y
# Predicted price by prophet: pred_test
# Predicted price by tuning
mse1 = MSE(df_test.y, df_test.fbsp) #
mse2 = MSE(df_test.y, pp_test)
print(mse1,mse2)
# MSE for train data
mse3 = MSE(df_train.y, df_train.fbsp)
mse4 = MSE(df_train.y, pp_train)
print(mse3,mse4)
plt.figure(figsize=(18,10))
# plot the training data
plt.plot(df_train.ds,df_train.y,'b',
label = "Training Data")
plt.plot(df_train.ds, pp_train,'g-',
label = "Improved Fitted Values")
# plot the fit
plt.plot(df_train.ds, df_train.fbsp,'r-',
label = "FB Fitted Values")
# # plot the forecast
plt.plot(df_test.ds, df_test.fbsp,'r--',
label = "FB Forecast")
plt.plot(df_test.ds, pp_test,'g--',
label = "Improved Forecast")
plt.plot(df_test.ds,df_test.y,'b--',
label = "Test Data")
plt.legend(fontsize=14)
plt.xlabel("Date", fontsize=16)
plt.ylabel("SP&500 Close Price", fontsize=16)
plt.show()
###Output
_____no_output_____ |
notebooks/tutorial/icestick/TFF.ipynb | ###Markdown
Toggle Flip-Flop In this example we create a toggle flip-flop (TFF) from a d-flip-flop (DFF) and an inverter (the `~` operator below). In `Magma`, finite state machines can be constructed by composing combinational logic with register primitives, such as a `DFF` or `Register`.
###Code
import magma as m
m.set_mantle_target("ice40")
###Output
_____no_output_____
###Markdown
As before, we can use a native Python function to organize the definition of our TFF into a reusable component.
###Code
from mantle import DFF
class TFF(m.Circuit):
IO = ['O', m.Out(m.Bit)] + m.ClockInterface()
@classmethod
def definition(io):
# instance a dff to hold the state of the toggle flip-flop - this needs to be done first
dff = DFF()
# compute the next state as the not of the old state ff.O
io.O <= dff(~dff.O)
def tff():
return TFF()()
###Output
import lattice ice40
import lattice mantle40
###Markdown
Then we simply call this function inside our definition of the IceStick `main`.
###Code
from loam.boards.icestick import IceStick
icestick = IceStick()
icestick.Clock.on()
icestick.J3[0].rename('J3').output().on()
main = icestick.DefineMain()
main.J3 <= tff()
m.EndDefine()
###Output
_____no_output_____
###Markdown
We'll compile and build our program using the standard flow.
###Code
m.compile("build/tff", main)
%%bash
cd build
yosys -q -p 'synth_ice40 -top main -blif tff.blif' tff.v
arachne-pnr -q -d 1k -o tff.txt -p tff.pcf tff.blif
icepack tff.txt tff.bin
#iceprog tff.bin
###Output
/Users/hanrahan/git/magmathon/notebooks/tutorial/icestick/build
###Markdown
Let's inspect the generated verilog.
###Code
%cat build/tff.v
###Output
module TFF (output O, input CLK);
wire SB_DFF_inst0_Q;
wire SB_LUT4_inst0_O;
SB_DFF SB_DFF_inst0 (.C(CLK), .D(SB_LUT4_inst0_O), .Q(SB_DFF_inst0_Q));
SB_LUT4 #(.LUT_INIT(16'h5555)) SB_LUT4_inst0 (.I0(SB_DFF_inst0_Q), .I1(1'b0), .I2(1'b0), .I3(1'b0), .O(SB_LUT4_inst0_O));
assign O = SB_DFF_inst0_Q;
endmodule
module main (output J3, input CLKIN);
wire TFF_inst0_O;
TFF TFF_inst0 (.O(TFF_inst0_O), .CLK(CLKIN));
assign J3 = TFF_inst0_O;
endmodule
###Markdown
We can verify that our implementation is functioning correctly by using a logic analyzer.
###Code
%cat build/tff.pcf
###Output
set_io J3 62
set_io CLKIN 21
|
module5-sprint-challenge/LS_DS_Unit_4_Sprint_3_Challenge.ipynb | ###Markdown
Major Neural Network Architectures Challenge *Data Science Unit 4 Sprint 3 Challenge*In this sprint challenge, you'll explore some of the cutting edge of Data Science. This week we studied several famous neural network architectures: recurrent neural networks (RNNs), long short-term memory networks (LSTMs), convolutional neural networks (CNNs), and Generative Adversarial Networks (GANs). In this sprint challenge, you will revisit these models. Remember, we are testing your knowledge of these architectures, not your ability to fit a model with high accuracy. __*Caution:*__ these approaches can be pretty heavy computationally. All problems were designed so that you should be able to achieve results within at most 5-10 minutes of runtime on Colab or a comparable environment. If something is running longer, double-check your approach! Challenge Objectives*You should be able to:** Part 1: Train an RNN classification model* Part 2: Utilize a pre-trained CNN for object detection* Part 3: Describe the components of an autoencoder* Part 4: Describe yourself as a Data Scientist and elucidate your vision of AI Part 1 - RNNsUse an RNN/LSTM to fit a multi-class classification model on Reuters news articles to distinguish topics of articles. The data is already encoded properly for use in an RNN model. Your Tasks: - Use Keras to fit a predictive model, classifying news articles into topics. - Report your overall score and accuracy. For reference, the [Keras IMDB sentiment classification example](https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py) will be useful, as well as the RNN code we used in class.__*Note:*__ Focus on getting a running model, not on maxing accuracy with extreme data size or epoch numbers. Only revisit and push accuracy if you get everything else done!
###Code
from tensorflow.keras.datasets import reuters
(X_train, y_train), (X_test, y_test) = reuters.load_data(num_words=None,
skip_top=0,
maxlen=None,
test_split=0.2,
seed=723812,
start_char=1,
oov_char=2,
index_from=3)
# Demo of encoding
word_index = reuters.get_word_index(path="reuters_word_index.json")
print(f"Iran is encoded as {word_index['iran']} in the data")
print(f"London is encoded as {word_index['london']} in the data")
print("Words are encoded as numbers in our dataset.")
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Embedding, LSTM
batch_size = 46
max_features = len(word_index.values())
maxlen = 200
print(len(X_train), 'train sequences')
print(len(X_test), 'test sequences')
print('Pad sequences (samples x time)')
X_train = sequence.pad_sequences(X_train, maxlen=maxlen)
X_test = sequence.pad_sequences(X_test, maxlen=maxlen)
print('X_train shape:', X_train.shape)
print('X_test shape:', X_test.shape)
print('Build model...')
# TODO - your code!
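# One possible hedged sketch (not the required solution): an Embedding layer,
# a single LSTM, and a softmax over the 46 Reuters topics. The "+ 4" on the
# vocabulary size is an assumption to cover the reserved start/oov/padding
# indices introduced by index_from above.
model = Sequential()
model.add(Embedding(max_features + 4, 64, input_length=maxlen))
model.add(LSTM(64, dropout=0.2))
model.add(Dense(46, activation='softmax'))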
# You should only run this cell once your model has been properly configured
model.compile(loss='sparse_categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print('Train...')
model.fit(X_train, y_train,
batch_size=batch_size,
epochs=1,
validation_data=(X_test, y_test))
score, acc = model.evaluate(X_test, y_test,
batch_size=batch_size)
print('Test score:', score)
print('Test accuracy:', acc)
###Output
_____no_output_____
###Markdown
Sequence Data Question *Describe the `pad_sequences` method used on the training dataset. What does it do? Why do you need it?*Please add your answer in markdown here. RNNs versus LSTMs *What are the primary motivations behind using Long Short-Term Memory (LSTM) cells over traditional Recurrent Neural Networks?*Please add your answer in markdown here. RNN / LSTM Use Cases *Name and Describe 3 Use Cases of LSTMs or RNNs and why they are suited to that use case*Please add your answer in markdown here. Part 2 - CNNs Find the Frog Time to play "find the frog!" Use Keras and ResNet50 (pre-trained) to detect which of the following images contain frogs:
###Code
!pip install google_images_download
from google_images_download import google_images_download
response = google_images_download.googleimagesdownload()
arguments = {"keywords": "animal pond", "limit": 4, "print_urls": True}
absolute_image_paths = response.download(arguments)
###Output
_____no_output_____
###Markdown
At the time of writing at least a few do, but since the Internet changes - it is possible yours won't. You can easily verify this yourself, and (once you have working code) increase the number of images you pull to be more sure of getting a frog. Your goal is to validly run ResNet50 on the input images - don't worry about tuning or improving the model.*Hint* - ResNet50 doesn't just return "frog". The three labels it has for frogs are: `bullfrog, tree frog, tailed frog`*Stretch goal* - also check for fish.
###Code
# You've got something to do in this cell. ;)
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions
def process_img_path(img_path):
return image.load_img(img_path, target_size=(224, 224))
def img_contains_frog(img):
""" Scans image for Frogs
Should return a integer with the number of frogs detected in an
image.
Inputs:
---------
img: Precrossed image ready for prediction. The `process_img_path` function should already be applied to the image.
Returns:
---------
frogs (boolean): TRUE or FALSE - There are frogs in the image.
"""
# Your Code Here
# TODO - your code!
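    # A minimal illustrative sketch (not the only valid approach): run ResNet50 with
    # ImageNet weights on the already-preprocessed image and check the decoded top-5
    # labels for the three frog classes. In practice the model should be built once,
    # outside this function, and reused; it is created here only to keep the sketch
    # self-contained.
    resnet_model = ResNet50(weights='imagenet')
    preds = resnet_model.predict(img)
    top_labels = [label for (_, label, _) in decode_predictions(preds, top=5)[0]]
    frogs = any(label in ('bullfrog', 'tree_frog', 'tailed_frog') for label in top_labels)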
return frogs
###Output
_____no_output_____
###Markdown
Displaying PredictionsThe next two cells are just to display some of your predictions. You will not be graded on their output.
###Code
import matplotlib.pyplot as plt
def display_predictions(urls):
image_data = []
frogs = []
for url in urls:
x = process_img_path(url)
x = image.img_to_array(x)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
image_data.append(x)
frogs.append(img_contains_frog(x))
return image_data,frogs
f, axarr = plt.subplots(2,2)
imgs, frogs = display_predictions(absolute_image_paths[0]['animal pond'])
for x,y in [(0,0),(0,1), (1,0), (1,1)]:
axarr[x,y].imshow(np.squeeze(imgs[x], axis=0) / 255)
axarr[x,y].set_title(f"Frog: {frogs[x]}")
axarr[x,y].axis('off')
###Output
_____no_output_____
###Markdown
Part 3 - AutoencodersDescribe a use case for an autoencoder given that an autoencoder tries to predict its own input. __*Your Answer:*__ Part 4 - More... Answer the following questions, with a target audience of a fellow Data Scientist:- What do you consider your strongest area, as a Data Scientist?- What area of Data Science would you most like to learn more about, and why?- Where do you think Data Science will be in 5 years?- What are the threats posed by AI to our society?- How do you think we can counteract those threats? - Do you think achieving General Artificial Intelligence is ever possible?A few sentences per answer is fine - only elaborate if time allows. Congratulations! Thank you for your hard work, and congratulations! You've learned a lot, and you should proudly call yourself a Data Scientist.
###Code
from IPython.display import HTML
HTML("""<iframe src="https://giphy.com/embed/26xivLqkv86uJzqWk" width="480" height="270" frameBorder="0" class="giphy-embed" allowFullScreen></iframe><p><a href="https://giphy.com/gifs/mumm-champagne-saber-26xivLqkv86uJzqWk">via GIPHY</a></p>""")
###Output
_____no_output_____ |
notebooks/11.0-hwant-AR1-sim.ipynb | ###Markdown
Read in data
###Code
data = pd.read_csv("D:\\Users\\Nicholas\\Projects\\repos\\spc_charts\\data\\raw\\ar1.csv")
data.plot.line(figsize = (15, 5))
###Output
_____no_output_____
###Markdown
Hypothesis test for autocorrelation
###Code
_ = plot_acf(data.x, lags=10)
_ = plot_pacf(data.x, lags=10)
# ljungbox test for autocorrelation
st.ljungbox_(data.x)
###Output
Statistics=78.355, p=0.000
There is correlation up to lag 10 (reject H0)
###Markdown
Hypothesis test for normality
###Code
fig = sm.qqplot(data.x, fit=True, line='45')
plt.show()
# shapiro-wilks test for normality
st.shapiro_wilks_(data.x, alpha=0.05)
# jarque-bera test for normality
st.jarque_bera_(data.x, alpha=0.05)
###Output
Statistics=3.292, p=0.193, skew=-0.444, kurt=2.956
Sample looks Gaussian (fail to reject H0)
###Markdown
Get in control mean
###Code
# Get in-control mean
data.x.mean()
in_control_mean = data.x.mean()
###Output
_____no_output_____
###Markdown
Get in control moving range
###Code
MR = cp.calculate_MR(data.x)
in_control_sigma = cp.estimate_sigma_from_MR(MR)
in_control_sigma
###Output
_____no_output_____
###Markdown
Build individual control chart
###Code
x_ind_params_df = cp.x_ind_params(x = data.x, sigma = in_control_sigma, center = in_control_mean)
x_ind_params_df = x_ind_params_df.reset_index()
pf.plot_control_chart(
data = x_ind_params_df,
index = 'index',
obs = 'obs',
UCL = 'UCL',
center = 'Center',
LCL = 'LCL',
title='Individual Measurement Chart',
ylab='x',
xlab='Index',
all_dates=False,
rot=0)
###Output
_____no_output_____
###Markdown
Try simple AR1 model
###Code
data2 = data.copy()
data2['x1'] = data2['x'].shift(periods = 1)
data2.dropna(inplace = True)
features = ['x1']
lm = LinearRegression(fit_intercept=True)
lm.fit(data2.loc[:, features].values, data2['x'].values)
lm.intercept_, lm.coef_
# get residuals
residuals = pd.Series(data2['x'].values - lm.predict(data2.loc[:, features].values))
###Output
_____no_output_____
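###Markdown
As a cross-check, the same AR(1) fit can be obtained directly from statsmodels (a sketch; assumes a statsmodels version that provides `AutoReg`, i.e. >= 0.11). The estimated constant and lag-1 coefficient should be close to `lm.intercept_` and `lm.coef_` above.
###Code
from statsmodels.tsa.ar_model import AutoReg
ar1_fit = AutoReg(data.x.values, lags=1).fit()
ar1_fit.params  # [const, lag-1 coefficient]
###Output
_____no_output_____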
###Markdown
Test for Autocorrelation
###Code
_ = plot_acf(residuals, lags=10)
_ = plot_pacf(residuals, lags=10)
st.ljungbox_(residuals, print_extra=True)
###Output
Statistics=8.998, p=0.532
No auto correlation up to lag 10 (fail to reject H0)
[0.0909722 1.16253197 1.31944111 2.58175801 4.71417027 5.84128895
6.30127625 6.88265835 8.99677041 8.99789718]
[0.76294482 0.55918999 0.72452185 0.63005799 0.45174901 0.44120198
0.50504405 0.54934438 0.43757236 0.53230315]
###Markdown
Hypothesis test for normality
###Code
fig = sm.qqplot(residuals, fit=True, line='45')
plt.show()
st.shapiro_wilks_(residuals)
st.jarque_bera_(residuals)
###Output
Statistics=0.780, p=0.677, skew=-0.060, kurt=2.582
Sample looks Gaussian (fail to reject H0)
###Markdown
Individual control chart for residuals
###Code
in_control_mean = 0
MR = cp.calculate_MR(residuals)
in_control_sigma = cp.estimate_sigma_from_MR(MR)
in_control_mean, in_control_sigma
full_residuals = residuals
x_ind_params_df = cp.x_ind_params(x=full_residuals, sigma = in_control_sigma, center=in_control_mean)
x_ind_params_df = x_ind_params_df.reset_index()
pf.plot_control_chart(
data = x_ind_params_df,
index = 'index',
obs = 'obs',
UCL = 'UCL',
center = 'Center',
LCL = 'LCL',
title='Individual Measurement Chart',
ylab='Residuals',
xlab='Index',
all_dates=False,
rot=0)
###Output
_____no_output_____
###Markdown
Plot original measure chart
###Code
x_ind_params_df2 = fn.convert_residuals_to_original(chart_df = x_ind_params_df,
predictions = lm.predict(data2.loc[:, features].values))
pf.plot_control_chart(
data = x_ind_params_df2,
index = 'index',
obs = 'obs',
UCL = 'UCL',
center = 'Center',
LCL = 'LCL',
drawstyle='default',
title='Individual Measurement Chart',
ylab='x',
xlab='Index',
all_dates=False,
rot=0)
###Output
_____no_output_____ |
Scripts/TrainingScripts/IDD_Dv3p_mobilenetV2_upsampledby2_custom_model_alpha0.35_bs8.ipynb | ###Markdown
TF data API
###Code
train_X_y_paths = list(zip(train_x, train_y))
val_X_y_paths = list(zip(val_x, val_y))
IMG_SIZE = 512
def parse_x_y(img_path,mask_path):
image = tf.io.read_file(img_path)
image = tf.image.decode_png(image, channels=3)
image = tf.image.convert_image_dtype(image, tf.uint8)
mask = tf.io.read_file(mask_path)
mask = tf.image.decode_png(mask, channels=1)
return {'image': image, 'segmentation_mask': mask}
@tf.function
def normalize(input_image: tf.Tensor, input_mask: tf.Tensor) -> tuple:
input_image = tf.cast(input_image, tf.float32) / 255.0
return input_image, input_mask
@tf.function
def load_image_train(datapoint: dict) -> tuple:
input_image = tf.image.resize(datapoint['image'], (IMG_SIZE, IMG_SIZE))
input_mask = tf.image.resize(datapoint['segmentation_mask'], (IMG_SIZE, IMG_SIZE),method='nearest')
# if tf.random.uniform(()) > 0.5:
# input_image = tf.image.flip_left_right(input_image)
# input_mask = tf.image.flip_left_right(input_mask)
input_image, input_mask = normalize(input_image, input_mask)
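    # tf.one_hot adds a depth axis to the (H, W, 1) integer mask, giving (H, W, 1, 3);
    # the reshape below drops the singleton channel so the label tensor is (H, W, num_classes).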
input_mask = tf.one_hot(input_mask, 3)
input_mask = tf.reshape(input_mask, (IMG_SIZE, IMG_SIZE, 3))
return input_image, input_mask
AUTOTUNE = tf.data.experimental.AUTOTUNE
SEED = 42
BATCH_SIZE = 8
BUFFER_SIZE = 2*BATCH_SIZE
train_dataset = tf.data.Dataset.from_tensor_slices((train_x,train_y))
train_dataset = train_dataset.map(parse_x_y)
val_dataset = tf.data.Dataset.from_tensor_slices((val_x,val_y))
val_dataset =val_dataset.map(parse_x_y)
dataset = {"train": train_dataset, "val": val_dataset}
dataset['train'] = dataset['train'].map(
load_image_train,
num_parallel_calls=tf.data.experimental.AUTOTUNE
).shuffle(buffer_size=BUFFER_SIZE, seed=SEED).batch(BATCH_SIZE).prefetch(buffer_size=AUTOTUNE)
dataset['val'] = dataset['val'].map(
load_image_train,
num_parallel_calls=tf.data.experimental.AUTOTUNE
).shuffle(buffer_size=BUFFER_SIZE, seed=SEED).batch(BATCH_SIZE).prefetch(buffer_size=AUTOTUNE)
for image,label in dataset['train'].take(1):
print("Train image: ",image.shape)
print("Train label: ",label.shape,"\n\tunique values", np.unique(label[0]))
for image,label in dataset['val'].take(1):
print("Val image: ",image.shape)
print("Val label: ",label.shape,"\n\tunique values", np.unique(label[0]))
import matplotlib.pyplot as plt
def display_sample(display_list):
"""Show side-by-side an input image,
the ground truth and the prediction.
"""
plt.figure(figsize=(7, 7))
title = ['Input Image', 'True Mask', 'Predicted Mask']
for i in range(len(display_list)):
plt.subplot(1, len(display_list), i+1)
plt.title(title[i])
plt.imshow(tf.keras.preprocessing.image.array_to_img(display_list[i]))
plt.axis('off')
plt.show()
i=0
for image, mask in dataset['train'].take(5):
i=i+1
# print(i)
sample_image, sample_mask = image, mask
t = np.argmax(sample_mask[0],axis=-1)
t = tf.expand_dims(t,axis=-1)
display_sample([sample_image[0],t])
%env SM_FRAMEWORK=tf.keras
# !pip install keras-segmentation
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
def convolution_block(
block_input,
num_filters=256,
kernel_size=3,
dilation_rate=1,
padding="same",
use_bias=False,
):
x = layers.Conv2D(
num_filters,
kernel_size=kernel_size,
dilation_rate=dilation_rate,
padding="same",
use_bias=use_bias,
kernel_initializer=keras.initializers.HeNormal(),
)(block_input)
x = layers.BatchNormalization()(x)
return tf.nn.relu(x)
def DilatedSpatialPyramidPooling(dspp_input):
dims = dspp_input.shape
x = layers.AveragePooling2D(pool_size=(dims[-3], dims[-2]))(dspp_input)
x = convolution_block(x, kernel_size=1, use_bias=True)
out_pool = layers.UpSampling2D(
size=(dims[-3] // x.shape[1], dims[-2] // x.shape[2]), interpolation="bilinear",
)(x)
out_1 = convolution_block(dspp_input, kernel_size=1, dilation_rate=1)
out_6 = convolution_block(dspp_input, kernel_size=3, dilation_rate=6)
out_12 = convolution_block(dspp_input, kernel_size=3, dilation_rate=12)
out_18 = convolution_block(dspp_input, kernel_size=3, dilation_rate=18)
x = layers.Concatenate(axis=-1)([out_pool, out_1, out_6, out_12, out_18])
output = convolution_block(x, kernel_size=1)
return output
###Output
_____no_output_____
###Markdown
Scratch code (kept as a markdown cell) used to inspect the MobileNetV2 encoder layers:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers import Conv2D, Activation, BatchNormalization
from tensorflow.keras.layers import UpSampling2D, Input, Concatenate
from tensorflow.keras.models import Model
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau
from tensorflow.keras.metrics import Recall, Precision
from tensorflow.keras import backend as K
inputs = Input(shape=(512, 512, 3), name="input_image")
encoder = MobileNetV2(
    input_tensor=(inputs),
    weights="pretrained_weights/mobilenet_v2_weights_tf_dim_ordering_tf_kernels_0.35_224_no_top.h5",
    include_top=False, alpha=0.35)
encoder.summary()
for i in encoder.layers:
    if "relu" in i.name or "input" in i.name:
        print(i.name, i.output.shape)
skip_connection_names = ["input_image", "block_1_expand_relu", "block_3_expand_relu", "block_6_expand_relu"]
###Code
from tensorflow.keras.layers import Conv2D, Activation, BatchNormalization
from tensorflow.keras.layers import UpSampling2D, Input, Concatenate
from tensorflow.keras.models import Model
from tensorflow.keras.applications import MobileNetV2
def DeeplabV3Plus(image_size, num_classes):
inputs = keras.Input(shape=(image_size, image_size, 3),name="input_image")
encoder = MobileNetV2(
input_tensor=inputs,
weights="pretrained_weights/mobilenet_v2_weights_tf_dim_ordering_tf_kernels_0.35_224_no_top.h5",
include_top=False, alpha=0.35)
skip_connection_names = ["input_image", "block_1_expand_relu", "block_3_expand_relu", "block_6_expand_relu"]
encoder_output = encoder.get_layer("block_13_expand_relu").output
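    # Assumption: block_13_expand_relu is the last stride-16 feature map of MobileNetV2
    # (32x32 for a 512x512 input), so ASPP runs at output stride 16 and the decoder
    # below upsamples by 2 at each of the four skip connections to recover full resolution.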
x = DilatedSpatialPyramidPooling(encoder_output)
t = encoder.get_layer("input_image").output
f = [16, 32, 48, 64]
for i in range(1, len(skip_connection_names)+1, 1):
x_skip = encoder.get_layer(skip_connection_names[-i]).output
x = UpSampling2D((2, 2),interpolation="bilinear")(x)
x = Concatenate()([x, x_skip])
x = Conv2D(f[-i], (3, 3), padding="same")(x)
x = BatchNormalization()(x)
x = Activation("relu")(x)
x = Conv2D(f[-i], (3, 3), padding="same")(x)
x = BatchNormalization()(x)
x = Activation("relu")(x)
x = Conv2D(num_classes, (1, 1), padding="same")(x)
x = Activation("softmax")(x)
model = Model(inputs, x)
return model
model = DeeplabV3Plus(image_size=IMG_SIZE, num_classes=3)
model.summary()
# from tensorflow.keras.applications import MobileNetV2
# encoder = MobileNetV2(input_shape=[512,512,3], weights="pretrained_weights/mobilenet_v2_weights_tf_dim_ordering_tf_kernels_0.35_224_no_top.h5", include_top=False, alpha=0.35)
# # encoder.load_weights("pretrained_weights/mobilenet_v2_weights_tf_dim_ordering_tf_kernels_0.35_224_no_top.h5")
from segmentation_models.losses import cce_jaccard_loss, dice_loss, JaccardLoss
from segmentation_models.metrics import iou_score, f1_score, precision, recall
ls = dice_loss + cce_jaccard_loss
metrics = [precision, recall, f1_score, iou_score]
# from tensorflow.keras.models import load_model
# model = load_model('IDD_mobilenetV2_edge/ckpt_path/350.h5',
# custom_objects={'dice_loss_plus_categorical_crossentropy_plus_jaccard_loss':ls,
# 'precision':precision, 'recall':recall, 'f1-score':f1_score, 'iou_score':iou_score})
import os, time, keras
%env SM_FRAMEWORK=tf.keras
import numpy as np
import tensorflow as tf
from segmentation_models.losses import cce_jaccard_loss, dice_loss, JaccardLoss
from segmentation_models.metrics import iou_score, f1_score, precision, recall
from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau, CSVLogger, EarlyStopping
""" Hyperparamaters """
BATCH_SIZE = 8
epochs = 1000
base_dir = 'RESULTS/IDD_Dv3p_mobilenetV2_alpha0.35_custom_bs8'
if not os.path.exists(base_dir):
os.mkdir(base_dir)
os.mkdir(f"{base_dir}/ckpt_path")
csv_path = f"{base_dir}/history.csv"
""" callbacks """
root_logdir = os.path.join(os.curdir, f"{base_dir}/logs","fit","")
def get_run_logdir():
run_id = time.strftime("run_%Y_%m_%d-%H_%M_%S")
return os.path.join(root_logdir, run_id)
run_logdir = get_run_logdir()
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir, histogram_freq=1,profile_batch='10,15')
checkpoint_filepath = f'{base_dir}/'+'ckpt_path/{epoch}.h5'
model_checkpoint_callback = ModelCheckpoint(
filepath=checkpoint_filepath,
save_weights_only=False,
# monitor='val_iou_score',
# mode='max',
verbose = 1,
period = 2,
save_best_only=False
)
callbacks = [
model_checkpoint_callback,
ReduceLROnPlateau(monitor="val_loss", patience=5, factor=0.1, verbose=1),
CSVLogger(csv_path),
# EarlyStopping(monitor="val_loss", patience=10),
tensorboard_cb
]
""" steps per epochs """
train_steps = len(train_x)//BATCH_SIZE
if len(train_x) % BATCH_SIZE != 0:
train_steps += 1
test_steps = len(val_x)//BATCH_SIZE
if len(val_x) % BATCH_SIZE != 0:
test_steps += 1
print("train_steps", train_steps, "test_steps",test_steps)
# """ Model training """
# for layer in model.layers:
# if layer.name == "global_average_pooling2d":
# break
# else:
# layer.trainable = False
# for layer in model.layers:
# print(layer.name,layer.trainable)
model.compile(
loss=ls,
optimizer= "adam", #tf.keras.optimizers.Adam(lr),
metrics=metrics
)
# model.summary()
# pretrain model decoder
history = model.fit(
dataset["train"],
validation_data=dataset["val"],
epochs=1000,
initial_epoch = 0,
steps_per_epoch=train_steps,
validation_steps=test_steps,
callbacks=callbacks
)
###Output
Epoch 1/1000
|
3-object-tracking-and-localization/slam-project/2. Omega and Xi, Constraints.ipynb | ###Markdown
Omega and XiTo implement Graph SLAM, a matrix and a vector (omega and xi, respectively) are introduced. The matrix is square and labelled with all the robot poses (xi) and all the landmarks (Li). Every time you make an observation, for example, as you move between two poses by some distance `dx` and can relate those two positions, you can represent this as a numerical relationship in these matrices.It's easiest to see how these work in an example. Below you can see a matrix representation of omega and a vector representation of xi.Next, let's look at a simple example that relates 3 poses to one another. * When you start out in the world most of these values are zeros or contain only values from the initial robot position* In this example, you have been given constraints, which relate these poses to one another* Constraints translate into matrix valuesIf you have ever solved linear systems of equations before, this may look familiar, and if not, let's keep going! Solving for xTo "solve" for all these x values, we can use linear algebra; all the values of x are in the vector `mu` which can be calculated as a product of the inverse of omega times xi.---**You can confirm this result for yourself by executing the math in the cell below.**
###Code
import numpy as np
# define omega and xi as in the example
omega = np.array([[1,0,0],
[-1,1,0],
[0,-1,1]])
xi = np.array([[-3],
[5],
[3]])
# calculate the inverse of omega
omega_inv = np.linalg.inv(np.matrix(omega))
# calculate the solution, mu
mu = omega_inv*xi
# print out the values of mu (x0, x1, x2)
print(mu)
###Output
[[-3.]
[ 2.]
[ 5.]]
|
ipynb/example/2_simulation-shotgun.ipynb | ###Markdown
Description* Time to make a simple SIP data simulation with the [dataset](./1_dataset.ipynb) that you already created> Make sure you have created the dataset before trying to run this notebook Setting variables* "workDir" is the path to the working directory for this analysis (where the files will be downloaded to) * **NOTE:** MAKE SURE to modify this path to the directory where YOU want to run the example.* "nprocs" is the number of processors to use (3 by default, since only 3 genomes). Change this if needed.
###Code
workDir = '../../t/SIPSim_example/'
nprocs = 3
###Output
_____no_output_____
###Markdown
Init
###Code
import os
# Note: you will need to install `rpy2.ipython` and the necessary R packages (see next cell)
%load_ext rpy2.ipython
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
workDir = os.path.abspath(workDir)
if not os.path.isdir(workDir):
os.makedirs(workDir)
%cd $workDir
genomeDir = os.path.join(workDir, 'genomes_rn')
###Output
/home/nick/notebook/SIPSim/t/SIPSim_example
###Markdown
Experimental design* How many gradients?* Which are labeled treatments & which are controls?* For this tutorial, we'll keep things simple and just simulate one control & one treatment * For the labeled treatment, 34% of the taxa (1 of 3) will incorporate 50% isotope The script below ("`SIPSim incorp_config_example`") is helpful for making simple experimental designs
###Code
%%bash
source activate SIPSim
# creating example config
SIPSim incorp_config_example \
--percTaxa 34 \
--percIncorpUnif 50 \
--n_reps 1 \
> incorp.config
!cat incorp.config
###Output
[1]
# baseline: no incorporation
treatment = control
[[intraPopDist 1]]
distribution = uniform
[[[start]]]
[[[[interPopDist 1]]]]
distribution = uniform
start = 0
end = 0
[[[end]]]
[[[[interPopDist 1]]]]
distribution = uniform
start = 0
end = 0
[2]
# 'treatment' community: possible incorporation
treatment = labeled
max_perc_taxa_incorp = 34
[[intraPopDist 1]]
distribution = uniform
[[[start]]]
[[[[interPopDist 1]]]]
start = 50
distribution = uniform
end = 50
[[[end]]]
[[[[interPopDist 1]]]]
start = 50
distribution = uniform
end = 50
###Markdown
Pre-fractionation communities* What is the relative abundance of taxa in the pre-fractionation samples?
###Code
%%bash
source activate SIPSim
SIPSim communities \
--config incorp.config \
./genomes_rn/genome_index.txt \
> comm.txt
!cat comm.txt
###Output
library taxon_name rel_abund_perc rank
1 Escherichia_coli_1303 68.253282118 1
1 Clostridium_ljungdahlii_DSM_13528 28.910126208 2
1 Streptomyces_pratensis_ATCC_33331 2.836591673 3
2 Escherichia_coli_1303 92.501402170 1
2 Clostridium_ljungdahlii_DSM_13528 6.504836572 2
2 Streptomyces_pratensis_ATCC_33331 0.993761258 3
###Markdown
**Note:** "library" = gradient Simulating gradient fractions* BD size ranges for each fraction (& start/end of the total BD range)
###Code
%%bash
source activate SIPSim
SIPSim gradient_fractions \
--BD_min 1.67323 \
--BD_max 1.7744 \
comm.txt \
> fracs.txt
!head -n 6 fracs.txt
###Output
library fraction BD_min BD_max fraction_size
1 1 1.673 1.678 0.005
1 2 1.678 1.681 0.003
1 3 1.681 1.685 0.004
1 4 1.685 1.688 0.003
1 5 1.688 1.691 0.003
###Markdown
Simulating fragments* Simulating shotgun-fragments* Fragment length distribution: skewed-normal Primer sequences (wait... what?)* If you were to simulate amplicons, instead of shotgun fragments, you can use something like the following:
###Code
# primers = """>515F
# GTGCCAGCMGCCGCGGTAA
# >806R
# GGACTACHVGGGTWTCTAAT
# """
# F = os.path.join(workDir, '515F-806R.fna')
# with open(F, 'wb') as oFH:
# oFH.write(primers)
# print 'File written: {}'.format(F)
###Output
_____no_output_____
###Markdown
Simulation
###Code
%%bash -s $genomeDir
source activate SIPSim
# skewed-normal
SIPSim fragments \
$1/genome_index.txt \
--fp $1 \
--fld skewed-normal,9000,2500,-5 \
--flr None,None \
--nf 1000 \
--debug \
--tbl \
> shotFrags.txt
!head -n 5 shotFrags.txt
!tail -n 5 shotFrags.txt
###Output
taxon_name scaffoldID fragStart fragLength fragGC
Clostridium_ljungdahlii_DSM_13528 NC_014328_1_Clostridium_ljungdahlii_DSM_13528 1296246 5561 33.26739795
Clostridium_ljungdahlii_DSM_13528 NC_014328_1_Clostridium_ljungdahlii_DSM_13528 4068528 5412 33.2779009608
Clostridium_ljungdahlii_DSM_13528 NC_014328_1_Clostridium_ljungdahlii_DSM_13528 2495157 7520 32.6329787234
Clostridium_ljungdahlii_DSM_13528 NC_014328_1_Clostridium_ljungdahlii_DSM_13528 897751 7963 31.2193896773
Streptomyces_pratensis_ATCC_33331 NC_016114_1_Streptomyces_pratensis_ATCC_33331 250918 8676 70.7353619179
Streptomyces_pratensis_ATCC_33331 NC_016114_1_Streptomyces_pratensis_ATCC_33331 724379 4989 72.2589697334
Streptomyces_pratensis_ATCC_33331 NC_016114_1_Streptomyces_pratensis_ATCC_33331 7086109 7293 69.5598519128
Streptomyces_pratensis_ATCC_33331 NC_016114_1_Streptomyces_pratensis_ATCC_33331 3183927 7265 72.6496902959
Streptomyces_pratensis_ATCC_33331 NC_016114_1_Streptomyces_pratensis_ATCC_33331 6176829 5531 73.9106852287
###Markdown
Plotting fragments
###Code
%%R -w 700 -h 350
df = read.delim('shotFrags.txt')
p = ggplot(df, aes(fragGC, fragLength, color=taxon_name)) +
geom_density2d() +
scale_color_discrete('Taxon') +
labs(x='Fragment G+C', y='Fragment length (bp)') +
theme_bw() +
theme(
text = element_text(size=16)
)
plot(p)
###Output
_____no_output_____
###Markdown
**Note:** for information on what's going on in this config file, use the command: `SIPSim isotope_incorp -h` Converting fragments to a 2d-KDE* Estimating the joint probability for fragment G+C & length
###Code
%%bash
source activate SIPSim
SIPSim fragment_KDE \
shotFrags.txt \
> shotFrags_kde.pkl
!ls -thlc shotFrags_kde.pkl
###Output
-rw-rw-r-- 1 nick nick 49K Jul 13 14:56 shotFrags_kde.pkl
###Markdown
* **Note:** The generated list of KDEs (1 per taxon per gradient) are in a binary file format * To get a table of length/G+C values, use the command: `SIPSim KDE_sample` Adding diffusion * Simulating the BD distribution of fragments as Gaussian distributions. * One Gaussian distribution per homogeneous set of DNA molecules (same G+C and length) > See the README if you get `MKL` errors with the next step and re-run the `fragment KDE` generation step
###Code
%%bash
source activate SIPSim
SIPSim diffusion \
shotFrags_kde.pkl \
--np 3 \
> shotFrags_kde_dif.pkl
!ls -thlc shotFrags_kde_dif.pkl
###Output
-rw-rw-r-- 1 nick nick 12M Jul 13 14:56 shotFrags_kde_dif.pkl
###Markdown
Plotting fragment distribution w/ and w/out diffusion Making a table of fragment values from KDEs
###Code
n = 100000
%%bash -s $n
source activate SIPSim
SIPSim KDE_sample -n $1 shotFrags_kde.pkl > shotFrags_kde.txt
SIPSim KDE_sample -n $1 shotFrags_kde_dif.pkl > shotFrags_kde_dif.txt
ls -thlc shotFrags_kde*.txt
###Output
-rw-rw-r-- 1 nick nick 4.2M Jul 13 14:56 shotFrags_kde_dif.txt
-rw-rw-r-- 1 nick nick 4.2M Jul 13 14:56 shotFrags_kde.txt
###Markdown
Plotting* plotting KDE with or without diffusion added
###Code
%%R
df1 = read.delim('shotFrags_kde.txt', sep='\t')
df2 = read.delim('shotFrags_kde_dif.txt', sep='\t')
df1$data = 'no diffusion'
df2$data = 'diffusion'
df = rbind(df1, df2) %>%
gather(Taxon, BD, Clostridium_ljungdahlii_DSM_13528,
Escherichia_coli_1303, Streptomyces_pratensis_ATCC_33331) %>%
mutate(Taxon = gsub('_(ATCC|DSM)', '\n\\1', Taxon))
df %>% head(n=3)
%%R -w 800 -h 300
p = ggplot(df, aes(BD, fill=data)) +
geom_density(alpha=0.25) +
facet_wrap( ~ Taxon) +
scale_fill_discrete('') +
theme_bw() +
theme(
text=element_text(size=16),
axis.title.y = element_text(vjust=1),
axis.text.x = element_text(angle=50, hjust=1)
)
plot(p)
###Output
_____no_output_____
###Markdown
Adding diffusive boundary layer (DBL) effects* 'smearing' effects
###Code
%%bash
source activate SIPSim
SIPSim DBL \
shotFrags_kde_dif.pkl \
--np 3 \
> shotFrags_kde_dif_DBL.pkl
# viewing DBL logs
!ls -thlc *pkl
###Output
-rw-rw-r-- 1 nick nick 12M Jul 13 14:56 shotFrags_kde_dif_DBL.pkl
-rw-rw-r-- 1 nick nick 12M Jul 13 14:56 shotFrags_kde_dif.pkl
-rw-rw-r-- 1 nick nick 49K Jul 13 14:56 shotFrags_kde.pkl
-rw-rw-r-- 1 nick nick 23M Jul 13 14:55 shotFrags_KDE_dif_DBL_inc.pkl
###Markdown
Adding isotope incorporation* Using the config file produced in the Experimental Design section
###Code
%%bash
source activate SIPSim
SIPSim isotope_incorp \
--comm comm.txt \
--np 3 \
shotFrags_kde_dif_DBL.pkl \
incorp.config \
> shotFrags_KDE_dif_DBL_inc.pkl
!ls -thlc *.pkl
###Output
-rw-rw-r-- 1 nick nick 23M Jul 13 14:56 shotFrags_KDE_dif_DBL_inc.pkl
-rw-rw-r-- 1 nick nick 12M Jul 13 14:56 shotFrags_kde_dif_DBL.pkl
-rw-rw-r-- 1 nick nick 12M Jul 13 14:56 shotFrags_kde_dif.pkl
-rw-rw-r-- 1 nick nick 49K Jul 13 14:56 shotFrags_kde.pkl
###Markdown
**Note:** statistics on how much isotope was incorporated by each taxon are listed in "BD-shift_stats.txt"
###Code
%%R
df = read.delim('BD-shift_stats.txt', sep='\t')
df
###Output
_____no_output_____
###Markdown
Making an OTU table* Number of shotgun fragments in each fraction in each gradient* Assuming a total pre-fractionation community size of **1e7**
###Code
%%bash
source activate SIPSim
SIPSim OTU_table \
--abs 1e7 \
--np 3 \
shotFrags_KDE_dif_DBL_inc.pkl \
comm.txt \
fracs.txt \
> OTU.txt
!head -n 7 OTU.txt
###Output
library taxon fraction BD_min BD_mid BD_max count rel_abund
1 Clostridium_ljungdahlii_DSM_13528 -inf-1.673 -inf 1.672 1.672 1075 0.484452456061
1 Clostridium_ljungdahlii_DSM_13528 1.673-1.678 1.673 1.675 1.678 984 0.704871060172
1 Clostridium_ljungdahlii_DSM_13528 1.678-1.681 1.678 1.679 1.681 7069 0.968754282582
1 Clostridium_ljungdahlii_DSM_13528 1.681-1.685 1.681 1.683 1.685 135783 0.994914894085
1 Clostridium_ljungdahlii_DSM_13528 1.685-1.688 1.685 1.687 1.688 518595 0.996690479073
1 Clostridium_ljungdahlii_DSM_13528 1.688-1.691 1.688 1.69 1.691 980471 0.993720272392
###Markdown
Plotting fragment count distributions
###Code
%%R -h 350 -w 750
df = read.delim('OTU.txt', sep='\t')
p = ggplot(df, aes(BD_mid, count, fill=taxon)) +
geom_area(stat='identity', position='dodge', alpha=0.5) +
scale_x_continuous(expand=c(0,0)) +
labs(x='Buoyant density') +
labs(y='Shotgun fragment counts') +
facet_grid(library ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
axis.title.y = element_text(vjust=1),
axis.title.x = element_blank()
)
plot(p)
###Output
_____no_output_____
###Markdown
**Notes:** * This plot represents the theoretical number of shotgun fragments at each BD across each gradient. * Derived from subsampling the fragment BD probability distributions generated in earlier steps.* The fragment BD distribution of one of the 3 taxa should have shifted in Gradient 2 (the treatment gradient).* The fragment BD distributions of the other 2 taxa should be approx. the same between the two gradients. Viewing fragment counts as relative quantities
###Code
%%R -h 350 -w 750
p = ggplot(df, aes(BD_mid, count, fill=taxon)) +
geom_area(stat='identity', position='fill') +
scale_x_continuous(expand=c(0,0)) +
scale_y_continuous(expand=c(0,0)) +
labs(x='Buoyant density') +
labs(y='Shotgun fragment counts') +
facet_grid(library ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
axis.title.y = element_text(vjust=1),
axis.title.x = element_blank()
)
plot(p)
###Output
_____no_output_____
###Markdown
Adding effects of PCR* This will alter the fragment counts based on the PCR kinetic model of:>Suzuki MT, Giovannoni SJ. (1996). Bias caused by template annealing in the amplification of mixtures of 16S rRNA genes by PCR. Appl Environ Microbiol 62:625-630.
###Code
%%bash
source activate SIPSim
SIPSim OTU_PCR OTU.txt > OTU_PCR.txt
!head -n 5 OTU_PCR.txt
!tail -n 5 OTU_PCR.txt
###Output
library taxon fraction BD_min BD_mid BD_max count rel_abund
1 Clostridium_ljungdahlii_DSM_13528 -inf-1.673 -inf 1.672 1.672 8466486 0.423324319678
1 Clostridium_ljungdahlii_DSM_13528 1.673-1.678 1.673 1.675 1.678 9469222 0.473461104378
1 Clostridium_ljungdahlii_DSM_13528 1.678-1.681 1.678 1.679 1.681 13571521 0.678576027723
1 Clostridium_ljungdahlii_DSM_13528 1.681-1.685 1.681 1.683 1.685 14776381 0.738819047776
2 Streptomyces_pratensis_ATCC_33331 1.757-1.761 1.757 1.759 1.761 3256167 0.162808343771
2 Streptomyces_pratensis_ATCC_33331 1.761-1.764 1.761 1.762 1.764 0 0.0
2 Streptomyces_pratensis_ATCC_33331 1.764-1.770 1.764 1.767 1.77 2373725 0.118686263969
2 Streptomyces_pratensis_ATCC_33331 1.770-1.774 1.77 1.772 1.774 4147246 0.207362287667
2 Streptomyces_pratensis_ATCC_33331 1.774-inf 1.775 1.775 inf 4686423 0.234321138305
###Markdown
**Notes*** The table is in the same format as with the original OTU table, but the counts and relative abundances should be altered. Simulating sequencing* Sampling from the OTU table
###Code
%%bash
source activate SIPSim
SIPSim OTU_subsample OTU_PCR.txt > OTU_PCR_sub.txt
!head -n 5 OTU_PCR_sub.txt
###Output
library fraction taxon BD_min BD_mid BD_max count rel_abund
1 -inf-1.673 Clostridium_ljungdahlii_DSM_13528 -inf 1.672 1.672 5862 0.41433418151
1 1.673-1.678 Clostridium_ljungdahlii_DSM_13528 1.673 1.675 1.678 8144 0.481352325788
1 1.678-1.681 Clostridium_ljungdahlii_DSM_13528 1.678 1.679 1.681 17007 0.679926438252
1 1.681-1.685 Clostridium_ljungdahlii_DSM_13528 1.681 1.683 1.685 9657 0.734763752568
###Markdown
**Notes*** The table is in the same format as with the original OTU table, but the counts and relative abundances should be altered. Plotting
###Code
%%R -h 350 -w 750
df = read.delim('OTU_PCR_sub.txt', sep='\t')
p = ggplot(df, aes(BD_mid, rel_abund, fill=taxon)) +
geom_area(stat='identity', position='fill') +
scale_x_continuous(expand=c(0,0)) +
scale_y_continuous(expand=c(0,0)) +
labs(x='Buoyant density') +
labs(y='Taxon relative abundances') +
facet_grid(library ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
axis.title.y = element_text(vjust=1),
axis.title.x = element_blank()
)
plot(p)
###Output
_____no_output_____
###Markdown
Misc A 'wide' OTU table* If you want to reformat the OTU table to a more standard 'wide' format (as used in Mothur or QIIME):
###Code
%%bash
source activate SIPSim
SIPSim OTU_wide_long -w \
OTU_PCR_sub.txt \
> OTU_PCR_sub_wide.txt
!head -n 4 OTU_PCR_sub_wide.txt
###Output
taxon 1__-inf-1.673 1__1.673-1.678 1__1.678-1.681 1__1.681-1.685 1__1.685-1.688 1__1.688-1.691 1__1.691-1.695 1__1.695-1.698 1__1.698-1.700 1__1.700-1.701 1__1.701-1.706 1__1.706-1.711 1__1.711-1.714 1__1.714-1.719 1__1.719-1.723 1__1.723-1.726 1__1.726-1.729 1__1.729-1.733 1__1.733-1.737 1__1.737-1.743 1__1.743-1.748 1__1.748-1.754 1__1.754-1.758 1__1.758-1.762 1__1.762-1.764 1__1.764-1.769 1__1.769-1.774 1__1.774-inf 2__-inf-1.673 2__1.673-1.676 2__1.676-1.679 2__1.679-1.682 2__1.682-1.683 2__1.683-1.686 2__1.686-1.687 2__1.687-1.690 2__1.690-1.695 2__1.695-1.701 2__1.701-1.706 2__1.706-1.709 2__1.709-1.712 2__1.712-1.717 2__1.717-1.720 2__1.720-1.723 2__1.723-1.728 2__1.728-1.731 2__1.731-1.738 2__1.738-1.742 2__1.742-1.748 2__1.748-1.750 2__1.750-1.753 2__1.753-1.757 2__1.757-1.761 2__1.761-1.764 2__1.764-1.770 2__1.770-1.774 2__1.774-inf
Clostridium_ljungdahlii_DSM_13528 5862 8144 17007 9657 21963 14036 17167 9246 9017 9862 5614 4053 3199 1872 1732 2479 3130 1616 1564 3670 4557 10329 7487 10113 9165 10417 6893 4079 9272 6392 15337 12439 14250 12147 14685 14996 18042 12109 4568 8219 8292 2549 810 273 209 56 194 1203 2939 4730 4073 8853 6321 8377 9516 6888 2839
Escherichia_coli_1303 6197 6295 8006 3117 6307 4637 7212 6735 10821 14835 12669 17948 19208 13957 12065 7349 8095 2964 3036 4573 6246 11653 6364 11246 11855 12350 11452 12231 11730 7479 14461 6619 6076 3321 3123 3032 4292 4594 4274 14313 18945 12104 16580 16720 23626 15140 21221 11106 10054 14644 10206 16293 11684 17544 14749 14489 11961
Streptomyces_pratensis_ATCC_33331 2089 2480 0 369 483 150 221 104 38 81 168 179 675 2660 7375 11155 19695 11506 8952 8535 3843 6646 3051 5421 4164 4737 4326 6756 3229 3210 3746 1429 0 527 0 305 385 474 544 1240 1702 2058 3816 4775 8344 5865 9006 4382 3237 4647 2785 4184 3342 0 3225 5510 4639
###Markdown
SIP metadata* If you want to make a table of SIP sample metadata
###Code
%%bash
source activate SIPSim
SIPSim OTU_sample_data \
OTU_PCR_sub.txt \
> OTU_PCR_sub_meta.txt
!head OTU_PCR_sub_meta.txt
###Output
sample library fraction BD_min BD_max BD_mid
1__-inf-1.673 1 -inf-1.673 -inf 1.673 -inf
1__1.673-1.678 1 1.673-1.678 1.673 1.678 1.6755
1__1.678-1.681 1 1.678-1.681 1.678 1.681 1.6795
1__1.681-1.685 1 1.681-1.685 1.681 1.685 1.683
1__1.685-1.688 1 1.685-1.688 1.685 1.688 1.6865
1__1.688-1.691 1 1.688-1.691 1.688 1.691 1.6895
1__1.691-1.695 1 1.691-1.695 1.691 1.695 1.693
1__1.695-1.698 1 1.695-1.698 1.695 1.698 1.6965
1__1.698-1.700 1 1.698-1.700 1.698 1.700 1.699
###Markdown
Other SIPSim commands`SIPSim -l` will list all available SIPSim commands
###Code
%%bash
source activate SIPSim
SIPSim -l
###Output
#-- Commands --#
BD_shift
communities
DBL
deltaBD
diffusion
fragment_KDE
fragment_KDE_cat
fragment_parse
fragments
genome_download
genome_index
genome_rename
gradient_fractions
HRSIP
incorp_config_example
isotope_incorp
KDE_bandwidth
KDE_info
KDE_parse
KDE_plot
KDE_sample
KDE_select_taxa
OTU_add_error
OTU_PCR
OTU_sample_data
OTU_subsample
OTU_sum
OTU_table
OTU_wide_long
qSIP
qSIP_atom_excess
tree_sim
|
docs/bayesian_hypothesis_tests.ipynb | ###Markdown
Table of Contents: Continuous Variables, $X \in \mathbb{R}$ (Hierarchical Gaussian Model, Hierarchical Student's-t Model); Binary / Proportion Variables, $X \in (0, 1) \subset \mathbb{R}$ (Beta-Binomial Model, Beta-Bernoulli Model); Count / Rate Variables, $X \in \mathbb{N} \cup \{0\}$ (Gamma-Poisson Model). Bayesian Hypothesis Test Model SpecificationWhen using Bayesian models for hypothesis testing, hyperparameters can be specified during `HypothesisTest` initialization by providing a `model_params` dictionary. If no `model_params` are provided, default hyperparameters will be used, generally corresponding with weak or non-informative priors.
###Code
from abra import HypothesisTest
# Beta-Binomial model for running inference on the results of as series of binary trials
model_type = 'beta_binomial'
# Beta(alpha=100, beta=100) indicates fairly strong prior belief that p_success = .5
hyper_params = dict(alpha=100., beta=100.)
bayesian_test = HypothesisTest(
method=model_type,
model_params=hyper_params
)
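# If model_params is omitted, the default hyperparameters mentioned above are used,
# i.e. weak / non-informative priors (illustrative example):
default_prior_test = HypothesisTest(method=model_type)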
###Output
_____no_output_____ |
01-intro-101/python/practices/05-python-avanzado/your-solution-here/mod01-05-rodrigo-delaplaza/05.1_python_avanzado_rodri.ipynb | ###Markdown
Programming for *Data Science*============================Intro101 - 05.1: Advanced Python concepts--------------------------------------In this Notebook you will find two sets of exercises: a first set of practice exercises that are not graded, but which we recommend trying to solve, and a second set that we will evaluate as an activity. Practice exercises**The following 3 exercises are not graded**, but we recommend that you try to solve them before moving on to the assessed exercises. You can also find the solutions to these exercises at the end of the Notebook. Exercise 1The following exercise consists of converting a number in base 16 (hexadecimal, 0-9/A-F) to base 10 (decimal). To do so, you must create a **function** that, given a _string_ representing a hexadecimal number, for example `AE3F`, returns the corresponding natural number. In this case, the result would be `44607`.
###Code
# Respuesta
def conversion_decimal(hexadecimal):
decimal = 0
hexadecimal = hexadecimal[::-1]
for i in range(len(hexadecimal)):
decimal = decimal + int(hexadecimal[i], 16) * 16**i
return decimal
print(conversion_decimal('AE3F'))
# Note: int('A', 16) parses the string 'A' as a base-16 digit and returns its decimal value (10):
variable = int('A',16)
print(variable)
###Output
44607
10
###Markdown
Exercise 2Exceptions are errors detected at runtime. They can and should be handled by the programmer to minimize the risk of a given program failing in an uncontrolled way. Write, in Python, how to generate and catch the following exception: **ZeroDivisionError**.
###Code
# Respuesta
# Generar la excepción
#div = 4/0
# print (div)
# Intentar la division, excepto que sea por cero
try:
div = 4/0
print (div)
except ZeroDivisionError:
print ('No se puede dividir por cero')
###Output
No se puede dividir por cero
###Markdown
Exercise 3Complete the code needed to count the number of vowels and consonants, respectively, in a text.
###Code
texto = "Hola a todos!"
print(texto)
texto_2 = "Bienvenidos!"
print(texto_2)
def imprime(argumento):
nuevo = argumento + texto_2
print(nuevo)
imprime(texto)
imprime(texto)
###Output
Hola a todos!Bienvenidos!
###Markdown
PseudocodeGiven a text with n words, the operation has to count vowels and consonants. IF a character of the text is a vowel: num_vocales +1; ELSE if it is between A and Z, or is ç or ñ: num_consonantes +1
###Code
def contar_vocales_y_consonantes(texto):
# Cuenta las vocales contenidas en el string texto y también las consonantes.
num_vocales = 0
num_consonantes = 0
# Código que hay que completar.
#texto = texto.decode('utf-8')
# Definimos una lista con las vocales en unicode
vocales = [u'a', u'e', u'i', u'o', u'u', u'à', u'á', u'ä', u'ï', u'è', u'é', u'í', u'ï', u'ò', u'ó', u'ú', u'ü']
for t in texto.lower():
if t in vocales:
num_vocales += 1
elif t > 'a' and t <= 'z' or t == u'ç' or t == u'ñ':
num_consonantes += 1
return num_vocales, num_consonantes
texto = "Orbiting Earth in the spaceship, I saw how beautiful our planet is. \
People, let us preserve and increase this beauty, not destroy it!"
num_vocales, num_consonantes = contar_vocales_y_consonantes(texto)
print ("El número de vocales es %d." % num_vocales)
print ("El número de consonantes es %d." % num_consonantes)
###Output
El número de vocales es 44.
El número de consonantes es 62.
###Markdown
--- Exercises and theory questionsBelow you will find the **exercises and theory questions that you must complete in this intro-101 module** and that form part of the assessment for this unit. Question 1The _range_ and _xrange_ functions can be used for the same purpose, but they work differently. Give an example where it would be advisable to swap the _range_ function for the _xrange_ function.**Answer:** It is advisable to use the _xrange_ function when the parameters that define a list are very large values (_-100000, 100000, for example_), since in that case the list of elements is not stored in memory, which makes it more efficient. Question 2a) Briefly explain each line of code in the following block (add comments in the code block itself):
###Code
# Add your code comments in this same block
# We define a function that creates a generator. Calling this function on its own produces no output
def create_generator():
for i in range(10):
yield i
# We assign the generator to a variable
num_generator = create_generator()
# We iterate over the generator we created using the new variable
for i in num_generator:
print("Primera iteración: número generado =", i)
# Since the generator was exhausted by the first loop, the second loop does not run. We would need to call the function again to get a fresh generator for the next loop
for j in num_generator:
print("Segunda iteración: número generado =", j)
###Output
Primera iteración: número generado = 0
Primera iteración: número generado = 1
Primera iteración: número generado = 2
Primera iteración: número generado = 3
Primera iteración: número generado = 4
Primera iteración: número generado = 5
Primera iteración: número generado = 6
Primera iteración: número generado = 7
Primera iteración: número generado = 8
Primera iteración: número generado = 9
###Markdown
b) Briefly explain the output we observe when running the code above.**Answer** In the output we only see the first iteration, since the generator was exhausted by the first loop, so there is nothing left to yield in the second iteration. Exercise 1Write a function that, given a list of planets of the solar system, asks the user to enter a position and displays the planet at that position. For example, if we have the list `['Mercurio', 'Venus', 'Tierra', 'Marte']` and the user has entered position `3`, we must display `Tierra` as the result. Considerations:- The position entered by the user must be a strictly positive integer.- The function must control access to a position outside the list by means of an **exception**. For example, in the case above we must show an error message if the user asks to access position 10.
###Code
# Respuesta
planetas = ['Mercurio', 'Venus', 'Tierra', 'Marte']
planetas
type(planetas)
len(planetas)
planetas[0]
planetas[1]
for planeta in planetas:
print(planeta)
for i, planeta in enumerate(planetas):
print(i, planeta)
for i, planeta in enumerate(planetas):
print(i+1, planeta)
# Marcamos un rango de valores válidos entre 1 y 4
valores_validos = range(1,5)
# Partimos de una posición no válida para iniciar el bucle
posicion_planeta = 0
while posicion_planeta not in valores_validos:
try:
posicion_planeta = int(input('Introduzca una posición entre el 1 y el 4 para la que desee conocer el planeta correspondiente: '))
except ValueError:
print('Tiene que ser un número entero positivo entre el 1 y el 4')
# Una vez almacenada la posición en la variable, debemos mostrar por pantalla el planeta correspondiente
planetas = ['Mercurio', 'Venus', 'Tierra', 'Marte']
print (planetas[posicion_planeta-1])
###Output
Introduzca una posición entre el 1 y el 4 para la que desee conocer el planeta correspondiente: 5
Introduzca una posición entre el 1 y el 4 para la que desee conocer el planeta correspondiente: 6
Introduzca una posición entre el 1 y el 4 para la que desee conocer el planeta correspondiente: -1
Introduzca una posición entre el 1 y el 4 para la que desee conocer el planeta correspondiente: b
###Markdown
Exercise 2Given a list of planets of the solar system, determine which of these planets have a mass greater than that of the Earth. For example, if the initial list is `['Venus', 'Marte', 'Saturno']`, the result shown on screen would be `['Saturno']`, since Saturn has a mass `95.2` times that of the Earth. Considerations:- Bear in mind that the planet names passed as a parameter may be in lowercase, uppercase or a combination of both.- You can assume there will be no accents in the planet names.- You must find the planets whose mass is strictly greater than that of the Earth.- There will be no repeated planets in the list passed as a parameter.
###Code
masas = {'Mercurio': 0.06, 'Venus': 0.82, 'Tierra': 1, 'Marte': 0.11, 'Jupiter': 317.8,
'Saturno': 95.2, 'Urano': 14.6, 'Neptuno': 17.2, 'Pluto': 0.0022}
type(masas)
len(masas)
# Llamar el primer elemento del diccionario
masas[0]
masas.keys()
masas.values()
masas['Neptuno']
# Masas medidas con respecto a la Tierra
# Es decir, un valor de 14.6 representaria una masa 14.6 veces superior a la de la Tierra
masas = {'Mercurio': 0.06, 'Venus': 0.82, 'Tierra': 1, 'Marte': 0.11, 'Jupiter': 317.8,
'Saturno': 95.2, 'Urano': 14.6, 'Neptuno': 17.2, 'Pluto': 0.0022}
def planetas_mas_grandes_que_Tierra(planetas):
"""
Planetas con una masa superior a la de la Tierra
"""
planetas_minusculas = []
for planeta in planetas:
planetas_minusculas.append(planeta.lower())
planetas_primera_mayuscula = []
for planeta in planetas_minusculas:
planetas_primera_mayuscula.append(planeta.capitalize())
planetas_masa_superior = []
for planeta in planetas_primera_mayuscula:
if masas [planeta] > masas ['Tierra']:
planetas_masa_superior.append(planeta)
return planetas_masa_superior
# Ejemplos de uso de la función anterior
print(planetas_mas_grandes_que_Tierra(['Venus', 'Mercurio', 'Marte']))
print(planetas_mas_grandes_que_Tierra(['Jupiter', 'Saturno', 'Pluto']))
print(planetas_mas_grandes_que_Tierra(['urano', 'tierra', 'neptuno', 'marte', 'Venus']))
print(planetas_mas_grandes_que_Tierra(['Tierra', 'MeRcUrIo', 'PLUTO', 'SATURNO']))
###Output
[]
['Jupiter', 'Saturno']
['Urano', 'Neptuno']
['Saturno']
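###Markdown
The same result can be obtained more compactly with a list comprehension (a sketch of an equivalent alternative, not a replacement for the solution above):
###Code
def planetas_mas_grandes_que_Tierra_v2(planetas):
    # Normalize each name to 'Capitalized' form and keep those heavier than Earth
    return [p.capitalize() for p in planetas if masas[p.capitalize()] > masas['Tierra']]
print(planetas_mas_grandes_que_Tierra_v2(['urano', 'tierra', 'neptuno', 'marte', 'Venus']))
###Output
_____no_output_____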
###Markdown
Exercise 3Complete the following functions and document the code if you consider it appropriate. Finally, write at least one usage example for each function.
###Code
# Note: the functions this exercise refers to are not included in this notebook, so it is left unanswered
###Output
_____no_output_____
###Markdown
Exercise 4Write a function that, given a positive integer `N`, generates a file named `output.txt` containing `N` lines, where each line must show a consecutive number of `A` letters. For example, if `N = 4`, the generated file must contain the following content:```AAAAAAAAAA```
###Code
# Answer
# Import the os library
import os
# Define the function that writes the .txt file, looping to write N lines
def generar_fichero(N):
    out = open('output.txt', 'w')
    for i in range(1, N + 1):
        out.write(i * 'A' + '\n')
    out.close()
# Apply the function with an arbitrary value, in this case 10
generar_fichero(10)
# Read back the file we just wrote to check that it contains the expected text
with open('output.txt') as f:
    for line in f:
        print(line)
###Output
A
AA
AAA
AAAA
AAAAA
AAAAAA
AAAAAAA
AAAAAAAA
AAAAAAAAA
AAAAAAAAAA
###Markdown
Exercise 5 Given a string `s` of length `n` and a positive integer `k`, with `k` a divisor of `n`, we can split the string `s` into `n / k` substrings of the same length. Write a function that, given a string `s` and an integer `k`, returns the `n/k` substrings, taking into account the following considerations:- The order of the characters in the substrings must be the same as in the original string.- Every character in a substring must appear only once. That is, if a character is repeated within a substring, we only show its first occurrence. For example, if we haves = AABCCAADAk = 3the result to show on screen would be:ABCAADThe length of the string is 9, so we can form 3 substrings:`AAB -> AB` (the character A appears twice)`CCA -> CA` (the character C appears twice)`ADA -> AD` (the character A appears twice)
###Code
# Respuesta
# Cadena original
s = 'AABCCAADA'
# Numero de caracteres de las subcadenas
k = 3
def dividir_cadena (s, k):
# Numero de caracteres de la cadena
n = len(s)
# Numero de subcadenas
numero_subcadenas = int(n / k)
# Lista de subcadenas
lista_subcadenas = []
for i in range (numero_subcadenas):
subcadena = s [i*k:(i+1)*k]
lista_subcadenas.append(subcadena)
# Eliminar caracteres duplicados en las subcadenas
nueva_lista_subcadenas = []
from collections import OrderedDict
for subcadena in lista_subcadenas:
subcadena1 = "".join(OrderedDict.fromkeys(subcadena))
nueva_lista_subcadenas.append(subcadena1)
return nueva_lista_subcadenas
dividir_cadena ('AABCCAADA', 3)
###Output
_____no_output_____
###Markdown
Exercise 6 (Optional)At the end of the Middle Ages, in France, the French diplomat Blaise de Vigenère developed an algorithm for encrypting messages that nobody was able to break for roughly 250 years. The algorithm is known as the [Vigenère cipher](https://es.wikipedia.org/wiki/Cifrado_de_Vigen%C3%A8re).The Vigenère cipher consists of adding to each letter of a text a shift derived from a secret key, to obtain a new letter different from the original. Let's see an example: if we assign the number 1 to the first letter of the alphabet, A, 2 to the next one, B, and so on, imagine we have the following message:ABC (encoded as 1 2 3)and the following secret key:DEF (encoded as 4 5 6)To each letter of the original message we apply a shift given by the key letter at the same position. Therefore, the encrypted message would be: E G I(1 + 4) (2 + 5) (3 + 6)Write a function that, given a message and a secret key, computes and returns the encrypted message.*Considerations.*- Use **the uppercase English alphabet** as the input alphabet.- The default value of the secret key will be **DATASCI**.
###Code
def cifrado_vigenere(mensaje, clave="DATASCI"):
"""
Cifra el mensaje utilizando el cifrado de Vigenère
"""
mensaje_cifrado = ""
    # One possible completion (an illustrative sketch, not the only valid approach):
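    # A sketch under these assumptions: only the uppercase English letters A-Z are
    # shifted (each key letter contributes its 1-based position in the alphabet, as in
    # the example above); any other character is copied unchanged and does not consume
    # a key letter.
    alfabeto = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
    posicion_clave = 0
    for c in mensaje.upper():
        if c in alfabeto:
            desplazamiento = alfabeto.index(clave[posicion_clave % len(clave)]) + 1
            mensaje_cifrado += alfabeto[(alfabeto.index(c) + desplazamiento) % 26]
            posicion_clave += 1
        else:
            mensaje_cifrado += c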
return mensaje_cifrado
# Aquí podéis añadir más ejemplos:
print(cifrado_vigenere("ATACAREMOS AL AMANECER"))
###Output
_____no_output_____
###Markdown
--- Solutions to the practice exercises Exercise 1The following exercise consists of converting a number in base 16 (hexadecimal, 0-9/A-F) to base 10 (decimal). To do so, you must create a **function** that, given a _string_ representing a hexadecimal number, for example `AE3F`, returns the corresponding natural number. In this case, the result would be `44607`.**Answer** In Python we have a very useful function that lets us convert to a decimal number from any base (```int(x, base=y)```). Since the goal is to play around with the Python language a bit, we will use that function only partially, to compute the decimal number corresponding to each individual hexadecimal character.The formula to convert a hexadecimal number to a decimal number, taking the number AE3F as an example, is:```A * 16**3 + E * 16**2 + 3 * 16**1 + F * 16**0 = 10 * 16**3 + 14 * 16**2 + 3 * 16**1 + 15 * 16**0```
###Code
# Importamos el string '0123456789abcdefABCDEF' que nos puede ser muy útil para comprobar el formato
from string import hexdigits
def hex_to_dec(numero_hexadecimal):
# Primero, comprobamos que el número que se pasa por parámetro es hexadecimal
if all(c in hexdigits for c in numero_hexadecimal):
# Definimos la base para realizar las operaciones
base = 16
numero_decimal = 0
# Invertimos el número hexadecimal para que nos sea más fácil trabajar con los índices
numero_hexadecimal = numero_hexadecimal[::-1]
for i in range(len(numero_hexadecimal)):
# Para cada carácter hexadecimal aplicamos la formula c * base ** i,
# donde c es la representación decimal del carácter y
# sumamos el resultado al resultado obtenido en la iteración anterior
numero_decimal += int(numero_hexadecimal[i], 16) * base**i
return numero_decimal
else:
print('El número introducido no es hexadecimal')
print(hex_to_dec('AE3F'))
print(hex_to_dec('FFF'))
print(hex_to_dec('123'))
###Output
_____no_output_____
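###Markdown
As a quick sanity check (illustrative), Python's built-in `int` with an explicit base should agree with the function defined above:
###Code
print(int('AE3F', 16))  # expected: 44607, the same value hex_to_dec('AE3F') prints
###Output
_____no_output_____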
###Markdown
Exercise 2Exceptions are errors detected at runtime. They can and should be handled by the programmer to minimize the risk of a given program failing in an uncontrolled way. Write, in Python, how to generate and catch the following exception: **ZeroDivisionError**.**Answer** In Python we can use the try ... except block to catch exceptions. First the code inside the try block is executed and, if an exception is raised, an except clause that catches that exception is looked for. If one is found, the code inside the except block is executed.
###Code
try:
print( 5/0) # División por cero - genera ZeroDivisionError
except ZeroDivisionError:
print("¡Cuidado! División por cero.")
###Output
_____no_output_____
###Markdown
Exercise 3Complete the code needed to count the number of vowels and consonants, respectively, in a text.**Answer**
###Code
def contar_vocales_y_consonantes(texto):
# Cuenta las vocales contenidas en el string texto y también las consonantes.
num_vocales = 0
num_consonantes = 0
# Definimos una lista con las vocales
vocales = ['a', 'e', 'i', 'o', 'u']
for c in texto.lower(): # Podemos convertir el texto a minúsculas para simplificar los cálculos
if c in vocales:
num_vocales += 1
elif c > 'a' and c <= 'z':
num_consonantes += 1
return num_vocales, num_consonantes
texto = "Orbiting Earth in the spaceship, I saw how beautiful our planet is. \
People, let us preserve and increase this beauty, not destroy it!"
num_vocales, num_consonantes = contar_vocales_y_consonantes(texto)
print ("El número de vocales es de %d" % num_vocales)
print ("El número de consonantes es de %d" % num_consonantes)
###Output
_____no_output_____
###Markdown
If we also want to consider accented vowels or special characters, we can modify the previous code to take them into account:
###Code
def contar_vocales_y_consonantes(texto):
# Cuenta las vocales contenidas en el string texto y también las consonantes.
num_vocales = 0
num_consonantes = 0
    # In Python 3 strings are already Unicode, so no explicit decoding is needed
    # (in Python 2 this step would have been: texto = texto.decode('utf-8'),
    # and the encoding could be passed as an extra parameter of the function)
# Definimos una lista con las vocales en unicode
vocales = [u'a', u'e', u'i', u'o', u'u', u'à', u'á', u'è', u'é', u'í', u'ï', u'ò', u'ó', u'ú', u'ü']
for c in texto.lower(): # Podemos convertir el texto a minúsculas para simplificar los cálculos
if c in vocales:
num_vocales += 1
elif c > 'a' and c <= 'z' or c == u'ç' or c == u'ñ':
num_consonantes += 1
return num_vocales, num_consonantes
texto = "Orbiting Earth in the spaceship, I saw how beautiful our planet is. \
People, let us preserve and increase this beauty, not destroy it!"
num_vocales, num_consonantes = contar_vocales_y_consonantes(texto)
print ("El número de vocales es de %d" % num_vocales)
print ("El número de consonantes es de %d" % num_consonantes)
texto = "áéióúY"
num_vocales, num_consonantes = contar_vocales_y_consonantes(texto)
print( "El número de vocales es de %d" % num_vocales)
print ("El número de consonantes es de %d" % num_consonantes)
###Output
_____no_output_____ |
handwriting_generator/handwriting_generator.ipynb | ###Markdown
Handwriting Generator[Reference Jypyter Notebook](https://nbviewer.jupyter.org/github/greydanus/scribe/blob/master/sample.ipynb)[Data](https://github.com/greydanus/scribe/tree/master/data)[Scribe Github](https://github.com/greydanus/scribe)
###Code
!pip3 uninstall tensorflow
!pip3 install tensorflow==1.0.0
%tensorflow_version 1.x
import numpy as np
import numpy.matlib
import matplotlib.pyplot as plt
import matplotlib.cm as cm
%matplotlib inline
import math
import random
import time
import os
import pickle as pickle
import tensorflow as tf #built with TensorFlow version 1.0
print(tf.__version__)
# in the real project class, we use argparse (https://docs.python.org/3/library/argparse.html)
class FakeArgParse():
def __init__(self):
pass
args = FakeArgParse()
#general model params
args.train = False
args.rnn_size = 100 #400 hidden units
args.tsteps = 256 if args.train else 1
args.batch_size = 32 if args.train else 1
args.nmixtures = 8 # number of Gaussian mixtures in MDN
#window params
args.kmixtures = 1 # number of Gaussian mixtures in attention mechanism (for soft convolution window)
args.alphabet = ' abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ' #later we'll add an <UNK> slot for unknown chars
args.tsteps_per_ascii = 25 # an approximate estimate
#book-keeping
args.save_path = './saved/model.ckpt'
args.data_dir = './data'
args.log_dir = './logs/'
args.text = 'call me ishmael some years ago'
args.style = -1 # don't use a custom style
args.bias = 1.0
args.eos_prob = 0.4 # threshold probability for ending a stroke
# in real life the model is a class. I used this hack to make the iPython notebook more readable
class FakeModel():
def __init__(self):
pass
model = FakeModel()
model.char_vec_len = len(args.alphabet) + 1 #plus one for <UNK> token
model.ascii_steps = len(args.text)
model.graves_initializer = tf.truncated_normal_initializer(mean=0., stddev=.075, seed=None, dtype=tf.float32)
model.window_b_initializer = tf.truncated_normal_initializer(mean=-3.0, stddev=.25, seed=None, dtype=tf.float32)
# ----- build the basic recurrent network architecture
cell_func = tf.contrib.rnn.LSTMCell # could be GRUCell or RNNCell
model.cell0 = cell_func(args.rnn_size, state_is_tuple=True, initializer=model.graves_initializer)
model.cell1 = cell_func(args.rnn_size, state_is_tuple=True, initializer=model.graves_initializer)
model.cell2 = cell_func(args.rnn_size, state_is_tuple=True, initializer=model.graves_initializer)
model.input_data = tf.placeholder(dtype=tf.float32, shape=[None, args.tsteps, 3])
model.target_data = tf.placeholder(dtype=tf.float32, shape=[None, args.tsteps, 3])
model.istate_cell0 = model.cell0.zero_state(batch_size=args.batch_size, dtype=tf.float32)
model.istate_cell1 = model.cell1.zero_state(batch_size=args.batch_size, dtype=tf.float32)
model.istate_cell2 = model.cell2.zero_state(batch_size=args.batch_size, dtype=tf.float32)
#slice the input volume into separate vols for each tstep
inputs = [tf.squeeze(input_, [1]) for input_ in tf.split(model.input_data, args.tsteps, 1)]
#build model.cell0 computational graph
outs_cell0, model.fstate_cell0 = tf.contrib.legacy_seq2seq.rnn_decoder(inputs, model.istate_cell0, \
model.cell0, loop_function=None, scope='cell0')
# ----- build the gaussian character window
def get_window(alpha, beta, kappa, c):
# phi -> [? x 1 x ascii_steps] and is a tf matrix
# c -> [? x ascii_steps x alphabet] and is a tf matrix
ascii_steps = c.get_shape()[1].value #number of items in sequence
phi = get_phi(ascii_steps, alpha, beta, kappa)
window = tf.matmul(phi,c)
window = tf.squeeze(window, [1]) # window ~ [?,alphabet]
return window, phi
#get phi for all t,u (returns a [1 x tsteps] matrix) that defines the window
def get_phi(ascii_steps, alpha, beta, kappa):
# alpha, beta, kappa -> [?,kmixtures,1] and each is a tf variable
u = np.linspace(0,ascii_steps-1,ascii_steps) # weight all the U items in the sequence
kappa_term = tf.square( tf.subtract(kappa,u))
exp_term = tf.multiply(-beta,kappa_term)
phi_k = tf.multiply(alpha, tf.exp(exp_term))
phi = tf.reduce_sum(phi_k,1, keep_dims=True)
return phi # phi ~ [?,1,ascii_steps]
def get_window_params(i, out_cell0, kmixtures, prev_kappa, reuse=True):
hidden = out_cell0.get_shape()[1]
n_out = 3*kmixtures
with tf.variable_scope('window',reuse=reuse):
window_w = tf.get_variable("window_w", [hidden, n_out], initializer=model.graves_initializer)
window_b = tf.get_variable("window_b", [n_out], initializer=model.window_b_initializer)
abk_hats = tf.nn.xw_plus_b(out_cell0, window_w, window_b) # abk_hats ~ [?,n_out] = "alpha, beta, kappa hats"
abk = tf.exp(tf.reshape(abk_hats, [-1, 3*kmixtures,1]))
alpha, beta, kappa = tf.split(abk, 3, 1) # alpha_hat, etc ~ [?,kmixtures]
kappa = kappa + prev_kappa
return alpha, beta, kappa # each ~ [?,kmixtures,1]
model.init_kappa = tf.placeholder(dtype=tf.float32, shape=[None, args.kmixtures, 1])
model.char_seq = tf.placeholder(dtype=tf.float32, shape=[None, model.ascii_steps, model.char_vec_len])
wavg_prev_kappa = model.init_kappa
prev_window = model.char_seq[:,0,:]
#add gaussian window result
reuse = False
for i in range(len(outs_cell0)):
[alpha, beta, new_kappa] = get_window_params(i, outs_cell0[i], args.kmixtures, wavg_prev_kappa, reuse=reuse)
window, phi = get_window(alpha, beta, new_kappa, model.char_seq)
outs_cell0[i] = tf.concat((outs_cell0[i],window), 1) #concat outputs
outs_cell0[i] = tf.concat((outs_cell0[i],inputs[i]), 1) #concat input data
# prev_kappa = new_kappa #tf.ones_like(new_kappa, dtype=tf.float32, name="prev_kappa_ones") #
wavg_prev_kappa = tf.reduce_mean( new_kappa, reduction_indices=1, keep_dims=True) # mean along kmixtures dimension
reuse = True
model.window = window #save the last window (for generation)
model.phi = phi #save the last window (for generation)
model.new_kappa = new_kappa #save the last window (for generation)
model.alpha = alpha #save the last window (for generation)
model.wavg_prev_kappa = wavg_prev_kappa
# ----- finish building second recurrent cell
outs_cell1, model.fstate_cell1 = tf.contrib.legacy_seq2seq.rnn_decoder(outs_cell0, model.istate_cell1, model.cell1, \
loop_function=None, scope='cell1') #use scope from training
# ----- finish building third recurrent cell
outs_cell2, model.fstate_cell2 = tf.contrib.legacy_seq2seq.rnn_decoder(outs_cell1, model.istate_cell2, model.cell2, \
loop_function=None, scope='cell2')
out_cell2 = tf.reshape(tf.concat(outs_cell2, 1), [-1, args.rnn_size]) #concat outputs for efficiency
#put a dense cap on top of the rnn cells (to interface with the mixture density network)
n_out = 1 + args.nmixtures * 6 # params = end_of_stroke + 6 parameters per Gaussian
with tf.variable_scope('mdn_dense'):
output_w = tf.get_variable("output_w", [args.rnn_size, n_out], initializer=model.graves_initializer)
output_b = tf.get_variable("output_b", [n_out], initializer=model.graves_initializer)
output = tf.nn.xw_plus_b(out_cell2, output_w, output_b) #data flows through dense nn
# ----- build mixture density cap on top of second recurrent cell
def gaussian2d(x1, x2, mu1, mu2, s1, s2, rho):
# define gaussian mdn (eq 24, 25 from http://arxiv.org/abs/1308.0850)
x_mu1 = tf.subtract(x1, mu1)
x_mu2 = tf.subtract(x2, mu2)
Z = tf.square(tf.div(x_mu1, s1)) + \
tf.square(tf.div(x_mu2, s2)) - \
2*tf.div(tf.multiply(rho, tf.multiply(x_mu1, x_mu2)), tf.multiply(s1, s2))
rho_square_term = 1-tf.square(rho)
power_e = tf.exp(tf.div(-Z,2*rho_square_term))
regularize_term = 2*np.pi*tf.multiply(tf.multiply(s1, s2), tf.sqrt(rho_square_term))
gaussian = tf.div(power_e, regularize_term)
return gaussian
# now transform dense NN outputs into params for MDN
def get_mdn_coef(Z):
# returns the tf slices containing mdn dist params (eq 18...23 of http://arxiv.org/abs/1308.0850)
eos_hat = Z[:, 0:1] #end of sentence tokens
pi_hat, mu1_hat, mu2_hat, sigma1_hat, sigma2_hat, rho_hat = tf.split(Z[:, 1:], 6, 1)
model.pi_hat, model.sigma1_hat, model.sigma2_hat = \
pi_hat, sigma1_hat, sigma2_hat # these are useful for biasing
eos = tf.sigmoid(-1*eos_hat) # technically we gained a negative sign
pi = tf.nn.softmax(pi_hat) # softmax z_pi:
mu1 = mu1_hat; mu2 = mu2_hat # leave mu1, mu2 as they are
sigma1 = tf.exp(sigma1_hat); sigma2 = tf.exp(sigma2_hat) # exp for sigmas
rho = tf.tanh(rho_hat) # tanh for rho (squish between -1 and 1)
return [eos, pi, mu1, mu2, sigma1, sigma2, rho]
# reshape target data (as we did the input data)
flat_target_data = tf.reshape(model.target_data,[-1, 3])
[x1_data, x2_data, eos_data] = tf.split(flat_target_data, 3, 1) #we might as well split these now
[model.eos, model.pi, model.mu1, model.mu2, model.sigma1, model.sigma2, model.rho] = get_mdn_coef(output)
!git clone https://github.com/greydanus/scribe.git
!ls scribe/data
!cp -r scribe/data data
!git clone https://github.com/wileyw/DeepLearningDemos.git
!ls DeepLearningDemos/handwriting_generator/saved.tgz
!tar -xzvf DeepLearningDemos/handwriting_generator/saved.tgz
!ls saved
model.sess = tf.InteractiveSession()
model.saver = tf.train.Saver(tf.global_variables())
model.sess.run(tf.global_variables_initializer())
load_was_success = True # yes, I'm being optimistic
global_step = 0
try:
save_dir = '/'.join(args.save_path.split('/')[:-1])
ckpt = tf.train.get_checkpoint_state(save_dir)
load_path = ckpt.model_checkpoint_path
print('------------')
print(load_path)
model.saver.restore(model.sess, load_path)
print('----------')
except Exception as e:
print("no saved model to load. starting new session")
print(e)
load_was_success = False
else:
print("loaded model: {}".format(load_path))
model.saver = tf.train.Saver(tf.global_variables())
global_step = int(load_path.split('-')[-1])
# utility function for converting input ascii characters into vectors the network can understand.
# index position 0 means "unknown"
def to_one_hot(s, ascii_steps, alphabet):
s = s[:3000] if len(s) > 3000 else s # clip super-long strings (integer bound; a float like 3e3 is not a valid slice index)
seq = [alphabet.find(char) + 1 for char in s]
if len(seq) >= ascii_steps:
seq = seq[:ascii_steps]
else:
seq = seq + [0]*(ascii_steps - len(seq))
one_hot = np.zeros((ascii_steps,len(alphabet)+1))
one_hot[np.arange(ascii_steps),seq] = 1
return one_hot
def get_style_states(model, args):
with open(os.path.join(args.data_dir, 'styles.p'),'rb') as f:  # pickle files must be opened in binary mode
style_strokes, style_strings = pickle.load(f)
style_strokes, style_string = style_strokes[args.style], style_strings[args.style]
style_onehot = [to_one_hot(style_string, model.ascii_steps, args.alphabet)]
c0, c1, c2 = model.istate_cell0.c.eval(), model.istate_cell1.c.eval(), model.istate_cell2.c.eval()
h0, h1, h2 = model.istate_cell0.h.eval(), model.istate_cell1.h.eval(), model.istate_cell2.h.eval()
if args.style == -1: return [c0, c1, c2, h0, h1, h2] # model 'chooses' a random style
style_stroke = np.zeros((1, 1, 3), dtype=np.float32)
style_kappa = np.zeros((1, args.kmixtures, 1))
prime_len = 500 # must be <= 700
for i in range(prime_len):
style_stroke[0][0] = style_strokes[i,:]
feed = {model.input_data: style_stroke, model.char_seq: style_onehot, model.init_kappa: style_kappa, \
model.istate_cell0.c: c0, model.istate_cell1.c: c1, model.istate_cell2.c: c2, \
model.istate_cell0.h: h0, model.istate_cell1.h: h1, model.istate_cell2.h: h2}
fetch = [model.wavg_prev_kappa, \
model.fstate_cell0.c, model.fstate_cell1.c, model.fstate_cell2.c,
model.fstate_cell0.h, model.fstate_cell1.h, model.fstate_cell2.h]
[style_kappa, c0, c1, c2, h0, h1, h2] = model.sess.run(fetch, feed)
return [c0, c1, c2, np.zeros_like(h0), np.zeros_like(h1), np.zeros_like(h2)] #only the c vectors should be primed
!ls data
print(args.data_dir)
# initialize some sampling parameters
one_hot = [to_one_hot(args.text, model.ascii_steps, args.alphabet)] # convert input string to one-hot vector
print(args)
[c0, c1, c2, h0, h1, h2] = get_style_states(model, args) # get numpy zeros states for all three LSTMs
kappa = np.zeros((1, args.kmixtures, 1)) # attention's read head starts at index 0
prev_x = np.asarray([[[0, 0, 1]]], dtype=np.float32) # start with a pen stroke at (0,0)
strokes, pis, windows, phis, kappas = [], [], [], [], [] # the data we're going to generate will go here
def sample_gaussian2d(mu1, mu2, s1, s2, rho):
mean = [mu1, mu2]
cov = [[s1*s1, rho*s1*s2], [rho*s1*s2, s2*s2]]
x = np.random.multivariate_normal(mean, cov, 1)
return x[0][0], x[0][1]
finished = False ; i = 0
while not finished and i < 800:
feed = {model.input_data: prev_x, model.char_seq: one_hot, model.init_kappa: kappa, \
model.istate_cell0.c: c0, model.istate_cell1.c: c1, model.istate_cell2.c: c2, \
model.istate_cell0.h: h0, model.istate_cell1.h: h1, model.istate_cell2.h: h2}
fetch = [model.pi_hat, model.mu1, model.mu2, model.sigma1_hat, model.sigma2_hat, model.rho, model.eos, \
model.window, model.phi, model.new_kappa, model.wavg_prev_kappa, model.alpha, \
model.fstate_cell0.c, model.fstate_cell1.c, model.fstate_cell2.c,\
model.fstate_cell0.h, model.fstate_cell1.h, model.fstate_cell2.h]
[pi_hat, mu1, mu2, sigma1_hat, sigma2_hat, rho, eos, window, phi, kappa, wavg_kappa, alpha, \
c0, c1, c2, h0, h1, h2] = model.sess.run(fetch, feed)
#bias stuff:
sigma1 = np.exp(sigma1_hat - args.bias)
sigma2 = np.exp(sigma2_hat - args.bias)
pi_hat *= 1 + args.bias # apply bias
pi = np.zeros_like(pi_hat) # need to preallocate
pi[0] = np.exp(pi_hat[0]) / np.sum(np.exp(pi_hat[0]), axis=0) # softmax
# choose a component from the MDN
idx = np.random.choice(pi.shape[1], p=pi[0])
eos = 1 if args.eos_prob < eos[0][0] else 0 # use args.eos_prob as the end-of-stroke threshold
x1, x2 = sample_gaussian2d(mu1[0][idx], mu2[0][idx], sigma1[0][idx], sigma2[0][idx], rho[0][idx])
# store the info at this time step
windows.append(window)
phis.append(phi[0])
kappas.append(kappa[0])
pis.append(pi[0])
strokes.append([mu1[0][idx], mu2[0][idx], sigma1[0][idx], sigma2[0][idx], rho[0][idx], eos])
# test if finished (has the read head seen the whole ascii sequence?)
main_kappa_idx = np.where(alpha[0]==np.max(alpha[0])) # choose the read head with the highest alpha value
finished = True if kappa[0][main_kappa_idx] > len(args.text) + 1 else False
# new input is previous output
prev_x[0][0] = np.array([x1, x2, eos], dtype=np.float32)
kappa = wavg_kappa
i+=1
windows = np.vstack(windows)
phis = np.vstack(phis)
kappas = np.vstack(kappas)
pis = np.vstack(pis)
strokes = np.vstack(strokes)
# the network predicts the displacements between pen points, so do a running sum over the time dimension
strokes[:,:2] = np.cumsum(strokes[:,:2], axis=0)
# plots parameters from the attention mechanism
def window_plots(phis, windows):
plt.figure(figsize=(16,4))
plt.subplot(121)
plt.title('Phis', fontsize=20)
plt.xlabel("ascii #", fontsize=15)
plt.ylabel("time steps", fontsize=15)
plt.imshow(phis, interpolation='nearest', aspect='auto', cmap=cm.jet)
plt.subplot(122)
plt.title('Soft attention window', fontsize=20)
plt.xlabel("one-hot vector", fontsize=15)
plt.ylabel("time steps", fontsize=15)
plt.imshow(windows, interpolation='nearest', aspect='auto', cmap=cm.jet)
window_plots(phis, windows)
plt.figure(figsize=(8,4))
plt.title("How MDN $\pi$ values change over time", fontsize=15)
plt.xlabel("$\pi$ values", fontsize=15)
plt.ylabel("time step", fontsize=15)
plt.imshow(pis, interpolation='nearest', aspect='auto', cmap=cm.jet)
def gauss_plot(strokes, title, figsize = (20,2)):
plt.figure(figsize=figsize)
import matplotlib.mlab as mlab
buff = 1 ; epsilon = 1e-4
minx, maxx = np.min(strokes[:,0])-buff, np.max(strokes[:,0])+buff
miny, maxy = np.min(strokes[:,1])-buff, np.max(strokes[:,1])+buff
delta = abs(maxx-minx)/400. ;
x = np.arange(minx, maxx, delta)
y = np.arange(miny, maxy, delta)
X, Y = np.meshgrid(x, y)
Z = np.zeros_like(X)
for i in range(strokes.shape[0]):
gauss = mlab.bivariate_normal(X, Y, mux=strokes[i,0], muy=strokes[i,1], \
sigmax=strokes[i,2], sigmay=strokes[i,3], sigmaxy=0) # sigmaxy=strokes[i,4] gives error
Z += gauss * np.power(strokes[i,3] + strokes[i,2], .4) / (np.max(gauss) + epsilon)
plt.title(title, fontsize=20)
plt.imshow(Z)
gauss_plot(strokes, "Stroke probability", figsize = (2*model.ascii_steps,4))
# plots the stroke data (handwriting!)
def line_plot(strokes, title, figsize = (20,2)):
plt.figure(figsize=figsize)
eos_preds = np.where(strokes[:,-1] == 1)
eos_preds = [0] + list(eos_preds[0]) + [-1] #add start and end indices
for i in range(len(eos_preds)-1):
start = eos_preds[i]+1
stop = eos_preds[i+1]
plt.plot(strokes[start:stop,0], strokes[start:stop,1],'b-', linewidth=2.0) #draw a stroke
plt.title(title, fontsize=20)
plt.gca().invert_yaxis()
plt.show()
line_plot(strokes, 'Line plot: "{}"'.format(args.text), figsize=(model.ascii_steps,2))
###Output
_____no_output_____ |
python/pyLec/pyLec_20200410.ipynb | ###Markdown
Numpy- the scientific computing package used in Python- pandas is fast because it is built on numpy- numpy is implemented in C, C++ and Fortran- mainly used for linear algebra computations - matrix operations - scalars, vectors, matrices - Index - creating ndarray objects - selecting data - modifying data - operating on data - using ndarray functions 1. Creating data
###Code
# Only a single data type is allowed -> same idea as a pandas column
arr1 = np.array([1, 2, 3])
type(arr1), arr1, arr1.dtype
arr2 = arr1.astype(np.float64)
arr2, arr2.dtype
###Output
_____no_output_____
###Markdown
2. Selecting data
###Code
arr = np.random.randint(5, size=(3,3))
arr
arr[1][2], arr[1,2]
###Output
_____no_output_____
###Markdown
*Caution
###Code
arr[1:][1:]
arr[1:, 1:]
###Output
_____no_output_____
###Markdown
3. Modifying data
###Code
arr
# Broadcasting
arr[1] = 10 # scalar
arr
# This is plain assignment
arr[1] = [11, 12, 13] # vector
arr
arr[arr > 3] = 20
arr
###Output
_____no_output_____
###Markdown
4. Operations on data
###Code
arr1 = np.array([1, 2, 3])
arr2 = np.array([4, 5, 6])
arr1 + arr2
arr1 + 10
#any(or), all(and)
arr3 = np.array([2, 1, 3])
arr1 == arr3
np.any(arr1==arr3), np.all(arr1==arr3)
# quiz
# 1. Create a 10*5 matrix of scores from 60 to 100 in steps of 10
# 2. Write code that prints the rows whose average is 80 or higher
# 1. Create the 10*5 matrix
datas = np.random.randint(6, 11, size=(10,5))  # upper bound is exclusive, so 11 is needed to include 100
datas *= 10
datas
datas.shape[1] # number of columns
# 2. Compute the average for each row
avg_data = np.sum(datas, axis=1) / datas.shape[1] # axis=1 -> sum across each row
avg_data
# 3. Print only the rows with an average of 80 or higher
datas[avg_data >= 80]
# 4. Add total and average columns to the original data
total = np.sum(datas, axis=1)
total, avg_data
np.c_[datas, total, avg_data]
# append
# total.reshape(10,1), avg_data.reshape(10,1)
result_data = np.append(datas, total.reshape(10,1), axis=1) # axis=1 appends along the columns (horizontally)
result_data = np.append(result_data, avg_data.reshape(10,1), axis=1)
result_data
# Turn the result into a DataFrame
columns = ["국어", "영어", "수학", "과학", "코딩", "총점", "평균"]
point_df = pd.DataFrame(result_data, columns=columns)
point_df
###Output
_____no_output_____
###Markdown
5. Using ndarray functions- sum, mean, median, var, std, min, max- unique (removes duplicate values), split (slices out columns), sort, concatenate (joins arrays together), Pandas- an easy-to-use, high-performance open-source python library for data analysis- Series, DataFrame
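A quick hedged sketch of a few of the ndarray helpers listed above, on a small throwaway array:
###Code
# Hedged sketch: unique, sort and concatenate on a small array
tmp = np.array([3, 1, 2, 3, 1])
print(np.unique(tmp))              # duplicates removed, values sorted
print(np.sort(tmp))                # sorted copy
print(np.concatenate([tmp, tmp]))  # arrays joined end to end
###Output
_____no_output_____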
###Code
point_df.tail(3)
# Add a per-subject average row at the bottom
# With loc, passing an existing numeric index would modify that row,
# but passing a new label, as below, appends a new row under that name
point_df.loc["평균"] = point_df.sum() / len(point_df)
point_df
# Add a column holding PASS if the average is 80 or higher, otherwise FAIL  #### Study apply further: why and when to use it vs. map?
point_df["PASS/FAIL"] = point_df["평균"].apply(lambda data : "PASS" if data >= 80 else "FAIL")
point_df
# groupby, merge(join), pivot, apply
# Make sure to understand these thoroughly before moving on
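# Hedged sketch of two of them, reusing point_df from above:
# groupby: mean of the "평균" column per PASS/FAIL group
print(point_df.groupby("PASS/FAIL")["평균"].mean())
# merge: join two small throwaway frames on a shared key column
left = pd.DataFrame({"key": [1, 2], "a": ["x", "y"]})
right = pd.DataFrame({"key": [1, 2], "b": [10, 20]})
print(pd.merge(left, right, on="key"))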
###Output
_____no_output_____ |
.ipynb_checkpoints/sql_for_data_analysis9-checkpoint.ipynb | ###Markdown
ADVANCED SQL 3: Union and Performance Tuning We connect to the MySQL server and workbench and analyze the parch-and-posey database. This notebook is the practical companion to the SQL for Data Analysis course at Udacity.
###Code
# we import some required libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from pprint import pprint
import time
print('Done!')
import mysql
from mysql.connector import Error
from getpass import getpass
db_name = 'parch_and_posey'
try:
connection = mysql.connector.connect(host='localhost',
database=db_name,
user=input('Enter UserName:'),
password=getpass('Enter Password:'))
if connection.is_connected():
db_Info = connection.get_server_info()
print("Connected to MySQL Server version ", db_Info)
cursor = connection.cursor()
cursor.execute("select database();")
record = cursor.fetchone()
print("You're connected to database: ", record)
except Error as e:
print("Error while connecting to MySQL", e)
def query_to_df(query):
st = time.time()
# Assert Every Query ends with a semi-colon
try:
assert query.endswith(';')
except AssertionError:
return 'ERROR: Query Must End with ;'
# so we never have more than 30 rows displayed
pd.set_option('display.max_rows', 30)
df = None
# Process the query
cursor.execute(query)
columns = cursor.description
result = []
for value in cursor.fetchall():
tmp = {}
for (index,column) in enumerate(value):
tmp[columns[index][0]] = [column]
result.append(tmp)
# Create a DataFrame from all results
for ind, data in enumerate(result):
if ind >= 1:
x = pd.DataFrame(data)
df = pd.concat([df, x], ignore_index=True)
else:
df = pd.DataFrame(data)
print(f'Query ran for {time.time()-st} secs!')
return df
# Let's see the tables in Parch-and-Posey database
query_to_df(
'SHOW TABLES;'
)
# 1. For the accounts table
query = 'SELECT * FROM accounts LIMIT 3;'
query_to_df(query)
# 2. For the orders table
query = 'SELECT * FROM orders LIMIT 3;'
query_to_df(query)
# 3. For the sales_reps table
query = 'SELECT * FROM sales_reps LIMIT 3;'
query_to_df(query)
# 4. For the web_events table
query = 'SELECT * FROM web_events LIMIT 3;'
query_to_df(query)
# 5. For the region table
query = 'SELECT * FROM region LIMIT 3;'
query_to_df(query)
###Output
Query ran for 0.0 secs!
###Markdown
UNION:While JOINs allow us to stack tables or their columns side-by-side horizontally, UNIONs allow us to stack two tables vertically, one atop the other. Appending Data via UNION**SQL's two strict rules for appending data:*** Both tables must have the same number of columns.* Those columns must have the same data types in the same order as the first table.A common misconception is that column names have to be the same. Column names, in fact, don't need to be the same to append two tables but you will find that they typically are. UNION Use Case* The UNION operator is used to combine the result sets of 2 or more SELECT statements. It removes duplicate rows between the various SELECT statements.* Each SELECT statement within the UNION must have the same number of fields in the result sets with similar data types.Typically, the use case for leveraging the UNION command in SQL is when a user wants to pull together distinct values of specified columns that are spread across multiple tables. For example, a chef wants to pull together the ingredients and respective aisle across three separate meals that are maintained in different tables. Details of UNION* There must be the same number of expressions in both SELECT statements.* The corresponding expressions must have the same data type in the SELECT statements. For example: expression1 must be the same data type in both the first and second SELECT statement.**Expert Tip*** UNION removes duplicate rows.* UNION ALL does not remove duplicate rows.* We'd likely use UNION ALL far more often than UNION in data analysis**[LINK](https://www.techonthenet.com/sql/union.php)****QUIZ Appending Data via UNION**Write a query that uses UNION ALL on two instances (and selecting all columns) of the accounts table. Then inspect the results and answer the subsequent quiz. The first part of the query...
###Code
query_to_df(
"SELECT * FROM accounts WHERE name < primary_poc;"
)
###Output
Query ran for 0.3004770278930664 secs!
###Markdown
The second part of the query...
###Code
query_to_df(
"SELECT * FROM accounts WHERE id % 3 = 0;"
)
###Output
Query ran for 0.21996235847473145 secs!
###Markdown
Combining both parts with UNION-ALL...
###Code
query_to_df(
"SELECT * FROM accounts WHERE name < primary_poc \
UNION ALL \
SELECT * FROM accounts WHERE id % 3 = 0;"
)
###Output
Query ran for 0.5679306983947754 secs!
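###Markdown
As a quick hedged aside: the combined result above kept duplicate rows because we used UNION ALL. Stacking the accounts table on itself shows the difference directly, since plain UNION collapses identical rows while UNION ALL keeps both copies.
###Code
# Hedged sketch: compare row counts of UNION vs UNION ALL on the same table
query_to_df(
    "SELECT COUNT(*) rows_kept FROM \
     (SELECT * FROM accounts UNION SELECT * FROM accounts) t;"
)
query_to_df(
    "SELECT COUNT(*) rows_kept FROM \
     (SELECT * FROM accounts UNION ALL SELECT * FROM accounts) t;"
)
###Output
_____no_output_____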
###Markdown
**QUERY 2**
###Code
query_to_df(
"SELECT * FROM accounts WHERE name='Walmart' \
UNION ALL \
SELECT * FROM accounts WHERE name='Disney';"
)
###Output
Query ran for 0.006983757019042969 secs!
###Markdown
The above result from the Union-All query can simply be derived via...
###Code
query_to_df(
"SELECT * FROM accounts WHERE name='Walmart' OR name='Disney';"
)
###Output
Query ran for 0.01495981216430664 secs!
###Markdown
Performing Operations on a Combined DatasetPerform a query that does `UNION-ALL` on all rows and all columns of the accounts table. Wrap that in a **Common-Table-Expression(CTE)** or `WITH` clause called _double_accounts_ and then do a `COUNT` of the number of times an account name appears in _double_accounts_.
###Code
query_to_df(
"WITH \
double_accounts AS (SELECT * FROM accounts UNION ALL SELECT * FROM accounts) \
SELECT name acct_name, COUNT(*) count FROM double_accounts GROUP BY name;"
)
###Output
Query ran for 0.3335433006286621 secs!
###Markdown
SQL Query Performance Tuning One way to make a query run faster is to reduce the number of calculations that need to be performed. Some of the high-level things that will affect the number of calculations a given query will make include:* Table size* Joins* AggregationsQuery runtime is also dependent on some things that you can’t really control related to the database itself:* Other users running queries concurrently on the database* Database software and optimization (e.g., Postgres is optimized differently than Redshift) Factors Under Our Control:* Filtering the data for only the observations we need can dramatically improve query speed. For example, if we have time-series data, limiting to a small time span can allow the query to run faster.* Keep in mind that we can always perform EDA on a subset of data, refine the work into a final query, then remove the limitation and run the query on the entire dataset. * The point immediately above is why most SQL editors automatically append a limit to most SQL queries* It's better to reduce table sizes before joining them using simple pre-aggregation. But make sure the aggregation logic and join logic in your query are correct so as to derive the correct results.* We can add `EXPLAIN` at the beginning of every working query to get a rough estimate of how expensive that query would be to run. This returns a query plan that shows the order in which our query will be executed. This is most useful when we run EXPLAIN on a query, then modify it and run EXPLAIN again to check whether the change lowers the estimated cost.* Sub-queries can be particularly useful in improving the runtime of queries. We can use them to pre-aggregate our results in sub-queries then finalize the work in the main query using these pre-aggregated results. Expert TipIf you’d like to understand this a little better, you can do some extra research on cartesian products. It’s also worth noting that the FULL JOIN and COUNT above actually runs pretty fast—it’s the COUNT(DISTINCT) that takes forever.
###Code
query_to_df(
"EXPLAIN WITH \
double_accounts AS (SELECT * FROM accounts UNION ALL SELECT * FROM accounts) \
SELECT name acct_name, COUNT(*) count FROM double_accounts GROUP BY name;"
)
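# Hedged sketch of the pre-aggregation idea (assuming the orders table's
# account_id / total_amt_usd columns): totalling orders per account in a
# sub-query shrinks the row count before the join has to run.
query_to_df(
    "SELECT a.name, o.total_usd \
     FROM accounts a \
     JOIN (SELECT account_id, SUM(total_amt_usd) total_usd \
           FROM orders \
           GROUP BY account_id) o \
     ON o.account_id = a.id \
     ORDER BY o.total_usd DESC LIMIT 10;"
)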
# Change False to True below and run cell to terminate connection
if True and connection.is_connected():
cursor.close()
connection.close()
print(f'Connection Terminated: {record} Database.')
###Output
Connection Terminated: ('parch_and_posey',) Database.
|
jane/jane_debug_console.ipynb | ###Markdown
Debug consoleThe code below can be used to debug communication between PC and PSoC
###Code
from pynq import Overlay, PL
from pynq.mmio import MMIO
from pynq.gpio import GPIO
import numpy as np
import sys
from jane_socket import LINK
from pynq import Clocks
from pynq import Xlnk
xlnk = Xlnk()
#jane = Overlay("/home/xilinx/pynq/overlays/jane/jane.bit")
jane = Overlay("jane.bit")
#memory = MMIO(jane.ip_dict['axi_bram_ctrl_0']['phys_addr'],jane.ip_dict['axi_bram_ctrl_0']['addr_range'])
Clocks.fclk0_mhz = 100.0 #Default is 100MHz
#The following is a hack. By setting 100MHz the board produces ~66.6 MHz instead.
# We added a clock synthesizer inside the overlay that takes ~66.6 MHz as input and generates
#99.921 MHz as output (default)
#dma_send = jane.pseudoclock.PS_to_PL
dma_send = jane.PS_to_PL
#num_words_mmio = MMIO(PL.ip_dict['pseudoclock/num_words']['phys_addr'],PL.ip_dict['pseudoclock/num_words']['addr_range'])
num_words_mmio = MMIO(PL.ip_dict['num_words']['phys_addr'],PL.ip_dict['num_words']['addr_range'])
run_pin = GPIO(GPIO.get_gpio_pin(jane.gpio_dict['run']['index']),"out")
clk_pin = GPIO(GPIO.get_gpio_pin(jane.gpio_dict['clk_pin']['index']),"out")
reset_pin = GPIO(GPIO.get_gpio_pin(jane.gpio_dict['reset_pin']['index']),"out")
status_pins = (GPIO(GPIO.get_gpio_pin(3), 'in'), #stop
GPIO(GPIO.get_gpio_pin(4), 'in'), #reset
GPIO(GPIO.get_gpio_pin(5), 'in'), #running
GPIO(GPIO.get_gpio_pin(6), 'in')) #wait
def print_status_pins():
print("STOP:{}".format(status_pins[0].read()))
print("RESET:{}".format(status_pins[1].read()))
print("RUN:{}".format(status_pins[2].read()))
print("WAIT:{}".format(status_pins[3].read()))
def print_program_line(program, n):
#print("{:032b}|{:032b}|{:032b}|{:032b}".format(
# program[n*4+3],program[n*4+2],program[n*4+1],program[n*4]))
bitstream = "{:032b}{:032b}{:032b}{:032b}".format(
program[n*4+3],program[n*4+2],program[n*4+1],program[n*4])[::-1]
opcode = int(bitstream[52:56][::-1],2)
flags = int(bitstream[56:120][::-1],2)
data = int(bitstream[32:52][::-1],2)
time =int(bitstream[0:32][::-1],2)
print("{:064b}|{}|{}|{}".format(flags,data,opcode,time))
def toggle_start():
run_pin.write(1)
run_pin.write(0)
print("Status pins after start:")
print_status_pins()
def toggle_trigger():
clk_pin.write(1)
clk_pin.write(0)
print("Status pins after trigger:")
print_status_pins()
def read_clk_freq():
data = np.array(Clocks.fclk0_mhz, dtype = np.float64)
connection.connection.sendall(data)
def reset_brd():
reset_pin.write(1)
print("Status pins during reset:")
print_status_pins()
reset_pin.write(0)
connection.connection.sendall(b'\x00')
print("Status pins after reset:")
print_status_pins()
def send_status():
status = 0
for n,w in enumerate(status_pins):
status += w.read()*(2**n)
status = np.array(status, dtype = np.uint8)
connection.connection.sendall(status)
def watchdog():
'''This is a dummy command.
'''
pass
def abort():
'''This is a dummy command.
'''
pass
def receive_program(connection):
print("STOP:{}".format(status_pins[0].read()))
print("RESET:{}".format(status_pins[1].read()))
print("WAIT:{}".format(status_pins[2].read()))
print("RUN:{}".format(status_pins[3].read()))
#Receiving data size as 4 bytes to firm a 32 bit number
data_size = np.array(0,dtype=np.uint32)
buff = connection.read_all_data(4)
np.copyto(data_size,np.frombuffer(buff,dtype=np.uint32))
print("Received size of program: {} bytes".format(data_size))
#Receiving program
buff = connection.read_all_data(data_size)
#print("Buffer received {} bytes".format(len(buff)))
#Allocating memory
program = xlnk.cma_array(shape=(data_size//4,), dtype=np.uint32)
#print("Memory allocated")
np.copyto(program,np.frombuffer(buff,dtype=np.uint32))
print("Memory content:")
for n in range(0, 10):
print_program_line(program,n)
#Sending program size to DMA engine
num_words_mmio.array[0]=data_size//4-1
#Starting the DMA channel
dma_send.sendchannel.start()
#Starting DMA transfer
dma_send.sendchannel.transfer(program)
print("DMA started and waiting...")
dma_send.sendchannel.wait()
print("...DMA done!")
program.close()
del program
#print("Memory de-allocated")
while True:
connection = LINK('192.168.2.99',reuse_address=True)
allowed_instr = {'receive_program':receive_program, #connection.receive_program
'toggle_start': toggle_start,
'read_clk_freq': read_clk_freq,
'reset_brd': reset_brd,
'print': print,
'watchdog': watchdog,
'abort': abort,
'send_status': send_status}
instruction = ""
while not (instruction == "abort()") :
# eval(instruction,{'__builtins__': None}, allowed_instr)
try:
instruction = connection.receive_string()
#print("I have received the instruction: {}".format(instruction))
eval(instruction)
except Exception as e:
print(e)
print("\u001b[2KTimeout: connection lost! Ready for reconnection. ",end = '\r')
connection.reconnect()
#print("Connection properly closed by client!")
del connection
###Output
_____no_output_____
###Markdown
Code to emulate a trigger pulse
###Code
for n in range(1):
clk_pin.write(1)
clk_pin.write(0)
###Output
_____no_output_____ |
Intel Machine Learning 501/Week3_Train_Test_Splits_Validation_Linear_Regression_HW.ipynb | ###Markdown
Train Test Splits, Cross Validation, and Linear Regression IntroductionWe will be working with a data set based on [housing prices in Ames, Iowa](https://www.kaggle.com/c/house-prices-advanced-regression-techniques). It was compiled for educational use to be a modernized and expanded alternative to the well-known Boston Housing dataset. This version of the data set has had some missing values filled for convenience.There are an extensive number of features, so they've been described in the table below. Predictor* SalePrice: The property's sale price in dollars. Features MoSold: Month Sold YrSold: Year Sold SaleType: Type of sale SaleCondition: Condition of sale MSSubClass: The building class MSZoning: The general zoning classification Neighborhood: Physical locations within Ames city limits Street: Type of road access Alley: Type of alley access LotArea: Lot size in square feet LotConfig: Lot configuration LotFrontage: Linear feet of street connected to property LotShape: General shape of property LandSlope: Slope of property LandContour: Flatness of the property YearBuilt: Original construction date YearRemodAdd: Remodel date OverallQual: Overall material and finish quality OverallCond: Overall condition rating Utilities: Type of utilities available Foundation: Type of foundation Functional: Home functionality rating BldgType: Type of dwelling HouseStyle: Style of dwelling 1stFlrSF: First Floor square feet 2ndFlrSF: Second floor square feet LowQualFinSF: Low quality finished square feet (all floors) GrLivArea: Above grade (ground) living area square feet TotRmsAbvGrd: Total rooms above grade (does not include bathrooms) Condition1: Proximity to main road or railroad Condition2: Proximity to main road or railroad (if a second is present) RoofStyle: Type of roof RoofMatl: Roof material ExterQual: Exterior material quality ExterCond: Present condition of the material on the exterior Exterior1st: Exterior covering on house Exterior2nd: Exterior covering on house (if more than one material) MasVnrType: Masonry veneer type MasVnrArea: Masonry veneer area in square feet WoodDeckSF: Wood deck area in square feet OpenPorchSF: Open porch area in square feet EnclosedPorch: Enclosed porch area in square feet 3SsnPorch: Three season porch area in square feet ScreenPorch: Screen porch area in square feet PoolArea: Pool area in square feet PoolQC: Pool quality Fence: Fence quality PavedDrive: Paved driveway GarageType: Garage location GarageYrBlt: Year garage was built GarageFinish: Interior finish of the garage GarageCars: Size of garage in car capacity GarageArea: Size of garage in square feet GarageQual: Garage quality GarageCond: Garage condition Heating: Type of heating HeatingQC: Heating quality and condition CentralAir: Central air conditioning Electrical: Electrical system FullBath: Full bathrooms above grade HalfBath: Half baths above grade BedroomAbvGr: Number of bedrooms above basement level KitchenAbvGr: Number of kitchens KitchenQual: Kitchen quality Fireplaces: Number of fireplaces FireplaceQu: Fireplace quality MiscFeature: Miscellaneous feature not covered in other categories MiscVal: Value of miscellaneous feature BsmtQual: Height of the basement BsmtCond: General condition of the basement BsmtExposure: Walkout or garden level basement walls BsmtFinType1: Quality of basement finished area BsmtFinSF1: Type 1 finished square feet BsmtFinType2: Quality of second finished area (if present) BsmtFinSF2: Type 2 finished square feet BsmtUnfSF: Unfinished square feet of basement area 
BsmtFullBath: Basement full bathrooms BsmtHalfBath: Basement half bathrooms TotalBsmtSF: Total square feet of basement area
###Code
from __future__ import print_function
import os
data_path = ['data']
###Output
_____no_output_____
###Markdown
Question 1* Import the data using Pandas and examine the shape. There are 79 feature columns plus the predictor, the sale price (`SalePrice`). * There are three different types: integers (`int64`), floats (`float64`), and strings (`object`, categoricals). Examine how many there are of each data type.
###Code
import pandas as pd
import numpy as np
# Import the data using the file path
filepath = os.sep.join(data_path + ['Ames_Housing_Sales.csv'])
data = pd.read_csv(filepath, sep=',')
print(data.shape)
data.dtypes.value_counts()
###Output
_____no_output_____
###Markdown
Question 2As discussed in the lecture, a significant challenge, particularly when dealing with data that have many columns, is ensuring each column gets encoded correctly. This is particularly true with data columns that are ordered categoricals (ordinals) vs unordered categoricals. Unordered categoricals should be one-hot encoded, however this can significantly increase the number of features and creates features that are highly correlated with each other.Determine how many total features would be present, relative to what currently exists, if all string (object) features are one-hot encoded. Recall that the total number of one-hot encoded columns is `n-1`, where `n` is the number of categories.
###Code
# Select the object (string) columns
mask = data.dtypes == np.object
categorical_cols = data.columns[mask]
# Determine how many extra columns would be created
num_ohc_cols = (data[categorical_cols]
.apply(lambda x: x.nunique())
.sort_values(ascending=False))
# No need to encode if there is only one value
small_num_ohc_cols = num_ohc_cols.loc[num_ohc_cols>1]
# Number of one-hot columns is one less than the number of categories
small_num_ohc_cols -= 1
# This is 215 columns, assuming the original ones are dropped.
# This is quite a few extra columns!
small_num_ohc_cols.sum()
###Output
_____no_output_____
###Markdown
Question 3. Let's create a new data set where all of the above categorical features will be one-hot encoded. We can fit this data and see how it affects the results.* Use the dataframe `.copy()` method to create a completely separate copy of the dataframe for one-hot encoding* On this new dataframe, one-hot encode each of the appropriate columns and add it back to the dataframe. Be sure to drop the original column.* For the data that are not one-hot encoded, drop the columns that are string categoricals.For the first step, numerically encoding the string categoricals, either Scikit-learn's `LabelEncoder` or `DictVectorizer` can be used. However, the former is probably easier since it doesn't require specifying a numerical value for each category, and we are going to one-hot encode all of the numerical values anyway. (Can you think of a time when `DictVectorizer` might be preferred?)
###Code
from sklearn.preprocessing import OneHotEncoder, LabelEncoder
# Copy of the data
data_ohc = data.copy()
# The encoders
le = LabelEncoder()
ohc = OneHotEncoder()
for col in num_ohc_cols.index:
# Integer encode the string categories
dat = le.fit_transform(data_ohc[col]).astype(np.int)
# Remove the original column from the dataframe
data_ohc = data_ohc.drop(col, axis=1)
# One hot encode the data--this returns a sparse array
new_dat = ohc.fit_transform(dat.reshape(-1,1))
# Create unique column names
n_cols = new_dat.shape[1]
col_names = ['_'.join([col, str(x)]) for x in range(n_cols)]
# Create the new dataframe
new_df = pd.DataFrame(new_dat.toarray(),
index=data_ohc.index,
columns=col_names)
# Append the new data to the dataframe
data_ohc = pd.concat([data_ohc, new_df], axis=1)
# Column difference is as calculated above
data_ohc.shape[1] - data.shape[1]
print(data.shape[1])
# Remove the string columns from the dataframe
data = data.drop(num_ohc_cols.index, axis=1)
print(data.shape[1])
###Output
80
37
###Markdown
Question 4* Create train and test splits of both data sets. To ensure the data gets split the same way, use the same `random_state` in each of the two splits.* For each data set, fit a basic linear regression model on the training data. * Calculate the mean squared error on both the train and test sets for the respective models. Which model produces smaller error on the test data and why?
###Code
from sklearn.model_selection import train_test_split
y_col = 'SalePrice'
# Split the data that is not one-hot encoded
feature_cols = [x for x in data.columns if x != y_col]
X_data = data[feature_cols]
y_data = data[y_col]
X_train, X_test, y_train, y_test = train_test_split(X_data, y_data,
test_size=0.3, random_state=42)
# Split the data that is one-hot encoded
feature_cols = [x for x in data_ohc.columns if x != y_col]
X_data_ohc = data_ohc[feature_cols]
y_data_ohc = data_ohc[y_col]
X_train_ohc, X_test_ohc, y_train_ohc, y_test_ohc = train_test_split(X_data_ohc, y_data_ohc,
test_size=0.3, random_state=42)
# Compare the indices to ensure they are identical
(X_train_ohc.index == X_train.index).all()
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
LR = LinearRegression()
# Storage for error values
error_df = list()
# Data that have not been one-hot encoded
LR = LR.fit(X_train, y_train)
y_train_pred = LR.predict(X_train)
y_test_pred = LR.predict(X_test)
error_df.append(pd.Series({'train': mean_squared_error(y_train, y_train_pred),
'test' : mean_squared_error(y_test, y_test_pred)},
name='no enc'))
# Data that have been one-hot encoded
LR = LR.fit(X_train_ohc, y_train_ohc)
y_train_ohc_pred = LR.predict(X_train_ohc)
y_test_ohc_pred = LR.predict(X_test_ohc)
error_df.append(pd.Series({'train': mean_squared_error(y_train_ohc, y_train_ohc_pred),
'test' : mean_squared_error(y_test_ohc, y_test_ohc_pred)},
name='one-hot enc'))
# Assemble the results
error_df = pd.concat(error_df, axis=1)
error_df
###Output
_____no_output_____
###Markdown
Note that the error values on the one-hot encoded data are very different for the train and test data. In particular, the errors on the test data are much higher. Based on the lecture, this is because the one-hot encoded model is overfitting the data. We will learn how to deal with issues like this in the next lesson. Question 5. For each of the data sets (one-hot encoded and not encoded):* Scale all the non-one-hot-encoded values using one of the following: `StandardScaler`, `MinMaxScaler`, `MaxAbsScaler`.* Compare the error calculated on the test setsBe sure to calculate the skew (to decide if a transformation should be done) and fit the scaler on *ONLY* the training data, but then apply it to both the train and test data identically.
###Code
# Mute the SettingWithCopy warnings
pd.options.mode.chained_assignment = None
from sklearn.preprocessing import StandardScaler, MinMaxScaler, MaxAbsScaler
scalers = {'standard': StandardScaler(),
'minmax': MinMaxScaler(),
'maxabs': MaxAbsScaler()}
training_test_sets = {
'not_encoded': (X_train, y_train, X_test, y_test),
'one_hot_encoded': (X_train_ohc, y_train_ohc, X_test_ohc, y_test_ohc)}
# Get the list of float columns, and the float data
# so that we don't scale something we already scaled.
# We're supposed to scale the original data each time
mask = X_train.dtypes == np.float
float_columns = X_train.columns[mask]
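# Hedged aside: the question also asks us to check skew before deciding on a transform;
# a quick look at the float columns (values far from 0 suggest a log/power transform could help)
print(X_train[float_columns].skew())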
# initialize model
LR = LinearRegression()
# iterate over all possible combinations and get the errors
errors = {}
for encoding_label, (_X_train, _y_train, _X_test, _y_test) in training_test_sets.items():
for scaler_label, scaler in scalers.items():
trainingset = _X_train.copy() # copy because we dont want to scale this more than once.
testset = _X_test.copy()
trainingset[float_columns] = scaler.fit_transform(trainingset[float_columns])
testset[float_columns] = scaler.transform(testset[float_columns])
LR.fit(trainingset, _y_train)
predictions = LR.predict(testset)
key = encoding_label + ' - ' + scaler_label + 'scaling'
errors[key] = mean_squared_error(_y_test, predictions)
errors = pd.Series(errors)
print(errors.to_string())
print('-' * 80)
for key, error_val in errors.items():
print(key, error_val)
###Output
not_encoded - maxabsscaling 1.372324e+09
not_encoded - minmaxscaling 1.372106e+09
not_encoded - standardscaling 1.372182e+09
one_hot_encoded - maxabsscaling 8.065328e+09
one_hot_encoded - minmaxscaling 8.065328e+09
one_hot_encoded - standardscaling 3.825075e+27
--------------------------------------------------------------------------------
not_encoded - maxabsscaling 1372324284.387417
not_encoded - minmaxscaling 1372106183.6621513
not_encoded - standardscaling 1372182358.9345045
one_hot_encoded - maxabsscaling 8065327607.228361
one_hot_encoded - minmaxscaling 8065327607.35342
one_hot_encoded - standardscaling 3.8250752897041467e+27
###Markdown
Question 6Plot predictions vs actual for one of the models.
###Code
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set_context('talk')
sns.set_style('ticks')
sns.set_palette('dark')
ax = plt.axes()
# we are going to use y_test, y_test_pred
ax.scatter(y_test, y_test_pred, alpha=.6)
ax.set(xlabel='Ground truth',
ylabel='Predictions',
title='Ames, Iowa House Price Predictions vs Truth, using Linear Regression');
###Output
_____no_output_____ |
Workshop/circuits.ipynb | ###Markdown
Introduce Qiskit and demonstrate superposition and entanglement
###Code
import numpy as np
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
from qiskit import execute
# Create a Quantum Register with 1 qubit.
q = QuantumRegister(1, 'q')
# Create a Quantum Circuit acting on the q register
circ = QuantumCircuit(q)
###Output
_____no_output_____
###Markdown
Superposition principleIn the following we apply the Hadamard gate to qubit '0'. This operation takes the qubit \begin{equation*}|0\rangle \rightarrow \frac{|0\rangle + |1\rangle}{\sqrt{2}}\end{equation*}so that a measurement finds the qubit in state '0' or '1', each with probability $0.5$.
###Code
circ.h(q[0])
circ.draw()
c = ClassicalRegister(1,'c')
meas = QuantumCircuit(q,c)
meas.barrier(q)
meas.measure(q,c)
qc = circ + meas
qc.draw()
# Import Aer
from qiskit import BasicAer
# Use Aer's qasm_simulator
backend_sim = BasicAer.get_backend('qasm_simulator')
# Execute the circuit on the qasm simulator.
# We've set the number of repeats of the circuit
# to be 1024, which is the default.
job_sim = execute(qc, backend_sim, shots=1024)
# Grab the results from the job.
result_sim = job_sim.result()
counts = result_sim.get_counts(qc)
print(counts)
from qiskit.tools.visualization import plot_histogram
plot_histogram(counts)
###Output
_____no_output_____
###Markdown
EntanglementEntanglement is a uniquely quantum phenomenon in which qubits become (anti-)correlated. In the following we will demonstrate the simplest kind of entanglement, i.e. entanglement between two qubits. The resulting states are referred to as Bell states.
###Code
# Create a Quantum Register with 2 qubits.
q = QuantumRegister(2, 'q')
c = ClassicalRegister(2,'c')
# Create a Quantum Circuit acting on the q register
qc = QuantumCircuit(q,c)
qc.h(q[0])
qc.cx(q[0], q[1])
qc.measure(q,c)
qc.draw()
# Use Aer's qasm_simulator
backend_sim = BasicAer.get_backend('qasm_simulator')
# Execute the circuit on the qasm simulator.
# We've set the number of repeats of the circuit
# to be 1024, which is the default.
job_sim = execute(qc, backend_sim, shots=1024)
# Grab the results from the job.
result_sim = job_sim.result()
counts = result_sim.get_counts(qc)
print(counts)
plot_histogram(counts)
###Output
_____no_output_____
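###Markdown
As a hedged aside (not part of the original workshop flow), we can also inspect the Bell-state amplitudes directly with BasicAer's statevector simulator, using a copy of the circuit without measurement:
###Code
# Rebuild the Bell circuit without measurement and read out its statevector
q_sv = QuantumRegister(2, 'q')
bell = QuantumCircuit(q_sv)
bell.h(q_sv[0])
bell.cx(q_sv[0], q_sv[1])
sv_backend = BasicAer.get_backend('statevector_simulator')
state = execute(bell, sv_backend).result().get_statevector(bell)
print(state)  # expect ~0.707 amplitude on |00> and |11>
###Output
_____no_output_____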
###Markdown
Simulating in IBM Cloud (HPC)
###Code
from qiskit import IBMQ
IBMQ.save_account('YOUR_API_TOKEN')
IBMQ.load_accounts()
print("Available backends:")
IBMQ.backends()
backend = IBMQ.get_backend('ibmq_qasm_simulator', hub=None)
shots = 1024 # Number of shots to run the program (experiment); maximum is 8192 shots.
max_credits = 3 # Maximum number of credits to spend on executions.
job_hpc = execute(qc, backend=backend, shots=shots, max_credits=max_credits)
result_hpc = job_hpc.result()
counts_hpc = result_hpc.get_counts(qc)
plot_histogram(counts_hpc)
###Output
_____no_output_____
###Markdown
Running on real quantum machine
###Code
from qiskit.providers.ibmq import least_busy
large_enough_devices = IBMQ.backends(filters=lambda x: x.configuration().n_qubits > 4 and
not x.configuration().simulator)
backend = least_busy(large_enough_devices)
print("The best backend is " + backend.name())
from qiskit.tools.monitor import job_monitor
shots = 1024 # Number of shots to run the program (experiment); maximum is 8192 shots.
max_credits = 3 # Maximum number of credits to spend on executions.
job_exp = execute(qc, backend=backend, shots=shots, max_credits=max_credits)
job_monitor(job_exp)
result_exp = job_exp.result()
counts_exp = result_exp.get_counts(qc)
plot_histogram([counts_exp,counts])
###Output
_____no_output_____ |
regressao/data-science/reg-linear/Exercicio/Regressao Linear - Exercicio.ipynb | ###Markdown
Data Science - Linear Regression Getting to Know the Dataset Importing libraries
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
The Dataset and the Project Source: https://www.kaggle.com/greenwing1985/housepricing Description: Our goal in this exercise is to create a machine learning model, using the Linear Regression technique, that predicts property prices from a set of known property characteristics. We will use a dataset available on Kaggle that was computer-generated as machine learning practice material for beginners. This dataset was modified to suit our goal, which is to consolidate the knowledge acquired in the Linear Regression training. Follow the steps proposed in the comments above each cell, and happy studying. Data: precos - property sale price area - property area garagem - number of garage spaces banheiros - number of bathrooms lareira - number of fireplaces marmore - whether the property has white marble finishing (1) or not (0) andares - whether the property has more than one floor (1) or not (0) Reading the data The dataset is in the "dados" folder under the name "HousePrices_HalfMil.csv" and uses ";" as the separator.
###Code
dados = pd.read_csv('dados/HousePrices_HalfMil.csv', sep=';')
###Output
_____no_output_____
###Markdown
Viewing the data
###Code
dados
###Output
_____no_output_____
###Markdown
Checking the size of the dataset
###Code
dados.shape[0]
###Output
_____no_output_____
###Markdown
Preliminary Analyses Descriptive statistics
###Code
dados.describe().round(2)
###Output
_____no_output_____
###Markdown
Correlation matrix. The correlation coefficient is a measure of linear association between two variables and lies between -1 and +1, where -1 indicates perfect negative association and +1 indicates perfect positive association. Look at the correlations between the variables: Which ones are most correlated with the dependent variable (Price)? What is the relationship between them (positive or negative)? Is there strong correlation among the explanatory variables?
###Code
dados.corr().round(4)
###Output
_____no_output_____
###Markdown
Behavior of the Dependent Variable (Y) Graphical analyses Importing the seaborn library
###Code
import seaborn as sns
###Output
_____no_output_____
###Markdown
Configure the style and colors of the plots (optional)
###Code
# palette
sns.set_palette('pastel')
# style
sns.set_style('darkgrid')
###Output
_____no_output_____
###Markdown
Box plot of the *dependent* variable (y). Evaluate the behavior of the distribution of the dependent variable: Do there appear to be outliers? Does the box plot show any trend? https://seaborn.pydata.org/generated/seaborn.boxplot.html?highlight=boxplotseaborn.boxplot
###Code
ax = sns.boxplot(data=dados['precos'], orient='v', width=0.5)
ax.figure.set_size_inches(12,6)
ax.set_title('Valores dos imóveis', fontsize=20)
###Output
_____no_output_____
###Markdown
Investigating the *dependent* variable (y) together with other characteristics. Make a box plot of the dependent variable together with each explanatory variable (categorical ones only). Evaluate the behavior of the distribution of the dependent variable against each categorical explanatory variable: Do the statistics change significantly between the categories? Does the box plot show any well-defined trend? Box-plot (Price X Garage)
###Code
ax = sns.boxplot(y= 'precos', x='garagem', data=dados, orient='v', width=0.5)
ax.figure.set_size_inches(12,6)
ax.set_title('Valores dos imóveis', fontsize=20)
ax.set_ylabel('Preços', fontsize=16)
ax.set_xlabel('Garagem', fontsize=16)
###Output
_____no_output_____
###Markdown
Box-plot (Price X Bathrooms)
###Code
ax = sns.boxplot(y= 'precos', x='banheiros', data=dados, orient='v', width=0.5)
ax.figure.set_size_inches(12,6)
ax.set_title('Valores dos imóveis', fontsize=20)
ax.set_ylabel('Preços', fontsize=16)
ax.set_xlabel('Banheiro', fontsize=16)
###Output
_____no_output_____
###Markdown
Box-plot (Price X Fireplaces)
###Code
ax = sns.boxplot(y= 'precos', x='lareira', data=dados, orient='v', width=0.5)
ax.figure.set_size_inches(12,6)
ax.set_title('Valores dos imóveis', fontsize=20)
ax.set_ylabel('Preços', fontsize=16)
ax.set_xlabel('Lareira', fontsize=16)
###Output
_____no_output_____
###Markdown
Box-plot (Price X Marble Finishing)
###Code
ax = sns.boxplot(y= 'precos', x='marmore', data=dados, orient='v', width=0.5)
ax.figure.set_size_inches(12,6)
ax.set_title('Valores dos imóveis', fontsize=20)
ax.set_ylabel('Preços', fontsize=16)
ax.set_xlabel('Acabamento em mármore', fontsize=16)
###Output
_____no_output_____
###Markdown
Box-plot (Price X Floors)
###Code
ax = sns.boxplot(y= 'precos', x='andares', data=dados, orient='v', width=0.5)
ax.figure.set_size_inches(12,6)
ax.set_title('Valores dos imóveis', fontsize=20)
ax.set_ylabel('Preços', fontsize=16)
ax.set_xlabel('Andares', fontsize=16)
###Output
_____no_output_____
###Markdown
Frequency distribution of the *dependent* variable (y). Build a histogram of the dependent variable (Price). Evaluate: Does the frequency distribution of the dependent variable look skewed? Is it reasonable to assume that the dependent variable follows a normal distribution? https://seaborn.pydata.org/generated/seaborn.distplot.html?highlight=distplotseaborn.distplot
###Code
ax = sns.distplot(dados['precos'])
ax.figure.set_size_inches(12,6)
ax.set_title('Distribuição de Frequências', fontsize=20)
ax.set_ylabel('Valores dos imóveis', fontsize=16)
ax
###Output
C:\Users\Carol\anaconda3\envs\alura\lib\site-packages\seaborn\distributions.py:2557: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).
warnings.warn(msg, FutureWarning)
###Markdown
Scatter plots between the variables of the dataset. Plotting the pairplot with only one variable fixed on the y axis https://seaborn.pydata.org/generated/seaborn.pairplot.html?highlight=pairplotseaborn.pairplot Plot scatter plots of the dependent variable against each explanatory variable. Use seaborn's pairplot for this. Plot the same chart using the parameter kind='reg'. Evaluate: Is it possible to identify any linear relationship between the variables? Is the relationship positive or negative? Compare with the results obtained in the correlation matrix.
###Code
ax = sns.pairplot(dados, y_vars='precos', x_vars=['andares','marmore', 'banheiros', 'garagem', 'area', 'lareira'])
ax.fig.suptitle('Dispersão entre as variáveis', fontsize=20, y=1.15)
ax = sns.pairplot(dados, y_vars='precos', x_vars=['andares','marmore', 'banheiros', 'garagem', 'area', 'lareira'], kind='reg')
ax.fig.suptitle('Dispersão entre as variáveis', fontsize=20, y=1.15)
###Output
_____no_output_____
###Markdown
Estimating a Linear Regression Model Importing *train_test_split* from the *scikit-learn* library https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html
###Code
from sklearn.model_selection import train_test_split
###Output
_____no_output_____
###Markdown
Creating a Series (pandas) to store the dependent variable (y)
###Code
y = dados['precos']
###Output
_____no_output_____
###Markdown
Creating a DataFrame (pandas) to store the explanatory variables (X)
###Code
X = dados[['area', 'marmore', 'lareira','banheiros', 'andares', 'garagem']]
###Output
_____no_output_____
###Markdown
Creating the training and test datasets
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=2811)
###Output
_____no_output_____
###Markdown
Importing *LinearRegression* and *metrics* from the *scikit-learn* library https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.htmlhttps://scikit-learn.org/stable/modules/classes.htmlregression-metrics
###Code
from sklearn.linear_model import LinearRegression
from sklearn import metrics
###Output
_____no_output_____
###Markdown
Instantiating the *LinearRegression()* class
###Code
modelo = LinearRegression()
###Output
_____no_output_____
###Markdown
Using the *fit()* method to estimate the linear model on the TRAINING data (y_train and X_train) https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.htmlsklearn.linear_model.LinearRegression.fit
###Code
modelo.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Obtaining the coefficient of determination (R²) of the model estimated on the TRAINING data: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html#sklearn.linear_model.LinearRegression.score Consider: does the model show a good fit? Do you remember what R² represents? What could we do to improve this statistic?
###Code
print('R² = {}'.format(modelo.score(X_train, y_train).round(2)))
###Output
R² = 0.64
###Markdown
Generating predictions for the TEST data (X_test) using the *predict()* method: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html#sklearn.linear_model.LinearRegression.predict
###Code
y_previsto = modelo.predict(X_test)
###Output
_____no_output_____
###Markdown
Obtaining the coefficient of determination (R²) for the predictions of our model: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.r2_score.html#sklearn.metrics.r2_score
###Code
print('R² = %s' % metrics.r2_score(y_test, y_previsto).round(2))
###Output
R² = 0.67
###Markdown
Obtaining Point Predictions. Creating a simple simulator: build a simulator that generates price estimates from a set of characteristics of a property.
###Code
andar = 1
lareira = 0
area = 110
banheiro = 3
garagem = 1
marmore = 1

# The feature order must follow the columns of X: ['area', 'marmore', 'lareira', 'banheiros', 'andares', 'garagem']
entrada = [[area, marmore, lareira, banheiro, andar, garagem]]
print('{0:.2f} reais'.format(modelo.predict(entrada)[0]))
###Output
112236.00 reais
###Markdown
Regression Metrics. Source: https://scikit-learn.org/stable/modules/model_evaluation.html#regression-metrics. Some statistics obtained from a regression model are very useful for comparing estimated models and selecting the best one. The main regression metrics that scikit-learn provides for linear models are the following: Mean Squared Error (EQM): the mean of the squared errors; better fits have a lower $EQM$.$$EQM(y, \hat{y}) = \frac 1n\sum_{i=0}^{n-1}(y_i-\hat{y}_i)^2$$ Root Mean Squared Error (REQM): the square root of the mean of the squared errors; better fits have a lower $\sqrt{EQM}$.$$\sqrt{EQM(y, \hat{y})} = \sqrt{\frac 1n\sum_{i=0}^{n-1}(y_i-\hat{y}_i)^2}$$ Coefficient of Determination (R²): a summary measure of how well the regression line fits the data; it takes values between 0 and 1.$$R^2(y, \hat{y}) = 1 - \frac {\sum_{i=0}^{n-1}(y_i-\hat{y}_i)^2}{\sum_{i=0}^{n-1}(y_i-\bar{y})^2}$$ Obtaining the metrics for the estimated model
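Before using scikit-learn, these metrics can also be computed directly from the formulas above with NumPy. The short cell below is only an illustrative sanity check (the `*_manual` variable names are hypothetical, not part of the original notebook); it assumes `y_test` and `y_previsto` from the cells above.
###Code
# Illustrative check: compute the metrics directly from their definitions
residuos = np.array(y_test) - y_previsto  # residuals y_i - y_hat_i
EQM_manual = np.mean(residuos ** 2)  # mean squared error
REQM_manual = np.sqrt(EQM_manual)  # root mean squared error
R2_manual = 1 - np.sum(residuos ** 2) / np.sum((np.array(y_test) - np.mean(y_test)) ** 2)
print(EQM_manual, REQM_manual, R2_manual)
###Output
_____no_output_____
###Markdown
Computing the same metrics with scikit-learn: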
###Code
EQM_2 = metrics.mean_squared_error(y_test, y_previsto).round(2)
REQM_2 = np.sqrt(metrics.mean_squared_error(y_test, y_previsto)).round(2)
R2_2 = metrics.r2_score(y_test, y_previsto).round(2)
pd.DataFrame([EQM_2,REQM_2, R2_2], ['EQM', 'REQM', 'R²'], columns=['Métricas'])
###Output
_____no_output_____
###Markdown
Saving and Loading the Estimated Model. Importing the pickle library
###Code
import pickle
###Output
_____no_output_____
###Markdown
Saving the estimated model
###Code
output = open('modelo_preco', 'wb')
pickle.dump(modelo, output)
output.close()
###Output
_____no_output_____ |
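###Markdown
Loading the saved model back (a short illustrative sketch to complete the "Saving and Loading" step; it reads the `modelo_preco` file written above):
###Code
modelo_arquivo = open('modelo_preco', 'rb')
modelo_carregado = pickle.load(modelo_arquivo)
modelo_arquivo.close()

# The loaded model behaves exactly like the original one
print('R² = {}'.format(modelo_carregado.score(X_train, y_train).round(2)))
###Output
_____no_output_____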
Python-Standard-Library/FileSystem/tempfile.ipynb | ###Markdown
Creating temporary files with unique names securely, so they cannot be guessed by someone wanting to break the application or steal the data, is challenging. The tempfile module provides several functions for creating temporary file system resources securely. TemporaryFile() opens and returns an unnamed file, NamedTemporaryFile() opens and returns a named file, SpooledTemporaryFile holds its content in memory before writing to disk, and TemporaryDirectory is a context manager that removes the directory when the context is closed. Temporary File
###Code
import os
import tempfile
print('Building a filename with PID:')
filename = '/tmp/guess_my_name.{}.txt'.format(os.getpid())
with open(filename, 'w+b') as temp:
print('temp:')
print(' {!r}'.format(temp))
print('temp.name:')
print(' {!r}'.format(temp.name))
# Clean up the temporary file yourself.
os.remove(filename)
print()
print('TemporaryFile:')
with tempfile.TemporaryFile() as temp:
print('temp:')
print(' {!r}'.format(temp))
print('temp.name:')
print(' {!r}'.format(temp.name))
import os
import tempfile
with tempfile.TemporaryFile() as temp:
temp.write(b'Some data')
temp.seek(0)
print(temp.read())
import tempfile
with tempfile.TemporaryFile(mode='w+t') as f:
f.writelines(['first\n', 'second\n'])
f.seek(0)
for line in f:
print(line.rstrip())
###Output
first
second
###Markdown
Named File
###Code
import os
import pathlib
import tempfile
with tempfile.NamedTemporaryFile() as temp:
print('temp:')
print(' {!r}'.format(temp))
print('temp.name:')
print(' {!r}'.format(temp.name))
f = pathlib.Path(temp.name)
print('Exists after close:', f.exists())
###Output
temp:
<tempfile._TemporaryFileWrapper object at 0x105563828>
temp.name:
'/var/folders/k9/2cxh1k2115s_lw4wtq9mzj9m0000gp/T/tmpmao57w23'
Exists after close: False
###Markdown
Spooled File
###Code
import tempfile
with tempfile.SpooledTemporaryFile(max_size=100,
mode='w+t',
encoding='utf-8') as temp:
print('temp: {!r}'.format(temp))
for i in range(3):
temp.write('This line is repeated over and over.\n')
print(temp._rolled, temp._file)
import tempfile
with tempfile.SpooledTemporaryFile(max_size=1000,
mode='w+t',
encoding='utf-8') as temp:
print('temp: {!r}'.format(temp))
for i in range(3):
temp.write('This line is repeated over and over.\n')
print(temp._rolled, temp._file)
print('rolling over')
temp.rollover()
print(temp._rolled, temp._file)
###Output
temp: <tempfile.SpooledTemporaryFile object at 0x105630780>
False <_io.StringIO object at 0x10554db88>
False <_io.StringIO object at 0x10554db88>
False <_io.StringIO object at 0x10554db88>
rolling over
True <_io.TextIOWrapper name=56 mode='w+t' encoding='utf-8'>
###Markdown
Temporary Directories
###Code
import pathlib
import tempfile
with tempfile.TemporaryDirectory() as directory_name:
the_dir = pathlib.Path(directory_name)
print(the_dir)
a_file = the_dir / 'a_file.txt'
a_file.write_text('This file is deleted.')
print('Directory exists after?', the_dir.exists())
print('Contents after:', list(the_dir.glob('*')))
###Output
/var/folders/k9/2cxh1k2115s_lw4wtq9mzj9m0000gp/T/tmp5tso4jyq
Directory exists after? False
Contents after: []
###Markdown
Predicting Name
###Code
import tempfile
with tempfile.NamedTemporaryFile(suffix='_suffix',
prefix='prefix_',
dir='/tmp') as temp:
print('temp:')
print(' ', temp)
print('temp.name:')
print(' ', temp.name)
###Output
temp:
<tempfile._TemporaryFileWrapper object at 0x1055f1f28>
temp.name:
/tmp/prefix_aa9_7ivp_suffix
###Markdown
Temporary File Location
###Code
import tempfile
print('gettempdir():', tempfile.gettempdir())
print('gettempprefix():', tempfile.gettempprefix())
###Output
gettempdir(): /var/folders/k9/2cxh1k2115s_lw4wtq9mzj9m0000gp/T
gettempprefix(): tmp
|
turku_neural_parser_colab.ipynb | ###Markdown
Turku Neural Parser Pipeline - Python module version on Google Colab* This is a basic tutorial for running the parser pipeline under Google Colab* It makes it possible for anyone to run the parser with GPU acceleration* This notebook downloads and uses the `models_fi_tdt_v2.7` Finnish model; if you want to run this with another model, change the model name in the `Downloading and unpacking the model` and `Running the parser` steps. Table of contents1. Install2. Download and unpack the model3. Running the parser4. Process the output5. Citations Install* Install the pre-built wheel (this takes a while)`pip3 install http://dl.turkunlp.org/turku-parser-models/turku_neural_parser-0.3-py3-none-any.whl`
###Code
!wget -nc http://dl.turkunlp.org/turku-parser-models/turku_neural_parser-0.3-py3-none-any.whl
!pip3 install turku_neural_parser-0.3-py3-none-any.whl
###Output
--2020-12-14 10:20:00-- http://dl.turkunlp.org/turku-parser-models/turku_neural_parser-0.3-py3-none-any.whl
Resolving dl.turkunlp.org (dl.turkunlp.org)... 195.148.30.23
Connecting to dl.turkunlp.org (dl.turkunlp.org)|195.148.30.23|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 99697 (97K) [application/octet-stream]
Saving to: ‘turku_neural_parser-0.3-py3-none-any.whl’
turku_neural_parser 100%[===================>] 97.36K 293KB/s in 0.3s
2020-12-14 10:20:01 (293 KB/s) - ‘turku_neural_parser-0.3-py3-none-any.whl’ saved [99697/99697]
Processing ./turku_neural_parser-0.3-py3-none-any.whl
Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from turku-neural-parser==0.3) (2.23.0)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from turku-neural-parser==0.3) (1.18.5)
Collecting OpenNMT-py>=1.2.0
[?25l Downloading https://files.pythonhosted.org/packages/9f/20/40f8b722aa0e35e259c144b6ec2d684f1aea7de869cf586c67cfd6fe1c55/OpenNMT_py-1.2.0-py3-none-any.whl (195kB)
[K |████████████████████████████████| 204kB 9.1MB/s
[?25hRequirement already satisfied: pyyaml in /usr/local/lib/python3.6/dist-packages (from turku-neural-parser==0.3) (3.13)
Collecting ufal.udpipe
[?25l Downloading https://files.pythonhosted.org/packages/e5/72/2b8b9dc7c80017c790bb3308bbad34b57accfed2ac2f1f4ab252ff4e9cb2/ufal.udpipe-1.2.0.3.tar.gz (304kB)
[K |████████████████████████████████| 307kB 8.9MB/s
[?25hCollecting configargparse
[?25l Downloading https://files.pythonhosted.org/packages/bb/79/3045743bb26ca2e44a1d317c37395462bfed82dbbd38e69a3280b63696ce/ConfigArgParse-1.2.3.tar.gz (42kB)
[K |████████████████████████████████| 51kB 7.2MB/s
[?25hCollecting allennlp==0.9.0
[?25l Downloading https://files.pythonhosted.org/packages/bb/bb/041115d8bad1447080e5d1e30097c95e4b66e36074277afce8620a61cee3/allennlp-0.9.0-py3-none-any.whl (7.6MB)
[K |████████████████████████████████| 7.6MB 15.4MB/s
[?25hCollecting torchtext>=0.4.0
[?25l Downloading https://files.pythonhosted.org/packages/0e/81/be2d72b1ea641afc74557574650a5b421134198de9f68f483ab10d515dca/torchtext-0.8.1-cp36-cp36m-manylinux1_x86_64.whl (7.0MB)
[K |████████████████████████████████| 7.0MB 25.0MB/s
[?25hCollecting transformers==2.11.0
[?25l Downloading https://files.pythonhosted.org/packages/48/35/ad2c5b1b8f99feaaf9d7cdadaeef261f098c6e1a6a2935d4d07662a6b780/transformers-2.11.0-py3-none-any.whl (674kB)
[K |████████████████████████████████| 675kB 53.8MB/s
[?25hRequirement already satisfied: torch>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from turku-neural-parser==0.3) (1.7.0+cu101)
Requirement already satisfied: flask in /usr/local/lib/python3.6/dist-packages (from turku-neural-parser==0.3) (1.1.2)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->turku-neural-parser==0.3) (1.24.3)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->turku-neural-parser==0.3) (2.10)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->turku-neural-parser==0.3) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->turku-neural-parser==0.3) (2020.12.5)
Requirement already satisfied: tensorboard>=1.14 in /usr/local/lib/python3.6/dist-packages (from OpenNMT-py>=1.2.0->turku-neural-parser==0.3) (2.3.0)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from OpenNMT-py>=1.2.0->turku-neural-parser==0.3) (0.16.0)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from OpenNMT-py>=1.2.0->turku-neural-parser==0.3) (1.15.0)
Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from OpenNMT-py>=1.2.0->turku-neural-parser==0.3) (4.41.1)
Collecting waitress
[?25l Downloading https://files.pythonhosted.org/packages/26/d1/5209fb8c764497a592363c47054436a515b47b8c3e4970ddd7184f088857/waitress-1.4.4-py2.py3-none-any.whl (58kB)
[K |████████████████████████████████| 61kB 9.1MB/s
[?25hCollecting pyonmttok==1.*; platform_system == "Linux"
[?25l Downloading https://files.pythonhosted.org/packages/10/21/7a69fa68de7de41ef70b35424d21523ebf2208f0c0fab1355cabc2305ff4/pyonmttok-1.22.2-cp36-cp36m-manylinux1_x86_64.whl (2.5MB)
[K |████████████████████████████████| 2.5MB 41.7MB/s
[?25hCollecting spacy<2.2,>=2.1.0
[?25l Downloading https://files.pythonhosted.org/packages/41/5b/e07dd3bf104237bce4b398558b104c8e500333d6f30eabe3fa9685356b7d/spacy-2.1.9-cp36-cp36m-manylinux1_x86_64.whl (30.8MB)
[K |████████████████████████████████| 30.9MB 106kB/s
[?25hRequirement already satisfied: nltk in /usr/local/lib/python3.6/dist-packages (from allennlp==0.9.0->turku-neural-parser==0.3) (3.2.5)
Collecting parsimonious>=0.8.0
[?25l Downloading https://files.pythonhosted.org/packages/02/fc/067a3f89869a41009e1a7cdfb14725f8ddd246f30f63c645e8ef8a1c56f4/parsimonious-0.8.1.tar.gz (45kB)
[K |████████████████████████████████| 51kB 8.3MB/s
[?25hRequirement already satisfied: sqlparse>=0.2.4 in /usr/local/lib/python3.6/dist-packages (from allennlp==0.9.0->turku-neural-parser==0.3) (0.4.1)
Collecting word2number>=1.1
Downloading https://files.pythonhosted.org/packages/4a/29/a31940c848521f0725f0df6b25dca8917f13a2025b0e8fcbe5d0457e45e6/word2number-1.1.zip
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from allennlp==0.9.0->turku-neural-parser==0.3) (1.4.1)
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.6/dist-packages (from allennlp==0.9.0->turku-neural-parser==0.3) (0.22.2.post1)
Collecting tensorboardX>=1.2
[?25l Downloading https://files.pythonhosted.org/packages/af/0c/4f41bcd45db376e6fe5c619c01100e9b7531c55791b7244815bac6eac32c/tensorboardX-2.1-py2.py3-none-any.whl (308kB)
[K |████████████████████████████████| 317kB 55.5MB/s
[?25hCollecting ftfy
[?25l Downloading https://files.pythonhosted.org/packages/ff/e2/3b51c53dffb1e52d9210ebc01f1fb9f2f6eba9b3201fa971fd3946643c71/ftfy-5.8.tar.gz (64kB)
[K |████████████████████████████████| 71kB 9.6MB/s
[?25hCollecting jsonpickle
Downloading https://files.pythonhosted.org/packages/ee/d5/1cc282dc23346a43aab461bf2e8c36593aacd34242bee1a13fa750db0cfe/jsonpickle-1.4.2-py2.py3-none-any.whl
Collecting numpydoc>=0.8.0
[?25l Downloading https://files.pythonhosted.org/packages/60/1d/9e398c53d6ae27d5ab312ddc16a9ffe1bee0dfdf1d6ec88c40b0ca97582e/numpydoc-1.1.0-py3-none-any.whl (47kB)
[K |████████████████████████████████| 51kB 7.9MB/s
[?25hRequirement already satisfied: pytest in /usr/local/lib/python3.6/dist-packages (from allennlp==0.9.0->turku-neural-parser==0.3) (3.6.4)
Requirement already satisfied: h5py in /usr/local/lib/python3.6/dist-packages (from allennlp==0.9.0->turku-neural-parser==0.3) (2.10.0)
Collecting pytorch-transformers==1.1.0
[?25l Downloading https://files.pythonhosted.org/packages/50/89/ad0d6bb932d0a51793eaabcf1617a36ff530dc9ab9e38f765a35dc293306/pytorch_transformers-1.1.0-py3-none-any.whl (158kB)
[K |████████████████████████████████| 163kB 38.7MB/s
[?25hRequirement already satisfied: editdistance in /usr/local/lib/python3.6/dist-packages (from allennlp==0.9.0->turku-neural-parser==0.3) (0.5.3)
Requirement already satisfied: matplotlib>=2.2.3 in /usr/local/lib/python3.6/dist-packages (from allennlp==0.9.0->turku-neural-parser==0.3) (3.2.2)
Collecting gevent>=1.3.6
[?25l Downloading https://files.pythonhosted.org/packages/3f/92/b80b922f08f222faca53c8d278e2e612192bc74b0e1f0db2f80a6ee46982/gevent-20.9.0-cp36-cp36m-manylinux2010_x86_64.whl (5.3MB)
[K |████████████████████████████████| 5.3MB 49.1MB/s
[?25hCollecting jsonnet>=0.10.0; sys_platform != "win32"
[?25l Downloading https://files.pythonhosted.org/packages/42/40/6f16e5ac994b16fa71c24310f97174ce07d3a97b433275589265c6b94d2b/jsonnet-0.17.0.tar.gz (259kB)
[K |████████████████████████████████| 266kB 56.5MB/s
[?25hCollecting pytorch-pretrained-bert>=0.6.0
[?25l Downloading https://files.pythonhosted.org/packages/d7/e0/c08d5553b89973d9a240605b9c12404bcf8227590de62bae27acbcfe076b/pytorch_pretrained_bert-0.6.2-py3-none-any.whl (123kB)
[K |████████████████████████████████| 133kB 49.2MB/s
[?25hRequirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.6/dist-packages (from allennlp==0.9.0->turku-neural-parser==0.3) (2018.9)
Collecting conllu==1.3.1
Downloading https://files.pythonhosted.org/packages/ae/54/b0ae1199f3d01666821b028cd967f7c0ac527ab162af433d3da69242cea2/conllu-1.3.1-py2.py3-none-any.whl
Collecting flaky
Downloading https://files.pythonhosted.org/packages/43/0e/2f50064e327f41a1eb811df089f813036e19a64b95e33f8e9e0b96c2447e/flaky-3.7.0-py2.py3-none-any.whl
Collecting flask-cors>=3.0.7
Downloading https://files.pythonhosted.org/packages/69/7f/d0aeaaafb5c3c76c8d2141dbe2d4f6dca5d6c31872d4e5349768c1958abc/Flask_Cors-3.0.9-py2.py3-none-any.whl
Collecting overrides
Downloading https://files.pythonhosted.org/packages/ff/b1/10f69c00947518e6676bbd43e739733048de64b8dd998e9c2d5a71f44c5d/overrides-3.1.0.tar.gz
Collecting unidecode
[?25l Downloading https://files.pythonhosted.org/packages/d0/42/d9edfed04228bacea2d824904cae367ee9efd05e6cce7ceaaedd0b0ad964/Unidecode-1.1.1-py2.py3-none-any.whl (238kB)
[K |████████████████████████████████| 245kB 49.7MB/s
[?25hCollecting boto3
[?25l Downloading https://files.pythonhosted.org/packages/87/3e/3a4546165383a5fc9f6f7ba15a261c768aee10662bb06105100d859e8940/boto3-1.16.35-py2.py3-none-any.whl (129kB)
[K |████████████████████████████████| 133kB 60.9MB/s
[?25hCollecting responses>=0.7
Downloading https://files.pythonhosted.org/packages/d5/71/4f04aed03ca35f2d02e1732ca6e996b2d7b40232fb7f1b58ff35f9a89b7b/responses-0.12.1-py2.py3-none-any.whl
Requirement already satisfied: filelock in /usr/local/lib/python3.6/dist-packages (from transformers==2.11.0->turku-neural-parser==0.3) (3.0.12)
Collecting sacremoses
[?25l Downloading https://files.pythonhosted.org/packages/7d/34/09d19aff26edcc8eb2a01bed8e98f13a1537005d31e95233fd48216eed10/sacremoses-0.0.43.tar.gz (883kB)
[K |████████████████████████████████| 890kB 49.6MB/s
[?25hRequirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.6/dist-packages (from transformers==2.11.0->turku-neural-parser==0.3) (2019.12.20)
Requirement already satisfied: dataclasses; python_version < "3.7" in /usr/local/lib/python3.6/dist-packages (from transformers==2.11.0->turku-neural-parser==0.3) (0.8)
Collecting sentencepiece
[?25l Downloading https://files.pythonhosted.org/packages/e5/2d/6d4ca4bef9a67070fa1cac508606328329152b1df10bdf31fb6e4e727894/sentencepiece-0.1.94-cp36-cp36m-manylinux2014_x86_64.whl (1.1MB)
[K |████████████████████████████████| 1.1MB 39.9MB/s
[?25hRequirement already satisfied: packaging in /usr/local/lib/python3.6/dist-packages (from transformers==2.11.0->turku-neural-parser==0.3) (20.7)
Collecting tokenizers==0.7.0
[?25l Downloading https://files.pythonhosted.org/packages/14/e5/a26eb4716523808bb0a799fcfdceb6ebf77a18169d9591b2f46a9adb87d9/tokenizers-0.7.0-cp36-cp36m-manylinux1_x86_64.whl (3.8MB)
[K |████████████████████████████████| 3.8MB 46.7MB/s
[?25hRequirement already satisfied: typing-extensions in /usr/local/lib/python3.6/dist-packages (from torch>=1.6.0->turku-neural-parser==0.3) (3.7.4.3)
Requirement already satisfied: Werkzeug>=0.15 in /usr/local/lib/python3.6/dist-packages (from flask->turku-neural-parser==0.3) (1.0.1)
Requirement already satisfied: Jinja2>=2.10.1 in /usr/local/lib/python3.6/dist-packages (from flask->turku-neural-parser==0.3) (2.11.2)
Requirement already satisfied: click>=5.1 in /usr/local/lib/python3.6/dist-packages (from flask->turku-neural-parser==0.3) (7.1.2)
Requirement already satisfied: itsdangerous>=0.24 in /usr/local/lib/python3.6/dist-packages (from flask->turku-neural-parser==0.3) (1.1.0)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard>=1.14->OpenNMT-py>=1.2.0->turku-neural-parser==0.3) (1.7.0)
Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard>=1.14->OpenNMT-py>=1.2.0->turku-neural-parser==0.3) (50.3.2)
Requirement already satisfied: absl-py>=0.4 in /usr/local/lib/python3.6/dist-packages (from tensorboard>=1.14->OpenNMT-py>=1.2.0->turku-neural-parser==0.3) (0.10.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from tensorboard>=1.14->OpenNMT-py>=1.2.0->turku-neural-parser==0.3) (0.4.2)
Requirement already satisfied: wheel>=0.26; python_version >= "3" in /usr/local/lib/python3.6/dist-packages (from tensorboard>=1.14->OpenNMT-py>=1.2.0->turku-neural-parser==0.3) (0.36.1)
Requirement already satisfied: grpcio>=1.24.3 in /usr/local/lib/python3.6/dist-packages (from tensorboard>=1.14->OpenNMT-py>=1.2.0->turku-neural-parser==0.3) (1.34.0)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard>=1.14->OpenNMT-py>=1.2.0->turku-neural-parser==0.3) (3.3.3)
Requirement already satisfied: protobuf>=3.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard>=1.14->OpenNMT-py>=1.2.0->turku-neural-parser==0.3) (3.12.4)
Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.6/dist-packages (from tensorboard>=1.14->OpenNMT-py>=1.2.0->turku-neural-parser==0.3) (1.17.2)
Requirement already satisfied: wasabi<1.1.0,>=0.2.0 in /usr/local/lib/python3.6/dist-packages (from spacy<2.2,>=2.1.0->allennlp==0.9.0->turku-neural-parser==0.3) (0.8.0)
Collecting thinc<7.1.0,>=7.0.8
[?25l Downloading https://files.pythonhosted.org/packages/18/a5/9ace20422e7bb1bdcad31832ea85c52a09900cd4a7ce711246bfb92206ba/thinc-7.0.8-cp36-cp36m-manylinux1_x86_64.whl (2.1MB)
[K |████████████████████████████████| 2.1MB 56.5MB/s
[?25hRequirement already satisfied: murmurhash<1.1.0,>=0.28.0 in /usr/local/lib/python3.6/dist-packages (from spacy<2.2,>=2.1.0->allennlp==0.9.0->turku-neural-parser==0.3) (1.0.5)
Collecting blis<0.3.0,>=0.2.2
[?25l Downloading https://files.pythonhosted.org/packages/34/46/b1d0bb71d308e820ed30316c5f0a017cb5ef5f4324bcbc7da3cf9d3b075c/blis-0.2.4-cp36-cp36m-manylinux1_x86_64.whl (3.2MB)
[K |████████████████████████████████| 3.2MB 28.7MB/s
[?25hCollecting preshed<2.1.0,>=2.0.1
[?25l Downloading https://files.pythonhosted.org/packages/20/93/f222fb957764a283203525ef20e62008675fd0a14ffff8cc1b1490147c63/preshed-2.0.1-cp36-cp36m-manylinux1_x86_64.whl (83kB)
[K |████████████████████████████████| 92kB 13.6MB/s
[?25hRequirement already satisfied: cymem<2.1.0,>=2.0.2 in /usr/local/lib/python3.6/dist-packages (from spacy<2.2,>=2.1.0->allennlp==0.9.0->turku-neural-parser==0.3) (2.0.5)
Requirement already satisfied: srsly<1.1.0,>=0.0.6 in /usr/local/lib/python3.6/dist-packages (from spacy<2.2,>=2.1.0->allennlp==0.9.0->turku-neural-parser==0.3) (1.0.5)
Collecting plac<1.0.0,>=0.9.6
Downloading https://files.pythonhosted.org/packages/9e/9b/62c60d2f5bc135d2aa1d8c8a86aaf84edb719a59c7f11a4316259e61a298/plac-0.9.6-py2.py3-none-any.whl
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn->allennlp==0.9.0->turku-neural-parser==0.3) (0.17.0)
Requirement already satisfied: wcwidth in /usr/local/lib/python3.6/dist-packages (from ftfy->allennlp==0.9.0->turku-neural-parser==0.3) (0.2.5)
Requirement already satisfied: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.6/dist-packages (from jsonpickle->allennlp==0.9.0->turku-neural-parser==0.3) (3.1.1)
Requirement already satisfied: sphinx>=1.6.5 in /usr/local/lib/python3.6/dist-packages (from numpydoc>=0.8.0->allennlp==0.9.0->turku-neural-parser==0.3) (1.8.5)
Requirement already satisfied: pluggy<0.8,>=0.5 in /usr/local/lib/python3.6/dist-packages (from pytest->allennlp==0.9.0->turku-neural-parser==0.3) (0.7.1)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.6/dist-packages (from pytest->allennlp==0.9.0->turku-neural-parser==0.3) (20.3.0)
Requirement already satisfied: atomicwrites>=1.0 in /usr/local/lib/python3.6/dist-packages (from pytest->allennlp==0.9.0->turku-neural-parser==0.3) (1.4.0)
Requirement already satisfied: more-itertools>=4.0.0 in /usr/local/lib/python3.6/dist-packages (from pytest->allennlp==0.9.0->turku-neural-parser==0.3) (8.6.0)
Requirement already satisfied: py>=1.5.0 in /usr/local/lib/python3.6/dist-packages (from pytest->allennlp==0.9.0->turku-neural-parser==0.3) (1.9.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=2.2.3->allennlp==0.9.0->turku-neural-parser==0.3) (2.4.7)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=2.2.3->allennlp==0.9.0->turku-neural-parser==0.3) (1.3.1)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=2.2.3->allennlp==0.9.0->turku-neural-parser==0.3) (0.10.0)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=2.2.3->allennlp==0.9.0->turku-neural-parser==0.3) (2.8.1)
Collecting greenlet>=0.4.17; platform_python_implementation == "CPython"
[?25l Downloading https://files.pythonhosted.org/packages/80/d0/532e160c777b42f6f393f9de8c88abb8af6c892037c55e4d3a8a211324dd/greenlet-0.4.17-cp36-cp36m-manylinux1_x86_64.whl (44kB)
[K |████████████████████████████████| 51kB 8.2MB/s
[?25hCollecting zope.interface
[?25l Downloading https://files.pythonhosted.org/packages/82/b0/da8afd9b3bd50c7665ecdac062f182982af1173c9081f9af7261091c5588/zope.interface-5.2.0-cp36-cp36m-manylinux2010_x86_64.whl (236kB)
[K |████████████████████████████████| 245kB 53.9MB/s
[?25hCollecting zope.event
Downloading https://files.pythonhosted.org/packages/9e/85/b45408c64f3b888976f1d5b37eed8d746b8d5729a66a49ec846fda27d371/zope.event-4.5.0-py2.py3-none-any.whl
Collecting s3transfer<0.4.0,>=0.3.0
[?25l Downloading https://files.pythonhosted.org/packages/69/79/e6afb3d8b0b4e96cefbdc690f741d7dd24547ff1f94240c997a26fa908d3/s3transfer-0.3.3-py2.py3-none-any.whl (69kB)
[K |████████████████████████████████| 71kB 11.4MB/s
[?25hCollecting botocore<1.20.0,>=1.19.35
[?25l Downloading https://files.pythonhosted.org/packages/cd/f8/d355891fc244cb31ad8a30ce452efbf2b31a48da0239f220a871c54fe829/botocore-1.19.35-py2.py3-none-any.whl (7.1MB)
[K |████████████████████████████████| 7.1MB 47.2MB/s
[?25hCollecting jmespath<1.0.0,>=0.7.1
Downloading https://files.pythonhosted.org/packages/07/cb/5f001272b6faeb23c1c9e0acc04d48eaaf5c862c17709d20e3469c6e0139/jmespath-0.10.0-py2.py3-none-any.whl
Requirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.6/dist-packages (from Jinja2>=2.10.1->flask->turku-neural-parser==0.3) (1.1.1)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard>=1.14->OpenNMT-py>=1.2.0->turku-neural-parser==0.3) (1.3.0)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard>=1.14->OpenNMT-py>=1.2.0->turku-neural-parser==0.3) (4.1.1)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard>=1.14->OpenNMT-py>=1.2.0->turku-neural-parser==0.3) (0.2.8)
Requirement already satisfied: rsa<5,>=3.1.4; python_version >= "3" in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard>=1.14->OpenNMT-py>=1.2.0->turku-neural-parser==0.3) (4.6)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata; python_version < "3.8"->jsonpickle->allennlp==0.9.0->turku-neural-parser==0.3) (3.4.0)
Requirement already satisfied: imagesize in /usr/local/lib/python3.6/dist-packages (from sphinx>=1.6.5->numpydoc>=0.8.0->allennlp==0.9.0->turku-neural-parser==0.3) (1.2.0)
Requirement already satisfied: snowballstemmer>=1.1 in /usr/local/lib/python3.6/dist-packages (from sphinx>=1.6.5->numpydoc>=0.8.0->allennlp==0.9.0->turku-neural-parser==0.3) (2.0.0)
Requirement already satisfied: Pygments>=2.0 in /usr/local/lib/python3.6/dist-packages (from sphinx>=1.6.5->numpydoc>=0.8.0->allennlp==0.9.0->turku-neural-parser==0.3) (2.6.1)
Requirement already satisfied: babel!=2.0,>=1.3 in /usr/local/lib/python3.6/dist-packages (from sphinx>=1.6.5->numpydoc>=0.8.0->allennlp==0.9.0->turku-neural-parser==0.3) (2.9.0)
Requirement already satisfied: sphinxcontrib-websupport in /usr/local/lib/python3.6/dist-packages (from sphinx>=1.6.5->numpydoc>=0.8.0->allennlp==0.9.0->turku-neural-parser==0.3) (1.2.4)
Requirement already satisfied: docutils>=0.11 in /usr/local/lib/python3.6/dist-packages (from sphinx>=1.6.5->numpydoc>=0.8.0->allennlp==0.9.0->turku-neural-parser==0.3) (0.16)
Requirement already satisfied: alabaster<0.8,>=0.7 in /usr/local/lib/python3.6/dist-packages (from sphinx>=1.6.5->numpydoc>=0.8.0->allennlp==0.9.0->turku-neural-parser==0.3) (0.7.12)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard>=1.14->OpenNMT-py>=1.2.0->turku-neural-parser==0.3) (3.1.0)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.6/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard>=1.14->OpenNMT-py>=1.2.0->turku-neural-parser==0.3) (0.4.8)
Requirement already satisfied: sphinxcontrib-serializinghtml in /usr/local/lib/python3.6/dist-packages (from sphinxcontrib-websupport->sphinx>=1.6.5->numpydoc>=0.8.0->allennlp==0.9.0->turku-neural-parser==0.3) (1.1.4)
Building wheels for collected packages: ufal.udpipe, configargparse, parsimonious, word2number, ftfy, jsonnet, overrides, sacremoses
Building wheel for ufal.udpipe (setup.py) ... [?25l[?25hdone
Created wheel for ufal.udpipe: filename=ufal.udpipe-1.2.0.3-cp36-cp36m-linux_x86_64.whl size=5625187 sha256=3683e5734cab98a7971b2e30c75f9fcc34c9cb03926251f668c4430f7be01487
Stored in directory: /root/.cache/pip/wheels/0c/9d/db/6d3404c33da5b7adb6c6972853efb6a27649d3ba15f7e9bebb
Building wheel for configargparse (setup.py) ... [?25l[?25hdone
Created wheel for configargparse: filename=ConfigArgParse-1.2.3-cp36-none-any.whl size=19329 sha256=b81469a4a06a010cef68b8c04344abbd167528db2ca75d19d11fa9d27ff40fec
Stored in directory: /root/.cache/pip/wheels/bd/d6/53/034032da9498bda2385cd50a51a289e88090b5da2d592b1fdf
Building wheel for parsimonious (setup.py) ... [?25l[?25hdone
Created wheel for parsimonious: filename=parsimonious-0.8.1-cp36-none-any.whl size=42710 sha256=d41b66f3e4d45edd9cdfc8f645efc701c138ebab89f3aca8cad93d960c43d4da
Stored in directory: /root/.cache/pip/wheels/b7/8d/e7/a0e74217da5caeb3c1c7689639b6d28ddbf9985b840bc96a9a
Building wheel for word2number (setup.py) ... [?25l[?25hdone
Created wheel for word2number: filename=word2number-1.1-cp36-none-any.whl size=5587 sha256=4b4cd6b6a1a525e41f39b2a5a13723054e53ecc7ecc4a4c3a2c7c5ba01348df9
Stored in directory: /root/.cache/pip/wheels/46/2f/53/5f5c1d275492f2fce1cdab9a9bb12d49286dead829a4078e0e
Building wheel for ftfy (setup.py) ... [?25l[?25hdone
Created wheel for ftfy: filename=ftfy-5.8-cp36-none-any.whl size=45613 sha256=6c6912e112fb23dd2ee429df65d141438afb8efa8a7411da337a7cc2917e4c76
Stored in directory: /root/.cache/pip/wheels/ba/c0/ef/f28c4da5ac84a4e06ac256ca9182fc34fa57fefffdbc68425b
Building wheel for jsonnet (setup.py) ... [?25l[?25hdone
Created wheel for jsonnet: filename=jsonnet-0.17.0-cp36-cp36m-linux_x86_64.whl size=3387874 sha256=064a18bdc731fa9bd30abb51ee596d0ab026f5d0f71164c57a6990d44dcd6f93
Stored in directory: /root/.cache/pip/wheels/26/7a/37/7dbcc30a6b4efd17b91ad1f0128b7bbf84813bd4e1cfb8c1e3
Building wheel for overrides (setup.py) ... [?25l[?25hdone
Created wheel for overrides: filename=overrides-3.1.0-cp36-none-any.whl size=10175 sha256=a0b8490b777bc367321872a771a2e2b4ed1f3ef743ad926ea91c86e84148bc42
Stored in directory: /root/.cache/pip/wheels/5c/24/13/6ef8600e6f147c95e595f1289a86a3cc82ed65df57582c65a9
Building wheel for sacremoses (setup.py) ... [?25l[?25hdone
Created wheel for sacremoses: filename=sacremoses-0.0.43-cp36-none-any.whl size=893261 sha256=b1b1692dd58ef7d484a32a272ed9702ce773149fc8e4cbfb1f6d8b7071f0ace5
Stored in directory: /root/.cache/pip/wheels/29/3c/fd/7ce5c3f0666dab31a50123635e6fb5e19ceb42ce38d4e58f45
Successfully built ufal.udpipe configargparse parsimonious word2number ftfy jsonnet overrides sacremoses
[31mERROR: torchtext 0.8.1 has requirement torch==1.7.1, but you'll have torch 1.7.0+cu101 which is incompatible.[0m
[31mERROR: en-core-web-sm 2.2.5 has requirement spacy>=2.2.2, but you'll have spacy 2.1.9 which is incompatible.[0m
[31mERROR: opennmt-py 1.2.0 has requirement torchtext==0.4.0, but you'll have torchtext 0.8.1 which is incompatible.[0m
[31mERROR: botocore 1.19.35 has requirement urllib3<1.27,>=1.25.4; python_version != "3.4", but you'll have urllib3 1.24.3 which is incompatible.[0m
[31mERROR: responses 0.12.1 has requirement urllib3>=1.25.10, but you'll have urllib3 1.24.3 which is incompatible.[0m
Installing collected packages: torchtext, configargparse, waitress, pyonmttok, OpenNMT-py, ufal.udpipe, preshed, blis, plac, thinc, spacy, parsimonious, word2number, tensorboardX, ftfy, jsonpickle, numpydoc, sentencepiece, jmespath, botocore, s3transfer, boto3, pytorch-transformers, greenlet, zope.interface, zope.event, gevent, jsonnet, pytorch-pretrained-bert, conllu, flaky, flask-cors, overrides, unidecode, responses, allennlp, sacremoses, tokenizers, transformers, turku-neural-parser
Found existing installation: torchtext 0.3.1
Uninstalling torchtext-0.3.1:
Successfully uninstalled torchtext-0.3.1
Found existing installation: preshed 3.0.5
Uninstalling preshed-3.0.5:
Successfully uninstalled preshed-3.0.5
Found existing installation: blis 0.4.1
Uninstalling blis-0.4.1:
Successfully uninstalled blis-0.4.1
Found existing installation: plac 1.1.3
Uninstalling plac-1.1.3:
Successfully uninstalled plac-1.1.3
Found existing installation: thinc 7.4.0
Uninstalling thinc-7.4.0:
Successfully uninstalled thinc-7.4.0
Found existing installation: spacy 2.2.4
Uninstalling spacy-2.2.4:
Successfully uninstalled spacy-2.2.4
Successfully installed OpenNMT-py-1.2.0 allennlp-0.9.0 blis-0.2.4 boto3-1.16.35 botocore-1.19.35 configargparse-1.2.3 conllu-1.3.1 flaky-3.7.0 flask-cors-3.0.9 ftfy-5.8 gevent-20.9.0 greenlet-0.4.17 jmespath-0.10.0 jsonnet-0.17.0 jsonpickle-1.4.2 numpydoc-1.1.0 overrides-3.1.0 parsimonious-0.8.1 plac-0.9.6 preshed-2.0.1 pyonmttok-1.22.2 pytorch-pretrained-bert-0.6.2 pytorch-transformers-1.1.0 responses-0.12.1 s3transfer-0.3.3 sacremoses-0.0.43 sentencepiece-0.1.94 spacy-2.1.9 tensorboardX-2.1 thinc-7.0.8 tokenizers-0.7.0 torchtext-0.8.1 transformers-2.11.0 turku-neural-parser-0.3 ufal.udpipe-1.2.0.3 unidecode-1.1.1 waitress-1.4.4 word2number-1.1 zope.event-4.5.0 zope.interface-5.2.0
###Markdown
Prerequisites:* The models here are tested with torch 1.7* This notebook may break at some point (for example if the preinstalled Colab library versions change)* If that happens, try to install torch 1.7 explicitly
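A minimal illustrative way to do that on Colab (this command is not from the original notebook and the exact wheel/CUDA variant may need adjusting) would be to run something like `!pip3 install "torch>=1.7,<1.8"` in a code cell and then restart the runtime.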
###Code
!nvcc --version
!python -V
###Output
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
Python 3.6.9
###Markdown
Download and unpack the model* Available models are listed here: http://dl.turkunlp.org/turku-parser-models/* Download the model and unpack it`wget http://dl.turkunlp.org/turku-parser-models/models_fi_tdt_v2.7.tar.gz ; tar zxvf models_fi_tdt_v2.7.tar.gz`...and you are good to go!
###Code
!wget -nc http://dl.turkunlp.org/turku-parser-models/models_fi_tdt_v2.7.tar.gz
!tar zxvf models_fi_tdt_v2.7.tar.gz
###Output
--2020-12-14 10:22:59-- http://dl.turkunlp.org/turku-parser-models/models_fi_tdt_v2.7.tar.gz
Resolving dl.turkunlp.org (dl.turkunlp.org)... 195.148.30.23
Connecting to dl.turkunlp.org (dl.turkunlp.org)|195.148.30.23|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 590212039 (563M) [application/octet-stream]
Saving to: ‘models_fi_tdt_v2.7.tar.gz’
models_fi_tdt_v2.7. 100%[===================>] 562.87M 17.5MB/s in 34s
2020-12-14 10:23:34 (16.3 MB/s) - ‘models_fi_tdt_v2.7.tar.gz’ saved [590212039/590212039]
models_fi_tdt_v2.7/
models_fi_tdt_v2.7/pipelines.yaml
models_fi_tdt_v2.7/Tokenizer/
models_fi_tdt_v2.7/Tokenizer/tokenizer.udpipe
models_fi_tdt_v2.7/Lemmatizer/
models_fi_tdt_v2.7/Lemmatizer/big_lemma_cache.tsv
models_fi_tdt_v2.7/Lemmatizer/lemma_cache.tsv
models_fi_tdt_v2.7/Lemmatizer/lemmatizer.pt
models_fi_tdt_v2.7/Udify/
models_fi_tdt_v2.7/Udify/model.tar.gz
###Markdown
Running the parser* Every model can specify many processing pipelines* These are in `modeldir/pipelines.yaml`* `parse_plaintext` is the default* `parse_plaintext` read plain text, tokenize, split into sentences, tag, parse, lemmatize* `parse_sentlines` read text one sentence per line, tokenize, tag, parse, lemmatize* `parse_wslines` read whitespace-tokenized text one sentence per line, tag, parse, lemmatize* `parse_conllu` read conllu, wipe existing values from all columns, tag, parse, lemmatize* `tokenize` read plain text, tokenize, split into sentences* `parse_noisytext` meant for noisy plain-text input (e.g. web-crawled data), as parse_plaintext but truncates long sentences/tokens to avoid OOM issues
###Code
from tnparser.pipeline import read_pipelines, Pipeline
# print available pipelines for your model
available_pipelines=read_pipelines("models_fi_tdt_v2.7/pipelines.yaml") # insert your model name here (model-name/pipelines.yaml)
print(list(available_pipelines.keys()))
# select the pipeline fitting your input data and load the model
# this one will take long on first run because of loading the model
p=Pipeline(available_pipelines["parse_plaintext"])
parsed=p.parse("Minulla on ruskea koira! Se haukkuu ja juoksee. Voi että!") # insert your text here
print(parsed)
###Output
# newdoc
# newpar
# sent_id = 1
# text = Minulla on ruskea koira!
1 Minulla minä PRON _ Case=Ade|Number=Sing|Person=1|PronType=Prs 0 root _ _
2 on olla AUX _ Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin|Voice=Act 1 cop:own _ _
3 ruskea ruskea ADJ _ Case=Nom|Degree=Pos|Number=Sing 4 amod _ _
4 koira koira NOUN _ Case=Nom|Number=Sing 1 nsubj:cop _ _
5 ! ! PUNCT _ _ 1 punct _ _
# sent_id = 2
# text = Se haukkuu ja juoksee.
1 Se se PRON _ Case=Nom|Number=Sing|PronType=Dem 2 nsubj _ _
2 haukkuu haukkua VERB _ Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin|Voice=Act 0 root _ _
3 ja ja CCONJ _ _ 4 cc _ _
4 juoksee juosta VERB _ Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin|Voice=Act 2 conj _ _
5 . . PUNCT _ _ 2 punct _ _
# sent_id = 3
# text = Voi että!
1 Voi voi INTJ _ _ 2 discourse _ _
2 että että INTJ _ _ 0 root _ _
3 ! ! PUNCT _ _ 2 punct _ _
###Markdown
GPU mode* The pipeline runs in CPU mode by default* It needs to be told explicitly to run on the GPU* This is a bit tricky right now but not impossible* Note: if you now switch the Runtime to GPU, you need to re-run the pip install
###Code
#I do realize this ain't good! :)
import types
extra_args=types.SimpleNamespace()
extra_args.__dict__["udify_mod.device"]="0" #simulates someone giving a --device 0 parameter to Udify
extra_args.__dict__["lemmatizer_mod.device"]="0"
p=Pipeline(available_pipelines["parse_plaintext"],extra_args)
parsed=p.parse("Minulla on ruskea koira! Se haukkuu ja juoksee. Voi että!")
print("Parsed has this many lines:",len(parsed.split("\n")))
#Since we are on a GPU, we can try to push through quite a bit more of data
parsed=p.parse("Minulla on ruskea koira! Se haukkuu ja juoksee. Voi että! "*200) #takes forever on CPU, finishes in few seconds on GPU
print("Parsed has this many lines:",len(parsed.split("\n")))
###Output
Parsed has this many lines: 4403
###Markdown
Process the output* The output of the pipeline run is a CoNLL-U string* You can parse it in any number of ways* This is my preferred way:
###Code
ID,FORM,LEMMA,UPOS,XPOS,FEAT,HEAD,DEPREL,DEPS,MISC=range(10) #the 10 columns
def read_conll(inp,max_sent=0,drop_tokens=True,drop_nulls=True):
"""
inp: list of lines or an open file
max_sent: 0 for all, >0 to limit
drop_tokens: ignore multiword token lines
drop_nulls: ignore null nodes in enhanced dependencies
Yields lines of the parse and comments
"""
comments=[]
sent=[]
yielded=0
for line in inp:
line=line.rstrip("\n")
if line.startswith("#"):
comments.append(line)
elif not line:
if sent:
yield sent,comments
yielded+=1
if max_sent>0 and yielded==max_sent:
break
sent,comments=[],[]
else:
cols=line.split("\t")
if drop_tokens and "-" in cols[ID]:
continue
if drop_nulls and "." in cols[ID]:
continue
sent.append(cols)
else:
if sent:
yield sent,comments
for one_sent,comments in read_conll(parsed.split("\n"),5):
words=(word_line[FORM] for word_line in one_sent)
lemmas=(word_line[LEMMA] for word_line in one_sent)
print(" ".join(words))
print(" ".join(lemmas))
print()
# and that's really all there is to it :)
###Output
Minulla on ruskea koira !
minä olla ruskea koira !
Se haukkuu ja juoksee .
se haukkua ja juosta .
Voi että !
voi että !
Minulla on ruskea koira !
minä olla ruskea koira !
Se haukkuu ja juoksee .
se haukkua ja juosta .
|
lab/ML1_lab3.ipynb | ###Markdown
**Save this file as studentid1_studentid2_lab.ipynb**(Your student-id is the number shown on your student card.)E.g. if you work with 3 people, the notebook should be named:12301230_3434343_1238938934_lab1.ipynb.**This will be parsed by a regexp, so please double check your filename.**Before you turn this problem in, please make sure everything runs correctly. First, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\rightarrow$Run All).**Make sure you fill in any place that says `YOUR CODE HERE` or "YOUR ANSWER HERE", as well as your names and email adresses below.**
###Code
NAME = "Pascal Esser"
NAME2 = "Jana Leible"
NAME3 = "Tom de Bruijn"
EMAIL = "[email protected]"
EMAIL2 = "[email protected]"
EMAIL3 = "[email protected]"
###Output
_____no_output_____
###Markdown
--- Lab 3: Gaussian Processes and Support Vector Machines Machine Learning 1, September 2017Notes on implementation:* You should write your code and answers in this IPython Notebook: http://ipython.org/notebook.html. If you have problems, please contact your teaching assistant.* Please write your answers right below the questions.* Among the first lines of your notebook should be "%pylab inline". This imports all required modules, and your plots will appear inline.* Refer to last week's lab notes, i.e. http://docs.scipy.org/doc/, if you are unsure about what function to use. There are different correct ways to implement each problem!* use the provided test boxes to check if your answers are correct
###Code
%pylab inline
plt.rcParams["figure.figsize"] = [20,10]
import numpy as np
import matplotlib.pyplot as plt
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
Part 1: Gaussian Processes For part 1 we will refer to Bishop sections 6.4.2 and 6.4.3. You may also want to refer to Rasmussen's Gaussian Process text, which is available online at http://www.gaussianprocess.org/gpml/chapters/, and especially to the project found at https://www.automaticstatistician.com/index/ by Ghahramani for some intuition about GPs. To understand Gaussian processes, it is highly recommended to understand how marginal, partitioned Gaussian distributions can be converted into conditional Gaussian distributions. This is covered in Bishop 2.3 and summarized in Eqns 2.94-2.98.$\newcommand{\bt}{\mathbf{t}}$$\newcommand{\bx}{\mathbf{x}}$$\newcommand{\by}{\mathbf{y}}$$\newcommand{\bw}{\mathbf{w}}$$\newcommand{\ba}{\mathbf{a}}$ Periodic Data We will use the same data generating function that we used previously for regression.
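As a quick reminder of the partitioned-Gaussian result referenced above (written here in covariance form; see Bishop 2.3 for the derivation): if $\begin{pmatrix}\bx_a \\ \bx_b\end{pmatrix} \sim \mathcal{N}\left(\begin{pmatrix}\mu_a \\ \mu_b\end{pmatrix}, \begin{pmatrix}\Sigma_{aa} & \Sigma_{ab}\\ \Sigma_{ba} & \Sigma_{bb}\end{pmatrix}\right)$, then$$p(\bx_a) = \mathcal{N}(\bx_a \mid \mu_a, \Sigma_{aa}), \qquad p(\bx_a \mid \bx_b) = \mathcal{N}\big(\bx_a \mid \mu_a + \Sigma_{ab}\Sigma_{bb}^{-1}(\bx_b - \mu_b),\ \Sigma_{aa} - \Sigma_{ab}\Sigma_{bb}^{-1}\Sigma_{ba}\big).$$This identity is what turns the GP prior over function values into the predictive distribution used later in this part.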
###Code
def true_mean_function(x):
return np.cos(2 * pi * (x + 1))
def add_noise(y, sigma):
return y + sigma * np.random.randn(len(y))
def generate_t(x, sigma):
return add_noise(true_mean_function(x), sigma)
sigma = 0.2
beta = 1.0 / pow(sigma, 2)
N_test = 100
x_test = np.linspace(-1, 1, N_test)
mu_test = np.zeros(N_test)
y_test = true_mean_function(x_test)
t_test = add_noise(y_test, sigma)
plt.plot(x_test, y_test, 'b-', lw=2)
plt.plot(x_test, t_test, 'go')
plt.show()
###Output
_____no_output_____
###Markdown
1. Sampling from the Gaussian process prior (30 points)We will implement Gaussian process regression using the kernel function in Bishop Eqn. 6.63. 1.1 k_n_m( xn, xm, thetas ) (5 points)To start, implement a function `k_n_m(xn, xm, thetas)` that takes scalars $x_n$ and $x_m$ and a vector of $4$ thetas, and computes the kernel function of Bishop Eqn. 6.63 (10 points). NB: usually the kernel function will take $D$ by $1$ vectors, but since we are using a univariate problem, this makes things easier.
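For reference, in the univariate case this kernel reads$$k(x_n, x_m) = \theta_0 \exp\left(-\frac{\theta_1}{2}(x_n - x_m)^2\right) + \theta_2 + \theta_3\, x_n x_m,$$which is exactly what the implementation below computes.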
###Code
def k_n_m(xn, xm, thetas):
k = thetas[0] * np.exp(-thetas[1] * (xn - xm) ** 2 / 2) + thetas[2] + \
thetas[3] * xn * xm
return k
###Output
_____no_output_____
###Markdown
1.2 computeK( X1, X2, thetas ) (10 points)Eqn 6.60 is the marginal distribution of the mean output of $N$ data vectors: $p(\mathbf{y}) = \mathcal{N}(0, \mathbf{K})$. Notice that the expected mean function is $0$ at all locations, and that the covariance is an $N$ by $N$ kernel matrix $\mathbf{K}$. Write a function `computeK(x1, x2, thetas)` that computes the kernel matrix. Use k_n_m as part of an inner loop (of course, there are more efficient ways of computing the kernel function making better use of vectorization, but that is not necessary) (5 points).
###Code
def computeK(x1, x2, thetas):
K = np.zeros((len(x1), len(x2)))
for i in range(len(x1)):
for j in range(len(x2)):
K[i, j] = k_n_m(x1[i], x2[j], thetas)
return K
### Test your function
x1 = [0, 1, 2]
x2 = [1, 2, 3, 4]
thetas = [1, 2, 3, 4]
K = computeK(x1, x2, thetas)
assert K.shape == (len(x1), len(x2)), "the shape of K is incorrect"
###Output
_____no_output_____
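###Markdown
As an aside, regarding the vectorization remark above: a broadcasting-based version (a sketch, not required for the assignment) would avoid the explicit double loop and should give the same result for these 1-D inputs.
###Code
def computeK_vectorized(x1, x2, thetas):
    # Column vector (N1, 1) minus row vector (1, N2) broadcasts to an (N1, N2) grid
    a = np.asarray(x1, dtype=float)[:, None]
    b = np.asarray(x2, dtype=float)[None, :]
    return (thetas[0] * np.exp(-thetas[1] * (a - b) ** 2 / 2)
            + thetas[2] + thetas[3] * a * b)

# Should match the loop-based kernel matrix computed in the test above
assert np.allclose(computeK_vectorized(x1, x2, thetas), K)
###Output
_____no_output_____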
###Markdown
1.3 Plot function samples (15 points)Now sample mean functions at the x_test locations for the theta values in Bishop Figure 6.5, make a figure with a 2 by 3 subplot and make sure the title reflects the theta values (make sure everything is legible). In other words, sample $\by_i \sim \mathcal{N}(0, \mathbf{K}_{\theta})$. Make use of numpy.random.multivariate_normal(). On your plots include the expected value of $\by$ with a dashed line and fill_between 2 standard deviations of the uncertainty due to $\mathbf{K}$ (the diagonal of $\mathbf{K}$ is the variance of the model uncertainty) (15 points).
###Code
#numbers from Bishop page 308
thetas_inp = np.array(
[(1.0, 4.0, 0.0, 0.0), (9.0, 4.0, 0.0, 0.0), (1.0, 64.0, 0.0, 0.0),
(1.0, 0.25, 0.0, 0.0), (1.0, 4.0, 10.0, 0.0), (1.0, 4.0, 0.0, 5.0)])
fig = plt.figure()
# run over all thetas and create the prints at the same time
for i in range(thetas_inp.shape[0]):
K_theta_i = computeK(x_test, x_test, thetas_inp[i])
# sample 5 times (as it is done in Bishop)
for _ in range(5):
y = np.random.multivariate_normal(np.zeros(len(K_theta_i)), K_theta_i)
# what subplot to write on
plt.subplot(2, 3, i + 1)
plt.plot(x_test, y, linewidth=0.7)
# plot 0 line as reverence
plt.plot(x_test, [0] * len(x_test), '--', color='k', linewidth=1.)
# standard deviation
std = np.sqrt(np.diag(K_theta_i))
plt.fill_between(x_test, -2 * std, 2 * std, color='b', alpha=0.10)
# corresponding theta values as title
plt.title(str(thetas_inp[i]))
plt.show()
###Output
_____no_output_____
###Markdown
2. Predictive distribution (35 points)So far we have sampled mean functions from the prior. We can draw actual data $\bt$ two ways. The first way is generatively, by first sampling $\by | \mathbf{K}$, then sampling $\bt | \by, \beta$ (Eqns 6.60 followed by 6.59). The second way is to integrate over $\by$ (the mean draw) and directly sample $\bt | \mathbf{K}, \beta$ using Eqn 6.61. This is the generative process for $\bt$. Note that we have not specified a distribution over inputs $\bx$; this is because Gaussian processes are conditional models. Because of this we are free to generate locations $\bx$ when playing around with the GP; obviously a dataset will give us input-output pairs.Once we have data, we are interested in the predictive distribution (note: the prior is the predictive distribution when there is no data). Consider the joint distribution for $N+1$ targets, given by Eqn 6.64. Its covariance matrix is composed of block components $C_N$, $\mathbf{k}$, and $c$. The covariance matrix $C_N$ for $\bt_N$ is $C_N = \mathbf{K}_N + \beta^{-1}\mathbf{I}_N$. We have just made explicit the size $N$ of the matrix; $N$ is the number of training points. The kernel vector $\mathbf{k}$ is a $N$ by $1$ vector of kernel function evaluations between the training input data and the test input vector. The scalar $c$ is a kernel evaluation at the test input. 2.1 gp_predictive_distribution(...) (10 points)Write a function `gp_predictive_distribution(x_train, t_train, x_test, theta, beta, C=None)` that computes Eqns 6.66 and 6.67, except allow for an arbitrary number of test points (not just one) and now the kernel matrix is for training data. By having C as an optional parameter, we can avoid computing it more than once (for this problem it is unimportant, but for real problems this is an issue). The function should compute $\mathbf{C}$, $\mathbf{k}$, and return the mean, variance and $\mathbf{C}$. Do not forget: the computeK function computes $\mathbf{K}$, not $\mathbf{C}$! (10 points)
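For reference, for a single test input Eqns 6.66 and 6.67 read$$m(x_{N+1}) = \mathbf{k}^T \mathbf{C}_N^{-1} \bt, \qquad \sigma^2(x_{N+1}) = c - \mathbf{k}^T \mathbf{C}_N^{-1} \mathbf{k},$$and the implementation below simply evaluates these with $\mathbf{k}$ and $c$ replaced by their matrix-valued counterparts for multiple test points.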
###Code
# write the calcuation for C as a extra function,
# to reuse it in gp_log_likelihood
def computeC(x_train, beta, theta):
return computeK(x_train, x_train, theta) + np.identity(len(x_train)) / beta
def gp_predictive_distribution(x_train, t_train, x_test, theta, beta, C=None):
# if C is given no value in the input for the function,
# call computeC, to compute it
if C is None:
C = computeC(x_train, beta, theta)
# Bishop 6.66
k = computeK(x_train, x_test, theta)
C_inv = np.linalg.inv(C)
test_mean = k.T.dot(C_inv).dot(t_train)
# Bishop 6.67
c = 1.0 / beta + computeK(x_test, x_test, theta)
test_var = c - k.T.dot(C_inv).dot(k)
return test_mean, test_var, C
### Test your function
N = 2
train_x = np.linspace(-1, 1, N)
train_t = 2 * train_x
test_N = 3
test_x = np.linspace(-1, 1, test_N)
theta = [1, 2, 3, 4]
beta = 25
test_mean, test_var, C = gp_predictive_distribution(train_x, train_t, test_x,
theta, beta, C=None)
assert test_mean.shape == (test_N,), "the shape of mean is incorrect"
assert test_var.shape == (test_N, test_N), "the shape of var is incorrect"
assert C.shape == (N, N), "the shape of C is incorrect"
C_in = np.array([[0.804, -0.098168436], [-0.098168436, 0.804]])
_, _, C_out = gp_predictive_distribution(train_x, train_t, test_x, theta, beta,
C=C_in)
assert np.allclose(C_in, C_out), "C is not reused!"
###Output
_____no_output_____
###Markdown
2.2 gp_log_likelihood(...) (10 points)To learn the hyperparameters, we would need to compute the log-likelihood of the training data. Implicitly, this is conditioned on the value setting for $\mathbf{\theta}$. Write a function `gp_log_likelihood(x_train, t_train, theta, C=None, invC=None, beta=None)`, where C and invC can be stored and reused. It should return the log-likelihood, `C` and `invC` (10 points)
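For reference, the log marginal likelihood (Bishop Eqn. 6.69) is$$\ln p(\bt \mid \mathbf{\theta}) = -\frac{1}{2}\ln|\mathbf{C}_N| - \frac{1}{2}\bt^T\mathbf{C}_N^{-1}\bt - \frac{N}{2}\ln(2\pi),$$which is what the function below evaluates.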
###Code
def gp_log_likelihood(x_train, t_train, theta, beta, C=None, invC=None):
# to store and reuse C and invC, we test if they are given so we dont
# calculate them a second time. otherwise we calculate them and return
if C is None:
C = computeC(x_train, beta, theta)
if invC is None:
invC = np.linalg.inv(C)
# Bishop 6.69
lp = ((- np.log(np.linalg.det(C))
- t_train.T.dot(invC).dot(t_train)
- len(t_train) * np.log(2 * np.pi)) / 2)
return lp, C, invC
### Test your function
N = 2
train_x = np.linspace(-1, 1, N)
train_t = 2 * train_x
theta = [1, 2, 3, 4]
beta = 25
lp, C, invC = gp_log_likelihood(train_x, train_t, theta, beta, C=None,
invC=None)
assert lp < 0, "the log-likelihood should smaller than 0"
assert C.shape == (N, N), "the shape of var is incorrect"
assert invC.shape == (N, N), "the shape of C is incorrect"
C_in = np.array([[0.804, -0.098168436], [-0.098168436, 0.804]])
_, C_out, _ = gp_log_likelihood(train_x, train_t, theta, beta, C=C_in,
invC=None)
assert np.allclose(C_in, C_out), "C is not reused!"
invC_in = np.array([[1.26260453, 0.15416407], [0.15416407, 1.26260453]])
_, _, invC_out = gp_log_likelihood(train_x, train_t, theta, beta, C=None,
invC=invC_in)
assert np.allclose(invC_in, invC_out), "invC is not reused!"
###Output
_____no_output_____
###Markdown
2.3 Plotting (10 points)Repeat the 6 plots above, but this time conditioned on the training points. Use the periodic data generator to create 2 training points where x is sampled uniformly between $-1$ and $1$. For these plots, feel free to use the provided function "gp_plot". Make sure you put the parameters in the title and this time also the log-likelihood. Try to understand the two types of uncertainty! If you do not use `gp_plot(...)`, please add a fill between for the model and target noise. (10 points)
###Code
def gp_plot(x_test, y_test, mean_test, var_test, x_train, t_train, theta, beta):
# x_test:
# y_test: the true function at x_test
# mean_test: predictive mean at x_test
# var_test: predictive covariance at x_test
# t_train: the training values
# theta: the kernel parameters
# beta: the precision (known)
# the reason for the manipulation is to allow plots separating model and data stddevs.
std_total = np.sqrt(
np.diag(var_test)) # includes all uncertainty, model and target noise
std_model = np.sqrt(
std_total ** 2 - 1.0 / beta) # remove data noise to get model uncertainty in stddev
std_combo = std_model + np.sqrt(
1.0 / beta) # add stddev (note: not the same as full)
plt.plot(x_test, y_test, 'b', lw=3)
plt.plot(x_test, mean_test, 'k--', lw=2)
plt.fill_between(x_test, mean_test + 2 * std_combo,
mean_test - 2 * std_combo, color='k', alpha=0.25)
plt.fill_between(x_test, mean_test + 2 * std_model,
mean_test - 2 * std_model, color='r', alpha=0.25)
plt.plot(x_train, t_train, 'ro', ms=10)
def plotting_gp(n):
beta = 1.0 / pow(sigma, 2)
# create n datapoints
x_train = np.random.uniform(-1, 1, n)
y_train = true_mean_function(x_train)
t_train = add_noise(y_train, sigma)
x_test = np.linspace(-1, 1, N_test)
y_test = true_mean_function(x_test)
for i in range(thetas_inp.shape[0]):
plt.subplot(2, 3, i + 1)
test_mean, test_var, _ = gp_predictive_distribution(x_train, t_train,
x_test,
thetas_inp[i], beta)
gp_plot(x_test, y_test, test_mean, test_var, x_train, t_train,
thetas_inp[i], beta)
plt.title(str(thetas_inp[i]))
plt.show()
plotting_gp(2)
###Output
_____no_output_____
###Markdown
2.4 More plotting (5 points)Repeat the 6 plots above, but this time conditioned on a new set of 10 training points. (5 points)
###Code
plotting_gp(10)
###Output
_____no_output_____
###Markdown
Part 2: Support Vector Machines (45 points)As seen in Part 1: Gaussian Processes, one of the significant limitations of many such algorithms is that the kernel function $k(\bx_n , \bx_m)$ must be evaluated for all possible pairs $\bx_n$ and $\bx_m$ of training points, which can be computationally infeasible during training and can lead to excessive computation times when making predictions for new data points. In Part 2: Support Vector Machines, we shall look at kernel-based algorithms that have sparse solutions, so that predictions for new inputs depend only on the kernel function evaluated at a subset of the training data points. 2.1 Generating a linearly separable dataset (15 points)a) (5 points) First of all, we are going to create our own 2D toy dataset $X$. The dataset will consist of two i.i.d. subsets $X_1$ and $X_2$; each of the subsets will be sampled from a multivariate Gaussian distribution,\begin{align}X_1 \sim &\mathcal{N}(\mu_1, \Sigma_1)\\&\text{ and }\\X_2 \sim &\mathcal{N}(\mu_2, \Sigma_2).\end{align}In the following, $X_1$ will have $N_1=20$ samples and a mean $\mu_1=(1,1)$. $X_2$ will have $N_2=30$ samples and a mean $\mu_2=(3,3)$. Plot the two subsets in one figure and choose two colors to indicate which sample belongs to which subset. In addition, you should choose $\Sigma_1$ and $\Sigma_2$ in a way that makes the two subsets linearly separable. (Hint: what form does the covariance matrix of an i.i.d. dataset have?)
###Code
mean_1 = (1, 1)
variance_1 = np.matrix([[0.1, 0], [0, 0.1]])
X1 = np.random.multivariate_normal(mean_1, variance_1, (20,))
mean_2 = (3, 3)
variance_2 = np.matrix([[0.1, 0], [0, 0.1]])
X2 = np.random.multivariate_normal(mean_2, variance_2, (30,))
def plot_data(X1, X2):
plt.plot([point[0] for point in X1], [point[1] for point in X1], 'go')
plt.plot([point[0] for point in X2], [point[1] for point in X2], 'ro')
plot_data(X1, X2)
plt.show()
###Output
_____no_output_____
###Markdown
b) (10 points) In the next step we will combine the two datasets X_1, X_2 and generate a vector `t` containing the labels. Write a function `create_X_and_t(X1, X2)`; it should return the combined dataset X and the corresponding target vector t.
###Code
def create_X_and_t(X1, X2):
X = np.concatenate((X1, X2))
t = np.concatenate((np.repeat(1, len(X1)), np.repeat(-1, len(X2))))
return X, t
### Test your function
dim = 2
N1_test = 2
N2_test = 3
X1_test = np.arange(4).reshape((N1_test, dim))
X2_test = np.arange(6).reshape((N2_test, dim))
X_test, t_test = create_X_and_t(X1_test, X2_test)
assert X_test.shape == (N1_test + N2_test, dim), "the shape of X is incorrect"
assert t_test.shape == (N1_test + N2_test,), "the shape of t is incorrect"
###Output
_____no_output_____
###Markdown
2.2 Finding the support vectors (15 points)Finally we going to use a SVM to obtain the decision boundary for which the margin is maximized. We have to solve the optimization problem\begin{align}\arg \min_{\bw, b} \frac{1}{2} \lVert \bw \rVert^2,\end{align}subject to the constraints\begin{align}t_n(\bw^T \phi(\bx_n) + b) \geq 1, n = 1,...,N.\end{align}In order to solve this constrained optimization problem, we introduce Lagrange multipliers $a_n \geq 0$. We obtain the dualrepresentation of the maximum margin problem in which we maximize\begin{align}\sum_{n=1}^N a_n - \frac{1}{2}\sum_{n=1}^N\sum_{m=1}^N a_n a_m t_n t_m k(\bx_n, \bx_m),\end{align}with respect to a subject to the constraints\begin{align}a_n &\geq 0, n=1,...,N,\\\sum_{n=1}^N a_n t_n &= 0.\end{align}This takes the form of a quadratic programming problem in which we optimize a quadratic function of a subject to a set of inequality constraints. a) (5 points) In this example we will use a linear kernel $k(\bx, \bx') = \bx^T\bx'$. Write a function `computeK(X)` that computes the kernel matrix $K$ for the 2D dataset $X$.
###Code
def computeK(X):
K = X.dot(X.T)
return K
dim = 2
N_test = 3
X_test = np.arange(6).reshape((N_test, dim))
K_test = computeK(X_test)
assert K_test.shape == (N_test, N_test)
###Output
_____no_output_____
###Markdown
Next, we will rewrite the dual representation so that we can make use of computationally efficient vector-matrix multiplication. The objective becomes\begin{align}\min_{\ba} \frac{1}{2} \ba^T K' \ba - 1^T\ba,\end{align}subject to the constraints\begin{align}a_n &\geq 0, n=1,...,N,\\\bt^T\ba &= 0.\end{align}Where\begin{align}K'_{nm} = t_n t_m k(\bx_n, \bx_m),\end{align}and in the special case of a linear kernel function,\begin{align}K'_{nm} = t_n t_m k(\bx_n, \bx_m) = k(t_n \bx_n, t_m \bx_m).\end{align}To solve the quadratic programming problem we will use a python module called cvxopt. You first have to install the module in your virtual environment (you have to activate it first), using the following command:`conda install -c conda-forge cvxopt`The quadratic programming solver can be called as`cvxopt.solvers.qp(P, q[, G, h[, A, b[, solver[, initvals]]]])`This solves the following problem,\begin{align}\min_{\bx} \frac{1}{2} \bx^T P \bx + q^T\bx,\end{align}subject to the constraints,\begin{align}G\bx &\leq h,\\A\bx &= b.\end{align}All we need to do is to map our formulation to the cvxopt interface.b) (10 points) Write a function `compute_multipliers(X, t)` that solves the quadratic programming problem using the cvxopt module and returns the lagrangian multiplier for every sample in the dataset.
###Code
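# Quick numeric sanity check (illustrative) of the linear-kernel identity described above:
# K'_nm = t_n t_m k(x_n, x_m) = k(t_n x_n, t_m x_m). compute_multipliers below relies on
# exactly this, by calling computeK on the target-scaled points.
X_chk = np.random.randn(4, 2)
t_chk = np.array([1., -1., 1., -1.])
K_prime_direct = np.outer(t_chk, t_chk) * computeK(X_chk)
K_prime_scaled = computeK(t_chk[:, None] * X_chk)
print(np.allclose(K_prime_direct, K_prime_scaled))  # expected: True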
import cvxopt
def compute_multipliers(X, t):
K = np.matrix(np.float64(computeK(t[:, None] * X)))
t = np.array(np.double(t))
P = cvxopt.matrix(K)
q = cvxopt.matrix(-np.ones((X.shape[0], 1)))
G = cvxopt.matrix(-np.eye(X.shape[0]))
h = cvxopt.matrix(np.zeros(X.shape[0]))
A = cvxopt.matrix(t.reshape(1, -1))
b = cvxopt.matrix(np.zeros(1))
sol = cvxopt.solvers.qp(P, q, G, h, A, b)
a = np.array(sol['x'])
return a
### Test your function
dim = 2
N_test = 3
X_test = np.arange(6).reshape((N_test, dim))
t_test = np.array([-1., 1., 1.])
a_test = compute_multipliers(X_test, t_test)
assert a_test.shape == (N_test, 1)
###Output
pcost dcost gap pres dres
0: -7.2895e-01 -1.3626e+00 6e+00 2e+00 2e+00
1: -3.0230e-01 -6.8816e-01 8e-01 1e-01 1e-01
2: -2.3865e-01 -3.3686e-01 1e-01 1e-16 2e-15
3: -2.4973e-01 -2.5198e-01 2e-03 6e-17 6e-16
4: -2.5000e-01 -2.5002e-01 2e-05 1e-16 2e-16
5: -2.5000e-01 -2.5000e-01 2e-07 8e-17 6e-16
Optimal solution found.
###Markdown
2.3 Plot support vectors (5 points)Now that we have obtained the lagrangian multipliers $\ba$, we use them to find our support vectors. Repeat the plot from 2.1, this time use a third color to indicate which samples are the support vectors.
###Code
X_inp, t_inp = create_X_and_t(X1, X2)
a = compute_multipliers(X_inp, t_inp)
# For a support vector we need a > 0, but because the QP solver only returns
# approximately-zero multipliers, every value comes back slightly above zero.
# We therefore threshold: values > 1.0e-02 are treated as support vectors.
# Looking at the data, the "zero" multipliers are around 1e-10 - 1e-09 while the
# "non-zero" ones are around 1e-01, so this threshold separates them cleanly.
sv_bool = a > 1.0e-02
def plot_sv(X_inp, sv_bool):
# plot the support vectors
for i in range(len(sv_bool)):
if sv_bool[i]:
print(X_inp[i])
plt.plot(X_inp[i][0], X_inp[i][1], 'o', color='b', mfc='none',
markersize=15)
plot_data(X1, X2)
plot_sv(X_inp, sv_bool)
plt.show()
###Output
[ 1.64545297 1.1399067 ]
[ 2.57290086 2.86058507]
[ 3.21108195 2.30360749]
###Markdown
2.4 Plot the decision boundary (10 Points)The decision boundary is fully specified by a (usually very small) subset of training samples, the support vectors. Make use of\begin{align}\bw &= \sum_{n=1}^N a_n t_n \mathbf{\phi}(\bx_n)\\b &= \frac{1}{N_S}\sum_{n \in S} (t_n - \sum_{m \in S} a_m t_m k(\bx_n, \bx_m)),\end{align}where $S$ denotes the set of indices of the support vectors, to calculate the slope and intercept of the decision boundary. Generate a last plot that contains the two subsets, support vectors and decision boundary.
###Code
def plot_decision_boundary(X_inp):
# calculate weights
w = np.sum(a * t_inp[:, None] * X_inp, axis=0)
b = t_inp[sv_bool.reshape(-1)] - X_inp[sv_bool.reshape(-1)].dot(w)
# calculate bias
bias = b[0]
# normalize
norm = np.linalg.norm(w)
w = w / norm
bias = bias / norm
# calculate slope and intercept
slope = -w[0] / w[1]
intercept = -bias / w[1]
    # create x values for the plot between the minimum and maximum of the
    # generated data in the x direction
x = np.linspace(np.min(X_inp[:, 0]), np.max(X_inp[:, 0]), 100)
# calculate the y valued for the decision boundary as:
y = x * slope + intercept
    # clip the decision boundary to values between the
    # min and max of the dataset in the y direction
x_,y_ = [],[]
for i in range(100):
if all([y[i] > np.min(X_inp[:, 1]) , y[i] < np.max(X_inp[:, 1])]):
y_.append(y[i])
x_.append(x[i])
plt.plot(x_, y_, 'k-')
plot_decision_boundary(X_inp)
plot_data(X1, X2)
plot_sv(X_inp, sv_bool)
plt.show()
###Output
[ 1.64545297 1.1399067 ]
[ 2.57290086 2.86058507]
[ 3.21108195 2.30360749]
notebooks/stations_metadata.ipynb | ###Markdown
**Brian Blaylock** **August 25, 2020** 🗼 Example: `metadata`Get metadata for stations or a set of stations.Refer to the [Station Selectors in the API docs](https://developers.synopticdata.com/mesonet/v2/station-selectors/) for many ways to filter the stations you want.
###Code
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.feature as feature
import pandas as pd
from datetime import datetime
import sys
# Need to tell Python where to find `synoptic`.
# This says to look back one directory (relative to this notebook).
sys.path.append('../')
from synoptic.services import stations_metadata
# Basic example
stations_metadata(stid='KSLC', verbose='HIDE')
###Output
🚚💨 Speedy Delivery from Synoptic API [metadata]: https://api.synopticdata.com/v2/stations/metadata?stid=KSLC&token=🙈HIDDEN
###Markdown
Plot stations by network.You can see a list of network providers on the [API documents](https://developers.synopticdata.com/about/station-providers/), or use the networks API service. This demonstrates how to get the metadata for the Pacific Gas and Electric stations and plot them on a map.
###Code
a = stations_metadata(network=229, verbose='HIDE')
a
plt.figure(figsize=[10,8])
ax = plt.subplot(projection=ccrs.PlateCarree())
ax.scatter(a.loc['longitude'], a.loc['latitude'], marker='.')
ax.set_title('PG&E Station Locations', loc='left', fontweight='bold')
ax.set_title(f'Total: {len(a.columns)}', loc='right')
ax.add_feature(feature.STATES.with_scale('10m'))
plt.figure(figsize=[10,8])
ax = plt.subplot(projection=ccrs.PlateCarree())
# Plot each station artistically, to illustrate station density
ax.scatter(a.loc['longitude'], a.loc['latitude'], s=200, color='0.1', lw=2)
ax.scatter(a.loc['longitude'], a.loc['latitude'], s=200, color='1.0', lw=0)
ax.scatter(a.loc['longitude'], a.loc['latitude'], s=180, color='C1', lw=0, alpha=.1)
ax.set_title('PG&E Station Locations', loc='left', fontweight='bold')
ax.set_title(f'Total: {len(a.columns)}', loc='right')
ax.add_feature(feature.STATES.with_scale('10m'))
###Output
_____no_output_____
###Markdown
Show a histogram of when the station's period of record starts
###Code
from matplotlib.dates import DateFormatter
a.loc['RECORD_START'].hist(zorder=5, bins=50, edgecolor='k', linewidth=.5)
plt.gca().tick_params(axis='x', labelrotation=45)
date_form = DateFormatter("%Y %b")
plt.gca().xaxis.set_major_formatter(date_form)
plt.title('Number of new PG&E weather stations installed')
###Output
_____no_output_____ |
GAN Model/eda-to-prediction-dietanic.ipynb | ###Markdown
EDA To Prediction (DieTanic) *Sometimes life has a cruel sense of humor, giving you the thing you always wanted at the worst time possible.* -Lisa Kleypas The sinking of the Titanic is one of the most infamous shipwrecks in history. On April 15, 1912, during her maiden voyage, the Titanic sank after colliding with an iceberg, killing 1502 out of 2224 passengers and crew. That's why the name **DieTanic**. This is an unforgettable disaster that no one in the world can forget. It took about $7.5 million to build the Titanic, and it sank after the collision. The Titanic dataset is a very good dataset for beginners to start a journey in data science and to participate in competitions on Kaggle. The objective of this notebook is to give an **idea of the workflow in any predictive modeling problem**: how we check features, how we add new features, and some machine learning concepts. I have tried to keep the notebook as basic as possible so that even newbies can understand every phase of it. If you like the notebook and think that it helped you, **PLEASE UPVOTE**. It will keep me motivated. Contents of the Notebook: Part1: Exploratory Data Analysis(EDA):1)Analysis of the features.2)Finding any relations or trends considering multiple features. Part2: Feature Engineering and Data Cleaning:1)Adding a few new features.2)Removing redundant features.3)Converting features into a suitable form for modeling. Part3: Predictive Modeling1)Running Basic Algorithms.2)Cross Validation.3)Ensembling.4)Important Features Extraction. Part1: Exploratory Data Analysis(EDA)
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('fivethirtyeight')
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
data=pd.read_csv('../input/train.csv')
data.head()
data.isnull().sum() #checking for total null values
###Output
_____no_output_____
###Markdown
The **Age, Cabin and Embarked** have null values. I will try to fix them. How many Survived??
###Code
f,ax=plt.subplots(1,2,figsize=(18,8))
data['Survived'].value_counts().plot.pie(explode=[0,0.1],autopct='%1.1f%%',ax=ax[0],shadow=True)
ax[0].set_title('Survived')
ax[0].set_ylabel('')
sns.countplot('Survived',data=data,ax=ax[1])
ax[1].set_title('Survived')
plt.show()
###Output
_____no_output_____
###Markdown
It is evident that not many passengers survived the accident. Out of 891 passengers in the training set, only around 350 survived, i.e. only **38.4%** of the total training set survived the crash. We need to dig deeper to get better insights from the data and see which categories of passengers survived and which didn't. We will try to check the survival rate using the different features of the dataset, some of them being Sex, Port Of Embarkation, Age, etc. First let us understand the different types of features. Types Of Features Categorical Features:A categorical variable is one that has two or more categories, and each value in that feature can be categorised by them. For example, gender is a categorical variable having two categories (male and female). We cannot sort or give any ordering to such variables. They are also known as **Nominal Variables**.**Categorical Features in the dataset: Sex, Embarked.** Ordinal Features:An ordinal variable is similar to a categorical variable, but the difference between them is that we can have a relative ordering or sorting between the values. For example, if we have a feature like **Height** with values **Tall, Medium, Short**, then Height is an ordinal variable, because we can have a relative sort on the variable.**Ordinal Features in the dataset: Pclass** Continuous Feature:A feature is said to be continuous if it can take values between any two points or between the minimum and maximum values in the feature's column.**Continuous Features in the dataset: Age** Analysing The Features Sex--> Categorical Feature
###Code
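# Illustrative check of the feature types described above: a column's dtype together with
# its number of distinct values is a rough guide (few distinct values suggests a
# categorical/ordinal feature, many distinct numeric values a continuous one).
for col in ['Sex', 'Embarked', 'Pclass', 'Age']:
    print(col, data[col].dtype, data[col].nunique(), 'distinct values')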
data.groupby(['Sex','Survived'])['Survived'].count()
f,ax=plt.subplots(1,2,figsize=(18,8))
data[['Sex','Survived']].groupby(['Sex']).mean().plot.bar(ax=ax[0])
ax[0].set_title('Survived vs Sex')
sns.countplot('Sex',hue='Survived',data=data,ax=ax[1])
ax[1].set_title('Sex:Survived vs Dead')
plt.show()
###Output
_____no_output_____
###Markdown
This looks interesting. The number of men on the ship is a lot higher than the number of women. Still, the number of women saved is almost twice the number of men saved. The survival rate for a **woman on the ship is around 75%, while that for a man is around 18-19%.** This looks to be a **very important** feature for modeling. But is it the best?? Lets check other features. Pclass --> Ordinal Feature
###Code
pd.crosstab(data.Pclass,data.Survived,margins=True).style.background_gradient(cmap='summer_r')
f,ax=plt.subplots(1,2,figsize=(18,8))
data['Pclass'].value_counts().plot.bar(color=['#CD7F32','#FFDF00','#D3D3D3'],ax=ax[0])
ax[0].set_title('Number Of Passengers By Pclass')
ax[0].set_ylabel('Count')
sns.countplot('Pclass',hue='Survived',data=data,ax=ax[1])
ax[1].set_title('Pclass:Survived vs Dead')
plt.show()
###Output
_____no_output_____
###Markdown
People say **Money Can't Buy Everything**. But we can clearly see that passengers of Pclass 1 were given a very high priority during rescue. Even though the number of passengers in Pclass 3 was a lot higher, the number of survivors among them is very low, somewhere around **25%**. For Pclass 1 the %survived is around **63%**, while for Pclass 2 it is around **48%**. So money and status matter. Such a materialistic world. Lets dive in a little bit more and check for other interesting observations. Lets check the survival rate with **Sex and Pclass** together.
###Code
pd.crosstab([data.Sex,data.Survived],data.Pclass,margins=True).style.background_gradient(cmap='summer_r')
sns.factorplot('Pclass','Survived',hue='Sex',data=data)
plt.show()
###Output
_____no_output_____
###Markdown
We use a **FactorPlot** in this case, because it makes the separation of categorical values easy. Looking at the **CrossTab** and the **FactorPlot**, we can easily infer that survival for **Women from Pclass1** is about **95-96%**, as only 3 out of 94 women from Pclass1 died. It is evident that irrespective of Pclass, women were given first priority during rescue. Even men from Pclass1 have a very low survival rate. Looks like Pclass is also an important feature. Lets analyse other features. Age--> Continuous Feature
###Code
print('Oldest Passenger was of:',data['Age'].max(),'Years')
print('Youngest Passenger was of:',data['Age'].min(),'Years')
print('Average Age on the ship:',data['Age'].mean(),'Years')
f,ax=plt.subplots(1,2,figsize=(18,8))
sns.violinplot("Pclass","Age", hue="Survived", data=data,split=True,ax=ax[0])
ax[0].set_title('Pclass and Age vs Survived')
ax[0].set_yticks(range(0,110,10))
sns.violinplot("Sex","Age", hue="Survived", data=data,split=True,ax=ax[1])
ax[1].set_title('Sex and Age vs Survived')
ax[1].set_yticks(range(0,110,10))
plt.show()
###Output
_____no_output_____
###Markdown
Observations:1)The number of children increases with Pclass, and the survival rate for passengers below age 10 (i.e. children) looks to be good irrespective of the Pclass.2)Survival chances for passengers aged 20-50 from Pclass1 are high and are even better for women.3)For males, the survival chances decrease with an increase in age. As we saw earlier, the Age feature has **177** null values. To replace these NaN values, we could assign them the mean age of the dataset. But the problem is that there were many people with many different ages; we just can't assign a 4-year-old kid the mean age of 29 years. Is there any way to find out which age-band a passenger lies in??**Bingo!!!!**, we can check the **Name** feature. Looking at the feature, we can see that the names have a salutation like Mr or Mrs. Thus we can assign the mean ages of the Mr and Mrs groups to the respective missing values.**''What's In A Name??''**---> **Feature** :p
###Code
data['Initial']=data.Name.str.extract('([A-Za-z]+)\.') #lets extract the Salutations
###Output
_____no_output_____
###Markdown
Okay so here we are using the Regex: **[A-Za-z]+)\.**. So what it does is, it looks for strings which lie between **A-Z or a-z** and followed by a **.(dot)**. So we successfully extract the Initials from the Name.
###Code
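# Illustrative check of the regex described above: applied to a name in the dataset's
# "Surname, Salutation. Given names" format, it captures just the salutation.
print(pd.Series(['Braund, Mr. Owen Harris']).str.extract('([A-Za-z]+)\.'))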
pd.crosstab(data.Initial,data.Sex).T.style.background_gradient(cmap='summer_r') #Checking the Initials with the Sex
###Output
_____no_output_____
###Markdown
Okay so there are some misspelled Initials like Mlle or Mme that stand for Miss. I will replace them with Miss and same thing for other values.
###Code
data['Initial'].replace(['Mlle','Mme','Ms','Dr','Major','Lady','Countess','Jonkheer','Col','Rev','Capt','Sir','Don'],['Miss','Miss','Miss','Mr','Mr','Mrs','Mrs','Other','Other','Other','Mr','Mr','Mr'],inplace=True)
data.groupby('Initial')['Age'].mean() #lets check the average age by Initials
###Output
_____no_output_____
###Markdown
Filling NaN Ages
###Code
## Assigning the NaN Values with the Ceil values of the mean ages
data.loc[(data.Age.isnull())&(data.Initial=='Mr'),'Age']=33
data.loc[(data.Age.isnull())&(data.Initial=='Mrs'),'Age']=36
data.loc[(data.Age.isnull())&(data.Initial=='Master'),'Age']=5
data.loc[(data.Age.isnull())&(data.Initial=='Miss'),'Age']=22
data.loc[(data.Age.isnull())&(data.Initial=='Other'),'Age']=46
data.Age.isnull().any() #So no null values left finally
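# Note: a nearly equivalent one-liner (hedged alternative, using exact group means rather
# than the rounded values assigned above) would be:
# data['Age'] = data['Age'].fillna(data.groupby('Initial')['Age'].transform('mean'))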
f,ax=plt.subplots(1,2,figsize=(20,10))
data[data['Survived']==0].Age.plot.hist(ax=ax[0],bins=20,edgecolor='black',color='red')
ax[0].set_title('Survived= 0')
x1=list(range(0,85,5))
ax[0].set_xticks(x1)
data[data['Survived']==1].Age.plot.hist(ax=ax[1],color='green',bins=20,edgecolor='black')
ax[1].set_title('Survived= 1')
x2=list(range(0,85,5))
ax[1].set_xticks(x2)
plt.show()
###Output
_____no_output_____
###Markdown
Observations:1)The Toddlers(age<5) were saved in large numbers(The Women and Child First Policy).2)The oldest Passenger was saved(80 years).3)Maximum number of deaths were in the age group of 30-40.
###Code
sns.factorplot('Pclass','Survived',col='Initial',data=data)
plt.show()
###Output
_____no_output_____
###Markdown
The Women and Child first policy thus holds true irrespective of the class. Embarked--> Categorical Value
###Code
pd.crosstab([data.Embarked,data.Pclass],[data.Sex,data.Survived],margins=True).style.background_gradient(cmap='summer_r')
###Output
_____no_output_____
###Markdown
Chances for Survival by Port Of Embarkation
###Code
sns.factorplot('Embarked','Survived',data=data)
fig=plt.gcf()
fig.set_size_inches(5,3)
plt.show()
###Output
_____no_output_____
###Markdown
The chances for survival for Port C is highest around 0.55 while it is lowest for S.
###Code
f,ax=plt.subplots(2,2,figsize=(20,15))
sns.countplot('Embarked',data=data,ax=ax[0,0])
ax[0,0].set_title('No. Of Passengers Boarded')
sns.countplot('Embarked',hue='Sex',data=data,ax=ax[0,1])
ax[0,1].set_title('Male-Female Split for Embarked')
sns.countplot('Embarked',hue='Survived',data=data,ax=ax[1,0])
ax[1,0].set_title('Embarked vs Survived')
sns.countplot('Embarked',hue='Pclass',data=data,ax=ax[1,1])
ax[1,1].set_title('Embarked vs Pclass')
plt.subplots_adjust(wspace=0.2,hspace=0.5)
plt.show()
###Output
_____no_output_____
###Markdown
Observations:1)The maximum number of passengers boarded from S, the majority of them being from Pclass3.2)The passengers from C look to be lucky, as a good proportion of them survived. The reason for this may be the rescue of all the Pclass1 and Pclass2 passengers.3)Port S looks to be the port from where the majority of the rich people boarded. Still the chances for survival are low here, because many passengers from Pclass3, around **81%**, didn't survive. 4)For Port Q, almost 95% of the passengers were from Pclass3.
###Code
sns.factorplot('Pclass','Survived',hue='Sex',col='Embarked',data=data)
plt.show()
###Output
_____no_output_____
###Markdown
Observations:1)The survival chances are almost 1 for women from Pclass1 and Pclass2, irrespective of the port.2)Port S looks to be very unlucky for Pclass3 passengers, as the survival rate for both men and women is very low.**(Money Matters)**3)Port Q looks to be the unluckiest for men, as almost all of them were from Pclass 3.
###Code
data['Embarked'].fillna('S',inplace=True)
data.Embarked.isnull().any()# Finally No NaN values
###Output
_____no_output_____
###Markdown
SibSp-->Discrete FeatureThis feature represents whether a person is alone or with family members on board.Sibling = brother, sister, stepbrother, stepsister. Spouse = husband, wife.
###Code
pd.crosstab([data.SibSp],data.Survived).style.background_gradient(cmap='summer_r')
f,ax=plt.subplots(1,2,figsize=(20,8))
sns.barplot('SibSp','Survived',data=data,ax=ax[0])
ax[0].set_title('SibSp vs Survived')
sns.factorplot('SibSp','Survived',data=data,ax=ax[1])
ax[1].set_title('SibSp vs Survived')
plt.close(2)
plt.show()
pd.crosstab(data.SibSp,data.Pclass).style.background_gradient(cmap='summer_r')
###Output
_____no_output_____
###Markdown
Observations:The barplot and factorplot show that if a passenger is alone onboard with no siblings, he has a 34.5% survival rate. The rate roughly decreases as the number of siblings increases. This makes sense: if I have a family on board, I will try to save them instead of saving myself first. Surprisingly, the survival rate for families with 5-8 members is **0%**. Could the reason be Pclass?? The reason is indeed **Pclass**. The crosstab shows that persons with SibSp>3 were all in Pclass3, and it is evident that all the large families (>3) in Pclass3 died.
###Code
pd.crosstab(data.Parch,data.Pclass).style.background_gradient(cmap='summer_r')
###Output
_____no_output_____
###Markdown
The crosstab again shows that larger families were in Pclass3.
###Code
f,ax=plt.subplots(1,2,figsize=(20,8))
sns.barplot('Parch','Survived',data=data,ax=ax[0])
ax[0].set_title('Parch vs Survived')
sns.factorplot('Parch','Survived',data=data,ax=ax[1])
ax[1].set_title('Parch vs Survived')
plt.close(2)
plt.show()
###Output
_____no_output_____
###Markdown
Observations:Here too the results are quite similar. Passengers with their parents onboard have greater chance of survival. It however reduces as the number goes up.The chances of survival is good for somebody who has 1-3 parents on the ship. Being alone also proves to be fatal and the chances for survival decreases when somebody has >4 parents on the ship. Fare--> Continous Feature
###Code
print('Highest Fare was:',data['Fare'].max())
print('Lowest Fare was:',data['Fare'].min())
print('Average Fare was:',data['Fare'].mean())
###Output
_____no_output_____
###Markdown
The lowest fare is **0.0**. Wow!! A free luxurious ride.
###Code
f,ax=plt.subplots(1,3,figsize=(20,8))
sns.distplot(data[data['Pclass']==1].Fare,ax=ax[0])
ax[0].set_title('Fares in Pclass 1')
sns.distplot(data[data['Pclass']==2].Fare,ax=ax[1])
ax[1].set_title('Fares in Pclass 2')
sns.distplot(data[data['Pclass']==3].Fare,ax=ax[2])
ax[2].set_title('Fares in Pclass 3')
plt.show()
###Output
_____no_output_____
###Markdown
There looks to be a wide spread in the fares of passengers in Pclass1, and this spread narrows as the class standard goes down. As this is also a continuous feature, we can convert it into discrete values by binning. Observations in a Nutshell for all features:**Sex:** The chance of survival for women is high compared to men.**Pclass:**There is a visible trend that being a **1st class passenger** gives you better chances of survival. The survival rate for **Pclass3 is very low**. For **women**, the chance of survival from **Pclass1** is almost 1 and is high too for those from **Pclass2**. **Money Wins!!!**. **Age:** Children less than 5-10 years old have a high chance of survival. Passengers in the age group 15 to 35 died a lot.**Embarked:** This is a very interesting feature. **The chances of survival at C look to be better, even though the majority of Pclass1 passengers got on at S.** Passengers at Q were all from **Pclass3**. **Parch+SibSp:** Having 1-2 siblings or a spouse on board, or 1-3 parents, shows a greater chance of survival than being alone or having a large family travelling with you. Correlation Between The Features
###Code
sns.heatmap(data.corr(),annot=True,cmap='RdYlGn',linewidths=0.2) #data.corr()-->correlation matrix
fig=plt.gcf()
fig.set_size_inches(10,8)
plt.show()
###Output
_____no_output_____
###Markdown
Interpreting The HeatmapThe first thing to note is that only the numeric features are compared, since we obviously cannot correlate strings. Before interpreting the plot, let us see what exactly correlation is.**POSITIVE CORRELATION:** If an **increase in feature A leads to an increase in feature B, then they are positively correlated**. A value of **1 means perfect positive correlation**.**NEGATIVE CORRELATION:** If an **increase in feature A leads to a decrease in feature B, then they are negatively correlated**. A value of **-1 means perfect negative correlation**.Now lets say that two features are highly or perfectly correlated, so an increase in one leads to an increase in the other. This means that both features contain highly similar information and there is very little or no extra variance in information. This is known as **multicollinearity**, since both of them contain almost the same information. Do you think we should use both of them, when **one of them is redundant**? While making or training models, we should try to eliminate redundant features, as doing so reduces training time, among other advantages.Now from the above heatmap, we can see that the features are not much correlated. The highest correlation is between **SibSp and Parch, i.e. 0.41**. So we can carry on with all features. Part2: Feature Engineering and Data CleaningNow what is Feature Engineering?Whenever we are given a dataset with features, it is not necessary that all the features will be important. There may be many redundant features which should be eliminated. Also, we can derive or add new features by observing or extracting information from other features. An example would be getting the Initial feature from the Name feature. Lets see if we can get any new features and eliminate a few. We will also transform the existing relevant features into a form suitable for predictive modeling. Age_band Problem With Age Feature:As I have mentioned earlier, **Age is a continuous feature**, and there is a problem with continuous variables in machine learning models.**Eg:**If I say to group or arrange sports persons by **Sex**, we can easily segregate them into male and female.Now if I say to group them by their **Age**, then how would you do it? If there are 30 persons, there may be 30 age values. Now this is problematic.We need to convert these **continuous values into categorical values** by either binning or normalisation. I will be using binning, i.e. grouping a range of ages into a single bin and assigning them a single value.Okay so the maximum age of a passenger was 80. So lets divide the range from 0-80 into 5 bins. So 80/5=16, i.e. bins of size 16.
###Code
data['Age_band']=0
data.loc[data['Age']<=16,'Age_band']=0
data.loc[(data['Age']>16)&(data['Age']<=32),'Age_band']=1
data.loc[(data['Age']>32)&(data['Age']<=48),'Age_band']=2
data.loc[(data['Age']>48)&(data['Age']<=64),'Age_band']=3
data.loc[data['Age']>64,'Age_band']=4
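# Hedged alternative: the same banding can be produced in one step with pd.cut, e.g.
# data['Age_band'] = pd.cut(data['Age'], bins=[0, 16, 32, 48, 64, np.inf],
#                           labels=False, include_lowest=True)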
data.head(2)
data['Age_band'].value_counts().to_frame().style.background_gradient(cmap='summer')#checking the number of passenegers in each band
sns.factorplot('Age_band','Survived',data=data,col='Pclass')
plt.show()
###Output
_____no_output_____
###Markdown
True that..the survival rate decreases as the age increases irrespective of the Pclass. Family_Size and AloneAt this point, we can create a new feature called "Family_size" and "Alone" and analyse it. This feature is the summation of Parch and SibSp. It gives us a combined data so that we can check if survival rate have anything to do with family size of the passengers. Alone will denote whether a passenger is alone or not.
###Code
data['Family_Size']=0
data['Family_Size']=data['Parch']+data['SibSp']#family size
data['Alone']=0
data.loc[data.Family_Size==0,'Alone']=1#Alone
f,ax=plt.subplots(1,2,figsize=(18,6))
sns.factorplot('Family_Size','Survived',data=data,ax=ax[0])
ax[0].set_title('Family_Size vs Survived')
sns.factorplot('Alone','Survived',data=data,ax=ax[1])
ax[1].set_title('Alone vs Survived')
plt.close(2)
plt.close(3)
plt.show()
###Output
_____no_output_____
###Markdown
**Family_Size=0 means that the passenger is alone.** Clearly, if you are alone or family_size=0, then the chances for survival are very low. For family size > 4, the chances decrease too. This also looks to be an important feature for the model. Lets examine this further.
###Code
sns.factorplot('Alone','Survived',data=data,hue='Sex',col='Pclass')
plt.show()
###Output
_____no_output_____
###Markdown
It is visible that being alone is harmful irrespective of Sex or Pclass except for Pclass3 where the chances of females who are alone is high than those with family. Fare_RangeSince fare is also a continous feature, we need to convert it into ordinal value. For this we will use **pandas.qcut**.So what **qcut** does is it splits or arranges the values according the number of bins we have passed. So if we pass for 5 bins, it will arrange the values equally spaced into 5 seperate bins or value ranges.
###Code
data['Fare_Range']=pd.qcut(data['Fare'],4)
data.groupby(['Fare_Range'])['Survived'].mean().to_frame().style.background_gradient(cmap='summer_r')
###Output
_____no_output_____
###Markdown
As discussed above, we can clearly see that as the **fare_range increases, the chances of survival increases.**Now we cannot pass the Fare_Range values as it is. We should convert it into singleton values same as we did in **Age_Band**
###Code
data['Fare_cat']=0
data.loc[data['Fare']<=7.91,'Fare_cat']=0
data.loc[(data['Fare']>7.91)&(data['Fare']<=14.454),'Fare_cat']=1
data.loc[(data['Fare']>14.454)&(data['Fare']<=31),'Fare_cat']=2
data.loc[(data['Fare']>31)&(data['Fare']<=513),'Fare_cat']=3
sns.factorplot('Fare_cat','Survived',data=data,hue='Sex')
plt.show()
###Output
_____no_output_____
###Markdown
Clearly, as Fare_cat increases, the survival chances increase. This may become an important feature during modeling, along with Sex. Converting String Values into Numeric Since we cannot pass strings to a machine learning model, we need to convert features like Sex, Embarked, etc. into numeric values.
###Code
data['Sex'].replace(['male','female'],[0,1],inplace=True)
data['Embarked'].replace(['S','C','Q'],[0,1,2],inplace=True)
data['Initial'].replace(['Mr','Mrs','Miss','Master','Other'],[0,1,2,3,4],inplace=True)
###Output
_____no_output_____
###Markdown
Dropping UnNeeded Features**Name**--> We don't need name feature as it cannot be converted into any categorical value.**Age**--> We have the Age_band feature, so no need of this.**Ticket**--> It is any random string that cannot be categorised.**Fare**--> We have the Fare_cat feature, so unneeded**Cabin**--> A lot of NaN values and also many passengers have multiple cabins. So this is a useless feature.**Fare_Range**--> We have the fare_cat feature.**PassengerId**--> Cannot be categorised.
###Code
data.drop(['Name','Age','Ticket','Fare','Cabin','Fare_Range','PassengerId'],axis=1,inplace=True)
sns.heatmap(data.corr(),annot=True,cmap='RdYlGn',linewidths=0.2,annot_kws={'size':20})
fig=plt.gcf()
fig.set_size_inches(18,15)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
In the above correlation plot, we can see some positively related features, some of them being **SibSp and Family_Size** and **Parch and Family_Size**, and some negative ones like **Alone and Family_Size.** Part3: Predictive ModelingWe have gained some insights from the EDA part. But with that alone, we cannot accurately predict whether a passenger will survive or die. So now we will predict whether the passenger will survive or not using some great classification algorithms. Following are the algorithms I will use to build the models:1)Logistic Regression2)Support Vector Machines(Linear and radial)3)Random Forest4)K-Nearest Neighbours5)Naive Bayes6)Decision Tree
###Code
#importing all the required ML packages
from sklearn.linear_model import LogisticRegression #logistic regression
from sklearn import svm #support vector Machine
from sklearn.ensemble import RandomForestClassifier #Random Forest
from sklearn.neighbors import KNeighborsClassifier #KNN
from sklearn.naive_bayes import GaussianNB #Naive bayes
from sklearn.tree import DecisionTreeClassifier #Decision Tree
from sklearn.model_selection import train_test_split #training and testing data split
from sklearn import metrics #accuracy measure
from sklearn.metrics import confusion_matrix #for confusion matrix
train,test=train_test_split(data,test_size=0.3,random_state=0,stratify=data['Survived'])
train_X=train[train.columns[1:]]
train_Y=train[train.columns[:1]]
test_X=test[test.columns[1:]]
test_Y=test[test.columns[:1]]
X=data[data.columns[1:]]
Y=data['Survived']
###Output
_____no_output_____
###Markdown
Radial Support Vector Machines(rbf-SVM)
###Code
model=svm.SVC(kernel='rbf',C=1,gamma=0.1)
model.fit(train_X,train_Y)
prediction1=model.predict(test_X)
print('Accuracy for rbf SVM is ',metrics.accuracy_score(prediction1,test_Y))
###Output
_____no_output_____
###Markdown
Linear Support Vector Machine(linear-SVM)
###Code
model=svm.SVC(kernel='linear',C=0.1,gamma=0.1)
model.fit(train_X,train_Y)
prediction2=model.predict(test_X)
print('Accuracy for linear SVM is',metrics.accuracy_score(prediction2,test_Y))
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
model = LogisticRegression()
model.fit(train_X,train_Y)
prediction3=model.predict(test_X)
print('The accuracy of the Logistic Regression is',metrics.accuracy_score(prediction3,test_Y))
###Output
_____no_output_____
###Markdown
Decision Tree
###Code
model=DecisionTreeClassifier()
model.fit(train_X,train_Y)
prediction4=model.predict(test_X)
print('The accuracy of the Decision Tree is',metrics.accuracy_score(prediction4,test_Y))
###Output
_____no_output_____
###Markdown
K-Nearest Neighbours(KNN)
###Code
model=KNeighborsClassifier()
model.fit(train_X,train_Y)
prediction5=model.predict(test_X)
print('The accuracy of the KNN is',metrics.accuracy_score(prediction5,test_Y))
###Output
_____no_output_____
###Markdown
Now the accuracy for the KNN model changes as we change the values for **n_neighbours** attribute. The default value is **5**. Lets check the accuracies over various values of n_neighbours.
###Code
a_index=list(range(1,11))
a=pd.Series()
x=[0,1,2,3,4,5,6,7,8,9,10]
for i in list(range(1,11)):
model=KNeighborsClassifier(n_neighbors=i)
model.fit(train_X,train_Y)
prediction=model.predict(test_X)
a=a.append(pd.Series(metrics.accuracy_score(prediction,test_Y)))
plt.plot(a_index, a)
plt.xticks(x)
fig=plt.gcf()
fig.set_size_inches(12,6)
plt.show()
print('Accuracies for different values of n are:',a.values,'with the max value as ',a.values.max())
###Output
_____no_output_____
###Markdown
Gaussian Naive Bayes
###Code
model=GaussianNB()
model.fit(train_X,train_Y)
prediction6=model.predict(test_X)
print('The accuracy of the NaiveBayes is',metrics.accuracy_score(prediction6,test_Y))
###Output
_____no_output_____
###Markdown
Random Forests
###Code
model=RandomForestClassifier(n_estimators=100)
model.fit(train_X,train_Y)
prediction7=model.predict(test_X)
print('The accuracy of the Random Forests is',metrics.accuracy_score(prediction7,test_Y))
###Output
_____no_output_____
###Markdown
The accuracy of a model is not the only factor that determines the robustness of the classifier. Let's say that a classifier is trained over a training data and tested over the test data and it scores an accuracy of 90%.Now this seems to be very good accuracy for a classifier, but can we confirm that it will be 90% for all the new test sets that come over??. The answer is **No**, because we can't determine which all instances will the classifier will use to train itself. As the training and testing data changes, the accuracy will also change. It may increase or decrease. This is known as **model variance**.To overcome this and get a generalized model,we use **Cross Validation**. Cross ValidationMany a times, the data is imbalanced, i.e there may be a high number of class1 instances but less number of other class instances. Thus we should train and test our algorithm on each and every instance of the dataset. Then we can take an average of all the noted accuracies over the dataset. 1)The K-Fold Cross Validation works by first dividing the dataset into k-subsets.2)Let's say we divide the dataset into (k=5) parts. We reserve 1 part for testing and train the algorithm over the 4 parts.3)We continue the process by changing the testing part in each iteration and training the algorithm over the other parts. The accuracies and errors are then averaged to get a average accuracy of the algorithm.This is called K-Fold Cross Validation.4)An algorithm may underfit over a dataset for some training data and sometimes also overfit the data for other training set. Thus with cross-validation, we can achieve a generalised model.
###Code
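# Minimal sketch (illustrative) of the manual K-Fold procedure described above, using the
# X, Y, LogisticRegression and metrics already defined in earlier cells; cross_val_score
# below does the same bookkeeping for us.
from sklearn.model_selection import KFold
fold_acc = []
for train_idx, test_idx in KFold(n_splits=5).split(X):
    clf = LogisticRegression()
    clf.fit(X.iloc[train_idx], Y.iloc[train_idx])
    pred = clf.predict(X.iloc[test_idx])
    fold_acc.append(metrics.accuracy_score(Y.iloc[test_idx], pred))
print('fold accuracies:', np.round(fold_acc, 3), 'mean:', round(np.mean(fold_acc), 3))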
from sklearn.model_selection import KFold #for K-fold cross validation
from sklearn.model_selection import cross_val_score #score evaluation
from sklearn.model_selection import cross_val_predict #prediction
kfold = KFold(n_splits=10, random_state=22) # k=10, split the data into 10 equal parts
xyz=[]
accuracy=[]
std=[]
classifiers=['Linear Svm','Radial Svm','Logistic Regression','KNN','Decision Tree','Naive Bayes','Random Forest']
models=[svm.SVC(kernel='linear'),svm.SVC(kernel='rbf'),LogisticRegression(),KNeighborsClassifier(n_neighbors=9),DecisionTreeClassifier(),GaussianNB(),RandomForestClassifier(n_estimators=100)]
for i in models:
model = i
cv_result = cross_val_score(model,X,Y, cv = kfold,scoring = "accuracy")
cv_result=cv_result
xyz.append(cv_result.mean())
std.append(cv_result.std())
accuracy.append(cv_result)
new_models_dataframe2=pd.DataFrame({'CV Mean':xyz,'Std':std},index=classifiers)
new_models_dataframe2
plt.subplots(figsize=(12,6))
box=pd.DataFrame(accuracy,index=[classifiers])
box.T.boxplot()
new_models_dataframe2['CV Mean'].plot.barh(width=0.8)
plt.title('Average CV Mean Accuracy')
fig=plt.gcf()
fig.set_size_inches(8,5)
plt.show()
###Output
_____no_output_____
###Markdown
The classification accuracy can be sometimes misleading due to imbalance. We can get a summarized result with the help of confusion matrix, which shows where did the model go wrong, or which class did the model predict wrong. Confusion MatrixIt gives the number of correct and incorrect classifications made by the classifier.
###Code
f,ax=plt.subplots(3,3,figsize=(12,10))
y_pred = cross_val_predict(svm.SVC(kernel='rbf'),X,Y,cv=10)
sns.heatmap(confusion_matrix(Y,y_pred),ax=ax[0,0],annot=True,fmt='2.0f')
ax[0,0].set_title('Matrix for rbf-SVM')
y_pred = cross_val_predict(svm.SVC(kernel='linear'),X,Y,cv=10)
sns.heatmap(confusion_matrix(Y,y_pred),ax=ax[0,1],annot=True,fmt='2.0f')
ax[0,1].set_title('Matrix for Linear-SVM')
y_pred = cross_val_predict(KNeighborsClassifier(n_neighbors=9),X,Y,cv=10)
sns.heatmap(confusion_matrix(Y,y_pred),ax=ax[0,2],annot=True,fmt='2.0f')
ax[0,2].set_title('Matrix for KNN')
y_pred = cross_val_predict(RandomForestClassifier(n_estimators=100),X,Y,cv=10)
sns.heatmap(confusion_matrix(Y,y_pred),ax=ax[1,0],annot=True,fmt='2.0f')
ax[1,0].set_title('Matrix for Random-Forests')
y_pred = cross_val_predict(LogisticRegression(),X,Y,cv=10)
sns.heatmap(confusion_matrix(Y,y_pred),ax=ax[1,1],annot=True,fmt='2.0f')
ax[1,1].set_title('Matrix for Logistic Regression')
y_pred = cross_val_predict(DecisionTreeClassifier(),X,Y,cv=10)
sns.heatmap(confusion_matrix(Y,y_pred),ax=ax[1,2],annot=True,fmt='2.0f')
ax[1,2].set_title('Matrix for Decision Tree')
y_pred = cross_val_predict(GaussianNB(),X,Y,cv=10)
sns.heatmap(confusion_matrix(Y,y_pred),ax=ax[2,0],annot=True,fmt='2.0f')
ax[2,0].set_title('Matrix for Naive Bayes')
plt.subplots_adjust(hspace=0.2,wspace=0.2)
plt.show()
###Output
_____no_output_____
###Markdown
Interpreting Confusion MatrixThe left diagonal shows the number of correct predictions made for each class while the right diagonal shows the number of wrong prredictions made. Lets consider the first plot for rbf-SVM:1)The no. of correct predictions are **491(for dead) + 247(for survived)** with the mean CV accuracy being **(491+247)/891 = 82.8%** which we did get earlier.2)**Errors**--> Wrongly Classified 58 dead people as survived and 95 survived as dead. Thus it has made more mistakes by predicting dead as survived.By looking at all the matrices, we can say that rbf-SVM has a higher chance in correctly predicting dead passengers but NaiveBayes has a higher chance in correctly predicting passengers who survived. Hyper-Parameters TuningThe machine learning models are like a Black-Box. There are some default parameter values for this Black-Box, which we can tune or change to get a better model. Like the C and gamma in the SVM model and similarly different parameters for different classifiers, are called the hyper-parameters, which we can tune to change the learning rate of the algorithm and get a better model. This is known as Hyper-Parameter Tuning.We will tune the hyper-parameters for the 2 best classifiers i.e the SVM and RandomForests. SVM
###Code
from sklearn.model_selection import GridSearchCV
C=[0.05,0.1,0.2,0.3,0.25,0.4,0.5,0.6,0.7,0.8,0.9,1]
gamma=[0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0]
kernel=['rbf','linear']
hyper={'kernel':kernel,'C':C,'gamma':gamma}
gd=GridSearchCV(estimator=svm.SVC(),param_grid=hyper,verbose=True)
gd.fit(X,Y)
print(gd.best_score_)
print(gd.best_estimator_)
###Output
_____no_output_____
###Markdown
Random Forests
###Code
n_estimators=range(100,1000,100)
hyper={'n_estimators':n_estimators}
gd=GridSearchCV(estimator=RandomForestClassifier(random_state=0),param_grid=hyper,verbose=True)
gd.fit(X,Y)
print(gd.best_score_)
print(gd.best_estimator_)
###Output
_____no_output_____
###Markdown
The best score for rbf-SVM is **82.82% with C=0.05 and gamma=0.1**. For RandomForest, the score is about **81.8% with n_estimators=900**. EnsemblingEnsembling is a good way to increase the accuracy or performance of a model. In simple words, it is the combination of various simple models to create a single powerful model. Lets say we want to buy a phone and ask many people about it based on various parameters; we can then make a strong judgement about a single product after analysing all the different opinions. This is **Ensembling**, which improves the stability of the model. Ensembling can be done in ways like:1)Voting Classifier2)Bagging3)Boosting. Voting ClassifierIt is the simplest way of combining predictions from many different simple machine learning models. It gives an average prediction result based on the predictions of all the submodels. The submodels or base models are all of different types.
###Code
from sklearn.ensemble import VotingClassifier
ensemble_lin_rbf=VotingClassifier(estimators=[('KNN',KNeighborsClassifier(n_neighbors=10)),
('RBF',svm.SVC(probability=True,kernel='rbf',C=0.5,gamma=0.1)),
('RFor',RandomForestClassifier(n_estimators=500,random_state=0)),
('LR',LogisticRegression(C=0.05)),
('DT',DecisionTreeClassifier(random_state=0)),
('NB',GaussianNB()),
('svm',svm.SVC(kernel='linear',probability=True))
],
voting='soft').fit(train_X,train_Y)
print('The accuracy for ensembled model is:',ensemble_lin_rbf.score(test_X,test_Y))
cross=cross_val_score(ensemble_lin_rbf,X,Y, cv = 10,scoring = "accuracy")
print('The cross validated score is',cross.mean())
###Output
_____no_output_____
###Markdown
BaggingBagging is a general ensemble method. It works by applying similar classifiers on small partitions of the dataset and then taking the average of all the predictions. Due to the averaging, there is a reduction in variance. Unlike the Voting Classifier, Bagging makes use of similar classifiers. Bagged KNNBagging works best with models with high variance, such as a Decision Tree or Random Forest. We can use KNN with a small value of **n_neighbours**, as a small value of n_neighbours gives a high-variance model that benefits most from bagging.
###Code
from sklearn.ensemble import BaggingClassifier
model=BaggingClassifier(base_estimator=KNeighborsClassifier(n_neighbors=3),random_state=0,n_estimators=700)
model.fit(train_X,train_Y)
prediction=model.predict(test_X)
print('The accuracy for bagged KNN is:',metrics.accuracy_score(prediction,test_Y))
result=cross_val_score(model,X,Y,cv=10,scoring='accuracy')
print('The cross validated score for bagged KNN is:',result.mean())
###Output
_____no_output_____
###Markdown
Bagged DecisionTree
###Code
model=BaggingClassifier(base_estimator=DecisionTreeClassifier(),random_state=0,n_estimators=100)
model.fit(train_X,train_Y)
prediction=model.predict(test_X)
print('The accuracy for bagged Decision Tree is:',metrics.accuracy_score(prediction,test_Y))
result=cross_val_score(model,X,Y,cv=10,scoring='accuracy')
print('The cross validated score for bagged Decision Tree is:',result.mean())
###Output
_____no_output_____
###Markdown
BoostingBoosting is an ensembling technique which uses sequential learning of classifiers. It is a step-by-step enhancement of a weak model. Boosting works as follows: a model is first trained on the complete dataset. The model will get some instances right and some wrong. In the next iteration, the learner will focus more on the wrongly predicted instances, i.e. give them more weight, and thus try to predict those instances correctly. This iterative process continues, and new classifiers are added to the model until the limit on accuracy is reached. AdaBoost(Adaptive Boosting)The weak learner or estimator in this case is a Decision Tree. But we can change the default base_estimator to any algorithm of our choice.
###Code
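# Illustrative sketch of one boosting round's reweighting step described above (the idea
# only, not how the library implements it): fit a decision-stump weak learner with the
# current sample weights, then up-weight the samples it got wrong.
w = np.ones(len(train_X)) / len(train_X)              # start from uniform sample weights
stump = DecisionTreeClassifier(max_depth=1)
stump.fit(train_X, train_Y.values.ravel(), sample_weight=w)
miss = (stump.predict(train_X) != train_Y.values.ravel()).astype(float)
err = np.sum(w * miss) / np.sum(w)                    # weighted training error
alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))     # this learner's vote weight
w = w * np.exp(alpha * (2 * miss - 1))                # boost the misclassified samples
w = w / w.sum()                                       # renormalise for the next round
print('weighted error:', round(err, 3), 'learner weight:', round(alpha, 3))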
from sklearn.ensemble import AdaBoostClassifier
ada=AdaBoostClassifier(n_estimators=200,random_state=0,learning_rate=0.1)
result=cross_val_score(ada,X,Y,cv=10,scoring='accuracy')
print('The cross validated score for AdaBoost is:',result.mean())
###Output
_____no_output_____
###Markdown
Stochastic Gradient BoostingHere too the weak learner is a Decision Tree.
###Code
from sklearn.ensemble import GradientBoostingClassifier
grad=GradientBoostingClassifier(n_estimators=500,random_state=0,learning_rate=0.1)
result=cross_val_score(grad,X,Y,cv=10,scoring='accuracy')
print('The cross validated score for Gradient Boosting is:',result.mean())
###Output
_____no_output_____
###Markdown
XGBoost
###Code
import xgboost as xg
xgboost=xg.XGBClassifier(n_estimators=900,learning_rate=0.1)
result=cross_val_score(xgboost,X,Y,cv=10,scoring='accuracy')
print('The cross validated score for XGBoost is:',result.mean())
###Output
_____no_output_____
###Markdown
We got the highest accuracy for AdaBoost. We will try to increase it with Hyper-Parameter Tuning Hyper-Parameter Tuning for AdaBoost
###Code
n_estimators=list(range(100,1100,100))
learn_rate=[0.05,0.1,0.2,0.3,0.25,0.4,0.5,0.6,0.7,0.8,0.9,1]
hyper={'n_estimators':n_estimators,'learning_rate':learn_rate}
gd=GridSearchCV(estimator=AdaBoostClassifier(),param_grid=hyper,verbose=True)
gd.fit(X,Y)
print(gd.best_score_)
print(gd.best_estimator_)
###Output
_____no_output_____
###Markdown
The maximum accuracy we can get with AdaBoost is **83.16% with n_estimators=200 and learning_rate=0.05** Confusion Matrix for the Best Model
###Code
ada=AdaBoostClassifier(n_estimators=200,random_state=0,learning_rate=0.05)
result=cross_val_predict(ada,X,Y,cv=10)
sns.heatmap(confusion_matrix(Y,result),cmap='winter',annot=True,fmt='2.0f')
plt.show()
###Output
_____no_output_____
###Markdown
Feature Importance
###Code
f,ax=plt.subplots(2,2,figsize=(15,12))
model=RandomForestClassifier(n_estimators=500,random_state=0)
model.fit(X,Y)
pd.Series(model.feature_importances_,X.columns).sort_values(ascending=True).plot.barh(width=0.8,ax=ax[0,0])
ax[0,0].set_title('Feature Importance in Random Forests')
model=AdaBoostClassifier(n_estimators=200,learning_rate=0.05,random_state=0)
model.fit(X,Y)
pd.Series(model.feature_importances_,X.columns).sort_values(ascending=True).plot.barh(width=0.8,ax=ax[0,1],color='#ddff11')
ax[0,1].set_title('Feature Importance in AdaBoost')
model=GradientBoostingClassifier(n_estimators=500,learning_rate=0.1,random_state=0)
model.fit(X,Y)
pd.Series(model.feature_importances_,X.columns).sort_values(ascending=True).plot.barh(width=0.8,ax=ax[1,0],cmap='RdYlGn_r')
ax[1,0].set_title('Feature Importance in Gradient Boosting')
model=xg.XGBClassifier(n_estimators=900,learning_rate=0.1)
model.fit(X,Y)
pd.Series(model.feature_importances_,X.columns).sort_values(ascending=True).plot.barh(width=0.8,ax=ax[1,1],color='#FD0F00')
ax[1,1].set_title('Feature Importance in XgBoost')
plt.show()
###Output
_____no_output_____ |
nbs/05_HitTest.ipynb | ###Markdown
HitTest> VAULT Proposal python module for addressing Technical Scenario objective 1 Objective 1:> 1) Determine the “hits” where a satellite has geodetic overlap of any vessel(s) at any point(s) in time. For simplicity, it may be assumed that a satellite has full view of half the earth (regardless of satellite type or its elevation above the earth). However, additional accuracy models with rationale is allowed.A satellite can see the ship only if the ship can see the satellite. So instead of the half-earth assumption, we define a "hit" as the satellite being above the horizon (defined as "alt" ≥0º in a ship-based alt-azimuth coordinate system). According to [exactEarth](http://www.marinetraffic.org/exactearth/), a more realistic approximation is that AIS satellites at 650 km have a 5000 km diameter Field of View. That corresponds to a horizon of about 𝜃=14.6º, as shown here: 🛰 : ` : ` 650 : ` km : ` : ` : 2500km 𝜃 ` .........................`🛳
###Code
import math
𝜃 = math.degrees(math.atan(650/2500))
print(f'{𝜃:.1f}')
# export
# Requires modules in ../jacobs_vault be available. Either:
# ln -s nbs/jacobs_vault -> jacobs_vault
# or:
# in each notebook `import sys; sys.path.append('..')`
# or:
# add .. to PYTHONPATH.
from jacobs_vault.starmap import starmap # Plotting fn
from datetime import datetime
from dateutil import tz
from skyfield.api import EarthSatellite
from skyfield.api import Topos, load
import math
import pandas as pd
import numpy as np
import json   # used by HitTest.web_invoke
# hide
#from nbdev.showdoc import *
###Output
_____no_output_____
###Markdown
Define Schema and PathSave memory by using appropriate data types for the TLE elements. (Determined by peeking at the data file or reading the TLE format.) This saves quite a bit, and may save more as Pandas optimizes the `string` class vs. default `object`. Regardless, it makes columns single-typed.
###Code
#export
COLUMNS = ["satellite", "day_dt", "day", "tle_dt", "tle_ts", "line1", "line2"]
# DTYPES = [str, str, int, str, int, str, str]
DTYPES = {'satellite': 'uint16', # observed values are ints in 5..41678, so 0..65535 is good
'day_dt': 'str', # here a single date, but generally datetime: PARSE
'day': 'uint16', # here a single value 6026, too big for uint8, but 16 is good
'tle_dt': 'str', # again, PARSE AS DATETIME
'tle_ts': 'uint32', # large ints, but < 4294967295. We could compress more, but... meh
'line1': 'string', # 12K unique 80-char TLE strings. Category wd give tiny compression.
'line2': 'string'} # In theory "string" is better than "object". Not seeing it here.
DATE_COLS = ['day_dt', 'tle_dt']
# Where to look for the TLE dayfiles.
# Symlink ../data to the actual data.
DAY_FILE_PATH="../data/VAULT_Data/TLE_daily"
###Output
_____no_output_____
###Markdown
Quality Values
###Code
# export
# Set horizon in degrees. Suggested: 0º or 14.6º.
HORIZON = 14.6
# Define cutoffs for TLE track quality, as TLE age in days
EXCELLENT, GOOD, POOR = 2, 14, 56
def get_qvals(𝚫t: int, alt: float, 𝜃:float=HORIZON):
"""Get quality vals for raw 𝚫t [days].
Returns Series (alt, az, 𝚫t) in units (º,º, days)
Params
------
𝚫t - age of TLE in days. Int or flot.
alt - altitude above horizon
𝜃 - minimum alt in degrees to count as a hit (Default HORIZON)
"""
if 𝚫t <= EXCELLENT:
if alt.degrees > 𝜃:
qvals = ["Excellent", math.nan]
else:
qvals = [math.nan, "Excellent"]
elif 𝚫t <= GOOD:
if alt.degrees > 0.0:
qvals = ["Good", math.nan]
else:
qvals = [math.nan, "Good"]
elif 𝚫t <= POOR:
if alt.degrees > 0.0:
qvals = ["Poor", math.nan]
else:
qvals = [math.nan, "Poor"]
else:
qvals = [math.nan, "Stale"]
return qvals
#
###Output
_____no_output_____
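###Markdown
A quick check of the quality cutoffs. `get_qvals` only needs its second argument to expose a `.degrees` attribute, so a simple stand-in object is used here instead of a real skyfield `Angle`.
###Code
from types import SimpleNamespace
# 1-day-old TLE, satellite 30º above the horizon -> an "Excellent" hit
print(get_qvals(1, SimpleNamespace(degrees=30.0)))
# 30-day-old TLE, satellite below the horizon -> a "Poor" miss
print(get_qvals(30, SimpleNamespace(degrees=-5.0)))
###Output
_____no_output_____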
###Markdown
HitTest ClassDuring data posturing we created single-day TLE files, so each day contains the most recent TLE for all 12K satellites.The `HitTest` constructor takes a date, loads the TLE dayfile, and returns the corresponding DataFrame._TODO_: Most of these satellites are useless. Speed by ignoring.
###Code
#export
class HitTest:
""" Counts the satellites that are visible at a given point on the globe at a
given time, and returns counts classified by data quality and
latitude, azimuth, hit_quality, radius for visible satellites
"""
def __init__(self, dt:datetime,
day_file_base_path:str=DAY_FILE_PATH,
𝜃:float=HORIZON):
"""Look for and load TLE datafile for date {dt}."""
df_path = "%s/%4d/%02d/%02d.tab.gz" % (day_file_base_path, dt.year, dt.month, dt.day)
print(f"Trying to load {df_path}")
df = pd.read_csv(df_path,
names=COLUMNS, sep='\t', compression='gzip',
dtype=DTYPES,
parse_dates=DATE_COLS,
infer_datetime_format=True)
self.df_day_tle = df.drop_duplicates()
self.𝜃 = 𝜃
self.dt = dt
#
def satellite_alt_az_days(self, _t0: datetime, lat: float, lon: float):
'''Load tracks for day {_t0} and return altº, azº, and 𝚫t [days]
for each row.
Usage eg: satellite_alt_az_days(datetime(2016, 6, 30), 45.0, -176.0)
'''
earth_position = Topos(lat, lon)
ts = load.timescale()
t = ts.utc(_t0.replace(tzinfo=tz.tzutc()))
def eval_tle(row):
'''Extract satellite info from line1/line2/tle_dt.
Returns alt, az, and (days between dt and each row).
Inherits {ts}, {t}, and {earth_position} values at function definition.
TODO: Currently only works for `apply(raw=False)`.
'''
try:
satellite = EarthSatellite(row['line1'], row['line2'], 'x', ts)
𝚫t = abs(_t0 - row['tle_dt']).days
except IndexError:
# `apply(raw=True)` sends arrays instead of Series
satellite = EarthSatellite(row[5], row[6], 'x', ts)
𝚫t = abs(_t0 - row[3]).days
topocentric = (satellite - earth_position).at(t)
alt, az, distance = topocentric.altaz()
qvals = get_qvals(𝚫t, alt)
return pd.Series([alt.degrees, az.degrees, 𝚫t] + qvals)
_ = self.df_day_tle.apply(eval_tle, axis=1, raw=False)
df_alt_az_days = pd.DataFrame(_)
df_alt_az_days.columns = ["altitude", "azimuth", "days", "hit", "miss"]
#df_alt_az_days.reindex()
return df_alt_az_days
def invoke(self, dt: datetime, lat: float, lon: float):
''' Main logic for satellite hit-testing service
Returns 2 DataFrames:
- df_hit_miss_table : The hit,miss stats table
- df_alt_az_days_visible : The information on the visible satellites for star-map plotting
'''
df_alt_az_days = self.satellite_alt_az_days(dt, lat, lon)
# "invert" altitude for polar plotting. Doing this thousands of times
# more than necessary (really just want R for the df_alt_az_days_visible slice)
# but pandas does not like apply on a slice.
df_alt_az_days.loc["R"] = 90.0 - df_alt_az_days["altitude"]
        # Quality labels ("Excellent"/"Good"/"Poor"/"Stale") are already stored as
        # strings by get_qvals(), so no extra mapping step is needed before tabulating.
        #
df_hit_miss_table = pd.concat([
df_alt_az_days["hit"].value_counts(sort=False),
df_alt_az_days["miss"].value_counts()],
axis=1, sort=False)
df_alt_az_days_visible = df_alt_az_days[df_alt_az_days["altitude"] > self.𝜃]
return df_hit_miss_table, df_alt_az_days_visible
#
def web_invoke(self, dt, lat, lon):
''' Main support function for satellite hit-testing service
returns a json object having two objects:
{
"hitmiss": The hit,miss stats table
"visible": The information on the visible satellites
}
'''
df_hit_miss_table, df_alt_az_days_visible = self.invoke(dt, lat, lon)
result = {
"hitmiss": df_hit_miss_table.to_dict(),
"visible": df_alt_az_days_visible.to_dict()
}
return json.dumps(result)
#
###Output
_____no_output_____
###Markdown
Execute for a given day2016-06-30 for starters
###Code
dt = datetime(2016, 6, 30)
ht = HitTest(dt)
hitmiss, rows = ht.invoke(dt, 45.0, -176.0)
hitmiss
rows.sample(5)
###Output
_____no_output_____
###Markdown
Visualize the resultsGenerate a polar alt/az plot of the qualifying satellites* Excellent = blue* Good = red* Else = yellow**TODO:** Is "0" here 0 altitude? That would be on the horizon, which is counter-intuitive. Note the band of satellites at southern bearings -- this ship was in the Northern hemisphere.
###Code
fig = starmap(rows)
fig.write_image('images/starmap_new.pdf')
###Output
_____no_output_____ |