path | concatenated_notebook
---|---
docs/notebooks/inspect_model.ipynb | ###Markdown
How to inspect model fit results
###Code
import rlssm
# load non-hierarchical DDM fit:
model_fit_ddm = rlssm.load_model_results('/Users/laurafontanesi/git/rlssm/docs/notebooks/DDM.pkl')
# load non-hierarchical LBA fit:
model_fit_lba = rlssm.load_model_results('/Users/laurafontanesi/git/rlssm/docs/notebooks/LBA_2A.pkl')
# load hierarchical RL fit:
model_fit_rl = rlssm.load_model_results('/Users/laurafontanesi/git/rlssm/docs/notebooks/hierRL_2A.pkl')
###Output
_____no_output_____
###Markdown
Posteriors The posterior samples are stored in `samples`:
###Code
model_fit_ddm.samples
model_fit_rl.samples.describe()
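# A minimal sketch, assuming `samples` behaves like a pandas DataFrame (as
# `describe()` above suggests): posterior means for each parameter.
model_fit_ddm.samples.mean()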
###Output
_____no_output_____
###Markdown
You can simply plot the model's posteriors using `plot_posteriors`:
###Code
model_fit_ddm.plot_posteriors();
###Output
_____no_output_____
###Markdown
By default, 95% HDIs are shown, but you can also show the posteriors without intervals or with BCIs instead, and change the alpha level:
###Code
model_fit_rl.plot_posteriors(show_intervals='BCI', alpha_intervals=.01);
###Output
_____no_output_____
###Markdown
Trial-level Depending on the model specification, you can also extract certain trial-level parameters as numpy ordered dictionaries of n_samples X n_trials shape:
###Code
model_fit_ddm.trial_samples['drift_t'].shape
model_fit_ddm.trial_samples.keys()
model_fit_lba.trial_samples.keys() # for the LBA
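# Hedged example: each entry has shape n_samples x n_trials, so averaging
# over axis 0 gives the posterior mean of the trial-level parameter for
# every trial.
model_fit_ddm.trial_samples['drift_t'].mean(axis=0).shape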
###Output
_____no_output_____
###Markdown
In the case of an RL model fit on choices alone, you can extract the log probability of accuracy=1 for each trial:
###Code
model_fit_rl.trial_samples.keys()
model_fit_rl.trial_samples['log_p_t'].shape
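# Small sketch (assumption: `log_p_t` holds log probabilities, so
# exponentiating recovers P(accuracy=1) per trial and posterior sample):
import numpy as np
np.exp(model_fit_rl.trial_samples['log_p_t']).mean(axis=0)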
###Output
_____no_output_____
###Markdown
Posterior predictives With `get_posterior_predictives_df` you get posterior predictives as pandas DataFrames of `n_posterior_predictives` X `n_trials` shape:
###Code
pp = model_fit_rl.get_posterior_predictives_df(n_posterior_predictives=1000)
pp
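# Usage sketch (assuming rows index posterior draws, columns index trials,
# and simulated choices are coded 0/1): column means estimate the predicted
# P(accuracy=1) per trial.
pp.mean(axis=0)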
###Output
_____no_output_____
###Markdown
For the DDM, you have additional parameters to tweak the DDM simulations, and you get a DataFrame with a hierarchical column index, for RTs and for accuracy:
###Code
pp = model_fit_ddm.get_posterior_predictives_df(n_posterior_predictives=100, dt=.001)
pp
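# Hedged sketch: with a hierarchical column index, a top level can be
# selected directly; the exact level names used here ('rt', 'accuracy')
# are an assumption, not confirmed above.
# pp['rt'].head()
# pp['accuracy'].head()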
###Output
_____no_output_____
###Markdown
You can also get posterior predictive summaries with `get_posterior_predictives_summary`. For RL models fit on choices alone, only mean accuracy is reported; for models fitted on RTs as well, mean RTs, skewness, and quantiles for the lower and upper boundaries are also included.
###Code
model_fit_rl.get_posterior_predictives_summary()
model_fit_ddm.get_posterior_predictives_summary()
###Output
_____no_output_____
###Markdown
You can also specify which quantiles you are interested in:
###Code
model_fit_lba.get_posterior_predictives_summary(n_posterior_predictives=200, quantiles=[.1, .5, .9])
###Output
_____no_output_____
###Markdown
Finally, you can get summary for grouping variables (e.g., experimental conditions, trial blocks, etc.) in your data:
###Code
model_fit_lba.get_grouped_posterior_predictives_summary(n_posterior_predictives=200,
grouping_vars=['block_label'],
quantiles=[.3, .5, .7])
###Output
_____no_output_____
###Markdown
Plot posterior predictives You can plot posterior predictives similarly, both **ungrouped** (across all trials) or **grouped** (across conditions, trial blocks, etc.). For RT models, you have both **mean plots** and **quantile plots**:
###Code
model_fit_ddm.plot_mean_posterior_predictives(n_posterior_predictives=200);
###Output
_____no_output_____
###Markdown
Quantile plots have 2 main visualization options, "shades" and "lines", and you can again specify which quantiles you want, which intervals, and alpha levels:
###Code
model_fit_lba.plot_quantiles_posterior_predictives(n_posterior_predictives=200);
model_fit_lba.plot_quantiles_posterior_predictives(n_posterior_predictives=200,
kind='shades',
quantiles=[.1, .5, .9]);
model_fit_lba.plot_quantiles_grouped_posterior_predictives(
n_posterior_predictives=100,
grouping_var='block_label',
kind='shades',
quantiles=[.1, .3, .5, .7, .9]);
# Define new grouping variables:
import pandas as pd
import numpy as np
data = model_fit_rl.data_info['data']
# add a column to the data to group trials across learning blocks
data['block_bins'] = pd.cut(data.trial_block, 8, labels=np.arange(1, 9))
# add a column to define which choice pair is shown in that trial
data['choice_pair'] = 'AB'
data.loc[(data.cor_option == 3) & (data.inc_option == 1), 'choice_pair'] = 'AC'
data.loc[(data.cor_option == 4) & (data.inc_option == 2), 'choice_pair'] = 'BD'
data.loc[(data.cor_option == 4) & (data.inc_option == 3), 'choice_pair'] = 'CD'
import matplotlib.pyplot as plt
import seaborn as sns
fig, axes = plt.subplots(1, 2, figsize=(20,8))
model_fit_rl.plot_mean_grouped_posterior_predictives(grouping_vars=['block_bins'], n_posterior_predictives=500, ax=axes[0])
model_fit_rl.plot_mean_grouped_posterior_predictives(grouping_vars=['block_bins', 'choice_pair'],
n_posterior_predictives=500, ax=axes[1])
sns.despine()
###Output
_____no_output_____ |
docs/source/user_guide/clean/clean_no_mva.ipynb | ###Markdown
Norwegian VAT Numbers Introduction The function `clean_no_mva()` cleans a column containing Norwegian VAT number (ABN) strings, and standardizes them in a given format. The function `validate_no_mva()` validates either a single ABN string, a column of ABN strings or a DataFrame of ABN strings, returning `True` if the value is valid, and `False` otherwise. ABN strings can be converted to the following formats via the `output_format` parameter: * `compact`: only number strings without any separators or whitespace, like "995525828MVA" * `standard`: ABN strings with proper whitespace in the proper places, like "NO 995 525 828 MVA" Invalid parsing is handled with the `errors` parameter: * `coerce` (default): invalid parsing will be set to NaN * `ignore`: invalid parsing will return the input * `raise`: invalid parsing will raise an exception The following sections demonstrate the functionality of `clean_no_mva()` and `validate_no_mva()`. An example dataset containing ABN strings
###Code
import pandas as pd
import numpy as np
df = pd.DataFrame(
{
"abn": [
"995525828MVA",
"NO 995 525 829 MVA",
"51824753556",
"51 824 753 556",
"hello",
np.nan,
"NULL"
],
"address": [
"123 Pine Ave.",
"main st",
"1234 west main heights 57033",
"apt 1 789 s maple rd manhattan",
"robie house, 789 north main street",
"(staples center) 1111 S Figueroa St, Los Angeles",
"hello",
]
}
)
df
###Output
_____no_output_____
###Markdown
1. Default `clean_no_mva` By default, `clean_no_mva` will clean ABN strings and output them in the standard format with proper separators.
###Code
from dataprep.clean import clean_no_mva
clean_no_mva(df, column = "abn")
###Output
_____no_output_____
###Markdown
2. Output formats This section demonstrates the `output_format` parameter. `standard` (default)
###Code
clean_no_mva(df, column = "abn", output_format="standard")
###Output
_____no_output_____
###Markdown
`compact`
###Code
clean_no_mva(df, column = "abn", output_format="compact")
###Output
_____no_output_____
###Markdown
3. `inplace` parameter This deletes the given column from the returned DataFrame. A new column containing cleaned ABN strings is added with a title in the format `"{original title}_clean"`.
###Code
clean_no_mva(df, column="abn", inplace=True)
###Output
_____no_output_____
###Markdown
4. `errors` parameter `coerce` (default)
###Code
clean_no_mva(df, "abn", errors="coerce")
###Output
_____no_output_____
###Markdown
`ignore`
###Code
clean_no_mva(df, "abn", errors="ignore")
###Output
_____no_output_____
###Markdown
5. `validate_no_mva()` `validate_no_mva()` returns `True` when the input is a valid ABN. Otherwise it returns `False`. The input of `validate_no_mva()` can be a string, a pandas Series, a Dask Series, a pandas DataFrame or a Dask DataFrame. When the input is a string or a Series, the user doesn't need to specify a column name to be validated. When the input is a DataFrame, the user can optionally specify a column name. If a column name is specified, `validate_no_mva()` only returns the validation result for that column; otherwise it returns the validation result for the whole DataFrame.
###Code
from dataprep.clean import validate_no_mva
print(validate_no_mva("995525828MVA"))
print(validate_no_mva("NO 995 525 829 MVA"))
print(validate_no_mva("51824753556"))
print(validate_no_mva("51 824 753 556"))
print(validate_no_mva("hello"))
print(validate_no_mva(np.nan))
print(validate_no_mva("NULL"))
###Output
_____no_output_____
###Markdown
Series
###Code
validate_no_mva(df["abn"])
###Output
_____no_output_____
###Markdown
DataFrame + Specify Column
###Code
validate_no_mva(df, column="abn")
###Output
_____no_output_____
###Markdown
Only DataFrame
###Code
validate_no_mva(df)
###Output
_____no_output_____ |
notebooks/todo.ipynb | ###Markdown
**Get Data** - Our data set will consist of an Excel file containing customer counts per date. We will learn how to read in the excel file for processing. **Prepare Data** - The data is an irregular time series having duplicate dates. We will be challenged in compressing the data and coming up with next year's forecasted customer count. **Analyze Data** - We use graphs to visualize trends and spot outliers. Some built-in computational tools will be used to calculate next year's forecasted customer count. **Present Data** - The results will be plotted. ***NOTE: Make sure you have looked through all previous lessons, as the knowledge learned in previous lessons will be needed for this exercise.***
###Code
# Import libraries
import json
import gzip
import sys
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import string
import nltk
from nltk.corpus import stopwords
from sklearn.model_selection import train_test_split
from IPython.display import display
import numpy as np
# plt.style.use('ggplot')
%matplotlib inline
# print(plt.style.available)
print('Python version ' + sys.version)
print('Pandas version: ' + pd.__version__)
print('Matplotlib version ' + matplotlib.__version__)
###Output
Python version 3.6.1 |Anaconda 4.4.0 (x86_64)| (default, May 11 2017, 13:04:09)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)]
Pandas version: 0.20.1
Matplotlib version 2.0.2
###Markdown
> We will be creating our own test data for analysis.
###Code
# 1.1 Read Data
# # Convert to 'strict' json
# The above data can be read with python 'eval',
# but is not strict json. If you'd like to use
# some language other than python, you can convert
# the data to strict json as follows
# def parse(path):
# g = gzip.open(path, 'r')
# for l in g:
# yield json.dumps(eval(l))
# f = open("output.strict", 'w')
# for l in parse("reviews_Movies_and_TV_5.json.gz"):
# f.write(l + '\n')
# Pandas data frame
# This code reads the data into a pandas data frame
def parse(path):
g = gzip.open(path, 'rb')
for l in g:
yield eval(l)
def getDF(path):
i = 0
df = {}
for d in parse(path):
df[i] = d
i += 1
return pd.DataFrame.from_dict(df, orient='index')
df = getDF('reviews_Movies_and_TV_5.json.gz')
print(df)
# df = df.head()
# # def _color_feature(val):
# # # color = 'red' if val < 0 else 'green'
# # # color = 'red' if val.name == 'reviewerID' else 'green'
# # color = 'red' if val > 3 else 'green'
# # return 'color: %s' % color
# coldict = {'reviewerID': 'red', 'overall': 'blue', '0':'yellow'}
# def highlight_feature(s, coldict):
# if s.name in coldict.keys():
# return ['background-color: {}'.format(coldict[s.name])] * len(s)
# return [''] * len(s)
# # df.style.applymap(_color_feature)
# df.style.apply(lambda x: ['background: lightblue' if x.name == 'reviewerID' else '' for i in x])
# df.style.apply(highlight_feature, coldict=coldict)
# df
data = pd.DataFrame(np.random.randn(5, 3), columns=list('ABC'))
# dictionary of column colors
coldict = {'A':'grey', 'C':'yellow'}
def highlight_cols(s, coldict):
if s.name in coldict.keys():
return ['background-color: {}'.format(coldict[s.name])] * len(s)
return [''] * len(s)
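# Note: Styler.apply with the default axis=0 passes each column to the
# function as a Series, so s.name is the column label checked against coldict.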
data.style.apply(highlight_cols, coldict=coldict)
# When you look at a large dataframe, instead of showing you the contents of the dataframe,
# it'll show you a summary. This includes all the columns, and how many non-null values there are in each column.
df.info()
df.shape
df.describe()
###Output
_____no_output_____
###Markdown
3. Distribution of labels in the dataset
###Code
df.groupby('overall').count()
df.groupby('overall')['reviewerID'].count().plot(kind='bar',title='Rating Distribution',figsize=(10,6))
# To select a column, we index with the name of the column
df['reviewerID'].head()
# Selecting multiple columns
df[['reviewerID', 'reviewerName', 'summary']].head()
# 1.2 Data Preprocessing
# To get the first 5 rows of a dataframe, we can use a slice: df[:5]
df.head()
# Check If Any Value is NaN in a Pandas DataFrame
df.isnull().values.any()
df.isnull().sum()
# Counting cells with missing values
sum(df.isnull().values.ravel())
# Counting rows that have missing values somewhere
sum([True for idx,row in df.iterrows() if any(row.isnull())])
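# Equivalent (and much faster) vectorized alternative:
# df.isnull().any(axis=1).sum()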
# Selecting only rows with one or more nulls
noise_data = df[df.isnull().any(axis=1)]
noise_data.head()
# To get a sense for whether a column has problems,
# I usually use .unique() to look at all its values.
# If it's a numeric column, I'll instead plot a histogram
# to get a sense of the distribution.
df['overall'].unique()
# Create a groupby object
name = df.groupby('reviewerID')
# Apply the mean function to the groupby object
test = name.mean().sort_values(by='unixReviewTime')
test.head()
name = df.groupby('reviewerName')
test = name.mean().sort_values(by='unixReviewTime')
test.head()
name = df.groupby('reviewerName')
test = name.count()
test.head()
name = df.groupby('asin')
test = name.mean().sort_values(by='unixReviewTime')
test.head()
# indexed_df = df.set_index(['reviewTime'])
# indexed_df[:5]
# Parsing Unix timestamps
df['date'] = pd.to_datetime(df['unixReviewTime'],unit='s')
indexed_df = df.set_index(['date'])
indexed_df.head()
ratings = []
for review in parse("reviews_Movies_and_TV_5.json.gz"):
ratings.append(review['overall'])
print(sum(ratings) / len(ratings))
# equivalent computation straight from the DataFrame:
# print(df['overall'].mean())
indexed_df['overall'][:10]
# data from a rating-only CSV file
path = './ratings_Movies_and_TV.csv'
only_df = pd.read_csv(path)
only_df[:10]
indexed_df.head()
indexed_df['unixReviewTime'].head()
indexed_df.sort_values(by='unixReviewTime').head()
indexed_df.sort_values(by='reviewTime').head()
sort_indexed_df = indexed_df.sort_index(axis=0)
sort_indexed_df.head()
# indexed_df['overall'][:10].plot()
sort_indexed_df['overall'][:10].plot(figsize=(15, 5))
indexed_df_test = sort_indexed_df
indexed_df_test.head()
# indexed_df_test['overall'].plot()
df_day = indexed_df_test.resample('D').mean()
df_day.head()
df_day['overall'].plot()
df_week = indexed_df_test.resample('W').mean()
df_week.head()
df_week['overall'].plot()
df_month = indexed_df_test.resample('M').mean()
df_month.head()
df_month['overall'].plot()
# Group by reviewerID and asin
Daily = indexed_df_test.reset_index().groupby(['reviewerID', 'asin']).mean()
del Daily['unixReviewTime']
Daily.head(10)
# Daily = indexed_df_test.reset_index().groupby(['reviewerName', 'asin']).mean()
# Daily.head(10)
# indexed_df_test.head()
# Daily = indexed_df_test.reset_index().groupby(['reviewerID','date', 'asin']).mean()
Daily = indexed_df_test.reset_index().groupby(['reviewerID','date']).mean()
del Daily['unixReviewTime']
Daily.head(30)
# Select the reviewerID index
Daily.index.levels[0]
# Select the date index
Daily.index.levels[1]
# Select the asin index
# Daily.index.levels[2]
# Last four Graphs
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(20, 10))
fig.subplots_adjust(hspace=1.0) ## Create space between plots
Daily.loc['A00295401U6S2UG3RAQSZ']['2012':].plot(ax=axes[0,0])
Daily.loc['A00348066Q1WEW5BMESN']['2012':].plot(ax=axes[0,1])
Daily.loc['A0040548BPHKXMHH3NTI']['2012':].plot(ax=axes[1,0])
Daily.loc['A00438023NNXSDBGXK56L']['2012':].plot(ax=axes[1,1])
# Add titles
axes[0,0].set_title('A00295401U6S2UG3RAQSZ')
axes[0,1].set_title('A00348066Q1WEW5BMESN')
axes[1,0].set_title('A0040548BPHKXMHH3NTI')
axes[1,1].set_title('A00438023NNXSDBGXK56L');
###Output
_____no_output_____
###Markdown
2 Modeling
###Code
data = indexed_df_test[['reviewText', 'overall']]
data.head()
data.info()
len(data)
def splitTrainTest(data,percent_train):
n = int(len(data)*percent_train)
train = data[:n]
test = data[n:]
return train, test
train, test = splitTrainTest(data, 0.75)
print('Train Data: ', len(train))
print('Test Data: ', len(test))
print('Total Data: ', len(train)+len(test))
# train.head()
# from IPython.display import display
display(train.head())
display(train.tail())
display(test.head())
display(test.tail())
text = train['reviewText'][0]
text
#preprocessing steps
#stemmer = PorterStemmer()
lemmatizer = nltk.WordNetLemmatizer()
stop = stopwords.words('english')
translation = str.maketrans(string.punctuation,' '*len(string.punctuation))
def preprocessing(line):
tokens=[]
line = line.translate(translation)
line = nltk.word_tokenize(line.lower())
for t in line:
#if(t not in stop):
#stemmed = stemmer.stem(t)
stemmed = lemmatizer.lemmatize(t)
tokens.append(stemmed)
return ' '.join(tokens)
# preprocessing(text)
# train['reviewText'][0] = preprocessing(train['reviewText'][0])
t = train[0:1]
t
# t['cleanText'] = pd.Series(preprocessing(t['reviewText']))
# t.assign(cleanText = preprocessing(t['reviewText']))
a = t['reviewText']
a
for index, row in train.head(2).iterrows():
print(index)
print(row)
###Output
1997-11-13 00:00:00
reviewText The movie was ok but it did lack a few things....
overall 4
Name: 1997-11-13 00:00:00, dtype: object
1997-11-14 00:00:00
reviewText HA HA HA!!! THose were the first words anyone ...
overall 5
Name: 1997-11-14 00:00:00, dtype: object
|
feature_extraction_and_classification.ipynb | ###Markdown
Importing the required libraries
###Code
from tensorflow import keras
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix
from sklearn.neural_network import MLPClassifier
from skimage.io import imread, imshow
import numpy as np
from glob import glob
from tqdm.notebook import tqdm
import pandas as pd
import pickle
###Output
_____no_output_____
###Markdown
Loading the FaceNet model The model used in this work was obtained from this repository: https://github.com/nyoki-mtl/keras-facenet.
###Code
model = keras.models.load_model('./facenet_keras.h5')
model.summary()
###Output
_________________
Block8_1_Branch_1_Conv2d_0b_1x3 (None, 3, 3, 192) 0 Block8_1_Branch_1_Conv2d_0b_1x3_B
__________________________________________________________________________________________________
Block8_1_Branch_0_Conv2d_1x1 (C (None, 3, 3, 192) 344064 Mixed_7a[0][0]
__________________________________________________________________________________________________
Block8_1_Branch_1_Conv2d_0c_3x1 (None, 3, 3, 192) 110592 Block8_1_Branch_1_Conv2d_0b_1x3_A
__________________________________________________________________________________________________
Block8_1_Branch_0_Conv2d_1x1_Ba (None, 3, 3, 192) 576 Block8_1_Branch_0_Conv2d_1x1[0][0
__________________________________________________________________________________________________
Block8_1_Branch_1_Conv2d_0c_3x1 (None, 3, 3, 192) 576 Block8_1_Branch_1_Conv2d_0c_3x1[0
__________________________________________________________________________________________________
Block8_1_Branch_0_Conv2d_1x1_Ac (None, 3, 3, 192) 0 Block8_1_Branch_0_Conv2d_1x1_Batc
__________________________________________________________________________________________________
Block8_1_Branch_1_Conv2d_0c_3x1 (None, 3, 3, 192) 0 Block8_1_Branch_1_Conv2d_0c_3x1_B
__________________________________________________________________________________________________
Block8_1_Concatenate (Concatena (None, 3, 3, 384) 0 Block8_1_Branch_0_Conv2d_1x1_Acti
Block8_1_Branch_1_Conv2d_0c_3x1_A
__________________________________________________________________________________________________
Block8_1_Conv2d_1x1 (Conv2D) (None, 3, 3, 1792) 689920 Block8_1_Concatenate[0][0]
__________________________________________________________________________________________________
Block8_1_ScaleSum (Lambda) (None, 3, 3, 1792) 0 Mixed_7a[0][0]
Block8_1_Conv2d_1x1[0][0]
__________________________________________________________________________________________________
Block8_1_Activation (Activation (None, 3, 3, 1792) 0 Block8_1_ScaleSum[0][0]
__________________________________________________________________________________________________
Block8_2_Branch_1_Conv2d_0a_1x1 (None, 3, 3, 192) 344064 Block8_1_Activation[0][0]
__________________________________________________________________________________________________
Block8_2_Branch_1_Conv2d_0a_1x1 (None, 3, 3, 192) 576 Block8_2_Branch_1_Conv2d_0a_1x1[0
__________________________________________________________________________________________________
Block8_2_Branch_1_Conv2d_0a_1x1 (None, 3, 3, 192) 0 Block8_2_Branch_1_Conv2d_0a_1x1_B
__________________________________________________________________________________________________
Block8_2_Branch_1_Conv2d_0b_1x3 (None, 3, 3, 192) 110592 Block8_2_Branch_1_Conv2d_0a_1x1_A
__________________________________________________________________________________________________
Block8_2_Branch_1_Conv2d_0b_1x3 (None, 3, 3, 192) 576 Block8_2_Branch_1_Conv2d_0b_1x3[0
__________________________________________________________________________________________________
Block8_2_Branch_1_Conv2d_0b_1x3 (None, 3, 3, 192) 0 Block8_2_Branch_1_Conv2d_0b_1x3_B
__________________________________________________________________________________________________
Block8_2_Branch_0_Conv2d_1x1 (C (None, 3, 3, 192) 344064 Block8_1_Activation[0][0]
__________________________________________________________________________________________________
Block8_2_Branch_1_Conv2d_0c_3x1 (None, 3, 3, 192) 110592 Block8_2_Branch_1_Conv2d_0b_1x3_A
__________________________________________________________________________________________________
Block8_2_Branch_0_Conv2d_1x1_Ba (None, 3, 3, 192) 576 Block8_2_Branch_0_Conv2d_1x1[0][0
__________________________________________________________________________________________________
Block8_2_Branch_1_Conv2d_0c_3x1 (None, 3, 3, 192) 576 Block8_2_Branch_1_Conv2d_0c_3x1[0
__________________________________________________________________________________________________
Block8_2_Branch_0_Conv2d_1x1_Ac (None, 3, 3, 192) 0 Block8_2_Branch_0_Conv2d_1x1_Batc
__________________________________________________________________________________________________
Block8_2_Branch_1_Conv2d_0c_3x1 (None, 3, 3, 192) 0 Block8_2_Branch_1_Conv2d_0c_3x1_B
__________________________________________________________________________________________________
Block8_2_Concatenate (Concatena (None, 3, 3, 384) 0 Block8_2_Branch_0_Conv2d_1x1_Acti
Block8_2_Branch_1_Conv2d_0c_3x1_A
__________________________________________________________________________________________________
Block8_2_Conv2d_1x1 (Conv2D) (None, 3, 3, 1792) 689920 Block8_2_Concatenate[0][0]
__________________________________________________________________________________________________
Block8_2_ScaleSum (Lambda) (None, 3, 3, 1792) 0 Block8_1_Activation[0][0]
Block8_2_Conv2d_1x1[0][0]
__________________________________________________________________________________________________
Block8_2_Activation (Activation (None, 3, 3, 1792) 0 Block8_2_ScaleSum[0][0]
__________________________________________________________________________________________________
Block8_3_Branch_1_Conv2d_0a_1x1 (None, 3, 3, 192) 344064 Block8_2_Activation[0][0]
__________________________________________________________________________________________________
Block8_3_Branch_1_Conv2d_0a_1x1 (None, 3, 3, 192) 576 Block8_3_Branch_1_Conv2d_0a_1x1[0
__________________________________________________________________________________________________
Block8_3_Branch_1_Conv2d_0a_1x1 (None, 3, 3, 192) 0 Block8_3_Branch_1_Conv2d_0a_1x1_B
__________________________________________________________________________________________________
Block8_3_Branch_1_Conv2d_0b_1x3 (None, 3, 3, 192) 110592 Block8_3_Branch_1_Conv2d_0a_1x1_A
__________________________________________________________________________________________________
Block8_3_Branch_1_Conv2d_0b_1x3 (None, 3, 3, 192) 576 Block8_3_Branch_1_Conv2d_0b_1x3[0
__________________________________________________________________________________________________
Block8_3_Branch_1_Conv2d_0b_1x3 (None, 3, 3, 192) 0 Block8_3_Branch_1_Conv2d_0b_1x3_B
__________________________________________________________________________________________________
Block8_3_Branch_0_Conv2d_1x1 (C (None, 3, 3, 192) 344064 Block8_2_Activation[0][0]
__________________________________________________________________________________________________
Block8_3_Branch_1_Conv2d_0c_3x1 (None, 3, 3, 192) 110592 Block8_3_Branch_1_Conv2d_0b_1x3_A
__________________________________________________________________________________________________
Block8_3_Branch_0_Conv2d_1x1_Ba (None, 3, 3, 192) 576 Block8_3_Branch_0_Conv2d_1x1[0][0
__________________________________________________________________________________________________
Block8_3_Branch_1_Conv2d_0c_3x1 (None, 3, 3, 192) 576 Block8_3_Branch_1_Conv2d_0c_3x1[0
__________________________________________________________________________________________________
Block8_3_Branch_0_Conv2d_1x1_Ac (None, 3, 3, 192) 0 Block8_3_Branch_0_Conv2d_1x1_Batc
__________________________________________________________________________________________________
Block8_3_Branch_1_Conv2d_0c_3x1 (None, 3, 3, 192) 0 Block8_3_Branch_1_Conv2d_0c_3x1_B
__________________________________________________________________________________________________
Block8_3_Concatenate (Concatena (None, 3, 3, 384) 0 Block8_3_Branch_0_Conv2d_1x1_Acti
Block8_3_Branch_1_Conv2d_0c_3x1_A
__________________________________________________________________________________________________
Block8_3_Conv2d_1x1 (Conv2D) (None, 3, 3, 1792) 689920 Block8_3_Concatenate[0][0]
__________________________________________________________________________________________________
Block8_3_ScaleSum (Lambda) (None, 3, 3, 1792) 0 Block8_2_Activation[0][0]
Block8_3_Conv2d_1x1[0][0]
__________________________________________________________________________________________________
Block8_3_Activation (Activation (None, 3, 3, 1792) 0 Block8_3_ScaleSum[0][0]
__________________________________________________________________________________________________
Block8_4_Branch_1_Conv2d_0a_1x1 (None, 3, 3, 192) 344064 Block8_3_Activation[0][0]
__________________________________________________________________________________________________
Block8_4_Branch_1_Conv2d_0a_1x1 (None, 3, 3, 192) 576 Block8_4_Branch_1_Conv2d_0a_1x1[0
__________________________________________________________________________________________________
Block8_4_Branch_1_Conv2d_0a_1x1 (None, 3, 3, 192) 0 Block8_4_Branch_1_Conv2d_0a_1x1_B
__________________________________________________________________________________________________
Block8_4_Branch_1_Conv2d_0b_1x3 (None, 3, 3, 192) 110592 Block8_4_Branch_1_Conv2d_0a_1x1_A
__________________________________________________________________________________________________
Block8_4_Branch_1_Conv2d_0b_1x3 (None, 3, 3, 192) 576 Block8_4_Branch_1_Conv2d_0b_1x3[0
__________________________________________________________________________________________________
Block8_4_Branch_1_Conv2d_0b_1x3 (None, 3, 3, 192) 0 Block8_4_Branch_1_Conv2d_0b_1x3_B
__________________________________________________________________________________________________
Block8_4_Branch_0_Conv2d_1x1 (C (None, 3, 3, 192) 344064 Block8_3_Activation[0][0]
__________________________________________________________________________________________________
Block8_4_Branch_1_Conv2d_0c_3x1 (None, 3, 3, 192) 110592 Block8_4_Branch_1_Conv2d_0b_1x3_A
__________________________________________________________________________________________________
Block8_4_Branch_0_Conv2d_1x1_Ba (None, 3, 3, 192) 576 Block8_4_Branch_0_Conv2d_1x1[0][0
__________________________________________________________________________________________________
Block8_4_Branch_1_Conv2d_0c_3x1 (None, 3, 3, 192) 576 Block8_4_Branch_1_Conv2d_0c_3x1[0
__________________________________________________________________________________________________
Block8_4_Branch_0_Conv2d_1x1_Ac (None, 3, 3, 192) 0 Block8_4_Branch_0_Conv2d_1x1_Batc
__________________________________________________________________________________________________
Block8_4_Branch_1_Conv2d_0c_3x1 (None, 3, 3, 192) 0 Block8_4_Branch_1_Conv2d_0c_3x1_B
__________________________________________________________________________________________________
Block8_4_Concatenate (Concatena (None, 3, 3, 384) 0 Block8_4_Branch_0_Conv2d_1x1_Acti
Block8_4_Branch_1_Conv2d_0c_3x1_A
__________________________________________________________________________________________________
Block8_4_Conv2d_1x1 (Conv2D) (None, 3, 3, 1792) 689920 Block8_4_Concatenate[0][0]
__________________________________________________________________________________________________
Block8_4_ScaleSum (Lambda) (None, 3, 3, 1792) 0 Block8_3_Activation[0][0]
Block8_4_Conv2d_1x1[0][0]
__________________________________________________________________________________________________
Block8_4_Activation (Activation (None, 3, 3, 1792) 0 Block8_4_ScaleSum[0][0]
__________________________________________________________________________________________________
Block8_5_Branch_1_Conv2d_0a_1x1 (None, 3, 3, 192) 344064 Block8_4_Activation[0][0]
__________________________________________________________________________________________________
Block8_5_Branch_1_Conv2d_0a_1x1 (None, 3, 3, 192) 576 Block8_5_Branch_1_Conv2d_0a_1x1[0
__________________________________________________________________________________________________
Block8_5_Branch_1_Conv2d_0a_1x1 (None, 3, 3, 192) 0 Block8_5_Branch_1_Conv2d_0a_1x1_B
__________________________________________________________________________________________________
Block8_5_Branch_1_Conv2d_0b_1x3 (None, 3, 3, 192) 110592 Block8_5_Branch_1_Conv2d_0a_1x1_A
__________________________________________________________________________________________________
Block8_5_Branch_1_Conv2d_0b_1x3 (None, 3, 3, 192) 576 Block8_5_Branch_1_Conv2d_0b_1x3[0
__________________________________________________________________________________________________
Block8_5_Branch_1_Conv2d_0b_1x3 (None, 3, 3, 192) 0 Block8_5_Branch_1_Conv2d_0b_1x3_B
__________________________________________________________________________________________________
Block8_5_Branch_0_Conv2d_1x1 (C (None, 3, 3, 192) 344064 Block8_4_Activation[0][0]
__________________________________________________________________________________________________
Block8_5_Branch_1_Conv2d_0c_3x1 (None, 3, 3, 192) 110592 Block8_5_Branch_1_Conv2d_0b_1x3_A
__________________________________________________________________________________________________
Block8_5_Branch_0_Conv2d_1x1_Ba (None, 3, 3, 192) 576 Block8_5_Branch_0_Conv2d_1x1[0][0
__________________________________________________________________________________________________
Block8_5_Branch_1_Conv2d_0c_3x1 (None, 3, 3, 192) 576 Block8_5_Branch_1_Conv2d_0c_3x1[0
__________________________________________________________________________________________________
Block8_5_Branch_0_Conv2d_1x1_Ac (None, 3, 3, 192) 0 Block8_5_Branch_0_Conv2d_1x1_Batc
__________________________________________________________________________________________________
Block8_5_Branch_1_Conv2d_0c_3x1 (None, 3, 3, 192) 0 Block8_5_Branch_1_Conv2d_0c_3x1_B
__________________________________________________________________________________________________
Block8_5_Concatenate (Concatena (None, 3, 3, 384) 0 Block8_5_Branch_0_Conv2d_1x1_Acti
Block8_5_Branch_1_Conv2d_0c_3x1_A
__________________________________________________________________________________________________
Block8_5_Conv2d_1x1 (Conv2D) (None, 3, 3, 1792) 689920 Block8_5_Concatenate[0][0]
__________________________________________________________________________________________________
Block8_5_ScaleSum (Lambda) (None, 3, 3, 1792) 0 Block8_4_Activation[0][0]
Block8_5_Conv2d_1x1[0][0]
__________________________________________________________________________________________________
Block8_5_Activation (Activation (None, 3, 3, 1792) 0 Block8_5_ScaleSum[0][0]
__________________________________________________________________________________________________
Block8_6_Branch_1_Conv2d_0a_1x1 (None, 3, 3, 192) 344064 Block8_5_Activation[0][0]
__________________________________________________________________________________________________
Block8_6_Branch_1_Conv2d_0a_1x1 (None, 3, 3, 192) 576 Block8_6_Branch_1_Conv2d_0a_1x1[0
__________________________________________________________________________________________________
Block8_6_Branch_1_Conv2d_0a_1x1 (None, 3, 3, 192) 0 Block8_6_Branch_1_Conv2d_0a_1x1_B
__________________________________________________________________________________________________
Block8_6_Branch_1_Conv2d_0b_1x3 (None, 3, 3, 192) 110592 Block8_6_Branch_1_Conv2d_0a_1x1_A
__________________________________________________________________________________________________
Block8_6_Branch_1_Conv2d_0b_1x3 (None, 3, 3, 192) 576 Block8_6_Branch_1_Conv2d_0b_1x3[0
__________________________________________________________________________________________________
Block8_6_Branch_1_Conv2d_0b_1x3 (None, 3, 3, 192) 0 Block8_6_Branch_1_Conv2d_0b_1x3_B
__________________________________________________________________________________________________
Block8_6_Branch_0_Conv2d_1x1 (C (None, 3, 3, 192) 344064 Block8_5_Activation[0][0]
__________________________________________________________________________________________________
Block8_6_Branch_1_Conv2d_0c_3x1 (None, 3, 3, 192) 110592 Block8_6_Branch_1_Conv2d_0b_1x3_A
__________________________________________________________________________________________________
Block8_6_Branch_0_Conv2d_1x1_Ba (None, 3, 3, 192) 576 Block8_6_Branch_0_Conv2d_1x1[0][0
__________________________________________________________________________________________________
Block8_6_Branch_1_Conv2d_0c_3x1 (None, 3, 3, 192) 576 Block8_6_Branch_1_Conv2d_0c_3x1[0
__________________________________________________________________________________________________
Block8_6_Branch_0_Conv2d_1x1_Ac (None, 3, 3, 192) 0 Block8_6_Branch_0_Conv2d_1x1_Batc
__________________________________________________________________________________________________
Block8_6_Branch_1_Conv2d_0c_3x1 (None, 3, 3, 192) 0 Block8_6_Branch_1_Conv2d_0c_3x1_B
__________________________________________________________________________________________________
Block8_6_Concatenate (Concatena (None, 3, 3, 384) 0 Block8_6_Branch_0_Conv2d_1x1_Acti
Block8_6_Branch_1_Conv2d_0c_3x1_A
__________________________________________________________________________________________________
Block8_6_Conv2d_1x1 (Conv2D) (None, 3, 3, 1792) 689920 Block8_6_Concatenate[0][0]
__________________________________________________________________________________________________
Block8_6_ScaleSum (Lambda) (None, 3, 3, 1792) 0 Block8_5_Activation[0][0]
Block8_6_Conv2d_1x1[0][0]
__________________________________________________________________________________________________
AvgPool (GlobalAveragePooling2D (None, 1792) 0 Block8_6_ScaleSum[0][0]
__________________________________________________________________________________________________
Dropout (Dropout) (None, 1792) 0 AvgPool[0][0]
__________________________________________________________________________________________________
Bottleneck (Dense) (None, 128) 229376 Dropout[0][0]
__________________________________________________________________________________________________
Bottleneck_BatchNorm (BatchNorm (None, 128) 384 Bottleneck[0][0]
==================================================================================================
Total params: 22,808,144
Trainable params: 22,779,312
Non-trainable params: 28,832
__________________________________________________________________________________________________
###Markdown
Unpacking the image dataset
###Code
!unzip -q './new_dataset.zip'
###Output
_____no_output_____
###Markdown
Extracting the embeddings of the dataset images
###Code
maskon = glob('./new_dataset/maskon/*.png')
maskoff = glob('./new_dataset/maskoff/*.png')
len(maskon), len(maskoff)
data = maskon + maskoff
len(data)
embeddings = []
for path in tqdm(data):
try:
# Read and normalize the image
img = imread(path).astype('float32')/255
# Reshape the image so its format matches the FaceNet input
input = np.expand_dims(img, axis=0)
# Extract the embedding vector
embeddings.append(model.predict(input)[0])
except Exception:
print(f'Error in {path}')
continue
np.array(embeddings).shape
pd.DataFrame(embeddings).head()
###Output
_____no_output_____
###Markdown
Adding labels to the embeddings dataframe * With mask: label **1** * Without mask: label **0**
###Code
labels = pd.DataFrame({
'label': [1]*len(maskon) + [0]*len(maskoff)
})
df_embeddings = pd.concat([pd.DataFrame(embeddings), labels], axis=1)
df_embeddings
###Output
_____no_output_____
###Markdown
Splitting the data into training and test sets
###Code
X = np.array(df_embeddings.drop('label', axis=1))
y = np.array(df_embeddings['label'])
X.shape, y.shape
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
###Output
_____no_output_____
###Markdown
Training a multilayer perceptron (MLP) to classify the embeddings Instantiating the model
###Code
mlp_model = MLPClassifier()
###Output
_____no_output_____
###Markdown
Training the model
###Code
mlp_model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Evaluating the model on the test set
###Code
y_pred = mlp_model.predict(X_test)
print('Acurácia: ', accuracy_score(y_test, y_pred))
print('Kappa: ', cohen_kappa_score(y_test, y_pred))
print('Matriz de confusão:\n', confusion_matrix(y_test, y_pred))
###Output
Acurácia: 0.9569377990430622
Kappa: 0.9135907389117303
Matriz de confusão:
[[187 14]
[ 4 213]]
###Markdown
Training the MLP again, but now with all the data At this point, we have already evaluated our classifier on the test set and reached the best scenario. Now we need to train this model again, but this time using all the images for training. The more data we feed into our model, the better its learning will be. In addition, at this stage we also need to save the model, since it is the one we will use in the detection algorithm with OpenCV and MTCNN.
###Code
X.shape, y.shape
mlp_model = MLPClassifier()
mlp_model.fit(X, y)
mlp_model.score(X, y)
###Output
_____no_output_____
###Markdown
Saving the model as a pickle file
###Code
pickle.dump(mlp_model, open('./mlp_model.pkl', 'wb'))
###Output
_____no_output_____ |
Data_Frame_config.ipynb | ###Markdown
Race Results
###Code
def get_race(race_name, season):
filt = (LeaderBoard['Race_Name'] == race_name) & (LeaderBoard['Season'].astype(int) == season)
full = LeaderBoard[filt]
df = pd.concat([full['FT'], full['Name']], axis=1)
return df
def get_season(year):
flanders = get_race('ronde-van-vlaanderen', year)
gent = get_race('gent-wevelgem', year)
strade = get_race('strade-bianche', year)
milano = get_race('milano-sanremo', year)
omloop = get_race('omloop-het-nieuwsblad', year)
e3 = get_race('e3-harelbeke', year)
gent = pd.merge(flanders, gent, on= 'Name', how='left', suffixes=('flanders', 'gent'))
strade = pd.merge(flanders, strade, on= 'Name', how='left', suffixes=('flanders', 'strade'))
milano = pd.merge(flanders, milano, on= 'Name', how='left', suffixes=('flanders', 'milano'))
omloop = pd.merge(flanders, omloop, on= 'Name', how='left', suffixes=('flanders', 'omloop'))
e3 = pd.merge(flanders, e3, on= 'Name', how='left', suffixes=('flanders', 'e3'))
gent['strade'] = strade['FTstrade']
gent['milano'] = milano['FTmilano']
gent['omloop'] = omloop['FTomloop']
gent['e3'] = e3['FTe3']
return gent
complete_df = get_season(2021)
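# `seasons` below is assumed to be defined in an earlier cell of this
# notebook (e.g. an iterable of the remaining years to append).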
for i in seasons:
season_df = get_season(i)
complete_df = complete_df.append(season_df, ignore_index=True)
###Output
_____no_output_____
###Markdown
Season
###Code
filt = LeaderBoard['Race_Name'] == 'ronde-van-vlaanderen'
Flander_LB = LeaderBoard[filt]
complete_df['Season'] = list(Flander_LB['Season'])
###Output
_____no_output_____
###Markdown
Previous Season Points
###Code
LSP = []
for i, j in zip(complete_df['Name'], complete_df['Season']):
filt = (Season_Stats['Rider_Name'] == i) & (Season_Stats['Season'] == j-1)
rider_stats = Season_Stats[filt]
LSP.append(rider_stats.Points.item())
complete_df['Last_Season_Points'] = LSP
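# Note: .item() above assumes the (rider, season) filter matches exactly
# one row; it raises a ValueError for zero or multiple matches.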
Flanders_DF = complete_df
Flanders_DF.to_csv(r'C:\\Users\\User\\Documents\\DataScience_Projects\XG_Boost\Flanders_DF.csv', index = False, header=True)
###Output
_____no_output_____ |
Feature Engineering/2.0 Standard Deviation, Z-score.ipynb | ###Markdown
Standard Deviation & Z-score Import Necessary Libraries
###Code
import pandas as pd
import matplotlib
from matplotlib import pyplot as plt
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = (10,6)
###Output
_____no_output_____
###Markdown
Import Data
###Code
df = pd.read_csv("heights.csv")
df.sample(5)
###Output
_____no_output_____
###Markdown
Visualize
###Code
plt.hist(df.height, bins=20, rwidth=0.8)
plt.xlabel('Height (inches)')
plt.ylabel('Count')
plt.show()
###Output
_____no_output_____
###Markdown
**Plot bell curve along with histogram for our dataset**
###Code
df.height.min()
df.height.max()
df.describe()
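# A hedged sketch of the bell-curve overlay promised above, using
# scipy.stats.norm (an assumption: scipy is available) with the sample
# mean and standard deviation:
import numpy as np
from scipy.stats import norm
rng = np.linspace(df.height.min(), df.height.max(), 200)
plt.hist(df.height, bins=20, rwidth=0.8, density=True)
plt.plot(rng, norm.pdf(rng, df.height.mean(), df.height.std()))
plt.xlabel('Height (inches)')
plt.ylabel('Density')
plt.show()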
###Output
_____no_output_____
###Markdown
Standard Deviation
###Code
df.mean() # In a normal distribution, mean = median = mode
df.mean()+(1*df.std()) # 1 standard deviation above the mean, as in the first image above
###Output
_____no_output_____
###Markdown
**As we can see in the images above, 99.7% of values are within 3 standard deviations of the mean for a normal distribution.** So, we can remove outliers using 3 standard deviations. _Note that how many standard deviations we should use depends on the dataset._
###Code
###Output
_____no_output_____
###Markdown
Detecting Outliers
###Code
# Using 3 standard deviations
upper_limit=df.mean()+(3*df.std())
upper_limit
lower_limit=df.mean()-(3*df.std())
lower_limit
# Let's see those outliers
df[(df.height>float(upper_limit)) | (df.height<float(lower_limit))]
# removing outliers
df2=df[(df.height<float(upper_limit)) & (df.height>float(lower_limit))]
df.shape[0]-df2.shape[0] # so, we removed 7 outliers using standard deviation
###Output
_____no_output_____
###Markdown
Z-score The z-score is closely related to the standard deviation. It returns a number indicating how far a data point is from the mean, measured in standard deviations: if a data point is 3 standard deviations away from the mean, its z-score is 3; if it is 2.5 away, its z-score is 2.5. $Z = \frac{x - \mu}{\sigma}$, where $Z$ = standard score, $x$ = observed value, $\mu$ = mean of the sample, $\sigma$ = standard deviation of the sample
###Code
df['zscore']=(df.height-df.height.mean())/df.height.std() # Calculating z-score
df.head()
(73.847017-66.37)/3.84 # Verifying how we got the z-score
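# Cross-check (assuming scipy is available): stats.zscore uses the
# population std (ddof=0) by default, so pass ddof=1 to match pandas'
# sample standard deviation used above.
from scipy import stats
stats.zscore(df.height, ddof=1)[:5] # should match df['zscore'].head()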
###Output
_____no_output_____
###Markdown
Detecting Outliers using Z-score
###Code
df[df['zscore']>3]
df[df['zscore']<-3]
###Output
_____no_output_____
###Markdown
_These are the same outliers which we detected before using the standard deviation_ Removing Outliers
###Code
df_without_outlier=df[(df['zscore']>-3) & (df['zscore']<3)]
df_without_outlier.head()
df.shape[0]-df_without_outlier.shape[0] # So, we removed 7 outliers using z-score
###Output
_____no_output_____ |
pipelining/exp-csgr/exp-csgr_csgr_1w_ale_plotting.ipynb | ###Markdown
Experiment Description > This notebook is for experiment \ and data sample \. Initialization
###Code
%load_ext autoreload
%autoreload 2
import numpy as np, sys, os
in_colab = 'google.colab' in sys.modules
# fetch the code and data (if you are running in Colab)
if in_colab:
!rm -rf s2search
!git clone --branch pipelining https://github.com/youyinnn/s2search.git
sys.path.insert(1, './s2search')
%cd s2search/pipelining/exp-csgr/
pic_dir = os.path.join('.', 'plot')
if not os.path.exists(pic_dir):
os.mkdir(pic_dir)
###Output
_____no_output_____
###Markdown
Loading data
###Code
sys.path.insert(1, '../../')
import numpy as np, sys, os, pandas as pd
from getting_data import read_conf
from s2search_score_pdp import pdp_based_importance
sample_name = 'csgr'
f_list = [
'title', 'abstract', 'venue', 'authors',
'year',
'n_citations'
]
ale_xy = {}
ale_metric = pd.DataFrame(columns=['feature_name', 'ale_range', 'ale_importance', 'absolute mean'])
for f in f_list:
file = os.path.join('.', 'scores', f'{sample_name}_1w_ale_{f}.npz')
if os.path.exists(file):
nparr = np.load(file)
quantile = nparr['quantile']
ale_result = nparr['ale_result']
values_for_rug = nparr.get('values_for_rug')
ale_xy[f] = {
'x': quantile,
'y': ale_result,
'rug': values_for_rug,
'weird': ale_result[len(ale_result) - 1] > 20
}
if f != 'year' and f != 'n_citations':
ale_xy[f]['x'] = list(range(len(quantile)))
ale_xy[f]['numerical'] = False
else:
ale_xy[f]['xticks'] = quantile
ale_xy[f]['numerical'] = True
ale_metric.loc[len(ale_metric.index)] = [f, np.max(ale_result) - np.min(ale_result), pdp_based_importance(ale_result, f), np.mean(np.abs(ale_result))]
# print(len(ale_result))
print(ale_metric.sort_values(by=['ale_importance'], ascending=False))
print()
###Output
feature_name ale_range ale_importance absolute mean
2 venue 19.223406 6.078975 3.430245
0 title 17.990141 5.688982 3.210180
1 abstract 17.696267 5.596051 3.157741
4 year 1.547963 0.584113 0.486953
5 n_citations 1.236328 0.392198 0.284634
3 authors 0.000000 0.000000 0.000000
###Markdown
ALE Plots
###Code
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib.ticker import MaxNLocator
categorical_plot_conf = [
{
'xlabel': 'Title',
'ylabel': 'ALE',
'ale_xy': ale_xy['title']
},
{
'xlabel': 'Abstract',
'ale_xy': ale_xy['abstract']
},
{
'xlabel': 'Authors',
'ale_xy': ale_xy['authors'],
# 'zoom': {
# 'inset_axes': [0.3, 0.3, 0.47, 0.47],
# 'x_limit': [89, 93],
# 'y_limit': [-1, 14],
# }
},
{
'xlabel': 'Venue',
'ale_xy': ale_xy['venue'],
# 'zoom': {
# 'inset_axes': [0.3, 0.3, 0.47, 0.47],
# 'x_limit': [89, 93],
# 'y_limit': [-1, 13],
# }
},
]
numerical_plot_conf = [
{
'xlabel': 'Year',
'ylabel': 'ALE',
'ale_xy': ale_xy['year'],
# 'zoom': {
# 'inset_axes': [0.15, 0.4, 0.4, 0.4],
# 'x_limit': [2019, 2023],
# 'y_limit': [1.9, 2.1],
# },
},
{
'xlabel': 'Citations',
'ale_xy': ale_xy['n_citations'],
# 'zoom': {
# 'inset_axes': [0.4, 0.65, 0.47, 0.3],
# 'x_limit': [-1000.0, 12000],
# 'y_limit': [-0.1, 1.2],
# },
},
]
def pdp_plot(confs, title):
fig, axes_list = plt.subplots(nrows=1, ncols=len(confs), figsize=(20, 5), dpi=100)
subplot_idx = 0
plt.suptitle(title, fontsize=20, fontweight='bold')
# plt.autoscale(False)
for conf in confs:
axes = axes_list if len(confs) == 1 else axes_list[subplot_idx] # plt.subplots returns a single Axes when ncols == 1
sns.rugplot(conf['ale_xy']['rug'], ax=axes, height=0.02)
axes.axhline(y=0, color='k', linestyle='-', lw=0.8)
axes.plot(conf['ale_xy']['x'], conf['ale_xy']['y'])
axes.grid(alpha = 0.4)
# axes.set_ylim([-2, 20])
axes.xaxis.set_major_locator(MaxNLocator(integer=True))
axes.yaxis.set_major_locator(MaxNLocator(integer=True))
if ('ylabel' in conf):
axes.set_ylabel(conf.get('ylabel'), fontsize=20, labelpad=10)
# if ('xticks' not in conf['ale_xy'].keys()):
# xAxis.set_ticklabels([])
axes.set_xlabel(conf['xlabel'], fontsize=16, labelpad=10)
if not (conf['ale_xy']['weird']):
if (conf['ale_xy']['numerical']):
axes.set_ylim([-1.5, 1.5])
pass
else:
axes.set_ylim([-7, 20])
pass
if 'zoom' in conf:
axins = axes.inset_axes(conf['zoom']['inset_axes'])
axins.xaxis.set_major_locator(MaxNLocator(integer=True))
axins.yaxis.set_major_locator(MaxNLocator(integer=True))
axins.plot(conf['ale_xy']['x'], conf['ale_xy']['y'])
axins.set_xlim(conf['zoom']['x_limit'])
axins.set_ylim(conf['zoom']['y_limit'])
axins.grid(alpha=0.3)
rectpatch, connects = axes.indicate_inset_zoom(axins)
connects[0].set_visible(False)
connects[1].set_visible(False)
connects[2].set_visible(True)
connects[3].set_visible(True)
subplot_idx += 1
pdp_plot(categorical_plot_conf, f"ALE for {len(categorical_plot_conf)} categorical features")
# plt.savefig(os.path.join('.', 'plot', f'{sample_name}-1wale-categorical.png'), facecolor='white', transparent=False, bbox_inches='tight')
pdp_plot(numerical_plot_conf, f"ALE for {len(numerical_plot_conf)} numerical features")
# plt.savefig(os.path.join('.', 'plot', f'{sample_name}-1wale-numerical.png'), facecolor='white', transparent=False, bbox_inches='tight')
###Output
_____no_output_____ |
Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/RECAP_DS/03_DATA_WRANGLING_AND_VISUALISATION/L06.ipynb | ###Markdown
DS104 Data Wrangling and Visualization : Lesson Six Companion Notebook Table of Contents
* [Table of Contents](DS104L6_toc)
* [Page 1 - Introduction](DS104L6_page_1)
* [Page 2 - Tableau Public Installation](DS104L6_page_2)
* [Page 3 - Connecting to CSV Data](DS104L6_page_3)
* [Page 4 - Connecting to MS Excel Data](DS104L6_page_4)
* [Page 5 - Connecting Other File Types](DS104L6_page_5)
* [Page 6 - Connecting to Multiple Datasets](DS104L6_page_6)
* [Page 7 - Tableau Worksheets](DS104L6_page_7)
* [Page 8 - Text Table Activity](DS104L6_page_8)
* [Page 9 - Text Table Activity Solution](DS104L6_page_9)
* [Page 10 - Basic Concepts in Tableau](DS104L6_page_10)
* [Page 11 - Filters](DS104L6_page_11)
* [Page 12 - Sorting](DS104L6_page_12)
* [Page 13 - Using Color in Tableau](DS104L6_page_13)
* [Page 14 - Bar Charts](DS104L6_page_14)
* [Page 15 - Bar Chart Activity](DS104L6_page_15)
* [Page 16 - Bar Chart Activity Solution](DS104L6_page_16)
* [Page 17 - Stacked Bar Charts](DS104L6_page_17)
* [Page 18 - Line Charts](DS104L6_page_18)
* [Page 19 - Dual Axis Charts](DS104L6_page_19)
* [Page 20 - Dual Axis Chart Activity](DS104L6_page_20)
* [Page 21 - Dual Axis Chart Activity Solution](DS104L6_page_21)
* [Page 22 - Scatter Plots](DS104L6_page_22)
* [Page 23 - Scatter Plot Activity](DS104L6_page_23)
* [Page 24 - Scatter Plot Activity Solution](DS104L6_page_24)
* [Page 25 - Adding a Reference Line](DS104L6_page_25)
* [Page 26 - Tree Maps in Tableau](DS104L6_page_26)
* [Page 27 - Key Terms](DS104L6_page_27)
* [Page 28 - Lesson 6 Hands-On](DS104L6_page_28)
Page 1 - Introduction [Back to Top](DS104L6_toc)
###Code
from IPython.display import VimeoVideo
# Tutorial Video Name: Tableau Introduction
VimeoVideo('388867548', width=720, height=480)
###Output
_____no_output_____ |
Clase4_09_08_21.ipynb | ###Markdown
**Continuation of iterative control structures**---**ACCUMULATORS** This is the name given to variables that take care of "storing" some kind of information. **Example:** the case of buying groceries at the store.
###Code
nombre=input("Nombre del Consumidor")
listacomp=""
print(nombre, "escribe los siguientes viveres para su compra en el supermercado:")
listacomp=listacomp+"1 Paca de papel higienico"
print("----Compras que tengo que hacer----")
print(listacomp)
listacomp=listacomp+"Shampoo pantene 2 en 1"
print(listacomp)
###Output
Nombre del ConsumidorSandra
Sandra escribe los siguientes viveres para su compra en el supermercado:
----Compras que tengo que hacer----
1 Paca de papel higienico
1 Paca de papel higienicoShampoo pantene 2 en 1
###Markdown
**Option 2**
###Code
nombre=input("Nombre del Consumidor")
listacomp=""
print(nombre, "escribe los siguientes viveres para su compra en el supermercado:")
listacomp=listacomp+"1 Paca de papel higienico"
print("----Compras que tengo que hacer----")
listacomp=listacomp+", Shampoo pantene 2 en 1"
listacomp=listacomp+", 2 pacas de pañales pequeñin etapa 3"
print(listacomp)
###Output
Nombre del ConsumidorSandra
Sandra escribe los siguientes viveres para su compra en el supermercado:
----Compras que tengo que hacer----
1 Paca de papel higienico, Shampoo pantene 2 en 1, 2 pacas de pañales pequeñin etapa 3
###Markdown
The variable "listacomp" is being used to accumulate the shopping-list information. Notice that we are **NOT** creating a variable for each item, but a single defined variable that serves to store the information. Next, let's look at an example that puts accumulation in a variable into practice, using quantities and prices.
###Code
ppph=14000 ## price of a pack of toilet paper
cpph=2 ## quantity of toilet paper packs
pshampoo=18000 ## price of Pantene 2-in-1 shampoo
cshampoo=4 ## units of shampoo
ppbebe=17000 ## price of a pack of Pequeñin diapers
cpbebe=3 ## quantity of diaper packs
subtotal=0
print("calculando el total de la compra...")
total_pph=ppph*cpph
print("El valor total de papel higienico es: $", total_pph)
subtotal=subtotal+total_pph
print("--- El subtotal es: $", subtotal)
total_shampoo=pshampoo*cshampoo
print("El valor total de Shampoo es: $", total_shampoo)
subtotal=subtotal+total_shampoo
print("--- El subtotal es: $", subtotal)
total_pbebe=ppbebe*cpbebe
print("El valor total paca de pañales es: $", total_pbebe)
subtotal=subtotal+total_pbebe
print("El total de su compra es: $", subtotal)
###Output
calculando el total de la compra...
El valor total de papel higienico es: $ 28000
--- El subtotal es: $ 28000
El valor total de Shampoo es: $ 72000
--- El subtotal es: $ 100000
El valor total paca de pañales es: $ 51000
El total de su compra es: $ 151000
###Markdown
**Counters**---These are closely related to the "accumulators" seen in the previous section. These variables are characterized as control variables, that is, they control the **number** of times a given action is executed. Using the previous example and modifying it a bit, we can develop the following algorithm.
###Code
# In this case diapers will be bought by the unit
# contp = diaper counter
contp=0
print("Se realizara la compra de pañales etapa 3... Se ha iniciado la compra de asignacion en el carrito. En total hay:", contp, "pañales")
contp=contp+1
print("Ahora hay", contp, "pañales")
contp=contp+1
print("Ahora hay", contp, "pañales")
contp=contp+1
print("Ahora hay", contp, "pañales")
contp=contp+1
print("Ahora hay", contp, "pañales")
contp=contp+1
print("Ahora hay", contp, "pañales")
###Output
Se realizara la compra de pañales etapa 3... Se ha iniciado la compra de asignacion en el carrito. En total hay: 0 pañales
Ahora hay 1 pañales
Ahora hay 2 pañales
Ahora hay 3 pañales
Ahora hay 4 pañales
Ahora hay 5 pañales
###Markdown
**CONDITION-CONTROLLED LOOPS** *WHILE* = "mientras"---Recall that control variables allow us to manage states. Moving from one state to another is, for example, a variable going from containing no elements to containing some, or a variable with one particular element (accumulator or counter) being changed completely (flag). These control variables are the basis of control loops. Put more plainly, they let us move from manual addition to something more automated. We start with the **WHILE** loop ("mientras" in Spanish). This loop is made up of a **condition** and its **code block**. What **while** tells us is that the code block will execute **while** the condition evaluates to TRUE.
###Code
lapiz=5
contlapiz=0
print("Se ha iniciado la compra. En total hay:", contlapiz,lapiz)
while (contlapiz <lapiz):
contlapiz=contlapiz+1
print("Se ha realizado la compra de lapices. Ahora hay" + str(contlapiz)+ "lapiz") #str= reconozca una variable que yo le he dado valores numericos, como una variable tipo cadena
## OPTION 2
lapiz=5
contlapiz=0
print("Se ha iniciado la compra. En total hay:", contlapiz,lapiz)
while (contlapiz <lapiz):
contlapiz=contlapiz+1
print("Se ha realizado la compra de lapices. Ahora hay", contlapiz, "lapiz")
a=str(contlapiz)
print(type(contlapiz))
print(type(a))
###Output
Se ha iniciado la compra. En total hay: 0 5
Se ha realizado la compra de lapices. Ahora hay1lapiz
Se ha realizado la compra de lapices. Ahora hay2lapiz
Se ha realizado la compra de lapices. Ahora hay3lapiz
Se ha realizado la compra de lapices. Ahora hay4lapiz
Se ha realizado la compra de lapices. Ahora hay5lapiz
Se ha iniciado la compra. En total hay: 0 5
Se ha realizado la compra de lapices. Ahora hay 1 lapiz
Se ha realizado la compra de lapices. Ahora hay 2 lapiz
Se ha realizado la compra de lapices. Ahora hay 3 lapiz
Se ha realizado la compra de lapices. Ahora hay 4 lapiz
Se ha realizado la compra de lapices. Ahora hay 5 lapiz
<class 'int'>
<class 'str'>
###Markdown
Keep in mind that inside the *while* loop the variables involved in the condition must be updated so the loop makes progress. In the previous example the variable **contlapiz** is incremented so that at some point the condition **(contlapiz < lapiz)** becomes false and the loop ends. Otherwise we would have a loop that never stops: an infinite loop (a minimal sketch of this pitfall opens the next code cell). **THE FOR LOOP**---A loop specialized and optimized for count-controlled iteration. It consists of three elements:1. The iteration variable 2. The iterable 3. The code block to iterate**Why use for?**In Python it is considered a very flexible and powerful tool, since it can iterate over complex data structures, character strings, ranges, and more. The iterables used with this structure need the following characteristic:1. A defined length (this completely distinguishes it from WHILE)The **while** starts from a truth condition, whereas the **for** starts from a defined quantity.
###Code
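# Minimal sketch of the infinite-loop pitfall described above (hypothetical
# variable names). If the body never updates the variable in the condition,
# the condition stays True forever:
#
#   n = 0
#   while n < 5:
#       print(n)   # n never changes -> the loop never ends
#
# The fix is to update the control variable inside the body:
n = 0
while n < 5:
  n = n + 1  # progress toward making the condition False
print("loop finished with n =", n)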
##Returning to the pencil-purchase example
print("Se ha iniciado la compra. En total hay: 0 lapices.")
for i in range(1,6): #range() spans an interval closed on the left and open on the right
  print("Se ha realizado la compra de lapices. Ahora hay", i, "lapices")
print("Se ha iniciado la compra. En total hay: 0 lapices.")
for i in range(2,10): #range() spans an interval closed on the left and open on the right
print("Se ha realizado la compra de lapices. Ahora hay", i, "lapices")
###Output
Se ha iniciado la compra. En total hay: 0 lapices.
Se ha realizado la compra de lapices. Ahora hay 1 lapices
Se ha realizado la compra de lapices. Ahora hay 2 lapices
Se ha realizado la compra de lapices. Ahora hay 3 lapices
Se ha realizado la compra de lapices. Ahora hay 4 lapices
Se ha realizado la compra de lapices. Ahora hay 5 lapices
Se ha iniciado la compra. En total hay: 0 lapices.
Se ha realizado la compra de lapices. Ahora hay 2 lapices
Se ha realizado la compra de lapices. Ahora hay 3 lapices
Se ha realizado la compra de lapices. Ahora hay 4 lapices
Se ha realizado la compra de lapices. Ahora hay 5 lapices
Se ha realizado la compra de lapices. Ahora hay 6 lapices
Se ha realizado la compra de lapices. Ahora hay 7 lapices
Se ha realizado la compra de lapices. Ahora hay 8 lapices
Se ha realizado la compra de lapices. Ahora hay 9 lapices
|
python practice.ipynb | ###Markdown
###Code
str = " i am a person who is good"
str.find("am")
str.find('who')
str = "i am a good person with a good heart"
str.replace('a','u')
s='helo world'
print(s)
print(s.title())
print(s.swapcase())
print(s.capitalize())
s= 'hello world'
print(s.count('o'))
#lists (avoid naming a variable `list`: it shadows the builtin needed later for conversions)
my_list=[1,2,3,'abhi']
print(my_list)
my_list=[1,2,3,'abhi']
my_list.append(5)
print(my_list)
list1=[1,2,3]
list2=[7,8,9]
list1.append(4)#can add only one item to the list
list1.extend([5,6])# can add several items to the list
list1.extend(list2)#another way of adding the two list using extend keyword
print(list1)
list2 = [1,2,3,'cat','dog']
if 'cat' in list2:#checks the membership of the item
list2.remove('cat')#remove AKA delete
print(list2)
list3 = [1,2,3,4,'cat']
for item in list3:#checking the items
print(item)#printing the items
print('item count:',len(list3))#print the item and the count
list4 = [1,2,3,4]
for i in range(0,len(list4)):#reading the length of the list using index format
list4[i] = list4[i] + 2 #now accessing the entire list using list from the index mentioned with the length of the original list
print(list4)
p=[1,2]
p.append([3,4])
print(p)
#slices
list1 = [1,2,'idiot','rrr',3,4,5]# here the list index starts from 0 -> n-1
print(list1[2])#printing a particular item from the list using index format
print(list1[2:])#printing starts from the index 2 to till the end of the list
print(list1[:4])#printing continues till index format 4 starting from the 0th index always
list1[0]='qqq'#assigning index 0 to 'qqq'
print(list1)#printing the list
#non-continuous slice
list1 = [i for i in range(10)]
#print(list1)
print(list1[::2])  #every second element (step of 2)
print(list1)
nums = [1,4,3,2,6]
nums.sort()
print(nums)
list1 = [1,2,3,4]
for item in list1:
    print(item)
list2 = [1,2,3,4,5]
for i in range(0,len(list2)):
list2[i]+=1
print(list2)
list1 = [1,2,3,4,5,6]
for i in range(5,5+1,14):#range(5, 6, 14) yields only 5, so the body runs once
    print(list1)
#removing from the list
list1 = [1,2,3,4,5,6,'q','a']
list1.pop()#removes the last element of the list
list1.pop(0)#removes the content of the 0th index
del list1[1]#alternate way for deleting/removing
#list1.remove('q')
print(list1)
#note: after every iteration the value of the index changes so dont get confused
list2 = ['q','w','e','r','q']
list2.remove('q')#removing the specific content
print(list2)
#note: it only deletes the first occurrence of the content
#aggregates
list1 = [1,2,3,4,5]
print (max(list1),min(list1),sum(list1))
average = sum(list1)/ float(len(list1))
print(average)
#copying
list1 = [1,'element']
list2 = list1[:]#copies the content of list1
list2[0] = 2#replaces the content from list2 not from list1
print(list1)
print(list2)
list1= [1,'element']
list2= list1#coping the content of list1
list2[0]= 3#makes changes in the original list too
print(list1)
print(list2)
#note: the difference between the two programs above is that '[:]' makes a shallow copy, while plain assignment just binds a second name to the same list
#deep copy
import copy
list1 = [1,[2,3]]
list2 = copy.deepcopy(list1)
list2[1][0]=4
print(list1[1][0])
list1 = [1,[2,3]]
print(list1[1][0])
#list as stack
stack = []
stack.append(10)#adding to the list
stack.append(20)#adding to the list
stack.pop()#poping the last content from the list
stack.append(30)
stack.append(40)
stack.pop(1)#popping by index (note: a pure stack would only pop from the end)
print(stack)
#list as queue (the stdlib `queue` module is for thread-safe queues; a plain list is enough here)
queue = [10,20,30]
print(queue)
queue.append(40)#enqueue at the back
queue.append(50)
print(queue)
queue.pop(0)#a FIFO queue removes from the front; pop() with no index removes from the back
print(queue)
#tuple
tuple1=(1,2,3,'q','a')
tuple2=(4,5,6,'w','e')
print(tuple1)
print(tuple2)
print(tuple1[1:3])
print(tuple1+tuple2)
#note: unlike a list, a tuple can't be reassigned; that's why it can be used as a dict key
#tuple1[0]=10 would fail: tuples are immutable, so item assignment raises an error
t = ()#empty tuple
l=[1,2,3]#created a list
t=tuple(l)#converting the list to the tuple and assigning them
print(t)#printing the tuple
if t==l:#check condition
print(l)#if true
else:
print('not equal')#if false
#operations on tuple
a=(1,2)
b=(3,4)
#a.append(5)
print(a)
print(a+b)
print(len(a+b))
#converstion
l=[1,2,3]
print(tuple(l))#list to tuple
t=(4,5,6)
print(list(t))#tuple to list
d={'a':1,'b':2}#its a dictionary
print(tuple(d.items()))#dictionary to tuple
print(list(d.items()))#dictionary to list
#tuple methods
t = (1,2,3,3,4,4,4,5,6,6,7)  #avoid naming it `tuple`: that shadows the builtin
print(t.count(3))
print(t.index(4))
s = '12,5,9,7'
for i in range(0,len(s)-1):
    if (s[i] >s[i+1]):
        print('helo')
    else:
        print('no')
# note: this compares single characters (digits and commas) lexicographically,
# not the numbers themselves, which is why using a string didn't work
nums=[2,3,1,5,4,6]
for i in range(0,len(nums)-1):
    if nums[i]>nums[i+1]:
        print('helo')
    elif nums[i]<nums[i+1]:
        print('no')
# the original `for i in range(list[i]==len(list))` iterated over range(True) or
# range(False), i.e. at most once; printing after the loop is what was intended
print('reached')
nums = [2,3,5,4,7,6,1]
bigger = []
for i in range(0,len(nums)-1):
    if nums[i] > nums[i+1]:
        bigger.append(nums[i])  #collect elements larger than their successor
    elif nums[i] < nums[i+1]:
        print('no')
print(bigger)
total=0
for i in range(0,len(bigger)):
    total = total + bigger[i]
print("total sum from list: ", total)
#Dictionary (use a name like `d`, not `dict`, so the builtin stays usable below)
d = {}
d['one'] = "this is one"
d[2] = "this is two"
tinydict = {'name':'abhi', 'code': 234,'age':27}
print(d['one'])
print(d[2])
print(tinydict)
print(tinydict.keys())
print(tinydict.values())
dict1={} #empty dictionary
dict2=dict() #empty dictionary via the builtin
dict3 = dict([("r",34),("i",56)]) #build a dict from key/value pairs
print(dict3)
seq1 = ('a','b','c')
seq2 = [1,2,3]
zipped = dict(zip(seq1,seq2)) #pair the sequences up and build a dict
print(zipped)
#Sets (use a name like `s`, not `set`, so the builtin stays callable below)
s = set([1,2,3,3,4]) #duplicates are dropped
print(s)
s={1,3}
s.add(4)
print(s)
s.update([2,5]) #can update using "[]"
print(s)
s.update({6,5,7}) #can update using "{}"
print(s)
s.update([8,8],{9,9,10}) #can update using both "{}" and "[]"
print(s)
set1 = {1,2,3,4,4,5,6,6}
print(set1)
set1.discard(2)
print(set1)
set1.remove(1)
print(set1)
set1.pop()
print(set1)
set1.pop()
print(set1)
set1.clear()
print(set1)
my_set = set("HelloWorld")
print(my_set)
#Set Union
A= {1,2,3,4,4}
B= {4,5,6,7,7}
A.union(B) #the result is returned, not stored; use it inside print() below
print(A.union(B))
print(B.union(A))
#note: union from either side will be same
# Set Intersection
A= {1,2,3,4,4}
B= {4,3,5,6,1,7}
print(A&B) #can use "&" operator
print(A.intersection(B)) # or you can use "intersection" operator
print(B.intersection(A))
#note: intersection from either side will be same.
#Set Difference
A= {1,2,3,4}
B= {2,3,4,5}
print(A-B) #elements that are in A but not in B
print(B-A) #elements that are in B but not in A
print(A.difference(B))# can also use the 'difference' operator
print(B.difference(A))
#Symmetric Difference
A={1,2,3,4}
B={3,4,5,6}
print(A^B)
print(B^A)
print(A.symmetric_difference(B))
print(B.symmetric_difference(A))
#note: the symmetric difference of A and B is the set of elements in either A or B except those common to both
#other methods of Set
A= {1,2,3,5}
B= {3,4,5,6}
C= {8,2,3,1}
D= {2,3,5}
print(len(A)) #length of the set
print(max(A)) #max element of the set
print(min(A)) #min element of the set
print(sorted(C)) #sort the set
print(sum(A)) #sum of all the elements of the set
print(sum(A)+sum(B)) #sum of the elements in two or more sets
print(A.difference(B)) #difference between sets
print(B.difference_update(A)) #in-place *_update methods modify the set and return None
print(A.intersection_update(C))
print(D.issubset(A))
s={'apple'}
if 'p' in s: #membership tests elements, not substrings, so this is False
    print('yes')
#some of the concepts of Dictionary and Set are not yet clearly explained here, kindly skip these parts
#Functions
def printme(text): #creating a function (avoid `str` as a parameter name: it shadows the builtin)
    "this is a string"
    print(text)
    return #return with no argument
printme("this is first call") #calling the function
printme('this is second call')
printme("23 call")
def printme(value): #initializing the function (avoid `int` as a parameter name)
    "call for integer"
    print(value)
    return (value) #returning the argument
printme('23')
#printme("this is string")
#printme('this is function')
printme([1,2,3,4,5])
#pass by reference
def changeme(mylist):
#"this is the changing passed functions"
mylist.append([2,3,4])
print("value inside the function",mylist)
return mylist
mylist=[20,30,40]
changeme(mylist)
print("value outside the function",mylist)
#pass by value
def changeme(mylist):
mylist=[1,2,3,4]
#mylist.append([50,60])
print("values inside the function:",mylist)
return mylist
mylist=[10,20,30,40]
changeme(mylist)
print("value outside the function:",mylist)
#To understand the concept of the two examples above, here is a link with an explanation:
#https://www.geeksforgeeks.org/pass-by-reference-vs-value-in-python/
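# Summary of the two examples above: Python passes object references by value.
# Mutating the object that was passed in (mylist.append(...)) is visible to the
# caller, while rebinding the parameter name (mylist = [1, 2, 3, 4]) only changes
# the local name inside the function, leaving the caller's list untouched.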
#required Argument
def printme(text):
    #"this is me"
    print(text)
    return
printme('i m here')
#note: required arguments are the arguments passed to a function in the correct positional order.
#Keyword Argument
def printinfo(name,age):
print("Name:",name)
print("Age:",age)
return
printinfo(age=27,name='abhi')
#Default Argument
def printinfo(name,age=30): #here is the default value
print("Name:",name)
print("Age:",age)
return
printinfo(age=25,name="abhi")
printinfo(name="nanda") #it prints a default value if no certain value is given to it
#Variable-Length Argument
def printinfo(arg1,*vartuple):
print("output:")
print (arg1)
for var in vartuple:
print(var)
return
printinfo(10,20,30)
printinfo('abhi')
#Variable-length arguments let you call a function with more arguments than you named while defining it; the extras are not named in the function definition.
#Anonymous Function
##lamda
sum=lambda arg1,arg2:arg1+arg2;
print("value is:",sum(10,20))
#
#Return statement
def sum(arg1,arg2): #defining the function
total=arg1+arg2
print("ans:",total)
return total; #give a return value argument
total=sum(10,20)#call the function
#Scope of Variable
#Global vs Local Variable
total=0 #its a global variable
def sum(arg1,arg2):#Function definition
total=arg1+arg2 #'total' is a local variable
print("inside:",total)
return total #return statement
sum(10,20) #calling the function
print("outside:",total)
#note: the ';' terminates the statement, but in Python it is optional and usually omitted
#Modules
# 1. A way to structure Python code.
# 2. A way of organizing the code.
# 3. You have to import modules in order to use them.
# 4. A module can be one of several kinds of things: a Python file, a shared object/DLL, or a directory.
# 5. You can use any Python source file as a module by executing an import statement in some other Python source file.
# 6. When the interpreter encounters an import statement, it imports the module by searching for it in the search path.
#From....import
# 1. Imports a specific attribute of a module that we need in the code.
# 2. Syntax:
#        from fib import fibonacci
#From....import*
# 1. Imports all the names/attributes from the module.
# 2. Syntax:
#        from fib import *
#Namespace and Scoping
# 1. A Python statement can access variables in the local and global namespace; if both
#    have a variable with the same name, the local one shadows the global one.
# 2. Python guesses whether a variable is local or global: any variable assigned a value
#    inside a function is local unless it is declared global.
# 3. Example:
money=200 #global variable
def AddMoney(): #function definition
    global money #without this line, money=money+1 would raise UnboundLocalError
    money=money+1
AddMoney() #function call
print(money)
###Output
_____no_output_____ |
classification/model_selection/kernel_svm.ipynb | ###Markdown
Kernel SVM Importing the libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Importing the dataset
###Code
dataset = pd.read_csv('Data.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
###Output
_____no_output_____
###Markdown
Splitting the dataset into the Training set and Test set
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
###Output
_____no_output_____
###Markdown
Feature Scaling
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
###Output
_____no_output_____
###Markdown
Training the Kernel SVM model on the Training set
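As a reminder (the standard definition, not something computed in this notebook), the `rbf` kernel used below is $K(x, x') = \exp(-\gamma \lVert x - x' \rVert^2)$; in recent scikit-learn versions `SVC` defaults to `gamma='scale'`, i.e. $\gamma = 1 / (n_{\text{features}} \cdot \mathrm{Var}(X))$.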
###Code
from sklearn.svm import SVC
classifier = SVC(kernel = 'rbf', random_state = 0)
classifier.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Making the Confusion Matrix
###Code
from sklearn.metrics import confusion_matrix, accuracy_score
y_pred = classifier.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
###Output
[[102 5]
[ 3 61]]
|
fer_final_v.ipynb | ###Markdown
1. Downloading and getting dataset from kaggle
###Code
from google.colab import drive
drive.mount('/content/gdrive')
import os
os.environ['KAGGLE_CONFIG_DIR'] = "/content/gdrive/My Drive/Kaggle"
# /content/gdrive/My Drive/Kaggle is the path where kaggle.json is present in the Google Drive
#changing the working directory
%cd /content/gdrive/My Drive/Kaggle
#Check the present working directory using pwd command
%pwd
###Output
/content/gdrive/My Drive/Kaggle
###Markdown
2. Reading and exploring dataset
###Code
!ls
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = pd.read_csv('fer2013.csv')
print(df.head())
print("shape= ",df.shape)
print("Data usage: \n",df.Usage.value_counts())
print('sample per emotion: ')
print(df.emotion.value_counts())
options = [3, 4, 6]  # the emotion column holds integers, matching the replace() calls below
df = df[df['emotion'].isin(options)]
print('new sample per emotion: ')
print(df.emotion.value_counts())
print('Number of pixels for a sample:')
print(len(df.pixels[3].split(' ')))
emotion_labels = ["Happy", "Sad", "Neutral"]
num_classes = len(emotion_labels)
num_classes
# visualize using the pixel entry
sample_number = 3
import matplotlib.pyplot as plt
array = np.mat(df.pixels[sample_number]).reshape(48,48)
plt.imshow(array)
d = {3:"Happy", 4:"Sad", 6:"Neutral"}
print(d[df.emotion[sample_number]])
###Output
Sad
###Markdown
3. Preprocessing data
###Code
train_set = df[(df.Usage == 'Training')]
validation_set = df[(df.Usage == 'PublicTest')]
test_set = df[(df.Usage == 'PrivateTest')]
train_set.shape,validation_set.shape,test_set.shape
train_set.shape,validation_set.shape,test_set.shape
emotion_labels = [ "Happy", "Sad", "Neutral"]
num_classes = len(emotion_labels)
from math import sqrt
depth = 1
height = int(sqrt(len(df.pixels[3].split())))
width = height
num_train = train_set.shape[0]
num_test = test_set.shape[0]
num_valid = validation_set.shape[0]
X_train = np.array(list(map(str.split, train_set.pixels)), np.float32)
X_validation = np.array(list(map(str.split, validation_set.pixels)), np.float32)
X_test = np.array(list(map(str.split, test_set.pixels)), np.float32)
num_train = X_train.shape[0]
num_validation = X_validation.shape[0]
num_test = X_test.shape[0]
X_train = X_train.reshape(num_train, width, height, depth)
X_validation = X_validation.reshape(num_valid, width, height, depth)
X_test = X_test.reshape(num_test, width, height, depth)
print('Training: ',X_train.shape)
print('Validation: ',X_validation.shape)
print('Test: ',X_test.shape)
# replacing emotion in df from 3,4,6 to 0,1,2
train_set['emotion'].replace(to_replace=[3,4,6], value=[0,1,2],inplace=True)
test_set['emotion'].replace(to_replace=[3,4,6], value=[0,1,2],inplace=True)
validation_set['emotion'].replace(to_replace=[3,4,6], value=[0,1,2],inplace=True)
# one hot encoding the seven emotions
from keras.utils import np_utils
y_train = train_set.emotion
y_train = np_utils.to_categorical(y_train, 3)
y_validation = validation_set.emotion
y_validation = np_utils.to_categorical(y_validation, 3)
y_test = test_set.emotion
y_test = np_utils.to_categorical(y_test, 3)
print('Training: ',y_train.shape)
print('Validation: ',y_validation.shape)
print('Test: ',y_test.shape)
np.unique(train_set.emotion)
y_train[0]
import matplotlib
import matplotlib.pyplot as plt
def overview(start, end, X):
fig = plt.figure(figsize=(20,20))
for i in range(start, end):
input_img = X[i:(i+1),:,:,:]
ax = fig.add_subplot(10,10,i+1)
ax.imshow(input_img[0,:,:,0],cmap=matplotlib.cm.gray )
plt.xticks(np.array([]))
plt.yticks(np.array([]))
plt.tight_layout()
plt.show()
overview(0,50, X_train)
###Output
_____no_output_____
###Markdown
4. Building Neural Networks
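A note on the architecture below (a standard observation about factorized convolutions, not from the original notebook): each block stacks a 3×1 and a 1×3 kernel, which covers the same 3×3 receptive field as a single 3×3 convolution. For a layer keeping the channel count at $C$, the factorized pair costs $2(3C^2 + C)$ parameters versus $9C^2 + C$ for the full kernel; at $C = 128$ that is 98,560 vs. 147,584 (the 1×3 layer conv2d_27 in the summary shows exactly 49,280 of these).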
###Code
from keras.layers import Convolution2D, Activation, BatchNormalization, MaxPooling2D, Dropout, Dense, Flatten, AveragePooling2D
from keras.models import Sequential
model = Sequential()
model.add(Convolution2D(64, (3, 1), padding='same', input_shape=(48,48,1)))
model.add(Convolution2D(64, (1, 3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=None, padding='same'))
model.add(Dropout(0.25))
model.add(Convolution2D(128, (3, 1), padding='same'))
model.add(Convolution2D(128, (1, 3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=None, padding='same'))
model.add(Dropout(0.25))
model.add(Convolution2D(256, (3, 1), padding='same'))
model.add(Convolution2D(256, (1, 3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=None, padding='same'))
model.add(Dropout(0.25))
model.add(Convolution2D(512, (3, 1), padding='same'))
model.add(Convolution2D(512, (1, 3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=None, padding='same'))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.25))
model.add(Dense(256))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.25))
model.add(Dense(3))
model.add(Activation('softmax'))
model.summary()
###Output
Model: "sequential_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_24 (Conv2D) (None, 48, 48, 64) 256
_________________________________________________________________
conv2d_25 (Conv2D) (None, 48, 48, 64) 12352
_________________________________________________________________
batch_normalization_18 (Batc (None, 48, 48, 64) 256
_________________________________________________________________
activation_21 (Activation) (None, 48, 48, 64) 0
_________________________________________________________________
max_pooling2d_12 (MaxPooling (None, 24, 24, 64) 0
_________________________________________________________________
dropout_18 (Dropout) (None, 24, 24, 64) 0
_________________________________________________________________
conv2d_26 (Conv2D) (None, 24, 24, 128) 24704
_________________________________________________________________
conv2d_27 (Conv2D) (None, 24, 24, 128) 49280
_________________________________________________________________
batch_normalization_19 (Batc (None, 24, 24, 128) 512
_________________________________________________________________
activation_22 (Activation) (None, 24, 24, 128) 0
_________________________________________________________________
max_pooling2d_13 (MaxPooling (None, 12, 12, 128) 0
_________________________________________________________________
dropout_19 (Dropout) (None, 12, 12, 128) 0
_________________________________________________________________
conv2d_28 (Conv2D) (None, 12, 12, 256) 98560
_________________________________________________________________
conv2d_29 (Conv2D) (None, 12, 12, 256) 196864
_________________________________________________________________
batch_normalization_20 (Batc (None, 12, 12, 256) 1024
_________________________________________________________________
activation_23 (Activation) (None, 12, 12, 256) 0
_________________________________________________________________
max_pooling2d_14 (MaxPooling (None, 6, 6, 256) 0
_________________________________________________________________
dropout_20 (Dropout) (None, 6, 6, 256) 0
_________________________________________________________________
conv2d_30 (Conv2D) (None, 6, 6, 512) 393728
_________________________________________________________________
conv2d_31 (Conv2D) (None, 6, 6, 512) 786944
_________________________________________________________________
batch_normalization_21 (Batc (None, 6, 6, 512) 2048
_________________________________________________________________
activation_24 (Activation) (None, 6, 6, 512) 0
_________________________________________________________________
max_pooling2d_15 (MaxPooling (None, 3, 3, 512) 0
_________________________________________________________________
dropout_21 (Dropout) (None, 3, 3, 512) 0
_________________________________________________________________
flatten_3 (Flatten) (None, 4608) 0
_________________________________________________________________
dense_9 (Dense) (None, 512) 2359808
_________________________________________________________________
batch_normalization_22 (Batc (None, 512) 2048
_________________________________________________________________
activation_25 (Activation) (None, 512) 0
_________________________________________________________________
dropout_22 (Dropout) (None, 512) 0
_________________________________________________________________
dense_10 (Dense) (None, 256) 131328
_________________________________________________________________
batch_normalization_23 (Batc (None, 256) 1024
_________________________________________________________________
activation_26 (Activation) (None, 256) 0
_________________________________________________________________
dropout_23 (Dropout) (None, 256) 0
_________________________________________________________________
dense_11 (Dense) (None, 3) 771
_________________________________________________________________
activation_27 (Activation) (None, 3) 0
=================================================================
Total params: 4,061,507
Trainable params: 4,058,051
Non-trainable params: 3,456
_________________________________________________________________
###Markdown
5. Training the model
###Code
from keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(
featurewise_center=False, # set input mean to 0 over the dataset
samplewise_center=False, # set each sample mean to 0
featurewise_std_normalization=False, # divide inputs by std of the dataset
samplewise_std_normalization=False, # divide each input by its std
zca_whitening=False, # apply ZCA whitening
rotation_range=0, # randomly rotate images in the range (degrees, 0 to 180)
width_shift_range=0.0, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.0, # randomly shift images vertically (fraction of total height)
horizontal_flip=True, # randomly flip images
vertical_flip=False, # randomly flip images
)
datagen.fit(X_train)
datagen.fit(X_validation)
batch_size = 32
num_epochs = 50
# from keras.callbacks import EarlyStopping, ReduceLROnPlateau
# reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=10, verbose=0, mode='auto', min_delta=0.0001, cooldown=0, min_lr=0)
# early_stop = EarlyStopping(monitor='val_loss', min_delta=0, patience=0, verbose=0, mode='auto')
# # callbackfunction
# callbacks =[early_stopping,lr_scheduler]
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2,
patience=5, min_lr=0.001)
early_stop = EarlyStopping(
monitor="val_loss",
min_delta=0,
patience=0,
verbose=0,
mode="auto",
baseline=None,
restore_best_weights=False,
)
callbacks = [early_stop, reduce_lr]
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['acc'])
train_flow = datagen.flow(X_train, y_train, batch_size=batch_size)
validation_flow = datagen.flow(X_validation, y_validation)
history = model.fit_generator(train_flow,
steps_per_epoch=len(X_train) / batch_size,
epochs=num_epochs,
verbose=1,
validation_data=validation_flow,
validation_steps=len(X_validation) / batch_size,
callbacks=callbacks)
###Output
/usr/local/lib/python3.7/dist-packages/keras/engine/training.py:1915: UserWarning: `Model.fit_generator` is deprecated and will be removed in a future version. Please use `Model.fit`, which supports generators.
warnings.warn('`Model.fit_generator` is deprecated and '
###Markdown
6. Evaluation
###Code
score = model.evaluate(X_test, y_test, steps=len(X_test) / batch_size)
print('Evaluation loss: ', score[0])
print('Evaluation accuracy: ', score[1])
# summarize history for accuracy
plt.plot(history.history['acc'], color='b', label='Training')
plt.plot(history.history['val_acc'], color='g', label='Validation')
plt.title('Accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'], color='b', label='Training')
plt.plot(history.history['val_loss'], color='g', label='Validation')
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(loc='lower left')
plt.show()
y_pred = model.predict_classes(X_test)
y_true = np.asarray([np.argmax(i) for i in y_test])
from sklearn.metrics import confusion_matrix
import seaborn as sns
cm = confusion_matrix(y_true, y_pred)
cm_normalised = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
sns.set(font_scale=1.5)
fig, ax = plt.subplots(figsize=(10,10))
ax = sns.heatmap(cm_normalised, annot=True, linewidths=0, square=False,
cmap="Greens", yticklabels=emotion_labels, xticklabels=emotion_labels, vmin=0, vmax=np.max(cm_normalised),
fmt=".2f", annot_kws={"size": 20})
ax.set(xlabel='Predicted label', ylabel='True label')
###Output
/usr/local/lib/python3.7/dist-packages/keras/engine/sequential.py:450: UserWarning: `model.predict_classes()` is deprecated and will be removed after 2021-01-01. Please use instead:* `np.argmax(model.predict(x), axis=-1)`, if your model does multi-class classification (e.g. if it uses a `softmax` last-layer activation).* `(model.predict(x) > 0.5).astype("int32")`, if your model does binary classification (e.g. if it uses a `sigmoid` last-layer activation).
warnings.warn('`model.predict_classes()` is deprecated and '
###Markdown
7. Saving the model
###Code
model_json = model.to_json()
with open("model554.json","w") as json_file:
json_file.write(model_json)
model.save('weights554.h5')
###Output
_____no_output_____ |
Natural Language Processing/notebooks/06-hmm.ipynb | ###Markdown
Hidden Markov Models in python Here we'll show how the Viterbi algorithm works for HMMs, assuming we have a trained model to start with. We will use the example in the JM3 book (Ch. 8.4.6).
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Initialise the model parameters based on the example from the slides/book (values taken from figure). Notice that here we explicitly split the initial probabilities "pi" from the transition matrix "A".
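Concretely, in standard HMM notation (matching the arrays defined below): $\pi_i = P(t_1 = i)$ is the probability that a sentence starts with tag $i$, $A_{ij} = P(t_{k+1} = j \mid t_k = i)$ is the tag transition probability, and $B_{i,w} = P(o_k = w \mid t_k = i)$ is the probability of tag $i$ emitting word $w$.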
###Code
tags = NNP, MD, VB, JJ, NN, RB, DT = 0, 1, 2, 3, 4, 5, 6
tag_dict = {0: 'NNP',
1: 'MD',
2: 'VB',
3: 'JJ',
4: 'NN',
5: 'RB',
6: 'DT'}
words = Janet, will, back, the, bill = 0, 1, 2, 3, 4
A = np.array([
[0.3777, 0.0110, 0.0009, 0.0084, 0.0584, 0.0090, 0.0025],
[0.0008, 0.0002, 0.7968, 0.0005, 0.0008, 0.1698, 0.0041],
[0.0322, 0.0005, 0.0050, 0.0837, 0.0615, 0.0514, 0.2231],
[0.0366, 0.0004, 0.0001, 0.0733, 0.4509, 0.0036, 0.0036],
[0.0096, 0.0176, 0.0014, 0.0086, 0.1216, 0.0177, 0.0068],
[0.0068, 0.0102, 0.1011, 0.1012, 0.0120, 0.0728, 0.0479],
[0.1147, 0.0021, 0.0002, 0.2157, 0.4744, 0.0102, 0.0017]
])
pi = np.array([0.2767, 0.0006, 0.0031, 0.0453, 0.0449, 0.0510, 0.2026])
B = np.array([
[0.000032, 0, 0, 0.000048, 0],
[0, 0.308431, 0, 0, 0],
[0, 0.000028, 0.000672, 0, 0.000028],
[0, 0, 0.000340, 0.000097, 0],
[0, 0.000200, 0.000223, 0.000006, 0.002337],
[0, 0, 0.010446, 0, 0],
[0, 0, 0, 0.506099, 0]
])
###Output
_____no_output_____
###Markdown
Now we'll code the Viterbi algorithm. It keeps a store of two components, the best scores to reach a state at a give time, and the last step of the path to get there. Scores alpha are initialised to -inf to denote that we haven't set them yet.
###Code
alpha = np.zeros((len(tags), len(words))) # states x time steps
alpha[:,:] = float('-inf')
backpointers = np.zeros((len(tags), len(words)), 'int')
###Output
_____no_output_____
###Markdown
The base case for the recursion sets the starting state probs based on pi and generating the observation. (Note: we also change Numpy precision when printing for better viewing)
###Code
# base case, time step 0
alpha[:, 0] = pi * B[:,Janet]
np.set_printoptions(precision=2)
print(alpha)
###Output
[[8.85e-06 -inf -inf -inf -inf]
[0.00e+00 -inf -inf -inf -inf]
[0.00e+00 -inf -inf -inf -inf]
[0.00e+00 -inf -inf -inf -inf]
[0.00e+00 -inf -inf -inf -inf]
[0.00e+00 -inf -inf -inf -inf]
[0.00e+00 -inf -inf -inf -inf]]
###Markdown
Now for the recursive step, where we maximise over incoming transitions reusing the best incoming score, computed above.
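In symbols, each update below computes $\alpha_t(j) = \max_i \alpha_{t-1}(i)\, A_{ij}\, B_j(o_t)$, and the backpointer stores the maximising $i$.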
###Code
# time step 1
for t1 in tags:
for t0 in tags:
score = alpha[t0, 0] * A[t0, t1] * B[t1, will]
if score > alpha[t1, 1]:
alpha[t1, 1] = score
backpointers[t1, 1] = t0
print(alpha)
###Output
[[8.85e-06 0.00e+00 -inf -inf -inf]
[0.00e+00 3.00e-08 -inf -inf -inf]
[0.00e+00 2.23e-13 -inf -inf -inf]
[0.00e+00 0.00e+00 -inf -inf -inf]
[0.00e+00 1.03e-10 -inf -inf -inf]
[0.00e+00 0.00e+00 -inf -inf -inf]
[0.00e+00 0.00e+00 -inf -inf -inf]]
###Markdown
Note that the running maximum for any incoming state (t0) is maintained in alpha[1,t1], and the winning state is stored in addition, as a backpointer. Repeat with the next observations. (We'd do this as a loop over positions in practice.)
###Code
# time step 2
for t2 in tags:
for t1 in tags:
score = alpha[t1, 1] * A[t1, t2] * B[t2, back]
if score > alpha[t2, 2]:
alpha[t2, 2] = score
backpointers[t2, 2] = t1
print(alpha)
# time step 3
for t3 in tags:
for t2 in tags:
score = alpha[t2, 2] * A[t2, t3] * B[t3, the]
if score > alpha[t3, 3]:
alpha[t3, 3] = score
backpointers[t3, 3] = t2
print(alpha)
# time step 4
for t4 in tags:
for t3 in tags:
score = alpha[t3, 3] * A[t3, t4] * B[t4, bill]
if score > alpha[t4, 4]:
alpha[t4, 4] = score
backpointers[t4, 4] = t3
print(alpha)
###Output
[[8.85e-06 0.00e+00 0.00e+00 -inf -inf]
[0.00e+00 3.00e-08 0.00e+00 -inf -inf]
[0.00e+00 2.23e-13 1.61e-11 -inf -inf]
[0.00e+00 0.00e+00 5.11e-15 -inf -inf]
[0.00e+00 1.03e-10 5.36e-15 -inf -inf]
[0.00e+00 0.00e+00 5.33e-11 -inf -inf]
[0.00e+00 0.00e+00 0.00e+00 -inf -inf]]
[[8.85e-06 0.00e+00 0.00e+00 2.49e-17 -inf]
[0.00e+00 3.00e-08 0.00e+00 0.00e+00 -inf]
[0.00e+00 2.23e-13 1.61e-11 0.00e+00 -inf]
[0.00e+00 0.00e+00 5.11e-15 5.23e-16 -inf]
[0.00e+00 1.03e-10 5.36e-15 5.94e-18 -inf]
[0.00e+00 0.00e+00 5.33e-11 0.00e+00 -inf]
[0.00e+00 0.00e+00 0.00e+00 1.82e-12 -inf]]
[[8.85e-06 0.00e+00 0.00e+00 2.49e-17 0.00e+00]
[0.00e+00 3.00e-08 0.00e+00 0.00e+00 0.00e+00]
[0.00e+00 2.23e-13 1.61e-11 0.00e+00 1.02e-20]
[0.00e+00 0.00e+00 5.11e-15 5.23e-16 0.00e+00]
[0.00e+00 1.03e-10 5.36e-15 5.94e-18 2.01e-15]
[0.00e+00 0.00e+00 5.33e-11 0.00e+00 0.00e+00]
[0.00e+00 0.00e+00 0.00e+00 1.82e-12 0.00e+00]]
###Markdown
Now read of the best final state:
###Code
t4 = np.argmax(alpha[:, 4])
print(tag_dict[t4])
###Output
NN
###Markdown
We need to work out the rest of the path which is the best way to reach the final state, t2. We can work this out by taking a step backwards looking at the best incoming edge, i.e., as stored in the backpointers.
###Code
t3 = backpointers[t4, 4]
print(tag_dict[t3])
###Output
DT
###Markdown
Repeat this until we reach the start of the sequence.
###Code
t2 = backpointers[t3, 3]
print(tag_dict[t2])
t1 = backpointers[t2, 2]
print(tag_dict[t1])
t0 = backpointers[t1, 1]
print(tag_dict[t0])
###Output
VB
MD
NNP
###Markdown
Phew. The best state sequence is t = [NNP MD VB DT NN] Formalising things Now we can put this all into a function to handle arbitrary length inputs
###Code
def viterbi(params, words):
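    # params: (pi, A, B) -- start probabilities (T,), transition matrix (T, T),
    # emission matrix (T, V); words: a list of word indices.
    # Returns (best tag sequence, joint probability of that sequence).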
pi, A, B = params
N = len(words)
T = pi.shape[0]
alpha = np.zeros((T, N))
alpha[:, :] = float('-inf')
backpointers = np.zeros((T, N), 'int')
# base case
alpha[:, 0] = pi * B[:, words[0]]
# recursive case
for w in range(1, N):
for t2 in range(T):
for t1 in range(T):
score = alpha[t1, w-1] * A[t1, t2] * B[t2, words[w]]
if score > alpha[t2, w]:
alpha[t2, w] = score
backpointers[t2, w] = t1
# now follow backpointers to resolve the state sequence
output = []
output.append(np.argmax(alpha[:, N-1]))
for i in range(N-1, 0, -1):
output.append(backpointers[output[-1], i])
return list(reversed(output)), np.max(alpha[:, N-1])
###Output
_____no_output_____
###Markdown
Let's test the method on the same input, and a longer input observation sequence. Notice that we are using only 5 words as the vocabulary so we have to restrict tests to sentences containing only these words.
###Code
output, score = viterbi((pi, A, B), [Janet, will, back, the, bill])
print([tag_dict[o] for o in output])
print(score)
output, score = viterbi((pi, A, B), [Janet, will, back, the, Janet, back, bill])
print([tag_dict[o] for o in output])
print(score)
###Output
['NNP', 'MD', 'VB', 'DT', 'NNP', 'NN', 'NN']
2.4671007551487516e-26
###Markdown
Exhaustive method Let's verify that we've done the above algorithm correctly by implementing exhaustive search, which forms the cross-product of states^M.
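Cost-wise this is $O(M \cdot T^M)$ for $M$ observations and $T$ states, against $O(M \cdot T^2)$ for Viterbi, which is why it is only usable on toy inputs.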
###Code
from itertools import product
def exhaustive(params, words):
pi, A, B = params
N = len(words)
T = pi.shape[0]
# track the running best sequence and its score
best = (None, float('-inf'))
# loop over the cartesian product of |states|^M
for ss in product(range(T), repeat=N):
# score the state sequence
score = pi[ss[0]] * B[ss[0], words[0]]
for i in range(1, N):
score *= A[ss[i-1], ss[i]] * B[ss[i], words[i]]
# update the running best
if score > best[1]:
best = (ss, score)
return best
output, score = exhaustive((pi, A, B), [Janet, will, back, the, bill])
print([tag_dict[o] for o in output])
print(score)
output, score = exhaustive((pi, A, B), [Janet, will, back, the, Janet, back, bill])
print([tag_dict[o] for o in output])
print(score)
###Output
['NNP', 'MD', 'VB', 'DT', 'NNP', 'NN', 'NN']
2.4671007551487507e-26
###Markdown
Yay, it got the same results as before. Note that the exhaustive method is impractical on anything beyond toy data due to the nasty cartesian product. But it is worth doing to verify that the Viterbi code above is getting the right results. Supervised training, aka "visible" Markov model Let's train the HMM parameters on the Penn Treebank, using the sample from NLTK. Note that this is a small fraction of the treebank, so we shouldn't expect great performance of our method trained only on this data.
###Code
from nltk.corpus import treebank
corpus = treebank.tagged_sents()
print(corpus)
###Output
[[('Pierre', 'NNP'), ('Vinken', 'NNP'), (',', ','), ('61', 'CD'), ('years', 'NNS'), ('old', 'JJ'), (',', ','), ('will', 'MD'), ('join', 'VB'), ('the', 'DT'), ('board', 'NN'), ('as', 'IN'), ('a', 'DT'), ('nonexecutive', 'JJ'), ('director', 'NN'), ('Nov.', 'NNP'), ('29', 'CD'), ('.', '.')], [('Mr.', 'NNP'), ('Vinken', 'NNP'), ('is', 'VBZ'), ('chairman', 'NN'), ('of', 'IN'), ('Elsevier', 'NNP'), ('N.V.', 'NNP'), (',', ','), ('the', 'DT'), ('Dutch', 'NNP'), ('publishing', 'VBG'), ('group', 'NN'), ('.', '.')], ...]
###Markdown
We have to first map words and tags to numbers for compatibility with the above methods.
###Code
word_numbers = {}
tag_numbers = {}
num_corpus = []
for sent in corpus:
num_sent = []
for word, tag in sent:
wi = word_numbers.setdefault(word.lower(), len(word_numbers))
ti = tag_numbers.setdefault(tag, len(tag_numbers))
num_sent.append((wi, ti))
num_corpus.append(num_sent)
word_names = [None] * len(word_numbers)
for word, index in word_numbers.items():
word_names[index] = word
tag_names = [None] * len(tag_numbers)
for tag, index in tag_numbers.items():
tag_names[index] = tag
###Output
_____no_output_____
###Markdown
Now let's hold out the last few sentences for testing, so that they are unseen during training and give a more reasonable estimate of accuracy on fresh text.
###Code
training = num_corpus[:-10] # reserve the last 10 sentences for testing
testing = num_corpus[-10:]
###Output
_____no_output_____
###Markdown
Next we compute relative frequency estimates based on the observed tag and word counts in the training set. Note that smoothing is important, here we add a small constant to all counts.
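With smoothing constant $\epsilon = 0.1$, the transition estimate below is $\hat{A}_{ij} = \frac{c(i,j) + \epsilon}{\sum_k (c(i,k) + \epsilon)}$, and analogously for $\pi$ and $B$.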
###Code
S = len(tag_numbers)
V = len(word_numbers)
# initalise
eps = 0.1
pi = eps * np.ones(S)
A = eps * np.ones((S, S))
B = eps * np.ones((S, V))
# count
for sent in training:
last_tag = None
for word, tag in sent:
B[tag, word] += 1
# bug fixed here 27/3/17; test was incorrect
if last_tag == None:
pi[tag] += 1
else:
A[last_tag, tag] += 1
last_tag = tag
# normalise
pi /= np.sum(pi)
for s in range(S):
B[s,:] /= np.sum(B[s,:])
A[s,:] /= np.sum(A[s,:])
###Output
_____no_output_____
###Markdown
Now we're ready to use our Viterbi method defined above
###Code
predicted, score = viterbi((pi, A, B), list(map(lambda w_t: w_t[0], testing[0])))
print('%20s\t%5s\t%5s' % ('TOKEN', 'TRUE', 'PRED'))
for (wi, ti), pred in zip(testing[0], predicted):
    print('%20s\t%5s\t%5s' % (word_names[wi], tag_names[ti], tag_names[pred]))
###Output
TOKEN TRUE PRED
a DT DT
white NNP NNP
house NNP NNP
spokesman NN NN
said VBD VBD
last JJ JJ
week NN NN
that IN IN
the DT DT
president NN NN
is VBZ VBZ
considering VBG VBG
*-1 -NONE- -NONE-
declaring VBG VBG
that IN IN
the DT DT
constitution NNP NNP
implicitly RB NNP
gives VBZ VBZ
him PRP PRP
the DT DT
authority NN NN
for IN IN
a DT DT
line-item JJ JJ
veto NN NN
*-2 -NONE- -NONE-
to TO TO
provoke VB VB
a DT DT
test NN NN
case NN NN
. . .
|
Ejercicios_big_o.ipynb | ###Markdown
Big O() Exercises 1. Compute the cost T(n) of the following function. A. Plot the function together with other classic O() curves. B. Analyze how it changes as n grows. C. Is T(n) part of the O(n) family? Justify using the definition of O(). D. Is the cost obtained acceptable? Below is a cheat sheet of $O()$ for operations in Python:
###Code
from IPython.display import IFrame
IFrame('https://wiki.python.org/moin/TimeComplexity', width=900, height=350)
def dummy_func(lst, n):
print("primer elemento", lst[0])
midpoint = int(n / 2)
for val in lst[:midpoint]:
print(val)
for x in range(10):
print('o_O')
dummy_func([1, 2, 3, 4], 4)
###Output
primer elemento 1
1
2
o_O
o_O
o_O
o_O
o_O
o_O
o_O
o_O
o_O
o_O
###Markdown
1.A) Analyzing the lines of the given function we can see that:- Line 2 is $O(1)$- Line 4 is $O(1)$- Lines 6 and 7 are $O(n/2)$- Lines 9 and 10 are $O(10)$So our cost is $O(n/2 + 12)$1.B) Now we plot $O(n/2 + 12)$ together with $O(n)$, $O(n \log n)$, $O(n^2)$, $O(2^n)$
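For part C below we will use the standard definition of Big-O: $T(n) \in O(f(n))$ iff there exist $c > 0$ and $n_0$ such that $T(n) \leq c \cdot f(n)$ for all $n \geq n_0$.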
###Code
from math import log
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('default')
# Set up runtime comparisons
n = np.linspace(1, 100, 1000)
labels = ['Linear', 'Linearithmic', 'Quadratic', 'Exponential', 'My T(n)']
big_o = [n, n* np.log(n), n**2, 2**n, n/2 + 12]
# Plot setup
plt.figure(figsize=(12,10))
plt.ylim(0, 700)
plt.xlim(1, 100)
for i in range(len(big_o)):
plt.plot(n,big_o[i],label = labels[i])
plt.legend(loc=0)
plt.ylabel('Relative running time')
plt.xlabel('n')
###Output
_____no_output_____
###Markdown
1.C) We can see that with $n_0 = 24, c=1$ our function satisfies $T(n) \in O(n)\; \forall\; n \geq 24$: \begin{equation} n/2 + 12 \leq c \cdot n \; \forall\; n \geq 24 \\ n/2 + 12 \leq n \; \forall\; n \geq 24 \\ \end{equation} 1.D) Image from http://bigocheatsheet.com/ (linear cost falls in the chart's "good" region, so yes, the cost is acceptable). 2. Write two Python functions to find the smallest number in a list. The first function should compare every number against every other one ($O(n^2)$). The second function should be linear ($O(n)$). A. Plot both functions. B. Which implementation would you keep? C. Is there a correspondence between the benchmarks and $O()$?
###Code
def min_value_n2(lst):
    min_value = lst[0]              # O(1)
    for i in lst:                   # O(n)
        is_smallest = True          # O(1)
        for j in lst:               # O(n)
            if j < i:               # O(1)
                is_smallest = False # O(1)
        if is_smallest:             # O(1)
            min_value = i           # O(1)
    return min_value                # O(1)
min_value_n2([3, 2, 6, 1])
# O(4*n^2 + 2) (approximate count, matching the plot below)
def min_value_n(lst):
min_value = lst[0] # O(1)
for i in lst: # O(n)
if i < min_value: # O(1)
min_value = i # O(1)
return min_value # O(1)
min_value_n([3, 2, 6, 1])
# O(2*n + 2)
###Output
_____no_output_____
###Markdown
2.A)
###Code
from math import log
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('default')
# Set up runtime comparisons
n = np.linspace(1, 100, 1000)
labels = ['min_value_n', 'min_value_n2']
big_o = [2*n + 2, 4*n**2 + 2]
# Plot setup
plt.figure(figsize=(12,10))
plt.ylim(0, 700)
plt.xlim(1, 100)
for i in range(len(big_o)):
plt.plot(n,big_o[i],label = labels[i])
plt.legend(loc=0)
plt.ylabel('Relative running time')
plt.xlabel('n')
###Output
_____no_output_____
###Markdown
2.C)
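Both implementations return the same minimum (checked by the `assert` below). For $n = 10{,}000$ the quadratic version performs on the order of $10^8$ comparisons against $10^4$ for the linear one, so the `%timeit` results should differ by several orders of magnitude: the benchmarks agree with the $O()$ analysis. (2.B: the linear implementation is the one to keep.)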
###Code
assert(min_value_n([3, 2, 6, 1]) == min_value_n2([3, 2, 6, 1]))
%timeit min_value_n(range(10000))
%timeit min_value_n2(range(10000))
###Output
1 loop, best of 3: 5.51 s per loop
|
Autoencoding kernel convolution/03 Stochastic autoencoder.ipynb | ###Markdown
Stochastic autoencoder 3This notebook implements part of the [eager model](https://docs.google.com/drawings/d/1czjcBtDQGS8X6bnIbYU4wmFvv1AfZt5wwSRk9oyQGw0/edit). Here we continue where we left off in [part 2](https://colab.research.google.com/drive/1XZvLnmtu4QlHAXGCbmO2TFPTYFBrkfk1scrollTo=o5DfRieOB3rq&uniqifier=1).Let's put together a cell with train(), up(), down() functions Basics
###Code
import math
import torch
from torch import nn
import torch.nn.functional as F
import torch.optim as optim
import pdb
import numpy as np
import random
from scipy.ndimage.filters import gaussian_filter
from scipy import stats
from scipy.stats import norm
import os
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from skimage.draw import line_aa
%matplotlib inline
plt.style.use('classic')
device = "cuda" if torch.cuda.is_available() else "cpu"
# TODO: Use torch.normal(mean, std=1.0, out=None)
class NormalDistributionTable(object):
def __init__(self, resolution, var=0.07, table_resolution=100):
self.resolution = resolution
self.var = var
self.table_resolution = table_resolution
self.gaussians = torch.tensor([norm.pdf(np.arange(0, 1, 1.0 / self.resolution), mean, self.var) for mean in np.linspace(0, 1, self.table_resolution)])
self.gaussians = self.gaussians.transpose(0, 1)
self.gaussians = self.gaussians / self.gaussians.sum(dim=0)
self.gaussians = self.gaussians.transpose(0, 1)
def lookup(self, mean):
assert mean >= 0 and mean <= 1, "mean must be between 0 and 1"
index = math.floor(mean * self.table_resolution)
if index == self.table_resolution:
index = self.table_resolution - 1
return self.gaussians[index]
def to_pdf(self, images):
element_count = np.prod(images.shape)
images_shape = images.shape
images_view = images.contiguous().view((element_count,))
images_pdf = torch.stack([self.lookup(mean.item()) for mean in images_view])
images_pdf = images_pdf.view(images_shape[:-1] + (images_shape[-1] * self.resolution, ))
return images_pdf
def generate_images(width, height, count=100):
images = []
    for _ in range(count):
image = np.zeros((width, height))
rr, cc, val = line_aa(random.randint(0, height-1), random.randint(0, width-1), random.randint(0, height-1), random.randint(0, width-1))
image[rr, cc] = val
image=gaussian_filter(image, 0.5)
images.append(image)
return torch.as_tensor(images).to(device)
def generate_moving_line(width, height, count=100):
images = []
for i in range(int(count/2)):
image = np.zeros((width, height))
rr, cc, val = line_aa(2, 3-i, width-2, height-1-i)
image[rr, cc] = val
image=gaussian_filter(image, 0.5)
images.append(image)
for i in range(int(count/2)):
image = np.zeros((width, height))
rr, cc, val = line_aa(width-1-i, 2-i, 4-i, height-2-i)
image[rr, cc] = val
image=gaussian_filter(image, 0.5)
images.append(image)
return torch.as_tensor(images).to(device)
def show_image(image, vmin=None, vmax=None, title=None, print_values=False):
#print("image ", image.shape)
image = image.cpu().numpy()
fig, ax1 = plt.subplots(figsize=(20, 8))
if title:
plt.title(title)
#i = image.reshape((height, width))
#print("i ", i.shape)
ax1.imshow(image, vmin=vmin, vmax=vmax, interpolation='none', cmap=plt.cm.plasma)
plt.show()
if print_values:
print(image)
def sample_from_pdf1(pdf):
assert pdf.shape == (resolution, )
pk = pdf.copy()
xk = np.arange(resolution)
pk[pk<0] = 0
sum_pk = sum(pk)
if sum(pk) > 0:
pk = pk / sum_pk
custm = stats.rv_discrete(name='custm', values=(xk, pk))
value = custm.rvs(size=1) / resolution
# apply scale (conflates value and confidence!)
value = value * sum_pk
return value
else:
return [0]
def sample_from_pdf(pdf):
assert pdf.shape == (resolution, )
#print("pdf ", pdf)
sum_pdf = sum(pdf)
#print("sum_pdf ", sum_pdf)
if sum_pdf > 0:
v = random.random()
#print("v ", v)
s = 0
index = 0
while s < v and index < resolution:
s += pdf[index] / sum_pdf
index += 1
#print(" s ", s)
#print(" index ", index)
# apply scale (conflates value and confidence!)
return [(index - 1) * sum_pdf / resolution]
else:
return [0]
def sample_from_images__(images__):
assert len(images__.shape) == 3
# reshape images__ from (image count, height, width*resolution) into (image count*height*width, resolution)
s = images__.shape
flattened_images__ = images__.view(s[0], s[1], int(s[2] / resolution), resolution)
s = flattened_images__.shape
flattened_images__ = flattened_images__.view(s[0] * s[1] * s[2], s[3])
# sample single value from each distributions into (image count*height*width, 1)
sampled_pixels = torch.Tensor([sample_from_pdf(item.cpu().numpy()) for item in flattened_images__])
# reshape back into (image count, height, width)
sampled_images = sampled_pixels.view(s[0], s[1], s[2])
return sampled_images
def averaged_sample_from_images__(images__, count=10):
sampled_images = torch.stack([sample_from_images__(images__) for i in range(count)])
return sampled_images.mean(dim=0)
def aggregate_to_pdf(mu_bar, image_count, samples_per_image, iH, iW, resolution):
#print("aggregate_to_pdf mu_bar", mu_bar.shape)
# mu_bar (image_count * samples_per_image, iH, iW)
# mu_bar_per_image (image_count, samples_per_image, iH, iW)
mu_bar = mu_bar.clamp(0, 1)
mu_bar_per_image = mu_bar.view(image_count, samples_per_image, iH, iW)
# mu_bar_per_image_flattened (image_count, iH, iW, samples_per_image)
mu_bar_per_image_flattened = mu_bar_per_image.permute(0, 2, 3, 1).contiguous()
# mu_bar_per_image_flattened (image_count * iH * iW, samples_per_image)
mu_bar_per_image_flattened = mu_bar_per_image_flattened.view(image_count * iH * iW, samples_per_image)
# mu_bar_flattened__ (image_count * iH * iW, resolution)
mu_bar_flattened__ = torch.zeros((image_count * iH * iW, resolution))
assert mu_bar_per_image_flattened.shape[0] == mu_bar_flattened__.shape[0]
for sample_index in range(samples_per_image):
#print("mu_bar_per_image_flattened[:, sample_index] ", mu_bar_per_image_flattened[:, sample_index])
        histogram_indices = (mu_bar_per_image_flattened[:, sample_index] * resolution).long().clamp(max=resolution - 1).cpu()  # clamp so a value of exactly 1.0 stays in range
for item_index in range(mu_bar_per_image_flattened.shape[0]): # TODO: Vectorize!
mu_bar_flattened__[item_index][histogram_indices[item_index]] += 1
# mu_bar__ (image_count, iH, iW * resolution)
mu_bar__ = mu_bar_flattened__.view((image_count, iH, iW, resolution))
mu_bar__ = torch.nn.functional.normalize(mu_bar__, p=1, dim=3)
mu_bar__ = mu_bar__.view( (image_count, iH, iW * resolution))
return mu_bar__
# Assume input (samples, feature maps, height, width) and that
# features maps is a perfect squere, e.g. 9, of an integer 'a', e.g. 3 in this case
# Output (samples, height * a, width * a)
def flatten_feature_maps(f):
s = f.shape
f = f.permute(0, 2, 3, 1) # move features to the end
s = f.shape
a = int(s[3] ** 0.5) # feature maps are at pos 3 now that we want to first split into a square of size (a X a)
assert a * a == s[3], "Feature map count must be a perfect square"
f = f.view(s[0], s[1], s[2], a, a)
f = f.permute(0, 1, 3, 2, 4).contiguous() # frame count, height, sqr(features), width, sqr(features)
s = f.shape
f = f.view(s[0], s[1] * s[2], s[3] * s[4]) # each point becomes a square of features
return f
# Assume input (samples, height * a, width * a)
# Output (samples, feature maps, height, width)
def unflatten_feature_maps(f, a):
s = f.shape
f = f.view(s[0], int(s[1] / a), a, int(s[2] / a), a)
f = f.permute(0, 1, 3, 2, 4).contiguous() # move features to the end
s = f.shape
f = f.view(s[0], s[1], s[2], a * a).permute(0, 3, 1, 2)
return f
class EMA:
def __init__(self, mu):
super(EMA, self).__init__()
self.mu = mu
def forward(self,x, last_average):
new_average = self.mu*x + (1-self.mu)*last_average
return new_average
resolution = 10
var = 0.05
normal_distribution_table = NormalDistributionTable(resolution=resolution, var=var)
###Output
_____no_output_____
###Markdown
Autoencoder
###Code
class AutoEncoder(nn.Module):
def __init__(self, a=3):
super(AutoEncoder, self).__init__()
self.a = a
self.encoder = nn.Sequential( # b, 1, w, h
nn.Conv2d(1, 2 * a * a, 3, stride=1, padding=1), # b, 2 * a * a, w, h
nn.ReLU(True),
nn.MaxPool2d(2, stride=2), # b, 2 * a * a, w/2, h/2
nn.Conv2d(2 * a * a, a * a, 3, stride=1, padding=1), # b, a * a, w/2, h/2
nn.ReLU(True),
nn.MaxPool2d(2, stride=2), # b, a * a, w/4, h/4
nn.MaxPool2d(2, stride=2), # b, a * a, w/8, h/8
nn.Sigmoid(),
)
self.decoder = nn.Sequential(
nn.ConvTranspose2d(a * a, 2 * a * a, 3, stride=2, padding=1, output_padding=1), # b, 2 * a * a, w/4, h/4
nn.ReLU(True),
nn.ConvTranspose2d(2 * a * a, 2 * a * a, 3, stride=2, padding=1, output_padding=1), # b, 2 * a * a, w/2, h/2
nn.ReLU(True),
nn.ConvTranspose2d(2 * a * a, 1, 3, stride=2, padding=1, output_padding=1), # b, 1, w, h
nn.Sigmoid()
)
self.encoder_output = None
def forward(self, x):
assert x.shape[-1] % 4 == 0, "Width and height must be a multiple of 4"
x = self.encoder_output = self.encoder(x)
x = self.decoder(x)
return x
###Output
_____no_output_____
###Markdown
Unit
###Code
class Unit:
def __init__(self, unit_index, samples_per_image=25, a=3, resolution=10):
self.unit_index = unit_index
self.samples_per_image = samples_per_image
self.a = a
self.model = AutoEncoder(a=a).to(device)
self.ema = EMA(0.5)
self.image_count = None
self.image_size = None
self.resolution = resolution
self.trained = False
if os.path.exists(self.save_path()):
self.model.load_state_dict(torch.load(self.save_path()))
self.model.eval()
self.trained = True
def up(self):
        if self.model.encoder_output is None:
            raise RuntimeError("must call train() before up()")
h1 = self.model.encoder_output
h1_flattened = flatten_feature_maps(h1)
#print("images ", images.shape)
#print("mu1__ ", mu1__.shape)
#print("h1 ", h1.shape)
#print("h1_flattened ", h1_flattened.shape)
h1__ = normal_distribution_table.to_pdf(h1_flattened)
last_average = h1__[0].clone()
for index in range(1, h1__.shape[0]):
h1__[index] = last_average = self.ema.forward(h1__[index], last_average)
#print("h1__ ", h1__.shape)
return h1__
def down(self, u2_bar__):
        if self.model.encoder_output is None:
            raise RuntimeError("must call train() before down()")
sampled_h1 = sample_from_images__(u2_bar__)
#print("sampled_h1 ", sampled_h1.shape)
unflattened_sampled_h1 = unflatten_feature_maps(sampled_h1, self.a).to(device)
#print("**unflattened_sampled_h1 ", unflattened_sampled_h1.shape)
h1 = self.model.encoder_output
#print("**h1 ", h1.shape)
#show_image(sampled_h1[0].detach(), title=f"sampled_h1 {0}", vmin=0, vmax=1)
#show_image(unflattened_sampled_h1[0, 0].detach(), title=f"unflattened_sampled_h1 {0}", vmin=0, vmax=1)
#show_image(h1[0, 0].detach(), title=f"h1 {0}", vmin=0, vmax=1)
#merged_h1 = (unflattened_sampled_h1 + h1) / 2.0
#merged_h1 = unflattened_sampled_h1 * 0.5 + h1 * 0.5
merged_h1 = unflattened_sampled_h1 * h1
decoded_mu1 = self.model.decoder.forward(merged_h1)
decoded_mu1 = decoded_mu1[:, 0, :, :]
#print("decoded_mu1 ", decoded_mu1.shape)
#print("image_count ", self.image_count)
#print("samples_per_image ", self.samples_per_image)
#print("image_size ", self.image_size)
mu1_bar__ = aggregate_to_pdf(mu_bar=decoded_mu1, image_count=self.image_count, samples_per_image=self.samples_per_image, iH=self.image_size, iW=self.image_size, resolution=self.resolution)
return mu1_bar__
def train(self, mu1__, num_epochs=3000):
self.image_count, self.image_size, _ = mu1__.shape
learning_rate = 1e-3
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(self.model.parameters(), lr=learning_rate,
weight_decay=1e-5)
#print("mu__", mu1__.shape)
mu1_duplicated__ = torch.stack([mu__.clone() for mu__ in mu1__ for _ in range(self.samples_per_image)])
#print("Duplicated PDFs for images in animation: ", mu1__.shape)
mu1 = sample_from_images__(mu1_duplicated__)
#print("mu1: Sampled images in animation: ", mu1.shape)
input = mu1[:, None, :, :].to(device)
if self.trained:
output = self.model(input)
else:
done = False
epoch = 0
while not done:
output = self.model(input)
loss = criterion(output, input)
# ===================backward====================
optimizer.zero_grad()
loss.backward()
optimizer.step()
if epoch % int(num_epochs / 10) == 0:
print('epoch [{}/{}], loss:{:.4f}'
.format(epoch+1, num_epochs, loss.item()))
if (loss.item() < 0.01 and epoch > 1000) or epoch > num_epochs:
done = True
epoch += 1
self.trained = True
torch.save(self.model.state_dict(), self.save_path())
return mu1, output[:,0,:,:]
def save_path(self):
return f"unit_{self.unit_index}.pt"
class UnitStack:
def __init__(self, resolution=resolution):
self.units = []
self.resolution = resolution
def append_unit(self):
if len(self.units) == 0:
samples_per_image = 10
else:
samples_per_image = 1
unit = Unit(len(self.units), samples_per_image=samples_per_image, a=4, resolution=self.resolution)
self.units.append(unit)
return unit
def process(self, mu1__):
return self.process_unit(0, mu1__)
def process_unit(self, unit_index, mu1__):
unit = self.units[unit_index]
print(f"mu{unit_index}__ :", mu1__.shape)
unit.train(mu1__, num_epochs=3000)
h1__ = unit.up()
print(f"h{unit_index}__ :", h1__.shape)
if unit_index < len(self.units) - 1:
unext_bar__ = self.process_unit(unit_index + 1, h1__)
else:
print("No next unit")
unext_bar__ = h1__
mu1_bar__ = unit.down(unext_bar__)
print(f"mu{unit_index}_bar__ :", mu1_bar__.shape)
return mu1_bar__
###Output
_____no_output_____
###Markdown
Example
###Code
image_size = 16
image_count = image_size
np.random.seed(0)
torch.manual_seed(0)
images = generate_moving_line(image_size, image_size, count=image_count).float()
#print("Distinct images in animation: ", images.shape)
mu1__ = normal_distribution_table.to_pdf(images)
#print("mu1__: PDFs for images in animation: ", mu1__.shape)
unit_stack = UnitStack(resolution=resolution)
unit_stack.append_unit()
unit_stack.append_unit()
unit_stack.append_unit()
mu1_bar__ = unit_stack.process(mu1__)
sampled_images = sample_from_images__(mu1__)
for i in range(sampled_images.shape[0]):
show_image(images[i].detach(), title=f"images {i}", vmin=0, vmax=1)
show_image(mu1_bar__[i].detach(), title=f"mu1_bar__ {i}", vmin=0, vmax=1)
show_image(sampled_images[i].detach(), title=f"sampled_images {i}", vmin=0, vmax=1)
# import os
# import glob
# files = glob.glob('./*.pt')
# for f in files:
# os.remove(f)
###Output
_____no_output_____ |
notebooks/test_interfaces_rail.ipynb | ###Markdown
New Tutorial for testing the interface of Delight with RAIL in the Vera C. Rubin Obs. context (LSST)

Getting started with Delight and LSST
- author : Sylvie Dagoret-Campagne
- affiliation : IJCLab/IN2P3/CNRS
- creation date : January 22 2022

**test delight.interface.rail** : adaptation of the original Getting Started tutorial on SDSS.
- run at NERSC with the **desc-python** python kernel.

Instructions to set up a **desc-python** environment:
- https://confluence.slac.stanford.edu/display/LSSTDESC/Getting+Started+with+Anaconda+Python+at+NERSC

This environment is a clone of the **desc-python** environment, to which the packages listed in the requirements can be added according to the instructions here:
- https://github.com/LSSTDESC/desc-python/wiki/Add-Packages-to-the-desc-python-environment

We will use the parameter file "tmpsim/parametersTestRail.cfg". This contains a description of the bands and data to be used. In this example we will generate mock data for the ugrizy LSST bands, fit each object with our GP using ugi bands only, and see how it predicts the rz bands. This is an example for filling in/predicting missing bands in a fully Bayesian way with a flexible SED model, quickly, via our photo-z GP.
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats
import sys,os
sys.path.append('../')
from delight.io import *
from delight.utils import *
from delight.photoz_gp import PhotozGP
from delight.interfaces.rail.makeConfigParam import makeConfigParam
!pwd
cd ../.
!pwd
###Output
/global/u1/d/dagoret/mydesc/Delight
###Markdown
Make config parameters - now parameters are generated in a dictionary
###Code
input_param = {}
input_param["bands_names"] = "lsst_u lsst_g lsst_r lsst_i lsst_z lsst_y"
input_param["bands_path"] = "data/FILTERS"
input_param["bands_fmt"] = "res"
input_param["bands_numcoefs"] = 15
input_param["bands_verbose"] = "True"
input_param["bands_debug"] = "True"
input_param["bands_makeplots"]= "False"
input_param['sed_path'] = "data/CWW_SEDs"
input_param['sed_name_list'] = "El_B2004a Sbc_B2004a Scd_B2004a SB3_B2004a SB2_B2004a Im_B2004a ssp_25Myr_z008 ssp_5Myr_z008"
input_param['sed_fmt'] = "dat"
input_param['prior_t_list'] = "0.27 0.26 0.25 0.069 0.021 0.11 0.0061 0.0079"
input_param['prior_zt_list'] = "0.23 0.39 0.33 0.31 1.1 0.34 1.2 0.14"
input_param['lambda_ref'] = "4.5e3"
input_param['tempdir'] = "./tmpsim"
input_param["tempdatadir"] = "./tmpsim/delight_data"
input_param['train_refbandorder'] = "lsst_u lsst_u_var lsst_g lsst_g_var lsst_r lsst_r_var lsst_i lsst_i_var lsst_z lsst_z_var lsst_y lsst_y_var redshift"
input_param['train_refband'] = "lsst_i"
input_param['train_fracfluxerr'] = "1e-4"
input_param['train_xvalidate'] = "False"
input_param['train_xvalbandorder'] = "_ _ _ _ lsst_r lsst_r_var _ _ _ _ _ _"
input_param['target_refbandorder'] = "lsst_u lsst_u_var lsst_g lsst_g_var lsst_r lsst_r_var lsst_i lsst_i_var lsst_z lsst_z_var lsst_y lsst_y_var redshift"
input_param['target_refband'] = "lsst_r"
input_param['target_fracfluxerr'] = "1e-4"
input_param["zPriorSigma"] = "0.2"
input_param["ellPriorSigma"] = "0.5"
input_param["fluxLuminosityNorm"] = "1.0"
input_param["alpha_C"] = "1.0e3"
input_param["V_C"] = "0.1"
input_param["alpha_L"] = "1.0e2"
input_param["V_L"] = "0.1"
input_param["lineWidthSigma"] = "20"
input_param["dlght_redshiftMin"] = "0.1"
input_param["dlght_redshiftMax"] = "1.101"
input_param["dlght_redshiftNumBinsGPpred"] = "100"
input_param["dlght_redshiftBinSize"] = "0.01"
input_param["dlght_redshiftDisBinSize"] = "0.2"
###Output
_____no_output_____
###Markdown
- **makeConfigParam** generates a long string defining the required parameters
###Code
paramfile_txt = makeConfigParam("data",input_param)
print(paramfile_txt)
###Output
# DELIGHT parameter file
# Syntactic rules:
# - You can set parameters with : or =
# - Lines starting with # or ; will be ignored
# - Multiple values (band names, band orders, confidence levels)
# must beb separated by spaces
# - The input files should contain numbers separated with spaces.
# - underscores mean unused column
[Bands]
names: lsst_u lsst_g lsst_r lsst_i lsst_z lsst_y
directory: data/FILTERS
bands_fmt: res
numCoefs: 15
bands_verbose: True
bands_debug: True
bands_makeplots: False
[Templates]
directory: data/CWW_SEDs
names: El_B2004a Sbc_B2004a Scd_B2004a SB3_B2004a SB2_B2004a Im_B2004a ssp_25Myr_z008 ssp_5Myr_z008
sed_fmt: dat
p_t: 0.27 0.26 0.25 0.069 0.021 0.11 0.0061 0.0079
p_z_t: 0.23 0.39 0.33 0.31 1.1 0.34 1.2 0.14
lambdaRef: 4.5e3
[Simulation]
numObjects: 1000
noiseLevel: 0.03
trainingFile: ./tmpsim/delight_data/galaxies-fluxredshifts.txt
targetFile: ./tmpsim/delight_data/galaxies-fluxredshifts2.txt
[Training]
catFile: ./tmpsim/delight_data/galaxies-fluxredshifts.txt
bandOrder: lsst_u lsst_u_var lsst_g lsst_g_var lsst_r lsst_r_var lsst_i lsst_i_var lsst_z lsst_z_var lsst_y lsst_y_var redshift
referenceBand: lsst_i
extraFracFluxError: 1e-4
crossValidate: False
crossValidationBandOrder: _ _ _ _ lsst_r lsst_r_var _ _ _ _ _ _
paramFile: ./tmpsim/delight_data/galaxies-gpparams.txt
CVfile: ./tmpsim/delight_data/galaxies-gpCV.txt
numChunks: 1
[Target]
catFile: ./tmpsim/delight_data/galaxies-fluxredshifts2.txt
bandOrder: lsst_u lsst_u_var lsst_g lsst_g_var lsst_r lsst_r_var lsst_i lsst_i_var lsst_z lsst_z_var lsst_y lsst_y_var redshift
referenceBand: lsst_r
extraFracFluxError: 1e-4
redshiftpdfFile: ./tmpsim/delight_data/galaxies-redshiftpdfs.txt
redshiftpdfFileTemp: ./tmpsim/delight_data/galaxies-redshiftpdfs-cww.txt
metricsFile: ./tmpsim/delight_data/galaxies-redshiftmetrics.txt
metricsFileTemp: ./tmpsim/delight_data/galaxies-redshiftmetrics-cww.txt
useCompression: False
Ncompress: 10
compressIndicesFile: ./tmpsim/delight_data/galaxies-compressionIndices.txt
compressMargLikFile: ./tmpsim/delight_data/galaxies-compressionMargLikes.txt
redshiftpdfFileComp: ./tmpsim/delight_data/galaxies-redshiftpdfs-comp.txt
[Other]
rootDir: ./
zPriorSigma: 0.2
ellPriorSigma: 0.5
fluxLuminosityNorm: 1.0
alpha_C: 1.0e3
V_C: 0.1
alpha_L: 1.0e2
V_L: 0.1
lines_pos: 6500 5002.26 3732.22
lines_width: 20 20 20 20
redshiftMin: 0.1
redshiftMax: 1.101
redshiftNumBinsGPpred: 100
redshiftBinSize: 0.01
redshiftDisBinSize: 0.2
confidenceLevels: 0.1 0.50 0.68 0.95
###Markdown
Temporary working dir

**Now intermediate files are written in a temporary directory:**
- configuration parameter file
- input fluxes
- Template fitting and Gaussian Process parameters
- metrics from running Template fitting and Gaussian Process estimation
###Code
# create a useful temporary directory
import errno
try:
    if not os.path.exists(input_param["tempdir"]):
        os.makedirs(input_param["tempdir"])
except OSError as e:
    if e.errno != errno.EEXIST:
        print("error creating directory " + input_param["tempdir"])
        raise
configfilename = 'parametersTestRail.cfg'
configfullfilename = os.path.join(input_param['tempdir'],configfilename)
###Output
_____no_output_____
###Markdown
- **write parameter file**
###Code
with open(configfullfilename ,'w') as out:
out.write(paramfile_txt)
###Output
_____no_output_____
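###Markdown
Since the file just written is a standard INI-style config, it can be read back with Python's built-in `configparser` as a quick sanity check (this check is an addition for illustration, not part of the Delight interface):
###Code
# Optional sanity check: parse the freshly written config file back in
import configparser
cfg = configparser.ConfigParser()
cfg.read(configfullfilename)
print(cfg.sections())
print(cfg['Bands']['names'])
###Output
_____no_output_____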
###Markdown
Filters - First, we must **fit the band filters with a Gaussian mixture**. This is done with this script:
###Code
from delight.interfaces.rail.processFilters import processFilters
processFilters(configfullfilename)
###Output
2022-01-22 11:29:00,321 __main__.py delight.interfaces.rail.processFilters[26348] INFO ----- processFilters ------
2022-01-22 11:29:00,322 __main__.py delight.interfaces.rail.processFilters[26348] INFO parameter file is ./tmpsim/parametersTestRail.cfg
###Markdown
SED - Second, we will process the library of SEDs and project them onto the filters (for the mean function of the GP) with the following script (which may take a few minutes depending on your settings):
###Code
from delight.interfaces.rail.processSEDs import processSEDs
processSEDs(configfullfilename)
###Output
2022-01-22 11:29:33,676 __main__.py, delight.interfaces.rail.processSEDs[26348] INFO --- Process SED ---
###Markdown
Manage temporary working data (fluxes and GP params and metrics) directories
###Code
import errno
try:
    if not os.path.exists(input_param["tempdatadir"]):
        os.makedirs(input_param["tempdatadir"])
except OSError as e:
    if e.errno != errno.EEXIST:
        print("error creating directory " + input_param["tempdatadir"])
        raise
###Output
_____no_output_____
###Markdown
Internal simulation of a mock catalog

Third, we will make some mock data with those filters and SEDs:
###Code
from delight.interfaces.rail.simulateWithSEDs import simulateWithSEDs
simulateWithSEDs(configfullfilename)
###Output
2022-01-22 11:29:36,924 __main__.py, delight.interfaces.rail.simulateWithSEDs[26348] INFO --- Simulate with SED ---
###Markdown
Train and apply

Run the scripts below. There should be a little bit of feedback as they work through the lines. For up to 1e4 objects it should only take a few minutes at most, depending on the settings above.

Template Fitting
###Code
from delight.interfaces.rail.templateFitting import templateFitting
templateFitting(configfullfilename)
###Output
2022-01-22 11:29:37,300 __main__.py, delight.interfaces.rail.templateFitting[26348] INFO --- TEMPLATE FITTING ---
2022-01-22 11:29:37,300 __main__.py, delight.interfaces.rail.templateFitting[26348] INFO ==> New Prior calculation from Benitez
2022-01-22 11:29:37,303 __main__.py, delight.interfaces.rail.templateFitting[26348] INFO Thread number / number of threads: 1 , 1
2022-01-22 11:29:37,303 __main__.py, delight.interfaces.rail.templateFitting[26348] INFO Input parameter file:./tmpsim/parametersTestRail.cfg
2022-01-22 11:29:37,316 __main__.py, delight.interfaces.rail.templateFitting[26348] INFO Number of Target Objects 1000
2022-01-22 11:29:37,316 __main__.py, delight.interfaces.rail.templateFitting[26348] INFO Thread 0 , analyzes lines 0 , to 1000
###Markdown
Gaussian Process Training
###Code
from delight.interfaces.rail.delightLearn import delightLearn
delightLearn(configfullfilename)
###Output
2022-01-22 11:29:45,217 __main__.py, delight.interfaces.rail.delightLearn[26348] INFO --- DELIGHT-LEARN ---
2022-01-22 11:29:45,232 __main__.py, delight.interfaces.rail.delightLearn[26348] INFO Number of Training Objects 1000
2022-01-22 11:29:45,232 __main__.py, delight.interfaces.rail.delightLearn[26348] INFO Thread 0 , analyzes lines 0 , to 1000
###Markdown
Predictions
###Code
from delight.interfaces.rail.delightApply import delightApply
delightApply(configfullfilename)
###Output
2022-01-22 11:29:56,913 __main__.py, delight.interfaces.rail.delightApply[26348] INFO --- DELIGHT-APPLY ---
2022-01-22 11:29:56,939 __main__.py, delight.interfaces.rail.delightApply[26348] INFO Number of Training Objects 1000
2022-01-22 11:29:56,940 __main__.py, delight.interfaces.rail.delightApply[26348] INFO Number of Target Objects 1000
2022-01-22 11:29:56,940 __main__.py, delight.interfaces.rail.delightApply[26348] INFO Thread 0 , analyzes lines 0 to 1000
###Markdown
Analyze the outputs
###Code
# First read a bunch of useful stuff from the parameter file.
params = parseParamFile(configfullfilename, verbose=False)
bandCoefAmplitudes, bandCoefPositions, bandCoefWidths, norms\
= readBandCoefficients(params)
bandNames = params['bandNames']
numBands, numCoefs = bandCoefAmplitudes.shape
fluxredshifts = np.loadtxt(params['target_catFile'])
fluxredshifts_train = np.loadtxt(params['training_catFile'])
bandIndices, bandNames, bandColumns, bandVarColumns, redshiftColumn,\
refBandColumn = readColumnPositions(params, prefix='target_')
redshiftDistGrid, redshiftGrid, redshiftGridGP = createGrids(params)
dir_seds = params['templates_directory']
dir_filters = params['bands_directory']
lambdaRef = params['lambdaRef']
sed_names = params['templates_names']
nt = len(sed_names)
f_mod = np.zeros((redshiftGrid.size, nt, len(params['bandNames'])))
for t, sed_name in enumerate(sed_names):
f_mod[:, t, :] = np.loadtxt(dir_seds + '/' + sed_name + '_fluxredshiftmod.txt')
# Load the PDF files
metricscww = np.loadtxt(params['metricsFile'])
metrics = np.loadtxt(params['metricsFileTemp'])
# Those of the indices of the true, mean, stdev, map, and map_std redshifts.
i_zt, i_zm, i_std_zm, i_zmap, i_std_zmap = 0, 1, 2, 3, 4
i_ze = i_zm
i_std_ze = i_std_zm
pdfs = np.loadtxt(params['redshiftpdfFile'])
pdfs_cww = np.loadtxt(params['redshiftpdfFileTemp'])
pdfatZ_cww = metricscww[:, 5] / pdfs_cww.max(axis=1)
pdfatZ = metrics[:, 5] / pdfs.max(axis=1)
nobj = pdfatZ.size
#pdfs /= pdfs.max(axis=1)[:, None]
#pdfs_cww /= pdfs_cww.max(axis=1)[:, None]
pdfs /= np.trapz(pdfs, x=redshiftGrid, axis=1)[:, None]
pdfs_cww /= np.trapz(pdfs_cww, x=redshiftGrid, axis=1)[:, None]
ncol = 4
fig, axs = plt.subplots(5, ncol, figsize=(10, 9), sharex=True, sharey=False)
axs = axs.ravel()
z = fluxredshifts[:, redshiftColumn]
sel = np.random.choice(nobj, axs.size, replace=False)
lw = 2
for ik in range(axs.size):
k = sel[ik]
print(k, end=" ")
axs[ik].plot(redshiftGrid, pdfs_cww[k, :],lw=lw, label='Standard template fitting')# c="#2ecc71",
axs[ik].plot(redshiftGrid, pdfs[k, :], lw=lw, label='New method') #, c="#3498db"
axs[ik].axvline(fluxredshifts[k, redshiftColumn], c="k", lw=1, label='Spec-z')
ymax = np.max(np.concatenate((pdfs[k, :], pdfs_cww[k, :])))
axs[ik].set_ylim([0, ymax*1.2])
axs[ik].set_xlim([0, 1.1])
axs[ik].set_yticks([])
axs[ik].set_xticks([0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4])
for i in range(ncol):
axs[-i-1].set_xlabel('Redshift', fontsize=10)
axs[0].legend(ncol=3, frameon=False, loc='upper left', bbox_to_anchor=(0.0, 1.4))
#fig.tight_layout()
#fig.subplots_adjust(wspace=0.1, hspace=0.1, top=0.96)
fig, axs = plt.subplots(2, 2, figsize=(10, 10))
zmax = 1.5
rr = [[0, zmax], [0, zmax]]
nbins = 30
h = axs[0, 0].hist2d(metricscww[:, i_zt], metricscww[:, i_zm], nbins, cmap='Greys', range=rr)
hmin, hmax = np.min(h[0]), np.max(h[0])
axs[0, 0].set_title('CWW z mean')
axs[0, 1].hist2d(metricscww[:, i_zt], metricscww[:, i_zmap], nbins, cmap='Greys', range=rr, vmax=hmax)
axs[0, 1].set_title('CWW z map')
axs[1, 0].hist2d(metrics[:, i_zt], metrics[:, i_zm], nbins, cmap='Greys', range=rr, vmax=hmax)
axs[1, 0].set_title('GP z mean')
axs[1, 1].hist2d(metrics[:, i_zt], metrics[:, i_zmap], nbins, cmap='Greys', range=rr, vmax=hmax)
axs[1, 1].set_title('GP z map')
axs[0, 0].plot([0, zmax], [0, zmax], c='k')
axs[0, 1].plot([0, zmax], [0, zmax], c='k')
axs[1, 0].plot([0, zmax], [0, zmax], c='k')
axs[1, 1].plot([0, zmax], [0, zmax], c='k')
#fig.tight_layout()
fig, axs = plt.subplots(1, 2, figsize=(10, 5.5))
chi2s = ((metrics[:, i_zt] - metrics[:, i_ze])/metrics[:, i_std_ze])**2
axs[0].errorbar(metrics[:, i_zt], metrics[:, i_ze], yerr=metrics[:, i_std_ze], fmt='o', markersize=5, capsize=0)
axs[1].errorbar(metricscww[:, i_zt], metricscww[:, i_ze], yerr=metricscww[:, i_std_ze], fmt='o', markersize=5, capsize=0)
axs[0].plot([0, zmax], [0, zmax], 'k')
axs[1].plot([0, zmax], [0, zmax], 'k')
axs[0].set_xlim([0, zmax])
axs[1].set_xlim([0, zmax])
axs[0].set_ylim([0, zmax])
axs[1].set_ylim([0, zmax])
axs[0].set_title('New method')
axs[1].set_title('Standard template fitting')
fig.tight_layout()
cmap = "coolwarm_r"
vmin = 0.0
alpha = 0.9
s = 5
fig, axs = plt.subplots(1, 2, figsize=(10, 3.5))
vs = axs[0].scatter(metricscww[:, i_zt], metricscww[:, i_zmap],
s=s, c=pdfatZ_cww, cmap=cmap, linewidth=0, vmin=vmin, alpha=alpha)
vs = axs[1].scatter(metrics[:, i_zt], metrics[:, i_zmap],
s=s, c=pdfatZ, cmap=cmap, linewidth=0, vmin=vmin, alpha=alpha)
clb = plt.colorbar(vs, ax=axs.ravel().tolist())
clb.set_label('Normalized probability at spec-$z$')
for i in range(2):
axs[i].plot([0, zmax], [0, zmax], c='k', lw=1, zorder=0, alpha=1)
axs[i].set_ylim([0, zmax])
axs[i].set_xlim([0, zmax])
axs[i].set_xlabel('Spec-$z$')
axs[0].set_ylabel('MAP photo-$z$')
axs[0].set_title('Standard template fitting')
axs[1].set_title('New method')
###Output
_____no_output_____ |
fashion_mnist/Analysis-distance.ipynb | ###Markdown
Distances between Eigenvalues
###Code
# Values 1-10: the ten per-eigenvalue blocks were identical except for the K
# index, so they are collapsed into a single loop over K0..K9 (same file
# reads, same prints, same order as before).
import numpy as np
import pandas as pd

for k in range(10):
    df_cnn_sgmd = pd.read_csv("results/sgmd/cnn_K{}.csv".format(k))
    df_cnn_tanh = pd.read_csv("results/tanh/cnn_K{}.csv".format(k))
    df_cnn_relu = pd.read_csv("results/relu/cnn_K{}.csv".format(k))

    print("Sgmd to Tanh: ", float(np.linalg.norm(df_cnn_sgmd.to_numpy() - df_cnn_tanh.to_numpy())))
    print("Tanh to Relu: ", float(np.linalg.norm(df_cnn_tanh.to_numpy() - df_cnn_relu.to_numpy())))
    print("Relu to Sgmd: ", float(np.linalg.norm(df_cnn_sgmd.to_numpy() - df_cnn_relu.to_numpy())))
    print()
df_k1 = np.transpose(df_k1)
df_k2 = np.transpose(df_k2)
df_k3 = np.transpose(df_k3)
# print(df_k1)
# print(df_k2)
# print(df_k3)
print("Sgmd to Tanh: ", float(np.linalg.norm(df_k1 - df_k2)))
print("Tanh to Relu: ", float(np.linalg.norm(df_k2 - df_k3)))
print("Relu to Sgmd: ", float(np.linalg.norm(df_k1 - df_k3)))
###Output
Sgmd to Tanh: 0.870831310749054
Tanh to Relu: 1.0822906494140625
Relu to Sgmd: 1.0941476821899414
|
Python 101 For Data Science/4.3-LoadData.ipynb | ###Markdown
Introduction to Pandas Python

Welcome! This notebook will teach you about using Pandas in the Python Programming Language. By the end of this lab, you'll know how to use the Pandas package to view and access data.

Table of Contents
- About the Dataset
- Introduction of Pandas
- Viewing Data and Accessing Data
- Quiz on DataFrame

Estimated time needed: 15 min

About the Dataset

The table has one row for each album and several columns:
- artist: Name of the artist
- album: Name of the album
- released_year: Year the album was released
- length_min_sec: Length of the album (hours,minutes,seconds)
- genre: Genre of the album
- music_recording_sales_millions: Music recording sales (millions in USD) on [SONG://DATABASE]
- claimed_sales_millions: Album's claimed sales (millions in USD) on [SONG://DATABASE]
- date_released: Date on which the album was released
- soundtrack: Indicates if the album is the movie soundtrack (Y) or (N)
- rating_of_friends: Indicates the rating from your friends from 1 to 10

You can see the dataset here:

| Artist | Album | Released | Length | Genre | Music recording sales (millions) | Claimed sales (millions) | Released | Soundtrack | Rating (friends) |
|---|---|---|---|---|---|---|---|---|---|
| Michael Jackson | Thriller | 1982 | 00:42:19 | Pop, rock, R&B | 46 | 65 | 30-Nov-82 | | 10.0 |
| AC/DC | Back in Black | 1980 | 00:42:11 | Hard rock | 26.1 | 50 | 25-Jul-80 | | 8.5 |
| Pink Floyd | The Dark Side of the Moon | 1973 | 00:42:49 | Progressive rock | 24.2 | 45 | 01-Mar-73 | | 9.5 |
| Whitney Houston | The Bodyguard | 1992 | 00:57:44 | Soundtrack/R&B, soul, pop | 26.1 | 50 | 25-Jul-80 | Y | 7.0 |
| Meat Loaf | Bat Out of Hell | 1977 | 00:46:33 | Hard rock, progressive rock | 20.6 | 43 | 21-Oct-77 | | 7.0 |
| Eagles | Their Greatest Hits (1971-1975) | 1976 | 00:43:08 | Rock, soft rock, folk rock | 32.2 | 42 | 17-Feb-76 | | 9.5 |
| Bee Gees | Saturday Night Fever | 1977 | 1:15:54 | Disco | 20.6 | 40 | 15-Nov-77 | Y | 9.0 |
| Fleetwood Mac | Rumours | 1977 | 00:40:01 | Soft rock | 27.9 | 40 | 04-Feb-77 | | 9.5 |

Introduction of Pandas
###Code
# Dependency needed to install file
!pip install xlrd
# Import required library
import pandas as pd
###Output
_____no_output_____
###Markdown
After the import command, we now have access to a large number of pre-built classes and functions. This assumes the library is installed; in our lab environment all the necessary libraries are installed. One way pandas allows you to work with data is a dataframe. Let's go through the process of going from a comma separated values (.csv) file to a dataframe. The variable csv_path stores the path of the .csv file, which is used as an argument to the read_csv function. The result is stored in the object df; this is a common short form used for a variable referring to a Pandas dataframe.
###Code
# Read data from CSV file
csv_path = 'https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%204/Datasets/TopSellingAlbums.csv'
df = pd.read_csv(csv_path)
###Output
_____no_output_____
###Markdown
We can use the method head() to examine the first five rows of a dataframe:
###Code
# Print first five rows of the dataframe
df.head()
###Output
_____no_output_____
###Markdown
We use the path of the excel file and the function read_excel. The result is a data frame as before:
###Code
# Read data from Excel File and print the first five rows
xlsx_path = 'https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%204/Datasets/TopSellingAlbums.xlsx'
df = pd.read_excel(xlsx_path)
df.head()
###Output
_____no_output_____
###Markdown
We can access the column Length and assign it to a new dataframe x:
###Code
# Access to the column Length
x = df[['Length']]
x
###Output
_____no_output_____
###Markdown
The process is shown in the figure:

Viewing Data and Accessing Data

You can also get a column as a series. You can think of a Pandas series as a 1-D dataframe. Just use one bracket:
###Code
# Get the column as a series
x = df['Length']
x
###Output
_____no_output_____
###Markdown
You can also get a column as a dataframe. For example, we can assign the column Artist:
###Code
# Get the column as a dataframe and check its type
x = df[['Artist']]
type(x)
###Output
_____no_output_____
###Markdown
You can do the same thing for multiple columns; we just put the dataframe name, in this case, df, and the name of the multiple column headers enclosed in double brackets. The result is a new dataframe comprised of the specified columns:
###Code
# Access to multiple columns
y = df[['Artist','Length','Genre']]
y
###Output
_____no_output_____
###Markdown
The process is shown in the figure:

One way to access unique elements is the iloc method, where you can access the 1st row and the 1st column as follows:
###Code
# Access the value on the first row and the first column
df.iloc[0, 0]
###Output
_____no_output_____
###Markdown
You can access the 2nd row and the 1st column as follows:
###Code
# Access the value on the second row and the first column
df.iloc[1,0]
###Output
_____no_output_____
###Markdown
You can access the 1st row and the 3rd column as follows:
###Code
# Access the value on the first row and the third column
df.iloc[0,2]
###Output
_____no_output_____
###Markdown
You can access the column using the name as well, the following are the same as above:
###Code
# Access the column using the name
df.loc[0, 'Artist']
# Access the column using the name
df.loc[1, 'Artist']
# Access the column using the name
df.loc[0, 'Released']
# Access the column using the name
df.loc[1, 'Released']
###Output
_____no_output_____
###Markdown
You can perform slicing using both the index and the name of the column:
###Code
# Slicing the dataframe
df.iloc[0:2, 0:3]
# Slicing the dataframe using name
df.loc[0:2, 'Artist':'Released']
###Output
_____no_output_____
###Markdown
Quiz on DataFrame Use a variable q to store the column Rating as a dataframe
###Code
# Write your code below and press Shift+Enter to execute
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- Your answer is below:q = df[['Rating']]q--> Assign the variable q to the dataframe that is made up of the column Released and Artist:
###Code
# Write your code below and press Shift+Enter to execute
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- Your answer is below:q = df[['Released', 'Artist']]q--> Access the 2nd row and the 3rd column of df:
###Code
# Write your code below and press Shift+Enter to execute
###Output
_____no_output_____ |
docs/_downloads/6491634d939b8bbc697b02aab860bf2c/polynomial_nn.ipynb | ###Markdown
PyTorch: nn
-----------

We train a third-order polynomial to predict $y=\sin(x)$ from $-\pi$ to $\pi$ by minimizing the Euclidean distance.

This time we implement the neural network using PyTorch's nn package. PyTorch autograd makes it easy to define computational graphs and compute gradients, but raw autograd alone can be too low-level for defining complex neural networks; this is why the nn package is needed. The nn package defines a set of Modules, such as neural network layers, that produce output from input and hold learnable weights.
###Code
import torch
import math

# Create Tensors to hold input and outputs.
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)

# In this example, the output y is a linear function of (x, x^2, x^3), so we
# can consider it a linear-layer neural network. Prepare a tensor for (x, x^2, x^3).
p = torch.tensor([1, 2, 3])
xx = x.unsqueeze(-1).pow(p)
# In the code above, x.unsqueeze(-1) has shape (2000, 1) and p has shape (3,),
# so broadcasting applies and we obtain a tensor of shape (2000, 3).

# Use the nn package to define the model as a sequence of layers. nn.Sequential
# is a Module that contains other Modules and applies them in sequence to
# produce its output. Each Linear Module computes output from input using a
# linear function and stores its weight and bias in internal Tensors.
# The Flatten layer flattens the linear layer's output to a 1D tensor to
# match the shape of `y`.
model = torch.nn.Sequential(
    torch.nn.Linear(3, 1),
    torch.nn.Flatten(0, 1)
)

# The nn package also contains definitions of commonly used loss functions;
# here we use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(reduction='sum')

learning_rate = 1e-6
for t in range(2000):

    # Forward pass: compute predicted y by passing x to the model. Module
    # objects override the __call__ operator so they can be called like
    # functions: we pass a Tensor of input data to the Module and it produces
    # a Tensor of output data.
    y_pred = model(xx)

    # Compute and print the loss. We pass Tensors containing the predicted and
    # true values of y, and the loss function returns a Tensor containing the loss.
    loss = loss_fn(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())

    # Zero the gradients before running the backward pass.
    model.zero_grad()

    # Backward pass: compute the gradient of the loss with respect to all
    # learnable parameters of the model. Internally, each Module's parameters
    # are stored in Tensors with requires_grad=True, so this call computes
    # gradients for all learnable parameters of the model.
    loss.backward()

    # Update the weights using gradient descent. Each parameter is a Tensor,
    # so we can access its gradients as before.
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad

# You can access the first layer of `model` like accessing the first item of a list.
linear_layer = model[0]

# For a linear layer, the parameters are stored as `weight` and `bias`.
print(f'Result: y = {linear_layer.bias.item()} + {linear_layer.weight[:, 0].item()} x + {linear_layer.weight[:, 1].item()} x^2 + {linear_layer.weight[:, 2].item()} x^3')
###Output
_____no_output_____ |
hw2/AMATH563_hw2_Q1_Q2.ipynb | ###Markdown
Michelle Hu
---
University of Washington
AMATH 563 Homework 2
Due: May 6, 2020
###Code
%load_ext autoreload
%autoreload 2
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import pickle
import seaborn as sns
wd = "/mnt/Backups/jmhu"
data_dir = wd + "/git_dirs/ComplexSystems_AMATH563/hw2/data/"
model_dir = wd + "/git_dirs/ComplexSystems_AMATH563/hw2/models/"
fig_dir = wd + "/git_dirs/ComplexSystems_AMATH563/hw2/figures/"
if not os.path.exists(data_dir):
!mkdir $data_dir
if not os.path.exists(model_dir):
!mkdir $model_dir
if not os.path.exists(fig_dir):
!mkdir $fig_dir
###Output
_____no_output_____
###Markdown
Load data
###Code
df = pd.read_csv(data_dir+"population_data.csv")
df.head()
###Output
_____no_output_____
###Markdown
Define functions
###Code
def densify(t, y, dt):
from scipy.interpolate import interp1d
f = interp1d(t, y, kind='cubic')
tnew = np.arange(t[0], t[-1], dt)
ynew = f(tnew)
return(tnew, ynew)
def DMD(X, Xprime, r, dt):
'''Dynamic Mode Decomposition Function from book'''
U,Sigma,VT = np.linalg.svd(X,full_matrices=0) # Step 1
Ur = U[:,:r]
Sigmar = np.diag(Sigma[:r])
VTr = VT[:r,:]
Atilde = np.linalg.solve(Sigmar.T,(Ur.T @ Xprime @ VTr.T).T).T # Step 2
Lambda, W = np.linalg.eig(Atilde) # Step 3
Lambda = np.diag(Lambda)
Phi = Xprime @ np.linalg.solve(Sigmar.T,VTr).T @ W # Step 4
alpha1 = Sigmar @ VTr[:,0]
b = np.linalg.solve(W @ Lambda,alpha1)
Omega = np.log(np.diag(Lambda))/dt
return Phi, Omega, b, Lambda
def forecast(Phi, Omega, t, b, r, dt):
u_modes = np.zeros((r, t))
time_vector = dt*np.arange(-1, t-1)
for i in np.arange(0, t):
u_modes[:, i]=b * np.exp(Omega * time_vector[i])
Xdmd = Phi @ u_modes
print(t, u_modes.shape, time_vector.shape, Xdmd.shape)
return(Xdmd)
###Output
_____no_output_____
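###Markdown
For reference (this note simply restates what the code above implements): the `forecast` function evaluates the standard DMD solution $x(t) \approx \Phi \, \mathrm{diag}(e^{\omega t}) \, b = \sum_{k=1}^{r} \phi_k e^{\omega_k t} b_k$, where the continuous-time eigenvalues $\omega = \log(\lambda)/\Delta t$ are computed in `DMD` from the discrete eigenvalues $\lambda$ of $\tilde{A}$.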
###Markdown
Figure formatting
###Code
y_position=0.92
fontsize=16
weight="bold"
###Output
_____no_output_____
###Markdown
Question 1. Develop a DMD model to forecast the future population states Set-up matrices
###Code
# Hare and Lynx columns
X=df.values[:, 1:3]
X.shape
make_it_dense = 5
if make_it_dense is not None:
# For DMD with interpolation
t = df.Year.values
dt = (t[1] - t[0])/make_it_dense # make 100 times more points
# Interpolate for more points
years, dense_hare = densify(t, df.Hare.values, dt)
years, dense_lynx = densify(t, df.Lynx.values, dt)
X_dense = np.stack((dense_hare, dense_lynx)).T # in form time, states
# DMD dense set-up
X = X_dense[:-1,:]
Xprime = X_dense[1:,:]
else:
# For DMD without interpolation
X = df.values[:-1, 1:3]
Xprime = df.values[1:, 1:3]
# Transpose matrices --> two state variables, want observations on other dimension
X=X.T
Xprime=Xprime.T
print(X.shape, Xprime.shape)
###Output
(2, 144) (2, 144)
###Markdown
Run DMD
###Code
r=2
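# rank r = 2: with only two measured states (hare and lynx), DMD can return at most two modes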
Phi, Omega, b, Lambda = DMD(X, Xprime, r, dt)
print(" Phi : Omega : b ", )
print(Phi.shape, Omega.shape, b.shape)
###Output
Phi : Omega : b
(2, 2) (2,) (2,)
###Markdown
Check values
###Code
print("Phi:", Phi, "\n")
print("Lambda:", Lambda, "\n")
print("Omega:", Omega, "\n")
print("b:", b, "\n")
###Output
Phi: [[-0.81833442-0.15178844j -0.81833442+0.15178844j]
[-0.46735257+0.21165875j -0.46735257-0.21165875j]]
Lambda: [[0.97633761+0.05190062j 0. +0.j ]
[0. +0.j 0.97633761-0.05190062j]]
Omega: [-0.0563398+0.13277123j -0.0563398-0.13277123j]
b: [-18.61671244-34.48679298j -18.61671244+34.48679298j]
###Markdown
DMD reconstruction as in dmd_intro.m
###Code
t = X.shape[1]
Xdmd = forecast(Phi, Omega, t, b, r, dt)
###Output
144 (2, 144) (144,) (2, 144)
###Markdown
Plot DMD reconstruction
###Code
h=0
l=1
labels=["Hare", "Lynx"]
plt.figure(figsize=(12, 4))
plt.plot(X.T[:,h], "b", label=str("Original " + labels[h]))
plt.plot(X.T[:,l], "y", label=str("Original " + labels[l]))
plt.plot(Xdmd.T[:,h], "b--", label="Reconstructed " + labels[h])
plt.plot(Xdmd.T[:,l], "y--", label="Reconstructed " + labels[l])
plt.title("DMD reconstruction", fontsize=fontsize, weight=weight);
plt.legend()
###Output
/home/jmhu/miniconda/envs/gda_py3/lib/python3.6/site-packages/numpy/core/_asarray.py:85: ComplexWarning: Casting complex values to real discards the imaginary part
return array(a, dtype, copy=False, order=order)
###Markdown
Note: with only 29 observations apiece (the original, non-interpolated dataset), there are no imaginary components for Lambda; any amount of interpolation improves this.
###Code
t_forecast = t*2
Xdmd = forecast(Phi, Omega, t_forecast, b, r, dt)
h=0
l=1
labels=["Hare", "Lynx"]
fig, axes = plt.subplots(figsize=(12, 4))
axes.plot(Xdmd.T[:,h], "b--", label="Reconstructed" + labels[h])
axes.plot(Xdmd.T[:,l], "y--", label="Reconstructed " + labels[l])
axes.plot(X.T[:,h], "b", label=str("Original " + labels[h]))
axes.plot(X.T[:,l], "y", label=str("Original " + labels[l]))
axes.legend()
plt.title("DMD forecast", fontsize=fontsize, weight=weight);
###Output
/home/jmhu/miniconda/envs/gda_py3/lib/python3.6/site-packages/ipykernel_launcher.py:28: ComplexWarning: Casting complex values to real discards the imaginary part
###Markdown
Save the DMD model components (noting the densified points). DMD model reconstruction requires: Phi, b, Omega, and r.
###Code
DMD_components = [Phi, b, Omega, r]
DMD_fn = model_dir + str("DMD" + "_" + str(make_it_dense) + "pts.pkl")
if not os.path.exists(DMD_fn):
with open(DMD_fn, "wb") as file:
pickle.dump(DMD_components, file)
else:
print(DMD_fn, "exists")
###Output
/mnt/Backups/jmhu/git_dirs/ComplexSystems_AMATH563/hw2/models/DMD_5pts.pkl exists
###Markdown
--- Question 2. Do a time-delay DMD model to produce a forecast and compare with regular DMD. Determine if it is likely that there are latent variables.
###Code
X=df.Hare.values
Y=df.Lynx.values
from scipy.linalg import hankel
# Construct Hankel matrix
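# hankel(x) stacks time-shifted copies of the series (constant anti-diagonals,
# zero-padded past the end of the record), i.e. a time-delay embedding of the
# measurements.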
H_X = hankel(X)
H_Y = hankel(Y)
print(H_X.shape, H_Y.shape)
# Take SVD of H_X
u, s, v = np.linalg.svd(H_X)
var_explained = np.round(s**2/np.sum(s**2), decimals=3)
print(len(var_explained[var_explained>0]))
fig, ax = plt.subplots(2, 1, figsize=(14, 8), sharey=True)
ax[0].plot(var_explained)
ax[0].plot(np.diag(np.diagflat(s)/np.sum(np.diagflat(s))), "ro");
ax[0].set_ylabel('% Var Explained (Hare)', fontsize=16)
ax[1].set_ylabel('% Var Explained (Lynx)', fontsize=16)
sns.barplot(x=list(range(1,len(var_explained)+1)),
y=var_explained, color="limegreen", ec='k', ax=ax[0])
# Take SVD of H_Y
u, s, v = np.linalg.svd(H_Y)
var_explained = np.round(s**2/np.sum(s**2), decimals=3)
print(len(var_explained[var_explained>0]))
ax[1].plot(var_explained)
ax[1].plot(np.diag(np.diagflat(s)/np.sum(np.diagflat(s))), "ro");
sns.barplot(x=list(range(1,len(var_explained)+1)),
y=var_explained, color="limegreen", ec='k', ax=ax[1])
plt.xlabel('SVs', fontsize=16);
###Output
21
20
###Markdown
The SVD shows 21 and 20 potential variables involved in the system for the hares and lynx, respectively. This suggests that there are quite a few latent variables not being measured in this dynamical system, as we are only measuring 2 variables (lynx and hare).

Run DMD on Hankel matrices, interpolating to the same degree as the DMD models
###Code
make_it_dense = 5
if make_it_dense is not None:
# With interpolation
t = df.Year.values
dt = (t[1] - t[0])/make_it_dense
# Interpolate for more points
years, dense_hare = densify(t, df.Hare.values, dt)
years, dense_lynx = densify(t, df.Lynx.values, dt)
H_hare = hankel(dense_hare)
H_lynx = hankel(dense_lynx)
else:
# Without interpolation
H_hare = hankel(df.Hare.values)
H_lynx = hankel(df.Lynx.values)
t = df.Year.values
dt = (t[1] - t[0])
print(dt, H_hare.shape)
H_matrices = [H_hare, H_lynx]
X_matrices = []
Xprime_matrices = []
for H in H_matrices:
X=H[1:, :]
Xprime=H[:-1, :]
# Transpose matrices --> two state variables, want observations on other dimension?
X=X.T
Xprime=Xprime.T
print(X.shape, Xprime.shape)
X_matrices.append(X)
Xprime_matrices.append(Xprime)
r=21
Phis =[]
Omegas = []
bs = []
Lambdas = []
for X, Xprime in zip(X_matrices, Xprime_matrices):
Phi, Omega, b, Lambda = DMD(X, Xprime, r, dt)
print(" Phi : Omega : b ", )
print(Phi.shape, Omega.shape, b.shape)
Phis.append(Phi)
Omegas.append(Omega)
bs.append(b)
Lambdas.append(Lambda)
###Output
Phi : Omega : b
(145, 21) (21,) (21,)
Phi : Omega : b
(145, 21) (21,) (21,)
###Markdown
DMD reconstruction using Hankel matrices
###Code
Xdmds = []
t = X.shape[1]
for Phi, Omega, b in zip(Phis, Omegas, bs):
Xdmd = forecast(Phi, Omega, t, b, r, dt)
Xdmds.append(Xdmd)
print(t, r, dt)
###Output
144 (21, 144) (144,) (145, 144)
144 (21, 144) (144,) (145, 144)
144 21 0.4
###Markdown
Plot DMD reconstruction
###Code
labels=["Hare", "Lynx"]
style_og=["b", "y"]
style_recon=["b--", "y--"]
fig, ax = plt.subplots(2, 1, figsize=(10,8))
for i in range(0, len(labels)):
ax[i].plot(X_matrices[i].T[:, 0], style_og[i], label=str("Original " + labels[i]))
ax[i].plot(Xdmds[i].T[:, 0], style_recon[i], label="Reconstructed Hankel" + labels[i])
ax[i].legend()
plt.suptitle("Time-delay DMD model using combined Hankel matrix with 5x data interpolation",
y=y_position, fontsize=fontsize, weight=weight);
###Output
_____no_output_____
###Markdown
Forecast
###Code
t_forecast = int(t*1.5)
Xdmds_forecast = []
for Phi, Omega, b in zip(Phis, Omegas, bs):
Xdmd = forecast(Phi, Omega, t_forecast, b, r, dt)
Xdmds_forecast.append(Xdmd)
# Xdmd = forecast(Phi, Omega, t_forecast, b, r, dt)
fig, ax = plt.subplots(2, 1, figsize=(16,8))
for i in range(0, len(labels)):
ax[i].plot(X_matrices[i].T[:, 0], style_og[i], label=str("Original " + labels[i]))
ax[i].plot(Xdmds_forecast[i].T[:, 0], style_recon[i], label="Reconstructed Hankel " + labels[i])
ax[i].legend()
plt.suptitle("Time-delay DMD model forecasts using combined Hankel matrix with 5x data interpolation",
y=y_position, fontsize=fontsize, weight=weight);
###Output
_____no_output_____
###Markdown
Save DMD with Hankel models with pickle (not the forecast DMDs)
###Code
DMD_H_hare = [Phis[0], bs[0], Omegas[0], r]
DMD_H_lynx = [Phis[1], bs[1], Omegas[1], r]
DMD_hare_fn = model_dir + str("DMD_H_hare" + "_" + str(make_it_dense) + "pts.pkl")
DMD_lynx_fn = model_dir + str("DMD_H_lynx" + "_" + str(make_it_dense) + "pts.pkl")
DMD_components = [DMD_H_hare, DMD_H_lynx]
DMD_fns = [DMD_hare_fn, DMD_lynx_fn]
for fn, component in zip(DMD_fns, DMD_components):
if not os.path.exists(fn):
with open(fn, "wb") as file:
pickle.dump(component, file)
else:
print(fn, "exists")
###Output
/mnt/Backups/jmhu/git_dirs/ComplexSystems_AMATH563/hw2/models/DMD_H_hare_5pts.pkl exists
/mnt/Backups/jmhu/git_dirs/ComplexSystems_AMATH563/hw2/models/DMD_H_lynx_5pts.pkl exists
|
data/notebooks/tota_employment.ipynb | ###Markdown
Employment from Statistics CanadaData downloaded from https://www150.statcan.gc.ca/t1/tbl1/en/tv.action?pid=1410038702 Data from 2006 - ongoing
###Code
import pandas as pd
import matplotlib.pyplot as plt
import datetime
import plotly.graph_objects as go
import numpy as np
### Download latest table
import requests, zipfile, io
r = requests.get("https://www150.statcan.gc.ca/n1/tbl/csv/14100387-eng.zip")
z = zipfile.ZipFile(io.BytesIO(r.content))
z.extractall(f"../data/raw")
## Read original source Labor Force Survey
data = pd.read_csv(f'../data/raw/14100387-eng/14100387.csv')
data.head()
### Select only data for Bristish Columbia
data = data[(data.GEO.isin(['British Columbia',
'Vancouver Island and Coast, British Columbia',
'Lower Mainland-Southwest, British Columbia',
'Thompson-Okanagan, British Columbia',
'Kootenay, British Columbia', 'Cariboo, British Columbia',
'North Coast and Nechako, British Columbia',
'Northeast, British Columbia'])) & (data.Statistics == 'Estimate')]
data.GEO.unique()
data['Labour force characteristics'].unique()
data.columns
## Select only useful columns
data = data[['REF_DATE', 'GEO','Labour force characteristics','UOM','SCALAR_FACTOR','VALUE']]
data.head()
### Keep only data from 2016 onwards
data = data[pd.DatetimeIndex(data['REF_DATE']).year > 2015]
data.GEO.unique()
## Sneak peak into data
variables = ['Population','Labour force','Employment','Unemployment']
fig,axes = plt.subplots(3,2,figsize = (10,10))
for a,ax in enumerate(axes.flatten()):
df = data[(data['GEO'] == data.GEO.unique()[a])]
for variable in variables:
ax.plot(pd.DatetimeIndex(df[df['Labour force characteristics'] == variable]['REF_DATE']),df[df['Labour force characteristics'] == variable]['VALUE'])
ax.set_title(f'{data.GEO.unique()[a]}')
## Replace Economic Region names with corresponding Tourism Regions
correspondence = {'British Columbia':'British Columbia',
'Vancouver Island and Coast, British Columbia': 'Vancouver Island',
'Lower Mainland-Southwest, British Columbia':'Vancouver Coast and Mountains',
'Thompson-Okanagan, British Columbia': 'Thompson-Okanagan',
'Kootenay, British Columbia': 'Kootenay Rockies',
'Cariboo, British Columbia':'Cariboo Chilcotin Coast',
'North Coast and Nechako, British Columbia':'Northern British Columbia',
'Northeast, British Columbia':'Northern British Columbia'}
data['Tourism_region'] =data['GEO']
data.replace({'Tourism_region':correspondence}, inplace=True)
data.columns
## Get values in appropriate units
data_count = data[data['Labour force characteristics'].isin(['Population','Labour force','Employment','Unemployment'])]
#data_count = data_count.groupby(['Tourism_region','REF_DATE','Labour force characteristics']).sum().reset_index()
data_count['VALUE']= data_count['VALUE']*1000
data_per = data[data['Labour force characteristics'].isin(['Employment rate','Unemployment rate'])]
#data_per = data_per.groupby(['Tourism_region','REF_DATE','Labour force characteristics']).mean().reset_index()
df = data_count.append(data_per)
### Format data with data model for database
df.rename(columns={'REF_DATE':'date',
'GEO':'category_2',
'Tourism_region':'region',
'VALUE':'value',
'Labour force characteristics':'indicator_code'}, inplace=True)
df['category_1']=np.nan
df = df[['indicator_code','date','region','category_1','category_2','value']]
## Set indicator names
indicator_names = {'Population':'population_by_economic_region',
'Labour force': 'labour_force_by_economic_region',
'Employment':'total_employment_by_economic_region',
'Unemployment': 'total_unemployment_by_economic_region',
'Employment rate':'total_employment_rate_by_economic_region',
'Unemployment rate':'total_unemployment_rate_by_economic_region'}
df.replace({'indicator_code':indicator_names}, inplace=True)
### delete 'British Columbia tag'
## Replace Economic Region names with corresponding Tourism Regions
correspondence = {'British Columbia':'British Columbia',
'Vancouver Island and Coast, British Columbia': 'Vancouver Island and Coast',
'Lower Mainland-Southwest, British Columbia':'Lower Mainland-Southwest',
'Thompson-Okanagan, British Columbia': 'Thompson-Okanagan',
'Kootenay, British Columbia': 'Kootenay',
'Cariboo, British Columbia':'Cariboo',
'North Coast and Nechako, British Columbia':'North Coast and Nechako',
'Northeast, British Columbia':'Northeast'}
df.replace({'category_2':correspondence}, inplace=True)
df.sort_values(by=['indicator_code','date']).to_csv(f'../data/processed/Labour_Force_Survey_2016_2021.csv')
###Output
_____no_output_____ |
UberDemandSupply.ipynb | ###Markdown
Requests status
###Code
status = pd.crosstab(index = uber["Status"], columns="count")
status.plot.bar()
###Output
_____no_output_____
###Markdown
There are more "No cars available" requests than cancelled trips.
###Code
pick_point = pd.crosstab(index = uber["Pickup point"], columns="count")
pick_point.plot.bar()
###Output
_____no_output_____
###Markdown
Both Airport and City pickup points have approximately equal numbers of data points in the given dataset.
###Code
#grouping by Status and Pickup point.
uber.groupby(['Status', 'Pickup point']).size()
# Visualizing the count of Status and Pickup points
plt.figure(figsize=(8,5))
sns.countplot(x=uber['Pickup point'],hue =uber['Status'] ,data = uber)
###Output
_____no_output_____
###Markdown
As seen in the above visualization, there is a higher incidence of the "No cars available" status from Airport to City, whereas there is a higher incidence of the "Cancelled" status from City to Airport.
###Code
#Request hours
uber['Request Hour'] = uber['Request timestamp'].dt.hour
uber.loc[uber['Request Hour'].between(1,11, inclusive=True),'Request Time Slot'] = 'Morning'
# Bin request hours into non-overlapping slots covering all 24 hours
# (hour 21 stays in Evening; midnight and hours 22-23 go to Night)
uber.loc[uber['Request Hour'].between(1,11, inclusive=True),'Request Time Slot'] = 'Morning'
uber.loc[uber['Request Hour'].between(12,16, inclusive=True),'Request Time Slot'] = 'Noon'
uber.loc[uber['Request Hour'].between(17,21, inclusive=True),'Request Time Slot'] = 'Evening'
uber.loc[(uber['Request Hour'] >= 22) | (uber['Request Hour'] == 0),'Request Time Slot'] = 'Night'
#As Demand include trips completed, cancelled and no cars available, we will create a column with value 1
uber['Demand'] = 1
#As Supply can only contain trips completed, so we will create a column with 1 for trips completed and 0 otherwise.
uber['Supply'] = 0
uber.loc[(uber['Status'] == 'Trip Completed'),'Supply'] = 1
###Output
_____no_output_____
###Markdown
Gap between supply and demand
###Code
uber['Gap'] = uber['Demand'] - uber['Supply']
uber.loc[uber['Gap']==0,'Gap'] = 'Trip Completed'
uber.loc[uber['Gap']==1,'Gap'] = 'Trip Not Completed'
uber = uber.drop(['Request Hour', 'Demand', 'Supply'], axis=1)
uber.head()
sns.countplot(x=uber['Request Time Slot'],hue =uber['Status'] ,data = uber)
###Output
_____no_output_____
###Markdown
From the above plot it is visible that there is a higher incidence of "No cars available" in the evening and a higher incidence of cancelled trips in the morning.

Find the time slots when the highest gap exists
###Code
plt.figure(figsize=(10,10))
pickup_df = pd.DataFrame(uber.groupby(['Pickup point','Request Time Slot', 'Status'])['Request id'].count().unstack(fill_value=0))
pickup_df.plot.bar(figsize=(8,5))
###Output
_____no_output_____
###Markdown
As seen in the visualization above, there is a higher incidence of "No cars available" from Airport to City in the evenings, while there is a higher incidence of cancelled trips from City to Airport in the mornings.
###Code
gap_main_df = pd.DataFrame(uber.groupby(['Request Time Slot','Pickup point','Gap'])['Request id'].count().unstack(fill_value=0))
gap_main_df.plot.bar(figsize=(5,5))
###Output
_____no_output_____ |
Crap_from_JC.ipynb | ###Markdown
PCA
###Code
X = Rpca - Rpca.mean()
X.mean()
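# Rpca.mean() is the per-column mean (assuming Rpca is a DataFrame), so X is
# column-centered and the means printed here should be ~0. Note that
# sklearn's PCA also centers the data internally.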
pca = PCA(n_components=3)
pca.fit(X)
Xt = pca.transform(X)
pca.components_
pca.explained_variance_ratio_
#del ratings['timestamp']
###Output
_____no_output_____ |
etl_subset_demo.ipynb | ###Markdown
Preliminaries

This notebook assumes that all setup steps from [GCP-readme.md](./GCP-readme.md) have been followed.

This notebook does not demonstrate the full ETL pipeline used in production (executed on GCP Dataflow). We only investigate the first couple of major steps of the *production* pipeline (which is executed in Dataflow on the Google Cloud Platform) in order to demonstrate the general idea of the process. Note that the *production* pipeline is really a sequence of (sub) pipelines that are daisychained together in a particular order, since latter pipelines depend on former pipelines.

In this notebook, we demonstrate the first two pipelines, which accomplish the following:

1. Bootstrap the video index

Substeps are:
  1. Download the video index (archive)
  2. Extract it.
  3. Write it to the destination directory as a CSV

The video index drives the entire full-blown *production* pipeline. It tells us the filenames of the target videos.

2. Download each video segment comprising the final target videos. (The video index contains the URLs.)

Target videos are comprised of video segments since some of the final target videos can be rather large. Altogether, **the *production* pipeline (executed on GCP Dataflow) retrieves more than 2600 videos** produced by the research conducted by Boston and Rutgers Universities, jointly under the [National Center for Sign Language and Gesture Resources project](http://www.bu.edu/asllrp/ncslgr.html). This notebook will demonstrate retrieving 50 of those.

The implementation of the download leverages Apache Beam's parallelism in order to avoid the amount of time it would take to do it sequentially. Note that when executed locally, the Apache Beam SDK uses Docker containers for worker node clusters. A cluster in this case consists of 8 worker nodes since my local machine has 8 cores. In *production*, on GCP Dataflow, this can be scheduled to your heart's content (but this, of course, costs more money to do so).

3. Use the `OpenCV` library to extract all frames (from each segment) for each target video.

This step leverages Apache Beam's parallelism as well. But we MUST take care to ensure that a single worker extracts the frames of each segment associated with the target video. This is because frames are ordered/sequenced. Allowing two different workers to extract frames of different segments associated with the same final target video would likely result in frames being extracted out of order (due to parallelism). Therefore, we partition the extraction task by final target video in order to ensure a single worker handles all segments associated with a single target video. But we do want parallelism to occur at the final target video level.

In the end, **in production, the pipeline extracts more than 561,000 frames (images) from the source target videos**! Of course, in this demonstration we will be extracting far fewer - only 50 out of the more than 2600 videos available will be downloaded and processed (frames extracted). Still, extracting from 50 videos will amount to thousands of frames.
###Code
%load_ext autoreload
%autoreload 2
from __future__ import absolute_import
import apache_beam as beam
import apache_beam.runners.interactive.interactive_beam as ib
from api import beam__common, fileio, fidscs_globals
from api.fidscs_globals import disp_source
from api import data_extractor__beam
from apache_beam.options.pipeline_options import PipelineOptions
###Output
_____no_output_____
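###Markdown
Before diving in, here is a minimal sketch of the partition-by-target-video pattern described above (not the project's actual code - `segment_records` and `extract_frames_in_order` are hypothetical stand-ins):
###Code
# Hedged sketch: keeping all segments of one target video on a single worker.
# with beam.Pipeline(options=pipeline_options) as p:
#     frames = (
#         p
#         | beam.Create(segment_records)             # (video_name, segment_url) pairs (hypothetical)
#         | beam.GroupByKey()                        # all segments of one video -> one element
#         | beam.FlatMap(extract_frames_in_order))   # one worker walks that video's segments in order
###Output
_____no_output_____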
###Markdown
Constants to be used in this notebook
###Code
WORK_DIR = '/tmp'
MAX_TARGET_VIDEOS = 50 # set to -1 for all in production but not in this interactive notebook! That will result in extracting more than 561,000 images from more than 2600 videos! (onto your local machine)
PIPELINE_BASE_JOB_NAME = 'sc-fids-capstone-etl-demo'
###Output
_____no_output_____
###Markdown
Use Apache Beam PipelineOptions for any global settings

We MUST do this since the point of Apache Beam is to enable parallelism (in processing). How this is accomplished is beyond the scope of this notebook. But suffice it to say that any notion of a global variable cannot be implemented in the usual manner - e.g. with Python global variables. However, a PipelineOptions object IS passed to each and every worker node by Apache Beam. Therefore, we accomplish global settings shared by all workers - e.g. the working directory and the final destination filepaths to be output by the pipeline - by passing the would-be global settings to PipelineOptions, which are required to bootstrap each worker node by Apache Beam.

Custom Apache Beam Pipeline options

The `beam__common.FIDSCapstonePipelineOptions` class was written to do just that and allows us to create and use our own custom options in Apache Beam pipelines. Without it, attempting to set custom options on the Pipeline would fail, since Apache Beam's PipelineOptions class rejects any options it doesn't recognize.
###Code
disp_source(beam__common.FIDSCapstonePipelineOptions)
###Output
_____no_output_____
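###Markdown
The rendered source isn't shown in this static export, but the standard Beam idiom for such a class looks roughly like the sketch below (illustrative only - the registered option names are taken from the printed options further down, and the exact implementation may differ):
###Code
# Minimal sketch of a custom PipelineOptions subclass (illustrative only):
# class FIDSCapstonePipelineOptions(PipelineOptions):
#     @classmethod
#     def _add_argparse_args(cls, parser):
#         # registering each option is what stops PipelineOptions from rejecting it
#         parser.add_argument('--fidscs_capstone_work_dir', default=None)
#         parser.add_argument('--fidscs_capstone_max_target_videos', type=int, default=-1)
###Output
_____no_output_____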
###Markdown
PipelineOptions Initialization

For this notebook, we execute locally (vs. GCP Dataflow) - that is, we use Apache Beam's `DirectRunner`. Actually, we use a variant of it - the `InteractiveRunner` - geared specifically for running in notebooks. But it is still run locally. Some `PipelineOptions` options differ (or are not needed), relative to the `DataflowRunner`.

To see the full implementation of how this differs from using the `Dataflow` runner, start by inspecting [run_cloud__etl.py](./run_cloud__etl.py) and follow the code.

Initializing the `dict` upon which `PipelineOptions` are based has been wrapped up within the `beam__common.make_fids_options_dict` function.
###Code
disp_source(beam__common.make_fids_options_dict)
###Output
_____no_output_____
###Markdown
PipelineOptions geared for the `InteractiveRunner`

Note that `InteractiveRunner` is a variant of the `DirectRunner` that allows us to run an Apache Beam pipeline with a Jupyter Notebook. Documentation for `InteractiveRunner` can be found [here](https://cloud.google.com/dataflow/docs/guides/interactive-pipeline-development).

First, it must be reiterated that we must use this runner in order to collect (reduce) data kept in Apache Beam `Pcollection`s for conversion to Pandas `DataFrame`s for display within this notebook. Apache Beam `Pcollection`s can generally and *roughly* be thought of as Resilient Distributed Datasets. The documentation for Apache Beam `Pcollection`s can be found in the **Apache Beam Programming Guide** located [here](https://beam.apache.org/documentation/programming-guide/). But **`Pcollection`s are the basis for all processing within an Apache Beam pipeline**.

Also note that the `InteractiveRunner` is not really meant to be used for enterprise (read: "Big Data") pipelines. The runner used for production in this project is the `DataFlow` Google Cloud Platform runner. The reader is reminded that the point of this notebook, however, is to present a demonstration of only a subset of the full Apache Beam pipeline (used in this project).
###Code
options = {
# 'runner': 'DirectRunner',
'runner': 'InteractiveRunner',
'environment_type': 'DOCKER',
'direct_num_workers': 0, # 0 is use all available cores
'direct_running_mode': 'multi_threading', # ['in_memory', 'multi_threading', 'multi_processing']
'streaming': False # set to True if data source is unbounded (e.g. GCP PubSub),
}
options.update(beam__common.make_fids_options_dict(WORK_DIR, max_target_videos=MAX_TARGET_VIDEOS))
###Output
_____no_output_____
###Markdown
Finally, instantiate the `PipelineOptions` (using the above `options` `dict`)
###Code
job_suffix = 'boostrap-vid-index'
job_name = f"{PIPELINE_BASE_JOB_NAME}--{job_suffix}"
options.update({
'job_name': job_name
})
pipeline_options = PipelineOptions(flags=[], **options) # easier to pass in options from command-line this way
print(f"PipelineOptions:\n{pipeline_options.get_all_options()}\n")
###Output
PipelineOptions:
{'runner': 'InteractiveRunner', 'streaming': False, 'beam_services': {}, 'type_check_strictness': 'DEFAULT_TO_ANY', 'type_check_additional': '', 'pipeline_type_check': True, 'runtime_type_check': False, 'performance_runtime_type_check': False, 'direct_runner_use_stacked_bundle': True, 'direct_runner_bundle_repeat': 0, 'direct_num_workers': 0, 'direct_running_mode': 'multi_threading', 'dataflow_endpoint': 'https://dataflow.googleapis.com', 'project': 'sc-fids-capstone', 'job_name': 'sc-fids-capstone-etl-demo--boostrap-vid-index', 'staging_location': None, 'temp_location': None, 'region': None, 'service_account_email': None, 'no_auth': False, 'template_location': None, 'labels': None, 'update': False, 'transform_name_mapping': None, 'enable_streaming_engine': False, 'dataflow_kms_key': None, 'flexrs_goal': None, 'hdfs_host': None, 'hdfs_port': None, 'hdfs_user': None, 'hdfs_full_urls': False, 'num_workers': None, 'max_num_workers': None, 'autoscaling_algorithm': None, 'machine_type': None, 'disk_size_gb': None, 'disk_type': None, 'worker_region': None, 'worker_zone': None, 'zone': None, 'network': None, 'subnetwork': None, 'worker_harness_container_image': None, 'sdk_harness_container_image_overrides': None, 'use_public_ips': None, 'min_cpu_platform': None, 'dataflow_worker_jar': None, 'dataflow_job_file': None, 'experiments': None, 'number_of_worker_harness_threads': None, 'profile_cpu': False, 'profile_memory': False, 'profile_location': None, 'profile_sample_rate': 1.0, 'requirements_file': None, 'requirements_cache': None, 'setup_file': None, 'beam_plugins': None, 'save_main_session': False, 'sdk_location': 'default', 'extra_packages': None, 'prebuild_sdk_container_engine': None, 'prebuild_sdk_container_base_image': None, 'docker_registry_push_url': None, 'job_endpoint': None, 'artifact_endpoint': None, 'job_server_timeout': 60, 'environment_type': 'DOCKER', 'environment_config': None, 'environment_options': None, 'sdk_worker_parallelism': 1, 'environment_cache_millis': 0, 'output_executable_path': None, 'artifacts_dir': None, 'job_port': 0, 'artifact_port': 0, 'expansion_port': 0, 'flink_master': '[auto]', 'flink_version': '1.10', 'flink_job_server_jar': None, 'flink_submit_uber_jar': False, 'spark_master_url': 'local[4]', 'spark_job_server_jar': None, 'spark_submit_uber_jar': False, 'spark_rest_url': None, 'on_success_matcher': None, 'dry_run': False, 'wait_until_finish_duration': None, 'pubsubRootUrl': None, 's3_access_key_id': None, 's3_secret_access_key': None, 's3_session_token': None, 's3_endpoint_url': None, 's3_region_name': None, 's3_api_version': None, 's3_verify': None, 's3_disable_ssl': False, 'fidscs_capstone_max_target_videos': 50, 'fidscs_capstone_work_dir': '/tmp', 'fidscs_capstone_data_dir': '/tmp/data', 'fidscs_capstone_tmp_dir': '/tmp/data/tmp', 'fidscs_capstone_videos_dir': '/tmp/data/videos', 'fidscs_capstone_stitched_video_frames_dir': '/tmp/data/stitched_video_frames', 'fidscs_capstone_corpus_dir': '/tmp/data/tmp/ncslgr-xml', 'fidscs_capstone_corpus_ds_path': '/tmp/data/ncslgr-corpus-index.csv', 'fidscs_capstone_document_asl_cconsultant_ds_path': '/tmp/data/document-consultant-index.csv', 'fidscs_capstone_asl_consultant_ds_path': '/tmp/data/consultant-index.csv', 'fidscs_capstone_video_indexes_dir': '/tmp/data/tmp/video_index-20120129', 'fidscs_capstone_selected_video_index_path': '/tmp/data/tmp/video_index-20120129/files_by_video_name.csv', 'fidscs_capstone_video_ds_path': '/tmp/data/document-consultant-targetvideo-index.csv', 
'fidscs_capstone_video_segment_ds_path': '/tmp/data/document-consultant-targetvideo-segment-index.csv', 'fidscs_capstone_video_frame_ds_path': '/tmp/data/document-consultant-targetvideo-frame-index.csv', 'fidscs_capstone_utterance_ds_path': '/tmp/data/document-consultant-utterance-index.csv', 'fidscs_capstone_utterance_video_ds_path': '/tmp/data/document-consultant-utterance-targetvideo-index.csv', 'fidscs_capstone_utterance_token_ds_path': '/tmp/data/document-consultant-utterance-token-index.csv', 'fidscs_capstone_utterance_token_frame_ds_path': '/tmp/data/document-consultant-targetvideo-utterance-token-frame-index.csv', 'fidscs_capstone_vocabulary_ds_path': '/tmp/data/vocabulary-index.csv'}
###Markdown
But before running the pipeline, create the necessary file structure within `WORK_DIR`
###Code
if not fileio.dir_path_exists(options[fidscs_globals.OPT_NAME_DATA_DIR], options)[0]:
fileio.make_dirs(options[fidscs_globals.OPT_NAME_DATA_DIR], options)
if not fileio.dir_path_exists(options[fidscs_globals.OPT_NAME_TMP_DIR], options)[0]:
fileio.make_dirs(options[fidscs_globals.OPT_NAME_TMP_DIR], options)
if not beam__common.dataset_csv_files_exist(options):
if not fileio.dir_path_exists(options[fidscs_globals.OPT_NAME_VIDEO_DIR], options)[0]:
fileio.make_dirs(options[fidscs_globals.OPT_NAME_VIDEO_DIR], options)
if not fileio.dir_path_exists(options[fidscs_globals.OPT_NAME_STITCHED_VIDEO_FRAMES_DIR], options)[0]:
fileio.make_dirs(options[fidscs_globals.OPT_NAME_STITCHED_VIDEO_FRAMES_DIR], options)
###Output
_____no_output_____
###Markdown
We are now ready to execute the pipeline. But before doing so, let's discuss how it works.

There are two top-level functions used by the "boostrap-vid-index" pipeline, in this order:

1. `data_extractor__beam.pl__1__bootstrap_target_video_index`
2. `data_extractor__beam.pl__2__write_target_vid_index_csv`

Let's examine the source code for `data_extractor__beam.pl__1__bootstrap_target_video_index`...

The following Python source code illustrates the programming paradigm used in all Apache Beam (stands for **B**atch and Str**eam** processing) pipelines.
###Code
disp_source(data_extractor__beam.pl__1__bootstrap_target_video_index)
###Output
_____no_output_____
###Markdown
The gist of `data_extractor__beam.pl__1__bootstrap_target_video_index` is that it ensures the video index exists locally before any dependent pipeline can execute. If it doesn't, it will download and extract the contents of the video index archive from [http://www.bu.edu/asllrp/ncslgr-for-download/video_index-20120129.zip](http://www.bu.edu/asllrp/ncslgr-for-download/video_index-20120129.zip).

The first notable point is that it uses the custom class `data_extractor__beam.TargetVideoIndexBootstrapper`, which inherits from `beam__common.PipelinePcollElementProcessor`, which in turn inherits from Apache Beam's `DoFn` class. Inheriting from Apache Beam's `DoFn` allows the inherited class to be used in Apache Beam pipelines via `beam.ParDo` (which stands for "**Par**allel **Do**"). Full documentation can be found [here](https://beam.apache.org/documentation/transforms/python/elementwise/pardo/). A minimal sketch of the `DoFn`/`ParDo` and `schema` pattern follows the listing below.

There is nothing particularly noteworthy about the internal implementation of `data_extractor__beam.TargetVideoIndexBootstrapper`. It simply downloads the video index archive (to a memfile) and extracts its contents (in-memory). Please see its implementation for details if you are interested.

Source code for `data_extractor__beam.pl__2__write_target_vid_index_csv` is listed below. It simply writes the bytes extracted from the archive to destination path `/data/video_index-20120129.csv` (using an Apache Beam `schema` so that column names can easily be referenced/manipulated later).
###Code
disp_source(data_extractor__beam.pl__2__write_target_vid_index_csv)
###Output
_____no_output_____
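###Markdown
To make the `DoFn`/`ParDo` and `schema` ideas above concrete, here is a minimal, self-contained sketch (the class name and element values are hypothetical - this is not the project's code):
###Code
# Illustrative sketch of the DoFn/ParDo pattern and of attaching a schema
# by mapping each element to a beam.Row (hypothetical names throughout).
import apache_beam as beam

class UpperCaseDoFn(beam.DoFn):
    def process(self, element):
        # process() may yield zero or more outputs per input element
        yield element.upper()

with beam.Pipeline() as p:
    _ = (
        p
        | beam.Create(['file1.mov', 'file2.mov'])
        | beam.ParDo(UpperCaseDoFn())
        | beam.Map(lambda name: beam.Row(filename=name))  # schema'd Pcollection
    )
###Output
_____no_output_____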
###Markdown
We are now ready to execute the first step of the "boostrap-vid-index" pipeline!
###Code
n_partitions = 8 # hardcoded for now; ideally this would be retrieved from Beam as the number of workers
pl = beam.Pipeline(options=pipeline_options)
full_target_vid_index_schemad_pcoll = data_extractor__beam.pl__1__bootstrap_target_video_index(pl)
###Output
_____no_output_____
###Markdown
That seems fast! That's because the pipeline wasn't actually executed yet. What Apache Beam did in this case was create the corresponding pipeline's *execution graph* (which is actually a *Directed Acyclic Graph*). With the `InteractiveRunner`, the pipeline only gets executed when required.

This happens in notebooks by calling `ib.collect` or `ib.show`, which essentially executes the pipeline and then reduces the distributed `Pcollection`. In this case, we use `ib.collect`, which also stuffs the results into a Pandas `DataFrame` for viewing purposes within notebooks. Note that this is NOT done in production (in the cloud, in GCP Dataflow) since Pandas `DataFrame`s aren't needed and are simply impractical for "Big Data" solutions. Pandas `DataFrame`s really don't serve this purpose. Can you imagine attempting to hold all the corresponding tensor bytes for 561,000+ images in memory??

Anyway, moving on... before calling `ib.collect`, we must first tell Apache Beam to "record" all `Pcollection`s (up to a certain point) by calling `ib.watch(locals())`. Note that only `Pcollection`s created prior to calling `ib.watch(locals())` are eligible for "collection" (conversion to Pandas `DataFrame`s).
###Code
# we require this in order to make use of ib.show() (which provides visualization of the pcolls specified) or ib.collect() (which creates a pandas dataframe from a pcoll)
# but all pcolls we wish to visualize must be created prior to executing the following line
ib.watch(locals())
###Output
_____no_output_____
###Markdown
We can now collect the `full_target_vid_index_schemad_pcoll` `Pcollection` into a Pandas `DataFrame` for display in this notebook.
###Code
df_full_target_vid_index = ib.collect(full_target_vid_index_schemad_pcoll)
df_full_target_vid_index
###Output
_____no_output_____
###Markdown
Note that this isn't entirely useful yet since we don't have any corresponding column names (in the above Pandas `DataFrame`). We have applied a `schema` to the `Pcollection`, but that doesn't get applied to the Pandas `DataFrame`, since applying a `schema` to a `Pcollection` is carried out by mapping each row to a literal Apache Beam `Row` object, thereby effectively converting each element to an *unhashed* `dict`. Thus, we cannot guarantee that the ordering of the columns will be fixed. We must therefore use the `schema` to refer to columns by name.

But we do see that there are 2,612 corresponding target videos to download. Note that since target videos are actually comprised of segments, there may be more videos than that downloaded in the end (if we were to download them all... which is exactly what is done in production, on GCP Dataflow).

This is done inline while writing the `Pcollection` (collected into the above Pandas `DataFrame` just for viewing) to the destination `/data/video_index-20120129.csv` file path (by `data_extractor__beam.pl__2__write_target_vid_index_csv`).

But, as a nuance of collecting a `Pcollection` into a `DataFrame`, we can't simply call `data_extractor__beam.pl__2__write_target_vid_index_csv` now if we want to view the resulting `Pcollection` as a `DataFrame`. Recall that only `Pcollection`s created prior to calling `ib.watch(locals())` are eligible for "collection" (conversion to Pandas `DataFrame`s), which we already did. This means we must re-execute the first step (`data_extractor__beam.pl__1__bootstrap_target_video_index`), followed by `data_extractor__beam.pl__2__write_target_vid_index_csv`, call `ib.watch(locals())`, and then finally call `ib.collect` on each of the corresponding `Pcollection`s in order to view them.

But won't that mean that `data_extractor__beam.pl__1__bootstrap_target_video_index` will re-download the video index? ANSWER: no, because it was written specifically to guard against that case. Take a look at its source and you'll see. If the video index exists locally, it is simply loaded from the "tmp" directory, and the resulting `Pcollection` is used as input for `data_extractor__beam.pl__2__write_target_vid_index_csv` (which applies a `schema` and then writes it to the final destination path `/data/video_index-20120129.csv`).

Let's do that now... The full "boostrap-vid-index" pipeline
###Code
# create a new instance of the pipeline
pl = beam.Pipeline(options=pipeline_options)
full_target_vid_index_schemad_pcoll = data_extractor__beam.pl__1__bootstrap_target_video_index(pl)
_ = data_extractor__beam.pl__2__write_target_vid_index_csv(full_target_vid_index_schemad_pcoll, pl._options._all_options)
###Output
_____no_output_____
###Markdown
We know that observing the `full_target_vid_index_schemad_pcoll` `Pcollection` won't be particularly useful, and the `Pcollection` that `data_extractor__beam.pl__2__write_target_vid_index_csv` outputs simply carries the destination path after it successfully writes `full_target_vid_index_schemad_pcoll` to `/data/video_index-20120129.csv`, which isn't particularly interesting either. But we need to be sure this pipeline completes before executing the more interesting "download-videos-extract-frames" pipeline.

So instead of calling `ib.collect` to force the above pipeline to run, we'll simply call `pl.run` (since we are not particularly interested in viewing any `Pcollection`-to-`DataFrame` conversions from it).
###Code
print(f"\n\n****************************** Starting pipeline job: {job_name} ******************************")
pl.run();
print(f"****************************** Finished pipeline job: {job_name} ******************************")
###Output
****************************** Starting pipeline job: sc-fids-capstone-etl-demo--boostrap-vid-index ******************************
FOUND EXISTING SEL VID INDEX: /tmp/data/tmp/video_index-20120129/files_by_video_name.csv
TARGET-VIDEO-INDEX CSV WRITTEN TO STORAGE: /tmp/data/video_index-20120129.csv
****************************** Finished pipeline job: sc-fids-capstone-etl-demo--boostrap-vid-index ******************************
###Markdown
The "download-videos-extract-frames" pipelineThe "download-videos-extract-frames" pipeline is comprised of four steps:1. `beam__common.pl__1__read_target_vid_index_csv`2. `data_extractor__beam.pl__2__filter_target_vid_index`3. `data_extractor__beam.pl__3__parallel_download_videos`4. `data_extractor__beam.pl__4__parallel_extract_target_video_frames` The function names used for each step suggest what they do. So I will only show source code for `data_extractor__beam.pl__3__parallel_download_videos` and `data_extractor__beam.pl__4__parallel_extract_target_video_frames`, and provide short explanations for steps 1 and 2.Step 1 obviously reads `/data/video_index-20120129.csv` from storage into a `Pcollection` to be used as input for `data_extractor__beam.pl__2__filter_target_vid_index`, which simply selects the first `MAX_TARGET_VIDEOS` from the full list of records from the `full_target_vid_index_schemad_pcoll` `Pcollection` that `beam__common.pl__1__read_target_vid_index_csv` returns. Note that `data_extractor__beam.pl__2__write_target_vid_index_csv`, in addition to applying a `schema`, also applies a row *id* and writes to `/data/video_index-20120129.csv` in the order of that index. `beam__common.pl__1__read_target_vid_index_csv` returns the corresponding `Pcollection` ordered by this index.Let's now inspect source code for steps 3 and 4...
###Code
disp_source(data_extractor__beam.pl__3__parallel_download_videos)
###Output
_____no_output_____
###Markdown
Now things get really interesting with `data_extractor__beam.pl__3__parallel_download_videos`...

What we do here is explicitly tell Apache Beam to create 8 independent *partitions*, each of which downloads videos independently of the others. Note that these may be backed by either threads or worker nodes; how that plays out is beyond the scope of this notebook. Suffice it to say that this results in much faster processing than simply executing sequentially.

When they are all done, the results are merged (via `beam.Flatten`) into a single `Pcollection` to be supplied as input to `data_extractor__beam.pl__4__parallel_extract_target_video_frames`.
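Below is a minimal, self-contained sketch of this partition-and-flatten pattern (the transform names are hypothetical - the real steps live in `data_extractor__beam`):
###Code
# Illustrative sketch: split a Pcollection into n independent branches with
# beam.Partition and merge the branch outputs with beam.Flatten.
import apache_beam as beam

def partitioned_sketch(pcoll, n_partitions=8):
    partitions = pcoll | beam.Partition(
        lambda element, n: hash(element) % n, n_partitions)
    processed = [
        part | f"process_{i}" >> beam.Map(lambda e: e)  # stand-in for the per-partition work
        for i, part in enumerate(partitions)
    ]
    return processed | beam.Flatten()
###Output
_____no_output_____
###Markdown
Next, the source for `data_extractor__beam.pl__4__parallel_extract_target_video_frames`: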
###Code
disp_source(data_extractor__beam.pl__4__parallel_extract_target_video_frames)
###Output
_____no_output_____
###Markdown
From the first part of this notebook...

This step leverages Apache Beam's parallelism as well. But we MUST take care to ensure that a single worker extracts the frames of each segment associated with a given target video. This is because frames are ordered/sequenced: allowing two different workers to extract frames of different segments associated with the same final target video would likely result in frames being extracted out of order (due to parallelism). Therefore, we partition the extraction task by final target video in order to ensure that a single worker handles all segments associated with a single target video. But we do want parallelism to occur at the final target video level.

Before creating the pipeline execution graph, it is worth taking a deeper look into the internals of how we use the `OpenCV` library to process the videos (extract frames). The `data_extractor__beam.SegmentFrameExtractor` wraps `data_extractor__beam.beam_extract_frames`, which houses the logic for this processing. There are also a couple of helper functions that `data_extractor__beam.beam_extract_frames` uses: `data_extractor__beam.capture_segment_video` and `data_extractor__beam.write_frame_to_file`. These will be listed after `data_extractor__beam.beam_extract_frames`, followed by a rough, generic illustration of the underlying OpenCV pattern.
###Code
disp_source(data_extractor__beam.beam_extract_frames)
disp_source(data_extractor__beam.capture_segment_video)
disp_source(data_extractor__beam.write_frame_to_file)
###Output
_____no_output_____
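###Markdown
As a rough, generic illustration of the OpenCV pattern those functions are built around (this is NOT the project's implementation - the function name and file layout are hypothetical):
###Code
# Illustrative sketch: read a video's frames in order with OpenCV and
# write each one to disk (hypothetical paths; not the project's code).
import os
import cv2

def extract_frames_sketch(video_path, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    n = 0
    while True:
        ok, frame = cap.read()  # frames come back in their original sequence
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, f"{n}.jpg"), frame)
        n += 1
    cap.release()
    return n  # number of frames extracted
###Output
_____no_output_____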
###Markdown
We are now ready to execute the "download-videos-extract-frames" pipeline. But first we must... Create the "download-videos-extract-frames" pipeline execution graph
###Code
job_suffix = 'download-videos-extract-frames'
job_name = f"{PIPELINE_BASE_JOB_NAME}--{job_suffix}"
options.update({
'job_name': job_name
})
pipeline_options = PipelineOptions(flags=[], **options) # easier to pass in options from command-line this way
print(f"PipelineOptions:\n{pipeline_options.get_all_options()}\n")
pl = beam.Pipeline(options=pipeline_options)
full_target_vid_index_schemad_pcoll = beam__common.pl__1__read_target_vid_index_csv(pl)
filtered_target_vid_index_schemad_pcoll = data_extractor__beam.pl__2__filter_target_vid_index(full_target_vid_index_schemad_pcoll, pl._options._all_options)
merged_download_results = data_extractor__beam.pl__3__parallel_download_videos(filtered_target_vid_index_schemad_pcoll, pl._options._all_options, n_partitions)
merged_extraction_results = data_extractor__beam.pl__4__parallel_extract_target_video_frames(merged_download_results, pl._options._all_options, n_partitions)
###Output
PipelineOptions:
{'runner': 'InteractiveRunner', 'streaming': False, 'beam_services': {}, 'type_check_strictness': 'DEFAULT_TO_ANY', 'type_check_additional': '', 'pipeline_type_check': True, 'runtime_type_check': False, 'performance_runtime_type_check': False, 'direct_runner_use_stacked_bundle': True, 'direct_runner_bundle_repeat': 0, 'direct_num_workers': 0, 'direct_running_mode': 'multi_threading', 'dataflow_endpoint': 'https://dataflow.googleapis.com', 'project': 'sc-fids-capstone', 'job_name': 'sc-fids-capstone-etl-demo--download-videos-extract-frames', 'staging_location': None, 'temp_location': None, 'region': None, 'service_account_email': None, 'no_auth': False, 'template_location': None, 'labels': None, 'update': False, 'transform_name_mapping': None, 'enable_streaming_engine': False, 'dataflow_kms_key': None, 'flexrs_goal': None, 'hdfs_host': None, 'hdfs_port': None, 'hdfs_user': None, 'hdfs_full_urls': False, 'num_workers': None, 'max_num_workers': None, 'autoscaling_algorithm': None, 'machine_type': None, 'disk_size_gb': None, 'disk_type': None, 'worker_region': None, 'worker_zone': None, 'zone': None, 'network': None, 'subnetwork': None, 'worker_harness_container_image': None, 'sdk_harness_container_image_overrides': None, 'use_public_ips': None, 'min_cpu_platform': None, 'dataflow_worker_jar': None, 'dataflow_job_file': None, 'experiments': None, 'number_of_worker_harness_threads': None, 'profile_cpu': False, 'profile_memory': False, 'profile_location': None, 'profile_sample_rate': 1.0, 'requirements_file': None, 'requirements_cache': None, 'setup_file': None, 'beam_plugins': None, 'save_main_session': False, 'sdk_location': 'default', 'extra_packages': None, 'prebuild_sdk_container_engine': None, 'prebuild_sdk_container_base_image': None, 'docker_registry_push_url': None, 'job_endpoint': None, 'artifact_endpoint': None, 'job_server_timeout': 60, 'environment_type': 'DOCKER', 'environment_config': None, 'environment_options': None, 'sdk_worker_parallelism': 1, 'environment_cache_millis': 0, 'output_executable_path': None, 'artifacts_dir': None, 'job_port': 0, 'artifact_port': 0, 'expansion_port': 0, 'flink_master': '[auto]', 'flink_version': '1.10', 'flink_job_server_jar': None, 'flink_submit_uber_jar': False, 'spark_master_url': 'local[4]', 'spark_job_server_jar': None, 'spark_submit_uber_jar': False, 'spark_rest_url': None, 'on_success_matcher': None, 'dry_run': False, 'wait_until_finish_duration': None, 'pubsubRootUrl': None, 's3_access_key_id': None, 's3_secret_access_key': None, 's3_session_token': None, 's3_endpoint_url': None, 's3_region_name': None, 's3_api_version': None, 's3_verify': None, 's3_disable_ssl': False, 'fidscs_capstone_max_target_videos': 50, 'fidscs_capstone_work_dir': '/tmp', 'fidscs_capstone_data_dir': '/tmp/data', 'fidscs_capstone_tmp_dir': '/tmp/data/tmp', 'fidscs_capstone_videos_dir': '/tmp/data/videos', 'fidscs_capstone_stitched_video_frames_dir': '/tmp/data/stitched_video_frames', 'fidscs_capstone_corpus_dir': '/tmp/data/tmp/ncslgr-xml', 'fidscs_capstone_corpus_ds_path': '/tmp/data/ncslgr-corpus-index.csv', 'fidscs_capstone_document_asl_cconsultant_ds_path': '/tmp/data/document-consultant-index.csv', 'fidscs_capstone_asl_consultant_ds_path': '/tmp/data/consultant-index.csv', 'fidscs_capstone_video_indexes_dir': '/tmp/data/tmp/video_index-20120129', 'fidscs_capstone_selected_video_index_path': '/tmp/data/tmp/video_index-20120129/files_by_video_name.csv', 'fidscs_capstone_video_ds_path': '/tmp/data/document-consultant-targetvideo-index.csv', 
'fidscs_capstone_video_segment_ds_path': '/tmp/data/document-consultant-targetvideo-segment-index.csv', 'fidscs_capstone_video_frame_ds_path': '/tmp/data/document-consultant-targetvideo-frame-index.csv', 'fidscs_capstone_utterance_ds_path': '/tmp/data/document-consultant-utterance-index.csv', 'fidscs_capstone_utterance_video_ds_path': '/tmp/data/document-consultant-utterance-targetvideo-index.csv', 'fidscs_capstone_utterance_token_ds_path': '/tmp/data/document-consultant-utterance-token-index.csv', 'fidscs_capstone_utterance_token_frame_ds_path': '/tmp/data/document-consultant-targetvideo-utterance-token-frame-index.csv', 'fidscs_capstone_vocabulary_ds_path': '/tmp/data/vocabulary-index.csv'}
###Markdown
This time we would like to observe the results (collected into Pandas `DataFrame`s)...
###Code
# we require this in order to make use of ib.show() (which provides visualization of the pcolls specified) or ib.collect() (which creates a pandas dataframe from a pcoll)
# but all pcolls we wish to visualize must be created prior to executing the following line
ib.watch(locals())
###Output
_____no_output_____
###Markdown
And calling `ib.collect` forces the pipeline to actually run...

Run the full "download-videos-extract-frames" pipeline

We do this by collecting `Pcollection`s into Pandas `DataFrame`s for viewing with *Interactive Beam*.
###Code
print(f"\n\n****************************** Starting pipeline job: {job_name} ******************************")
df_full_target_vid_index_schemad_pcoll = ib.collect(full_target_vid_index_schemad_pcoll)
df_filtered_target_vid_index_schemad_pcoll = ib.collect(filtered_target_vid_index_schemad_pcoll)
df_merged_download_results = ib.collect(merged_download_results)
df_merged_extraction_results = ib.collect(merged_extraction_results)
print(f"****************************** Finished pipeline job: {job_name} ******************************")
df_filtered_target_vid_index_schemad_pcoll
df_merged_download_results
df_merged_extraction_results.columns = ['segment_fname', 'frames', 'segment_dicts']
df_merged_extraction_results
print(f"We extracted {df_merged_extraction_results.frames.sum()} frames from {df_merged_extraction_results.segment_fname.count()} downloaded segments.")
###Output
We extracted 4382 frames from 50 downloaded segments.
|
notebooks/01-saxs-theory-p1/02-circle-image.ipynb | ###Markdown
2. Drawing Circle as 2D Image

* It is assumed here that you are already familiar with boolean indexing of numpy arrays.
* If not, see the previous notebook.
###Code
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
* Define a function which takes a boolean array index as its argument
###Code
N = 100
def plot_a_shape(shape):
canvas = np.zeros((N,N))
canvas[shape] = 1
fig, ax = plt.subplots()
im = ax.imshow(canvas)
ax.invert_yaxis()
###Output
_____no_output_____
###Markdown
* Define a circle as a boolean array index and give it to the function.
###Code
x = y = np.arange(N)
xx, yy = np.meshgrid(x, y)
circle = (xx - 50)**2 + (yy - 50)**2 < 10**2
plot_a_shape(circle)
###Output
_____no_output_____
###Markdown
* Define a wrapper function which is specialized for circles.
###Code
def plot_a_circle(a, b, r):
plot_a_shape((xx - a)**2 + (yy - b)**2 < r**2)
plot_a_circle(50, 50, 10)
###Output
_____no_output_____
###Markdown
* You can make it interactive with `interactive` (from ipywidgets).
* Play with the sliders and see how they change the position and size.
###Code
from ipywidgets import interactive
interactive_plot = interactive(plot_a_circle, a=(0,99), b=(0,99), r=(0,99))
interactive_plot
###Output
_____no_output_____ |
PendulumMDP.ipynb | ###Markdown
Implicit Midpoint Shadow Integration of mathematical pendulum
###Code
import numpy as np
import matplotlib.pyplot as plt
from ShadowIntegrators import *
from CreateTrainingData import *
###Output
_____no_output_____
###Markdown
data creation for training
###Code
# setup problem for creation of training data
f = lambda x: -np.sin(x)
H = lambda x: 1/2*x[1]**2+(1-np.cos(x[0]))
h = 0.3
spacedim=[(-2*np.pi, 2*np.pi), (-1.2, 1.2)]
n_train = 400
# create training data
start,final = CreateTrainingData(spacedim,f,n_train,h)
###Output
_____no_output_____
###Markdown
initialisation and training
###Code
# initialise shadow integrator
SI = ShadowMidpoint()
# train integrator
SI.train(start,final,h) # output: residual of least-squares problem, rank of linear system
###Output
Start training with for 400 data points
Start computation of covariance matrix.
Covariance matrix of shape (400, 400)computed.
Start Cholesky decomposition of (400, 400) Matrix
Cholesky decomposition completed.
Create LHS of linear system for H at test points.
Creation of linear system completed.
Solve least square problem of dimension (801, 400)
###Markdown
prediction of motion and conservation of exact energy H
###Code
## predict a motion starting from z0
N = 4000
z0 = np.array([0.4,0.])
# use symplectic shadow integration
trj = SI.predictMotion(z0,N)
# integrate exact vector field for comparison
trjSE = SI.classicTrajectory(z0,lambda z: np.array([z[1],f(z[0])]),h,N)
# as a reference solution, compute exact vector field with tiny h
trj_ref = SI.classicTrajectory(z0,lambda z: np.array([z[1],f(z[0])]),h/800,N)
# compare
plt.plot(trj_ref[0],trj_ref[1],'k')
plt.plot(trjSE[0],trjSE[1],color='lightgray')
plt.plot(trj[0],trj[1])
plt.xlabel('$q$')
plt.ylabel('$p$')
# plot energy conservation
Htrj = H(trj)
fig,ax = plt.subplots(1, 2,figsize=(20,5))
ax[0].plot(h*np.arange(0,len(trj[0,:])),H(trjSE),color='lightgray')
ax[0].plot(h*np.arange(0,len(trj[0,:])),Htrj)
ax[0].set_xlabel('$t$')
ax[0].set_ylabel('$H$')
ax[1].plot(h*np.arange(0,len(trj[0,:])),Htrj-np.mean(Htrj))
ax[1].set_xlabel('$t$')
ax[1].set_ylabel('$H$')
###Output
_____no_output_____
###Markdown
system identification phase plot
###Code
# creation of grid on phase space
n0 = 100
n1 = 100
yy0,yy1 = np.meshgrid(np.linspace(spacedim[0][0],spacedim[0][1],n0),np.linspace(spacedim[1][0],spacedim[1][1],n1))
Y = [ np.array([yy0[i,j],yy1[i,j]]) for i in range(0,n1) for j in range(0,n0) ]
Y = np.array(Y)
# compute recovered H over grid
HmodY = np.array([SI.HRecover(x) for x in Y])
# compute reference
n0 = 400
n1 = 400
yy0High,yy1High = np.meshgrid(np.linspace(spacedim[0][0],spacedim[0][1],n0),np.linspace(spacedim[1][0],spacedim[1][1],n1))
H_ref=H([yy0High,yy1High])
# compare recovered H with reference
plt.contour(yy0High,yy1High,H([yy0High,yy1High]),colors=['grey'])
plt.contour(yy0,yy1,np.reshape(HmodY[:,2],yy0.shape))
plt.xlabel('$q$')
plt.ylabel('$p$')
###Output
_____no_output_____
###Markdown
conservation properties of recovered H along trajectory
###Code
# compute recovered H along trajectory
rYt=np.array([SI.HRecover(x) for x in trj.transpose()])
t=h*np.arange(0,len(rYt))
rYtm=np.mean(rYt,0)
fig, axs = plt.subplots(1, 3,figsize=(30,6))
axs[0].plot(t,rYt[:,0]-rYtm[0],t,rYt[:,1]-rYtm[1],t,rYt[:,2]-rYtm[2])
axs[1].plot(t,rYt[:,1]-rYtm[1],'C1',t,rYt[:,2]-rYtm[2],'C2')
axs[2].plot(t,rYt[:,2]-rYtm[2],'C2')
axs[0].set_xlabel('$t$')
axs[1].set_xlabel('$t$')
axs[2].set_xlabel('$t$')
###Output
_____no_output_____ |
notebooks/UNET_ReverseEngineering/ReverseEngineering.ipynb | ###Markdown
Step I: Reading data
###Code
from DeepDeconv.utils.batch_utils import dynamic_batches
#Input the directory containing the fits file
data_directory = '/data/DeepDeconv/data/vsc_euclidpsfs/reshuffle/'
write_path="/data/DeepDeconv/data/vsc_euclidpsfs/reshuffle/"
#Retrieves the list of all the files
import glob
gal_files = glob.glob(data_directory+'image-*-multihdu.fits')
gal_files.sort()
SNR = [20,100]#Range of SNR simulated
noiseless_img_hdu = 0
psf_hdu = 1
targets_hdu = 2
deconv_mode = 'TIKHONOV'
gen = dynamic_batches(gal_files[2:] , batch_size=32, noise_std=None, SNR=SNR,
noiseless_img_hdu=noiseless_img_hdu, targets_hdu=targets_hdu,
psf_hdu=psf_hdu, image_dim=96, image_per_row=100,
deconv_mode=deconv_mode)
a = next(gen)
a[0].shape, a[1].shape, a[2].shape
from matplotlib.pyplot import subplot, imshow  # explicit import (a %pylab-style setup cell appears to be missing)
subplot(121)
imshow(a[0][3,:,:,0])
subplot(122)
imshow(a[1][3,:,:,0])
###Output
_____no_output_____
###Markdown
Step II: Define a network
###Code
# Disclaimer.... this is a very stupid network
import tensorflow as tf
from tensorflow.keras.optimizers import Adam

inputs = tf.keras.Input(shape=[96, 96, 1])
net = tf.keras.layers.Conv2D(32, 3, padding='same')(inputs)
net = tf.keras.layers.Activation('relu')(net)
net = tf.keras.layers.Conv2D(16, 3, padding='same')(net)
output = tf.keras.layers.Conv2D(1, 3, padding='same')(net)
# Compile the model (the keyword is `inputs`, not `input`)
model = tf.keras.Model(inputs=inputs, outputs=output)
model.compile(optimizer=Adam(lr=1e-3), loss='mse')
###Output
_____no_output_____
###Markdown
Step III: Training
###Code
# Train the model (fit_generator counts steps, i.e. batches, per epoch)
history = model.fit_generator(gen,
                              steps_per_epoch=10000 // 32,  # ~10,000 samples per epoch at batch_size=32
                              epochs=20)
# have a look at history
history
###Output
_____no_output_____
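###Markdown
If you want more than the raw `History` object, the recorded loss values can be plotted directly (this assumes the training cell above was run):
###Code
# Sketch: visualize the training loss curve stored in the History object
import matplotlib.pyplot as plt

plt.plot(history.history['loss'])
plt.xlabel('epoch')
plt.ylabel('MSE loss')
plt.show()
###Output
_____no_output_____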
###Markdown
Step IV: Applying the model
###Code
# The model can be applied like so:
res = model(a[0])
# This should return the solution of the deconvolution problem
###Output
_____no_output_____ |
notebooks/plot_mnist.ipynb | ###Markdown
Accuracy of the network
###Code
n_updates_tick = 1000
plot_accuracy(all_layers_folders, n_updates_tick=n_updates_tick, ymax=60)
###Output
Acc test last layer last epoch: [0.98828125 0.98828125]
(n_runs, n_epoch/test_interval, n_batch_test, n_layers): (1, 500, 2, 3)
Accuracy: [0.51757812 0.89648438 0.93359375 0.93652344 0.94726562 0.94433594
0.95507812 0.94921875 0.95507812 0.96582031 0.9453125 0.95703125
0.95898438 0.9609375 0.95996094 0.95996094 0.94824219 0.96582031
0.96777344 0.95898438 0.96972656 0.96484375 0.97167969 0.96289062
0.95117188 0.96679688 0.96679688 0.97460938 0.95605469 0.97167969
0.9609375 0.97558594 0.97265625 0.96972656 0.97558594 0.97460938
0.96484375 0.97167969 0.97460938 0.97070312 0.97558594 0.96875
0.96582031 0.97070312 0.97460938 0.96679688 0.9609375 0.97460938
0.96386719 0.97460938 0.97265625 0.96972656 0.97460938 0.97460938
0.98144531 0.9765625 0.97949219 0.98046875 0.97167969 0.98046875
0.96972656 0.97949219 0.9765625 0.9765625 0.97753906 0.984375
0.9765625 0.97363281 0.9765625 0.98339844 0.9765625 0.97363281
0.98046875 0.97363281 0.97851562 0.97070312 0.97558594 0.9765625
0.98046875 0.9765625 0.97460938 0.97363281 0.97753906 0.97851562
0.97851562 0.97558594 0.97167969 0.9765625 0.97460938 0.98144531
0.97851562 0.97949219 0.97363281 0.97363281 0.97265625 0.97460938
0.97753906 0.97363281 0.97949219 0.98144531 0.97851562 0.97949219
0.98242188 0.98046875 0.9765625 0.98144531 0.98242188 0.98242188
0.98144531 0.97851562 0.9765625 0.98144531 0.97851562 0.98046875
0.97363281 0.98242188 0.98339844 0.97753906 0.97949219 0.98046875
0.97949219 0.98046875 0.98242188 0.98242188 0.98144531 0.98535156
0.97753906 0.97363281 0.97851562 0.98144531 0.98046875 0.98144531
0.98242188 0.98730469 0.98242188 0.98339844 0.97851562 0.98339844
0.98339844 0.98144531 0.98339844 0.97851562 0.98144531 0.98632812
0.98339844 0.98046875 0.98242188 0.98242188 0.98242188 0.97851562
0.97851562 0.98730469 0.98339844 0.984375 0.98046875 0.984375
0.98144531 0.98144531 0.98535156 0.97851562 0.97949219 0.97265625
0.97949219 0.98046875 0.98632812 0.98535156 0.98046875 0.97753906
0.98144531 0.984375 0.97949219 0.98242188 0.98339844 0.98242188
0.98828125 0.98144531 0.98632812 0.98339844 0.98242188 0.984375
0.97851562 0.98242188 0.984375 0.98144531 0.984375 0.98339844
0.98046875 0.98535156 0.97851562 0.98242188 0.98632812 0.98242188
0.97949219 0.97949219 0.98046875 0.97851562 0.98046875 0.98242188
0.97949219 0.98144531 0.98828125 0.984375 0.98339844 0.98535156
0.984375 0.98828125 0.98535156 0.98339844 0.98339844 0.98535156
0.98046875 0.98339844 0.98046875 0.98632812 0.98339844 0.98730469
0.98925781 0.984375 0.98632812 0.98632812 0.98925781 0.98535156
0.98339844 0.98730469 0.984375 0.98339844 0.98730469 0.98144531
0.98144531 0.98339844 0.984375 0.98535156 0.98339844 0.984375
0.98730469 0.984375 0.98632812 0.984375 0.98339844 0.98046875
0.98242188 0.98339844 0.984375 0.98535156 0.98632812 0.984375
0.98632812 0.984375 0.97949219 0.98339844 0.984375 0.98242188
0.98144531 0.984375 0.98535156 0.98144531 0.98632812 0.98730469
0.984375 0.98339844 0.98730469 0.98632812 0.984375 0.98925781
0.98925781 0.98242188 0.98632812 0.98339844 0.98632812 0.98535156
0.98242188 0.98242188 0.98339844 0.98144531 0.984375 0.984375
0.98339844 0.98632812 0.98828125 0.98144531 0.98632812 0.984375
0.98632812 0.98046875 0.98339844 0.98632812 0.98339844 0.98535156
0.98242188 0.98828125 0.98339844 0.98339844 0.98339844 0.98632812
0.98144531 0.984375 0.98339844 0.98632812 0.98730469 0.98925781
0.98339844 0.98730469 0.98535156 0.98632812 0.98925781 0.98730469
0.98339844 0.98730469 0.98632812 0.98730469 0.98535156 0.98535156
0.98535156 0.98730469 0.984375 0.98535156 0.98730469 0.98242188
0.98339844 0.98632812 0.98535156 0.98632812 0.98535156 0.984375
0.984375 0.98632812 0.984375 0.98535156 0.98632812 0.98730469
0.98730469 0.98632812 0.98632812 0.98339844 0.98632812 0.98144531
0.984375 0.98339844 0.98339844 0.98535156 0.98730469 0.98730469
0.98828125 0.98535156 0.98339844 0.98242188 0.984375 0.98535156
0.984375 0.98535156 0.98632812 0.984375 0.98632812 0.98828125
0.984375 0.98632812 0.98632812 0.98535156 0.98730469 0.984375
0.98339844 0.984375 0.98828125 0.98828125 0.984375 0.98339844
0.984375 0.98632812 0.98632812 0.98828125 0.98632812 0.98632812
0.984375 0.98730469 0.98730469 0.984375 0.98828125 0.98144531
0.98730469 0.98730469 0.98828125 0.98730469 0.98632812 0.98730469
0.98242188 0.98535156 0.98730469 0.984375 0.98632812 0.984375
0.97949219 0.984375 0.98828125 0.98828125 0.98339844 0.98535156
0.99023438 0.99023438 0.984375 0.98632812 0.98730469 0.98730469
0.984375 0.98730469 0.98730469 0.98730469 0.98632812 0.98632812
0.98632812 0.98925781 0.98632812 0.98339844 0.98828125 0.98242188
0.98535156 0.98632812 0.98535156 0.98632812 0.98925781 0.984375
0.98730469 0.98632812 0.98925781 0.98632812 0.98828125 0.98242188
0.98632812 0.984375 0.98632812 0.98535156 0.98535156 0.98632812
0.984375 0.98535156 0.98535156 0.984375 0.98730469 0.98828125
0.98828125 0.98535156 0.98632812 0.98730469 0.98730469 0.98828125
0.98632812 0.98828125 0.98730469 0.98730469 0.984375 0.98339844
0.984375 0.98632812 0.98730469 0.98535156 0.98632812 0.98632812
0.98535156 0.98144531 0.98632812 0.98730469 0.98632812 0.98242188
0.984375 0.98632812 0.98730469 0.98632812 0.984375 0.98632812
0.98242188 0.98535156 0.98730469 0.98535156 0.98535156 0.98535156
0.984375 0.98632812 0.98925781 0.984375 0.98730469 0.984375
0.98632812 0.98535156 0.98632812 0.98730469 0.98242188 0.98730469
0.98632812 0.98632812 0.984375 0.98632812 0.98925781 0.984375
0.98632812 0.98730469 0.98730469 0.984375 0.98730469 0.98632812
0.98632812 0.98828125]
Accuracy ref: [0.10449219 0.86621094 0.9140625 0.92480469 0.95019531 0.95507812
0.96484375 0.96484375 0.96777344 0.97167969 0.97265625 0.97949219
0.97265625 0.97558594 0.97753906 0.97363281 0.97363281 0.97460938
0.97851562 0.98242188 0.97949219 0.98144531 0.98144531 0.97753906
0.98242188 0.98242188 0.98144531 0.98339844 0.98339844 0.984375
0.98046875 0.98339844 0.98339844 0.97851562 0.98339844 0.984375
0.98339844 0.98242188 0.98242188 0.984375 0.98632812 0.984375
0.98632812 0.98535156 0.98339844 0.984375 0.98144531 0.984375
0.98242188 0.98828125 0.98535156 0.98242188 0.98339844 0.984375
0.98339844 0.98535156 0.98730469 0.98535156 0.98535156 0.984375
0.98339844 0.98339844 0.98632812 0.98632812 0.984375 0.98535156
0.98535156 0.98828125 0.98339844 0.98730469 0.98632812 0.98535156
0.98828125 0.99121094 0.98632812 0.98632812 0.98046875 0.98730469
0.98632812 0.98339844 0.98535156 0.98730469 0.98535156 0.98242188
0.98535156 0.98730469 0.98632812 0.98632812 0.98632812 0.98339844
0.98730469 0.98632812 0.98632812 0.98925781 0.984375 0.98535156
0.98828125 0.98730469 0.98632812 0.98730469 0.98730469 0.98925781
0.98632812 0.984375 0.98925781 0.98535156 0.98828125 0.98632812
0.98339844 0.98730469 0.98535156 0.98828125 0.98632812 0.984375
0.98925781 0.98828125 0.98535156 0.98535156 0.984375 0.98632812
0.99023438 0.98828125 0.98730469 0.98730469 0.98632812 0.98828125
0.98730469 0.98828125 0.98632812 0.98632812 0.98730469 0.98730469
0.98828125 0.98632812 0.98632812 0.98632812 0.98535156 0.98828125
0.98730469 0.98730469 0.98828125 0.98535156 0.98730469 0.98535156
0.98535156 0.98632812 0.98925781 0.98730469 0.98730469 0.98828125
0.984375 0.98535156 0.98730469 0.98925781 0.98730469 0.98828125
0.98632812 0.98535156 0.98730469 0.98242188 0.98535156 0.98535156
0.984375 0.98828125 0.98730469 0.98828125 0.98828125 0.98632812
0.98828125 0.98828125 0.98535156 0.98925781 0.98535156 0.98828125
0.98925781 0.98632812 0.98632812 0.98632812 0.98632812 0.98828125
0.99023438 0.99023438 0.99023438 0.98925781 0.98925781 0.98730469
0.98632812 0.98632812 0.98535156 0.98339844 0.98730469 0.98730469
0.98632812 0.98925781 0.98632812 0.984375 0.98535156 0.98730469
0.98535156 0.984375 0.98535156 0.984375 0.98632812 0.98730469
0.98730469 0.98535156 0.98535156 0.98730469 0.98632812 0.98730469
0.98632812 0.98828125 0.98828125 0.98730469 0.98925781 0.98730469
0.98730469 0.98925781 0.98828125 0.98730469 0.98730469 0.98925781
0.98828125 0.98730469 0.98535156 0.99023438 0.98632812 0.98632812
0.98632812 0.98730469 0.99121094 0.98632812 0.98925781 0.984375
0.98925781 0.98828125 0.99023438 0.98925781 0.98632812 0.98730469
0.984375 0.98535156 0.98925781 0.98730469 0.98535156 0.98828125
0.99023438 0.99023438 0.98828125 0.98925781 0.98828125 0.98828125
0.98632812 0.984375 0.98730469 0.98925781 0.98632812 0.98828125
0.98730469 0.98730469 0.98730469 0.98730469 0.98828125 0.98632812
0.98535156 0.98730469 0.98925781 0.98828125 0.98730469 0.99023438
0.984375 0.98828125 0.98535156 0.98730469 0.98730469 0.98828125
0.98925781 0.99023438 0.98730469 0.98535156 0.984375 0.98730469
0.98730469 0.98828125 0.98632812 0.98632812 0.98828125 0.98730469
0.98925781 0.98828125 0.98535156 0.98632812 0.98828125 0.98730469
0.98535156 0.98828125 0.98535156 0.98730469 0.98730469 0.98925781
0.98925781 0.98828125 0.99121094 0.98730469 0.98925781 0.98925781
0.98730469 0.98730469 0.984375 0.98925781 0.98535156 0.98828125
0.98535156 0.98632812 0.99023438 0.984375 0.98632812 0.984375
0.99121094 0.98925781 0.99023438 0.9921875 0.98925781 0.98925781
0.98730469 0.98730469 0.98730469 0.98730469 0.98242188 0.98828125
0.98828125 0.984375 0.98925781 0.98925781 0.9921875 0.98828125
0.98730469 0.98632812 0.98632812 0.98632812 0.98925781 0.99023438
0.99023438 0.98925781 0.98632812 0.98730469 0.98730469 0.98828125
0.99121094 0.98828125 0.99023438 0.98925781 0.98730469 0.99121094
0.98925781 0.98632812 0.98730469 0.98828125 0.99023438 0.98925781
0.98632812 0.98730469 0.98925781 0.98925781 0.98730469 0.98925781
0.98925781 0.98828125 0.98730469 0.9921875 0.98925781 0.98632812
0.98925781 0.98828125 0.98730469 0.98828125 0.98730469 0.98828125
0.99023438 0.98730469 0.99023438 0.98828125 0.99023438 0.98632812
0.98632812 0.98925781 0.98828125 0.98925781 0.98828125 0.98925781
0.98925781 0.99121094 0.99023438 0.99023438 0.984375 0.98730469
0.98828125 0.98925781 0.98535156 0.98730469 0.98828125 0.99023438
0.98828125 0.99023438 0.98632812 0.98730469 0.98828125 0.98828125
0.99121094 0.98925781 0.98535156 0.98925781 0.98730469 0.98828125
0.98730469 0.99023438 0.98730469 0.99121094 0.98925781 0.98828125
0.99121094 0.98828125 0.98925781 0.99121094 0.98828125 0.98925781
0.98828125 0.98828125 0.99023438 0.99023438 0.99023438 0.99023438
0.99023438 0.98632812 0.98632812 0.98535156 0.98730469 0.98730469
0.98828125 0.99023438 0.98828125 0.98925781 0.98339844 0.98828125
0.984375 0.98730469 0.98925781 0.98632812 0.98925781 0.9921875
0.98925781 0.98535156 0.98339844 0.98730469 0.98925781 0.98828125
0.98828125 0.98535156 0.98925781 0.98730469 0.98925781 0.98925781
0.98730469 0.98925781 0.98925781 0.98730469 0.98632812 0.98925781
0.98730469 0.99023438 0.99023438 0.98730469 0.98828125 0.98730469
0.98925781 0.98535156 0.98828125 0.98730469 0.98730469 0.98632812
0.98828125 0.98730469 0.98632812 0.98925781 0.98730469 0.98730469
0.98730469 0.98828125 0.984375 0.98925781 0.99023438 0.98828125
0.98925781 0.98632812 0.98828125 0.98730469 0.98925781 0.98925781
0.99023438 0.98925781]
###Markdown
Accuracy of the network at the beginning
###Code
plot_accuracy(all_layers_begin_folders, comment="_begin", xmax=3000, n_layers=3)
###Output
Acc test last layer last epoch: [0.94335938 0.92382812]
(n_runs, n_epoch/test_interval, n_batch_test, n_layers): (1, 50, 2, 3)
Accuracy: [0.51953125 0.78027344 0.81738281 0.83300781 0.85839844 0.8359375
0.86621094 0.875 0.89257812 0.87011719 0.86914062 0.88964844
0.90625 0.89648438 0.90039062 0.91113281 0.91015625 0.92382812
0.90527344 0.91210938 0.91601562 0.93359375 0.9296875 0.91113281
0.91503906 0.921875 0.9375 0.93554688 0.94921875 0.94042969
0.92871094 0.93359375 0.92871094 0.91503906 0.91992188 0.93554688
0.9375 0.92871094 0.93847656 0.94238281 0.93457031 0.9140625
0.93847656 0.9296875 0.93847656 0.92578125 0.93066406 0.92871094
0.93847656 0.93359375]
Accuracy ref: [0.10449219 0.14257812 0.25390625 0.35546875 0.51171875 0.65625
0.69628906 0.72070312 0.73535156 0.70507812 0.71386719 0.76953125
0.78808594 0.79296875 0.83886719 0.84667969 0.86328125 0.82421875
0.85546875 0.84472656 0.86621094 0.85644531 0.86816406 0.875
0.8671875 0.87988281 0.87792969 0.8828125 0.8828125 0.88378906
0.88378906 0.88964844 0.89160156 0.89648438 0.890625 0.89453125
0.91015625 0.90820312 0.90917969 0.90722656 0.9140625 0.90917969
0.92382812 0.91992188 0.91992188 0.91894531 0.91894531 0.91015625
0.92382812 0.92382812]
|
analysis/002_Label_training_images.ipynb | ###Markdown
Point to the training images, label using the 'DeepPoseKit' annotator (Graving et al. 2019)
###Code
# The skeleton file specifies the affinity fields (nose-to-tail, ear-to-nose, etc)
# between the clicked keypoints (nose, ears, tail, implant).
training_sets = glob.glob('training_sets' + '/*.h5')
skeletons = glob.glob('training_sets' + '/*skeleton_v2*')
print(training_sets)
print(skeletons)
app = Annotator(datapath=training_sets[0],
dataset='c_images',
skeleton=skeletons[0],
shuffle_colors=False,
text_scale=.3)
app.run()
###Output
Saved
Saved
|
code/4_optimized_model.ipynb | ###Markdown
U-AutoRec baseline
###Code
import os
from tqdm import tqdm, trange
import numpy as np
import scipy.sparse
from scipy.sparse import csr_matrix
import pandas as pd
from pandas.api.types import CategoricalDtype
import torch
from torch import nn, tensor, optim
from torch.autograd import Variable
from torch.utils.data import Dataset, DataLoader
import torch.nn.functional as F
import matplotlib.pyplot as plt
import sklearn
from sklearn.model_selection import train_test_split
from sklearn.metrics import ndcg_score
import scipy.optimize as opt
import seaborn as sns
from hyperopt import hp, tpe, fmin, space_eval, STATUS_OK, Trials
from time import time
###Output
_____no_output_____
###Markdown
Load games
###Code
WORKING_DIR = os.path.dirname(os.getcwd()) # repo directory
def read_df(path):
return pd.read_pickle(os.path.join(WORKING_DIR, path))
games = read_df('data/cleaned/games_normalized.pkl.gz')
N_GAMES = len(games)
games.head(1)
###Output
_____no_output_____
###Markdown
GPU

Get the current device; the model and training data will be copied to this device.
###Code
cuda_device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
cpu_device = torch.device('cpu')
print(f"Using device: {cuda_device}")
###Output
Using device: cuda
###Markdown
PyTorch dataset

Create a custom PyTorch dataset that works with the scipy csr matrix. We can quickly get the data for item idx using self.data_matrix.getrow(idx) (for CSR). The resulting matrix must be flattened and converted to a numpy array.

Note: for some reason reshape is much faster than flatten; also, creating the mask on the GPU during training instead of on the CPU is much faster.
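A minimal sketch of that CSR row-access pattern (toy matrix with hypothetical names; note that the class below ultimately stores dense tensors instead):
###Code
# Sketch: fast row access on a scipy CSR matrix, flattened via reshape
# (toy stand-in; `data_matrix` here is not the class attribute).
import numpy as np
from scipy.sparse import csr_matrix

data_matrix = csr_matrix(np.eye(4, dtype=np.float32))
idx = 2
row = data_matrix.getrow(idx).toarray().reshape(-1)  # reshape beats flatten here
print(row)
###Output
_____no_output_____
###Markdown
The dataset implementation itself: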
###Code
def normalize_playtime(x: tensor, dims):
# dim 0: normalize playtime for each game
# dim 1: normalize playtime for each user
for d in dims:
x = F.normalize(x, dim=d)
return x
class UserRatingSet(Dataset):
def __init__(self, path_interactions, path_meta_data, n_splits, exclude_meta_data=None):
if exclude_meta_data is None:
exclude_meta_data = []
df = read_df(path_interactions) #.iloc[:1000]
self.n_users = len(df)
self.train_set_ratings = torch.zeros((n_splits, self.n_users, N_GAMES), dtype=torch.float32)
self.train_set_mask = torch.zeros((n_splits, self.n_users, N_GAMES), dtype=torch.bool)
meta_data_info = read_df(path_meta_data)
self.n_meta = meta_data_info
for name in exclude_meta_data:
self.n_meta = self.n_meta[self.n_meta['Name'] != name]
self.n_meta = self.n_meta['N'].sum()
self.train_meta_data = torch.zeros((n_splits, self.n_users, self.n_meta), dtype=torch.float32)
self.val_set = []
self.test_set = []
self.item_counts = []
for split in range(n_splits):
item_ids = df['train'].apply(lambda x: x[split, 0])
playtimes = df['train'].apply(lambda x: x[split, 1])
for user_id, game_ids in tqdm(enumerate(item_ids), total=self.n_users, desc='Rating matrix'):
self.train_set_ratings[split, user_id, game_ids] = torch.from_numpy(1 + playtimes[user_id]).float() # ratings
self.train_set_mask[split, user_id, game_ids] = True # mask
item_count = item_ids.apply(len)
self.item_counts.append([1 + item_count[item_count >= i].index[-1] for i in range(6)])
self.val_set.append(df['val'].apply(lambda x: set(x[split, 0])))
self.test_set.append(df['test'].apply(lambda x: set(x[split, 0])))
user_profiles = df['user_profile_train'].apply(lambda x: x[split])
for user_id, profile in tqdm(enumerate(user_profiles), total=self.n_users, desc='Meta_data'):
start = 0
for i, (name, n) in meta_data_info[['Name', 'N']].iterrows():
if name in exclude_meta_data:
continue
idx_bin, counts = profile[i]
v = torch.from_numpy(counts).float()
self.train_meta_data[split, user_id, start + idx_bin] = v/v.sum()
start += n
# normalize
self.train_set_ratings = self.train_set_ratings.to(cuda_device)
self.train_set_ratings = torch.log(1 + torch.log(1 + self.train_set_ratings))
self.train_set_ratings /= self.train_set_ratings.max()
self.train_set_ratings = self.train_set_ratings.to(cpu_device)
self.current_split, self.min_user_items = 0, 0
self.rating_set_cuda = self.train_set_ratings[self.current_split].to(cuda_device)
self.mask_cuda = self.train_set_mask[self.current_split].to(cuda_device)
self.meta_data_cuda = self.train_meta_data[self.current_split].to(cuda_device)
def settings(self, split=None, min_user_items=None):
reload = False
        if split is not None:  # "is not None" so that split=0 is handled correctly
assert 0 <= split < self.train_set_ratings.shape[0]
self.current_split = split
reload = True
        if min_user_items is not None:  # likewise, 0 is a valid value
assert 0 <= min_user_items < 6
self.min_user_items = min_user_items
self.n_users = self.item_counts[self.current_split][self.min_user_items]
if reload:
self.rating_set_cuda.to(cpu_device)
self.rating_set_cuda = self.train_set_ratings[self.current_split].to(cuda_device)
self.mask_cuda.to(cpu_device)
self.mask_cuda = self.train_set_mask[self.current_split].to(cuda_device)
self.meta_data_cuda.to(cpu_device)
self.meta_data_cuda = self.train_meta_data[self.current_split].to(cuda_device)
return self
def val_data(self):
return self.val_set[self.current_split]
def test_data(self):
return self.test_set[self.current_split]
def __len__(self):
return self.n_users
def __getitem__(self, user_id):
return self.rating_set_cuda[user_id], self.mask_cuda[user_id], self.meta_data_cuda[user_id]
def __str__(self):
memory_gpu = self.rating_set_cuda.element_size() * self.rating_set_cuda.nelement()
memory_gpu += self.mask_cuda.element_size() * self.mask_cuda.nelement()
memory_gpu += self.meta_data_cuda.element_size() * self.meta_data_cuda.nelement()
memory_cpu = self.train_set_ratings.element_size() * self.train_set_ratings.nelement()
memory_cpu += self.train_set_mask.element_size() * self.train_set_mask.nelement()
memory_cpu += self.train_meta_data.element_size() * self.train_meta_data.nelement()
return f'Dataset(\n' + \
f'\tusers={len(self)},\n' + \
f'\tgames={int(self.rating_set_cuda.shape[1])},\n' + \
f'\tmin_items_per_user={self.min_user_items},\n' + \
f'\tmemory_cpu={round(memory_cpu/(2**30),2)} GB,\n' + \
            f'\tmemory_gpu={round(memory_gpu/(2**30),2)} GB\n)'  # closes the 'Dataset(' repr
###Output
_____no_output_____
###Markdown
Load data
###Code
N_SPLITS = 1
# load all the train/val/test splits
user_rating_set = UserRatingSet(
'data/cleaned/interactions_splits_meta_data.pkl.gz',
'data/cleaned/meta_data_info.pkl.gz',
n_splits=N_SPLITS,
exclude_meta_data=['developer_category', 'publisher_category']
)
print(user_rating_set)
#interr = InteractionDataset(interactions[0, 'train'], model.n_games, model.rating_bias)
for u in [0, 999]:
r = user_rating_set[u][0]
r = r[r > 0]
print(f'first 100 ratings of user {u}')
print(r[:100])
print(f'# ratings from user {u}')
print(len(r))
###Output
first 100 ratings of user 0
tensor([0.1979, 0.7705, 0.6620, 0.7842, 0.7012, 0.6430, 0.1979, 0.1979, 0.8303,
0.8023, 0.6934, 0.7477, 0.7189, 0.6960, 0.1979, 0.1979, 0.2786, 0.8931,
0.6620, 0.8323, 0.6620, 0.7106, 0.6485, 0.7386, 0.1979, 0.7418, 0.6833,
0.1979, 0.4778, 0.7873, 0.6724, 0.5766, 0.1979, 0.1979, 0.7379, 0.7752,
0.7313, 0.6430, 0.2786, 0.1979, 0.7687, 0.7579, 0.7198, 0.6542, 0.7458,
0.7653, 0.1979, 0.7379, 0.6620, 0.1979, 0.1979, 0.8169, 0.1979, 0.1979,
0.6973, 0.6430, 0.1979, 0.8223, 0.1979, 0.7038, 0.1979, 0.7325, 0.6430,
0.8052, 0.5252, 0.6620, 0.1979, 0.6737, 0.1979, 0.6620, 0.1979, 0.1979,
0.6758, 0.6430, 0.6707, 0.6565, 0.6221, 0.6698, 0.1979, 0.1979, 0.7372,
0.1979, 0.6620, 0.1979, 0.1979, 0.6843, 0.6200, 0.6430, 0.1979, 0.6430,
0.6368, 0.6660, 0.6762, 0.6962, 0.6352, 0.6582, 0.6749, 0.1979, 0.1979,
0.6511], device='cuda:0')
# ratings from user 0
674
first 100 ratings of user 999
tensor([0.1979, 0.7222, 0.2786, 0.1979, 0.6883, 0.5510, 0.7173, 0.5374, 0.4990,
0.2786, 0.3605, 0.1979, 0.6737, 0.7420, 0.8144, 0.1979, 0.5541, 0.6068,
0.6811, 0.1979, 0.5650, 0.1979, 0.6630, 0.5336, 0.1979, 0.6407, 0.6524,
0.6231, 0.6651, 0.1979, 0.5510, 0.4369, 0.3605, 0.5445, 0.6472, 0.6189,
0.2786, 0.1979, 0.6829, 0.1979, 0.3605, 0.6392, 0.6630, 0.1979, 0.1979,
0.6530, 0.4778, 0.8331, 0.6189, 0.6777, 0.3605, 0.1979, 0.5374, 0.7227,
0.1979, 0.5106, 0.6119, 0.7503, 0.7703, 0.1979, 0.1979, 0.1979, 0.7180,
0.5766, 0.5699, 0.6873, 0.1979, 0.1979, 0.4061, 0.6665, 0.6524, 0.1979,
0.2786, 0.6811, 0.2786, 0.6155, 0.7282, 0.1979, 0.6670, 0.1979, 0.6770,
0.1979, 0.1979, 0.1979, 0.1979, 0.1979, 0.3859, 0.1979, 0.1979, 0.5206,
0.7648, 0.1979, 0.6951, 0.1979, 0.1979, 0.1979, 0.3605, 0.8116, 0.1979,
0.4597], device='cuda:0')
# ratings from user 999
154
###Markdown
The AutoRec model

Describes the architecture of the neural network. The input x can be either the item ratings of a user (U-AutoRec) or the user ratings of an item (I-AutoRec).
###Code
class AutoEncoder(nn.Module):
def __init__(self, features_in, hidden_size, meta_features):
super().__init__()
self.hidden_size = hidden_size
self.encoder1 = nn.Linear(features_in, 2 * hidden_size)
self.encoder2 = nn.Linear(2 * hidden_size, hidden_size)
self.decoder1 = nn.Linear(hidden_size, 2 * hidden_size)
self.decoder2 = nn.Linear(2 * hidden_size, features_in)
self.meta_data_learner = nn.Linear(hidden_size, meta_features)
self.dropout = nn.Dropout(p=0.1)
def forward(self, x: tensor):
x = self.dropout(x)
x = torch.relu(self.dropout(self.encoder1(x)))
x = torch.sigmoid(self.dropout(self.encoder2(x)))
meta_pred = self.meta_data_learner(x)
x = torch.sigmoid(self.dropout(self.decoder1(x)))
x = self.decoder2(x)
if self.training:
return x, meta_pred
return x
###Output
_____no_output_____
###Markdown
For the optimizer we use Adam for now, but we can also compare with traditional SGD (stochastic gradient descent). The weight_decay parameter can be specified to add regularization to the network.

Using a mask, outputs that we do not need become 0 (perfect loss), so the optimizer will not adjust the corresponding weights (in the W matrix). We found that the fastest way to create the mask is to use torch.where when the data is already on the GPU.

For evaluation, the data is just run through the network in eval mode: `show_output` returns raw (unmasked) predictions, while `top_k` suppresses predictions for items the user already owns before taking the top-k.
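A minimal, self-contained sketch of masked training with Adam's weight_decay (illustrative only - the `AutoRecModel` below uses an explicit Huber penalty on the decoder weights instead):
###Code
# Sketch: masked reconstruction loss + Adam weight decay (illustrative;
# tiny stand-in model, not the AutoRecModel defined below).
import torch
from torch import nn

net = nn.Linear(8, 8)
opt = torch.optim.Adam(net.parameters(), lr=1e-3, weight_decay=1e-5)  # L2 penalty

x = torch.rand(4, 8)
mask = (x > 0.5).float()   # only "observed" entries contribute to the loss
y = net(x)
loss = nn.functional.mse_loss(mask * y, mask * x)  # masked-out outputs give zero gradient
loss.backward()
opt.step()
opt.zero_grad()
###Output
_____no_output_____
###Markdown
The full model wrapper: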
###Code
class AutoRecModel:
def __init__(self,
hidden_size,
learning_rate,
regularization,
meta_alpha,
huber_delta):
self.auto_encoder = AutoEncoder(N_GAMES, hidden_size, user_rating_set.n_meta).to(cuda_device)
self.optimizer = optim.Adam(self.auto_encoder.parameters(), lr=learning_rate)
self.regularization = regularization / 2 # lambda
self.meta_alpha = meta_alpha
self.huber_loss_function = torch.nn.HuberLoss(delta=huber_delta)
self.huber_loss = lambda x: self.huber_loss_function(torch.zeros_like(x), x)
self.mae_loss = torch.nn.L1Loss()
self.mse_loss = torch.nn.MSELoss()
def add_rating_bias(self, x, mask):
return mask.float() * 0.7 + x * 0.3
def train(self, dataset, epochs, show_progress=True):
self.auto_encoder.train()
data_loader = DataLoader(dataset, batch_size=2**8, shuffle=True)
losses = np.zeros((epochs, 4))
for e in (trange(epochs, desc='Epoch') if show_progress else range(epochs)):
for x, mask, meta_data in data_loader:
x = self.add_rating_bias(x, mask)
y, y_meta = self.auto_encoder(x)
loss_regularization = self.regularization * self.huber_loss(self.auto_encoder.decoder2.weight.data)
loss_rating = self.mae_loss(mask*x, mask*y)
loss_meta_data = self.meta_alpha * self.mse_loss(meta_data, y_meta)
loss = loss_rating + loss_regularization + loss_meta_data
for i, l0 in enumerate([loss_rating, loss_regularization, loss_meta_data, loss]):
losses[e, i] += l0.item() # add to the total loss of the dataset (this does not imply accuracies)
loss.backward() # calculate the gradients through the network
self.optimizer.step() # do backpropagation and adjust the network according to the optimizer settings
self.optimizer.zero_grad() # reset gradients
return losses
def show_output(self, dataset, n_user_samples: int, n_game_samples: int):
self.auto_encoder.eval() # eval modus
for x, mask, meta_data in DataLoader(dataset, batch_size=n_user_samples, shuffle=True):
x = self.add_rating_bias(x, mask)
y = self.auto_encoder(x)
idx_random_games = np.random.choice(N_GAMES, n_game_samples, replace=False)
return y[:, idx_random_games].detach().cpu()
def top_k(self, dataset, k: int, use_popular: float=1.0):
self.auto_encoder.eval() # eval mode
pred_index = []
pred_values = []
for x, mask, meta_data in DataLoader(dataset, batch_size=600, shuffle=False):
x = self.add_rating_bias(x, mask)
y = self.auto_encoder(x) - ((mask+0) << 20) # diminish predictions of owned items
included_games = int(mask.shape[1] * use_popular)
y[:, included_games:] -= (2**10) # diminish predictions of ignored items
#y = torch.softmax(y, dim=1)
top_k_result = torch.topk(y, k=k, dim=1)
pred_index.append(top_k_result.indices.detach().cpu())
pred_values.append(top_k_result.values.detach().cpu())
return torch.cat(pred_index), torch.cat(pred_values)
def ndcg(self, dataset, test, k_values):
self.auto_encoder.eval() # eval mode
ndcg_sums = np.zeros(len(k_values))
start = 0
batch_size = 1000
true_relevance = np.zeros((len(dataset), N_GAMES), dtype=np.float32)
for i, relevant_ids in enumerate(test):
true_relevance[i, list(relevant_ids)] = 1
for x, mask, meta_data in DataLoader(dataset, batch_size=batch_size, shuffle=False):
x = self.add_rating_bias(x, mask)
y = self.auto_encoder(x) - ((mask+0) << 20) # diminish predictions of owned items
y = y.detach().cpu().numpy()
for i, k in enumerate(k_values):
true_y = true_relevance[batch_size*start:batch_size*(start+1)]
ndcg_sums[i] += ndcg_score(true_y, y, k=k) * x.shape[0]
start += 1
return ndcg_sums / len(dataset)
def cuda(self):
self.auto_encoder = self.auto_encoder.to(cuda_device)
return self
def cpu(self):
self.auto_encoder = self.auto_encoder.to(cpu_device)
return self
###Output
_____no_output_____
###Markdown
Train first model
###Code
torch.cuda.empty_cache()
model_0 = AutoRecModel(
hidden_size=600,
learning_rate=1e-06,
regularization=1e-06,
meta_alpha=0.1,
huber_delta=0.5
)
t0 = time()
EPOCHS = 10
losses = model_0.train(user_rating_set.settings(min_user_items=2), epochs=EPOCHS)
TIME_PER_EPOCH = (time()-t0) / EPOCHS
def plot_losses(arr_loss):
print("Minimum loss:", arr_loss[1:,3].min())
plt.title("Train")
for i, name in enumerate(['rating', 'regularization', 'meta_data', 'final loss']):
d = arr_loss[1:, i]
#d -= d.min()
#d /= d.max()
plt.plot(d, label=name)
plt.legend(loc='upper right')
plt.show()
plot_losses(losses)
print(model_0.auto_encoder)
###Output
AutoEncoder(
(encoder1): Linear(in_features=7276, out_features=1200, bias=True)
(encoder2): Linear(in_features=1200, out_features=600, bias=True)
(decoder1): Linear(in_features=600, out_features=1200, bias=True)
(decoder2): Linear(in_features=1200, out_features=7276, bias=True)
(meta_data_learner): Linear(in_features=600, out_features=408, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
###Markdown
Evaluation
###Code
def visualize_ratings(pred, transpose=False):
if transpose:
plt.plot(pred.t())
plt.xlabel("Games")
plt.ylabel("Ratings")
else:
plt.plot(pred)
plt.xlabel("Users")
plt.ylabel("Ratings")
plt.show()
test_pred = model_0.show_output(user_rating_set, n_user_samples=10, n_game_samples=10)
visualize_ratings(test_pred)
visualize_ratings(test_pred, transpose=True)
# predict 10 games for each user (that the user does not own yet)
# predictions is of size (users, k)
pred_index, pred_value = model_0.top_k(user_rating_set.settings(min_user_items=1), k=8)
print(pred_index[1000:1005])
print(pred_value[1000:1005])
def get_recall_hitrate(model, dataset, test, k_values):
recall_at = np.zeros(len(k_values), dtype=np.float32)
hit_at = np.zeros(len(k_values), dtype=np.float32)
pred_index, pred_value = model.top_k(dataset, k=max(k_values))
for user in range(pred_index.shape[0]):
actual = test[user] # set of ids
for i, k in enumerate(k_values):
intersection = actual.intersection(pred_index[user,:k].tolist())
recall_at[i] += len(intersection) / min(k, len(actual))
hit_at[i] += min(1, len(intersection))
recall_at /= pred_index.shape[0]
hit_at /= pred_index.shape[0]
return dict(zip(k_values, recall_at)), dict(zip(k_values, hit_at))
def get_ndcg(model, dataset, test, k_values):
return dict(zip(k_values, model.ndcg(dataset, test, k_values)))
def print_metrics(recall_at, hit_at, ndcg_at, k_values):
for k, value in recall_at.items():
print(f'nRecall@{k}: {value}')
for k, value in hit_at.items():
print(f'Hitrate@{k}: {value}')
if ndcg_at:
for k, value in ndcg_at.items():
print(f'nDCG@{k}: {value}')
at_k = [5, 10, 20]
recall, hr = get_recall_hitrate(model_0, user_rating_set.settings(min_user_items=1), user_rating_set.test_data(), at_k)
print_metrics(recall, hr, None, at_k)
###Output
nRecall@5: 6.366489105857909e-05
nRecall@10: 0.013262330554425716
nRecall@20: 0.024682816118001938
Hitrate@5: 0.0003137110034003854
Hitrate@10: 0.048920463770627975
Hitrate@20: 0.10112567245960236
###Markdown
Hyper parameter tuning
###Code
#MAX_HOURS = 2/60
MAX_EVALS = 40 #int((MAX_HOURS * 3600)/(20 * TIME_PER_EPOCH/10))
best_model = None
best_losses = None
best_recall = 0
def objective_function(args):
hidden_size, learning_rate, regularization, meta_alpha, huber_delta, epochs = args
global best_model
global best_losses
global best_recall
torch.cuda.empty_cache()
model = AutoRecModel(
hidden_size=hidden_size,
learning_rate=learning_rate,
regularization=regularization,
meta_alpha=meta_alpha,
huber_delta=huber_delta)
losses = model.train(user_rating_set.settings(min_user_items=2), epochs=epochs, show_progress=False)
user_rating_set.settings(min_user_items=1)
recall, hitrate = get_recall_hitrate(model, user_rating_set, user_rating_set.val_data(), k_values=[20])
recall, hitrate = float(recall[20]), float(hitrate[20])
if recall > best_recall:
best_recall = recall
best_model = model
best_losses = losses
return -recall
space = [
hp.choice('hidden_size', [700]),
hp.loguniform('learning_rate', np.log(1e-9), np.log(1e-1)),
hp.loguniform('regularization', np.log(1e-9), np.log(1e5)),
hp.loguniform('meta_alpha', np.log(1e-7), np.log(1e3)),
hp.loguniform('huber_delta', np.log(1e-3), np.log(1e2)),
hp.choice('epochs', [12])
]
trials = Trials()
best = fmin(objective_function,
space=space,
algo=tpe.suggest,
trials=trials,
max_evals=MAX_EVALS)
best_hyper_param = dict(zip(['hidden_size', 'learning_rate', 'regularization', 'meta_alpha', 'huber_delta', 'epochs'], space_eval(space, best)))
best_hyper_param
plot_losses(best_losses)
def plot_losses2(arr_loss):
print("Minimum loss:", arr_loss[1:,3].min())
plt.title("Train")
for i, name in enumerate(['rating', 'regularization', 'meta_data', 'final loss']):
d = arr_loss[1:, i]
d = d - d.min()
d /= d.max()
plt.plot(d, label=name)
plt.legend(loc='upper right')
plt.show()
plot_losses2(best_losses)
###Output
Minimum loss: 16881.983932495117
###Markdown
Show example output of model
###Code
test_pred = best_model.show_output(user_rating_set.settings(min_user_items=2), n_user_samples=12, n_game_samples=12)
visualize_ratings(test_pred)
visualize_ratings(test_pred, transpose=True)
at_k = [5, 10, 20]
recall, hr = get_recall_hitrate(best_model, user_rating_set.settings(min_user_items=1), user_rating_set.test_data(), at_k)
ndcg = get_ndcg(best_model, user_rating_set.settings(min_user_items=1), user_rating_set.test_data(), at_k)
print_metrics(recall, hr, ndcg, at_k)
torch.save(best_model.auto_encoder.state_dict(), 'best_model.pt')
quality_users = [30000, 30001, 42069]
quality_ratings = user_rating_set.train_set_ratings[0, quality_users].to(cuda_device)
quality_mask = user_rating_set.train_set_mask[0, quality_users].to(cuda_device)
quality_ratings = best_model.add_rating_bias(quality_ratings, quality_mask)
best_model.auto_encoder.eval()
y = best_model.auto_encoder(quality_ratings) - ((quality_mask+0) << 20)
top_k_result = torch.topk(y, k=10, dim=1).indices.detach().cpu()
print(top_k_result)
for u, u_pred in enumerate(top_k_result):
results = []
print(f'user {quality_users[u]}:')
for idx in u_pred:
print('\t',games.at[idx.item(), 'app_name'])
###Output
tensor([[ 3, 14, 5, 17, 20, 9, 4, 35, 39, 15],
[ 3, 14, 5, 17, 20, 9, 4, 35, 39, 15],
[ 3, 14, 5, 20, 9, 4, 35, 39, 15, 8]])
user 30000:
Arma 3
Mount & Blade: Warband
Dishonored
The Forest
Prison Architect
Sanctum 2
Natural Selection 2
Borderlands: The Pre-Sequel
The Binding of Isaac: Rebirth
The Stanley Parable
user 30001:
Arma 3
Mount & Blade: Warband
Dishonored
The Forest
Prison Architect
Sanctum 2
Natural Selection 2
Borderlands: The Pre-Sequel
The Binding of Isaac: Rebirth
The Stanley Parable
user 42069:
Arma 3
Mount & Blade: Warband
Dishonored
Prison Architect
Sanctum 2
Natural Selection 2
Borderlands: The Pre-Sequel
The Binding of Isaac: Rebirth
The Stanley Parable
Poker Night at the Inventory
|
notebook/sdk_test1.ipynb | ###Markdown
Buckets
###Code
## Let's use Amazon S3
s3_client = boto3.resource('s3')
## that's just one of the services
## for all services, please see
## https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/index.html#available-services
# Print out bucket names
#for bucket in s3.buckets.all():
# print(bucket.name)
bucket_name = "my-test-bucket"
b = s3_client.Bucket(bucket_name)
for obj in b.objects.all():
print(obj)
iot_client = boto3.client('iot')
## as described here: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/iot.html#id293
## let's see what IoT devices we have
iot_client.list_things()
# iot_data_client = boto3.client('iot-data')
# iot_data_client.list_retained_messages() # needs some extra access rights
# "Lambda function" is something that you can call from wherever (backend, API gateway, IoT device) and it get's executed in the cloud
# define lambda function in
# https://us-west-2.console.aws.amazon.com/lambda/home?region=us-west-2#/functions
# give it the name "myFirstLambda"
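# A minimal handler you could paste there (assumption: Python runtime), shown as a comment:
# def lambda_handler(event, context):
#     return {'statusCode': 200, 'body': 'hello from myFirstLambda'}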
dynamodb = boto3.resource('dynamodb')
print(list(dynamodb.tables.all()))
table = dynamodb.Table('my-test-db')
# this table has the primary key 'prim' : str and sort key 'sortie' : str
response = table.put_item(
Item = {
'prim': 'eka',
'sortie': 'A',
'Name': 'My Name',
'Email': 'My Email'
}
)
print(response)
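# Reading the item back (sketch; the Key must match the prim/sortie schema above)
read_response = table.get_item(Key={'prim': 'eka', 'sortie': 'A'})
print(read_response.get('Item'))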
###Output
{'ResponseMetadata': {'RequestId': '47BQSFGT5U0B3FUT8PGIM138EFVV4KQNSO5AEMVJF66Q9ASUAAJG', 'HTTPStatusCode': 200, 'HTTPHeaders': {'server': 'Server', 'date': 'Fri, 18 Feb 2022 07:29:14 GMT', 'content-type': 'application/x-amz-json-1.0', 'content-length': '2', 'connection': 'keep-alive', 'x-amzn-requestid': '47BQSFGT5U0B3FUT8PGIM138EFVV4KQNSO5AEMVJF66Q9ASUAAJG', 'x-amz-crc32': '2745614147'}, 'RetryAttempts': 0}}
###Markdown
Lambda
###Code
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lambda.html
lambda_client = boto3.client('lambda')
response = lambda_client.invoke(
FunctionName='myFirstLambda',
InvocationType='RequestResponse'
)
print(response["Payload"].read())
response = iot_client.describe_endpoint(
endpointType='iot:Data-ATS'
)
print(response["endpointAddress"])
end_point_adr = response["endpointAddress"]
# create a file into the bucket
b.put_object(
Body="kokkelis".encode("utf-8"),
# Bucket=bucket_name,
Key="somefile.txt"
)
###Output
_____no_output_____ |
Introduction-to-Computer-Vision-with-PyTorch/07.Solving_vision_problems_with_MobileNet.ipynb | ###Markdown
Lightweight networks and MobileNet. We have seen that complex networks require significant computational resources, such as GPUs, for training and for fast inference. However, it turns out that in most cases a model with a significantly smaller number of parameters can still be trained to perform reasonably well. In other words, an increase in model complexity typically yields a less-than-proportional gain in performance. We observed this at the beginning of the module when training MNIST digit classification: the accuracy of a simple dense model was not much worse than that of a powerful CNN, and increasing the number of CNN layers and/or classifier neurons gained at most a few percentage points of accuracy. This leads to the idea that we can experiment with lightweight network architectures in order to train faster models, which is especially important if we want to run our models on mobile devices. This module uses the 'Cats and Dogs' dataset downloaded in the previous unit. First, let's make sure the dataset is available.
###Code
import torch
import torch.nn as nn
import torchvision
import matplotlib.pyplot as plt
from torchinfo import summary
import os
from pytorchcv import train, display_dataset, train_long, load_cats_dogs_dataset, validate, common_transform
if not os.path.exists('data/kagglecatsanddogs_3367a.zip'):
!wget -P data -q https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip
dataset, train_loader, test_loader = load_cats_dogs_dataset()
###Output
/Users/shogo/miniforge3/envs/pytorch/lib/python3.8/site-packages/PIL/TiffImagePlugin.py:771: UserWarning: Possibly corrupt EXIF data. Expecting to read 32 bytes but only got 0. Skipping tag 270
warnings.warn(
/Users/shogo/miniforge3/envs/pytorch/lib/python3.8/site-packages/PIL/TiffImagePlugin.py:771: UserWarning: Possibly corrupt EXIF data. Expecting to read 5 bytes but only got 0. Skipping tag 271
warnings.warn(
/Users/shogo/miniforge3/envs/pytorch/lib/python3.8/site-packages/PIL/TiffImagePlugin.py:771: UserWarning: Possibly corrupt EXIF data. Expecting to read 8 bytes but only got 0. Skipping tag 272
warnings.warn(
/Users/shogo/miniforge3/envs/pytorch/lib/python3.8/site-packages/PIL/TiffImagePlugin.py:771: UserWarning: Possibly corrupt EXIF data. Expecting to read 8 bytes but only got 0. Skipping tag 282
warnings.warn(
/Users/shogo/miniforge3/envs/pytorch/lib/python3.8/site-packages/PIL/TiffImagePlugin.py:771: UserWarning: Possibly corrupt EXIF data. Expecting to read 8 bytes but only got 0. Skipping tag 283
warnings.warn(
/Users/shogo/miniforge3/envs/pytorch/lib/python3.8/site-packages/PIL/TiffImagePlugin.py:771: UserWarning: Possibly corrupt EXIF data. Expecting to read 20 bytes but only got 0. Skipping tag 306
warnings.warn(
/Users/shogo/miniforge3/envs/pytorch/lib/python3.8/site-packages/PIL/TiffImagePlugin.py:771: UserWarning: Possibly corrupt EXIF data. Expecting to read 48 bytes but only got 0. Skipping tag 532
warnings.warn(
/Users/shogo/miniforge3/envs/pytorch/lib/python3.8/site-packages/PIL/TiffImagePlugin.py:793: UserWarning: Corrupt EXIF data. Expecting to read 2 bytes but only got 0.
warnings.warn(str(msg))
###Markdown
MobileNet. In the previous unit, we looked at the **ResNet** architecture for image classification. A more lightweight analog of ResNet is **MobileNet**, which uses so-called *inverted residual blocks*. Let's load a pre-trained MobileNet and see how it works.
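As a rough illustration (a simplified sketch, not torchvision's actual implementation), an inverted residual block expands the channels with a 1x1 convolution, applies a cheap depthwise 3x3 convolution, and projects back down with another 1x1 convolution:
###Code
# Simplified sketch of an inverted residual block (expand -> depthwise -> project).
# The real MobileNetV2 block also adds a skip connection when stride == 1 and c_in == c_out.
def inverted_residual_sketch(c_in, c_out, expand=6, stride=1):
    c_mid = c_in * expand
    return nn.Sequential(
        nn.Conv2d(c_in, c_mid, 1, bias=False), nn.BatchNorm2d(c_mid), nn.ReLU6(inplace=True),
        nn.Conv2d(c_mid, c_mid, 3, stride=stride, padding=1, groups=c_mid, bias=False),
        nn.BatchNorm2d(c_mid), nn.ReLU6(inplace=True),
        nn.Conv2d(c_mid, c_out, 1, bias=False), nn.BatchNorm2d(c_out))
###Output
_____no_output_____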
###Code
model = torch.hub.load('pytorch/vision:v0.6.0', 'mobilenet_v2', pretrained=True)
model.eval()
print(model)
###Output
MobileNetV2(
(features): Sequential(
(0): ConvBNActivation(
(0): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): Conv2d(32, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(2): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(16, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(96, 96, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=96, bias=False)
(1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(96, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(3): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(24, 144, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(144, 144, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=144, bias=False)
(1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(144, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(4): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(24, 144, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(144, 144, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=144, bias=False)
(1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(144, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(5): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=192, bias=False)
(1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(6): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=192, bias=False)
(1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(7): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(192, 192, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=192, bias=False)
(1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(192, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(8): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False)
(1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(9): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False)
(1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(10): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False)
(1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(11): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False)
(1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(384, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(12): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=576, bias=False)
(1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(13): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=576, bias=False)
(1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(14): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(576, 576, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=576, bias=False)
(1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(576, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(15): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=960, bias=False)
(1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(16): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=960, bias=False)
(1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(17): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=960, bias=False)
(1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(960, 320, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(18): ConvBNActivation(
(0): Conv2d(320, 1280, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(1280, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
)
(classifier): Sequential(
(0): Dropout(p=0.2, inplace=False)
(1): Linear(in_features=1280, out_features=1000, bias=True)
)
)
###Markdown
Let's apply this model to our dataset and make sure it works.
###Code
sample_image = dataset[0][0].unsqueeze(0)
res = model(sample_image)
print(res[0].argmax())
###Output
tensor(281)
###Markdown
**Exercise:** Compare the number of parameters of MobileNet and a full-scale ResNet model (a parameter-counting sketch is appended to the next code cell). Transfer learning with MobileNet. Now let's perform the same transfer learning as in the previous unit, this time using MobileNet. First, we freeze all parameters of the model.
###Code
for x in model.parameters():
x.requires_grad = False
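# (Exercise sketch, an assumption rather than the lesson's own solution):
# compare trainable parameter counts of MobileNetV2 and a full-scale ResNet-50.
mobilenet_params = sum(p.numel() for p in torchvision.models.mobilenet_v2().parameters())
resnet_params = sum(p.numel() for p in torchvision.models.resnet50().parameters())
print(f'MobileNetV2: {mobilenet_params:,} parameters vs ResNet-50: {resnet_params:,}')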
###Output
_____no_output_____
###Markdown
Then we replace the final classifier. We also move the model to the default training device (GPU or CPU).
###Code
device = 'cuda' if torch.cuda.is_available() else 'cpu'
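# replace the 1000-class ImageNet head with a 2-class (cats vs. dogs) head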
model.classifier = nn.Linear(1280, 2)
model = model.to(device)
summary(model, input_size=(1, 3, 244, 244))
###Output
_____no_output_____
###Markdown
Now, let's do the actual training.
###Code
train_long(model, train_loader, test_loader, loss_fn=torch.nn.CrossEntropyLoss(), epochs=1, print_freq=90)
###Output
Epoch 0, minibatch 0: train acc = 0.40625, train loss = 0.02217177301645279
|
nbs/examples/Examples_3.1_mining.unsupervised.traceability.eda.ipynb | ###Markdown
Exploratory Data Analysis for Software Traceability [EDA] > Adapted from the CodeSearchNet Challenge
###Code
import json
import pandas as pd
from pathlib import Path
pd.set_option('max_colwidth',300)
from pprint import pprint
import re
import matplotlib.pyplot as plt  # used by the entropy and metric plots below
import sentencepiece as sp       # used to load the trained BPE model below
import lizard                    # used for the method-level McCabe metrics below
#hide
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
!pip install pyemd
###Output
_____no_output_____
###Markdown
EDA for Word2Vec
###Code
def default_params():
return {
'system':'CSB-CICDPipelineEdition-master',
'saving_path': 'test_data/',
'language': 'english'
}
params = default_params()
word2vec_model = None  # left unassigned in the original notebook; a trained Word2Vec model would be loaded here
###Output
_____no_output_____
###Markdown
Preview dataset. Download a specific Java dataset.
###Code
!wget https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/java.zip
!unzip -q java.zip -d test_data/
!gzip -d test_data/java/final/jsonl/test/java_test_0.jsonl.gz
with open('test_data/java/final/jsonl/test/java_test_0.jsonl', 'r') as f:
sample_file = f.readlines()
sample_file[0]
print(type(sample_file))
print(len(sample_file))
pprint(json.loads(sample_file[0]))
###Output
{'code': 'protected final void fastPathOrderedEmit(U value, boolean '
'delayError, Disposable disposable) {\n'
' final Observer<? super V> observer = downstream;\n'
' final SimplePlainQueue<U> q = queue;\n'
'\n'
' if (wip.get() == 0 && wip.compareAndSet(0, 1)) {\n'
' if (q.isEmpty()) {\n'
' accept(observer, value);\n'
' if (leave(-1) == 0) {\n'
' return;\n'
' }\n'
' } else {\n'
' q.offer(value);\n'
' }\n'
' } else {\n'
' q.offer(value);\n'
' if (!enter()) {\n'
' return;\n'
' }\n'
' }\n'
' QueueDrainHelper.drainLoop(q, observer, delayError, '
'disposable, this);\n'
' }',
'code_tokens': ['protected',
'final',
'void',
'fastPathOrderedEmit',
'(',
'U',
'value',
',',
'boolean',
'delayError',
',',
'Disposable',
'disposable',
')',
'{',
'final',
'Observer',
'<',
'?',
'super',
'V',
'>',
'observer',
'=',
'downstream',
';',
'final',
'SimplePlainQueue',
'<',
'U',
'>',
'q',
'=',
'queue',
';',
'if',
'(',
'wip',
'.',
'get',
'(',
')',
'==',
'0',
'&&',
'wip',
'.',
'compareAndSet',
'(',
'0',
',',
'1',
')',
')',
'{',
'if',
'(',
'q',
'.',
'isEmpty',
'(',
')',
')',
'{',
'accept',
'(',
'observer',
',',
'value',
')',
';',
'if',
'(',
'leave',
'(',
'-',
'1',
')',
'==',
'0',
')',
'{',
'return',
';',
'}',
'}',
'else',
'{',
'q',
'.',
'offer',
'(',
'value',
')',
';',
'}',
'}',
'else',
'{',
'q',
'.',
'offer',
'(',
'value',
')',
';',
'if',
'(',
'!',
'enter',
'(',
')',
')',
'{',
'return',
';',
'}',
'}',
'QueueDrainHelper',
'.',
'drainLoop',
'(',
'q',
',',
'observer',
',',
'delayError',
',',
'disposable',
',',
'this',
')',
';',
'}'],
'docstring': 'Makes sure the fast-path emits in order.\n'
'@param value the value to emit or queue up\n'
'@param delayError if true, errors are delayed until the source '
'has terminated\n'
'@param disposable the resource to dispose if the drain '
'terminates',
'docstring_tokens': ['Makes',
'sure',
'the',
'fast',
'-',
'path',
'emits',
'in',
'order',
'.'],
'func_name': 'QueueDrainObserver.fastPathOrderedEmit',
'language': 'java',
'original_string': 'protected final void fastPathOrderedEmit(U value, boolean '
'delayError, Disposable disposable) {\n'
' final Observer<? super V> observer = downstream;\n'
' final SimplePlainQueue<U> q = queue;\n'
'\n'
' if (wip.get() == 0 && wip.compareAndSet(0, 1)) {\n'
' if (q.isEmpty()) {\n'
' accept(observer, value);\n'
' if (leave(-1) == 0) {\n'
' return;\n'
' }\n'
' } else {\n'
' q.offer(value);\n'
' }\n'
' } else {\n'
' q.offer(value);\n'
' if (!enter()) {\n'
' return;\n'
' }\n'
' }\n'
' QueueDrainHelper.drainLoop(q, observer, '
'delayError, disposable, this);\n'
' }',
'partition': 'test',
'path': 'src/main/java/io/reactivex/internal/observers/QueueDrainObserver.java',
'repo': 'ReactiveX/RxJava',
'sha': 'ac84182aa2bd866b53e01c8e3fe99683b882c60e',
'url': 'https://github.com/ReactiveX/RxJava/blob/ac84182aa2bd866b53e01c8e3fe99683b882c60e/src/main/java/io/reactivex/internal/observers/QueueDrainObserver.java#L88-L108'}
###Markdown
Exploring the full DataSet
###Code
!ls test_data/java/
java_files = sorted(Path('test_data/java/').glob('**/*.gz'))
print('Total of related java files: {}'.format(len(java_files)))
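# `jsonl_list_to_dataframe` is not defined in this notebook; a minimal version
# (assumption: it mirrors the CodeSearchNet utility of the same name) is:
def jsonl_list_to_dataframe(file_list):
    """Load a list of gzipped .jsonl files into a single pandas DataFrame."""
    return pd.concat([pd.read_json(f, orient='records', compression='gzip', lines=True)
                      for f in file_list], sort=False)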
java_df = jsonl_list_to_dataframe(java_files)
java_df.head()
###Output
_____no_output_____
###Markdown
Summary stats.
###Code
java_df.partition.value_counts()
java_df.groupby(['partition', 'language'])['code_tokens'].count()
java_df['code_len'] = java_df.code_tokens.apply(lambda x: len(x))
java_df['query_len'] = java_df.docstring_tokens.apply(lambda x: len(x))
###Output
_____no_output_____
###Markdown
Tokens Length Percentile
###Code
code_len_summary = java_df.groupby('language')['code_len'].quantile([.5, .7, .8, .9, .95])
display(pd.DataFrame(code_len_summary))
###Output
_____no_output_____
###Markdown
Query length percentile by language
###Code
query_len_summary = java_df.groupby('language')['query_len'].quantile([.5, .7, .8, .9, .95])
display(pd.DataFrame(query_len_summary))
java_df = java_df[java_df['partition'] == 'train']
java_df.shape
###Output
_____no_output_____
###Markdown
Data transformation
###Code
java_df.columns
java_df.shape
src_code_columns = ['code', 'code_tokens', 'code_len']
java_src_code_df = java_df[src_code_columns]
java_src_code_df.shape
###Output
_____no_output_____
###Markdown
Visualizing an example
###Code
java_src_code_df[:10]['code']
java_src_code_df.shape
data_type_new_column = ['src' for x in range(java_src_code_df.shape[0])]
len(data_type_new_column)
java_src_code_df.loc[:,'data_type'] = data_type_new_column
java_src_code_df.head()
waka = java_src_code_df.sample(100)
java_src_code_df.shape
waka = get_valid_code_df(java_src_code_df, 'code')
waka.shape
type(java_code_df['code'][9071])
java_code_df['code'][9071]
java_path = Path('test_data/java/')
sp_model_from_df(java_src_code_df, output=java_path, model_name='_sp_bpe_modal', cols=['code'])
sp_processor = sp.SentencePieceProcessor()
sp_processor.Load(f"{java_path/'_sp_bpe_modal'}.model")
java_src_code_df.shape
n_sample_4_sp = int(java_src_code_df.shape[0]*0.01)
print(n_sample_4_sp)
java_code_df = java_src_code_df.sample(n=n_sample_4_sp)
java_code_df.shape
# Use the model to compute each file's entropy
java_doc_entropies = get_doc_entropies_from_df(java_code_df, 'code', java_path/'_sp_bpe_modal', ['src'])
# Use the model to compute each file's entropy
java_corpus_entropies = get_corpus_entropies_from_df(java_code_df, 'code', java_path/'_sp_bpe_modal', ['src'])
java_corpus_entropies
# Use the model to compute each file's entropy
java_system_entropy = get_system_entropy_from_df(java_code_df, 'code', java_path/'_sp_bpe_modal')
java_system_entropy
flatten = lambda l: [item for sublist in l for item in sublist]
report_stats(flatten(java_doc_entropies))
java_doc_entropies
# Create a histogram of the entropy distribution
plt.hist(java_doc_entropies,bins = 20, color="blue", alpha=0.5, edgecolor="black", linewidth=1.0)
plt.title('Entropy histogram')
plt.ylabel("Num records")
plt.xlabel("Entropy score")
plt.show()
fig1, ax1 = plt.subplots()
ax1.set_title('Entropy box plot')
ax1.boxplot(java_doc_entropies, vert=False)
java_code_df.head(1)
test_src_code = java_code_df['code'].values[0]
print(test_src_code)
###Output
public void setCurrentObject(Object obj) {
if (((obj == null) && (this.nullString != null))
|| this.requiredType.isInstance(obj)
|| (obj instanceof String)
|| (obj instanceof File)
|| ((obj instanceof Task) && this.requiredType.isAssignableFrom(((Task) obj).getTaskResultType()))) {
this.currentValue = obj;
} else {
throw new IllegalArgumentException("Object not of required type.");
}
}
###Markdown
Sample of available metrics (for method level)
###Code
metrics = lizard.analyze_file.analyze_source_code('test.java', test_src_code)
func = metrics.function_list[0]
print('cyclomatic_complexity: {}'.format(func.cyclomatic_complexity))
print('nloc (length): {}'.format(func.length))
print('nloc: {}'.format(func.nloc))
print('parameter_count: {}'.format(func.parameter_count))
print('name: {}'.format(func.name))
print('token_count {}'.format(func.token_count))
print('long_name: {}'.format(func.long_name))
java_code_df.shape
code_df = add_method_mccabe_metrics_to_code_df(java_code_df, 'code')
code_df.shape
code_df.head()
code_df.describe()
display_numeric_col_hist(code_df['cyclomatic_complexity'], 'Cyclomatic complexity')
fig1, ax1 = plt.subplots()
ax1.set_title('Cyclomatic complexity box plot')
ax1.boxplot(code_df['cyclomatic_complexity'], vert=False)
display_numeric_col_hist(code_df['nloc'], 'Nloc')
fig1, ax1 = plt.subplots()
ax1.set_title('Nloc box plot')
ax1.boxplot(code_df['nloc'], vert=False)
display_numeric_col_hist(code_df['parameter_count'], 'Parameter count')
fig1, ax1 = plt.subplots()
ax1.set_title('Param. count box plot')
ax1.boxplot(code_df['parameter_count'], vert=False)
display_numeric_col_hist(code_df['token_count'], 'Token count')
fig1, ax1 = plt.subplots()
ax1.set_title('Token count box plot')
ax1.boxplot(code_df['token_count'], vert=False)
fig1, ax1 = plt.subplots()
ax1.set_title('Code len box plot')
ax1.boxplot(code_df['code_len'], vert=False)
code_df.shape
code_df[['cyclomatic_complexity', 'nloc', 'token_count', 'parameter_count']].corr()
columns = ['cyclomatic_complexity', 'nloc', 'token_count', 'parameter_count']
corr = code_df[columns].corr()
corr = pd.melt(corr.reset_index(), id_vars='index') # Unpivot the dataframe, so we can get pair of arrays for x and y
corr.columns = ['x', 'y', 'value']
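# `heatmap` is not defined in this notebook; a minimal stand-in (assumption) that
# draws correlation magnitude as scaled square markers:
def heatmap(x, y, size):
    fig, ax = plt.subplots(figsize=(8, 8))
    labels = sorted(set(x))
    to_num = {l: i for i, l in enumerate(labels)}
    ax.scatter(x.map(to_num), y.map(to_num), s=size * 500, marker='s')
    ax.set_xticks(range(len(labels))); ax.set_xticklabels(labels, rotation=45, ha='right')
    ax.set_yticks(range(len(labels))); ax.set_yticklabels(labels)
    plt.show()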
heatmap(
x=corr['x'],
y=corr['y'],
size=corr['value'].abs()
)
###Output
_____no_output_____ |
fifa2019_analysis.ipynb | ###Markdown
Do the physical or technical attributes of a player contribute most to overall performance? If yes, which ones? Imports
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
# Read data
df = pd.read_csv('data.csv')
df.head(7)
###Output
_____no_output_____
###Markdown
Data Wrangling
###Code
# Check the features
df.columns
# Check the data types
df.info()
# Drop some non-essential columns
df = df.drop(columns = ['Unnamed: 0', 'ID', 'Photo',
'Flag', 'Club Logo', 'Real Face',
'Jersey Number', 'Loaned From', 'Contract Valid Until', 'Release Clause'], axis=1)
df.head(7)
# Choose features
s = ['Overall', 'Crossing',
'Finishing', 'HeadingAccuracy', 'ShortPassing', 'Volleys', 'Dribbling',
'Curve', 'FKAccuracy', 'LongPassing', 'BallControl', 'Acceleration',
'SprintSpeed', 'Agility', 'Reactions', 'Balance', 'ShotPower',
'Jumping', 'Stamina', 'Strength', 'LongShots', 'Aggression',
'Interceptions', 'Positioning', 'Vision', 'Penalties', 'Composure',
'Marking', 'StandingTackle', 'SlidingTackle', 'GKDiving', 'GKHandling',
'GKKicking', 'GKPositioning', 'GKReflexes']
features = df[s]
# Take a look how null values are distrubuted
plt.figure(figsize=(16, 12))
sns.heatmap(features.isnull(), cbar=False, cmap='viridis', yticklabels=False)
plt.show()
# Drop missing values
features.dropna(inplace=True)
plt.figure(figsize=(12,8))
sns.heatmap(features.isnull(), cbar=False, cmap='viridis', yticklabels=False)
plt.show()
# Check the missing values if left
features.isnull().any()
# Statistics of features
features.describe()
###Output
_____no_output_____
###Markdown
So, I now have a clean dataframe with all the important features. It's time to build a model.
###Code
# From the correlation matrix, take just the overall-performance column and sort it
features.corr()['Overall'].sort_values(ascending=False)
# As we can see in the graph below, there are plenty of properties that correlate.
# In order to avoid collinearity we have to exclude one feature from each correlated pair
# (except when the high (>.8) correlation is with the target feature, in this case 'Overall')
plt.figure(figsize=(16,12))
sns.heatmap(features.corr(), cmap='viridis')
plt.show()
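# Sketch (an addition, not part of the original analysis): programmatically list
# the feature pairs whose absolute correlation exceeds the 0.8 threshold
corr_abs = features.corr().abs()
upper = corr_abs.where(np.triu(np.ones(corr_abs.shape), k=1).astype(bool))
pairs = upper.stack()
print(pairs[pairs > 0.8])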
# After excluding collinear properties I got these features
t = ['Overall', 'Strength', 'Stamina', 'Jumping', 'Composure', 'Reactions', 'ShortPassing', 'GKKicking']
filtered_features = features[t]
filtered_features.corr()["Overall"].sort_values(ascending=False)
# Reactions: measures how quickly a player responds to a situation happening around him.
# Composure: determines the player's ability to stay calm and perform under pressure
# one last check to avoid collinearity
plt.figure(figsize=(8,6))
sns.heatmap(filtered_features.corr(), cmap='viridis', annot=True)
plt.show()
# Choose features and labels
X = (filtered_features[['Strength', 'Stamina', 'Jumping', 'Composure', 'Reactions', 'ShortPassing', 'GKKicking']])
y = (filtered_features['Overall'])
from sklearn.model_selection import train_test_split as tts
# Split the variables into train and test
X_train, X_true, y_train, y_true = tts(X, y, test_size=0.20, random_state=0,)
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Model Evaluation
###Code
# Create dataframe from model coefficients
coefs = pd.DataFrame(model.coef_, X_train.columns, columns=["Coefficients"])
print(f'rSquared: {round(model.score(X_train, y_train), 3)}')
coefs
# The rSquared metric describes how well the model fits the training data
# Coefficient interpretation: if you hold the other features fixed and increase 'Reactions'
# by one unit, you get an increase in 'Overall' of about 0.379
###Output
rSquared: 0.812
###Markdown
Model Predictions
###Code
y_pred = model.predict(X_true)
# plot actual vs predicted values
plt.rcParams.update({'font.size': 12})
plt.title('Actual vs. Predicted')
plt.xlabel('Actual Values')
plt.ylabel('Predicted Values')
plt.scatter(y_true, y_pred)
plt.show()
from sklearn.metrics import mean_squared_error
# Root Mean Squared Error represents the difference between real and predicted values.
# This difference is expressed in the same units as the predicted value (in this case 'Overall').
# Another way to test your model is to plot the residuals distribution: if it looks normally
# distributed with a mean around 0, it indicates that the model is a reasonable choice for this data.
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
print(f"Root Mean Squared Error: {round(rmse, 3)}")
plt.title('Residuals')
sns.distplot((y_true-y_pred),bins=50)
plt.show()
###Output
Root Mean Squared Error: 2.94
|
TFLite for deployment.ipynb | ###Markdown
**Converting to TFLite :**
###Code
# Importing dependancies
from tensorflow import lite
import numpy as np
# Initialising converters
encoder_tflite_converter = lite.TFLiteConverter.from_saved_model("../input/neural-machine-translation-english-to-italian/encoder")
decoder_tflite_converter = lite.TFLiteConverter.from_saved_model("../input/neural-machine-translation-english-to-italian/decoder")
# Defining quantisation schema
encoder_tflite_converter.optimizations = [lite.Optimize.DEFAULT]
decoder_tflite_converter.optimizations = [lite.Optimize.DEFAULT]
# Converting to TFLite
encoder_tflite_content = encoder_tflite_converter.convert()
decoder_tflite_content = decoder_tflite_converter.convert()
# Saving quantised models
with open("./encoder.tflite", 'wb') as f:
f.write(encoder_tflite_content)
with open("./decoder.tflite", 'wb') as f:
f.write(decoder_tflite_content)
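# Optional sanity check (sketch): dynamic-range quantisation should shrink the models on disk
import os
print('encoder.tflite:', os.path.getsize('./encoder.tflite'), 'bytes')
print('decoder.tflite:', os.path.getsize('./decoder.tflite'), 'bytes')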
###Output
e it has no allocated buffer.
###Markdown
**Inference from quantised model :**
###Code
# Importing dependencies
import pickle
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.utils import to_categorical
# Loading tokenizers
eng_tok = pickle.load(open("../input/neural-machine-translation-english-to-italian/English Tokenizer.pkl", 'rb'))
it_tok = pickle.load(open("../input/neural-machine-translation-english-to-italian/Italian Tokenizer.pkl", 'rb'))
eng_seq_len = 20 # First dimension of encoder Input shape
eng_vocab_size = len(eng_tok.word_index)+1 # Second dimension of encoder Input shape
it_seq_len = 20 # First dimension of decoder Input shape
it_vocab_size = len(it_tok.word_index)+1 # Second dimension of decoder Input shape
def sent_to_seq(sequences, tokenizer, vocab_size=None, reverse=False, onehot=False):
""" Converts text data into sequences supported by model input layers.
Args:
sequences (list): List of text data.
tokenizer (tf.keras.preprocessing.text.Tokenizer): Tensorflow tokenizer object.
vocab_size (int): Number of words in the whole vocabulary.
reverse (bool): Reverses the padded sequence if set True. Defaults False.
(Eg: if set True, [1 2 3 0 0] becomes [0 0 3 2 1])
onehot (bool): Creates onehot representation of the padded sequence if set True.
Defaults False.
Returns:
preprocessed_seq (list): List of preprocessed sequences.
"""
# Tokenizing
seq = tokenizer.texts_to_sequences(sequences)
# Padding
preprocessed_seq = pad_sequences(seq, padding='post', truncating='post', maxlen=20)
# Reversing
if reverse:
preprocessed_seq = preprocessed_seq[:, ::-1]
# Onehot encoding
if onehot:
preprocessed_seq = to_categorical(preprocessed_seq, num_classes=vocab_size)
return preprocessed_seq
def word_to_onehot(tokenizer, word, vocab_size):
""" Converts a single word into onehot representation.
Args:
tokenizer (tf.keras.preprocessing.text.Tokenizer): Tensorflow tokenizer object.
word (str): Word to be tokenized and onehot encoded.
vocab_size (int): Number of words in the whole vocabulary.
Returns:
de_onhot (list): Onehot representation of given word.
"""
de_seq = tokenizer.texts_to_sequences([[word]])
de_onehot = to_categorical(de_seq, num_classes=vocab_size).reshape(1, 1, vocab_size)
return de_onehot
# Loading TFLite models
enc_interpreter = lite.Interpreter(model_path="./encoder.tflite")
dec_interpreter = lite.Interpreter(model_path="./decoder.tflite")
# Allocates tensors
enc_interpreter.allocate_tensors()
dec_interpreter.allocate_tensors()
# Input layer details
en_input_details = enc_interpreter.get_input_details()
de_input_details = dec_interpreter.get_input_details()
# Output layer details
en_output_details = enc_interpreter.get_output_details()
de_output_details = dec_interpreter.get_output_details()
print(enc_interpreter.get_input_details())
print(de_input_details[1])
print(de_output_details)
def translate(eng_sentence):
""" Returns Italian translation of given english sentence.
Args:
eng_sentence (str): English text to be translated.
Returns:
it_sent (str): Italian translated text.
"""
en_seq = sent_to_seq([eng_sentence],
tokenizer=eng_tok,
reverse=True,
onehot=True,
vocab_size=eng_vocab_size)
enc_interpreter.set_tensor(en_input_details[0]['index'], en_seq)
enc_interpreter.invoke()
en_st = enc_interpreter.get_tensor(en_output_details[0]['index'])
de_seq = word_to_onehot(it_tok, "sos", it_vocab_size)
it_sent = ""
for i in range(it_seq_len):
dec_interpreter.set_tensor(de_input_details[0]['index'], en_st)
dec_interpreter.set_tensor(de_input_details[1]['index'], de_seq)
dec_interpreter.invoke()
en_st = dec_interpreter.get_tensor(de_output_details[0]['index'])
de_prob = dec_interpreter.get_tensor(de_output_details[1]['index'])
index = np.argmax(de_prob[0, :], axis=-1)
de_w = it_tok.index_word[index]
de_seq = word_to_onehot(it_tok, de_w, it_vocab_size)
if de_w == 'eos': break
it_sent += de_w + ' '
return it_sent
# Making translations
print(translate("Can I talk to you for a minute?"))
###Output
posso parlarvi per un minuto
|
Coordconv/Coord_conv_uniform_regression100.ipynb | ###Markdown
Notes: - This implementation replicates the CoordConv uniform-dataset regression task from the paper "An intriguing failing of convolutional neural networks and the CoordConv solution" by R. Liu et al. (2018) from Uber AI. - The model takes a one-hot image as input and coordinates as output.
###Code
import os
import sys
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import tensorflow as tf
#retrieve data from folder
#references: https://github.com/titu1994/keras-coordconv/blob/master/experiments/train_uniform_classifier.py
train_onehot = np.load('coordconv_data/train_onehot100.npy').astype('float32')
test_onehot = np.load('coordconv_data/test_onehot100.npy').astype('float32')
#retrieve coordinates
coord_train = np.where(train_onehot == 1.0)
coord_test = np.where(test_onehot == 1.0)
#Training coords
x_coord_train = coord_train[1]
y_coord_train = coord_train[2]
#Test coords
x_coord_test = coord_test[1]
y_coord_test = coord_test[2]
xy_coord_train = np.zeros((len(x_coord_train), 1, 1, 2), dtype='float32')
xy_coord_test = np.zeros((len(x_coord_test), 1, 1, 2), dtype='float32')
for i ,(x, y) in enumerate(zip(x_coord_train, y_coord_train)):
xy_coord_train[i, 0, 0, 0] = x
xy_coord_train[i, 0, 0, 1] = y
for i ,(x, y) in enumerate(zip(x_coord_test, y_coord_test)):
xy_coord_test[i, 0, 0, 0] = x
xy_coord_test[i, 0, 0, 1] = y
# Plot dataset
plt.imshow(np.sum(train_onehot, axis=0)[:, :, 0], cmap='gray')
plt.title('Train One-hot dataset')
plt.show()
plt.imshow(np.sum(test_onehot, axis=0)[:, :, 0], cmap='gray')
plt.title('Test One-hot dataset')
plt.show()
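# Build a synthetic background by summing the first 500 test one-hots, scaled by 0.8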
noise = tf.expand_dims(np.sum(test_onehot[:500], axis=0)*(0.8), 0)
train_onehot02 = train_onehot + noise
# for i in range(len(train_onehot)):
# plt.imshow(train_onehot[i, :, :, 0], cmap='gray')
# print("Training data set label #{}, x coordinate: ".format(i))
# plt.show()
# if input()== 'exit':
# break
###Output
_____no_output_____
###Markdown
Coordconv Model
###Code
#model
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import Input, Conv2D, Flatten, Dense, Softmax, MaxPooling2D
from coord_conv import CoordConv
def model_regression(inps):
coord01 = CoordConv(x_dim = 100, y_dim = 100, with_r = False, filters = 20,
kernel_size = 1)(inps)
conv01 = Conv2D(20,1, activation = None)(coord01)
conv02 = Conv2D(20,1, activation = None)(conv01)
conv03 = Conv2D(20,1, activation = None)(conv02)
conv04 = Conv2D(20,1, activation = None)(conv03)
conv05 = Conv2D(20,1, activation = None)(conv04)
conv06 = Conv2D(20,3, activation = None)(conv05)
conv07 = Conv2D(2,3 ,activation = None)(conv06)
output = MaxPooling2D( pool_size=96, strides=96, padding='valid')(conv07)
return output
def model_regression02(inps):
coord01 = CoordConv(x_dim = 100, y_dim = 100, with_r = False, filters = 128,
kernel_size = 1, activation = 'relu')(inps)
conv01 = Conv2D(64,1, activation = 'relu')(coord01)
conv02 = Conv2D(32,3, activation = 'relu')(conv01)
maxPool01 = MaxPooling2D(pool_size = (2,2), strides = 2)(conv02)
conv03 = Conv2D(2,3 ,activation = None)(maxPool01)
output = MaxPooling2D( pool_size=47, strides=47, padding='valid')(conv03)
return output
model_input = Input(shape=(100,100,1))
model = Model(model_input, model_regression(model_input))
model.summary()
# model input coordinates
optimizer = tf.keras.optimizers.Adam(lr=0.01)
model.compile(optimizer, 'mean_squared_error', metrics=['accuracy'])
model.fit(train_onehot,xy_coord_train, batch_size = 32, epochs = 20,
verbose = 1, validation_data = (test_onehot, xy_coord_test))
model.load_weights("best_reg_model100_02.hdf5")
img_01 = test_onehot[0]
img_02 = test_onehot[1]*0.8
img_03 = test_onehot[2]
img_04 = test_onehot[3]*0.5
img_05 = test_onehot[4]*0.6
img_total = img_01 + img_02 + img_03 + img_04 + img_05
plt.imshow(np.squeeze(img_total), cmap = 'gray')
preds_01 = model.predict(tf.expand_dims(img_01, 0))
preds_02 = model.predict(tf.expand_dims(img_02, 0))
preds_03 = model.predict(tf.expand_dims(img_03, 0))
preds_04 = model.predict(tf.expand_dims(img_04, 0))
preds_05 = model.predict(tf.expand_dims(img_05, 0))
preds_total = model.predict(tf.expand_dims(img_total, 0))
print(preds_01)
print(preds_02)
print(preds_03)
print(preds_04)
print(preds_05)
print(preds_total)
#Visualize test set
preds = model.predict(test_onehot)
def error(true_coord, pred_coord, i):
x_dif = true_coord[i,0,0,0] - pred_coord[i,0,0,0]
y_dif = true_coord[i,0,0,1] - pred_coord[i,0,0,1]
return np.sqrt(np.square(x_dif)+np.square(y_dif))
for i in range(len(xy_coord_test)):
print('True coordinates: {}'.format(xy_coord_test[i]))
print('Predicted coordinates: {}'.format(preds[i]))
print('Error: {}' .format(error(xy_coord_test, preds, i)))
plt.imshow(np.reshape(test_onehot[i], (100, 100)), cmap='gray')
plt.show()
if input() == 'exit':
break
# model.save_weights("best_reg_model100_02.hdf5")
###Output
_____no_output_____
###Markdown
Normal ConvNet Model
###Code
covnet = Sequential([
Conv2D(filters = 8, kernel_size = 1, strides = 1, padding = "same", activation = 'relu'),
Conv2D(filters = 8, kernel_size = 1, strides = 1, padding = "same", activation = 'relu'),
Conv2D(filters = 8, kernel_size = 1, strides = 1, padding = "same", activation = 'relu'),
Conv2D(filters = 8, kernel_size = 3, strides = 1, padding = "same", activation = 'relu'),
Conv2D(filters = 2, kernel_size = 3, strides = 1, padding = "same", activation = 'relu'),
    MaxPooling2D(pool_size = 100, strides = 100, padding = 'valid'),
    #output shape(batch_size, 1, 1, 2)
])
covnet.build((None, 100, 100, 1))
optimizer = tf.keras.optimizers.Adam(lr=0.01)
covnet.compile(optimizer, 'mean_squared_error', metrics=['accuracy'])
covnet.fit(train_onehot,xy_coord_train, batch_size = 32, epochs = 20,
verbose = 1, validation_data = (test_onehot, xy_coord_test))
preds = covnet.predict(test_onehot)
def error(true_coord, pred_coord, i):
x_dif = true_coord[i,0,0,0] - pred_coord[i,0,0,0]
y_dif = true_coord[i,0,0,1] - pred_coord[i,0,0,1]
return np.sqrt(np.square(x_dif)+np.square(y_dif))
for i in range(len(xy_coord_test)):
print('True coordinates: {}'.format(xy_coord_test[i]))
print('Predicted coordinates: {}'.format(preds[i]))
print('Error: {}' .format(error(xy_coord_test, preds, i)))
plt.imshow(np.reshape(test_onehot[i], (100, 100)), cmap='gray')
plt.show()
if input() == 'exit':
break
###Output
True coordinates: [[[25. 31.]]]
Predicted coordinates: [[[30.058958 30.397638]]]
Error: 5.094692707061768
|
detecting_mitotic_figures.ipynb | ###Markdown
Detecting mitotic figures using Amazon Rekognition Custom Labels. Mitotic figures are cells that are dividing via a process called _mitosis_ to create two new cells. Identifying and counting these mitotic figures is part of histopathology tissue analysis, considered the gold standard in cancer diagnosis. A pathologist will usually take hematoxylin-eosin stained tissue samples and identify these and other features when evaluating tumors. This process depends entirely on pathologists and is costly and time consuming. As technology evolves, whole-slide imaging (WSI) techniques have enabled laboratories to start scanning and digitizing samples. And with the recent advances in machine learning (ML), it has now become feasible to build systems that can help pathologists by automating the detection of abnormal and/or relevant features in pathology slides. In this workshop, we will explore how Amazon Rekognition Custom Labels can be used to implement such automated detection systems by processing WSI data, and using it to train a custom model that detects mitotic figures. Before you continue: make sure that you are using the _Python 3 (Data Science)_ kernel, and an `ml.m5.large` instance (will show up as 2 vCPU + 8 GiB on the toolbar); using a smaller instance may cause some operations to run out of memory. Install dependencies: to prepare our SageMaker Studio application instance, we will update system packages first.
###Code
!apt update > /dev/null && apt dist-upgrade -y > /dev/null
###Output
_____no_output_____
###Markdown
For the WSI data, we need the [OpenSlide](https://openslide.org) library and tooling, which we can install using `apt`.
###Code
!apt install -y build-essential openslide-tools python-openslide libgl1-mesa-glx > /dev/null
###Output
_____no_output_____
###Markdown
We also use [SlideRunner](https://github.com/DeepPathology/SlideRunner) and [fastai](https://fast.ai) to load and process the slides, which we need to install by using `pip`.
###Code
!pip install SlideRunner SlideRunner_dataAccess fastai==1.0.61 > /dev/null
###Output
_____no_output_____
###Markdown
Downloading the dataset. We will use the MITOS_WSI_CMC dataset, which is available on [GitHub](https://github.com/DeepPathology/MITOS_WSI_CMC). Images are downloaded from Figshare. This step takes approximately 10-12 minutes. If you are not running this as a self-paced lab, your instructor will pause here and introduce you to other necessary concepts while waiting.
###Code
from dataset import download_dataset
download_dataset()
###Output
_____no_output_____
###Markdown
Loading the dataIn the previous step, you downloaded the WSI files from which you will generate the training and test images for Amazon Rekognition. However, you still need the labels for each of the mitotic figures in those images. These are stored in a SQLite database that is part of the dataset's repository. We will download the database now.
###Code
%reload_ext autoreload
%autoreload 2
import os
from typing import List
import urllib
import numpy as np
from SlideRunner.dataAccess.database import Database
from pathlib import Path
DATABASE_URL = 'https://github.com/DeepPathology/MITOS_WSI_CMC/raw/master/databases/MITOS_WSI_CMC_MEL.sqlite'
DATABASE_FILENAME = 'MITOS_WSI_CMC_MEL.sqlite'
Path("./databases").mkdir(parents=True, exist_ok=True)
local_filename, headers = urllib.request.urlretrieve(
DATABASE_URL,
filename=os.path.join('databases', DATABASE_FILENAME),
)
###Output
_____no_output_____
###Markdown
SetupThere are a few things we still need to define before moving on: StorageWe need an Amazon S3 bucket to place the image files, so that Amazon Rekognition can read those during training and testing. We will use the default Amazon SageMaker bucket that is automatically created for you. DatabaseTo have access to the annotations, we need to open the database using `SlideRunner`. Test slidesWe need to define a set of test slides to set aside. These will be used to assess your model's ability to generalize, and thus cannot be used to generate training data. That is the reason we are defining them beforehand.There are three different arrays, each defining a different set of test slides. By default, the first set of test slides is used, but you can go ahead and try different combinations.
###Code
import sagemaker
sm_session = sagemaker.Session()
size=512
bucket_name = sm_session.default_bucket()
database = Database()
database.open(os.path.join('databases', DATABASE_FILENAME))
slidelist_test_1 = ['14','18','3','22','10','15','21']
slidelist_test_2 = ['1','20','17','5','2','11','16']
slidelist_test_3 = ['13','7','19','8','6','9', '12']
slidelist_test = slidelist_test_1
###Output
_____no_output_____
###Markdown
Retrieve the slidesNow we can call the `get_slides` function, which will produce a list of training and test slides we can use to generate the training and test images. The code for this function is in the `sampling.py` file.We need to pass:* A reference to the database object, so that annotations can be read and linked to the slides.* A list of slides to use to generate the test dataset (and to exclude from the training dataset).* The ID of the negative class - Not used in this workshop.* The size (both width and height), in pixels, of the image that is generated when `get_patch` is invoked on a `SlideContainer`. This effectively sets the size of the image that is created for Amazon Rekognition.
###Code
from sampling import get_slides
image_size = 512
lbl_bbox, training_slides, test_slides, files = get_slides(database, slidelist_test, negative_class=1, size=image_size)
###Output
_____no_output_____
###Markdown
Shuffle the slidesWe want to randomly sample from the training and test slides. Using the lists of training and test slides, we will randomly select a file `n_training_images` times for training, and `n_test_images` times for testing. Notice that we have chosen a test set that contains 20% of the number of images in the training set.
###Code
n_training_images = 500
n_test_images = int(0.2 * n_training_images)
training_files = list([
(y, files[y]) for y in np.random.choice(
[x for x in training_slides], n_training_images)
])
test_files = list([
(y, files[y]) for y in np.random.choice(
[x for x in test_slides], n_test_images)
])
###Output
_____no_output_____
###Markdown
Create the images for training the Rekognition Custom Labels model
###Code
Path("rek_slides/training").mkdir(parents=True, exist_ok=True)
Path("rek_slides/test").mkdir(parents=True, exist_ok=True)
###Output
_____no_output_____
###Markdown
We need to build a JSON Lines manifest for each channel; the helper functions below create one JSON line per image.
###Code
def get_annotation_json_line(filename, channel, annotations, labels):
objects = list([{'confidence' : 1} for i in range(0, len(annotations))])
return json.dumps({
'source-ref': f's3://{bucket_name}/data/{channel}/{filename}',
'bounding-box': {
'image_size': [{
'width': size,
'height': size,
'depth': 3
}],
'annotations': annotations,
},
'bounding-box-metadata': {
'objects': objects,
'class-map': dict({ x: str(x) for x in labels }),
'type': 'groundtruth/object-detection',
'human-annotated': 'yes',
'creation-date': datetime.datetime.now().isoformat(),
'job-name': 'rek-pathology',
}
})
def generate_annotations(x_start: int, y_start: int, bboxes, labels, filename: str, channel: str):
annotations = []
for bbox in bboxes:
if check_bbox(x_start, y_start, bbox):
# Get coordinates relative to this slide.
x0 = bbox.left - x_start
y0 = bbox.top - y_start
annotation = {
'class_id': 1,
'top': y0,
'left': x0,
'width': bbox.right - bbox.left,
'height': bbox.bottom - bbox.top
}
annotations.append(annotation)
return get_annotation_json_line(filename, channel, annotations, labels)
###Output
_____no_output_____
###Markdown
Next, we extract random patches from the slides to use for training and testing.
###Code
import datetime
import json
import random
from fastai import *
from fastai.vision import *
from tqdm.notebook import tqdm
# Margin size, in pixels, for training images. This is the space we leave on
# each side for the bounding box(es) to be well into the image.
margin_size = 64
training_annotations = []
test_annotations = []
def check_bbox(x_start: int, y_start: int, bbox) -> bool:
return (bbox._left > x_start and
bbox._right < x_start + image_size and
bbox._top > y_start and
bbox._bottom < y_start + image_size)
def generate_images(file_list) -> None:
for f_idx in tqdm(range(0, len(file_list)), desc='Writing training images...'):
slide_idx, f = file_list[f_idx]
bboxes = lbl_bbox[slide_idx][0]
labels = lbl_bbox[slide_idx][1]
# Calculate the minimum and maximum horizontal and vertical positions
# that bounding boxes should have within the image.
x_min = min(map(lambda x: x.left, bboxes)) - margin_size
y_min = min(map(lambda x: x.top, bboxes)) - margin_size
x_max = max(map(lambda x: x.right, bboxes)) + margin_size
y_max = max(map(lambda x: x.bottom, bboxes)) + margin_size
result = False
while not result:
x_start = random.randint(x_min, x_max - image_size)
y_start = random.randint(y_min, y_max - image_size)
for bbox in bboxes:
if check_bbox(x_start, y_start, bbox):
result = True
break
filename = f'slide_{f_idx}.png'
channel = 'test' if slide_idx in test_slides else 'training'
annotation = generate_annotations(x_start, y_start, bboxes, labels, filename, channel)
if channel == 'training':
training_annotations.append(annotation)
else:
test_annotations.append(annotation)
img = Image(pil2tensor(f.get_patch(x_start, y_start) / 255., np.float32))
img.save(f'rek_slides/{channel}/{filename}')
generate_images(training_files)
generate_images(test_files)
###Output
_____no_output_____
###Markdown
Write the manifest files to diskThe previous cell generated a series of annotations in the Amazon SageMaker Ground Truth format, which is the same format Amazon Rekognition expects. The specifics for object detection are detailed [in the documentation](https://docs.aws.amazon.com/rekognition/latest/customlabels-dg/cd-manifest-files-object-detection.html).Annotations were stored in the `training_annotations` and `test_annotations` lists. Now, we need to write a `manifest.json` file with the contents of each list into the _training_ and _test_ directories.
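As a rough illustration of the format produced by `get_annotation_json_line` above (the values below are made up, and the record is pretty-printed here for readability; in the actual file each record occupies a single line):
```json
{"source-ref": "s3://my-bucket/data/training/slide_0.png",
 "bounding-box": {"image_size": [{"width": 512, "height": 512, "depth": 3}],
                  "annotations": [{"class_id": 1, "top": 102, "left": 241, "width": 58, "height": 63}]},
 "bounding-box-metadata": {"objects": [{"confidence": 1}],
                           "class-map": {"1": "1"},
                           "type": "groundtruth/object-detection",
                           "human-annotated": "yes",
                           "creation-date": "2021-01-17T00:00:00",
                           "job-name": "rek-pathology"}}
```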
###Code
with open('rek_slides/training/manifest.json', 'w') as mf:
mf.write("\n".join(training_annotations))
with open('rek_slides/test/manifest.json', 'w') as mf:
mf.write("\n".join(test_annotations))
###Output
_____no_output_____
###Markdown
Transfer the files to S3Having written the images and the manifest files, we can now upload everything to our S3 bucket. We will use the `upload_data` method exposed by the SageMaker `Session` object, which recursively uploads the contents of a directory to S3.
###Code
import sagemaker
sm_session = sagemaker.Session()
data_location = sm_session.upload_data(
'./rek_slides',
bucket=bucket_name,
)
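# Note: upload_data defaults to key_prefix='data', so the contents of
# ./rek_slides end up under s3://<bucket>/data/training/... and
# s3://<bucket>/data/test/..., which matches the S3 paths used in the
# manifest lines and in the dataset definitions below.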
###Output
_____no_output_____
###Markdown
Create an Amazon Rekognition Custom Labels projectWith our data already in S3, we can take the first step towards training a custom model, and create a Custom Labels project. Using the `boto3` library, we create an Amazon Rekognition client and invoke the `create_project` method. This method only takes a project name as input. If successful, it returns the ARN (Amazon Resource Name) of the newly created project, which we need to save for future use.If you already created the project and just want to retrieve its ARN, you can use the `describe_projects` method exposed by the Amazon Rekognition client, and then retrieve the ARN from the list of projects returned. The commented line assumes that you only have one project and retrieves the ARN from the first description in the list. If you are doing this as a self-paced lab and have previously used Rekognition, be aware that using the zero index may not retrieve the ARN of your workshop project.
###Code
import boto3
project_name = 'rek-mitotic-figures-workshop'
rek = boto3.client('rekognition')
response = rek.create_project(ProjectName=project_name)
# If you have already created the project, use the describe_projects call to
# retrieve the project ARN.
# response = rek.describe_projects()['ProjectDescriptions'][0]
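# A hypothetical name-based lookup, safer when several projects exist
# (the project name is embedded in the project ARN):
# response = next(d for d in rek.describe_projects()['ProjectDescriptions']
#                 if d['ProjectArn'].split('/')[1] == project_name)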
project_arn = response['ProjectArn']
project_arn
###Output
_____no_output_____
###Markdown
Create a project versionTo create a project version, we need to specify:* The name of the version.* The name of the bucket, along with a prefix under which you want the training results to be stored.* A test and a training dataset.For the test and training datasets, you need to tell Amazon Rekognition where your training and test images are stored. The information is contained in the `manifest.json` files that we created in an earlier step, and all we need to do now is indicate where they are stored.
###Code
version_name = '1'
output_config = {
'S3Bucket': bucket_name,
'S3KeyPrefix': 'output',
}
training_dataset = {
'Assets': [
{
'GroundTruthManifest': {
'S3Object': {
'Bucket': bucket_name,
'Name': 'data/training/manifest.json'
}
},
},
]
}
testing_dataset = {
'Assets': [
{
'GroundTruthManifest': {
'S3Object': {
'Bucket': bucket_name,
'Name': 'data/test/manifest.json'
}
},
},
]
}
###Output
_____no_output_____
###Markdown
We also define a helper function to describe the different versions of a project.
###Code
def describe_project_versions():
describe_response = rek.describe_project_versions(
ProjectArn=project_arn,
VersionNames=[version_name],
)
for model in describe_response['ProjectVersionDescriptions']:
print(f"Status: {model['Status']}")
print(f"Message: {model['StatusMessage']}")
return describe_response
###Output
_____no_output_____
###Markdown
All that is left to do is to invoke the `create_project_version` method with the parameters we just defined. Calling this method starts the task of training a model asynchronously. To wait for the task to finish, we create a _waiter_, which will poll the service periodically and exit once the model has either been successfully trained, or an error has occurred.
###Code
response = rek.create_project_version(
VersionName=version_name,
ProjectArn=project_arn,
OutputConfig=output_config,
TrainingData=training_dataset,
TestingData=testing_dataset,
)
waiter = rek.get_waiter('project_version_training_completed')
waiter.wait(
ProjectArn=project_arn,
VersionNames=[version_name],
)
describe_response = describe_project_versions()
###Output
_____no_output_____
###Markdown
Using the modelIf you got this far, it means that your project is ready to run! Before you can start doing inference with your Amazon Rekognition Custom Labels model, you need to start the model. Start the modelTo start the model, simply call the `start_project_version` method. You will need to provide two parameters:* Your project version ARN.* A number of inference units.The number of inference units is related to the amount of resources deployed for your model. The higher the number of inference units you allocate, the higher the throughput you can achieve. However, since you are billed based on the number of inference units, a higher number also means a higher cost.The model can take 5-15 minutes to deploy. If doing this as an instructor-led workshop, your instructor will use this time to answer questions or deliver additional content.
###Code
model_arn = describe_response['ProjectVersionDescriptions'][0]['ProjectVersionArn']
response = rek.start_project_version(
ProjectVersionArn=model_arn,
MinInferenceUnits=1,
)
waiter = rek.get_waiter('project_version_running')
waiter.wait(
ProjectArn=project_arn,
VersionNames=[version_name],
)
describe_project_versions()
###Output
_____no_output_____
###Markdown
Submit an image for inferenceOur trained model is now ready for inference. Use any of the files in the `rek_slides/test` directory and send it to your model by using the `detect_custom_labels` method of the SDK to see how your model is now able to detect mitotic figures in microscopy images.
###Code
import io

from matplotlib import pyplot as plt
from PIL import Image, ImageDraw
# We'll use one of our test images to try out our model.
with open('./rek_slides/test/slide_0.png', 'rb') as image_file:
image_bytes=image_file.read()
# Send the image data to the model.
response = rek.detect_custom_labels(
ProjectVersionArn=model_arn,
Image={
'Bytes': image_bytes
}
)
# Draw the detected bounding boxes on the image.
img = Image.open(io.BytesIO(image_bytes))
draw = ImageDraw.Draw(img)
for custom_label in response['CustomLabels']:
geometry = custom_label['Geometry']['BoundingBox']
w = geometry['Width'] * img.width
h = geometry['Height'] * img.height
l = geometry['Left'] * img.width
t = geometry['Top'] * img.height
draw.rectangle([l, t, l + w, t + h], outline=(0, 0, 255, 255), width=5)
plt.imshow(np.asarray(img))
###Output
_____no_output_____
###Markdown
Cleaning upTo finish this workshop, we will stop the model.**Do not forget to run this step when you complete the workshop. Custom Labels models are billed by the minute.**
###Code
rek.stop_project_version(
ProjectVersionArn=model_arn,
)
###Output
_____no_output_____ |
Neural Network Basics with Tensorflow.ipynb | ###Markdown
Import the Libraries
###Code
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras import Model
import numpy as np
import matplotlib.pyplot as plt
# Import the dataset
mnist = keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape, y_train.shape)
# normalize the data: pixel values 0..255 -> 0..1
x_train, x_test = x_train / 255.0, x_test / 255.0
# Plot the data
for i in range(6):
plt.subplot(2, 3, i+1)
plt.imshow(x_train[i], cmap='gray')
plt.show()
# model
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=(28, 28)), # Flattens our image to reduce to 1-D
keras.layers.Dense(128, activation = 'relu'), # Fully connected layer
keras.layers.Dense(10), # Final layer
])
print(model.summary())
# We can write in this from also
'''model = keras.Sequential()
model.add(keras.layers.Flatten(input_shape=(28, 28)))
model.add(keras.layers.Dense(128, activation = 'relu'))
model.add(keras.layers.Dense(10))
print(model.summary())'''
# Loss & optimizer
loss = keras.losses.SparseCategoricalCrossentropy(from_logits=True) # for multiclass problems where y holds integer class labels
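# (Assumption for illustration: if the labels were one-hot encoded instead,
# the matching loss would be keras.losses.CategoricalCrossentropy(from_logits=True).)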
optim = keras.optimizers.Adam(learning_rate=0.001) # create the optimizer; learning_rate is a hyperparameter (`lr` is deprecated)
metrics = ["accuracy"]
model.compile(loss=loss, optimizer=optim, metrics=metrics) # configure the model for training
# training
batch_size = 64
epochs = 5
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, shuffle=True, verbose=2)
# evaluate the model
model.evaluate(x_test, y_test, batch_size=batch_size, verbose=2)
# predictions
probability_model = keras.models.Sequential([
model,
keras.layers.Softmax()
])
predictions = probability_model(x_test)
pred0 = predictions[0]
print(pred0)
label0 = np.argmax(pred0)
print(label0)
# 2nd way
# model + softmax
predictions = model(x_test)
predictions = tf.nn.softmax(predictions)
pred0 = predictions[0]
print(pred0)
label0 = np.argmax(pred0)
print(label0)
# 3rd way
predictions = model.predict(x_test, batch_size=batch_size)
predictions = tf.nn.softmax(predictions)
pred0 = predictions[0]
print(pred0)
label0 = np.argmax(pred0)
print(label0)
# Predictions for the first 5 test samples
pred05s = predictions[0:5]
print(pred05s.shape)
label05s = np.argmax(pred05s, axis = 1)
print(label05s)
###Output
(5, 10)
[7 2 1 0 4]
###Markdown
Or we can do in another way
###Code
import tensorflow as tf
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
###Output
Epoch 1/5
1875/1875 [==============================] - 4s 2ms/step - loss: 0.4830 - accuracy: 0.8608
Epoch 2/5
1875/1875 [==============================] - 4s 2ms/step - loss: 0.1517 - accuracy: 0.9555
Epoch 3/5
1875/1875 [==============================] - 4s 2ms/step - loss: 0.1083 - accuracy: 0.9684
Epoch 4/5
1875/1875 [==============================] - 4s 2ms/step - loss: 0.0881 - accuracy: 0.9724
Epoch 5/5
1875/1875 [==============================] - 4s 2ms/step - loss: 0.0715 - accuracy: 0.9772
313/313 [==============================] - 1s 2ms/step - loss: 0.0760 - accuracy: 0.9764
|
notebooks/day1/01_Notebooks_Intro.ipynb | ###Markdown
Introduction to Jupyter Notebooks This section will introduce you to the basics of using Python code in Jupyter Notebooks via JupyterLab.This notebook is derived from a [Digital Earth Africa](https://www.digitalearthafrica.org/) notebook: [here](https://github.com/digitalearthafrica/deafrica-sandbox-notebooks/blob/master/Beginners_guide/01_Jupyter_notebooks.ipynb) BackgroundAccess to implementations of the [Open Data Cube](https://www.opendatacube.org/) such as [Digital Earth Africa](https://www.digitalearthafrica.org/) and [Digital Earth Australia](https://www.ga.gov.au/dea) is achieved through the use of Python code and [Jupyter Notebooks](https://jupyterlab.readthedocs.io/en/stable/user/notebook.html).The Jupyter Notebook (also termed notebook from here onwards) is an interactive web application that allows for the viewing, creation and documentation of live code.Notebook applications include data transformation, visualisation, modelling and machine learning.The default web interface to access notebooks is [JupyterLab](https://jupyterlab.readthedocs.io/en/stable/), which we will cover in the next section. DescriptionTopics covered include:* How to run (execute) a Jupyter Notebook cell* The different types of Jupyter Notebook cells* Stopping a process or restarting a Jupyter Notebook* Saving and exporting your work* Starting a new Jupyter Notebook*** Getting started Running (executing) a cellJupyter Notebooks allow code to be separated into sections that can be executed independent of one another.These sections are called "cells".Python code is written into individual cells that can be executed by placing the cursor in the cell and typing `Shift-Enter` on the keyboard or selecting the &9658; "run" button in the ribbon at the top of the notebook. These options will run a single cell at a time.If you wish to auto-run all cells in a notebook, select the &9658;&9658; "restart and run all cells" button in the ribbon. When you run a cell, you are executing that cell's content.Any output produced from running the cell will appear directly below it.Run the cell below:
###Code
print ("I ran a cell!")
###Output
I ran a cell!
###Markdown
Cell status The `[ ]:` symbol to the left of each Code cell describes the state of the cell:* `[ ]:` means that the cell has not been run yet.* `[*]:` means that the cell is currently running.* `[1]:` means that the cell has finished running and was the first cell run.The number indicates the order that the cells were run in.You can also tell whether a cell is currently executing in a Jupyter notebook by inspecting the small circle in the right corner of the ribbon. The circle will turn grey ("Kernel busy") when the cell is running, and return to empty ("Kernel idle") when the process is complete. Jupyter notebook cell typesCells are identified as either Code, Markdown, or Raw. This designation can be changed using the ribbon. Code cellsAll code operations are performed in Code cells. Code cells can be used to edit and write new code, and perform tasks like loading data, plotting data and running analyses. Click on the cell below. Note that the ribbon describes it as a Code cell.
###Code
print("This is a code cell")
###Output
This is a code cell
###Markdown
Markdown cellsPlace the cursor in this cell by double clicking.The cell format has changed to allow for editing. Note that the ribbon describes this as a Markdown cell.Run this cell to return the formatted version.Markdown cells provide the narrative to a notebook.They are used for text and are useful to describe the code operations in the following cells. To see some of the formatting options for text in a Markdown cell, navigate to the "Help" tab of the menu bar at the top of JupyterLab and select "Markdown Reference".Here you will see a wide range of text formatting options including headings, dot points, italics, hyperlinking and creating tables. Raw cellsInformation in Raw cells is stored in the notebook metadata and can be used to render different code formats into HTML or $\LaTeX$.There are a range of available Raw cell formats that differ depending on how they are to be rendered.For the purposes of this beginner's guide, raw cells are rarely used by the authors and not required for most notebook users. Stopping a process or restarting a Jupyter NotebookSometimes it can be useful to stop a cell execution before it finishes (e.g. if a process is taking too long to complete, or if you realise you need to modify some code before running the cell). To interrupt a cell execution, you can click the ■ "stop" button in the ribbon (shortcut: press the I key twice - `I, I`). To test this, run the following code cell.This will run a piece of code that will take 20 seconds to complete.To interrupt this code, press the ■ "stop" button. The notebook should stop executing the cell.
###Code
import time
time.sleep(20)
###Output
_____no_output_____ |
notebook/14t-seresnext50.ipynb.ipynb | ###Markdown
GPU
###Code
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
print(gpu_info)
###Output
Sun Jan 17 22:42:27 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.27.04 Driver Version: 418.67 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 |
| N/A 32C P8 9W / 70W | 0MiB / 15079MiB | 0% Default |
| | | ERR! |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
###Markdown
CFG
###Code
CONFIG_NAME = 'config14.yml'
from requests import get
# filename = get('http://172.28.0.2:9000/api/sessions').json()[0]['name']
TITLE = '14t-seresnext50'
! rm -r cassava
! git clone https://github.com/raijin0704/cassava.git
# ====================================================
# CFG
# ====================================================
import yaml
CONFIG_PATH = f'./cassava/config/{CONFIG_NAME}'
with open(CONFIG_PATH) as f:
config = yaml.safe_load(f)
INFO = config['info']
TAG = config['tag']
CFG = config['cfg']
CFG['train'] = True
CFG['inference'] = False
# CFG['debug'] = True
if CFG['debug']:
CFG['epochs'] = 1
assert INFO['TITLE'] == TITLE
TAG['model_name'] = 'seresnext50_32x4d'
# CFG['batch_size'] = 8
CFG['batch_size']
###Output
_____no_output_____
###Markdown
Environment-specific setup for Colab & Kaggle notebooks colab
###Code
def _colab_kaggle_authority():
from googleapiclient.discovery import build
import io, os
from googleapiclient.http import MediaIoBaseDownload
drive_service = build('drive', 'v3')
results = drive_service.files().list(
q="name = 'kaggle.json'", fields="files(id)").execute()
kaggle_api_key = results.get('files', [])
filename = "/root/.kaggle/kaggle.json"
os.makedirs(os.path.dirname(filename), exist_ok=True)
request = drive_service.files().get_media(fileId=kaggle_api_key[0]['id'])
fh = io.FileIO(filename, 'wb')
downloader = MediaIoBaseDownload(fh, request)
done = False
while done is False:
status, done = downloader.next_chunk()
print("Download %d%%." % int(status.progress() * 100))
os.chmod(filename, 0o600)  # octal mode; plain 600 would set the wrong permissions
def _install_apex():
import os
import subprocess
import sys
# import time
subprocess.run('git clone https://github.com/NVIDIA/apex'.split(' '))
# time.sleep(10)
os.chdir('apex')
subprocess.run('pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" .'.split(' '))
os.chdir('..')
def process_colab():
import subprocess
# Mount Google Drive
from google.colab import drive
drive.mount('/content/drive')
# Set up Google Cloud credentials
from google.colab import auth
auth.authenticate_user()
# Kaggle setup
# _colab_kaggle_authority()
# subprocess.run('pip install --upgrade --force-reinstall --no-deps kaggle'.split(' '))
# Install the required libraries
subprocess.run('pip install --upgrade opencv-python'.split(' '))
subprocess.run('pip install --upgrade albumentations'.split(' '))
subprocess.run('pip install timm'.split(' '))
# if CFG['apex']:
# print('installing apex')
# _install_apex()
# print('done')
# Set the various paths
# DATA_PATH = '/content/drive/Shareddrives/便利用/kaggle/cassava/input/'
DATA_PATH = '/content/input'
OUTPUT_DIR = './output/'
NOTEBOOK_PATH = f'/content/drive/Shareddrives/便利用/kaggle/cassava/notebook/{TITLE}.ipynb'
return DATA_PATH, OUTPUT_DIR, NOTEBOOK_PATH
###Output
_____no_output_____
###Markdown
kaggle notebook
###Code
def _kaggle_gcp_authority():
from kaggle_secrets import UserSecretsClient
user_secrets = UserSecretsClient()
user_credential = user_secrets.get_gcloud_credential()
user_secrets.set_tensorflow_credential(user_credential)
def process_kaggle():
# GCP setup
_kaggle_gcp_authority()
# Set the various paths
DATA_PATH = '../input/cassava-leaf-disease-classification/'
! mkdir output
OUTPUT_DIR = './output/'
NOTEBOOK_PATH = './__notebook__.ipynb'
# system path
import sys
sys.path.append('../input/pytorch-image-models/pytorch-image-models-master')
return DATA_PATH, OUTPUT_DIR, NOTEBOOK_PATH
###Output
_____no_output_____
###Markdown
Common
###Code
def process_common():
# Install the required libraries
import subprocess
subprocess.run('pip install mlflow'.split(' '))
# Environment variables
import os
os.environ["GCLOUD_PROJECT"] = INFO['PROJECT_ID']
try:
from google.colab import auth
except ImportError:
DATA_PATH, OUTPUT_DIR, NOTEBOOK_PATH = process_kaggle()
env = 'kaggle'
else:
DATA_PATH, OUTPUT_DIR, NOTEBOOK_PATH = process_colab()
env = 'colab'
finally:
process_common()
!rm -r /content/input
import os
try:
from google.colab import auth
except ImportError:
pass
else:
! cp /content/drive/Shareddrives/便利用/kaggle/cassava/input.zip /content/input.zip
! unzip input.zip
! rm input.zip
train_num = len(os.listdir(DATA_PATH+"/train_images"))
assert train_num == 21397
###Output
(Streaming output was truncated to the last 5000 lines; thousands of repeated "inflating: input/train_images/<id>.jpg" unzip log lines omitted.)
inflating: input/train_images/2157497391.jpg
inflating: input/train_images/3525137356.jpg
inflating: input/train_images/1428569100.jpg
inflating: input/train_images/376083914.jpg
inflating: input/train_images/2215986164.jpg
inflating: input/train_images/166688569.jpg
inflating: input/train_images/1600111988.jpg
inflating: input/train_images/2436214521.jpg
inflating: input/train_images/1129668095.jpg
inflating: input/train_images/3909366564.jpg
inflating: input/train_images/2466474683.jpg
inflating: input/train_images/571328280.jpg
inflating: input/train_images/2757749488.jpg
inflating: input/train_images/2733805167.jpg
inflating: input/train_images/4007199956.jpg
inflating: input/train_images/1239119385.jpg
inflating: input/train_images/184462493.jpg
inflating: input/train_images/2329257679.jpg
inflating: input/train_images/442989383.jpg
inflating: input/train_images/136815898.jpg
inflating: input/train_images/585209164.jpg
inflating: input/train_images/488438522.jpg
inflating: input/train_images/987159645.jpg
inflating: input/train_images/813509621.jpg
inflating: input/train_images/463752462.jpg
inflating: input/train_images/2324804154.jpg
inflating: input/train_images/1660278504.jpg
inflating: input/train_images/2100448540.jpg
inflating: input/train_images/410863155.jpg
inflating: input/train_images/3598509491.jpg
inflating: input/train_images/1067302519.jpg
inflating: input/train_images/1245945074.jpg
inflating: input/train_images/2928220238.jpg
inflating: input/train_images/2998191298.jpg
inflating: input/train_images/3717608172.jpg
inflating: input/train_images/2194319348.jpg
inflating: input/train_images/3269286573.jpg
inflating: input/train_images/261186049.jpg
inflating: input/train_images/3289401619.jpg
inflating: input/train_images/3968048683.jpg
inflating: input/train_images/187887606.jpg
inflating: input/train_images/1683556923.jpg
inflating: input/train_images/1989483426.jpg
inflating: input/train_images/549854027.jpg
inflating: input/train_images/3244462441.jpg
inflating: input/train_images/2276395594.jpg
inflating: input/train_images/4156138691.jpg
inflating: input/train_images/4044829046.jpg
inflating: input/train_images/2377043047.jpg
inflating: input/train_images/1145084928.jpg
inflating: input/train_images/2554260896.jpg
inflating: input/train_images/1424905982.jpg
inflating: input/train_images/2178037336.jpg
inflating: input/train_images/2318645335.jpg
inflating: input/train_images/2297472102.jpg
inflating: input/train_images/206432986.jpg
inflating: input/train_images/1971427379.jpg
inflating: input/train_images/1212460226.jpg
inflating: input/train_images/940939729.jpg
inflating: input/train_images/972649867.jpg
inflating: input/train_images/1912329791.jpg
inflating: input/train_images/2115543310.jpg
inflating: input/train_images/2544507899.jpg
inflating: input/train_images/3478417480.jpg
inflating: input/train_images/3564855382.jpg
inflating: input/train_images/743850893.jpg
inflating: input/train_images/4179630738.jpg
inflating: input/train_images/1808921036.jpg
inflating: input/train_images/2781831798.jpg
inflating: input/train_images/2761141056.jpg
inflating: input/train_images/3838540238.jpg
inflating: input/train_images/2406694792.jpg
inflating: input/train_images/2653833670.jpg
inflating: input/train_images/1440401494.jpg
inflating: input/train_images/2522202499.jpg
inflating: input/train_images/3974455402.jpg
inflating: input/train_images/439049574.jpg
inflating: input/train_images/3378967649.jpg
inflating: input/train_images/1962020298.jpg
inflating: input/train_images/2744104425.jpg
inflating: input/train_images/2937855140.jpg
inflating: input/train_images/3237815335.jpg
inflating: input/train_images/4060450564.jpg
inflating: input/train_images/847847826.jpg
inflating: input/train_images/3741168853.jpg
inflating: input/train_images/3444851887.jpg
inflating: input/train_images/424246187.jpg
inflating: input/train_images/887272418.jpg
inflating: input/train_images/1269646820.jpg
inflating: input/train_images/3927154512.jpg
inflating: input/train_images/227631164.jpg
inflating: input/train_images/1096438409.jpg
inflating: input/train_images/4130960215.jpg
inflating: input/train_images/2589571181.jpg
inflating: input/train_images/2936101260.jpg
inflating: input/train_images/744968127.jpg
inflating: input/train_images/2182855809.jpg
inflating: input/train_images/2484471054.jpg
inflating: input/train_images/2198789414.jpg
inflating: input/train_images/1277679675.jpg
inflating: input/train_images/1981710530.jpg
inflating: input/train_images/2687625618.jpg
inflating: input/train_images/15175683.jpg
inflating: input/train_images/1025060651.jpg
inflating: input/train_images/147082706.jpg
inflating: input/train_images/879360102.jpg
inflating: input/train_images/3138454359.jpg
inflating: input/train_images/1853237353.jpg
inflating: input/train_images/1156963169.jpg
inflating: input/train_images/4252058382.jpg
inflating: input/train_images/3672574295.jpg
inflating: input/train_images/3602124236.jpg
inflating: input/train_images/3044653418.jpg
inflating: input/train_images/2527606306.jpg
inflating: input/train_images/3254091350.jpg
inflating: input/train_images/306210288.jpg
inflating: input/train_images/336083586.jpg
inflating: input/train_images/3570056225.jpg
inflating: input/train_images/4281504647.jpg
inflating: input/train_images/1974290297.jpg
inflating: input/train_images/2587436758.jpg
inflating: input/train_images/1017670009.jpg
inflating: input/train_images/4290827656.jpg
inflating: input/train_images/3814975148.jpg
inflating: input/train_images/3704210007.jpg
inflating: input/train_images/1398282852.jpg
inflating: input/train_images/278898129.jpg
inflating: input/train_images/1859307222.jpg
inflating: input/train_images/72925791.jpg
inflating: input/train_images/806702626.jpg
inflating: input/train_images/2509495518.jpg
inflating: input/train_images/3529355178.jpg
inflating: input/train_images/2497160011.jpg
inflating: input/train_images/3649285117.jpg
inflating: input/train_images/1675758805.jpg
inflating: input/train_images/3889376143.jpg
inflating: input/train_images/1811483991.jpg
inflating: input/train_images/2935901261.jpg
inflating: input/train_images/3156591589.jpg
inflating: input/train_images/1136746572.jpg
inflating: input/train_images/2801545272.jpg
inflating: input/train_images/2590675849.jpg
inflating: input/train_images/4091333216.jpg
inflating: input/train_images/830380663.jpg
inflating: input/train_images/1495222609.jpg
inflating: input/train_images/2684560144.jpg
inflating: input/train_images/2442826642.jpg
inflating: input/train_images/2431603043.jpg
inflating: input/train_images/1148219268.jpg
inflating: input/train_images/3398807044.jpg
inflating: input/train_images/2271308515.jpg
inflating: input/train_images/720776367.jpg
inflating: input/train_images/2377974845.jpg
inflating: input/train_images/3829413649.jpg
inflating: input/train_images/3518069486.jpg
inflating: input/train_images/1359893940.jpg
inflating: input/train_images/3295623672.jpg
inflating: input/train_images/3948333262.jpg
inflating: input/train_images/472370398.jpg
inflating: input/train_images/995221528.jpg
inflating: input/train_images/2663709463.jpg
inflating: input/train_images/2032736928.jpg
inflating: input/train_images/2642446422.jpg
inflating: input/train_images/577090506.jpg
inflating: input/train_images/208832652.jpg
inflating: input/train_images/2377945699.jpg
inflating: input/train_images/3870800967.jpg
inflating: input/train_images/2807877356.jpg
inflating: input/train_images/690565188.jpg
inflating: input/train_images/1839152868.jpg
inflating: input/train_images/630407730.jpg
inflating: input/train_images/2426854829.jpg
inflating: input/train_images/3968955392.jpg
inflating: input/train_images/105741284.jpg
inflating: input/train_images/591335475.jpg
inflating: input/train_images/358985142.jpg
inflating: input/train_images/2199231317.jpg
inflating: input/train_images/667282886.jpg
inflating: input/train_images/542560691.jpg
inflating: input/train_images/2734892772.jpg
inflating: input/train_images/2097195239.jpg
inflating: input/train_images/1090116806.jpg
inflating: input/train_images/2372500857.jpg
inflating: input/train_images/874878736.jpg
inflating: input/train_images/3332684410.jpg
inflating: input/train_images/2107489867.jpg
inflating: input/train_images/1127545108.jpg
inflating: input/train_images/3964970132.jpg
inflating: input/train_images/986888785.jpg
inflating: input/train_images/3419923779.jpg
inflating: input/train_images/802266352.jpg
inflating: input/train_images/3882109848.jpg
inflating: input/train_images/2320107173.jpg
inflating: input/train_images/1435727833.jpg
inflating: input/train_images/1535887769.jpg
inflating: input/train_images/4029027750.jpg
inflating: input/train_images/212573449.jpg
inflating: input/train_images/2721767282.jpg
inflating: input/train_images/3585245374.jpg
inflating: input/train_images/2650131569.jpg
inflating: input/train_images/1012804587.jpg
inflating: input/train_images/1909520119.jpg
inflating: input/train_images/1375245484.jpg
inflating: input/train_images/1323997328.jpg
inflating: input/train_images/1538926850.jpg
inflating: input/train_images/52883488.jpg
inflating: input/train_images/1758003075.jpg
inflating: input/train_images/2342455447.jpg
inflating: input/train_images/3058038323.jpg
inflating: input/train_images/3625017880.jpg
inflating: input/train_images/2278938430.jpg
inflating: input/train_images/1351433725.jpg
inflating: input/train_images/1553995001.jpg
inflating: input/train_images/2936687909.jpg
inflating: input/train_images/522209459.jpg
inflating: input/train_images/252899909.jpg
inflating: input/train_images/3489514020.jpg
inflating: input/train_images/206053415.jpg
inflating: input/train_images/84024270.jpg
inflating: input/train_images/4175172679.jpg
inflating: input/train_images/2607316834.jpg
inflating: input/train_images/743721638.jpg
inflating: input/train_images/3542289007.jpg
inflating: input/train_images/955346673.jpg
inflating: input/train_images/3221754335.jpg
inflating: input/train_images/1135103288.jpg
inflating: input/train_images/1364435251.jpg
inflating: input/train_images/192970946.jpg
inflating: input/train_images/1583439213.jpg
inflating: input/train_images/3362457506.jpg
inflating: input/train_images/3516286553.jpg
inflating: input/train_images/3211556241.jpg
inflating: input/train_images/2764717089.jpg
inflating: input/train_images/1052881053.jpg
inflating: input/train_images/1770197152.jpg
inflating: input/train_images/525742373.jpg
inflating: input/train_images/1348307468.jpg
inflating: input/train_images/2418850424.jpg
inflating: input/train_images/663810012.jpg
inflating: input/train_images/1558058751.jpg
inflating: input/train_images/1405323651.jpg
inflating: input/train_images/1785877990.jpg
inflating: input/train_images/560888503.jpg
inflating: input/train_images/65344468.jpg
inflating: input/train_images/1244124878.jpg
inflating: input/train_images/781931652.jpg
inflating: input/train_images/2091943364.jpg
inflating: input/train_images/3031817409.jpg
inflating: input/train_images/2066965759.jpg
inflating: input/train_images/1403068754.jpg
inflating: input/train_images/3501100214.jpg
inflating: input/train_images/2473268326.jpg
inflating: input/train_images/3992628804.jpg
inflating: input/train_images/2170455392.jpg
inflating: input/train_images/3038549667.jpg
inflating: input/train_images/2491179665.jpg
inflating: input/train_images/1211187400.jpg
inflating: input/train_images/3909952620.jpg
inflating: input/train_images/2445684335.jpg
inflating: input/train_images/2847670157.jpg
inflating: input/train_images/307148103.jpg
inflating: input/train_images/3800475083.jpg
inflating: input/train_images/1201690046.jpg
inflating: input/train_images/1179237425.jpg
inflating: input/train_images/2703475066.jpg
inflating: input/train_images/370481129.jpg
inflating: input/train_images/1873204876.jpg
inflating: input/train_images/1354380890.jpg
inflating: input/train_images/1822627582.jpg
inflating: input/train_images/2486584885.jpg
inflating: input/train_images/1535057791.jpg
inflating: input/train_images/4284057693.jpg
inflating: input/train_images/3606325619.jpg
inflating: input/train_images/3947205646.jpg
inflating: input/train_images/1352603733.jpg
inflating: input/train_images/2642951848.jpg
inflating: input/train_images/645991519.jpg
inflating: input/train_images/1101409116.jpg
inflating: input/train_images/1995180992.jpg
inflating: input/train_images/4149439273.jpg
inflating: input/train_images/1086549590.jpg
inflating: input/train_images/2428748411.jpg
inflating: input/train_images/3493232417.jpg
inflating: input/train_images/1581083088.jpg
inflating: input/train_images/2410206880.jpg
inflating: input/train_images/2178737885.jpg
inflating: input/train_images/3189838386.jpg
inflating: input/train_images/466665230.jpg
inflating: input/train_images/4111579304.jpg
inflating: input/train_images/2757717680.jpg
inflating: input/train_images/634724499.jpg
inflating: input/train_images/3219226576.jpg
inflating: input/train_images/747761920.jpg
inflating: input/train_images/2275462714.jpg
inflating: input/train_images/335204978.jpg
inflating: input/train_images/1024067372.jpg
inflating: input/train_images/2396049466.jpg
inflating: input/train_images/3688162570.jpg
inflating: input/train_images/1201158169.jpg
inflating: input/train_images/785251696.jpg
inflating: input/train_images/4212381540.jpg
inflating: input/train_images/3909502094.jpg
inflating: input/train_images/1963744346.jpg
inflating: input/train_images/4205544766.jpg
inflating: input/train_images/368507715.jpg
inflating: input/train_images/2797119560.jpg
inflating: input/train_images/1557398186.jpg
inflating: input/train_images/2820350.jpg
inflating: input/train_images/3339979490.jpg
inflating: input/train_images/1445721278.jpg
inflating: input/train_images/1782735402.jpg
inflating: input/train_images/2768992642.jpg
inflating: input/train_images/3174117407.jpg
inflating: input/train_images/2850943526.jpg
inflating: input/train_images/3058839740.jpg
inflating: input/train_images/1775924817.jpg
inflating: input/train_images/3788064831.jpg
inflating: input/train_images/1979663030.jpg
inflating: input/train_images/1829820794.jpg
inflating: input/train_images/2932123995.jpg
inflating: input/train_images/866496354.jpg
inflating: input/train_images/2783143835.jpg
inflating: input/train_images/338055646.jpg
inflating: input/train_images/936438569.jpg
inflating: input/train_images/460888118.jpg
inflating: input/train_images/2806775485.jpg
inflating: input/train_images/4037829966.jpg
inflating: input/train_images/3205038032.jpg
inflating: input/train_images/3381060071.jpg
inflating: input/train_images/2662418280.jpg
inflating: input/train_images/1685008458.jpg
inflating: input/train_images/4265846658.jpg
inflating: input/train_images/414229694.jpg
inflating: input/train_images/2111097529.jpg
inflating: input/train_images/3645245816.jpg
inflating: input/train_images/4010091989.jpg
inflating: input/train_images/1518858149.jpg
inflating: input/train_images/1993626674.jpg
inflating: input/train_images/51063556.jpg
inflating: input/train_images/4049843068.jpg
inflating: input/train_images/2175002388.jpg
inflating: input/train_images/3250432393.jpg
inflating: input/train_images/3904531265.jpg
inflating: input/train_images/3660846961.jpg
inflating: input/train_images/1602453903.jpg
inflating: input/train_images/1285287595.jpg
inflating: input/train_images/1496786758.jpg
inflating: input/train_images/2348020586.jpg
inflating: input/train_images/3001133912.jpg
inflating: input/train_images/2249767695.jpg
inflating: input/train_images/1776919078.jpg
inflating: input/train_images/175816110.jpg
inflating: input/train_images/24632378.jpg
inflating: input/train_images/3778551901.jpg
inflating: input/train_images/2457010245.jpg
inflating: input/train_images/3057848148.jpg
inflating: input/train_images/867025399.jpg
inflating: input/train_images/1823796412.jpg
inflating: input/train_images/1290043327.jpg
inflating: input/train_images/3949530220.jpg
inflating: input/train_images/2424770024.jpg
inflating: input/train_images/3537938630.jpg
inflating: input/train_images/2615629258.jpg
inflating: input/train_images/3053608144.jpg
inflating: input/train_images/2458565094.jpg
inflating: input/train_images/1434278080.jpg
inflating: input/train_images/4180192243.jpg
inflating: input/train_images/2349165022.jpg
inflating: input/train_images/4099004871.jpg
inflating: input/train_images/323293221.jpg
inflating: input/train_images/334748709.jpg
inflating: input/train_images/2265305065.jpg
inflating: input/train_images/2511157697.jpg
inflating: input/train_images/2681562878.jpg
inflating: input/train_images/2326806135.jpg
inflating: input/train_images/1380110739.jpg
inflating: input/train_images/1110421964.jpg
inflating: input/train_images/1059986462.jpg
inflating: input/train_images/3743195625.jpg
inflating: input/train_images/1001320321.jpg
inflating: input/train_images/1234375577.jpg
inflating: input/train_images/2553757426.jpg
inflating: input/train_images/554189180.jpg
inflating: input/train_images/3628996963.jpg
inflating: input/train_images/1445369057.jpg
inflating: input/train_images/2363265177.jpg
inflating: input/train_images/1098184586.jpg
inflating: input/train_images/1351337979.jpg
inflating: input/train_images/2659141611.jpg
inflating: input/train_images/3920121408.jpg
inflating: input/train_images/2447086641.jpg
inflating: input/train_images/137038681.jpg
inflating: input/train_images/130685384.jpg
inflating: input/train_images/4182626844.jpg
inflating: input/train_images/2705031352.jpg
inflating: input/train_images/2583781568.jpg
inflating: input/train_images/98151706.jpg
inflating: input/train_images/1902806995.jpg
inflating: input/train_images/3760730104.jpg
inflating: input/train_images/3504669411.jpg
inflating: input/train_images/641947553.jpg
inflating: input/train_images/4056070889.jpg
inflating: input/train_images/2903629810.jpg
inflating: input/train_images/1450741896.jpg
inflating: input/train_images/1837284696.jpg
inflating: input/train_images/3114522519.jpg
inflating: input/train_images/3272750945.jpg
inflating: input/train_images/1659403875.jpg
inflating: input/train_images/2904671058.jpg
inflating: input/train_images/3993617081.jpg
inflating: input/train_images/1047894047.jpg
inflating: input/train_images/1178307457.jpg
inflating: input/train_images/489369440.jpg
inflating: input/train_images/2344592304.jpg
inflating: input/train_images/422632532.jpg
inflating: input/train_images/2021239499.jpg
inflating: input/train_images/1268162819.jpg
inflating: input/train_images/2254310915.jpg
inflating: input/train_images/737157935.jpg
inflating: input/train_images/2473286391.jpg
inflating: input/train_images/2952187496.jpg
inflating: input/train_images/2859464940.jpg
inflating: input/train_images/1991065222.jpg
inflating: input/train_images/748154559.jpg
inflating: input/train_images/2077965062.jpg
inflating: input/train_images/15383908.jpg
inflating: input/train_images/1637611911.jpg
inflating: input/train_images/1456881000.jpg
inflating: input/train_images/3889601162.jpg
inflating: input/train_images/107458529.jpg
inflating: input/train_images/726108789.jpg
inflating: input/train_images/2978135052.jpg
inflating: input/train_images/3467700084.jpg
inflating: input/train_images/1558263812.jpg
inflating: input/train_images/2009674442.jpg
inflating: input/train_images/1079107547.jpg
inflating: input/train_images/2722340963.jpg
inflating: input/train_images/2469652661.jpg
inflating: input/train_images/219505342.jpg
inflating: input/train_images/2091547987.jpg
inflating: input/train_images/3119699604.jpg
inflating: input/train_images/1016415263.jpg
inflating: input/train_images/1286002720.jpg
inflating: input/train_images/745112623.jpg
inflating: input/train_images/1251319553.jpg
inflating: input/train_images/2080055302.jpg
inflating: input/train_images/4084600785.jpg
inflating: input/train_images/146247693.jpg
inflating: input/train_images/214207820.jpg
inflating: input/train_images/1899678596.jpg
inflating: input/train_images/3718347785.jpg
inflating: input/train_images/3735709482.jpg
inflating: input/train_images/2401581385.jpg
inflating: input/train_images/2338923245.jpg
inflating: input/train_images/2969200705.jpg
inflating: input/train_images/2888341148.jpg
inflating: input/train_images/4282908664.jpg
inflating: input/train_images/3443567177.jpg
inflating: input/train_images/227245436.jpg
inflating: input/train_images/1060214168.jpg
inflating: input/train_images/2447025051.jpg
inflating: input/train_images/3030726237.jpg
inflating: input/train_images/1193388066.jpg
inflating: input/train_images/1226905166.jpg
inflating: input/train_images/1025466430.jpg
inflating: input/train_images/4054194563.jpg
inflating: input/train_images/769509727.jpg
inflating: input/train_images/446547261.jpg
inflating: input/train_images/1551517348.jpg
inflating: input/train_images/2754223528.jpg
inflating: input/train_images/2765109909.jpg
inflating: input/train_images/714060321.jpg
inflating: input/train_images/1007700625.jpg
inflating: input/train_images/2059180039.jpg
inflating: input/train_images/4213471018.jpg
inflating: input/train_images/3934403227.jpg
inflating: input/train_images/3241444808.jpg
inflating: input/train_images/1751949402.jpg
inflating: input/train_images/3833349143.jpg
inflating: input/train_images/1747286543.jpg
inflating: input/train_images/587829607.jpg
inflating: input/train_images/287967918.jpg
inflating: input/train_images/1315703561.jpg
inflating: input/train_images/3438538629.jpg
inflating: input/train_images/3848359159.jpg
inflating: input/train_images/1624580117.jpg
inflating: input/train_images/3357188995.jpg
inflating: input/train_images/4199549961.jpg
inflating: input/train_images/359920987.jpg
inflating: input/train_images/1970622259.jpg
inflating: input/train_images/43727066.jpg
inflating: input/train_images/1040876350.jpg
inflating: input/train_images/1579937444.jpg
inflating: input/train_images/1903746269.jpg
inflating: input/train_images/2306385345.jpg
inflating: input/train_images/4084470563.jpg
inflating: input/train_images/2789524005.jpg
inflating: input/train_images/897745173.jpg
inflating: input/train_images/2848007232.jpg
inflating: input/train_images/339595487.jpg
inflating: input/train_images/913436788.jpg
inflating: input/train_images/1234294272.jpg
inflating: input/train_images/1159573446.jpg
inflating: input/train_images/3203849568.jpg
inflating: input/train_images/3597887012.jpg
inflating: input/train_images/3666469133.jpg
inflating: input/train_images/2412773929.jpg
inflating: input/train_images/1625527821.jpg
inflating: input/train_images/3218262834.jpg
inflating: input/train_images/944920581.jpg
inflating: input/train_images/822120250.jpg
inflating: input/train_images/3395914437.jpg
inflating: input/train_images/2323936.jpg
inflating: input/train_images/2386253796.jpg
inflating: input/train_images/1116229616.jpg
inflating: input/train_images/365554294.jpg
inflating: input/train_images/546528869.jpg
inflating: input/train_images/1452430816.jpg
inflating: input/train_images/3645241295.jpg
inflating: input/train_images/3506364620.jpg
inflating: input/train_images/2557965026.jpg
inflating: input/train_images/3853081007.jpg
inflating: input/train_images/1658205752.jpg
inflating: input/train_images/222568522.jpg
inflating: input/train_images/3930678824.jpg
inflating: input/train_images/278959446.jpg
inflating: input/train_images/2478018732.jpg
inflating: input/train_images/1889980641.jpg
inflating: input/train_images/1807049681.jpg
inflating: input/train_images/490262929.jpg
inflating: input/train_images/1363877858.jpg
inflating: input/train_images/3829488807.jpg
inflating: input/train_images/1427193776.jpg
inflating: input/train_images/140387888.jpg
inflating: input/train_images/467965750.jpg
inflating: input/train_images/3744017930.jpg
inflating: input/train_images/1160106089.jpg
inflating: input/train_images/372067852.jpg
inflating: input/train_images/2992093777.jpg
inflating: input/train_images/9548002.jpg
inflating: input/train_images/456278844.jpg
inflating: input/train_images/1734343224.jpg
inflating: input/train_images/2020174786.jpg
inflating: input/train_images/1849667123.jpg
inflating: input/train_images/3063789590.jpg
inflating: input/train_images/1973726786.jpg
inflating: input/train_images/602912992.jpg
inflating: input/train_images/3793330900.jpg
inflating: input/train_images/1037337861.jpg
inflating: input/train_images/2427032578.jpg
inflating: input/train_images/2466948945.jpg
inflating: input/train_images/91285032.jpg
inflating: input/train_images/1480404336.jpg
inflating: input/train_images/1436106170.jpg
inflating: input/train_images/2998271425.jpg
inflating: input/train_images/3381014274.jpg
inflating: input/train_images/2240905108.jpg
inflating: input/train_images/1398249475.jpg
inflating: input/train_images/3126169965.jpg
inflating: input/train_images/3132897283.jpg
inflating: input/train_images/488165307.jpg
inflating: input/train_images/3402601721.jpg
inflating: input/train_images/4288369732.jpg
inflating: input/train_images/1437341132.jpg
inflating: input/train_images/1713705590.jpg
inflating: input/train_images/1285436512.jpg
inflating: input/train_images/2406905705.jpg
inflating: input/train_images/2067363667.jpg
inflating: input/train_images/2333065966.jpg
inflating: input/train_images/4060346776.jpg
inflating: input/train_images/3664478040.jpg
inflating: input/train_images/3643749616.jpg
inflating: input/train_images/2819697442.jpg
inflating: input/train_images/2142361151.jpg
inflating: input/train_images/3438550828.jpg
inflating: input/train_images/99645916.jpg
inflating: input/train_images/710791285.jpg
inflating: input/train_images/4192503187.jpg
inflating: input/train_images/2652514070.jpg
inflating: input/train_images/1416607824.jpg
inflating: input/train_images/1679630111.jpg
inflating: input/train_images/851594010.jpg
inflating: input/train_images/2085823072.jpg
inflating: input/train_images/2313023027.jpg
inflating: input/train_images/1894712729.jpg
inflating: input/train_images/3637416250.jpg
inflating: input/train_images/972959733.jpg
inflating: input/train_images/1532691436.jpg
inflating: input/train_images/2865082132.jpg
inflating: input/train_images/489343827.jpg
inflating: input/train_images/3805200386.jpg
inflating: input/train_images/2154740842.jpg
inflating: input/train_images/2038191789.jpg
inflating: input/train_images/454771505.jpg
inflating: input/train_images/1238597078.jpg
inflating: input/train_images/675325967.jpg
inflating: input/train_images/1198530356.jpg
inflating: input/train_images/1417446398.jpg
inflating: input/train_images/3987800643.jpg
inflating: input/train_images/1147036458.jpg
inflating: input/train_images/873205488.jpg
inflating: input/train_images/3237097482.jpg
inflating: input/train_images/1490907378.jpg
inflating: input/train_images/2811772992.jpg
inflating: input/train_images/906773049.jpg
inflating: input/train_images/243732361.jpg
inflating: input/train_images/1480331459.jpg
inflating: input/train_images/2911274899.jpg
inflating: input/train_images/1550668897.jpg
inflating: input/train_images/3458185025.jpg
inflating: input/train_images/1202852313.jpg
inflating: input/train_images/1896975057.jpg
inflating: input/train_images/2090381650.jpg
inflating: input/train_images/3171724304.jpg
inflating: input/train_images/4243068188.jpg
inflating: input/train_images/977709534.jpg
inflating: input/train_images/622186063.jpg
inflating: input/train_images/1840189288.jpg
inflating: input/train_images/3311389928.jpg
inflating: input/train_images/2450978537.jpg
inflating: input/train_images/3066421395.jpg
inflating: input/train_images/3468766688.jpg
inflating: input/train_images/686316627.jpg
inflating: input/train_images/1500571663.jpg
inflating: input/train_images/2398447662.jpg
inflating: input/train_images/1926938373.jpg
inflating: input/train_images/3735941988.jpg
inflating: input/train_images/914090622.jpg
inflating: input/train_images/2223402762.jpg
inflating: input/train_images/1461335818.jpg
inflating: input/train_images/1950825631.jpg
inflating: input/train_images/1459667255.jpg
inflating: input/train_images/4183847559.jpg
inflating: input/train_images/193958679.jpg
inflating: input/train_images/3416552356.jpg
inflating: input/train_images/3477094015.jpg
inflating: input/train_images/683136495.jpg
inflating: input/train_images/4168975711.jpg
inflating: input/train_images/952303505.jpg
inflating: input/train_images/4173960352.jpg
inflating: input/train_images/2891633468.jpg
inflating: input/train_images/3532319788.jpg
inflating: input/train_images/305323841.jpg
inflating: input/train_images/4041427827.jpg
inflating: input/train_images/2266653294.jpg
inflating: input/train_images/1198438913.jpg
inflating: input/train_images/2287369071.jpg
inflating: input/train_images/1154479394.jpg
inflating: input/train_images/384265557.jpg
inflating: input/train_images/833082515.jpg
inflating: input/train_images/1454074626.jpg
inflating: input/train_images/3313811273.jpg
inflating: input/train_images/2857145632.jpg
inflating: input/train_images/3686283354.jpg
inflating: input/train_images/3376933694.jpg
inflating: input/train_images/1216982810.jpg
inflating: input/train_images/1060543398.jpg
inflating: input/train_images/3775506887.jpg
inflating: input/train_images/3481431708.jpg
inflating: input/train_images/3750069048.jpg
inflating: input/train_images/1468414187.jpg
inflating: input/train_images/2846400209.jpg
inflating: input/train_images/1143548479.jpg
inflating: input/train_images/3734410646.jpg
inflating: input/train_images/4230970262.jpg
inflating: input/train_images/357924077.jpg
inflating: input/train_images/1690581200.jpg
inflating: input/train_images/1099450301.jpg
inflating: input/train_images/3634822711.jpg
inflating: input/train_images/2646206273.jpg
inflating: input/train_images/1951284903.jpg
inflating: input/train_images/2129589240.jpg
inflating: input/train_images/3468802020.jpg
inflating: input/train_images/3173958806.jpg
inflating: input/train_images/1367013120.jpg
inflating: input/train_images/1184986094.jpg
inflating: input/train_images/3938349285.jpg
inflating: input/train_images/3751724682.jpg
inflating: input/train_images/186788126.jpg
inflating: input/train_images/645203852.jpg
inflating: input/train_images/440065824.jpg
inflating: input/train_images/2241165356.jpg
inflating: input/train_images/2740797987.jpg
inflating: input/train_images/4282727554.jpg
inflating: input/train_images/704485270.jpg
inflating: input/train_images/5511383.jpg
inflating: input/train_images/2673087220.jpg
inflating: input/train_images/281173602.jpg
inflating: input/train_images/2260521441.jpg
inflating: input/train_images/1284343377.jpg
inflating: input/train_images/3231064987.jpg
inflating: input/train_images/427529521.jpg
inflating: input/train_images/543499151.jpg
inflating: input/train_images/3667636603.jpg
inflating: input/train_images/1304015285.jpg
inflating: input/train_images/751463025.jpg
inflating: input/train_images/3562133791.jpg
inflating: input/train_images/2019819944.jpg
inflating: input/train_images/3770389028.jpg
inflating: input/train_images/2436369739.jpg
inflating: input/train_images/296813400.jpg
inflating: input/train_images/1520590067.jpg
inflating: input/train_images/3483612153.jpg
inflating: input/train_images/1929151368.jpg
inflating: input/train_images/860926146.jpg
inflating: input/train_images/2762258439.jpg
inflating: input/train_images/1100677588.jpg
inflating: input/train_images/2682587481.jpg
inflating: input/train_images/2753152635.jpg
inflating: input/train_images/3877043596.jpg
inflating: input/train_images/3694251978.jpg
inflating: input/train_images/3916184464.jpg
inflating: input/train_images/2534822734.jpg
inflating: input/train_images/3479436986.jpg
inflating: input/train_images/2828604531.jpg
inflating: input/train_images/3635783291.jpg
inflating: input/train_images/108944327.jpg
inflating: input/train_images/2669158036.jpg
inflating: input/train_images/972419188.jpg
inflating: input/train_images/4109440762.jpg
inflating: input/train_images/3199430859.jpg
inflating: input/train_images/1742921296.jpg
inflating: input/train_images/466394178.jpg
inflating: input/train_images/3171101040.jpg
inflating: input/train_images/1315280013.jpg
inflating: input/train_images/208823391.jpg
inflating: input/train_images/1035014017.jpg
inflating: input/train_images/3481763546.jpg
inflating: input/train_images/3892421662.jpg
inflating: input/train_images/454246300.jpg
inflating: input/train_images/278030958.jpg
inflating: input/train_images/4211828884.jpg
inflating: input/train_images/883429890.jpg
inflating: input/train_images/1590211674.jpg
inflating: input/train_images/932643551.jpg
inflating: input/train_images/2787710566.jpg
inflating: input/train_images/3456541997.jpg
inflating: input/train_images/4173133461.jpg
inflating: input/train_images/2869773815.jpg
inflating: input/train_images/1913067109.jpg
inflating: input/train_images/1352630748.jpg
inflating: input/train_images/391259058.jpg
inflating: input/train_images/1882080819.jpg
inflating: input/train_images/4098383673.jpg
inflating: input/train_images/2740053023.jpg
inflating: input/train_images/4034547314.jpg
inflating: input/train_images/33354579.jpg
inflating: input/train_images/1256672551.jpg
inflating: input/train_images/1277358374.jpg
inflating: input/train_images/4012105605.jpg
inflating: input/train_images/1889645110.jpg
inflating: input/train_images/1965467529.jpg
inflating: input/train_images/1693341190.jpg
inflating: input/train_images/148206427.jpg
inflating: input/train_images/3299285879.jpg
inflating: input/train_images/1797773732.jpg
inflating: input/train_images/3413647138.jpg
inflating: input/train_images/346540685.jpg
inflating: input/train_images/1851497737.jpg
inflating: input/train_images/1637819435.jpg
inflating: input/train_images/3756317968.jpg
inflating: input/train_images/1019366633.jpg
inflating: input/train_images/1280298978.jpg
inflating: input/train_images/1301053588.jpg
inflating: input/train_images/1290024042.jpg
inflating: input/train_images/2753540167.jpg
inflating: input/train_images/144778428.jpg
inflating: input/train_images/1005200906.jpg
inflating: input/train_images/219949624.jpg
inflating: input/train_images/64732457.jpg
inflating: input/train_images/1254294690.jpg
inflating: input/train_images/3779645691.jpg
inflating: input/train_images/118428727.jpg
inflating: input/train_images/441501298.jpg
inflating: input/train_images/2868197011.jpg
inflating: input/train_images/4111582298.jpg
inflating: input/train_images/3345325515.jpg
inflating: input/train_images/2115118532.jpg
inflating: input/train_images/2490398752.jpg
inflating: input/train_images/3671009734.jpg
inflating: input/train_images/2511355520.jpg
inflating: input/train_images/1875040493.jpg
inflating: input/train_images/838734529.jpg
inflating: input/train_images/1191019000.jpg
inflating: input/train_images/2561181055.jpg
inflating: input/train_images/1291552662.jpg
inflating: input/train_images/2747276655.jpg
inflating: input/train_images/3765800503.jpg
inflating: input/train_images/1386339986.jpg
inflating: input/train_images/3388533190.jpg
inflating: input/train_images/4146743223.jpg
inflating: input/train_images/1797305477.jpg
inflating: input/train_images/4030129065.jpg
inflating: input/train_images/4000198689.jpg
inflating: input/train_images/699330995.jpg
inflating: input/train_images/3769308751.jpg
inflating: input/train_images/4001352115.jpg
inflating: input/train_images/3795376798.jpg
inflating: input/train_images/3644516431.jpg
inflating: input/train_images/3368561907.jpg
inflating: input/train_images/2515485322.jpg
inflating: input/train_images/2350967286.jpg
inflating: input/train_images/966306135.jpg
inflating: input/train_images/3784570453.jpg
inflating: input/train_images/2174425139.jpg
inflating: input/train_images/739320335.jpg
inflating: input/train_images/2663655952.jpg
inflating: input/train_images/2477858047.jpg
inflating: input/train_images/1165503053.jpg
inflating: input/train_images/255319476.jpg
inflating: input/train_images/4209232605.jpg
inflating: input/train_images/709663348.jpg
inflating: input/train_images/1061287420.jpg
inflating: input/train_images/3772745803.jpg
inflating: input/train_images/3193568902.jpg
inflating: input/train_images/2680198267.jpg
inflating: input/train_images/1539957701.jpg
inflating: input/train_images/1666732950.jpg
inflating: input/train_images/554118057.jpg
inflating: input/train_images/3792024125.jpg
inflating: input/train_images/2499785298.jpg
inflating: input/train_images/2166936841.jpg
inflating: input/train_images/2816524205.jpg
inflating: input/train_images/2668114223.jpg
inflating: input/train_images/3498150363.jpg
inflating: input/train_images/2922394453.jpg
inflating: input/train_images/873637313.jpg
inflating: input/train_images/1834269542.jpg
inflating: input/train_images/1383027450.jpg
inflating: input/train_images/3349562972.jpg
inflating: input/train_images/2544185952.jpg
inflating: input/train_images/137474940.jpg
inflating: input/train_images/1998622752.jpg
inflating: input/train_images/3035140922.jpg
inflating: input/train_images/1182679440.jpg
inflating: input/train_images/1910208633.jpg
inflating: input/train_images/3082722756.jpg
inflating: input/train_images/811732946.jpg
inflating: input/train_images/3101248126.jpg
inflating: input/train_images/4096774182.jpg
inflating: input/train_images/1694570941.jpg
inflating: input/train_images/569842713.jpg
inflating: input/train_images/3275448678.jpg
inflating: input/train_images/708802382.jpg
inflating: input/train_images/1712961040.jpg
inflating: input/train_images/2269945499.jpg
inflating: input/train_images/3061935906.jpg
inflating: input/train_images/319553064.jpg
inflating: input/train_images/123780103.jpg
inflating: input/train_images/3265165430.jpg
inflating: input/train_images/2202131582.jpg
inflating: input/train_images/2537957417.jpg
inflating: input/train_images/278460610.jpg
inflating: input/train_images/1966999074.jpg
inflating: input/train_images/471647021.jpg
inflating: input/train_images/3533779400.jpg
inflating: input/train_images/2322601993.jpg
inflating: input/train_images/1442310291.jpg
inflating: input/train_images/296929664.jpg
inflating: input/train_images/3120171714.jpg
inflating: input/train_images/1352386013.jpg
inflating: input/train_images/423272178.jpg
inflating: input/train_images/1637055528.jpg
inflating: input/train_images/3806787164.jpg
inflating: input/train_images/1520672258.jpg
inflating: input/train_images/574000840.jpg
inflating: input/train_images/1655898941.jpg
inflating: input/train_images/84787134.jpg
inflating: input/train_images/1048581072.jpg
inflating: input/train_images/4064456640.jpg
inflating: input/train_images/2352335642.jpg
inflating: input/train_images/3127022829.jpg
inflating: input/train_images/1127286868.jpg
inflating: input/train_images/2339596137.jpg
inflating: input/train_images/348338717.jpg
inflating: input/train_images/4183926297.jpg
inflating: input/train_images/805519444.jpg
inflating: input/train_images/595787428.jpg
inflating: input/train_images/1835317288.jpg
inflating: input/train_images/3471469388.jpg
inflating: input/train_images/3391958379.jpg
inflating: input/train_images/3303796327.jpg
inflating: input/train_images/1948078849.jpg
inflating: input/train_images/764628702.jpg
inflating: input/train_images/2808447158.jpg
inflating: input/train_images/2900494721.jpg
inflating: input/train_images/25169920.jpg
inflating: input/train_images/285610834.jpg
inflating: input/train_images/2344567219.jpg
inflating: input/train_images/632357010.jpg
inflating: input/train_images/2705839002.jpg
inflating: input/train_images/2620141958.jpg
inflating: input/train_images/1077647851.jpg
inflating: input/train_images/2815272486.jpg
inflating: input/train_images/3271104064.jpg
inflating: input/train_images/3966618744.jpg
inflating: input/train_images/3453138674.jpg
inflating: input/train_images/2794995428.jpg
inflating: input/train_images/3330498442.jpg
inflating: input/train_images/1411799307.jpg
inflating: input/train_images/3795291687.jpg
inflating: input/train_images/1620003976.jpg
inflating: input/train_images/3670288988.jpg
inflating: input/train_images/1656709823.jpg
inflating: input/train_images/728084630.jpg
inflating: input/train_images/2904271426.jpg
inflating: input/train_images/2779488398.jpg
inflating: input/train_images/2915436940.jpg
inflating: input/train_images/2065453840.jpg
inflating: input/train_images/3635315825.jpg
inflating: input/train_images/3303116849.jpg
inflating: input/train_images/68141141.jpg
inflating: input/train_images/333968590.jpg
inflating: input/train_images/247720048.jpg
inflating: input/train_images/3484015955.jpg
inflating: input/train_images/1719831117.jpg
inflating: input/train_images/2310639649.jpg
inflating: input/train_images/2859716069.jpg
inflating: input/train_images/1447330096.jpg
inflating: input/train_images/347680492.jpg
inflating: input/train_images/2143249620.jpg
inflating: input/train_images/2556134164.jpg
inflating: input/train_images/2838277856.jpg
inflating: input/train_images/2256445671.jpg
inflating: input/train_images/2270423003.jpg
inflating: input/train_images/3762101778.jpg
inflating: input/train_images/2382793101.jpg
inflating: input/train_images/2984109922.jpg
inflating: input/train_images/2829084377.jpg
inflating: input/train_images/3168669941.jpg
inflating: input/train_images/194265941.jpg
inflating: input/train_images/89450181.jpg
inflating: input/train_images/194223136.jpg
inflating: input/train_images/2339196209.jpg
inflating: input/train_images/387670192.jpg
inflating: input/train_images/3739341345.jpg
inflating: input/train_images/2908562046.jpg
inflating: input/train_images/3731008076.jpg
inflating: input/train_images/2140295414.jpg
inflating: input/train_images/3862819448.jpg
inflating: input/train_images/1637773657.jpg
inflating: input/train_images/370235390.jpg
inflating: input/train_images/1342351547.jpg
inflating: input/train_images/756993867.jpg
inflating: input/train_images/343786903.jpg
inflating: input/train_images/3956407201.jpg
inflating: input/train_images/3213310104.jpg
inflating: input/train_images/4024436293.jpg
inflating: input/train_images/3637314147.jpg
inflating: input/train_images/2078711421.jpg
inflating: input/train_images/265078969.jpg
inflating: input/train_images/653790605.jpg
inflating: input/train_images/2127776934.jpg
inflating: input/train_images/715597792.jpg
inflating: input/train_images/2505719500.jpg
inflating: input/train_images/1589860418.jpg
inflating: input/train_images/2510221236.jpg
inflating: input/train_images/2088351120.jpg
inflating: input/train_images/2462343820.jpg
inflating: input/train_images/1903634061.jpg
inflating: input/train_images/3504608499.jpg
inflating: input/train_images/569345273.jpg
inflating: input/train_images/2820596794.jpg
inflating: input/train_images/1195706216.jpg
inflating: input/train_images/2066754199.jpg
inflating: input/train_images/3105818725.jpg
inflating: input/train_images/3856210949.jpg
inflating: input/train_images/519080705.jpg
inflating: input/train_images/3533209606.jpg
inflating: input/train_images/1344397550.jpg
inflating: input/train_images/2377060677.jpg
inflating: input/train_images/4177958446.jpg
inflating: input/train_images/3589688329.jpg
inflating: input/train_images/2496275543.jpg
inflating: input/train_images/2858707596.jpg
inflating: input/train_images/3644947505.jpg
inflating: input/train_images/3953331047.jpg
inflating: input/train_images/2036966083.jpg
inflating: input/train_images/2710925690.jpg
inflating: input/train_images/4061646597.jpg
inflating: input/train_images/4102442898.jpg
inflating: input/train_images/779647083.jpg
inflating: input/train_images/1166973570.jpg
inflating: input/train_images/119785295.jpg
inflating: input/train_images/793206643.jpg
inflating: input/train_images/4049425598.jpg
inflating: input/train_images/473806024.jpg
inflating: input/train_images/3871053042.jpg
inflating: input/train_images/4256882986.jpg
inflating: input/train_images/1618933538.jpg
inflating: input/train_images/3927374351.jpg
inflating: input/train_images/3510536964.jpg
inflating: input/train_images/549591801.jpg
inflating: input/train_images/1840408344.jpg
inflating: input/train_images/1017890227.jpg
inflating: input/train_images/2220363674.jpg
inflating: input/train_images/748035819.jpg
inflating: input/train_images/3135277056.jpg
inflating: input/train_images/847288162.jpg
inflating: input/train_images/263932822.jpg
inflating: input/train_images/1562342410.jpg
inflating: input/train_images/1485335587.jpg
inflating: input/train_images/1236840149.jpg
inflating: input/train_images/3213974381.jpg
inflating: input/train_images/1393783706.jpg
inflating: input/train_images/2993551637.jpg
inflating: input/train_images/4267819868.jpg
inflating: input/train_images/260241921.jpg
inflating: input/train_images/2457404086.jpg
inflating: input/train_images/2202750916.jpg
inflating: input/train_images/3817099479.jpg
inflating: input/train_images/1462467319.jpg
inflating: input/train_images/4137525990.jpg
inflating: input/train_images/2826533790.jpg
inflating: input/train_images/2324073630.jpg
inflating: input/train_images/762372916.jpg
inflating: input/train_images/2448969787.jpg
inflating: input/train_images/2944420316.jpg
inflating: input/train_images/1150800852.jpg
inflating: input/train_images/2873763227.jpg
inflating: input/train_images/807322814.jpg
inflating: input/train_images/3120878969.jpg
inflating: input/train_images/3044677012.jpg
inflating: input/train_images/3774494718.jpg
inflating: input/train_images/4092211517.jpg
inflating: input/train_images/3346966351.jpg
inflating: input/train_images/1434474334.jpg
inflating: input/train_images/1923898414.jpg
inflating: input/train_images/3845401101.jpg
inflating: input/train_images/4248003612.jpg
inflating: input/train_images/4182987199.jpg
inflating: input/train_images/900587637.jpg
inflating: input/train_images/768948562.jpg
inflating: input/train_images/4091571824.jpg
inflating: input/train_images/266070647.jpg
inflating: input/train_images/3689465564.jpg
inflating: input/train_images/88427493.jpg
inflating: input/train_images/1009845426.jpg
inflating: input/train_images/3344007463.jpg
inflating: input/train_images/1774675464.jpg
inflating: input/train_images/2049291369.jpg
inflating: input/train_images/16033842.jpg
inflating: input/train_images/2318257216.jpg
inflating: input/train_images/3315504598.jpg
inflating: input/train_images/844432443.jpg
inflating: input/train_images/617446207.jpg
inflating: input/train_images/2053738846.jpg
inflating: input/train_images/266548434.jpg
inflating: input/train_images/3604608395.jpg
inflating: input/train_images/1145076372.jpg
inflating: input/train_images/2051265559.jpg
inflating: input/train_images/3198115498.jpg
inflating: input/train_images/936942979.jpg
inflating: input/train_images/56813625.jpg
inflating: input/train_images/3881936041.jpg
inflating: input/train_images/470266437.jpg
inflating: input/train_images/1788934648.jpg
inflating: input/train_images/1776704317.jpg
inflating: input/train_images/2312874930.jpg
inflating: input/train_images/2716413618.jpg
inflating: input/train_images/1773757795.jpg
inflating: input/train_images/2731692092.jpg
inflating: input/train_images/1028446573.jpg
inflating: input/train_images/4127820008.jpg
inflating: input/train_images/3369247230.jpg
inflating: input/train_images/2933959901.jpg
inflating: input/train_images/1951683874.jpg
inflating: input/train_images/2035009601.jpg
inflating: input/train_images/469514672.jpg
inflating: input/train_images/1097202105.jpg
inflating: input/train_images/1919668160.jpg
inflating: input/train_images/999068805.jpg
inflating: input/train_images/2803131903.jpg
inflating: input/train_images/4283277874.jpg
inflating: input/train_images/3278834779.jpg
inflating: input/train_images/3491458662.jpg
inflating: input/train_images/4152969433.jpg
inflating: input/train_images/262767997.jpg
inflating: input/train_images/2865479509.jpg
inflating: input/train_images/1284398426.jpg
inflating: input/train_images/2927616574.jpg
inflating: input/train_images/3515249327.jpg
inflating: input/train_images/2289892824.jpg
inflating: input/train_images/788479472.jpg
inflating: input/train_images/2092002351.jpg
inflating: input/train_images/104769172.jpg
inflating: input/train_images/3174632328.jpg
inflating: input/train_images/2928426348.jpg
inflating: input/train_images/522854384.jpg
inflating: input/train_images/2588915537.jpg
inflating: input/train_images/3750697954.jpg
inflating: input/train_images/3324977995.jpg
inflating: input/train_images/456963939.jpg
inflating: input/train_images/1120933008.jpg
inflating: input/train_images/1497437475.jpg
inflating: input/train_images/3911564125.jpg
inflating: input/train_images/2454175142.jpg
inflating: input/train_images/1870627097.jpg
inflating: input/train_images/2538957278.jpg
inflating: input/train_images/3676798253.jpg
inflating: input/train_images/444416814.jpg
inflating: input/train_images/4052560823.jpg
inflating: input/train_images/3020362387.jpg
inflating: input/train_images/16980960.jpg
inflating: input/train_images/1761893158.jpg
inflating: input/train_images/2638158691.jpg
inflating: input/train_images/1391150010.jpg
inflating: input/train_images/544530143.jpg
inflating: input/train_images/881681381.jpg
inflating: input/train_images/3610667873.jpg
inflating: input/train_images/437673562.jpg
inflating: input/train_images/4282010677.jpg
  inflating: input/train_images/... (per-file unzip listing omitted; remaining train_images/*.jpg files extracted)
inflating: input/train_images/1492594056.jpg
inflating: input/train_images/190449795.jpg
inflating: input/train_images/2218023332.jpg
inflating: input/train_images/323873580.jpg
inflating: input/train_images/871966628.jpg
inflating: input/train_images/511932063.jpg
inflating: input/train_images/3896158732.jpg
inflating: input/train_images/915715866.jpg
inflating: input/train_images/82533757.jpg
inflating: input/train_images/2884824828.jpg
inflating: input/train_images/319910228.jpg
inflating: input/train_images/2940017595.jpg
inflating: input/train_images/1592129841.jpg
inflating: input/train_images/3107644192.jpg
inflating: input/train_images/3698178527.jpg
inflating: input/train_images/83337985.jpg
inflating: input/train_images/532255691.jpg
inflating: input/train_images/1715814415.jpg
inflating: input/train_images/3917412702.jpg
inflating: input/train_images/1648724139.jpg
inflating: input/train_images/2323728288.jpg
inflating: input/train_images/1430539919.jpg
inflating: input/train_images/4282408832.jpg
inflating: input/train_images/4293661491.jpg
inflating: input/train_images/2864427141.jpg
inflating: input/train_images/1379079003.jpg
inflating: input/train_images/3660194933.jpg
inflating: input/train_images/249927375.jpg
inflating: input/train_images/3219471796.jpg
inflating: input/train_images/1834266408.jpg
inflating: input/train_images/2016669057.jpg
inflating: input/train_images/507004978.jpg
inflating: input/train_images/571189248.jpg
inflating: input/train_images/952146173.jpg
inflating: input/train_images/873526870.jpg
inflating: input/train_images/2240458370.jpg
inflating: input/train_images/2575222166.jpg
inflating: input/train_images/1065833532.jpg
inflating: input/train_images/3704493951.jpg
inflating: input/train_images/131507385.jpg
inflating: input/train_images/111358933.jpg
inflating: input/train_images/3758253395.jpg
inflating: input/train_images/2475812200.jpg
inflating: input/train_images/3235584529.jpg
inflating: input/train_images/2178075893.jpg
inflating: input/train_images/3675828725.jpg
inflating: input/train_images/2337524208.jpg
inflating: input/train_images/2024172583.jpg
inflating: input/train_images/2326914865.jpg
inflating: input/train_images/2941452708.jpg
inflating: input/train_images/408414905.jpg
inflating: input/train_images/1043184548.jpg
inflating: input/train_images/4101194273.jpg
inflating: input/train_images/919597577.jpg
inflating: input/train_images/654992578.jpg
inflating: input/train_images/1775343418.jpg
inflating: input/train_images/1472183727.jpg
inflating: input/train_images/2559116486.jpg
inflating: input/train_images/241148727.jpg
inflating: input/train_images/3304643014.jpg
inflating: input/train_images/1981041140.jpg
inflating: input/train_images/3907936185.jpg
inflating: input/train_images/3251562752.jpg
inflating: input/train_images/1208145531.jpg
inflating: input/train_images/3899552692.jpg
inflating: input/train_images/876666484.jpg
inflating: input/train_images/211225277.jpg
inflating: input/train_images/920401054.jpg
inflating: input/train_images/1131959133.jpg
inflating: input/train_images/1138006821.jpg
inflating: input/train_images/2468963984.jpg
inflating: input/train_images/860785504.jpg
inflating: input/train_images/55799003.jpg
inflating: input/train_images/2873516336.jpg
inflating: input/train_images/381393296.jpg
inflating: input/train_images/4223217189.jpg
inflating: input/train_images/2814433150.jpg
inflating: input/train_images/2177675284.jpg
inflating: input/train_images/2975448123.jpg
inflating: input/train_images/3519335178.jpg
inflating: input/train_images/4082420465.jpg
inflating: input/train_images/1882919886.jpg
inflating: input/train_images/4207293267.jpg
inflating: input/train_images/2115648947.jpg
inflating: input/train_images/1589109993.jpg
inflating: input/train_images/907691648.jpg
inflating: input/train_images/4136626919.jpg
inflating: input/train_images/761160675.jpg
inflating: input/train_images/9312065.jpg
inflating: input/train_images/3085973890.jpg
inflating: input/train_images/1541714876.jpg
inflating: input/train_images/3188953817.jpg
inflating: input/train_images/3240792628.jpg
inflating: input/train_images/4253799258.jpg
inflating: input/train_images/2494865945.jpg
inflating: input/train_images/696538469.jpg
inflating: input/train_images/3489269448.jpg
inflating: input/train_images/497685909.jpg
inflating: input/train_images/1154259077.jpg
inflating: input/train_images/1491670235.jpg
inflating: input/train_images/3563392216.jpg
inflating: input/train_images/3623375685.jpg
inflating: input/train_images/745566741.jpg
inflating: input/train_images/411955232.jpg
inflating: input/train_images/2098699727.jpg
inflating: input/train_images/2462747672.jpg
inflating: input/train_images/1169677118.jpg
inflating: input/train_images/775786945.jpg
inflating: input/train_images/3180664408.jpg
inflating: input/train_images/4078601864.jpg
inflating: input/train_images/4170892667.jpg
inflating: input/train_images/1226193662.jpg
inflating: input/train_images/2742114843.jpg
inflating: input/train_images/490760030.jpg
inflating: input/train_images/2002346677.jpg
inflating: input/train_images/2089853591.jpg
inflating: input/train_images/3092716255.jpg
inflating: input/train_images/3113190178.jpg
inflating: input/train_images/719526260.jpg
inflating: input/train_images/808180923.jpg
inflating: input/train_images/740762568.jpg
inflating: input/train_images/3080481359.jpg
inflating: input/train_images/3287692788.jpg
inflating: input/train_images/3208609885.jpg
inflating: input/train_images/1558118745.jpg
inflating: input/train_images/944726140.jpg
inflating: input/train_images/3964066128.jpg
inflating: input/train_images/1753872657.jpg
inflating: input/train_images/513986084.jpg
inflating: input/train_images/891426683.jpg
inflating: input/train_images/1270368553.jpg
inflating: input/train_images/9454129.jpg
inflating: input/train_images/1129878051.jpg
inflating: input/train_images/1060644080.jpg
inflating: input/train_images/3408858113.jpg
inflating: input/train_images/581179733.jpg
inflating: input/train_images/2847223266.jpg
inflating: input/train_images/2529150821.jpg
inflating: input/train_images/2105063058.jpg
inflating: input/train_images/2182518914.jpg
inflating: input/train_images/3376371946.jpg
inflating: input/train_images/2437201100.jpg
inflating: input/train_images/2951126410.jpg
inflating: input/train_images/615415014.jpg
inflating: input/train_images/3541075880.jpg
inflating: input/train_images/3609260930.jpg
inflating: input/train_images/1348606741.jpg
inflating: input/train_images/2287869401.jpg
inflating: input/train_images/3115057364.jpg
inflating: input/train_images/738338306.jpg
inflating: input/train_images/1903992787.jpg
inflating: input/train_images/462402577.jpg
inflating: input/train_images/1129666944.jpg
inflating: input/train_images/693164586.jpg
inflating: input/train_images/3840637397.jpg
inflating: input/train_images/880178968.jpg
inflating: input/train_images/3977938536.jpg
inflating: input/train_images/3531650713.jpg
inflating: input/train_images/3257711542.jpg
inflating: input/train_images/3714119135.jpg
inflating: input/train_images/3027691323.jpg
inflating: input/train_images/2585045883.jpg
inflating: input/train_images/3117219248.jpg
inflating: input/train_images/2837141717.jpg
inflating: input/train_images/1001723730.jpg
inflating: input/train_images/696867083.jpg
inflating: input/train_images/1522208575.jpg
inflating: input/train_images/2270358342.jpg
inflating: input/train_images/2078942776.jpg
inflating: input/train_images/3147511199.jpg
inflating: input/train_images/3818759549.jpg
inflating: input/train_images/3316969906.jpg
inflating: input/train_images/2333207631.jpg
inflating: input/train_images/1968421706.jpg
inflating: input/train_images/1752948058.jpg
inflating: input/train_images/832440144.jpg
inflating: input/train_images/4024391744.jpg
inflating: input/train_images/4048156987.jpg
inflating: input/train_images/4276465485.jpg
inflating: input/train_images/2618036565.jpg
inflating: input/train_images/1767778795.jpg
inflating: input/train_images/2200762237.jpg
inflating: input/train_images/3331347285.jpg
inflating: input/train_images/323586160.jpg
inflating: input/train_images/3440246067.jpg
inflating: input/train_images/3083613226.jpg
inflating: input/train_images/2748659636.jpg
inflating: input/train_images/4111265654.jpg
inflating: input/train_images/3354624529.jpg
inflating: input/train_images/1986919607.jpg
inflating: input/train_images/742898185.jpg
inflating: input/train_images/2384551148.jpg
inflating: input/train_images/2251153057.jpg
inflating: input/train_images/1860324672.jpg
inflating: input/train_images/1676052292.jpg
inflating: input/train_images/3670039640.jpg
inflating: input/train_images/1177074840.jpg
inflating: input/train_images/3951364046.jpg
inflating: input/train_images/186667196.jpg
inflating: input/train_images/3341713020.jpg
inflating: input/train_images/3486225470.jpg
inflating: input/train_images/4098341362.jpg
inflating: input/train_images/3250253495.jpg
inflating: input/train_images/3958986545.jpg
inflating: input/train_images/1101317234.jpg
inflating: input/train_images/2143264851.jpg
inflating: input/train_images/4130203885.jpg
inflating: input/train_images/2061733689.jpg
inflating: input/train_images/2021948804.jpg
inflating: input/train_images/2150406389.jpg
inflating: input/train_images/1178519877.jpg
inflating: input/train_images/4225133358.jpg
inflating: input/train_images/723564013.jpg
inflating: input/train_images/3208851813.jpg
inflating: input/train_images/3150477025.jpg
inflating: input/train_images/3300885184.jpg
inflating: input/train_images/231005253.jpg
inflating: input/train_images/1023837322.jpg
inflating: input/train_images/1727150436.jpg
inflating: input/train_images/2563788715.jpg
inflating: input/train_images/102039365.jpg
inflating: input/train_images/4179147529.jpg
inflating: input/train_images/2203981379.jpg
inflating: input/train_images/2021244568.jpg
inflating: input/train_images/2489350383.jpg
inflating: input/train_images/2385423168.jpg
inflating: input/train_images/4211138249.jpg
inflating: input/train_images/1635544822.jpg
inflating: input/train_images/302898400.jpg
inflating: input/train_images/736834551.jpg
inflating: input/train_images/1643552654.jpg
inflating: input/train_images/3110035366.jpg
inflating: input/train_images/1595577438.jpg
inflating: input/train_images/1674922822.jpg
inflating: input/train_images/1688478980.jpg
inflating: input/train_images/6103.jpg
inflating: input/train_images/825551560.jpg
inflating: input/train_images/582179912.jpg
inflating: input/train_images/1575013487.jpg
inflating: input/train_images/1017006970.jpg
inflating: input/train_images/1398572814.jpg
inflating: input/train_images/3442867405.jpg
inflating: input/train_images/1442249007.jpg
inflating: input/train_images/135834998.jpg
inflating: input/train_images/3903538298.jpg
inflating: input/train_images/2877008433.jpg
inflating: input/train_images/2222831550.jpg
inflating: input/train_images/3125050696.jpg
inflating: input/train_images/336299725.jpg
inflating: input/train_images/3435885572.jpg
inflating: input/train_images/1575521049.jpg
inflating: input/train_images/2403083568.jpg
inflating: input/train_images/2371237551.jpg
inflating: input/train_images/189585547.jpg
inflating: input/train_images/3323965689.jpg
inflating: input/train_images/1741967088.jpg
inflating: input/train_images/1494726462.jpg
inflating: input/train_images/3793827107.jpg
inflating: input/train_images/2242503873.jpg
inflating: input/train_images/29130164.jpg
inflating: input/train_images/2194916526.jpg
inflating: input/train_images/814185128.jpg
inflating: input/train_images/4006579451.jpg
inflating: input/train_images/3924061539.jpg
inflating: input/train_images/4192202317.jpg
inflating: input/train_images/2917486619.jpg
inflating: input/train_images/3368457880.jpg
inflating: input/train_images/830772376.jpg
inflating: input/train_images/3784391347.jpg
inflating: input/train_images/3548679387.jpg
inflating: input/train_images/3701689199.jpg
inflating: input/train_images/543312121.jpg
inflating: input/train_images/3096059384.jpg
inflating: input/train_images/607840807.jpg
inflating: input/train_images/610094717.jpg
inflating: input/train_images/1422694007.jpg
inflating: input/train_images/993366541.jpg
inflating: input/train_images/1657763940.jpg
inflating: input/train_images/2019941140.jpg
inflating: input/train_images/3743464955.jpg
inflating: input/train_images/12688038.jpg
inflating: input/train_images/3623190050.jpg
inflating: input/train_images/3170957509.jpg
inflating: input/train_images/3791562105.jpg
inflating: input/train_images/1271525915.jpg
inflating: input/train_images/2649484487.jpg
inflating: input/train_images/4221848010.jpg
inflating: input/train_images/2058959882.jpg
inflating: input/train_images/4046068592.jpg
inflating: input/train_images/3644657668.jpg
inflating: input/train_images/2055261864.jpg
inflating: input/train_images/2443428424.jpg
inflating: input/train_images/1653535676.jpg
inflating: input/train_images/744972013.jpg
inflating: input/train_images/3068359463.jpg
inflating: input/train_images/3664934784.jpg
inflating: input/train_images/2156883044.jpg
inflating: input/train_images/3292555378.jpg
inflating: input/train_images/1176894803.jpg
inflating: input/train_images/1291065002.jpg
inflating: input/train_images/2236610037.jpg
inflating: input/train_images/2051048628.jpg
inflating: input/train_images/1059213340.jpg
inflating: input/train_images/972622677.jpg
inflating: input/train_images/491673903.jpg
inflating: input/train_images/4131457294.jpg
inflating: input/train_images/3333666077.jpg
inflating: input/train_images/2734186624.jpg
inflating: input/train_images/3403835665.jpg
inflating: input/train_images/3598395242.jpg
inflating: input/train_images/357823942.jpg
inflating: input/train_images/2437048705.jpg
inflating: input/train_images/2432364538.jpg
inflating: input/train_images/3304581364.jpg
inflating: input/train_images/1564583746.jpg
inflating: input/train_images/1923179851.jpg
inflating: input/train_images/1098441542.jpg
inflating: input/train_images/2844800744.jpg
inflating: input/train_images/3576795584.jpg
inflating: input/train_images/605816056.jpg
inflating: input/train_images/3632711020.jpg
inflating: input/train_images/3523363514.jpg
inflating: input/train_images/3613817114.jpg
inflating: input/train_images/3365264596.jpg
inflating: input/train_images/931943521.jpg
inflating: input/train_images/3277182366.jpg
inflating: input/train_images/2588469356.jpg
inflating: input/train_images/4172480899.jpg
inflating: input/train_images/2888640560.jpg
inflating: input/train_images/920229727.jpg
inflating: input/train_images/4121566836.jpg
inflating: input/train_images/1946046925.jpg
inflating: input/train_images/2578576273.jpg
inflating: input/train_images/1687852669.jpg
inflating: input/train_images/1408639438.jpg
inflating: input/train_images/1851047251.jpg
inflating: input/train_images/4177391802.jpg
inflating: input/train_images/1544151022.jpg
inflating: input/train_images/157014347.jpg
inflating: input/train_images/2462876995.jpg
inflating: input/train_images/572515769.jpg
inflating: input/train_images/3942244753.jpg
inflating: input/train_images/4250885951.jpg
inflating: input/train_images/1040079282.jpg
inflating: input/train_images/1679233615.jpg
inflating: input/train_images/4054246073.jpg
inflating: input/train_images/1633652647.jpg
inflating: input/train_images/3454772304.jpg
inflating: input/train_images/3384300864.jpg
inflating: input/train_images/3728053314.jpg
inflating: input/train_images/144301620.jpg
inflating: input/train_images/4213525466.jpg
inflating: input/train_images/1189155349.jpg
inflating: input/train_images/3902366551.jpg
inflating: input/train_images/3406720343.jpg
inflating: input/train_images/1925077855.jpg
inflating: input/train_images/3604705495.jpg
inflating: input/train_images/1700921498.jpg
inflating: input/train_images/3175679586.jpg
inflating: input/train_images/3921328805.jpg
inflating: input/train_images/1803764964.jpg
inflating: input/train_images/1486873366.jpg
inflating: input/train_images/3356619304.jpg
inflating: input/train_images/2990902039.jpg
inflating: input/train_images/1092772509.jpg
inflating: input/train_images/1750580369.jpg
inflating: input/train_images/3640609321.jpg
inflating: input/train_images/747770020.jpg
inflating: input/train_images/3702249794.jpg
inflating: input/train_images/4283521063.jpg
inflating: input/train_images/2721303722.jpg
inflating: input/train_images/1278246781.jpg
inflating: input/train_images/1698429014.jpg
inflating: input/train_images/4044329664.jpg
inflating: input/train_images/897925605.jpg
inflating: input/train_images/3991445025.jpg
inflating: input/train_images/1350898322.jpg
inflating: input/train_images/627068686.jpg
inflating: input/train_images/4206288054.jpg
inflating: input/train_images/3959033347.jpg
inflating: input/train_images/2549684718.jpg
inflating: input/train_images/3943363815.jpg
inflating: input/train_images/3305068232.jpg
inflating: input/train_images/3912538989.jpg
inflating: input/train_images/1768185229.jpg
inflating: input/train_images/652624585.jpg
inflating: input/train_images/149791608.jpg
inflating: input/train_images/1694314252.jpg
inflating: input/train_images/324248837.jpg
inflating: input/train_images/551716959.jpg
inflating: input/train_images/2837753620.jpg
inflating: input/train_images/1710828726.jpg
inflating: input/train_images/3672364441.jpg
inflating: input/train_images/1031500522.jpg
inflating: input/train_images/1162809006.jpg
inflating: input/train_images/1290189931.jpg
inflating: input/train_images/2657104946.jpg
inflating: input/train_images/2702330830.jpg
inflating: input/train_images/3986218390.jpg
inflating: input/train_images/2321458.jpg
inflating: input/train_images/790568397.jpg
inflating: input/train_images/2725369010.jpg
inflating: input/train_images/3248325951.jpg
inflating: input/train_images/442743258.jpg
inflating: input/train_images/2176153898.jpg
inflating: input/train_images/1420352773.jpg
inflating: input/train_images/553195372.jpg
inflating: input/train_images/2212089862.jpg
inflating: input/train_images/163947288.jpg
inflating: input/train_images/561647799.jpg
inflating: input/train_images/1049791378.jpg
inflating: input/train_images/2292197107.jpg
inflating: input/train_images/700664783.jpg
inflating: input/train_images/3943338687.jpg
inflating: input/train_images/1900666535.jpg
inflating: input/train_images/338729354.jpg
inflating: input/train_images/3504889993.jpg
inflating: input/train_images/2260738205.jpg
inflating: input/train_images/3223282866.jpg
inflating: input/train_images/630797550.jpg
inflating: input/train_images/4286174060.jpg
inflating: input/train_images/3320071514.jpg
inflating: input/train_images/2612899548.jpg
inflating: input/train_images/2242351929.jpg
inflating: input/train_images/3761713726.jpg
inflating: input/train_images/2152051451.jpg
inflating: input/train_images/2187716233.jpg
inflating: input/train_images/2433307411.jpg
inflating: input/train_images/2833869009.jpg
inflating: input/train_images/2703773318.jpg
inflating: input/train_images/2512300453.jpg
inflating: input/train_images/550691642.jpg
inflating: input/train_images/1373499412.jpg
inflating: input/train_images/3324881806.jpg
inflating: input/train_images/1281155236.jpg
inflating: input/train_images/3957562076.jpg
inflating: input/train_images/3356800378.jpg
inflating: input/train_images/476858432.jpg
inflating: input/train_images/3561069837.jpg
inflating: input/train_images/3834284991.jpg
inflating: input/train_images/601644904.jpg
inflating: input/train_images/1740309612.jpg
inflating: input/train_images/1238821861.jpg
inflating: input/train_images/1164692375.jpg
inflating: input/train_images/798006612.jpg
inflating: input/train_images/3139351952.jpg
inflating: input/train_images/621367062.jpg
inflating: input/train_images/1787178614.jpg
inflating: input/train_images/1227161199.jpg
inflating: input/train_images/3451871229.jpg
inflating: input/train_images/36508013.jpg
inflating: input/train_images/1454850686.jpg
inflating: input/train_images/2045580929.jpg
inflating: input/train_images/1721634086.jpg
inflating: input/train_images/3779724069.jpg
inflating: input/train_images/2529358101.jpg
inflating: input/train_images/2174309008.jpg
inflating: input/train_images/3486487741.jpg
inflating: input/train_images/3816412484.jpg
inflating: input/train_images/2686776292.jpg
inflating: input/train_images/1317487445.jpg
inflating: input/train_images/549750394.jpg
inflating: input/train_images/447048984.jpg
inflating: input/train_images/2074319088.jpg
inflating: input/train_images/187343662.jpg
inflating: input/train_images/181355439.jpg
inflating: input/train_images/1556497496.jpg
inflating: input/train_images/2791918976.jpg
inflating: input/train_images/1009361983.jpg
inflating: input/train_images/3452080171.jpg
inflating: input/train_images/2727816974.jpg
inflating: input/train_images/1655615998.jpg
inflating: input/train_images/3413220258.jpg
inflating: input/train_images/3238426139.jpg
inflating: input/train_images/957437817.jpg
inflating: input/train_images/3318282640.jpg
inflating: input/train_images/1810295906.jpg
inflating: input/train_images/527923260.jpg
inflating: input/train_images/3029026599.jpg
inflating: input/train_images/2397511243.jpg
inflating: input/train_images/523467512.jpg
inflating: input/train_images/2189140444.jpg
inflating: input/train_images/2893140493.jpg
inflating: input/train_images/1515921207.jpg
inflating: input/train_images/2017329516.jpg
inflating: input/train_images/3231201573.jpg
inflating: input/train_images/123865158.jpg
inflating: input/train_images/391853550.jpg
inflating: input/train_images/1383579851.jpg
inflating: input/train_images/3792533899.jpg
inflating: input/train_images/3617067141.jpg
inflating: input/train_images/623333046.jpg
inflating: input/train_images/3559981218.jpg
inflating: input/train_images/3426035333.jpg
inflating: input/train_images/1345964218.jpg
inflating: input/train_images/1951270318.jpg
inflating: input/train_images/440896922.jpg
inflating: input/train_images/1142600361.jpg
inflating: input/train_images/1258505000.jpg
inflating: input/train_images/2452364106.jpg
inflating: input/train_images/2164681780.jpg
inflating: input/train_images/2267434559.jpg
inflating: input/train_images/954749288.jpg
inflating: input/train_images/2293810835.jpg
inflating: input/train_images/1799764071.jpg
inflating: input/train_images/519373224.jpg
inflating: input/train_images/3376905285.jpg
inflating: input/train_images/31398935.jpg
inflating: input/train_images/459566440.jpg
inflating: input/train_images/3609925731.jpg
inflating: input/train_images/3910162936.jpg
inflating: input/train_images/425235104.jpg
inflating: input/train_images/1728004537.jpg
inflating: input/train_images/823276084.jpg
inflating: input/train_images/3485849218.jpg
inflating: input/train_images/1345258060.jpg
inflating: input/train_images/2906227978.jpg
inflating: input/train_images/1533935132.jpg
inflating: input/train_images/892025744.jpg
inflating: input/train_images/2657169592.jpg
inflating: input/train_images/57149651.jpg
inflating: input/train_images/2382758285.jpg
inflating: input/train_images/2442102115.jpg
inflating: input/train_images/3194870771.jpg
inflating: input/train_images/495190803.jpg
inflating: input/train_images/3884199625.jpg
inflating: input/train_images/432371484.jpg
inflating: input/train_images/235258659.jpg
inflating: input/train_images/3705669883.jpg
inflating: input/train_images/77379532.jpg
inflating: input/train_images/3769167654.jpg
inflating: input/train_images/3672629086.jpg
inflating: input/train_images/3803823261.jpg
inflating: input/train_images/70656521.jpg
inflating: input/train_images/725027388.jpg
inflating: input/train_images/3905145362.jpg
inflating: input/train_images/69803851.jpg
inflating: input/train_images/1996862654.jpg
inflating: input/train_images/3556675221.jpg
inflating: input/train_images/7830631.jpg
inflating: input/train_images/3528647821.jpg
inflating: input/train_images/4019771530.jpg
inflating: input/train_images/3820517266.jpg
inflating: input/train_images/2003067998.jpg
inflating: input/train_images/3738618109.jpg
inflating: input/train_images/871333676.jpg
inflating: input/train_images/854773586.jpg
inflating: input/train_images/4004133979.jpg
inflating: input/train_images/3712064465.jpg
inflating: input/train_images/3384250377.jpg
inflating: input/train_images/2091940865.jpg
inflating: input/train_images/1435182915.jpg
inflating: input/train_images/4263534897.jpg
inflating: input/train_images/2094357697.jpg
inflating: input/train_images/3212523761.jpg
inflating: input/train_images/4057359447.jpg
inflating: input/train_images/353518711.jpg
inflating: input/train_images/1745296302.jpg
inflating: input/train_images/2668524110.jpg
inflating: input/train_images/534784883.jpg
inflating: input/train_images/3758838047.jpg
inflating: input/train_images/284270936.jpg
inflating: input/train_images/2079104179.jpg
inflating: input/train_images/1206025945.jpg
inflating: input/train_images/240469234.jpg
inflating: input/train_images/207893947.jpg
inflating: input/train_images/1433061380.jpg
inflating: input/train_images/3567260259.jpg
inflating: input/train_images/724302384.jpg
inflating: input/train_images/1823721668.jpg
inflating: input/train_images/3207910782.jpg
inflating: input/train_images/999329392.jpg
inflating: input/train_images/563348054.jpg
inflating: input/train_images/414363375.jpg
inflating: input/train_images/2154249381.jpg
inflating: input/train_images/2854519294.jpg
inflating: input/train_images/3305821160.jpg
inflating: input/train_images/3645419540.jpg
inflating: input/train_images/2365265934.jpg
inflating: input/train_images/3341706471.jpg
inflating: input/train_images/2565456267.jpg
inflating: input/train_images/2241394681.jpg
inflating: input/train_images/2756642189.jpg
inflating: input/train_images/128594927.jpg
inflating: input/train_images/2650949618.jpg
inflating: input/train_images/1831917433.jpg
inflating: input/train_images/473823142.jpg
inflating: input/train_images/2173229407.jpg
inflating: input/train_images/2698282165.jpg
inflating: input/train_images/3542768898.jpg
inflating: input/train_images/1926981545.jpg
inflating: input/train_images/1208216776.jpg
inflating: input/train_images/4147991798.jpg
inflating: input/train_images/2750295170.jpg
inflating: input/train_images/3713964400.jpg
inflating: input/train_images/3924602971.jpg
inflating: input/train_images/2051629821.jpg
inflating: input/train_images/3406705095.jpg
inflating: input/train_images/1157584839.jpg
inflating: input/train_images/1966061216.jpg
inflating: input/train_images/1938730022.jpg
inflating: input/train_images/2252678193.jpg
inflating: input/train_images/2666373738.jpg
inflating: input/train_images/3151577248.jpg
inflating: input/train_images/1854790191.jpg
inflating: input/train_images/4170665280.jpg
inflating: input/train_images/326839479.jpg
inflating: input/train_images/3500204258.jpg
inflating: input/train_images/1335214678.jpg
inflating: input/train_images/2045675044.jpg
inflating: input/train_images/53615554.jpg
inflating: input/train_images/2569083558.jpg
inflating: input/train_images/745442190.jpg
inflating: input/train_images/3579980941.jpg
inflating: input/train_images/2103659900.jpg
inflating: input/train_images/304408360.jpg
inflating: input/train_images/3410015128.jpg
inflating: input/train_images/3495009786.jpg
inflating: input/train_images/178079419.jpg
inflating: input/train_images/4156956690.jpg
inflating: input/train_images/691285209.jpg
inflating: input/train_images/3190980772.jpg
inflating: input/train_images/2946977526.jpg
inflating: input/train_images/2095602074.jpg
inflating: input/train_images/3298994120.jpg
inflating: input/train_images/2116662197.jpg
inflating: input/train_images/4202441426.jpg
inflating: input/train_images/824321777.jpg
inflating: input/train_images/3140362576.jpg
inflating: input/train_images/2207440318.jpg
inflating: input/train_images/226557134.jpg
inflating: input/train_images/2484530081.jpg
inflating: input/train_images/1983454737.jpg
inflating: input/train_images/3653076658.jpg
inflating: input/train_images/4183866544.jpg
inflating: input/train_images/2178767483.jpg
inflating: input/train_images/2138403170.jpg
inflating: input/train_images/898386629.jpg
inflating: input/train_images/239724736.jpg
inflating: input/train_images/4135889078.jpg
inflating: input/train_images/1218546888.jpg
inflating: input/train_images/3443846811.jpg
inflating: input/train_images/3365267863.jpg
inflating: input/train_images/1510579448.jpg
inflating: input/train_images/4146003606.jpg
inflating: input/train_images/494596361.jpg
inflating: input/train_images/2484602563.jpg
inflating: input/train_images/1815972967.jpg
inflating: input/train_images/3576823132.jpg
inflating: input/train_images/2620251115.jpg
inflating: input/train_images/1053009170.jpg
inflating: input/train_images/2610411314.jpg
inflating: input/train_images/1542373427.jpg
inflating: input/train_images/512575634.jpg
inflating: input/train_images/3306255309.jpg
inflating: input/train_images/2220309469.jpg
inflating: input/train_images/2273730548.jpg
inflating: input/train_images/535314273.jpg
inflating: input/train_images/2707336069.jpg
inflating: input/train_images/379548308.jpg
inflating: input/train_images/2189576451.jpg
inflating: input/train_images/557774617.jpg
inflating: input/train_images/2757539730.jpg
inflating: input/train_images/2976035478.jpg
inflating: input/train_images/3647297248.jpg
inflating: input/train_images/544346867.jpg
inflating: input/train_images/1220118716.jpg
inflating: input/train_images/3815135505.jpg
inflating: input/train_images/1775001723.jpg
inflating: input/train_images/396499078.jpg
inflating: input/train_images/2663749229.jpg
inflating: input/train_images/309569120.jpg
inflating: input/train_images/4102729978.jpg
inflating: input/train_images/2018165733.jpg
inflating: input/train_images/1643576810.jpg
inflating: input/train_images/1291178007.jpg
inflating: input/train_images/3957819631.jpg
inflating: input/train_images/3389583925.jpg
inflating: input/train_images/2653588387.jpg
inflating: input/train_images/2384181550.jpg
inflating: input/train_images/2594282701.jpg
inflating: input/train_images/4288418406.jpg
inflating: input/train_images/1355462312.jpg
inflating: input/train_images/314599917.jpg
inflating: input/train_images/2628555515.jpg
inflating: input/train_images/1332855741.jpg
inflating: input/train_images/4221104214.jpg
inflating: input/train_images/1446871248.jpg
inflating: input/train_images/2083878392.jpg
inflating: input/train_images/3511671285.jpg
inflating: input/train_images/651123589.jpg
inflating: input/train_images/2421566220.jpg
inflating: input/train_images/368553798.jpg
inflating: input/train_images/1974079097.jpg
inflating: input/train_images/412390858.jpg
inflating: input/train_images/3917185101.jpg
inflating: input/train_images/2225817818.jpg
inflating: input/train_images/3608578044.jpg
inflating: input/train_images/3854923530.jpg
inflating: input/train_images/2795551857.jpg
inflating: input/train_images/1733354827.jpg
inflating: input/train_images/376932028.jpg
inflating: input/train_images/3203412332.jpg
inflating: input/train_images/815702740.jpg
inflating: input/train_images/1680657766.jpg
inflating: input/train_images/4127132722.jpg
inflating: input/train_images/2082251851.jpg
inflating: input/train_images/3442647074.jpg
inflating: input/train_images/3684209409.jpg
inflating: input/train_images/147604859.jpg
inflating: input/train_images/1873100849.jpg
inflating: input/train_images/3814035701.jpg
inflating: input/train_images/3869030734.jpg
inflating: input/train_images/1567877420.jpg
inflating: input/train_images/619658845.jpg
inflating: input/train_images/2378984273.jpg
inflating: input/train_images/2214228898.jpg
inflating: input/train_images/3205016772.jpg
inflating: input/train_images/494558399.jpg
inflating: input/train_images/573039129.jpg
inflating: input/train_images/2710440557.jpg
inflating: input/train_images/3192657039.jpg
inflating: input/train_images/3570121717.jpg
inflating: input/train_images/1898437154.jpg
inflating: input/train_images/1302078468.jpg
inflating: input/train_images/1483743890.jpg
inflating: input/train_images/657424989.jpg
inflating: input/train_images/159031113.jpg
inflating: input/train_images/3652093161.jpg
inflating: input/train_images/1998363687.jpg
inflating: input/train_images/1968596086.jpg
inflating: input/train_images/1009322597.jpg
inflating: input/train_images/1972409682.jpg
inflating: input/train_images/4089218356.jpg
inflating: input/train_images/2600500593.jpg
inflating: input/train_images/1960063101.jpg
inflating: input/train_images/2016371750.jpg
inflating: input/train_images/1063110047.jpg
inflating: input/train_images/3662018109.jpg
inflating: input/train_images/2514350663.jpg
inflating: input/train_images/3219439906.jpg
inflating: input/train_images/700464495.jpg
inflating: input/train_images/3192692419.jpg
inflating: input/train_images/3719058586.jpg
inflating: input/train_images/1194042617.jpg
inflating: input/train_images/3747440776.jpg
inflating: input/train_images/2895326283.jpg
inflating: input/train_images/265181391.jpg
inflating: input/train_images/3856769685.jpg
inflating: input/train_images/4009888434.jpg
inflating: input/train_images/1512350296.jpg
inflating: input/train_images/641590483.jpg
inflating: input/train_images/2514394316.jpg
inflating: input/train_images/1631620098.jpg
inflating: input/train_images/2907723622.jpg
inflating: input/train_images/1314553833.jpg
inflating: input/train_images/3566226674.jpg
inflating: input/train_images/1077138637.jpg
inflating: input/train_images/1083021605.jpg
inflating: input/train_images/1150608601.jpg
inflating: input/train_images/2713091649.jpg
inflating: input/train_images/2797611751.jpg
inflating: input/train_images/1525852725.jpg
inflating: input/train_images/3658178204.jpg
inflating: input/train_images/117226420.jpg
inflating: input/train_images/441408374.jpg
inflating: input/train_images/910617288.jpg
inflating: input/train_images/744676370.jpg
inflating: input/train_images/2462241027.jpg
inflating: input/train_images/4060346540.jpg
inflating: input/train_images/2292154193.jpg
inflating: input/train_images/2457737762.jpg
inflating: input/train_images/1262742218.jpg
inflating: input/train_images/972840038.jpg
inflating: input/train_images/631563709.jpg
inflating: input/train_images/3914082089.jpg
inflating: input/train_images/3934216826.jpg
inflating: input/train_images/1131545521.jpg
inflating: input/train_images/3988748153.jpg
inflating: input/train_images/3633505917.jpg
inflating: input/train_images/207761661.jpg
inflating: input/train_images/2086061590.jpg
inflating: input/train_images/2272721188.jpg
inflating: input/train_images/2148834762.jpg
inflating: input/train_images/3317706044.jpg
inflating: input/train_images/3115055353.jpg
inflating: input/train_images/2995689898.jpg
inflating: input/train_images/2875351329.jpg
inflating: input/train_images/3586867158.jpg
inflating: input/train_images/1906831443.jpg
inflating: input/train_images/568949064.jpg
inflating: input/train_images/2742350070.jpg
inflating: input/train_images/2878480561.jpg
inflating: input/train_images/4186680625.jpg
inflating: input/train_images/2056154613.jpg
inflating: input/train_images/3435254116.jpg
inflating: input/train_images/3484566225.jpg
inflating: input/train_images/3120725353.jpg
inflating: input/train_images/3192892221.jpg
inflating: input/train_images/1670109260.jpg
inflating: input/train_images/3589849323.jpg
inflating: input/train_images/813060428.jpg
inflating: input/train_images/1033403106.jpg
inflating: input/train_images/3078123533.jpg
inflating: input/train_images/2800642069.jpg
inflating: input/train_images/1705724767.jpg
inflating: input/train_images/817366566.jpg
inflating: input/train_images/2876605372.jpg
inflating: input/train_images/3211443011.jpg
inflating: input/train_images/2810787386.jpg
inflating: input/train_images/3027251380.jpg
inflating: input/train_images/1545341197.jpg
inflating: input/train_images/2648991795.jpg
inflating: input/train_images/3302044032.jpg
inflating: input/train_images/2754537497.jpg
inflating: input/train_images/3945332868.jpg
inflating: input/train_images/3058243587.jpg
inflating: input/train_images/2856404486.jpg
inflating: input/train_images/2059048377.jpg
inflating: input/train_images/681211585.jpg
inflating: input/train_images/4290607578.jpg
inflating: input/train_images/1454944435.jpg
inflating: input/train_images/1649231975.jpg
inflating: input/train_images/4290883718.jpg
inflating: input/train_images/1844847808.jpg
inflating: input/train_images/107466550.jpg
inflating: input/train_images/248540513.jpg
inflating: input/train_images/1744118582.jpg
inflating: input/train_images/3247553763.jpg
inflating: input/train_images/2241778439.jpg
inflating: input/train_images/3675829567.jpg
inflating: input/train_images/1664825517.jpg
inflating: input/train_images/1040828572.jpg
inflating: input/train_images/705928796.jpg
inflating: input/train_images/2807875042.jpg
inflating: input/train_images/2198414004.jpg
inflating: input/train_images/1595866872.jpg
inflating: input/train_images/181157076.jpg
inflating: input/train_images/3523670880.jpg
inflating: input/train_images/1300599354.jpg
inflating: input/train_images/3731510435.jpg
inflating: input/train_images/1956034762.jpg
inflating: input/train_images/3088191583.jpg
inflating: input/train_images/3052959460.jpg
inflating: input/train_images/2330534234.jpg
inflating: input/train_images/4188219605.jpg
inflating: input/train_images/2275525608.jpg
inflating: input/train_images/2577068353.jpg
inflating: input/train_images/673931298.jpg
inflating: input/train_images/1482562306.jpg
inflating: input/train_images/2943495695.jpg
inflating: input/train_images/2863474549.jpg
inflating: input/train_images/1903950320.jpg
inflating: input/train_images/2095589836.jpg
inflating: input/train_images/4027944552.jpg
inflating: input/train_images/2852147190.jpg
inflating: input/train_images/4151340541.jpg
inflating: input/train_images/2665470851.jpg
inflating: input/train_images/3023855128.jpg
inflating: input/train_images/268372107.jpg
inflating: input/train_images/1997488168.jpg
inflating: input/train_images/3289230042.jpg
inflating: input/train_images/3389713573.jpg
inflating: input/train_images/2417374571.jpg
inflating: input/train_images/2274065917.jpg
inflating: input/train_images/794895924.jpg
inflating: input/train_images/1667727245.jpg
inflating: input/train_images/4034495674.jpg
inflating: input/train_images/2486687383.jpg
inflating: input/train_images/3413149507.jpg
inflating: input/train_images/1249688756.jpg
inflating: input/train_images/3643428044.jpg
inflating: input/train_images/3217927386.jpg
inflating: input/train_images/1946306561.jpg
inflating: input/train_images/3223218079.jpg
inflating: input/train_images/2408821742.jpg
inflating: input/train_images/390092040.jpg
inflating: input/train_images/431535307.jpg
inflating: input/train_images/2394770217.jpg
inflating: input/train_images/391364557.jpg
inflating: input/train_images/2840988158.jpg
inflating: input/train_images/4000584857.jpg
inflating: input/train_images/1397439068.jpg
inflating: input/train_images/2832723542.jpg
inflating: input/train_images/1904646699.jpg
inflating: input/train_images/1343777320.jpg
inflating: input/train_images/3850927712.jpg
inflating: input/train_images/582352615.jpg
inflating: input/train_images/2130379306.jpg
inflating: input/train_images/3316874782.jpg
inflating: input/train_images/2043115778.jpg
inflating: input/train_images/1544234863.jpg
inflating: input/train_images/700971789.jpg
inflating: input/train_images/1442268656.jpg
inflating: input/train_images/3083319765.jpg
inflating: input/train_images/2179057801.jpg
inflating: input/train_images/1442929249.jpg
inflating: input/train_images/2098716407.jpg
inflating: input/train_images/1910478563.jpg
inflating: input/train_images/3638291703.jpg
inflating: input/train_images/3211907814.jpg
inflating: input/train_images/61492497.jpg
inflating: input/train_images/1814763610.jpg
inflating: input/train_images/131746797.jpg
inflating: input/train_images/3656998236.jpg
inflating: input/train_images/2213150889.jpg
inflating: input/train_images/2291300428.jpg
inflating: input/train_images/2736444451.jpg
inflating: input/train_images/1681857148.jpg
inflating: input/train_images/2519147193.jpg
inflating: input/train_images/1185369128.jpg
inflating: input/train_images/4018307313.jpg
inflating: input/train_images/1987438093.jpg
inflating: input/train_images/2141260400.jpg
inflating: input/train_images/506080526.jpg
inflating: input/train_images/3939103661.jpg
inflating: input/train_images/3806878671.jpg
inflating: input/train_images/925945591.jpg
inflating: input/train_images/3490253996.jpg
inflating: input/train_images/2916268128.jpg
inflating: input/train_images/2466684931.jpg
inflating: input/train_images/1348221044.jpg
inflating: input/train_images/630798966.jpg
inflating: input/train_images/1834263937.jpg
inflating: input/train_images/1247742333.jpg
inflating: input/train_images/1279535323.jpg
inflating: input/train_images/760863006.jpg
inflating: input/train_images/1687514090.jpg
inflating: input/train_images/3141049473.jpg
inflating: input/train_images/52672633.jpg
inflating: input/train_images/4046910331.jpg
inflating: input/train_images/3375224217.jpg
inflating: input/train_images/3668854435.jpg
inflating: input/train_images/518719429.jpg
inflating: input/train_images/2861545981.jpg
inflating: input/train_images/3883954527.jpg
inflating: input/train_images/4225259568.jpg
inflating: input/train_images/2967863031.jpg
inflating: input/train_images/550742055.jpg
inflating: input/train_images/1703538353.jpg
inflating: input/train_images/717532306.jpg
inflating: input/train_images/3232846318.jpg
inflating: input/train_images/3863840044.jpg
inflating: input/train_images/3829295052.jpg
inflating: input/train_images/1756754615.jpg
inflating: input/train_images/2203540560.jpg
inflating: input/train_images/2694971765.jpg
inflating: input/train_images/3153618395.jpg
inflating: input/train_images/2869465595.jpg
inflating: input/train_images/2573573471.jpg
inflating: input/train_images/2498112851.jpg
inflating: input/train_images/1092232758.jpg
inflating: input/train_images/2240579189.jpg
inflating: input/train_images/996539252.jpg
inflating: input/train_images/1999438235.jpg
inflating: input/train_images/2188616846.jpg
inflating: input/train_images/4022980210.jpg
inflating: input/train_images/1991345445.jpg
inflating: input/train_images/3182540417.jpg
inflating: input/train_images/216687816.jpg
inflating: input/train_images/1988989222.jpg
inflating: input/train_images/1645330578.jpg
inflating: input/train_images/4011808410.jpg
inflating: input/train_images/1075249116.jpg
inflating: input/train_images/2224409184.jpg
inflating: input/train_images/1357774593.jpg
inflating: input/train_images/842973014.jpg
inflating: input/train_images/1699816187.jpg
inflating: input/train_images/3989761123.jpg
inflating: input/train_images/125088495.jpg
inflating: input/train_images/1050774063.jpg
inflating: input/train_images/3876777651.jpg
inflating: input/train_images/334728567.jpg
inflating: input/train_images/4274049119.jpg
inflating: input/train_images/3929446886.jpg
inflating: input/train_images/807698612.jpg
inflating: input/train_images/174581727.jpg
inflating: input/train_images/1613918546.jpg
inflating: input/train_images/382849332.jpg
inflating: input/train_images/687753518.jpg
inflating: input/train_images/3880993061.jpg
inflating: input/train_images/775831061.jpg
inflating: input/train_images/3852521.jpg
inflating: input/train_images/4160459669.jpg
inflating: input/train_images/857983591.jpg
inflating: input/train_images/3646458335.jpg
inflating: input/train_images/2458353661.jpg
inflating: input/train_images/3345535790.jpg
inflating: input/train_images/2823421970.jpg
inflating: input/train_images/319927039.jpg
inflating: input/train_images/725034194.jpg
inflating: input/train_images/3088061893.jpg
inflating: input/train_images/1615166642.jpg
inflating: input/train_images/1133741600.jpg
inflating: input/train_images/224402404.jpg
inflating: input/train_images/3738228775.jpg
inflating: input/train_images/3779096121.jpg
inflating: input/train_images/504065374.jpg
inflating: input/train_images/3094956073.jpg
inflating: input/train_images/1552938679.jpg
inflating: input/train_images/1421166262.jpg
inflating: input/train_images/3964576371.jpg
inflating: input/train_images/3057285715.jpg
inflating: input/train_images/832729024.jpg
inflating: input/train_images/30053778.jpg
inflating: input/train_images/327801627.jpg
inflating: input/train_images/208435588.jpg
inflating: input/train_images/471570254.jpg
inflating: input/train_images/1813365799.jpg
inflating: input/train_images/1633620342.jpg
inflating: input/train_images/3692116146.jpg
inflating: input/train_images/2378730150.jpg
inflating: input/train_images/1836597085.jpg
inflating: input/train_images/235479847.jpg
inflating: input/train_images/66720017.jpg
inflating: input/train_images/873720819.jpg
inflating: input/train_images/2753076252.jpg
inflating: input/train_images/483774797.jpg
inflating: input/train_images/392932902.jpg
inflating: input/train_images/2821260437.jpg
inflating: input/train_images/2708567457.jpg
inflating: input/train_images/2101603304.jpg
inflating: input/train_images/3615279906.jpg
inflating: input/train_images/257033092.jpg
inflating: input/train_images/2679988285.jpg
inflating: input/train_images/2238082787.jpg
inflating: input/train_images/817893600.jpg
inflating: input/train_images/3378163527.jpg
inflating: input/train_images/651331224.jpg
inflating: input/train_images/832048490.jpg
inflating: input/train_images/2416353814.jpg
inflating: input/train_images/1527255990.jpg
inflating: input/train_images/910870110.jpg
inflating: input/train_images/1201098987.jpg
inflating: input/train_images/180819743.jpg
inflating: input/train_images/1620448232.jpg
inflating: input/train_images/1568139691.jpg
inflating: input/train_images/31203445.jpg
inflating: input/train_images/4102201143.jpg
inflating: input/train_images/3075515456.jpg
inflating: input/train_images/3582059053.jpg
inflating: input/train_images/315067949.jpg
inflating: input/train_images/1902246685.jpg
inflating: input/train_images/2047483616.jpg
inflating: input/train_images/651791438.jpg
inflating: input/train_images/3945098769.jpg
inflating: input/train_images/1745794549.jpg
inflating: input/train_images/3616264908.jpg
inflating: input/train_images/1913393149.jpg
inflating: input/train_images/573324267.jpg
inflating: input/train_images/2170099924.jpg
inflating: input/train_images/298252775.jpg
inflating: input/train_images/142510693.jpg
inflating: input/train_images/1389564003.jpg
inflating: input/train_images/230268037.jpg
inflating: input/train_images/3916490828.jpg
inflating: input/train_images/997973414.jpg
inflating: input/train_images/2997330474.jpg
inflating: input/train_images/1670353820.jpg
inflating: input/train_images/2573993718.jpg
inflating: input/train_images/2834607422.jpg
inflating: input/train_images/3793586140.jpg
inflating: input/train_images/2510072136.jpg
inflating: input/train_images/2371937835.jpg
inflating: input/train_images/3578968000.jpg
inflating: input/train_images/1048372814.jpg
inflating: input/train_images/1070630875.jpg
inflating: input/train_images/4145051602.jpg
inflating: input/train_images/2806021040.jpg
inflating: input/train_images/1172621803.jpg
inflating: input/train_images/3755850035.jpg
inflating: input/train_images/1115331699.jpg
inflating: input/train_images/1591211160.jpg
inflating: input/train_images/2536929613.jpg
inflating: input/train_images/1973608314.jpg
inflating: input/train_images/1590856402.jpg
inflating: input/train_images/1313437821.jpg
inflating: input/train_images/1874752223.jpg
inflating: input/train_images/407043133.jpg
inflating: input/train_images/3880530404.jpg
inflating: input/train_images/670606517.jpg
inflating: input/train_images/4188579631.jpg
inflating: input/train_images/2846698529.jpg
inflating: input/train_images/818188540.jpg
inflating: input/train_images/2494580860.jpg
inflating: input/train_images/2468056909.jpg
inflating: input/train_images/1362172635.jpg
inflating: input/train_images/3379826408.jpg
inflating: input/train_images/3370597486.jpg
inflating: input/train_images/3102370100.jpg
inflating: input/train_images/4124483418.jpg
inflating: input/train_images/3928691677.jpg
inflating: input/train_images/1797636994.jpg
inflating: input/train_images/2247014271.jpg
inflating: input/train_images/1964957251.jpg
inflating: input/train_images/2260330058.jpg
inflating: input/train_images/2272550.jpg
inflating: input/train_images/3783300400.jpg
inflating: input/train_images/1082098147.jpg
inflating: input/train_images/1286475043.jpg
inflating: input/train_images/4255275522.jpg
inflating: input/train_images/1726536780.jpg
inflating: input/train_images/1340999953.jpg
inflating: input/train_images/624872321.jpg
inflating: input/train_images/2552592093.jpg
inflating: input/train_images/1540424279.jpg
inflating: input/train_images/2456930353.jpg
inflating: input/train_images/3516555909.jpg
inflating: input/train_images/1908512219.jpg
inflating: input/train_images/2690529206.jpg
inflating: input/train_images/2803815396.jpg
inflating: input/train_images/1478209787.jpg
  inflating: input/train_images/497447638.jpg
  ... (unzip listing truncated: the remaining train_images files extract the same way)
 extracting: input/sample_submission.csv
###Markdown
install apex
###Code
if CFG['apex']:
try:
import apex
except Exception:
! git clone https://github.com/NVIDIA/apex.git
        %cd apex
!pip install --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" .
%cd ..
###Output
_____no_output_____
###Markdown
Library
###Code
# ====================================================
# Library
# ====================================================
import os
import datetime
import math
import time
import random
import glob
import shutil
from pathlib import Path
from contextlib import contextmanager
from collections import defaultdict, Counter
import scipy as sp
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sns
from sklearn import preprocessing
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold
from tqdm.auto import tqdm
from functools import partial
import cv2
from PIL import Image
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.optim import Adam, SGD
import torchvision.models as models
from torch.nn.parameter import Parameter
from torch.utils.data import DataLoader, Dataset
from torch.optim.lr_scheduler import CosineAnnealingWarmRestarts, CosineAnnealingLR, ReduceLROnPlateau
from albumentations import (
Compose, OneOf, Normalize, Resize, RandomResizedCrop, RandomCrop, HorizontalFlip, VerticalFlip,
RandomBrightness, RandomContrast, RandomBrightnessContrast, Rotate, ShiftScaleRotate, Cutout,
IAAAdditiveGaussianNoise, Transpose
)
from albumentations.pytorch import ToTensorV2
from albumentations import ImageOnlyTransform
import timm
import mlflow
import warnings
warnings.filterwarnings('ignore')
if CFG['apex']:
from apex import amp
if CFG['debug']:
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
else:
device = torch.device('cuda')
start_time = datetime.datetime.now()
start_time_str = start_time.strftime('%m%d%H%M')
###Output
_____no_output_____
###Markdown
Directory settings
###Code
# ====================================================
# Directory settings
# ====================================================
if os.path.exists(OUTPUT_DIR) and env!='kaggle':
shutil.rmtree(OUTPUT_DIR)
if not os.path.exists(OUTPUT_DIR):
os.makedirs(OUTPUT_DIR)
###Output
_____no_output_____
###Markdown
save basic files
###Code
# with open(f'{OUTPUT_DIR}/{start_time_str}_TAG.json', 'w') as f:
# json.dump(TAG, f, indent=4)
# with open(f'{OUTPUT_DIR}/{start_time_str}_CFG.json', 'w') as f:
# json.dump(CFG, f, indent=4)
import shutil
notebook_path = f'{OUTPUT_DIR}/{start_time_str}_{TITLE}.ipynb'
shutil.copy2(NOTEBOOK_PATH, notebook_path)
###Output
_____no_output_____
###Markdown
Data Loading
###Code
train = pd.read_csv(f'{DATA_PATH}/train.csv')
test = pd.read_csv(f'{DATA_PATH}/sample_submission.csv')
label_map = pd.read_json(f'{DATA_PATH}/label_num_to_disease_map.json',
orient='index')
if CFG['debug']:
train = train.sample(n=1000, random_state=CFG['seed']).reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Utils
###Code
# ====================================================
# Utils
# ====================================================
def get_score(y_true, y_pred):
return accuracy_score(y_true, y_pred)
@contextmanager
def timer(name):
t0 = time.time()
LOGGER.info(f'[{name}] start')
yield
LOGGER.info(f'[{name}] done in {time.time() - t0:.0f} s.')
def init_logger(log_file=OUTPUT_DIR+'train.log'):
from logging import getLogger, FileHandler, Formatter, StreamHandler
from logging import INFO as INFO_
logger = getLogger(__name__)
logger.setLevel(INFO_)
handler1 = StreamHandler()
handler1.setFormatter(Formatter("%(message)s"))
handler2 = FileHandler(filename=log_file)
handler2.setFormatter(Formatter("%(message)s"))
logger.addHandler(handler1)
logger.addHandler(handler2)
return logger
logger_path = OUTPUT_DIR+f'{start_time_str}_train.log'
LOGGER = init_logger(logger_path)
def seed_torch(seed=42):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
seed_torch(seed=CFG['seed'])
class EarlyStopping:
"""Early stops the training if validation loss doesn't improve after a given patience."""
def __init__(self, patience=7, verbose=False, save_path='checkpoint.pt',
counter=0, best_score=None, save_latest_path=None):
"""
Args:
patience (int): How long to wait after last time validation loss improved.
Default: 7
verbose (bool): If True, prints a message for each validation loss improvement.
Default: False
            save_path (str): File path for saving the best model.
                            Default: 'checkpoint.pt'
"""
self.patience = patience
self.verbose = verbose
self.save_path = save_path
self.counter = counter
self.best_score = best_score
self.save_latest_path = save_latest_path
self.early_stop = False
self.val_loss_min = np.Inf
def __call__(self, val_loss, model, preds, epoch):
score = -val_loss
if self.best_score is None:
self.best_score = score
self.save_checkpoint(val_loss, model, preds, epoch)
self.save_latest(val_loss, model, preds, epoch, score)
elif score >= self.best_score:
self.counter = 0
self.best_score = score
self.save_checkpoint(val_loss, model, preds, epoch)
self.save_latest(val_loss, model, preds, epoch, score)
        # stop training if the score becomes NaN
elif math.isnan(score):
self.early_stop = True
else:
self.counter += 1
if self.save_latest_path is not None:
self.save_latest(val_loss, model, preds, epoch, score)
if self.verbose:
print(f'EarlyStopping counter: {self.counter} out of {self.patience}')
if self.counter >= self.patience:
self.early_stop = True
def save_checkpoint(self, val_loss, model, preds, epoch):
'''Saves model when validation loss decrease.'''
if self.verbose:
print(f'Validation loss decreased ({self.val_loss_min:.10f} --> {val_loss:.10f}). Saving model ...')
torch.save({'model': model.state_dict(), 'preds': preds,
'epoch' : epoch, 'best_score' : self.best_score, 'counter' : self.counter},
self.save_path)
self.val_loss_min = val_loss
    def save_latest(self, val_loss, model, preds, epoch, score):
        '''Saves latest model (skipped when no save_latest_path is set).'''
        if self.save_latest_path is None:
            return
        torch.save({'model': model.state_dict(), 'preds': preds,
'epoch' : epoch, 'score' : score, 'counter' : self.counter},
self.save_latest_path)
self.val_loss_min = val_loss
def remove_glob(pathname, recursive=True):
for p in glob.glob(pathname, recursive=recursive):
if os.path.isfile(p):
os.remove(p)
###Output
_____no_output_____
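###Markdown
The `EarlyStopping` helper above doubles as a checkpointer: it writes the best-scoring weights to `save_path` and, when `save_latest_path` is given, the most recent weights as well, so an interrupted run can resume. A minimal, hedged usage sketch follows — the dummy model, the made-up loss curve, and the `_sketch_*.pt` file names are illustrative placeholders, not part of the training pipeline:
###Code
# Illustrative sketch only: a tiny model and a fabricated validation-loss curve.
_sketch_model = nn.Linear(2, 2)
_es = EarlyStopping(patience=2, verbose=True,
                    save_path='_sketch_best.pt', save_latest_path='_sketch_latest.pt')
for _epoch, _val_loss in enumerate([0.9, 0.7, 0.8, 0.85, 0.9]):
    _es(_val_loss, _sketch_model, preds=None, epoch=_epoch) # saves best/latest checkpoints
    if _es.early_stop:
        print(f'stopped early at epoch {_epoch}')
        break
###Output
_____no_output_____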
###Markdown
CV split
###Code
folds = train.copy()
Fold = StratifiedKFold(n_splits=CFG['n_fold'], shuffle=True, random_state=CFG['seed'])
for n, (train_index, val_index) in enumerate(Fold.split(folds, folds[CFG['target_col']])):
folds.loc[val_index, 'fold'] = int(n)
folds['fold'] = folds['fold'].astype(int)
print(folds.groupby(['fold', CFG['target_col']]).size())
###Output
fold label
0 0 218
1 438
2 477
3 2631
4 516
1 0 218
1 438
2 477
3 2631
4 516
2 0 217
1 438
2 477
3 2632
4 515
3 0 217
1 438
2 477
3 2632
4 515
4 0 217
1 437
2 478
3 2632
4 515
dtype: int64
###Markdown
Dataset
###Code
# ====================================================
# Dataset
# ====================================================
class TrainDataset(Dataset):
def __init__(self, df, transform=None):
self.df = df
self.file_names = df['image_id'].values
self.labels = df['label'].values
self.transform = transform
def __len__(self):
return len(self.df)
def __getitem__(self, idx):
file_name = self.file_names[idx]
file_path = f'{DATA_PATH}/train_images/{file_name}'
image = cv2.imread(file_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
if self.transform:
augmented = self.transform(image=image)
image = augmented['image']
label = torch.tensor(self.labels[idx]).long()
return image, label
class TestDataset(Dataset):
def __init__(self, df, transform=None):
self.df = df
self.file_names = df['image_id'].values
self.transform = transform
def __len__(self):
return len(self.df)
def __getitem__(self, idx):
file_name = self.file_names[idx]
file_path = f'{DATA_PATH}/test_images/{file_name}'
image = cv2.imread(file_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
if self.transform:
augmented = self.transform(image=image)
image = augmented['image']
return image
# train_dataset = TrainDataset(train, transform=None)
# for i in range(1):
# image, label = train_dataset[i]
# plt.imshow(image)
# plt.title(f'label: {label}')
# plt.show()
###Output
_____no_output_____
###Markdown
Transforms
###Code
def _get_augmentations(aug_list):
process = []
for aug in aug_list:
if aug == 'Resize':
process.append(Resize(CFG['size'], CFG['size']))
elif aug == 'RandomResizedCrop':
process.append(RandomResizedCrop(CFG['size'], CFG['size']))
elif aug == 'Transpose':
process.append(Transpose(p=0.5))
elif aug == 'HorizontalFlip':
process.append(HorizontalFlip(p=0.5))
elif aug == 'VerticalFlip':
process.append(VerticalFlip(p=0.5))
elif aug == 'ShiftScaleRotate':
process.append(ShiftScaleRotate(p=0.5))
elif aug == 'Cutout':
process.append(Cutout(max_h_size=CFG['CutoutSize'], max_w_size=CFG['CutoutSize'], p=0.5))
elif aug == 'Normalize':
process.append(Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225],
))
else:
            raise ValueError(f'{aug} is not a supported augmentation')
process.append(ToTensorV2())
return process
# ====================================================
# Transforms
# ====================================================
def get_transforms(*, data):
if data == 'train':
return Compose(
_get_augmentations(TAG['augmentation'])
)
elif data == 'valid':
return Compose(
_get_augmentations(['Resize', 'Normalize'])
)
train_dataset = TrainDataset(train, transform=get_transforms(data='train'))
for i in range(1):
image, label = train_dataset[i]
plt.imshow(image[0])
plt.title(f'label: {label}')
plt.show()
###Output
_____no_output_____
###Markdown
Bi-tempered logistic loss
###Code
def log_t(u, t):
"""Compute log_t for `u'."""
if t==1.0:
return u.log()
else:
return (u.pow(1.0 - t) - 1.0) / (1.0 - t)
def exp_t(u, t):
"""Compute exp_t for `u'."""
if t==1:
return u.exp()
else:
return (1.0 + (1.0-t)*u).relu().pow(1.0 / (1.0 - t))
def compute_normalization_fixed_point(activations, t, num_iters):
"""Returns the normalization value for each example (t > 1.0).
Args:
activations: A multi-dimensional tensor with last dimension `num_classes`.
t: Temperature 2 (> 1.0 for tail heaviness).
num_iters: Number of iterations to run the method.
Return: A tensor of same shape as activation with the last dimension being 1.
"""
mu, _ = torch.max(activations, -1, keepdim=True)
normalized_activations_step_0 = activations - mu
normalized_activations = normalized_activations_step_0
for _ in range(num_iters):
logt_partition = torch.sum(
exp_t(normalized_activations, t), -1, keepdim=True)
normalized_activations = normalized_activations_step_0 * \
logt_partition.pow(1.0-t)
logt_partition = torch.sum(
exp_t(normalized_activations, t), -1, keepdim=True)
normalization_constants = - log_t(1.0 / logt_partition, t) + mu
return normalization_constants
def compute_normalization_binary_search(activations, t, num_iters):
"""Returns the normalization value for each example (t < 1.0).
Args:
activations: A multi-dimensional tensor with last dimension `num_classes`.
t: Temperature 2 (< 1.0 for finite support).
num_iters: Number of iterations to run the method.
Return: A tensor of same rank as activation with the last dimension being 1.
"""
mu, _ = torch.max(activations, -1, keepdim=True)
normalized_activations = activations - mu
effective_dim = \
torch.sum(
(normalized_activations > -1.0 / (1.0-t)).to(torch.int32),
dim=-1, keepdim=True).to(activations.dtype)
shape_partition = activations.shape[:-1] + (1,)
lower = torch.zeros(shape_partition, dtype=activations.dtype, device=activations.device)
upper = -log_t(1.0/effective_dim, t) * torch.ones_like(lower)
for _ in range(num_iters):
logt_partition = (upper + lower)/2.0
sum_probs = torch.sum(
exp_t(normalized_activations - logt_partition, t),
dim=-1, keepdim=True)
update = (sum_probs < 1.0).to(activations.dtype)
lower = torch.reshape(
lower * update + (1.0-update) * logt_partition,
shape_partition)
upper = torch.reshape(
upper * (1.0 - update) + update * logt_partition,
shape_partition)
logt_partition = (upper + lower)/2.0
return logt_partition + mu
class ComputeNormalization(torch.autograd.Function):
"""
Class implementing custom backward pass for compute_normalization. See compute_normalization.
"""
@staticmethod
def forward(ctx, activations, t, num_iters):
if t < 1.0:
normalization_constants = compute_normalization_binary_search(activations, t, num_iters)
else:
normalization_constants = compute_normalization_fixed_point(activations, t, num_iters)
ctx.save_for_backward(activations, normalization_constants)
ctx.t=t
return normalization_constants
@staticmethod
def backward(ctx, grad_output):
activations, normalization_constants = ctx.saved_tensors
t = ctx.t
normalized_activations = activations - normalization_constants
probabilities = exp_t(normalized_activations, t)
escorts = probabilities.pow(t)
escorts = escorts / escorts.sum(dim=-1, keepdim=True)
grad_input = escorts * grad_output
return grad_input, None, None
def compute_normalization(activations, t, num_iters=5):
"""Returns the normalization value for each example.
Backward pass is implemented.
Args:
activations: A multi-dimensional tensor with last dimension `num_classes`.
t: Temperature 2 (> 1.0 for tail heaviness, < 1.0 for finite support).
num_iters: Number of iterations to run the method.
Return: A tensor of same rank as activation with the last dimension being 1.
"""
return ComputeNormalization.apply(activations, t, num_iters)
def tempered_sigmoid(activations, t, num_iters = 5):
"""Tempered sigmoid function.
Args:
activations: Activations for the positive class for binary classification.
t: Temperature tensor > 0.0.
num_iters: Number of iterations to run the method.
Returns:
A probabilities tensor.
"""
internal_activations = torch.stack([activations,
torch.zeros_like(activations)],
dim=-1)
internal_probabilities = tempered_softmax(internal_activations, t, num_iters)
return internal_probabilities[..., 0]
def tempered_softmax(activations, t, num_iters=5):
"""Tempered softmax function.
Args:
activations: A multi-dimensional tensor with last dimension `num_classes`.
t: Temperature > 1.0.
num_iters: Number of iterations to run the method.
Returns:
A probabilities tensor.
"""
if t == 1.0:
return activations.softmax(dim=-1)
normalization_constants = compute_normalization(activations, t, num_iters)
return exp_t(activations - normalization_constants, t)
def bi_tempered_binary_logistic_loss(activations,
labels,
t1,
t2,
label_smoothing = 0.0,
num_iters=5,
reduction='mean'):
"""Bi-Tempered binary logistic loss.
Args:
activations: A tensor containing activations for class 1.
labels: A tensor with shape as activations, containing probabilities for class 1
t1: Temperature 1 (< 1.0 for boundedness).
t2: Temperature 2 (> 1.0 for tail heaviness, < 1.0 for finite support).
label_smoothing: Label smoothing
num_iters: Number of iterations to run the method.
Returns:
A loss tensor.
"""
internal_activations = torch.stack([activations,
torch.zeros_like(activations)],
dim=-1)
internal_labels = torch.stack([labels.to(activations.dtype),
1.0 - labels.to(activations.dtype)],
dim=-1)
return bi_tempered_logistic_loss(internal_activations,
internal_labels,
t1,
t2,
label_smoothing = label_smoothing,
num_iters = num_iters,
reduction = reduction)
def bi_tempered_logistic_loss(activations,
labels,
t1,
t2,
label_smoothing=0.0,
num_iters=5,
reduction = 'mean'):
"""Bi-Tempered Logistic Loss.
Args:
activations: A multi-dimensional tensor with last dimension `num_classes`.
labels: A tensor with shape and dtype as activations (onehot),
or a long tensor of one dimension less than activations (pytorch standard)
t1: Temperature 1 (< 1.0 for boundedness).
t2: Temperature 2 (> 1.0 for tail heaviness, < 1.0 for finite support).
label_smoothing: Label smoothing parameter between [0, 1). Default 0.0.
num_iters: Number of iterations to run the method. Default 5.
reduction: ``'none'`` | ``'mean'`` | ``'sum'``. Default ``'mean'``.
``'none'``: No reduction is applied, return shape is shape of
activations without the last dimension.
``'mean'``: Loss is averaged over minibatch. Return shape (1,)
``'sum'``: Loss is summed over minibatch. Return shape (1,)
Returns:
A loss tensor.
"""
if len(labels.shape)<len(activations.shape): #not one-hot
labels_onehot = torch.zeros_like(activations)
labels_onehot.scatter_(1, labels[..., None], 1)
else:
labels_onehot = labels
if label_smoothing > 0:
num_classes = labels_onehot.shape[-1]
labels_onehot = ( 1 - label_smoothing * num_classes / (num_classes - 1) ) \
* labels_onehot + \
label_smoothing / (num_classes - 1)
probabilities = tempered_softmax(activations, t2, num_iters)
loss_values = labels_onehot * log_t(labels_onehot + 1e-10, t1) \
- labels_onehot * log_t(probabilities, t1) \
- labels_onehot.pow(2.0 - t1) / (2.0 - t1) \
+ probabilities.pow(2.0 - t1) / (2.0 - t1)
loss_values = loss_values.sum(dim = -1) #sum over classes
if reduction == 'none':
return loss_values
if reduction == 'sum':
return loss_values.sum()
if reduction == 'mean':
return loss_values.mean()
###Output
_____no_output_____
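###Markdown
As a quick sanity check, `bi_tempered_logistic_loss` reduces to the ordinary softmax cross entropy when both temperatures are 1.0. Below is a small, hedged sketch on made-up tensors (not part of the training pipeline):
###Code
# With t1 = t2 = 1.0 the bi-tempered loss should match nn.CrossEntropyLoss
# up to floating-point tolerance; the random activations/labels are illustrative.
_acts = torch.randn(8, 5) # 8 examples, 5 classes
_labels = torch.randint(0, 5, (8,))
_bt = bi_tempered_logistic_loss(_acts, _labels, t1=1.0, t2=1.0)
_ce = nn.CrossEntropyLoss()(_acts, _labels)
print(_bt.item(), _ce.item()) # the two values agree closely
###Output
_____no_output_____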
###Markdown
MODEL
###Code
# ====================================================
# MODEL
# ====================================================
class CustomModel(nn.Module):
def __init__(self, model_name, pretrained=False):
super().__init__()
self.model = timm.create_model(model_name, pretrained=pretrained)
if hasattr(self.model, 'classifier'):
n_features = self.model.classifier.in_features
self.model.classifier = nn.Linear(n_features, CFG['target_size'])
elif hasattr(self.model, 'fc'):
n_features = self.model.fc.in_features
self.model.fc = nn.Linear(n_features, CFG['target_size'])
def forward(self, x):
x = self.model(x)
return x
model = CustomModel(model_name=TAG['model_name'], pretrained=False)
train_dataset = TrainDataset(train, transform=get_transforms(data='train'))
train_loader = DataLoader(train_dataset, batch_size=4, shuffle=True,
num_workers=4, pin_memory=True, drop_last=True)
for image, label in train_loader:
output = model(image)
print(output)
break
###Output
tensor([[0.1352, 0.2035, 0.2537, 0.2944, 0.3238],
[0.0705, 0.1487, 0.1531, 0.1135, 0.3336],
[0.1452, 0.1324, 0.2210, 0.0973, 0.3384],
[0.3189, 0.2203, 0.1436, 0.1277, 0.2788]], grad_fn=<AddmmBackward>)
###Markdown
Helper functions
###Code
# ====================================================
# Helper functions
# ====================================================
class AverageMeter(object):
"""Computes and stores the average and current value"""
def __init__(self):
self.reset()
def reset(self):
self.val = 0
self.avg = 0
self.sum = 0
self.count = 0
def update(self, val, n=1):
self.val = val
self.sum += val * n
self.count += n
self.avg = self.sum / self.count
def asMinutes(s):
m = math.floor(s / 60)
s -= m * 60
return '%dm %ds' % (m, s)
def timeSince(since, percent):
now = time.time()
s = now - since
es = s / (percent)
rs = es - s
return '%s (remain %s)' % (asMinutes(s), asMinutes(rs))
# ====================================================
# loss
# ====================================================
def get_loss(criterion, y_preds, labels):
if TAG['criterion']=='CrossEntropyLoss':
loss = criterion(y_preds, labels)
elif TAG['criterion'] == 'bi_tempered_logistic_loss':
loss = criterion(y_preds, labels, t1=CFG['bi_tempered_loss_t1'], t2=CFG['bi_tempered_loss_t2'])
return loss
# ====================================================
# Helper functions
# ====================================================
def train_fn(train_loader, model, criterion, optimizer, epoch, scheduler, device):
batch_time = AverageMeter()
data_time = AverageMeter()
losses = AverageMeter()
scores = AverageMeter()
# switch to train mode
model.train()
start = end = time.time()
global_step = 0
for step, (images, labels) in enumerate(train_loader):
# measure data loading time
data_time.update(time.time() - end)
images = images.to(device)
labels = labels.to(device)
batch_size = labels.size(0)
y_preds = model(images)
loss = get_loss(criterion, y_preds, labels)
# record loss
losses.update(loss.item(), batch_size)
if CFG['gradient_accumulation_steps'] > 1:
loss = loss / CFG['gradient_accumulation_steps']
if CFG['apex']:
with amp.scale_loss(loss, optimizer) as scaled_loss:
scaled_loss.backward()
else:
loss.backward()
# clear memory
del loss, y_preds
torch.cuda.empty_cache()
grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), CFG['max_grad_norm'])
if (step + 1) % CFG['gradient_accumulation_steps'] == 0:
optimizer.step()
optimizer.zero_grad()
global_step += 1
# measure elapsed time
batch_time.update(time.time() - end)
end = time.time()
if step % CFG['print_freq'] == 0 or step == (len(train_loader)-1):
print('Epoch: [{0}][{1}/{2}] '
'Data {data_time.val:.3f} ({data_time.avg:.3f}) '
'Elapsed {remain:s} '
'Loss: {loss.val:.4f}({loss.avg:.4f}) '
'Grad: {grad_norm:.4f} '
#'LR: {lr:.6f} '
.format(
epoch+1, step, len(train_loader), batch_time=batch_time,
data_time=data_time, loss=losses,
remain=timeSince(start, float(step+1)/len(train_loader)),
grad_norm=grad_norm,
#lr=scheduler.get_lr()[0],
))
return losses.avg
def valid_fn(valid_loader, model, criterion, device):
batch_time = AverageMeter()
data_time = AverageMeter()
losses = AverageMeter()
scores = AverageMeter()
# switch to evaluation mode
model.eval()
preds = []
start = end = time.time()
for step, (images, labels) in enumerate(valid_loader):
# measure data loading time
data_time.update(time.time() - end)
images = images.to(device)
labels = labels.to(device)
batch_size = labels.size(0)
# compute loss
with torch.no_grad():
y_preds = model(images)
loss = get_loss(criterion, y_preds, labels)
losses.update(loss.item(), batch_size)
# record accuracy
preds.append(y_preds.softmax(1).to('cpu').numpy())
if CFG['gradient_accumulation_steps'] > 1:
loss = loss / CFG['gradient_accumulation_steps']
# measure elapsed time
batch_time.update(time.time() - end)
end = time.time()
if step % CFG['print_freq'] == 0 or step == (len(valid_loader)-1):
print('EVAL: [{0}/{1}] '
'Data {data_time.val:.3f} ({data_time.avg:.3f}) '
'Elapsed {remain:s} '
'Loss: {loss.val:.4f}({loss.avg:.4f}) '
.format(
step, len(valid_loader), batch_time=batch_time,
data_time=data_time, loss=losses,
remain=timeSince(start, float(step+1)/len(valid_loader)),
))
predictions = np.concatenate(preds)
return losses.avg, predictions
def inference(model, states, test_loader, device):
model.to(device)
tk0 = tqdm(enumerate(test_loader), total=len(test_loader))
probs = []
for i, (images) in tk0:
images = images.to(device)
avg_preds = []
for state in states:
            model.load_state_dict(state['model'])  # checkpoints were saved as dicts with a 'model' key
model.eval()
with torch.no_grad():
y_preds = model(images)
avg_preds.append(y_preds.softmax(1).to('cpu').numpy())
avg_preds = np.mean(avg_preds, axis=0)
probs.append(avg_preds)
probs = np.concatenate(probs)
return probs
###Output
_____no_output_____
###Markdown
Train loop
###Code
# ====================================================
# scheduler
# ====================================================
def get_scheduler(optimizer):
if TAG['scheduler']=='ReduceLROnPlateau':
scheduler = ReduceLROnPlateau(optimizer, mode='min', factor=CFG['factor'], patience=CFG['patience'], verbose=True, eps=CFG['eps'])
elif TAG['scheduler']=='CosineAnnealingLR':
scheduler = CosineAnnealingLR(optimizer, T_max=CFG['T_max'], eta_min=CFG['min_lr'], last_epoch=-1)
elif TAG['scheduler']=='CosineAnnealingWarmRestarts':
scheduler = CosineAnnealingWarmRestarts(optimizer, T_0=CFG['T_0'], T_mult=1, eta_min=CFG['min_lr'], last_epoch=-1)
return scheduler
# ====================================================
# criterion
# ====================================================
def get_criterion():
if TAG['criterion']=='CrossEntropyLoss':
criterion = nn.CrossEntropyLoss()
elif TAG['criterion'] == 'bi_tempered_logistic_loss':
criterion = bi_tempered_logistic_loss
return criterion
# ====================================================
# Train loop
# ====================================================
def train_loop(folds, fold):
LOGGER.info(f"========== fold: {fold} training ==========")
if not CFG['debug']:
mlflow.set_tag('running.fold', str(fold))
# ====================================================
# loader
# ====================================================
trn_idx = folds[folds['fold'] != fold].index
val_idx = folds[folds['fold'] == fold].index
train_folds = folds.loc[trn_idx].reset_index(drop=True)
valid_folds = folds.loc[val_idx].reset_index(drop=True)
train_dataset = TrainDataset(train_folds,
transform=get_transforms(data='train'))
valid_dataset = TrainDataset(valid_folds,
transform=get_transforms(data='valid'))
train_loader = DataLoader(train_dataset,
batch_size=CFG['batch_size'],
shuffle=True,
num_workers=CFG['num_workers'], pin_memory=True, drop_last=True)
valid_loader = DataLoader(valid_dataset,
batch_size=CFG['batch_size'],
shuffle=False,
num_workers=CFG['num_workers'], pin_memory=True, drop_last=False)
# ====================================================
# model & optimizer & criterion
# ====================================================
best_model_path = OUTPUT_DIR+f'{TAG["model_name"]}_fold{fold}_best.pth'
latest_model_path = OUTPUT_DIR+f'{TAG["model_name"]}_fold{fold}_latest.pth'
model = CustomModel(TAG['model_name'], pretrained=True)
model.to(device)
    # resume from the latest checkpoint if one exists
if os.path.isfile(latest_model_path):
state_latest = torch.load(latest_model_path)
state_best = torch.load(best_model_path)
model.load_state_dict(state_latest['model'])
epoch_start = state_latest['epoch']+1
# er_best_score = state_latest['score']
er_counter = state_latest['counter']
er_best_score = state_best['best_score']
LOGGER.info(f'Retrain model in epoch:{epoch_start}, best_score:{er_best_score:.3f}, counter:{er_counter}')
else:
epoch_start = 0
er_best_score = None
er_counter = 0
optimizer = Adam(model.parameters(), lr=CFG['lr'], weight_decay=CFG['weight_decay'], amsgrad=False)
scheduler = get_scheduler(optimizer)
criterion = get_criterion()
# ====================================================
# apex
# ====================================================
if CFG['apex']:
model, optimizer = amp.initialize(model, optimizer, opt_level='O1', verbosity=0)
# ====================================================
# loop
# ====================================================
# best_score = 0.
# best_loss = np.inf
early_stopping = EarlyStopping(
patience=CFG['early_stopping_round'],
verbose=True,
save_path=best_model_path,
counter=er_counter, best_score=er_best_score,
save_latest_path=latest_model_path)
for epoch in range(epoch_start, CFG['epochs']):
start_time = time.time()
# train
avg_loss = train_fn(train_loader, model, criterion, optimizer, epoch, scheduler, device)
# eval
avg_val_loss, preds = valid_fn(valid_loader, model, criterion, device)
valid_labels = valid_folds[CFG['target_col']].values
# early stopping
early_stopping(avg_val_loss, model, preds, epoch)
if early_stopping.early_stop:
print(f'Epoch {epoch+1} - early stopping')
break
if isinstance(scheduler, ReduceLROnPlateau):
scheduler.step(avg_val_loss)
elif isinstance(scheduler, CosineAnnealingLR):
scheduler.step()
elif isinstance(scheduler, CosineAnnealingWarmRestarts):
scheduler.step()
# scoring
score = get_score(valid_labels, preds.argmax(1))
elapsed = time.time() - start_time
LOGGER.info(f'Epoch {epoch+1} - avg_train_loss: {avg_loss:.4f} avg_val_loss: {avg_val_loss:.4f} time: {elapsed:.0f}s')
LOGGER.info(f'Epoch {epoch+1} - Accuracy: {score}')
# log mlflow
if not CFG['debug']:
mlflow.log_metric(f"fold{fold} avg_train_loss", avg_loss, step=epoch)
mlflow.log_metric(f"fold{fold} avg_valid_loss", avg_val_loss, step=epoch)
mlflow.log_metric(f"fold{fold} score", score, step=epoch)
            # read the lr from the optimizer (ReduceLROnPlateau has no get_last_lr)
            mlflow.log_metric(f"fold{fold} lr", optimizer.param_groups[0]['lr'], step=epoch)
mlflow.log_artifact(best_model_path)
if os.path.isfile(latest_model_path):
mlflow.log_artifact(latest_model_path)
check_point = torch.load(best_model_path)
valid_folds[[str(c) for c in range(5)]] = check_point['preds']
valid_folds['preds'] = check_point['preds'].argmax(1)
return valid_folds
def get_trained_fold_preds(folds, fold, best_model_path):
val_idx = folds[folds['fold'] == fold].index
valid_folds = folds.loc[val_idx].reset_index(drop=True)
check_point = torch.load(best_model_path)
valid_folds[[str(c) for c in range(5)]] = check_point['preds']
valid_folds['preds'] = check_point['preds'].argmax(1)
return valid_folds
# ====================================================
# main
# ====================================================
def get_result(result_df):
preds = result_df['preds'].values
labels = result_df[CFG['target_col']].values
score = get_score(labels, preds)
LOGGER.info(f'Score: {score:<.5f}')
return score
def main():
"""
Prepare: 1.train 2.test 3.submission 4.folds
"""
if CFG['train']:
# train
oof_df = pd.DataFrame()
for fold in range(CFG['n_fold']):
best_model_path = OUTPUT_DIR+f'{TAG["model_name"]}_fold{fold}_best.pth'
if fold in CFG['trn_fold']:
_oof_df = train_loop(folds, fold)
elif os.path.exists(best_model_path):
_oof_df = get_trained_fold_preds(folds, fold, best_model_path)
else:
_oof_df = None
if _oof_df is not None:
oof_df = pd.concat([oof_df, _oof_df])
LOGGER.info(f"========== fold: {fold} result ==========")
_ = get_result(_oof_df)
# CV result
LOGGER.info(f"========== CV ==========")
score = get_result(oof_df)
# save result
oof_df.to_csv(OUTPUT_DIR+'oof_df.csv', index=False)
# log mlflow
if not CFG['debug']:
mlflow.log_metric('oof score', score)
mlflow.delete_tag('running.fold')
mlflow.log_artifact(OUTPUT_DIR+'oof_df.csv')
if CFG['inference']:
# inference
model = CustomModel(TAG['model_name'], pretrained=False)
states = [torch.load(OUTPUT_DIR+f'{TAG["model_name"]}_fold{fold}_best.pth') for fold in CFG['trn_fold']]
test_dataset = TestDataset(test, transform=get_transforms(data='valid'))
test_loader = DataLoader(test_dataset, batch_size=CFG['batch_size'], shuffle=False,
num_workers=CFG['num_workers'], pin_memory=True)
predictions = inference(model, states, test_loader, device)
# submission
test['label'] = predictions.argmax(1)
test[['image_id', 'label']].to_csv(OUTPUT_DIR+'submission.csv', index=False)
###Output
_____no_output_____
###Markdown
rerun
###Code
def _load_save_point(run_id):
    # find out at which fold the previous run stopped
try:
stop_fold = int(mlflow.get_run(run_id=run_id).to_dictionary()['data']['tags']['running.fold'])
except KeyError:
pass
else:
        # restrict training to the remaining folds
CFG['trn_fold'] = [fold for fold in CFG['trn_fold'] if fold>=stop_fold]
    # download any saved .pth model files (including mid-training checkpoints)
client = mlflow.tracking.MlflowClient()
artifacts = [artifact for artifact in client.list_artifacts(run_id) if ".pth" in artifact.path]
for artifact in artifacts:
client.download_artifacts(run_id, artifact.path, OUTPUT_DIR)
def check_have_run():
results = mlflow.search_runs(INFO['EXPERIMENT_ID'])
run_id_list = results[results['tags.mlflow.runName']==TITLE]['run_id'].tolist()
    # first run of this experiment
if len(run_id_list) == 0:
run_id = None
    # a run with this name already exists
else:
assert len(run_id_list)==1
run_id = run_id_list[0]
_load_save_point(run_id)
return run_id
if __name__ == '__main__':
if CFG['debug']:
main()
else:
mlflow.set_tracking_uri(INFO['TRACKING_URI'])
mlflow.set_experiment('single model')
        # resume from the previous run if one exists
run_id = check_have_run()
with mlflow.start_run(run_id=run_id, run_name=TITLE):
if run_id is None:
mlflow.log_artifact(CONFIG_PATH)
mlflow.log_param('device', device)
mlflow.set_tag('env', env)
mlflow.set_tags(TAG)
mlflow.log_params(CFG)
mlflow.log_artifact(notebook_path)
main()
mlflow.log_artifacts(OUTPUT_DIR)
remove_glob(f'{OUTPUT_DIR}/*latest.pth')
if env=="kaggle":
shutil.copy2(CONFIG_PATH, f'{OUTPUT_DIR}/{CONFIG_NAME}')
! rm -r cassava
elif env=="colab":
shutil.copytree(OUTPUT_DIR, f'{INFO["SHARE_DRIVE_PATH"]}/{TITLE}')
shutil.copy2(CONFIG_PATH, f'{INFO["SHARE_DRIVE_PATH"]}/{TITLE}/{CONFIG_NAME}')
###Output
_____no_output_____ |
Natural Language Processing/Course 3 - Natural Language Processing with Sequence Models/Labs/Week 1/Classes and subclasses.ipynb | ###Markdown
Classes and subclasses In this notebook, I will show you the basics of classes and subclasses in Python. As you've seen in the lectures from this week, `Trax` uses layer classes as building blocks for deep learning models, so it is important to understand how classes and subclasses behave in order to be able to build custom layers when needed. By completing this notebook, you will:- Be able to define classes and subclasses in Python- Understand how inheritance works in subclasses- Be able to work with instances Part 1: Parameters, methods and instances First, let's define a class `My_Class`.
###Code
class My_Class: #Definition of My_class
x = None
###Output
_____no_output_____
###Markdown
`My_Class` has one parameter `x` without any value. You can think of parameters as the variables that every object assigned to a class will have. So, at this point, any object of class `My_Class` would have a variable `x` equal to `None`. To check this, I'll create two instances of that class and get the value of `x` for both of them.
###Code
instance_a= My_Class() #To create an instance from class "My_Class" you have to call "My_Class"
instance_b= My_Class()
print('Parameter x of instance_a: ' + str(instance_a.x)) #To get a parameter 'x' from an instance 'a', write 'a.x'
print('Parameter x of instance_b: ' + str(instance_b.x))
###Output
Parameter x of instance_a: None
Parameter x of instance_b: None
###Markdown
For an existing instance you can assign new values for any of its parameters. In the next cell, assign a value of `5` to the parameter `x` of `instance_a`.
###Code
### START CODE HERE (1 line) ###
instance_a.x = 5
### END CODE HERE ###
print('Parameter x of instance_a: ' + str(instance_a.x))
###Output
Parameter x of instance_a: 5
###Markdown
1.1 The `__init__` method When you want to assign values to the parameters of your class when an instance is created, it is necessary to define a special method: `__init__`. The `__init__` method is called when you create an instance of a class. It can have multiple arguments to initialize the parameters of your instance. In the next cell I will define `My_Class` with an `__init__` method that takes the instance (`self`) and an argument `y` as inputs.
###Code
class My_Class:
def __init__(self, y): # The __init__ method takes as input the instance to be initialized and a variable y
self.x = y # Sets parameter x to be equal to y
###Output
_____no_output_____
###Markdown
In this case, the parameter `x` of an instance from `My_Class` would take the value of an argument `y`. The argument `self` is used to pass information from the instance being created to the method `__init__`. In the next cell, create an instance `instance_c`, with `x` equal to `10`.
###Code
### START CODE HERE (1 line) ###
instance_c = My_Class(10)
### END CODE HERE ###
print('Parameter x of instance_c: ' + str(instance_c.x))
###Output
Parameter x of instance_c: 10
###Markdown
Note that in this case, you had to pass the argument `y` from the `__init__` method to create an instance of `My_Class`. 1.2 The `__call__` method Another important method is the `__call__` method. It is performed whenever you call an initialized instance of a class. It can have multiple arguments and you can define it to do whatever you want, like:
- Change a parameter,
- Print a message,
- Create new variables, etc.

In the next cell, I'll define `My_Class` with the same `__init__` method as before and with a `__call__` method that adds `z` to parameter `x` and prints the result.
###Code
class My_Class:
def __init__(self, y): # The __init__ method takes as input the instance to be initialized and a variable y
self.x = y # Sets parameter x to be equal to y
def __call__(self, z): # __call__ method with self and z as arguments
self.x += z # Adds z to parameter x when called
print(self.x)
###Output
_____no_output_____
###Markdown
Let’s create `instance_d` with `x` equal to 5.
###Code
instance_d = My_Class(5)
###Output
_____no_output_____
###Markdown
And now, see what happens when `instance_d` is called with argument `10`.
###Code
instance_d(10)
###Output
15
###Markdown
Now, you are ready to complete the following cell so any instance from `My_Class`:
- Is initialized taking two arguments `y` and `z` and assigns them to `x_1` and `x_2`, respectively. And,
- When called, takes the values of the parameters `x_1` and `x_2`, sums them, prints and returns the result.
###Code
class My_Class:
def __init__(self, y, z): #Initialization of x_1 and x_2 with arguments y and z
### START CODE HERE (2 lines) ###
self.x_1 = y
self.x_2 = z
### END CODE HERE ###
def __call__(self): #When called, adds the values of parameters x_1 and x_2, prints and returns the result
### START CODE HERE (1 line) ###
result = self.x_1 + self.x_2
### END CODE HERE ###
print("Addition of {} and {} is {}".format(self.x_1,self.x_2,result))
return result
###Output
_____no_output_____
###Markdown
Run the next cell to check your implementation. If everything is correct, you shouldn't get any errors.
###Code
instance_e = My_Class(10,15)
def test_class_definition():
assert instance_e.x_1 == 10, "Check the value assigned to x_1"
assert instance_e.x_2 == 15, "Check the value assigned to x_2"
assert instance_e() == 25, "Check the __call__ method"
print("\033[92mAll tests passed!")
test_class_definition()
###Output
Addition of 10 and 15 is 25
[92mAll tests passed!
###Markdown
1.3 Custom methods In addition to the `__init__` and `__call__` methods, your classes can have custom-built methods to do whatever you want when called. To define a custom method, you have to indicate its input arguments, the instructions that you want it to perform and the values to return (if any). In the next cell, `My_Class` is defined with `my_method` that multiplies the values of `x_1` and `x_2`, sums that product with an input `w`, and returns the result.
###Code
class My_Class:
def __init__(self, y, z): #Initialization of x_1 and x_2 with arguments y and z
self.x_1 = y
self.x_2 = z
def __call__(self): #Performs an operation with x_1 and x_2, and returns the result
a = self.x_1 - 2*self.x_2
return a
def my_method(self, w): #Multiplies x_1 and x_2, adds argument w and returns the result
result = self.x_1*self.x_2 + w
return result
###Output
_____no_output_____
###Markdown
Create an instance `instance_f` of `My_Class` with any integer values that you want for `x_1` and `x_2`. For that instance, see the result of calling `my_method` with an argument `w` equal to `16`.
###Code
### START CODE HERE (1 line) ###
instance_f = My_Class(1,10)
### END CODE HERE ###
print("Output of my_method:",instance_f.my_method(16))
###Output
Output of my_method: 26
###Markdown
As you can corroborate in the previous cell, to call a custom method `m`, with arguments `args`, for an instance `i` you must write `i.m(args)`. With that in mind, methods can call others within a class. In the following cell, try to define `new_method` which calls `my_method` with `v` as input argument. Try to do this on your own in the cell given below.
###Code
class My_Class:
def __init__(self, y, z): #Initialization of x_1 and x_2 with arguments y and z
self.x_1 = None
self.x_2 = None
def __call__(self): #Performs an operation with x_1 and x_2, and returns the result
a = None
return a
def my_method(self, w): #Multiplies x_1 and x_2, adds argument w and returns the result
b = None
return b
    def new_method(self, v): #Calls my_method with argument v
### START CODE HERE (1 line) ###
result = None
### END CODE HERE ###
return result
###Output
_____no_output_____
###Markdown
SPOILER ALERT Solution:
###Code
# hidden-cell
class My_Class:
def __init__(self, y, z): #Initialization of x_1 and x_2 with arguments y and z
self.x_1 = y
self.x_2 = z
def __call__(self): #Performs an operation with x_1 and x_2, and returns the result
a = self.x_1 - 2*self.x_2
return a
def my_method(self, w): #Multiplies x_1 and x_2, adds argument w and returns the result
b = self.x_1*self.x_2 + w
return b
    def new_method(self, v): #Calls my_method with argument v
result = self.my_method(v)
return result
instance_g = My_Class(1,10)
print("Output of my_method:",instance_g.my_method(16))
print("Output of new_method:",instance_g.new_method(16))
###Output
Output of my_method: 26
Output of new_method: 26
###Markdown
Part 2: Subclasses and Inheritance `Trax` uses classes and subclasses to define layers. The base class in `Trax` is `layer`, which means that every layer from a deep learning model is defined as a subclass of the `layer` class. In this part of the notebook, you are going to see how subclasses work. To define a subclass `sub` from class `super`, you have to write `class sub(super):` and define any method and parameter that you want for your subclass. In the next cell, I define `sub_c` as a subclass of `My_Class` with only one method (`additional_method`).
###Code
class sub_c(My_Class): #Subclass sub_c from My_class
def additional_method(self): #Prints the value of parameter x_1
print(self.x_1)
###Output
_____no_output_____
###Markdown
2.1 Inheritance When you define a subclass `sub`, every method and parameter is inherited from `super` class, including the `__init__` and `__call__` methods. This means that any instance from `sub` can use the methods defined in `super`. Run the following cell and see for yourself.
###Code
instance_sub_a = sub_c(1,10)
print('Parameter x_1 of instance_sub_a: ' + str(instance_sub_a.x_1))
print('Parameter x_2 of instance_sub_a: ' + str(instance_sub_a.x_2))
print("Output of my_method of instance_sub_a:",instance_sub_a.my_method(16))
###Output
Parameter x_1 of instance_sub_a: 1
Parameter x_2 of instance_sub_a: 10
Output of my_method of instance_sub_a: 26
###Markdown
As you can see, `sub_c` does not have an initialization method `__init__`; it is inherited from `My_Class`. However, you can override any method you want by defining it again in the subclass. For instance, in the next cell define a class `sub_c` with a redefined `my_method` that multiplies `x_1` and `x_2` but does not take any additional argument.
###Code
class sub_c(My_Class): #Subclass sub_c from My_class
def my_method(self): #Multiplies x_1 and x_2 and returns the result
### START CODE HERE (1 line) ###
b = self.x_1*self.x_2
### END CODE HERE ###
return b
###Output
_____no_output_____
###Markdown
To check your implementation run the following cell.
###Code
test = sub_c(3,10)
assert test.my_method() == 30, "The method my_method should return the product between x_1 and x_2"
print("Output of overridden my_method of test:",test.my_method()) #notice we didn't pass any parameter to call my_method
#print("Output of overridden my_method of test:",test.my_method(16)) #try to see what happens if you call it with 1 argument
###Output
Output of overridden my_method of test: 30
###Markdown
In the next cell, two instances are created, one of `My_Class` and another one of `sub_c`. The instances are initialized with equal `x_1` and `x_2` parameters.
###Code
y,z= 1,10
instance_sub_a = sub_c(y,z)
instance_a = My_Class(y,z)
print('My_method for an instance of sub_c returns: ' + str(instance_sub_a.my_method()))
print('My_method for an instance of My_Class returns: ' + str(instance_a.my_method(10)))
###Output
My_method for an instance of sub_c returns: 10
My_method for an instance of My_Class returns: 20
|
DAY 101 ~ 200/DAY105_[SW Expert Academy] 5162번 두가지 빵의 딜레마 (Python).ipynb | ###Markdown
Thursday, May 21, 2020 SW Expert Academy - The Two Breads Dilemma Problem: https://swexpertacademy.com/main/code/problem/problemDetail.do?contestProbId=AWTaTDua3OoDFAVT Blog: https://somjang.tistory.com/entry/SWExpertAcademy-5162%EB%B2%88-%EB%91%90%EA%B0%80%EC%A7%80-%EB%B9%B5%EC%9D%98-%EB%94%9C%EB%A0%88%EB%A7%88-Python First attempt
###Code
T = int(input())  # number of test cases
for i in range(T):
    A, B, C = map(int, input().split())  # A, B: prices of the two breads; C: total budget
    max_bread = 0
    min_price = min(A, B)
    max_bread = C // min_price  # buying only the cheaper bread maximizes the count
    print("#{} {}".format(i+1, max_bread))
###Output
_____no_output_____ |
New Jupyter Notebooks/Titanic Data Preprocessing.ipynb | ###Markdown
Data Preprocessing Methods Used:
- One Hot Encoding
- Label Encoding
- Mode Imputation (for null values)

Dataset Used: Kaggle's Titanic Dataset
###Code
# https://www.kaggle.com/c/titanic/data --> Kaggle Titanic Dataset
import pandas as pd # import pandas for dataframe
from sklearn import preprocessing
train = pd.read_csv('../Titanic Data/train.csv')
# test = pd.read_csv('../Titanic Data/test.csv')
# ^^ I am not preprocessing the test dataset (it is just a repeat of the same below steps).
train.head()
# https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop.html
train = train.drop(columns=['Ticket', 'PassengerId', 'Name', 'Cabin'])
train.head()
###Output
_____no_output_____
###Markdown
Why I dropped the above columns
- Ticket: hard to process (each ticket is unique), does not add any new information that Pclass does not provide.
- PassengerID: effectively just the row number in the data, which is not helpful.
- Name: hard to process (each name is unique), does not add any new information that Sex does not provide.
- Cabin: hard to process (mainly just null values), does not add any new information that Pclass does not provide.
###Code
def convert_sex(sex):
if sex == 'male':
return 0
elif sex == 'female':
return 1
else:
print(f"ERROR: SEX WAS NEITHER MALE NOR FEMALE {sex}")
train['Sex'] = train['Sex'].apply(convert_sex)
train.head()
# https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html#sklearn.preprocessing.OneHotEncoder
# https://stackoverflow.com/questions/43588679/issue-with-onehotencoder-for-categorical-features/43589167#43589167
ohe_class = preprocessing.OneHotEncoder(sparse=False)
one_hot_encoded_class = ohe_class.fit_transform(train['Pclass'].values.reshape(-1, 1))
train['Upper Class'] = one_hot_encoded_class[:, 0]
train['Middle Class'] = one_hot_encoded_class[:, 1]
train['Lower Class'] = one_hot_encoded_class[:, 2]
train = train.drop(columns=['Pclass'])
train.head()
embarked_vals = train['Embarked'].values
print(embarked_vals)
# lovely, there are two entries that are NaN that have to be dealt with...
# below code determines which port is the mode, and sets the NaN datapoints to it
C = 0
Q = 0
S = 0
typical_port = 'C'
NaN_indexes = []
index = 0
for port in train['Embarked'].values:
if port == 'C':
C += 1
elif port == 'Q':
Q += 1
elif port == 'S':
S += 1
else:
NaN_indexes.append(index)
index += 1
if Q > C and Q > S:
typical_port = 'Q'
elif S > Q and S > C:
typical_port = 'S'
for NaN_index in NaN_indexes:
embarked_vals[NaN_index] = typical_port
print(embarked_vals)
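# Note (not part of the original notebook): pandas can do this mode imputation in one line,
# e.g. train['Embarked'] = train['Embarked'].fillna(train['Embarked'].mode()[0])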
# https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html#sklearn.preprocessing.LabelEncoder.fit_transform
# https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html#sklearn.preprocessing.OneHotEncoder
# https://stackoverflow.com/questions/43588679/issue-with-onehotencoder-for-categorical-features/43589167#43589167
embarked_label_encoded = preprocessing.LabelEncoder().fit_transform(embarked_vals).reshape(-1, 1)
ohe_embarked = preprocessing.OneHotEncoder(sparse=False)
one_hot_encoded_embarked = ohe_embarked.fit_transform(embarked_label_encoded)
train['Embarked Cherbourg'] = one_hot_encoded_embarked[:, 0]
train['Embarked Queenstown'] = one_hot_encoded_embarked[:, 1]
train['Embarked Southampton'] = one_hot_encoded_embarked[:, 2]
train = train.drop(columns=['Embarked'])  # assign back, otherwise the drop has no effect
train.head()
###Output
_____no_output_____ |
nlp_dev_day_july_2021.ipynb | ###Markdown
Natural Language Processing Dev Day Welcome to the NLP Dev Day. Natural Language Processing (NLP) is a subfield of AI which explores how to make computers "understand" natural languages, such as English. This tutorial is meant to walk you through some of the basic concepts used in practice to process text documents. The first section is about pre-processing data, the second is mostly about classifying data. We will be using Python NLTK (Natural Language Toolkit) and scikit-learn (or sklearn), a machine learning library. Start by making your own copy of this notebook in Google Colab so that you can edit/experiment with any and all of the code snippets (you will need a Google Drive account to do this). Reach out in Teams if you have questions or notice a mistake! After you've gone through the tutorial, you can spin up a project of your own with a dataset of your choice. Getting Started Run the snippet below to get set up with some of the libraries & datasets we will be using.
###Code
import nltk
nltk.download("punkt")
nltk.download("stopwords")
nltk.download("wordnet")
nltk.download("brown")
nltk.download("names")
nltk.download('movie_reviews')
nltk.download('averaged_perceptron_tagger')
nltk.download('tagsets')
###Output
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data] Unzipping tokenizers/punkt.zip.
[nltk_data] Downloading package stopwords to /root/nltk_data...
[nltk_data] Unzipping corpora/stopwords.zip.
[nltk_data] Downloading package wordnet to /root/nltk_data...
[nltk_data] Unzipping corpora/wordnet.zip.
[nltk_data] Downloading package brown to /root/nltk_data...
[nltk_data] Unzipping corpora/brown.zip.
[nltk_data] Downloading package names to /root/nltk_data...
[nltk_data] Unzipping corpora/names.zip.
[nltk_data] Downloading package movie_reviews to /root/nltk_data...
[nltk_data] Unzipping corpora/movie_reviews.zip.
[nltk_data] Downloading package averaged_perceptron_tagger to
[nltk_data] /root/nltk_data...
[nltk_data] Unzipping taggers/averaged_perceptron_tagger.zip.
[nltk_data] Downloading package tagsets to /root/nltk_data...
[nltk_data] Unzipping help/tagsets.zip.
###Markdown
Section One: Pre-processing text data TokenizationTokenization means splitting text into pieces (or 'tokens') that we are interested in analyzing. Tokens might be sentences, or they might be individual words. Feel free to experiment with some of the tokenizers below. For the most part, `WordPunctTokenizer` will split special characters (such as apostrophes) into seperate tokens, while `word_tokenize` will try to keep them attached to the relevant words. See [here](https://stackoverflow.com/questions/50240029/nltk-wordpunct-tokenize-vs-word-tokenize) for more about why.
###Code
from nltk.tokenize import sent_tokenize, \
word_tokenize, WordPunctTokenizer
input_text = "Here's some input text, we can use it to see what tokenization is and how it works."
print("\nSentence tokenizer:")
print(sent_tokenize(input_text))
print("\nWord tokenizer:")
print(word_tokenize(input_text))
print("\nWord punct tokenizer:")
print(WordPunctTokenizer().tokenize(input_text))
###Output
Sentence tokenizer:
["Here's some input text, we can use it to see what tokenization is and how it works."]
Word tokenizer:
['Here', "'s", 'some', 'input', 'text', ',', 'we', 'can', 'use', 'it', 'to', 'see', 'what', 'tokenization', 'is', 'and', 'how', 'it', 'works', '.']
Word punct tokenizer:
['Here', "'", 's', 'some', 'input', 'text', ',', 'we', 'can', 'use', 'it', 'to', 'see', 'what', 'tokenization', 'is', 'and', 'how', 'it', 'works', '.']
###Markdown
Removing Stop Words**Stop words** are words that are so commonly used that they are useless for most applications. Words such as *the*, *of*, and *is* tell us very little about what a document is about. We'd often like to simply remove them.There is a pre-defined list of stop words available in NLTK.
###Code
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
example_sent = "Like most sentences, this sentence contains a few stop words that aren't very interesting."
tokens = word_tokenize(example_sent)
stop_words = set(stopwords.words('english'))
filtered_sentence = [w for w in tokens if not w.lower() in stop_words]
print(tokens)
print(filtered_sentence)
###Output
['Like', 'most', 'sentences', ',', 'this', 'sentence', 'contains', 'a', 'few', 'stop', 'words', 'that', 'are', "n't", 'very', 'interesting', '.']
['Like', 'sentences', ',', 'sentence', 'contains', 'stop', 'words', "n't", 'interesting', '.']
###Markdown
Stemming In linguistics, the **stem** of a word is the part of a word responsible for its lexical meaning. It's the part of the word that's left over when you remove prefixes and suffixes, and the part of the word that's left over when you de-conjugate a verb. For example, the stem of *walking* is *walk*, the stem of *quickly* is *quick*. In English, the stem of a word is often also a word, but not always. Stemming is a common preprocessing step when working with text data. It's useful to ignore prefixes, suffixes, and verb tense in a lot of applications; if someone is searching for documents about "organizing", we might as well return documents that are about "organize", "organized", "organizer", etc. `PorterStemmer`, `LancasterStemmer`, and `SnowballStemmer` are three stemmers available in NLTK. Feel free to compare them below. Check out this [stackoverflow article](https://stackoverflow.com/questions/10554052/what-are-the-major-differences-and-benefits-of-porter-and-lancaster-stemming-alg) for more on the differences between the Porter, Lancaster, and Snowball algorithms.
###Code
from nltk.stem.porter import PorterStemmer
from nltk.stem.lancaster import LancasterStemmer
from nltk.stem.snowball import SnowballStemmer
input_words = ['chocolate', 'hat', 'walking', 'landed', 'growth', 'messenger',
'possibly', 'provision', 'building', 'kept', 'scratchy', 'code', 'lying']
porter = PorterStemmer()
lancaster = LancasterStemmer()
snowball = SnowballStemmer('english')
stemmer_names = ['Porter', 'Lancaster', 'Snowball']
formatted_text = '{:>16}' * (len(stemmer_names) + 1)
print('\n', formatted_text.format('Input', *stemmer_names),
'\n', '='*68)
for word in input_words:
output = [word, porter.stem(word),
lancaster.stem(word), snowball.stem(word)]
print(formatted_text.format(*output))
###Output
Input Porter Lancaster Snowball
====================================================================
chocolate chocol chocol chocol
hat hat hat hat
walking walk walk walk
landed land land land
growth growth grow growth
messenger messeng messeng messeng
possibly possibl poss possibl
provision provis provid provis
building build build build
kept kept kept kept
scratchy scratchi scratchy scratchi
code code cod code
lying lie lying lie
###Markdown
Lemmatization We can take stemming one step further by making sure the result is actually a real word. This is known as **lemmatization**. Lemmatization is slower than stemming, but sometimes it's useful. The `WordNetLemmatizer` removes prefixes and suffixes only if the resulting word is in its dictionary. It also tries to remove tenses from verbs and convert plural nouns to singular.
###Code
from nltk.stem import WordNetLemmatizer
input_words = ['chocolate', 'hats', 'walking', 'landed', 'women', 'messengers',
'possibly', 'provision', 'building', 'kept', 'scratchy', 'code', 'lying', 'Frisco']
lemmatizer = WordNetLemmatizer()
lemmatizer_names = ['Noun Lemmatizer', 'Verb Lemmatizer']
formatted_text = '{:>24}' * (len(lemmatizer_names) + 1)
print('\n', formatted_text.format('Input', *lemmatizer_names),
'\n', '='*75)
for word in input_words:
output = [word, lemmatizer.lemmatize(word, pos='n'), lemmatizer.lemmatize(word, pos='v')]
print(formatted_text.format(*output))
###Output
Input Noun Lemmatizer Verb Lemmatizer
===========================================================================
chocolate chocolate chocolate
hats hat hat
walking walking walk
landed landed land
women woman women
messengers messenger messengers
possibly possibly possibly
provision provision provision
building building build
kept kept keep
scratchy scratchy scratchy
code code code
lying lying lie
Frisco Frisco Frisco
###Markdown
Part-of-Speech Tagging A part-of-speech tagger (AKA a **POS tagger**) attaches part-of-speech tags to words, meaning it labels nouns as nouns, verbs as verbs, etc. Try out NLTK's POS tagger below. Under the hood, a tagger is a machine learning model. When you give it a word, it predicts what type of word it is. POS tags are useful in a number of ways. For instance, suppose NLTK runs into a word it's never seen before: *He was scrobbling*. Even though it has no idea of the meaning, it's likely to guess that *scrobbling* is a verb. Additionally, POS tags help us distinguish between homonyms. Consider this sentence: *They refuse to permit us to obtain the refuse permit*. The first *refuse* is a verb, the second *refuse* is a noun. Depending on how picky we are, we might want to consider them as completely different words in our system. The example below uses NLTK's Averaged Perceptron Tagger (a *perceptron* is a neural network consisting of only one layer). If you're interested in how it works, [this article](https://explosion.ai/blog/part-of-speech-pos-tagger-in-python) explains how to write an averaged perceptron tagger.
###Code
from nltk.tokenize import word_tokenize
# Uncomment this to see descriptions of all the parts of speech in the tagger
# notice how some of the verbs include extra information, like verb tense (present, progressive, past, etc)
# nltk.help.upenn_tagset()
tokens = word_tokenize("Let's look at part-of-speech tagging.")
print(nltk.pos_tag(tokens))
###Output
[('Let', 'VB'), ("'s", 'POS'), ('look', 'VB'), ('at', 'IN'), ('part-of-speech', 'JJ'), ('tagging', 'NN'), ('.', '.')]
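###Markdown
As a quick aside (this cell is an illustrative addition, not part of the original notebook), we can run the same tagger on the homonym sentence from above; the first *refuse* should come back tagged as a verb and the second as a noun.
###Code
tokens = word_tokenize("They refuse to permit us to obtain the refuse permit")
print(nltk.pos_tag(tokens))
###Output
_____no_output_____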
###Markdown
Count Vectorizer `CountVectorizer` (from the [sklearn](https://scikit-learn.org/stable/) library) converts a documents into "vectors" of term/token counts. CountVectorizer is useful for creating a [document-term matrix](https://en.wikipedia.org/wiki/Document-term_matrix). A document-term matrix is handy when you want to represent your data numerically, and it is often passed to machine learning algorithms (read: we will be using CountVectorizer in later examples). CountVectorizer does a few handy things by default, including: * converts your text to lowercase* does word tokenization for you* gets rid of single characters (meaning words like 'a' and 'I' are discarded)
###Code
from sklearn.feature_extraction.text import CountVectorizer
# Each sentence here is considered a 'document'
cat_in_the_hat_docs=[
"One Cent, Two Cents, Old Cent, New Cent: All About Money (Cat in the Hat's Learning Library",
"Inside Your Outside: All About the Human Body (Cat in the Hat's Learning Library)",
"Oh, The Things You Can Do That Are Good for You: All About Staying Healthy (Cat in the Hat's Learning Library)",
"On Beyond Bugs: All About Insects (Cat in the Hat's Learning Library)",
"There's No Place Like Space: All About Our Solar System (Cat in the Hat's Learning Library)"
]
cv = CountVectorizer(cat_in_the_hat_docs)
# .fit creates a vocabulary, that is, picks out all the unique words in each document and assigns them an index
vectorizer = cv.fit(cat_in_the_hat_docs)
# .fit_transform creates a document-term matrix, meaning it picks out all the unique words and returns a 2D array where
# each row represents a document & each column represents a term/word in the vocabulary
count_vector=cv.fit_transform(cat_in_the_hat_docs)
# Print unique words with their indices
print("Vocabulary: ", vectorizer.vocabulary_)
# Print the document-term matrix
print(count_vector.toarray())
###Output
Vocabulary: {'one': 28, 'cent': 8, 'two': 40, 'cents': 9, 'old': 26, 'new': 23, 'all': 1, 'about': 0, 'money': 22, 'cat': 7, 'in': 16, 'the': 37, 'hat': 13, 'learning': 19, 'library': 20, 'inside': 18, 'your': 42, 'outside': 30, 'human': 15, 'body': 4, 'oh': 25, 'things': 39, 'you': 41, 'can': 6, 'do': 10, 'that': 36, 'are': 2, 'good': 12, 'for': 11, 'staying': 34, 'healthy': 14, 'on': 27, 'beyond': 3, 'bugs': 5, 'insects': 17, 'there': 38, 'no': 24, 'place': 31, 'like': 21, 'space': 33, 'our': 29, 'solar': 32, 'system': 35}
[[1 1 0 0 0 0 0 1 3 1 0 0 0 1 0 0 1 0 0 1 1 0 1 1 0 0 1 0 1 0 0 0 0 0 0 0
0 1 0 0 1 0 0]
[1 1 0 0 1 0 0 1 0 0 0 0 0 1 0 1 1 0 1 1 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0
0 2 0 0 0 0 1]
[1 1 1 0 0 0 1 1 0 0 1 1 1 1 1 0 1 0 0 1 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0
1 2 0 1 0 2 0]
[1 1 0 1 0 1 0 1 0 0 0 0 0 1 0 0 1 1 0 1 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0]
[1 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 1 1 1 0 0 1 0 0 0 0 1 0 1 1 1 0 1
0 1 1 0 0 0 0]]
###Markdown
Keyword Extraction (TF-IDF) Keyword extraction is a common pre-processing step and a common standalone task in NLP. It means picking out important words from a document that describe what the document is about. **Term Frequency-Inverse Document Frequency (TF-IDF)** is essentially a statistic assigned to a word that indicates how important it is to a document. Words with a high TF-IDF score are considered to be keywords. [The first 5 minutes of this video](https://www.youtube.com/watch?v=RPMYV-eb6lI) give a pretty good explanation of how TF-IDF is computed.
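As a rough sketch of the computation (exact smoothing and normalization details vary by implementation; scikit-learn's `TfidfVectorizer`, used below, adds smoothing terms and applies L2 normalization by default): for a term $t$ in document $d$, $\text{tfidf}(t, d) = \text{tf}(t, d) \times \log\frac{N}{\text{df}(t)}$, where $\text{tf}(t, d)$ is the count of $t$ in $d$, $N$ is the total number of documents, and $\text{df}(t)$ is the number of documents containing $t$.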
###Code
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
# We should note that TF-IDF works a lot better on larger datasets
docs=["the house had a tiny little mouse",
"the cat saw the mouse",
"the mouse ran away from the house",
"the cat finally ate the mouse",
"the end of the mouse story"
]
tfidf_vectorizer=TfidfVectorizer(use_idf=True)
tfidf_vectorizer_vectors=tfidf_vectorizer.fit_transform(docs)
# Get the vector for the first document
first_vector_tfidfvectorizer=tfidf_vectorizer_vectors[0]
# Using a pandas dataframe to pretty print
df = pd.DataFrame(first_vector_tfidfvectorizer.T.todense(), index=tfidf_vectorizer.get_feature_names(), columns=["tfidf"])
df.sort_values(by=["tfidf"],ascending=False)
###Output
_____no_output_____
###Markdown
Task: Pick a dataset you want to work with Before reading into the next section, it might be helpful to pick a dataset you want to experiment with. Go ahead and search online for a dataset you are interested in using. You'll want to find one that contains raw text data. Keep your dataset in mind when going through the examples in the next section. Which (if any) of the tasks below are applicable to it? NLTK contains a set of [built-in datasets](http://www.nltk.org/nltk_data/) for experimentation and learning which might be a good starting point (they are mostly geared towards very specific tasks). There's also [Kaggle](https://www.kaggle.com/datasets) and [Google Dataset Search](https://datasetsearch.research.google.com/). Part two: Classification and Modeling In this section we will walk through a few examples of classification and one example of topic modeling. All of the classification examples below use a Naive Bayes model for classification. For the purposes of this dev day, the model itself isn't important. A lot of what we are learning today is about how to get text data into a useful format for passing to a classifier like Naive Bayes. Category Prediction The example of category prediction below uses the 20 News Groups dataset. It contains around 18000 news articles on 20 topics. The data has already been split into two subsets, one to train our model and one for testing the output of the model. For fun, we are using our own tiny set of test data instead of the provided test data. A detailed description of the dataset is available [here](https://scikit-learn.org/0.19/datasets/twenty_newsgroups.html). Feel free to play around with the test input data. Although it does work a lot of the time, it's still pretty easy to trick the model. As you might expect, if you write something that isn't in one of the 5 categories it's trained on, it will spit out something that just looks random.
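For context (a standard textbook formulation, not spelled out in the original notebook): a Naive Bayes classifier predicts the category $\hat{c} = \arg\max_c P(c) \prod_i P(w_i \mid c)$, where the "naive" part is the assumption that the words $w_i$ of a document are conditionally independent given its category $c$.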
###Code
from sklearn.datasets import fetch_20newsgroups
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import CountVectorizer
category_map = {'talk.politics.misc': 'Politics', 'rec.autos': 'Autos',
'rec.sport.hockey': 'Hockey', 'sci.electronics': 'Electronics',
'sci.med': 'Medicine'}
# Get the training dataset
# Shuffling training data is a standard practise in ML, to create more general models and prevent a common problem called overfitting. The thread linked below has a more thorough discussion
# https://datascience.stackexchange.com/questions/24511/why-should-the-data-be-shuffled-for-machine-learning-tasks
training_data = fetch_20newsgroups(subset='train',
categories=category_map.keys(), shuffle=True, random_state=5)
# Get a document-term matrix
count_vectorizer = CountVectorizer()
train_term_counts = count_vectorizer.fit_transform(training_data.data)
# We can pass a document-term matrix to tfidf.fit_transform() to get the TF-IDF weights of each word
# Notice we didn't worry about stop words? They are going to have a very low tf-idf weight anyways.
tfidf = TfidfTransformer()
train_tfidf = tfidf.fit_transform(train_term_counts)
# Train the model
# For each row in our document-term matrix, we have a corresponding category in training_data.target
classifier = MultinomialNB().fit(train_tfidf, training_data.target)
# Erase my test data and create your own. Keep in mind the model is going to try to classify in one of the 5 categories in category_map
input_data = [
'You should always be careful if you are driving a car',
'A lot of devices are not as secure as you might think',
'The sports cup was won by a team because they scored the most points at the super cup game, yay',
'Big election has politicians doing all sorts of stuff to get votes',
'Medical experts warn Burrata cheese sold in Quebec is not safe'
]
# Transform input data using count vectorizer
input_term_counts = count_vectorizer.transform(input_data)
# Transform again to get the tf-idf weights
input_tfidf = tfidf.transform(input_term_counts)
# With our data in this format, we can pass it to the classification model and see what it predicts.
predictions = classifier.predict(input_tfidf)
# Print the outputs
for sent, category in zip(input_data, predictions):
print('\nInput:', sent, '\nPredicted category:',
category_map[training_data.target_names[category]])
###Output
Downloading 20news dataset. This may take a few minutes.
Downloading dataset from https://ndownloader.figshare.com/files/5975967 (14 MB)
###Markdown
Gender Identifier Gender identification is a well-studied task in NLP with many different approaches. In the example below, we will test if the model is able to accurately identify gender given the last couple letters of a first name. In classification problems (such as gender identification and category prediction), we often create the model like so: `model = whateverModelIAmUsing.fit(X, y)` or `training_data = [({featureName: feature}, target), ({featureName: feature}, target)...]` `model = whateverModelIAmUsing.train(training_data)` In the first example, `X` is the set of **features** we think will help the model make accurate predictions and `y` is the set of **targets**, AKA the answers that the model should ideally come up with. There are many methods and heuristics out there for choosing good features (if you're interested in learning more about this, there's a good tutorial [here](https://www.kaggle.com/learn/feature-engineering)). For our purposes, let's simply compare the accuracy between a few different sets of features. We'll train the model based on the last letter of a name, the last two letters, the last three letters, and so on.
###Code
import random
from nltk import NaiveBayesClassifier
from nltk.classify import accuracy as nltk_accuracy
from nltk.corpus import names
# This time, we are only going to pass the last N letters of the word to the model.
def extract_features(word, N=2):
last_n_letters = word[-N:]
return {'lastLetters': last_n_letters.lower()}
if __name__=='__main__':
# Create training data using labeled names available in NLTK
# Unfortunately the dataset doesn't yet contain a list of gender-neutral names
male_list = [(name, 'male') for name in names.words('male.txt')]
female_list = [(name, 'female') for name in names.words('female.txt')]
data = (male_list + female_list)
#Shuffle the data
random.seed(5)
random.shuffle(data)
# Create test data
input_names = ['Yash', 'Shrimanti', 'Sai Ram', 'Riley', 'Brooke', 'Ashley', 'Robin']
# Define the number of samples used for train and test
# It's typical to use an 80/20 split
num_train = int(0.8 * len(data))
# Iterate through different lengths to compare the accuracy
for i in range(1, 6):
print('\nNumber of end letters:', i)
features = [(extract_features(n, i), gender) for (n, gender) in data]
train_data, test_data = features[:num_train], features[num_train:]
classifier = NaiveBayesClassifier.train(train_data)
# Compute the accuracy of the classifier
accuracy = round(100 * nltk_accuracy(classifier, test_data), 2)
print('Accuracy = ' + str(accuracy) + '%')
# Predict outputs for input names using the trained classifier model
for name in input_names:
print(name, '=>', classifier.classify(extract_features(name, i)))
###Output
Number of end letters: 1
Accuracy = 75.02%
Yash => female
Shrimanti => female
Sai Ram => male
Riley => female
Brooke => female
Ashley => female
Robin => male
Number of end letters: 2
Accuracy = 78.35%
Yash => male
Shrimanti => female
Sai Ram => male
Riley => female
Brooke => male
Ashley => female
Robin => male
Number of end letters: 3
Accuracy = 76.02%
Yash => male
Shrimanti => female
Sai Ram => male
Riley => male
Brooke => female
Ashley => male
Robin => male
Number of end letters: 4
Accuracy = 69.35%
Yash => female
Shrimanti => female
Sai Ram => female
Riley => female
Brooke => female
Ashley => female
Robin => male
Number of end letters: 5
Accuracy = 65.07%
Yash => female
Shrimanti => female
Sai Ram => female
Riley => male
Brooke => female
Ashley => female
Robin => female
###Markdown
Sentiment Analyzer Sentiment analysis, or opinion mining, is the practice of creating models that determine the tone of a piece of text (or voice) data, such as whether a review was positive or negative. Below is an example of a sentiment analyzer using NLTK's Movie Review toy dataset. If you're interested/have time, sentiment analysis of tweets can be a fun project. Here's a [tutorial](https://towardsdatascience.com/how-to-scrape-tweets-from-twitter-59287e20f0f1) on how to use the twitter API to get a dataset of tweets into python.
###Code
from nltk.corpus import movie_reviews
from nltk.classify import NaiveBayesClassifier
from nltk.classify.util import accuracy as nltk_accuracy
# Extract features from the input list of words
# The format we are using for the features looks like this:
# [({'here': True, 'are': True, 'all': True, 'the': True, 'words': True, 'in': True, 'the': True, 'review': True}, Positive)]
def extract_features(words):
return dict([(word, True) for word in words])
if __name__=='__main__':
# Load the reviews from the corpus
fileids_pos = movie_reviews.fileids('pos')
fileids_neg = movie_reviews.fileids('neg')
# Extract the features from the reviews
features_pos = [(extract_features(movie_reviews.words(
fileids=[f])), 'Positive') for f in fileids_pos]
features_neg = [(extract_features(movie_reviews.words(
fileids=[f])), 'Negative') for f in fileids_neg]
# This is our 80/20 train/test split
threshold = 0.8
num_pos = int(threshold * len(features_pos))
num_neg = int(threshold * len(features_neg))
features_train = features_pos[:num_pos] + features_neg[:num_neg]
features_test = features_pos[num_pos:] + features_neg[num_neg:]
# Train a Naive Bayes classifier & get the accuracy
classifier = NaiveBayesClassifier.train(features_train)
print('\nAccuracy of the classifier:', nltk_accuracy(
classifier, features_test))
# NaiveBayesClassifier can get us the most informative words, that is, words that strongly influence the model
top_ten_words = classifier.most_informative_features()[:10]
print('\nTop ten most informative words: ')
for i, item in enumerate(top_ten_words):
print(str(i+1) + '. ' + item[0])
# Let's make up our own test data again
input_reviews = [
'I liked the cinematography',
'This was a terrible movie, the characters were so dumb',
'This movie has one of my favorite actors! I loved it!',
    'This is such a boring movie. Would not recommend.',
'This movie contains Nicolas Cage'
]
print("\nMovie review predictions:")
for review in input_reviews:
print("\nReview:", review)
# Compute the probabilities
probabilities = classifier.prob_classify(extract_features(review.split()))
# Pick the maximum value
predicted_sentiment = probabilities.max()
# Print outputs
print("Predicted sentiment:", predicted_sentiment)
print("Probability:", round(probabilities.prob(predicted_sentiment), 2))
###Output
Accuracy of the classifier: 0.735
Top ten most informative words:
1. outstanding
2. insulting
3. vulnerable
4. ludicrous
5. uninvolving
6. astounding
7. avoids
8. fascination
9. symbol
10. seagal
Movie review predictions:
Review: I liked the cinematography
Predicted sentiment: Positive
Probability: 0.69
Review: This was a terrible movie, the characters were so dumb
Predicted sentiment: Negative
Probability: 0.92
Review: This movie has one of my favorite actors! I loved it!
Predicted sentiment: Positive
Probability: 0.56
    Review: This is such a boring movie. Would not recommend.
Predicted sentiment: Negative
Probability: 0.76
Review: This movie contains Nicolas Cage
Predicted sentiment: Negative
Probability: 0.53
###Markdown
Topic Modeling So far, we have seen examples of classification, where we have some data and we'd like to make a specific conclusion about it: positive or negative, about sports or about politics, etc. We have predetermined the categories that we want to fit our data into. Suppose we want to learn something about some given text data without having any pre-determined categories. One thing we can do is topic modeling, where we generate a statistical model that tells us what a document is about. **Latent Dirichlet Allocation** (LDA) is an algorithm for creating *topic vectors*. A topic vector is a set of words which represent an abstract topic. If you are interested in a full description of the algorithm, there is one [here](https://www.youtube.com/watch?v=DWJYZq_fQ2A). We need to pass a parameter to the LDA model function that tells it how many topics we want it to return. There is no way of determining how many noteworthy LDA topic vectors a document has; it's far from an exact science and requires some trial and error. Feel free to play around with the example below.
###Code
from nltk.tokenize import RegexpTokenizer
from nltk.corpus import stopwords
from nltk.stem.snowball import SnowballStemmer
from gensim import models, corpora
# More of our own test data.
def get_data():
return [
'The recorded history of Scotland begins with the arrival of the Roman Empire in the 1st century.',
'Then the Viking invasions began, forcing the Picts and Gaels to unite, forming the Kingdom of Scotland.',
'The Kingdom of Scotland was united under the House of Alpin, whose members fought among each other during frequent disputed successions.',
'England would take advantage of this questioned succession to launch a series of conquests, resulting in the Wars of Scottish Independence',
'During the Scottish Enlightenment and Industrial Revolution, Scotland became one of the powerhouses of Europe.',
'Giraffes usually inhabit savannahs and open woodlands.' ,
'The giraffe\'s chief distinguishing characteristics are its extremely long neck and legs and its distinctive coat pattern.',
'Giraffes may be preyed on by lions, leopards, spotted hyenas and African wild dogs.',
'It is classified as vulnerable to extinction, and has been extirpated from many parts of its former range.',
'The elongation of the neck appears to have started early in the giraffe lineage.',
];
def preprocess(input_text):
# Regular expression tokenizer, we'd like to ignore punctuation and numbers
tokenizer = RegexpTokenizer(r'\w+')
stop_words = stopwords.words('english')
stemmer = SnowballStemmer('english')
tokens = tokenizer.tokenize(input_text.lower())
tokens = [x for x in tokens if not x in stop_words]
tokens_stemmed = [stemmer.stem(x) for x in tokens]
return tokens_stemmed
if __name__=='__main__':
data = get_data()
# Create a list for sentence tokens
tokens = [preprocess(x) for x in data]
# Create document-term matrix
# In this case, we are taking the tokenized words and using a bag-of-words format to create the doc-term matrix, because there is always more than one way of doing things
# doc2bow => given a document, we would like a bag of words, meaning for each token create a tuple with a token ID and the number of times it occurs in the document.
# https://en.wikipedia.org/wiki/Bag-of-words_model
dict_tokens = corpora.Dictionary(tokens)
doc_term_matrix = [dict_tokens.doc2bow(token) for token in tokens]
# The number of topics we want the LDA model to give us, I chose 2 because it already looks like there are two topics in the dataset
# For most real-world applications, the dataset would be too large to guess at the 'right' number of topics. You end up just picking a number.
num_topics = 2
# Generate the LDA model
ldamodel = models.ldamodel.LdaModel(doc_term_matrix,
num_topics=num_topics, id2word=dict_tokens, passes=25)
num_words = 5
print('\nTop ' + str(num_words) + ' contributing words to each topic:')
for item in ldamodel.print_topics(num_topics=num_topics, num_words=num_words):
print('\nTopic', item[0])
# Print the contributing words along with their relative contributions
list_of_strings = item[1].split(' + ')
for text in list_of_strings:
weight = text.split('*')[0]
word = text.split('*')[1]
print(word, '==>', str(round(float(weight) * 100, 2)) + '%')
###Output
Top 5 contributing words to each topic:
Topic 0
"giraff" ==> 4.1%
"neck" ==> 2.9%
"scottish" ==> 1.7%
"success" ==> 1.7%
"seri" ==> 1.7%
Topic 1
"scotland" ==> 4.9%
"kingdom" ==> 2.7%
"unit" ==> 2.7%
"success" ==> 1.6%
"disput" ==> 1.6%
|
19. Regression/02. Linear Fit 01.ipynb | ###Markdown
 --- 02. Linear Fit Example 01.Eduard Larrañaga ([email protected])--- About this notebookIn this worksheet, we consdier the discovery of the Expansion of the Universe by Edwin Hubble as example of a linear fit of experimental data.---
###Code
import numpy as np
from matplotlib import pyplot as plt
#import seaborn as sns
#sns.set()
import pandas as pd
%matplotlib inline
###Output
_____no_output_____
###Markdown
Edwin Hubble's data Around the 1920's, Edwin Hubble showed that the "nebulae" were external galaxies and not part of our own Galaxy, the Milky Way. In a seminal paper, https://ui.adsabs.harvard.edu/abs/1931ApJ....74...43H, E. Hubble and M. Humason determined that some of these galaxies moved away from Earth with a velocity $v$ that is proportional to their distance $d$, i.e. $v = H_0 d$. This relation is now known as *Hubble's law* and the quantity $H_0$ is called the *Hubble constant*. It is usual to give the value of $H_0$ in units of $\textrm{km}\, \textrm{s}^{-1} \, \textrm{Mpc}^{-1}$. The original data of Hubble and Humason is summarized in the data file `hubble.csv`.
###Code
df = pd.read_csv("data/hubble.csv")
df
df.describe()
###Output
_____no_output_____
###Markdown
The data in the data frame includes 10 samples (Nebulae) with 4 features:
**Name**: Name of the nebula \
**N_measurement**: Number of velocities measured by Hubble and Humason \
**velocity**: Mean velocity of the nebula measured in km/s \
**mean_m**: Apparent magnitude of the nebula
___
The apparent magnitude is related to the distance in parsecs through the relation $\log_{10} d = \frac{m-M+5}{5}$, where $M=-13.8$ is the absolute magnitude reported by Hubble and considered a constant in the paper. The relation of velocity vs. apparent magnitude gives the plot
###Code
plt.scatter(df['mean_m'], df['velocity'])
plt.xlabel(r'$m$')
plt.ylabel(r'velocity [km/s]')
plt.show()
###Output
_____no_output_____
###Markdown
Covariance We will calculate the covariance between these features.
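For reference, the function defined below implements the (population) covariance $\mathrm{cov}(x, y) = \frac{1}{N}\sum_{i}(x_i - \mu_x)(y_i - \mu_y)$, where $\mu_x$ and $\mu_y$ are the means of the two features.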
###Code
def cov(x,y):
N = len(x)
mu_x = sum(x)/N
mu_y = sum(y)/N
return sum((x - mu_x)*(y - mu_y))/N
cov(df['mean_m'],df['velocity'])
###Output
_____no_output_____
###Markdown
This result shows that there is some kind of (positive) covariance. However, it is not clear if the features have a linear relation between them. Correlation Coefficient Now, we calculate the correlation coefficient of these features.
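The function defined below computes the Pearson correlation coefficient in the computational form $r_{xy} = \frac{N\sum x_i y_i - \sum x_i \sum y_i}{\sqrt{N\sum x_i^2 - \left(\sum x_i\right)^2}\sqrt{N\sum y_i^2 - \left(\sum y_i\right)^2}}$.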
###Code
def corr(x,y):
N = len(x)
num = N*sum(x*y)- sum(x)*sum(y)
den = np.sqrt(N*sum(x*x) - sum(x)*sum(x))*np.sqrt(N*sum(y*y) - sum(y)*sum(y))
return num/den
corr(df['mean_m'],df['velocity'])
###Output
_____no_output_____
###Markdown
This result, together with the plot above, shows that the relation is not completely linear. However, we will introduce a column in the dataframe including the logarithm of the velocity to show that it is possible to obtain a good linear behavior.
###Code
df['log10_velocity'] = np.log10(df['velocity'])
df
df.describe()
plt.scatter(df['mean_m'], df['log10_velocity'])
plt.xlabel(r'apparent magnitude')
plt.ylabel(r'logarithm of velocity')
plt.show()
cov(df['mean_m'],df['log10_velocity'])
corr(df['mean_m'],df['log10_velocity'])
###Output
_____no_output_____
###Markdown
This result and the plot above show that there is a linear relation between these features. Hence, we will obtain a linear fit. Linear Fit Since the plot of the logarithm of the velocity vs. apparent magnitude has a linear tendency, we will create a linear fit for this data (allowing for the possibility of errors in the y measurements).
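For reference (the `LinearFit` module imported below is not shown in this notebook, so this is the standard $\chi^2$ minimization that a routine like `linear_fit` presumably implements): the fit minimizes $\chi^2 = \sum_i \frac{(y_i - a_1 - a_2 x_i)^2}{\sigma_i^2}$, whose solution is $a_1 = \frac{S_{xx} S_y - S_x S_{xy}}{\Delta}$ and $a_2 = \frac{S\, S_{xy} - S_x S_y}{\Delta}$, with $S = \sum_i \sigma_i^{-2}$, $S_x = \sum_i x_i/\sigma_i^2$, $S_y = \sum_i y_i/\sigma_i^2$, $S_{xx} = \sum_i x_i^2/\sigma_i^2$, $S_{xy} = \sum_i x_i y_i/\sigma_i^2$, and $\Delta = S\, S_{xx} - S_x^2$.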
###Code
import LinearFit as lf
###Output
_____no_output_____
###Markdown
Since we have no information about the observational error, we use a constant value of $\sigma_i = 1$
###Code
number_rows = df['log10_velocity'].count()
error_y = np.ones(number_rows)
###Output
_____no_output_____
###Markdown
Now we make the linear fit
###Code
a_1, a_2 , sigma_a1, sigma_a2, chi2, R2 = lf.linear_fit(df['mean_m'],df['log10_velocity'],error_y)
a_1, a_2, chi2, R2
###Output
_____no_output_____
###Markdown
The obtained relation fits the observational data very well, with a value of $\chi^2 = 0.028$ and a score of $R^2 = 0.99$. The plot shows the good fit
###Code
m_range= np.linspace(10,20,40)
logV = a_1 + a_2*m_range
plt.figure(figsize=(5,5))
plt.scatter(df['mean_m'], df['log10_velocity'],label='observational data')
plt.plot(m_range,logV,'--k', label='linear fit')
plt.xlabel(r'apparent magnitude')
plt.ylabel(r'logarithm of velocity')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The obtained linear model gives the relation between velocity and apparent magnitude as $\log_{10} v = a_1 + a_2 m = 0.548 + 0.197 m$. At this point we will assume, as Hubble and Humason did, that the coefficient of $m$ in this equation will be $0.2$, and therefore the equation becomes $\log_{10} v = a_1 + a_2 m = 0.548 + 0.2 m$. From the expression for the distance, we have $\log_{10} d = \frac{m-M+5}{5} \, \rightarrow \, \log_{10} d = 0.2m - 0.2M + 1$ and therefore we can write $\log_{10} \left( \frac{v}{d} \right) = \log_{10}v - \log_{10} d = a_1 - 1 + 0.2M$. Using the obtained value for $a_1$ and $M=-13.8$ we get $\log_{10} \left( \frac{v}{d} \right) = -3.212$, which gives the Hubble constant $H_0 = \frac{v}{d} = 10^{-3.212} = 614 \times 10^{-6} \textrm{ km } \textrm{s}^{-1} \textrm{ pc}^{-1} = 614 \textrm{ km } \textrm{s}^{-1} \textrm{ Mpc}^{-1}$ --- Alternative Linear Fit We can perform another linear fit, introducing the distance directly as a feature in the dataframe,
###Code
M = -13.8
df['log10_distance'] = (df['mean_m'] - M + 5.)/5.
df
###Output
_____no_output_____
###Markdown
The linear behavior can be seen in a plot of $\log_{10} v$ vs. $\log_{10} d$,
###Code
plt.scatter(df['log10_distance'], df['log10_velocity'])
plt.xlabel(r'logarithm of distance')
plt.ylabel(r'logarithm of velocity')
plt.show()
###Output
_____no_output_____
###Markdown
Hence, we create the linear fit between these variables.
###Code
number_rows = df['log10_velocity'].count()
error_y = np.ones(number_rows)
a_1, a_2 , sigma_a1, sigma_a2, chi2, R2 = lf.linear_fit(df['log10_distance'],df['log10_velocity'],error_y)
a_1, a_2, chi2, R2
###Output
_____no_output_____
###Markdown
Once again, the fit has a value of $\chi^2 = 0.028$ and a score of $R^2 = 0.99$. The plot shows the good fit:
###Code
distance_range= np.linspace(6,8,20)
logV = a_1 + a_2*distance_range
plt.figure(figsize=(5,5))
plt.scatter(df['log10_distance'], df['log10_velocity'],label='observational data')
plt.plot(distance_range,logV,'--k', label='linear fit')
plt.xlabel(r'logarithm of distance')
plt.ylabel(r'logarithm of velocity')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The obtained linear model gives the relation between velocity and distance as $\log_{10} v = a_1 + a_2 \log_{10} d = -3.15 + 0.98 \log_{10} d$. At this point we will assume that the coefficient of $\log_{10} d$ in this equation will be approximately $1$, and therefore the equation becomes $\log_{10} v = -3.15 + \log_{10} d$. From this expression we have $\log_{10} \left( \frac{v}{d} \right) = \log_{10}v - \log_{10} d = -3.15$, and therefore the Hubble constant is $H_0 = \frac{v}{d} = 10^{-3.15} = 708 \times 10^{-6} \textrm{ km } \textrm{s}^{-1} \textrm{ pc}^{-1} = 708 \textrm{ km } \textrm{s}^{-1} \textrm{ Mpc}^{-1}$ --- Linear Regression with `Numpy` The function [numpy.linalg.lstsq](https://numpy.org/doc/stable/reference/generated/numpy.linalg.lstsq.html) provides an efficient algorithm to obtain a linear regression. In order to use it, we need to transform our dataframe's columns into numpy arrays,
###Code
x = df['mean_m'].to_numpy()
y = df['log10_velocity'].to_numpy()
x
###Output
_____no_output_____
###Markdown
Now, we will put the linear system \begin{equation} y = a_1 + a_2 x \end{equation} in the form \begin{equation} y = Ap \end{equation} where $A$ is the matrix \begin{equation} A = \begin{pmatrix} x_1 & 1\\ x_2 & 1\\ x_3 & 1\\ \vdots & \vdots \end{pmatrix} \end{equation} and $p$ is the 2-dimensional vector \begin{equation} p = \begin{pmatrix} a_2 \\ a_1 \end{pmatrix}. \end{equation} Then, we define $A$ as
###Code
A = np.vstack([x, np.ones(len(x))]).T
A
###Output
_____no_output_____
###Markdown
and apply the function `np.linalg.lstsq` to $A$ and $y$ to obtain the slope and the intercept of the regression line,
###Code
a_2, a_1 = np.linalg.lstsq(A, y, rcond=None)[0] # slope, intercept
a_2, a_1
###Output
_____no_output_____
###Markdown
The complete set of returns of this function includes:
- Least-squares solution: if `b` is two-dimensional, the solutions are in the K columns of `x`.
- Residuals: sums of squared residuals ($\chi^2$)
- Rank: rank of matrix A.
- Singular values: the singular values of A, an array of shape (min(M, N),).
###Code
np.linalg.lstsq(A, y, rcond=None)
###Output
_____no_output_____
###Markdown
Here you can see that this fit gives a value of $\chi^2 = 0.02790285 $
###Code
distance_range= np.linspace(min(x)-1, max(x)+1, 20)
logV = a_1 + a_2*distance_range
plt.figure(figsize=(5,5))
plt.scatter(x, y, label='observational data')
plt.plot(distance_range, logV,'--k', label='linear fit')
plt.xlabel(r'logarithm of distance')
plt.ylabel(r'logarithm of velocity')
plt.legend()
plt.show()
###Output
_____no_output_____ |
notebooks/Bert Model.ipynb | ###Markdown
Split the Data
###Code
X_train, X_val, y_train, y_val = train_test_split(dataset.index, dataset.label, test_size=0.2, random_state=7)
dataset['data_type'] = ['not_set']*dataset.shape[0]
dataset.loc[X_train, 'data_type'] = 'train'
dataset.loc[X_val, 'data_type'] = 'val'
dataset.data_type.value_counts()
###Output
_____no_output_____
###Markdown
Tokenize and Encode the data
###Code
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
encoded_data_train = tokenizer.batch_encode_plus(
dataset[dataset.data_type=='train'].translated_description.values,
add_special_tokens=True,
return_attention_mask=True,
pad_to_max_length=True,
max_length=256,
return_tensors='pt',
truncation=True
)
encoded_data_val = tokenizer.batch_encode_plus(
dataset[dataset.data_type=='val'].translated_description.values,
add_special_tokens=True,
return_attention_mask=True,
pad_to_max_length=True,
max_length=256,
return_tensors='pt',
truncation=True
)
input_ids_train = encoded_data_train['input_ids']
attention_masks_train = encoded_data_train['attention_mask']
labels_train = torch.tensor(dataset[dataset.data_type=='train'].label.values)
input_ids_val = encoded_data_val['input_ids']
attention_masks_val = encoded_data_val['attention_mask']
labels_val = torch.tensor(dataset[dataset.data_type=='val'].label.values)
dataset_train = TensorDataset(input_ids_train, attention_masks_train, labels_train)
dataset_val = TensorDataset(input_ids_val, attention_masks_val, labels_val)
len(dataset_train)
len(dataset_val)
###Output
_____no_output_____
###Markdown
BERT Pretrained Model
###Code
model = BertForSequenceClassification.from_pretrained("bert-base-uncased",
num_labels=2,
output_attentions=False,
output_hidden_states=False)
batch_size = 32
dataloader_train = DataLoader(dataset_train,
sampler=RandomSampler(dataset_train),
batch_size=batch_size)
dataloader_validation = DataLoader(dataset_val,
sampler=SequentialSampler(dataset_val),
batch_size=batch_size)
optimizer = AdamW(model.parameters(),
lr=1e-5,
eps=1e-8)
epochs = 3
scheduler = get_linear_schedule_with_warmup(optimizer,
num_warmup_steps=0,
num_training_steps=len(dataloader_train)*epochs)
###Output
_____no_output_____
###Markdown
Creating our Training Loop
###Code
import random
seed = 7
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
train_on_gpu=torch.cuda.is_available()
device = torch.device('cuda' if train_on_gpu else 'cpu')
model.to(device)
# If there's a GPU available...
if train_on_gpu:
device = torch.device("cuda")
print('There are %d GPU(s) available.' % torch.cuda.device_count())
print('Training on GPU:', torch.cuda.get_device_name(0))
else:
print('No GPU available, using the CPU instead.')
device = torch.device("cpu")
def evaluate(dataloader_val):
model.eval()
loss_val_total = 0
predictions, true_vals = [], []
for batch in dataloader_val:
batch = tuple(b.to(device) for b in batch)
inputs = {'input_ids': batch[0],
'attention_mask': batch[1],
'labels': batch[2],
}
with torch.no_grad():
outputs = model(**inputs)
loss = outputs[0]
logits = outputs[1]
loss_val_total += loss.item()
logits = logits.detach().cpu().numpy()
label_ids = inputs['labels'].cpu().numpy()
predictions.append(logits)
true_vals.append(label_ids)
loss_val_avg = loss_val_total/len(dataloader_val)
predictions = np.concatenate(predictions, axis=0)
true_vals = np.concatenate(true_vals, axis=0)
return loss_val_avg, predictions, true_vals
# Training, Takes a lot of time. Use GPU preferably...
for epoch in tqdm(range(1, epochs+1)):
model.train()
loss_train_total = 0
progress_bar = tqdm(dataloader_train, desc='Epoch {:1d}'.format(epoch), leave=False, disable=False)
for batch in progress_bar:
model.zero_grad()
batch = tuple(b.to(device) for b in batch)
inputs = {'input_ids': batch[0],
'attention_mask': batch[1],
'labels': batch[2],
}
outputs = model(**inputs)
loss = outputs[0]
loss_train_total += loss.item()
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
optimizer.step()
scheduler.step()
progress_bar.set_postfix({'training_loss': '{:.3f}'.format(loss.item()/len(batch))})
torch.save(model.state_dict(), f'finetuned_BERT_epoch_{epoch}.model')
tqdm.write(f'\nEpoch {epoch}')
loss_train_avg = loss_train_total/len(dataloader_train)
tqdm.write(f'Training loss: {loss_train_avg}')
val_loss, predictions, true_vals = evaluate(dataloader_validation)
tqdm.write(f'Validation loss: {val_loss}')
model = BertForSequenceClassification.from_pretrained("bert-base-uncased",
num_labels=2,
output_attentions=False,
output_hidden_states=False)
model.to(device)
if(train_on_gpu):
model.load_state_dict(torch.load('finetuned_BERT_epoch_1.model', map_location=torch.device('cuda')))
else:
model.load_state_dict(torch.load('finetuned_BERT_epoch_1.model', map_location=torch.device('cpu')))
_, predictions, true_vals = evaluate(dataloader_validation)
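# Hedged addition (not in the original notebook): weighted F1 complements the
# per-class accuracy computed below; assumes scikit-learn is available.
from sklearn.metrics import f1_score
print('Weighted F1: {:.3f}'.format(f1_score(true_vals, np.argmax(predictions, axis=1).flatten(), average='weighted')))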
def accuracy_per_class(preds, labels):
possible_labels = dataset.label.unique()
label_dict = {}
for index, possible_label in enumerate(possible_labels):
label_dict[possible_label] = index
label_dict_inverse = {v: k for k, v in label_dict.items()}
preds_flat = np.argmax(preds, axis=1).flatten()
labels_flat = labels.flatten()
correct = 0
for label in np.unique(labels_flat):
y_preds = preds_flat[labels_flat==label]
y_true = labels_flat[labels_flat==label]
print(f'Class: {label_dict_inverse[label]}')
print(f'Accuracy: {len(y_preds[y_preds==label])}/{len(y_true)}\n')
correct += len(y_preds[y_preds==label])
print("Total Validation Set Accuracy: {:.3f}".format(correct/X_val.shape[0]*100))
accuracy_per_class(predictions, true_vals)
###Output
_____no_output_____ |
class/01-pandas_intro.ipynb | ###Markdown
Table of Contents
1. Markdown basics
2. Loading and slicing data
    - 2.1 Subset Columns
    - 2.2 Subset Rows
    - 2.3 Subset Rows and Columns
    - 2.4 Subset using booleans
3. Quick intro to groupby
4. Saving

Markdown basics
###Code
#printing things in python
print(3)
###Output
3
###Markdown
Quick introduction to markdown
This is a header (h2, h3, h4 ... h6 is the smallest you can go)
- this is a bullet
  - this is an indent
1. item 1
1. item 2
1. bullet numbers do not matter,
1. they will be rendered in the correct order

$y = mx + b$
###Code
# get the working directory
# this is actually a shell command call
# not python specific
pwd
# import the pandas library
import pandas
pandas.__version__
%pwd
# if you want to explicitly use a shell pwd command,
# you can use the pwd magic by putting a % in front
###Output
_____no_output_____
###Markdown
loading and slicing data
###Code
# read in a tab delimited dataset
pandas.read_csv('../data/gapminder.tsv',
delimiter = '\t')
# save the data to a variable
df = pandas.read_csv('../data/gapminder.tsv',
delimiter = '\t')
df
# another way to import libraries is to give them an alias
import pandas as pd
# pd.read_csv, instead of pandas.read_csv
df = pd.read_csv('../data/gapminder.tsv',
delimiter = '\t')
# get the first 5 lines
df.head()
# built-in python function to see the type of the object
type(df)
# get the number of rows and columns
df.shape
# shape is an attribute, not a method
# this will cause an error
df.shape()
# get basic information about a dataframe
df.info()
# pass a number into head to get the first n rows
df.head(1)
###Output
_____no_output_____
###Markdown
Subset Columns
###Code
# select a single column in a dataframe
country_df = df['country']
type(country_df)
country_df.head()
# select multiple columns in a dataframe
country_df = df[['country', 'continent', 'year']]
country_df.head()
# this method of subsetting no longer works
# will now cause an error
df[[0]]
# a convenient way to get a single column
# is to use dot notation and the column name
# be careful if the column is named an attribute
# e.g., shape
df.country
###Output
_____no_output_____
###Markdown
Subset Rows
###Code
df.head()
# get the row with the label 0
df.loc[0]
# get the row with the label 99
df.loc[99]
# negative numbers in python count from the end
# instead of getting the row with the label -1
# we get the index position, -1 using iloc
df.iloc[-1]
# get multiple labels
df.loc[[0, 1, 2]]
# we pass in a list object
type([0, 1, 2])
# can also slice ranges
df.loc[0:4]
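# note: unlike Python list slicing, .loc slicing includes the end label, so 0:4 returns 5 rows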
# range in python 3 is a generator
range(5)
# you have to manually convert range to a list
# if you want a list
list(range(5))
# use a range of values to slice by row label
df.loc[list(range(5))]
###Output
_____no_output_____
###Markdown
Subset Rows and Columns
###Code
# get rows and columns using loc/iloc
df.loc[:, ['year', 'pop']].head()
df.loc[[0, 1, 2], :].head()
# takes slicing notation with a colon
df.loc[0:2, :].head()
# third value in the colon is the step size
df.loc[0:10:3, :].head()
# use loc to subset rows and columns
df.loc[0:5, ['country', 'year']]
# use the same exact code to subset but using iloc
df.iloc[0:5, [0, 1]]
# do the same thing with ix
# note ix will no longer work after pandas 0.20
df.ix[0:5, ['country', 'year']]
###Output
/home/dchen/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:3: DeprecationWarning:
.ix is deprecated. Please use
.loc for label based indexing or
.iloc for positional indexing
See the documentation here:
http://pandas.pydata.org/pandas-docs/stable/indexing.html#ix-indexer-is-deprecated
This is separate from the ipykernel package so we can avoid doing imports until
###Markdown
Subset using booleans
###Code
# check boolean values
df['country'] == 'United States'
# boolean subset
df.loc[df['country'] == 'United States']
# boolean subset rows and select columns
df.loc[df['country'] == 'United States',
['year', 'pop']]
# can use the convenient dot notation for the column
df.loc[df.country == 'United States',
['year', 'pop']]
# multiple boolean conditions
df.loc[(df.country == 'United States') & (df['year'] == 1982)]
# series (columns) are extensions of the np array
# so you can perform basic statistics on them directly
le_mean = df.lifeExp.mean()
# get the uniue countries where lifeExp is greater than the mean
df.loc[df.lifeExp > le_mean, 'country'].unique()
###Output
_____no_output_____
###Markdown
quick intro to groupby
###Code
df.head()
# get the unique values of year
df.year.unique()
# for each year, get the lifeExp, and calculate the mean
df.groupby('year')['lifeExp'].mean()
# for each year and continent calculate the average lifeExp
grouped_year_cont = df.groupby(['year', 'continent'])['lifeExp'].mean()
# can use parenthesis and break up long code so it's a little more readable
grouped_year_cont = (df
.groupby(['year', 'continent'])
['lifeExp']
.mean())
grouped_year_cont.reset_index()
# frequency count in a groupby
df.groupby(['continent', 'year'])['country'].value_counts()
# plotting using matplotlib
import matplotlib.pyplot as plt
# when using the jupyter notebook
# you need to tell it to plot inline
# so the plot shows up in the notebook
# not as a popup
%matplotlib inline
# average lifeExp by year
gyle = df.groupby('year')['lifeExp'].mean()
gyle.plot()
# plot is a generic pandas method
# by default it will treat the values as a time series
# and plot a line plot
df.lifeExp.plot()
# look at our data
gyle.head()
# the data isn't exactly like a normal dataframe
# we can flatten the results
df = gyle.reset_index()
df.head()
###Output
_____no_output_____
###Markdown
Saving
###Code
# save to csv
# do not save the row index labels
df.to_csv('grle.csv', index=False)
%%bash
# using the %% allows this entire cell to run a command using the shell
head grle.csv
# Save values using feather if you want to load it into R more efficiently
df.to_feather('gyle.feather')
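# sanity check: read the file back (assumes a feather backend such as pyarrow is installed)
pd.read_feather('gyle.feather').head()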
###Output
_____no_output_____ |
python_data_science/ex2.ipynb | ###Markdown
NumPy
###Code
import numpy as np
a = np.array([[1, 6],
[2, 8],
[3, 11],
[3, 10],
[1, 7]])
a
mean_a = np.mean(a, axis=0)
mean_a
centered_a = a - mean_a
centered_a
a_centered_sp = centered_a[:,0] @ centered_a[:,1]
a_centered_sp
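# dividing the scalar product by (n - 1) below gives the unbiased sample covariance;
# it should match the off-diagonal entries of np.cov(a.T)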
a_centered_sp / (a.shape[0] - 1)
np.cov(a.T)
###Output
_____no_output_____
###Markdown
Pandas
###Code
import pandas as pd
authors = pd.DataFrame({'author_id' : [1, 2, 3],
'author_name' : ['Тургенев', 'Чехов', 'Островский']},
columns = ['author_id', 'author_name'])
authors
books = pd.DataFrame({'author_id' : [1, 1, 1, 2, 2, 3, 3],
'book_tittle' : ['Отцы и дети', 'Рудин', 'Дворянское гнездо', 'Толстый и тонкий', 'Дама с собачкой', 'Гроза', 'Таланты и поклонники'],
'price' : [450, 300, 350, 500, 450, 370, 290]}, columns = ['author_id', 'book_tittle', 'price'])
books
authors_price = pd.merge(authors, books, on = 'author_id')
authors_price
top5 = authors_price.sort_values(by = 'price', ascending = False).head()
top5
authors_stat = authors_price.groupby('author_name').agg({'price': ['min', 'max', 'mean']})
authors_stat
authors_price['cover'] = ['твердая', 'мягкая', 'мягкая', 'твердая', 'твердая', 'мягкая', 'мягкая']
authors_price
book_info = authors_price.pivot_table(index='author_name', columns='cover', values='price', aggfunc='sum', fill_value=0)
book_info
###Output
_____no_output_____ |
cross/results.ipynb | ###Markdown
Analysis of Models using only MIMIC Notes
Imports & Inits
###Code
%load_ext autoreload
%autoreload 2
import sys
sys.path.append('../')
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("darkgrid")
%matplotlib inline
import pickle
import numpy as np
import pandas as pd
from pathlib import Path
from scipy import stats
from itertools import combinations
from tqdm import tqdm_notebook as tqdm
from sklearn.metrics import confusion_matrix, roc_auc_score, roc_curve

from utils.metrics import BinaryAvgMetrics
from utils.plots import *
from lr.args import args as lr_args
from rf.args import args as rf_args
from gbm.args import args as gbm_args
transfer_thresholds = {
'mimic_mlh': {
'lr': lr_args.mimic_src_thresh,
'rf': rf_args.mimic_src_thresh,
'gbm': gbm_args.mimic_src_thresh,
},
'mlh_mimic': {
'lr': lr_args.mlh_src_thresh,
'rf': rf_args.mlh_src_thresh,
'gbm': gbm_args.mlh_src_thresh,
},
}
test_thresholds = {
'lr': lr_args.mlh_src_test_thresh,
'rf': rf_args.mlh_src_test_thresh,
'gbm': gbm_args.mlh_src_test_thresh,
}
path = Path('data')
workdir = path/f'workdir'
figdir = workdir/'figdir'
###Output
_____no_output_____
###Markdown
Ensembles
###Code
def get_ensemble(ensembles, thresh, bams):
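# builds two ensembles per model combination from the positive-class probabilities:
# 'avg' (mean probability, soft voting) and 'max' (maximum probability)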
outputs = {}
for ens_model in ensembles:
key = '-'.join(ens_model)
targs = bams[ens_model[0]].targs
avg_thresh = np.array([thresh[model] for model in ens_model]).mean()
max_thresh = max([thresh[model] for model in ens_model])
probs = []
for i in range(len(targs)):
prob = []
for model in ens_model:
prob.append(bams[model].pos_probs[i])
probs.append(np.stack(prob))
avg_probs = [probs.mean(axis=0) for probs in probs]
max_probs = [probs.max(axis=0) for probs in probs]
avg_preds = [(probs > avg_thresh).astype(np.int64) for probs in avg_probs]
max_preds = [(probs > max_thresh).astype(np.int64) for probs in max_probs]
outputs[f'avg-{key}'] = (targs, avg_preds, avg_probs, avg_thresh)
outputs[f'max-{key}'] = (targs, max_preds, max_probs, max_thresh)
return outputs
def do_ttest(bams, model1, model2, metric):
if metric == 'sensitivity':
x1 = bams[model1].sensitivities()
x2 = bams[model2].sensitivities()
elif metric == 'specificity':
x1 = bams[model1].specificities()
x2 = bams[model2].specificities()
elif metric == 'ppv':
x1 = bams[model1].ppvs()
x2 = bams[model2].ppvs()
elif metric == 'auroc':
x1 = bams[model1].aurocs()
x2 = bams[model2].aurocs()
elif metric == 'npv':
x1 = bams[model1].npvs()
x2 = bams[model2].npvs()
elif metric == 'f1':
x1 = bams[model1].f1s()
x2 = bams[model2].f1s()
t, p = stats.ttest_ind(x1, x2)
return np.round(t, 2), max(np.round(p, 2), 0.001)
###Output
_____no_output_____
###Markdown
Cross Testing
###Code
with open(workdir/f'vectordir/mlh2mimic.pkl', 'rb') as f:
mlh2mimic_vec = pickle.load(f)
x_train_mlh = pickle.load(f)
x_test_mimic = pickle.load(f)
y_train_mlh = pickle.load(f)
y_test_mimic = pickle.load(f)
x_train_mlh.shape, y_train_mlh.shape, x_test_mimic.shape, y_test_mimic.shape
model = 'gbm'
clf = pickle.load(open(workdir/model/'models/mlh_full.pkl', 'rb'))
prob = clf.predict_proba(x_test_mimic)
pos_prob = prob[:, 1]
threshold = test_thresholds[model]  # this section tests the MLH-trained model, so use the test thresholds defined above
pred = (pos_prob > threshold).astype(np.int64)
cm = confusion_matrix(y_test_mimic, pred)
tn,fp,fn,tp = cm[0][0],cm[0][1],cm[1][0],cm[1][1]
sensitivity = tp/(tp+fn)
specificity = tn/(tn+fp)
ppv = tp/(tp+fp)
npv = tn/(tn+fn)
f1 = (2*ppv*sensitivity)/(ppv+sensitivity)
auroc = roc_auc_score(y_test_mimic, pos_prob)
d = {
'sensitivity': np.round(sensitivity, 3),
'specificity': np.round(specificity, 3),
'ppv': np.round(ppv, 3),
'npv': np.round(npv, 3),
'f1': np.round(f1, 3),
'auroc': np.round(auroc, 3),
'threshold': threshold,
}
metrics = pd.DataFrame(d.values(), index=d.keys(), columns=['Value'])
metrics
with open(workdir/model/'mlh_mimic_test_preds.pkl', 'wb') as f:
pickle.dump(y_test_mimic, f)
pickle.dump(prob, f)
pickle.dump(pred, f)
###Output
_____no_output_____
###Markdown
Compute Average Ensembles
###Code
models = ['lr', 'rf', 'gbm']
bams = {}
for model in models:
with open(workdir/model/f'mlh_mimic_test_preds.pkl', 'rb') as f:
targs = pickle.load(f)
probs = pickle.load(f)
preds = pickle.load(f)
bams[model] = BinaryAvgMetrics([targs], [preds], [probs[:, 1]])
# ens_models = [
# ['lr', 'rf'],
# ['lr', 'gbm'],
# ['rf', 'gbm'],
# ['lr', 'rf', 'gbm'],
# ]
ens_models = [m for m in sum([list(map(list, combinations(models, i))) for i in range(len(models) + 1)], []) if len(m) > 1]
ensembles = get_ensemble(ens_models, test_thresholds, bams)
for model, vals in ensembles.items():
bams[model] = BinaryAvgMetrics(*vals[:-1])
final_metrics = {}
for key in bams.keys():
final_metrics[key] = []
for i in range(len(bams[key].get_avg_metrics())):
final_metrics[key].append(bams[key].get_avg_metrics().iloc[i]['Value'])
final_metrics = pd.DataFrame(final_metrics, index=['sensitivity', 'specificity', 'ppv', 'auroc', 'npv', 'f1']).transpose()
best_models = pd.DataFrame([(final_metrics[metric].idxmax(), final_metrics[metric].max()) for metric in final_metrics], columns=['model', 'value'], index=['sensitivity', 'specificity', 'ppv', 'auroc', 'npv', 'f1'])
final_metrics
best_models
cte = [61, 58.1, 63.1, 61.4, 61.4, 60.6, 63.2, 62.5, 63.1, 63.2, 63.2]
ctr = [74.1, 73.7, 73.2, 74.4, 74.2, 74.4, 74.1, 74.2, 73.7, 74.6, 74.2]
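# presumably: percent change from the cross-testing (cte) results to the cross-training (ctr) results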
[np.round(100 * (b - a) / a, 2) for a, b in zip(cte, ctr)]
###Output
_____no_output_____
###Markdown
Cross Training
Compute Average Ensembles
###Code
transfer = 'mlh_mimic'
thresholds = transfer_thresholds[transfer]
models = ['lr', 'rf', 'gbm']
bams = {}
for model in models:
with open(workdir/model/f'{transfer}_preds.pkl', 'rb') as f:
targs = pickle.load(f)
probs = pickle.load(f)
preds = pickle.load(f)
bams[model] = BinaryAvgMetrics(targs, preds, [prob[:, 1] for prob in probs])
# ens_models = [
# ['lr', 'rf'],
# ['lr', 'gbm'],
# ['rf', 'gbm'],
# ['lr', 'rf', 'gbm'],
# ]
ens_models = [m for m in sum([list(map(list, combinations(models, i))) for i in range(len(models) + 1)], []) if len(m) > 1]
ensembles = get_ensemble(ens_models, thresholds, bams)
for model, vals in ensembles.items():
bams[model] = BinaryAvgMetrics(*vals[:-1])
final_metrics = {}
for key in bams.keys():
final_metrics[key] = []
for i in range(len(bams[key].get_avg_metrics())):
final_metrics[key].append(bams[key].get_avg_metrics().iloc[i]['Value'])
final_metrics = pd.DataFrame(final_metrics, index=['sensitivity', 'specificity', 'ppv', 'auroc', 'npv', 'f1']).transpose()
best_models = pd.DataFrame([(final_metrics[metric].idxmax(), final_metrics[metric].max()) for metric in final_metrics], columns=['model', 'value'], index=['sensitivity', 'specificity', 'ppv', 'auroc', 'npv', 'f1'])
###Output
_____no_output_____
###Markdown
Student-t Tests
###Code
models = list(final_metrics.index)
metrics = list(final_metrics.columns)
ttests = {}
for m1, m2 in combinations(models, 2):
ttests[f'{m1}:{m2}'] = {}
for metric in metrics:
ttests[f'{m1}:{m2}'][metric] = do_ttest(bams, m1, m2, metric)
ttests = pd.DataFrame(ttests).transpose()
###Output
_____no_output_____
###Markdown
Save to disk
###Code
pickle.dump(bams, open(workdir/f'{transfer}_bams.pkl', 'wb'))
final_metrics.to_csv(workdir/f'{transfer}_final_metrics.csv', float_format='%.3f')
best_models.to_csv(workdir/f'{transfer}_best_models.csv', float_format='%.3f')
ttests.to_csv(workdir/f'{transfer}_ttests.csv')
###Output
_____no_output_____
###Markdown
Results Cross Testing
###Code
bams = BinaryAvgMetrics([y_test_mimic], [pred], [pos_prob])
thresholds = transfer_thresholds[transfer]
models = ['lr', 'rf', 'gbm']
bams = {}
for model in models:
with open(workdir/model/f'{transfer}_preds.pkl', 'rb') as f:
targs = pickle.load(f)
probs = pickle.load(f)
preds = pickle.load(f)
bams[model] = BinaryAvgMetrics(targs, preds, [prob[:, 1] for prob in probs])
###Output
_____no_output_____
###Markdown
Cross Training
###Code
transfer = 'mlh_mimic'
bams = pickle.load(open(workdir/f'{transfer}_bams.pkl', 'rb'))
final_metrics = pd.read_csv(workdir/f'{transfer}_final_metrics.csv', index_col=0)
best_models = pd.read_csv(workdir/f'{transfer}_best_models.csv', index_col=0)
ttests = pd.read_csv(workdir/f'{transfer}_ttests.csv', index_col=0)
itr = iter(bams.keys())
bams.keys()
model = next(itr)
print(model)
bams[model].get_avg_metrics(conf=0.95)
final_metrics
best_models
print(ttests.to_latex())
###Output
_____no_output_____
###Markdown
Box Plot
###Code
save = True
transfer = 'mlh_mimic'
bams = pickle.load(open(workdir/f'{transfer}_bams.pkl', 'rb'))
final_metrics = pd.read_csv(workdir/f'{transfer}_final_metrics.csv', index_col=0)
best_models = pd.read_csv(workdir/f'{transfer}_best_models.csv', index_col=0)
ttests = pd.read_csv(workdir/f'{transfer}_ttests.csv', index_col=0)
for k in bams.keys():
bams[k.upper()] = bams.pop(k)
bams['AVG-ALL'] = bams.pop('AVG-LR-RF-GBM')
bams['MAX-ALL'] = bams.pop('MAX-LR-RF-GBM')
itr = iter(bams.keys())
bams.keys()
metrics = {}
for md in itr:
df = pd.DataFrame()
for k, m in bams[md].yield_metrics():
df[k] = m
df['model'] = md
cols = list(df.columns)
cols = [cols[-1]] + cols[:-1]
df = df[cols]
metrics[md] = df
plot_df = pd.concat(metrics.values())
met = 'AUC'
fig, ax = plt.subplots(1,1,figsize=(15,8))
sns.boxplot(x='model', y=met, data=plot_df, ax=ax)
ax.set_xlabel('')
if save:
fig.savefig(figdir/f'{transfer}_{met.lower()}_box_plot.pdf', dpi=300)
###Output
_____no_output_____
###Markdown
Mean AUC
###Code
def get_mean_tprs(bams, base_fpr):
mean_tprs = {}
for model, bam in bams.items():
tprs = []
for i, (targs, probs) in enumerate(zip(bam.targs, bam.pos_probs)):
fpr, tpr, _ = roc_curve(targs, probs)
tpr = np.interp(base_fpr, fpr, tpr)
tpr[0] = 0.0
tprs.append(tpr)
tprs = np.array(tprs)
mean_tprs[model] = tprs.mean(axis=0)
return mean_tprs
des = 'all_'
if not des:
plot_bams = {k: bams[k] for k in bams.keys() if '-' not in k}
des = ''
names = plot_bams.keys()
aucs = [model.auroc_avg() for _, model in plot_bams.items()]
legends = [f'{model} ({auc})' for model, auc in zip(names, aucs)]
elif des == 'avg_':
plot_bams = {k: bams[k] for k in bams.keys() if 'AVG' in k}
names = [name[4:] for name in plot_bams.keys()]
aucs = [model.auroc_avg() for _, model in plot_bams.items()]
legends = [f'{model} ({auc})' for model, auc in zip(names, aucs)]
elif des == 'max_':
plot_bams = {k: bams[k] for k in bams.keys() if 'MAX' in k}
names = [name[4:] for name in plot_bams.keys()]
aucs = [model.auroc_avg() for _, model in plot_bams.items()]
legends = [f'{model} ({auc})' for model, auc in zip(names, aucs)]
elif des == 'all_':
plot_bams = bams
names = plot_bams.keys()
aucs = [model.auroc_avg() for _, model in plot_bams.items()]
legends = [f'{model} ({auc})' for model, auc in zip(names, aucs)]
legends
base_fpr = np.linspace(0, 1, 100)
mean_tprs = get_mean_tprs(plot_bams, base_fpr)
fig, ax = plt.subplots(1, 1, figsize=(11, 8))
for i, (model, mean_tpr) in enumerate(mean_tprs.items()):
ax.plot(base_fpr, mean_tpr)
ax.plot([0, 1], [0, 1], linestyle=':')
ax.grid(b=True, which='major', color='#d3d3d3', linewidth=1.0)
ax.grid(b=True, which='minor', color='#d3d3d3', linewidth=0.5)
ax.set_ylabel('Sensitivity')
ax.set_xlabel('1 - Specificity')
ax.legend(legends)
if save:
fig.savefig(figdir/f'{transfer}_{des}mean_auc.pdf', dpi=300)
###Output
_____no_output_____ |
DS_1/PY_OOP.ipynb | ###Markdown
Object Oriented Programming
Over the last lectures, I have sometimes referred to **objects** or **Python objects**. I've also mentioned **methods** of objects (e.g. the `get` method of `dict`). What do these terms mean?

For now we can think of an object as anything we can store in a variable. We can have objects with different `type`. We might also call an object's `type` its **class**. We'll come back to class later.
###Code
x = 42
print('%d is an object of %s' % (x, type(x)))
x = 'Hello world!'
print('%s is an object of %s' % (x, type(x)))
x = {'name': 'Dylan', 'age': 26}
print('%s is an object of %s' % (x, type(x)))
###Output
42 is an object of <class 'int'>
Hello world! is an object of <class 'str'>
{'name': 'Dylan', 'age': 26} is an object of <class 'dict'>
###Markdown
We already know that integers, strings, and dictionaries behave differently. They have different properties and different capabilities. In the language of programming, we say they have different **attributes** and **methods**.An object's attributes are its internal variables that are used to store information about the object.
###Code
0.1+0.3
# a complex number has real and imaginary parts
x = complex(5, 3)
print(x.real)
print(x.imag)
###Output
_____no_output_____
###Markdown
An object's methods are its internal functions that implement different capabilities.
###Code
x = 'Dylan'
print(x.lower())
print(x.upper())
###Output
_____no_output_____
###Markdown
We'll interact with an object's methods more often than its attributes. The attributes represent the _state_ of an object. We usually prefer to mutate the state of an object via its methods, since the methods represent the actions one can take safely without breaking the object. Often the attributes of an object will be immutable.
###Code
%%expect_exception AttributeError
x = complex(5, 3)
x.real = 6
###Output
_____no_output_____
###Markdown
An example of a method that mutates an object is the `append` method of a `list`.
###Code
x = [35, 'example', 348.1]
x.append(True)
print(x)
###Output
_____no_output_____
###Markdown
How do we know what the attributes and methods of an object are? We can use Python's `dir` function. We can use `dir` on an object or on a class.
###Code
# dir on an object
x = 42
print(dir(x)[-6:]) # I've truncated the results for clarity
# dir on a class
print(dir(int)[-6:])
###Output
_____no_output_____
###Markdown
We can also look up documentation on the class. For example, [here's Python's documentation on the built-in Python types](https://docs.python.org/2/library/stdtypes.html). We'll use documentation more and more as we incorporate third-party libraries and tools into Python.

Classes
But this isn't the whole story. The methods and attributes of a `dict` don't tell us anything about key-value pairs or hashing. The full definition of an object is an object's class. We can define our own classes to create objects that carry out a variety of related tasks or represent information in a convenient way. Some examples we'll deal with later in the course are classes for making plots and graphs, classes for creating and analyzing tables of data, and classes for doing statistics and regression.

For now, let's implement a class called `Rational` for working with fractional numbers (e.g. 5/15). The first thing we'll need `Rational` to do is to be able to create a `Rational` object. We define how this should work with a special (hidden) method called `__init__`. We'll also define another special method called `__repr__` that tells Python how to print out the object.
###Code
class Rational(object):
def __init__(self, numerator, denominator):
self.numerator = numerator
self.denominator = denominator
def __repr__(self):
return '%d/%d' % (self.numerator, self.denominator)
fraction = Rational(4, 3)
print(fraction)
###Output
_____no_output_____
###Markdown
You might have noticed that both of the methods took as a first argument the keyword `self`. The first argument to any method in a class is the instance of the class upon which the method is being called. Think of a class like a blueprint from which possibly many objects are built. The `self` argument is the mechanism Python uses so that the method can know which instance of the class it is being called upon. Let's say we create a class `MyClass` with a method `.do_it(self)`; if we instantiate an object from this class, we can call the method in two ways:
###Code
class MyClass(object):
def __init__(self, num):
self.num = num
def do_it(self):
print(self.num)
myclass = MyClass(2)
myclass.do_it()
MyClass.do_it(myclass)
###Output
_____no_output_____
###Markdown
In the first way, `myclass.do_it()`, the `self` argument is understood because `myclass` is an instance of `MyClass`. This is the almost universal way to call a method. The other possibility is `MyClass.do_it(myclass)`, where we pass in the object `myclass` as the `self` argument; this syntax is much less common. Like all Python arguments, there is no need for `self` to be named `self`; we could also call it `this` or `apple` or `wizard`. However, the use of `self` is a very strong Python convention which is rarely broken. You should use this convention so that your code is understood by other people.

Let's get back to our `Rational` class. So far, we can make a `Rational` object and `print` it out, but it can't do much else. We might also want a `reduce` method that will divide the numerator and denominator by their greatest common divisor. We will therefore need to write a function that computes the greatest common divisor. We'll add these to our class definition.
###Code
class Rational(object):
def __init__(self, numerator, denominator):
self.numerator = numerator
self.denominator = denominator
def __repr__(self):
return '%d/%d' % (self.numerator, self.denominator)
def _gcd(self):
smaller = min(self.numerator, self.denominator)
small_divisors = {i for i in range(1, smaller + 1) if smaller % i == 0}
larger = max(self.numerator, self.denominator)
common_divisors = {i for i in small_divisors if larger % i == 0}
return max(common_divisors)
def reduce(self):
gcd = self._gcd()
self.numerator = self.numerator / gcd
self.denominator = self.denominator / gcd
return self
fraction = Rational(16, 32)
fraction.reduce()
print(fraction)
###Output
_____no_output_____
###Markdown
We're gradually building up the functionality of our `Rational` class, but it has a huge problem: we can't do math with it!
###Code
%%expect_exception TypeError
print(4 * fraction)
###Output
_____no_output_____
###Markdown
We have to tell Python how to implement mathematical operators (`+`, `-`, `*`, `/`) for our class.
###Code
print(dir(int))
###Output
_____no_output_____
###Markdown
If we look at `dir(int)` we see it has hidden methods like `__add__`, `__div__`, `__mul__`, `__sub__`, etc. Just like `__repr__` tells Python how to `print` our object, these hidden methods tell Python how to handle mathematical operators.

Let's add the methods implementing mathematical operations to our class definition. To perform addition or subtraction, we'll have to find a common denominator with the number we're adding. For simplicity, we'll only implement multiplication. We won't be able to add, subtract, or divide. Even implementing only multiplication will require quite a bit of logic.
###Code
class Rational(object):
def __init__(self, numerator, denominator):
self.numerator = numerator
self.denominator = denominator
def __repr__(self):
return '%d/%d' % (self.numerator, self.denominator)
def __mul__(self, number):
if isinstance(number, int):
return Rational(self.numerator * number, self.denominator)
elif isinstance(number, Rational):
return Rational(self.numerator * number.numerator, self.denominator * number.denominator)
else:
raise TypeError('Expected number to be int or Rational. Got %s' % type(number))
def _gcd(self):
smaller = min(self.numerator, self.denominator)
small_divisors = {i for i in range(1, smaller + 1) if smaller % i == 0}
larger = max(self.numerator, self.denominator)
common_divisors = {i for i in small_divisors if larger % i == 0}
return max(common_divisors)
def reduce(self):
gcd = self._gcd()
self.numerator = self.numerator / gcd
self.denominator = self.denominator / gcd
return self
print(Rational(4, 6) * 3)
print(Rational(5, 9) * Rational(2, 3))
%%expect_exception TypeError
# remember, no support for float
print(Rational(4, 6) * 2.3)
%%expect_exception TypeError
# also, no addition, subtraction, etc.
print(Rational(4, 6) + Rational(2, 3))
###Output
_____no_output_____
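###Markdown
As a hedged aside (not part of the original lesson): addition could be supported in the same style by implementing `__add__` over a common denominator. A minimal sketch, attached to the class after the fact:
###Code
def rational_add(self, number):
    # sketch: assumes `number` is a Rational; a full version would also handle int
    if not isinstance(number, Rational):
        raise TypeError('Expected number to be Rational. Got %s' % type(number))
    numerator = (self.numerator * number.denominator
                 + number.numerator * self.denominator)
    return Rational(numerator, self.denominator * number.denominator)

Rational.__add__ = rational_add
print(Rational(4, 6) + Rational(2, 3))  # prints 24/18; .reduce() would give 4/3
###Output
_____no_output_____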
###Markdown
Defining classes can be a lot of work. We have to imagine all the ways we might want to use an object, and where we might run into trouble. This is also true of defining functions, but classes will typically handle many tasks while a function might only do one.

Private Methods in Python
You might have noticed we have used some methods which start with `_`, such as `_gcd`. This has a conventional meaning in Python which is formally enforced in other languages: the notion of a private method. Classes are used to encapsulate functionality and data while providing an interface to the outside world of other objects. Think of a program as a company: each worker has their own responsibilities, and they know that other people in the company perform certain tasks, but they don't necessarily know how those people perform those tasks. To make this possible, classes have both public and private methods. Public methods are methods which are exposed to other objects or user interaction. Private methods are used internally by the object, often in a "helper" sense. In some languages this notion of public and private methods is enforced, and the programmer has to specify every method as either public or private. In Python every method is public, but to distinguish which methods we mean to be private, we add an underscore to the front of the method name, hence `_gcd`. This is a note to someone using the class that this method should only be called inside the object and may be subject to change in new versions, whereas the public methods will hopefully not change their interface.

Another Python convention dealing with underscores is the so-called `dunder` methods, which have double underscores before and after the method names. There are a bunch of these in Python (`__init__`, `__name__`, `__add__`, etc.) and they have special meaning. Note that they are generally considered private methods as well, except in special circumstances. Methods like `__add__` are what allow the programmer to specify the `+` operation. Since these methods have special meaning to Python, they should only be used with care. Additionally, even though overloading things like the `+` operator might make sense to you as you program it, it can be very confusing to someone reading your code, since Python's dynamic type system usually does not allow determination of types until runtime; defining an explicit `.add` method is usually much clearer.

When do we want Classes?
When we want to perform a set of related tasks, especially in repetition, we will usually want to define a new class. We will see that in most of the third-party libraries we will use, the major tools they introduce to Python are new classes. For example, later in the course we'll learn about the Pandas library, whose main feature is the `DataFrame` class.
###Code
import pandas as pd
df = pd.DataFrame({'a': [1, 2, 5], 'b': [True, False, True]})
print(type(df))
df.head()
###Output
_____no_output_____
###Markdown
Here's the (abridged) beginning of the DataFrame class definition:```pythonclass DataFrame(NDFrame): def __init__(self, data=None, index=None, columns=None, dtype=None, copy=False): if data is None: data = {} if dtype is not None: dtype = self._validate_dtype(dtype) if isinstance(data, DataFrame): data = data._data if isinstance(data, BlockManager): mgr = self._init_mgr(data, axes=dict(index=index, columns=columns), dtype=dtype, copy=copy) elif isinstance(data, dict): mgr = self._init_dict(data, index, columns, dtype=dtype) elif isinstance(data, ma.MaskedArray): import numpy.ma.mrecords as mrecords masked recarray if isinstance(data, mrecords.MaskedRecords): mgr = _masked_rec_array_to_mgr(data, index, columns, dtype, copy) a masked array else: mask = ma.getmaskarray(data) if mask.any(): data, fill_value = maybe_upcast(data, copy=True) data[mask] = fill_value else: data = data.copy() mgr = self._init_ndarray(data, index, columns, dtype=dtype, copy=copy) elif isinstance(data, (np.ndarray, Series, Index)): if data.dtype.names: data_columns = list(data.dtype.names) data = dict((k, data[k]) for k in data_columns) if columns is None: columns = data_columns mgr = self._init_dict(data, index, columns, dtype=dtype) elif getattr(data, 'name', None) is not None: mgr = self._init_dict({data.name: data}, index, columns, dtype=dtype) else: mgr = self._init_ndarray(data, index, columns, dtype=dtype, copy=copy) elif isinstance(data, (list, types.GeneratorType)): if isinstance(data, types.GeneratorType): data = list(data) if len(data) > 0: if is_list_like(data[0]) and getattr(data[0], 'ndim', 1) == 1: if is_named_tuple(data[0]) and columns is None: columns = data[0]._fields arrays, columns = _to_arrays(data, columns, dtype=dtype) columns = _ensure_index(columns) set the index if index is None: if isinstance(data[0], Series): index = _get_names_from_index(data) elif isinstance(data[0], Categorical): index = _default_index(len(data[0])) else: index = _default_index(len(data)) mgr = _arrays_to_mgr(arrays, columns, index, columns, dtype=dtype) else: mgr = self._init_ndarray(data, index, columns, dtype=dtype, copy=copy) else: mgr = self._init_dict({}, index, columns, dtype=dtype) elif isinstance(data, collections.Iterator): raise TypeError("data argument can't be an iterator") else: try: arr = np.array(data, dtype=dtype, copy=copy) except (ValueError, TypeError) as e: exc = TypeError('DataFrame constructor called with ' 'incompatible data and dtype: %s' % e) raise_with_traceback(exc) if arr.ndim == 0 and index is not None and columns is not None: values = cast_scalar_to_array((len(index), len(columns)), data, dtype=dtype) mgr = self._init_ndarray(values, index, columns, dtype=values.dtype, copy=False) else: raise ValueError('DataFrame constructor not properly called!') NDFrame.__init__(self, mgr, fastpath=True)``` That's a lot of code just for `__init__`!Often we'll use the relationship between a new class and existing classes to _inherit_ functionality, saving us from writing some code. InheritanceOften the classes we define in Python will build off of existing ideas in other classes. For example, our `Rational` class is a number, so it should behave like other numbers. We could write an implementation of `Rational` that uses `float` arithmetic and simply converts between floating point and rational representations during input and output. This would save us complexity in implementing the arithmetic, but might complicate object creation and representation. 
Even if you never write a class, it's useful to understand the idea of inheritance and the relationship between classes.

Let's write a general class called `Rectangle`; it will have two attributes, a height and a length, as well as a few methods.
###Code
class Rectangle(object):
def __init__(self, height, length):
self.height = height
self.length = length
def area(self):
return self.height * self.length
def perimeter(self):
return 2 * (self.height + self.length)
###Output
_____no_output_____
###Markdown
Now a square is also a rectangle, but it's somewhat more restricted in that its height equals its length, so we can subclass `Rectangle` and enforce this in code.
###Code
class Square(Rectangle):
def __init__(self, length):
super(Square, self).__init__(length, length)
s = Square(5)
s.area(), s.perimeter()
###Output
_____no_output_____
###Markdown
Sometimes (although not often) we want to actually check the type of a Python object (what class it is from). There are two ways of doing this; let's first look at a few examples to get a sense of the difference.
###Code
type(s) == Square
type(s) == Rectangle
isinstance(s, Rectangle)
###Output
_____no_output_____
###Markdown
Note the difference: `type(s) == Rectangle` is `False` because `s` is literally a `Square`, while `isinstance(s, Rectangle)` is `True` because `Square` is a subclass of `Rectangle`. Prefer `isinstance` when subclasses should also count. |
es.rcs.tfm/es.rcs.tfm.nlp/src/test/python/tfm-2.3.5/create_models.ipynb | ###Markdown
Creates TensorFlow Graphs for spark-nlp DL Annotators and Models
###Code
import os
try:
os.chdir(os.path.join(os.getcwd(), '/home/rcuesta/TFM/es.rcs.tfm/es.rcs.tfm.nlp/src/main/python/tfm'))
print(os.getcwd())
except:
pass
import numpy as np
import os
import tensorflow as tf
import string
import random
import math
import sys
import shutil
from pathlib import Path
from ner_model import NerModel
from dataset_encoder import DatasetEncoder
from ner_model_saver import NerModelSaver
CORPUS_PATH="/home/rcuesta/TFM/es.rcs.tfm/es.rcs.tfm.corpus/"
DATASET_PATH=CORPUS_PATH + "datasets/"
BERT_PATH=DATASET_PATH + 'bert/'
use_contrib = False if os.name == 'nt' else True
name_prefix = 'blstm-noncontrib' if not use_contrib else 'blstm'
def create_graph(ntags, embeddings_dim, nchars, lstm_size = 128):
#RCS if sys.version_info[0] != 3 or sys.version_info[1] >= 7:
if sys.version_info[0] != 3 or sys.version_info[1] >= 9:
print('Python 3.9 or above is not supported by this TensorFlow version')
return
#RCS if tf.__version__ != '1.12.0':
if tf.__version__ != '1.13.2':
print('Spark NLP is compiled with TensorFlow 1.12.0; this notebook expects 1.13.2. Please use a matching version.')
return
tf.reset_default_graph()
model_name = name_prefix+'_{}_{}_{}_{}'.format(ntags, embeddings_dim, lstm_size, nchars)
with tf.Session() as session:
ner = NerModel(session=None, use_contrib=use_contrib)
ner.add_cnn_char_repr(nchars, 25, 30)
ner.add_bilstm_char_repr(nchars, 25, 30)
ner.add_pretrained_word_embeddings(embeddings_dim)
ner.add_context_repr(ntags, lstm_size, 3)
ner.add_inference_layer(True)
ner.add_training_op(5)
ner.init_variables()
saver = tf.train.Saver()
file_name = model_name + '.pb'
tf.train.write_graph(ner.session.graph, BERT_PATH, file_name, False)
ner.close()
session.close()
###Output
_____no_output_____
###Markdown
Attributes info
- 1st attribute: max number of tags (must be at least equal to the number of unique labels, including O if IOB)
- 2nd attribute: embeddings dimension
- 3rd attribute: max number of characters processed (must be at least the largest possible amount of characters)
- 4th attribute: LSTM size (128)
###Code
create_graph(14, 1024, 62)
#create_graph(80, 200, 125)
# create_graph(10, 200, 100)
# create_graph(10, 300, 100)
# create_graph(10, 768, 100)
# create_graph(10, 1024, 100)
# create_graph(25, 300, 100)
tf.__version__
###Output
_____no_output_____ |
course_2/edited_material/Part_5_Advanced_Statistical_Methods_(Machine_Learning)/S39_L271/Heatmaps_with_comments.ipynb | ###Markdown
Cluster analysis
In this notebook we explore heatmaps and dendrograms.
Import the relevant libraries
###Code
import numpy as np
import pandas as pd
import seaborn as sns
# We don't need matplotlib this time
###Output
_____no_output_____
###Markdown
Load the data
###Code
# Load the standardized data
# index_col is an argument we can set to one of the columns
# this will cause one of the Series to become the index
data = pd.read_csv('Country clusters standardized.csv', index_col='Country')
# Create a new data frame for the inputs, so we can clean it
x_scaled = data.copy()
# Drop the variables that are unnecessary for this solution
x_scaled = x_scaled.drop(['Language'],axis=1)
# Check what's inside
x_scaled
###Output
_____no_output_____
###Markdown
Plot the data
###Code
# Using the Seaborn method 'clustermap' we can get a heatmap and dendrograms for both the observations and the features
# The cmap 'mako' is the coolest if you ask me
sns.clustermap(x_scaled, cmap='mako')
###Output
_____no_output_____ |
01_Getting_&_Knowing_Your_Data/Chipotle/temp.ipynb | ###Markdown
Ex2 - Getting and Knowing your Data
This time we are going to pull data directly from the internet. Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.
Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
import requests
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv).
###Code
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv'
chipo = pd.read_csv(url, sep = '\t')
###Output
_____no_output_____
###Markdown
Step 3. Assign it to a variable called chipo.
###Code
chipo
###Output
_____no_output_____
###Markdown
Step 4. See the first 10 entries
###Code
chipo.head(10)
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in the dataset?
###Code
# Solution 1
chipo.shape
# Solution 2
chipo.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4622 entries, 0 to 4621
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 order_id 4622 non-null int64
1 quantity 4622 non-null int64
2 item_name 4622 non-null object
3 choice_description 3376 non-null object
4 item_price 4622 non-null object
dtypes: int64(2), object(3)
memory usage: 180.7+ KB
###Markdown
Step 6. What is the number of columns in the dataset?
###Code
len(chipo.columns)
###Output
_____no_output_____
###Markdown
Step 7. Print the name of all the columns.
###Code
chipo.columns
###Output
_____no_output_____
###Markdown
Step 8. How is the dataset indexed?
###Code
chipo.index
###Output
_____no_output_____
###Markdown
Step 9. Which was the most-ordered item?
###Code
count = chipo.groupby(['item_name']).agg({'quantity':'sum'})
count.idxmax()
###Output
_____no_output_____
###Markdown
Step 10. For the most-ordered item, how many items were ordered?
###Code
count.max()
###Output
_____no_output_____
###Markdown
Step 11. What was the most ordered item in the choice_description column?
###Code
choice=chipo.groupby(['choice_description']).agg({'quantity':'sum'})
choice.idxmax()
###Output
_____no_output_____
###Markdown
Step 12. How many items were orderd in total?
###Code
chipo.quantity.sum()
###Output
_____no_output_____
###Markdown
Step 13. Turn the item price into a float Step 13.a. Check the item price type
###Code
chipo.item_price.dtype
# earlier attempts, kept for reference:
#chipo[chipo.columns[-1]]=chipo[chipo.columns[-1]].replace('[\$,]', '', regex=True).astype(float)
#chipo.item_price.str.slice(1).astype(float)
###Output
_____no_output_____
###Markdown
Step 13.b. Create a lambda function and change the type of item price
###Code
# strip the leading '$' (and trailing character) and convert to float
dollarizer = lambda x: float(x[1:-1])
chipo.item_price = chipo.item_price.apply(dollarizer)
###Output
_____no_output_____
###Markdown
Step 13.c. Check the item price type
###Code
chipo.dtypes
###Output
_____no_output_____
###Markdown
Step 14. How much was the revenue for the period in the dataset?
###Code
(chipo['quantity'] * chipo['item_price']).sum()
###Output
_____no_output_____
###Markdown
Step 15. How many orders were made in the period?
Step 16. What is the average revenue amount per order?
###Code
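# Step 15 — a hedged sketch (assuming each unique order_id marks one order):
chipo.order_id.nunique()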
# Solution 1
chipo['revenue'] = chipo['quantity'] * chipo['item_price']
order_grouped = chipo.groupby(by=['order_id']).sum()
order_grouped.mean()['revenue']
# Solution 2
(chipo['item_price']*chipo['quantity']).mean()
###Output
_____no_output_____
###Markdown
Step 17. How many different items are sold?
###Code
len(chipo['item_name'].unique())
###Output
_____no_output_____ |
notebooks/Model inference - example.ipynb | ###Markdown
Move model from TensorFlow to MLflow registry
###Code
# Imports assumed by this notebook (no import cell precedes this one):
import mlflow
import mlflow.keras
import mlflow.pyfunc
import numpy as np
import tensorflow as tf
from PIL import Image

model_path = "/home/jovyan/dist-tf-model/"
restored_keras_model = tf.keras.models.load_model(model_path)
with mlflow.start_run() as run:
mlflow.keras.log_model(restored_keras_model, "models")
run_id ="425438f8a7b0471d9413684deeb63deb"
experiment_id = "0"
from pyspark.sql import SparkSession
from pyspark.sql.functions import col
import pyspark.sql.functions
from pyspark.sql.types import *
spark = SparkSession \
.builder \
.appName("Model inference") \
.getOrCreate()
###Output
_____no_output_____
###Markdown
Define mlfloyw.pyfunc wrapper for the Model:
###Code
# TIP: Create custom Python pyfunc model that transforms and predicts on inference data
# Allows the inference pipeline to be independent of the model framework used in training pipeline
class KerasCNNModelWrapper(mlflow.pyfunc.PythonModel):
def __init__(self, model_path):
self.model_path = model_path
def load_context(self, context):
# Load the Keras-native representation of the MLflow
# model
print(self.model_path)
self.model = mlflow.keras.load_model(
model_uri=self.model_path)
def predict(self, context, model_input):
import tensorflow as tf
import json
class_def = {
0: '212.teapot',
1: '234.tweezer',
2: '196.spaghetti',
3: '249.yo-yo',
}
model_input['origin'] = model_input['origin'].str.replace("dbfs:","/dbfs")
images = model_input['origin']
rtn_df = model_input.iloc[:,0:1]
rtn_df['prediction'] = None
rtn_df['probabilities'] = None
for index, row in model_input.iterrows():
image = np.round(np.array(Image.open(row['origin']).resize((224,224)),dtype=np.float32))
img = tf.reshape(image, shape=[-1, 224, 224, 3])
class_probs = self.model.predict(img)
classes = np.argmax(class_probs, axis=1)
class_prob_dict = dict()
for key, val in class_def.items():
class_prob_dict[val] = np.round(np.float(class_probs[0][int(key)]), 3).tolist()
rtn_df.loc[index,'prediction'] = classes[0]
rtn_df.loc[index,'probabilities'] = json.dumps(class_prob_dict)
return rtn_df[['prediction', 'probabilities']].values.tolist()
model_path = f"file:/home/jovyan/mlruns/{experiment_id}/{run_id}/artifacts/models"
wrappedModel = KerasCNNModelWrapper(model_path)
mlflow.pyfunc.log_model("pyfunc_model_v2", python_model=wrappedModel)
print(f"Inside MLflow Run with run_id `{run_id}` and experiment_id `{experiment_id}`")
###Output
Inside MLflow Run with run_id `425438f8a7b0471d9413684deeb63deb` and experiment_id `0`
###Markdown
Test the model with mlflow.pyfunc
###Code
# Test data. Using the same dataframe in this example
images_df = spark.read.parquet( "images_data/silver/augmented")
model_path = f"file:/home/jovyan/mlruns/{experiment_id}/{run_id}/artifacts/models"
# Always use the Production version of the model from the registry
mlflow_model_path = model_path
# Load model as a Spark UDF.
loaded_model = mlflow.pyfunc.spark_udf(spark, mlflow_model_path, result_type=ArrayType(StringType()))
# Predict on a Spark DataFrame.
scored_df = (images_df
.withColumn('origin', col("content"))
.withColumn('my_predictions', loaded_model(struct("origin")))
.drop("origin"))
scored_df.show(5, truncate=False)
###Output
_____no_output_____ |
notebooks/piexchange/pisender.ipynb | ###Markdown
Pi Sender
This process sends the value `pi` to a separate HELICS federate.
Initialization
###Code
# -*- coding: utf-8 -*-
import time
import helics as h
from math import pi
initstring = "-f 2 --name=mainbroker"
fedinitstring = "--broker=mainbroker --federates=1"
deltat = 0.01
helicsversion = h.helicsGetVersion()
print("PI SENDER: Helics version = {}".format(helicsversion))
###Output
_____no_output_____
###Markdown
Create the broker
###Code
print("Creating Broker")
broker = h.helicsCreateBroker("zmq", "", initstring)
print("Created Broker")
print("Checking if Broker is connected")
isconnected = h.helicsBrokerIsConnected(broker)
print("Checked if Broker is connected")
if isconnected == 1:
print("Broker created and connected")
###Output
_____no_output_____
###Markdown
Create the federate info object
###Code
# Create Federate Info object that describes the federate properties #
fedinfo = h.helicsCreateFederateInfo()
# Set Federate name #
h.helicsFederateInfoSetCoreName(fedinfo, "TestA Federate")
# Set core type from string #
h.helicsFederateInfoSetCoreTypeFromString(fedinfo, "zmq")
# Federate init string #
h.helicsFederateInfoSetCoreInitString(fedinfo, fedinitstring)
# Set the message interval (timedelta) for federate. Note that the
# HELICS minimum message time interval is 1 ns and by default
# it uses a time delta of 1 second. What is provided to the
# setTimedelta routine is a multiplier for the default timedelta.
# Set one second message interval #
h.helicsFederateInfoSetTimeProperty(fedinfo, h.helics_property_time_delta, deltat)
###Output
_____no_output_____
###Markdown
Create a value federate
###Code
# Create value federate #
vfed = h.helicsCreateValueFederate("TestA Federate", fedinfo)
print("PI SENDER: Value federate created")
# Register the publication #
pub = h.helicsFederateRegisterGlobalTypePublication(vfed, "testA", "double", "")
print("PI SENDER: Publication registered")
###Output
_____no_output_____
###Markdown
Enter execution
###Code
# Enter execution mode #
h.helicsFederateEnterExecutingMode(vfed)
print("PI SENDER: Entering execution mode")
###Output
_____no_output_____
###Markdown
Start simulation
###Code
# This federate will be publishing the value of pi at each requested time step #
this_time = 0.0
value = pi
for t in range(5, 10):
val = value
currenttime = h.helicsFederateRequestTime(vfed, t)
h.helicsPublicationPublishDouble(pub, val)
print(
"PI SENDER: Sending value pi = {} at time {} to PI RECEIVER".format(
val, currenttime
)
)
time.sleep(1)
h.helicsFederateFinalize(vfed)
print("PI SENDER: Federate finalized")
while h.helicsBrokerIsConnected(broker):
time.sleep(1)
h.helicsFederateFree(vfed)
h.helicsCloseLibrary()
print("PI SENDER: Broker disconnected")
###Output
_____no_output_____ |
04_Save_Restore_CN.ipynb | ###Markdown
TensorFlow Tutorial 04 Save & Restore
by [Magnus Erik Hvass Pedersen](http://www.hvass-labs.org/) / [GitHub (Chinese)](https://github.com/Hvass-Labs/TensorFlow-Tutorials-Chinese) / [GitHub](https://github.com/Hvass-Labs/TensorFlow-Tutorials) / [Videos on YouTube](https://www.youtube.com/playlist?list=PL9Hr9sNUjfsmEu1ZniY0XpHSzl5uihcXZ)
Chinese translation by [thrillerist](https://zhuanlan.zhihu.com/insight-pixel); revised by [ZhouGeorge](https://github.com/ZhouGeorge)

Warning
**This tutorial does not work with TensorFlow v1.9, because the PrettyTensor builder API is no longer being developed and supported by Google's developers. It is recommended that you use the _Keras API_ instead, which makes saving and restoring a model much easier; see Tutorial 03-C.**

Introduction
This tutorial demonstrates how to save and restore the variables of a neural network. During optimization, the variables of the neural network are saved whenever the classification accuracy on the validation set improves. The optimization is aborted when the accuracy has not improved for 1000 iterations. We then reload the variables that performed best on the validation set.

This strategy is called Early Stopping. It is used to avoid overfitting of the neural network. Overfitting occurs when the neural network is trained for too long, so it starts learning the noise in the training set, which causes it to misclassify new images.

This tutorial uses a neural network to recognize handwritten digits in the MNIST data set, where overfitting is not really a big problem. But the tutorial demonstrates the idea of Early Stopping.

This builds on the previous tutorials, so you should be familiar with the basics of TensorFlow and the add-on package Pretty Tensor. A lot of the code and text here is similar to the previous tutorials, so you can skim this quickly if you have already read them.

Flowchart
The chart below shows roughly how data flows in the convolutional neural network implemented below. The network has two convolutional layers and two fully-connected layers, with the last layer being used for classifying the input images. See Tutorial 02 for a more detailed description of the network and of convolutions.
###Code
from IPython.display import Image
Image('images/02_network_flowchart.png')
###Output
_____no_output_____
###Markdown
Imports
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import time
from datetime import timedelta
import math
import os
# Use PrettyTensor to simplify Neural Network construction.
import prettytensor as pt
###Output
_____no_output_____
###Markdown
This was developed using Python 3.5.2 (Anaconda), and the TensorFlow version is:
###Code
tf.__version__
###Output
_____no_output_____
###Markdown
PrettyTensor version:
###Code
pt.__version__
###Output
_____no_output_____
###Markdown
Load Data
The MNIST data set is about 12 MB and will be downloaded automatically if it is not located in the given path.
###Code
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
###Output
Extracting data/MNIST/train-images-idx3-ubyte.gz
Extracting data/MNIST/train-labels-idx1-ubyte.gz
Extracting data/MNIST/t10k-images-idx3-ubyte.gz
Extracting data/MNIST/t10k-labels-idx1-ubyte.gz
###Markdown
The MNIST data set has now been loaded and consists of 70,000 images and the corresponding labels (i.e. the classes of the images). The data set is split into three mutually independent subsets. We will only use the training set and the test set in this tutorial.
###Code
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
###Output
Size of:
- Training-set: 55000
- Test-set: 10000
- Validation-set: 5000
###Markdown
The class labels are One-Hot encoded, which means that each label is a vector of length 10 with all elements zero except for one. The index of that element is the class number, that is, the digit drawn in the corresponding image. We also need the class numbers of the test set as integers, which are computed as follows.
###Code
data.test.cls = np.argmax(data.test.labels, axis=1)
data.validation.cls = np.argmax(data.validation.labels, axis=1)
###Output
_____no_output_____
###Markdown
Data Dimensions
The data dimensions are used in several places in the source code below. They are defined in a single place, so we can use these variables in the code instead of writing the numbers directly.
###Code
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
###Output
_____no_output_____
###Markdown
Helper function for plotting images
This function is used to plot 9 images in a 3x3 grid and write the true and predicted classes below each image.
###Code
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Plot a few images to see if the data is correct.
###Code
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
###Output
_____no_output_____
###Markdown
TensorFlow Graph
The entire purpose of TensorFlow is to use a so-called computational graph, which is much more efficient than doing the same computations directly in Python. TensorFlow is more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the single mathematical operation at a given point in time.

TensorFlow can also automatically calculate the gradients of the variables that need to be optimized, so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions, so the gradient of the entire graph can be derived with the chain rule.

TensorFlow can also take advantage of multi-core CPUs and GPUs, and Google has even built special chips for TensorFlow called TPUs (Tensor Processing Units), which are faster than GPUs.

A TensorFlow graph consists of the following parts, which are described in detail below:
* Placeholder variables used for changing the inputs to the graph.
* Model variables that will be optimized so as to make the model perform better.
* The model, which is essentially a mathematical function that calculates some outputs given the placeholders and the model's input variables.
* A cost measure used to guide the optimization of the variables.
* An optimization strategy that updates the variables of the model.

In addition, the TensorFlow graph may also contain various debugging state, such as logging data to be displayed with TensorBoard, which is not covered in this tutorial.

Placeholder variables
Placeholders serve as the inputs to the graph; we may change them each time we execute the graph. This process is called feeding the placeholder variables and is described further below. First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. It is also a tensor, which means a multi-dimensional vector or matrix. The data type is set to float32 and the shape is set to `[None, img_size_flat]`, where `None` means that the tensor may hold an arbitrary number of images, with each image being a vector of length `img_size_flat`.
###Code
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
###Output
_____no_output_____
###Markdown
The convolutional layers expect `x` to be encoded as a 4-dim tensor, so we have to reshape it to `[num_images, img_height, img_width, num_channels]`. Note that `img_height == img_width == img_size`, and `num_images` is inferred automatically if the size of the first dimension is set to -1. The reshaping operation is as follows:
###Code
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
###Output
_____no_output_____
###Markdown
Next we define the placeholder variable for the true labels associated with the images that are input in the placeholder variable `x`. The shape of this placeholder is `[None, num_classes]`, which means it may hold an arbitrary number of labels, and each label is a vector of length `num_classes`, which is 10 in this case.
###Code
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
###Output
_____no_output_____
###Markdown
We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator, so nothing is calculated at this point.
###Code
y_true_cls = tf.argmax(y_true, axis=1)
###Output
_____no_output_____
###Markdown
Neural Network This section implements the Convolutional Neural Network using PrettyTensor, which is much simpler than a direct implementation in TensorFlow, see Tutorial #03. The basic idea is to wrap the input tensor `x_image` in a PrettyTensor object, which has helper-functions for adding new convolutional layers, so the entire neural network can be created this way. PrettyTensor handles the variable allocation, etc.
###Code
x_pretty = pt.wrap(x_image)
###Output
_____no_output_____
###Markdown
Now that we have wrapped the input image in a PrettyTensor object, we can add the convolutional and fully-connected layers in just a few lines of code. Note that `pt.defaults_scope(activation_fn=tf.nn.relu)` in the `with` block applies `activation_fn=tf.nn.relu` as an argument to each of the layers, so Rectified Linear Units (ReLU) are used for all of them. `defaults_scope` makes it easy to change arguments for all of the layers.
###Code
with pt.defaults_scope(activation_fn=tf.nn.relu):
y_pred, loss = x_pretty.\
conv2d(kernel=5, depth=16, name='layer_conv1').\
max_pool(kernel=2, stride=2).\
conv2d(kernel=5, depth=36, name='layer_conv2').\
max_pool(kernel=2, stride=2).\
flatten().\
fully_connected(size=128, name='layer_fc1').\
softmax_classifier(num_classes=num_classes, labels=y_true)
###Output
WARNING:tensorflow:From D:\anaconda\envs\tensorflow-gpu\lib\site-packages\tensorflow\contrib\nn\python\ops\cross_entropy.py:68: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Future major versions of TensorFlow will allow gradients to flow
into the labels input on backprop by default.
See tf.nn.softmax_cross_entropy_with_logits_v2.
###Markdown
Getting the Weights Below, we want to plot the weights of the neural network. When the network is constructed using PrettyTensor, all the variables of the layers are created indirectly by PrettyTensor, so we have to retrieve the variables from TensorFlow. We used the names `layer_conv1` and `layer_conv2` for the two convolutional layers. These are also called variable scopes (not to be confused with the `defaults_scope` described above). PrettyTensor automatically names the variables it creates for each layer, so we can retrieve the weights of a layer using the layer's scope-name and the variable-name. The implementation is somewhat awkward because we have to use the TensorFlow function `get_variable()`, which was really designed for another purpose: either creating a new variable or re-using an existing one. The easiest thing is to make the following helper-function.
###Code
def get_weights_variable(layer_name):
# Retrieve an existing variable named 'weights' in the scope
# with the given layer_name.
# This is awkward because the TensorFlow function was
# really intended for another purpose.
with tf.variable_scope(layer_name, reuse=True):
variable = tf.get_variable('weights')
return variable
###Output
_____no_output_____
###Markdown
Using this helper-function we can retrieve the variables. These are TensorFlow objects. In order to get the contents of the variables, you need an operation like `contents = session.run(weights_conv1)`, as described further below.
###Code
weights_conv1 = get_weights_variable(layer_name='layer_conv1')
weights_conv2 = get_weights_variable(layer_name='layer_conv2')
###Output
_____no_output_____
###Markdown
Optimization Method PrettyTensor gave us the predicted class-labels (`y_pred`) as well as a loss-measure that must be minimized, so as to improve the ability of the neural network to classify the input images. It is unclear from the documentation of PrettyTensor whether the loss-measure is cross-entropy or something else. But we now use the `AdamOptimizer` to minimize the loss. Note that optimization is not performed at this point. In fact, nothing is calculated at all; we just add the optimizer-object to the TensorFlow graph for later execution.
###Code
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)
###Output
_____no_output_____
###Markdown
Performance Measures We need a few more performance measures to display the progress to the user. First we calculate the predicted class-number from the output of the neural network `y_pred`, which is a vector with 10 elements. The class-number is the index of the largest element.
###Code
y_pred_cls = tf.argmax(y_pred, axis=1)
###Output
_____no_output_____
###Markdown
Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.
###Code
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
###Output
_____no_output_____
###Markdown
The following calculates the classification accuracy by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
###Code
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
###Output
_____no_output_____
###Markdown
Saver In order to save the variables of the neural network, we create a so-called Saver-object, which is used for storing and retrieving all the variables of the TensorFlow graph. Nothing is actually saved at this point; the saving is done further below in the `optimize()` function.
###Code
saver = tf.train.Saver()
###Output
_____no_output_____
###Markdown
The saved files are often called checkpoints because they may be written at regular intervals during optimization. This is the directory used for saving and retrieving the data.
###Code
save_dir = 'checkpoints/'
###Output
_____no_output_____
###Markdown
Create the directory if it does not exist.
###Code
if not os.path.exists(save_dir):
os.makedirs(save_dir)
###Output
_____no_output_____
###Markdown
This is the path used for saving the checkpoint files.
###Code
save_path = os.path.join(save_dir, 'best_validation')
###Output
_____no_output_____
###Markdown
Running TensorFlow Create TensorFlow session Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
###Code
session = tf.Session()
###Output
_____no_output_____
###Markdown
Initialize variables The variables for `weights` and `biases` must be initialized before we start optimizing them. We write a simple wrapper-function for this, which will be called again below.
###Code
def init_variables():
session.run(tf.global_variables_initializer())
###Output
_____no_output_____
###Markdown
Execute the function now to initialize the variables.
###Code
init_variables()
###Output
_____no_output_____
###Markdown
Helper-function to perform optimization iterations There are 55,000 images in the training-set. It would take a long time to calculate the gradient of the model using all these images, so we use Stochastic Gradient Descent, which only uses a small batch of images in each iteration of the optimizer. If your computer crashes or becomes very slow because you run out of RAM, you may try to lower this number, but you may then also need to perform more optimization iterations.
###Code
train_batch_size = 64
###Output
_____no_output_____
###Markdown
Every 100 iterations of the optimization function below, the classification accuracy is calculated on the validation-set. The optimization is stopped if the validation accuracy has not improved in 1000 iterations. We need a few variables to keep track of this.
###Code
# Best validation accuracy seen so far.
best_validation_accuracy = 0.0
# Iteration-number for last improvement to validation accuracy.
last_improvement = 0
# Stop optimization if no improvement found in this many iterations.
require_improvement = 1000
###Output
_____no_output_____
###Markdown
This function performs a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set, and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations, where the validation accuracy is also calculated and saved to a file if it is an improvement.
###Code
# Counter for total number of iterations performed so far.
total_iterations = 0
def optimize(num_iterations):
# Ensure we update the global variables rather than local copies.
global total_iterations
global best_validation_accuracy
global last_improvement
# Start-time used for printing time-usage below.
start_time = time.time()
for i in range(num_iterations):
# Increase the total number of iterations performed.
# It is easier to update it in each iteration because
# we need this number several times in the following.
total_iterations += 1
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(train_batch_size)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
# Print status every 100 iterations and after last iteration.
if (total_iterations % 100 == 0) or (i == (num_iterations - 1)):
# Calculate the accuracy on the training-batch.
acc_train = session.run(accuracy, feed_dict=feed_dict_train)
# Calculate the accuracy on the validation-set.
# The function returns 2 values but we only need the first.
acc_validation, _ = validation_accuracy()
# If validation accuracy is an improvement over best-known.
if acc_validation > best_validation_accuracy:
# Update the best-known validation accuracy.
best_validation_accuracy = acc_validation
# Set the iteration for the last improvement to current.
last_improvement = total_iterations
# Save all variables of the TensorFlow graph to file.
saver.save(sess=session, save_path=save_path)
# A string to be printed below, shows improvement found.
improved_str = '*'
else:
# An empty string to be printed below.
# Shows that no improvement was found.
improved_str = ''
# Status-message for printing.
msg = "Iter: {0:>6}, Train-Batch Accuracy: {1:>6.1%}, Validation Acc: {2:>6.1%} {3}"
# Print it.
print(msg.format(i + 1, acc_train, acc_validation, improved_str))
# If no improvement found in the required number of iterations.
if total_iterations - last_improvement > require_improvement:
print("No improvement found in a while, stopping optimization.")
# Break out from the for-loop.
break
# Ending time.
end_time = time.time()
# Difference between start and end-times.
time_dif = end_time - start_time
# Print the time-usage.
print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
###Output
_____no_output_____
###Markdown
Helper-function to plot example errors Function for plotting examples of images from the test-set that have been mis-classified.
###Code
def plot_example_errors(cls_pred, correct):
# This function is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
###Output
_____no_output_____
###Markdown
Helper-function to plot the confusion matrix
###Code
def plot_confusion_matrix(cls_pred):
# This is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.matshow(cm)
# Make various adjustments to the plot.
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, range(num_classes))
plt.yticks(tick_marks, range(num_classes))
plt.xlabel('Predicted')
plt.ylabel('True')
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Helper-function for calculating classifications This function calculates the predicted classes of images and also returns a boolean array whether the classification of each image is correct. The calculation is done in batches because it might otherwise use too much RAM. If your computer crashes, try lowering the batch-size.
###Code
# Split the data-set in batches of this size to limit RAM usage.
batch_size = 256
def predict_cls(images, labels, cls_true):
# Number of images.
num_images = len(images)
# Allocate an array for the predicted classes which
# will be calculated in batches and filled into this array.
cls_pred = np.zeros(shape=num_images, dtype=int)
# Now calculate the predicted classes for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_images:
# The ending index for the next batch is denoted j.
j = min(i + batch_size, num_images)
# Create a feed-dict with the images and labels
# between index i and j.
feed_dict = {x: images[i:j, :],
y_true: labels[i:j, :]}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
return correct, cls_pred
###Output
_____no_output_____
###Markdown
Calculate the predicted class for the test-set.
###Code
def predict_cls_test():
return predict_cls(images = data.test.images,
labels = data.test.labels,
cls_true = data.test.cls)
###Output
_____no_output_____
###Markdown
Calculate the predicted class for the validation-set.
###Code
def predict_cls_validation():
return predict_cls(images = data.validation.images,
labels = data.validation.labels,
cls_true = data.validation.cls)
###Output
_____no_output_____
###Markdown
Helper-function for the classification accuracy This function calculates the classification accuracy given a boolean array of whether each image was correctly classified. For example, `cls_accuracy([True, True, False, False, False]) = 2/5 = 0.4`.
###Code
def cls_accuracy(correct):
# Calculate the number of correctly classified images.
# When summing a boolean array, False means 0 and True means 1.
correct_sum = correct.sum()
# Classification accuracy is the number of correctly classified
# images divided by the total number of images in the test-set.
acc = float(correct_sum) / len(correct)
return acc, correct_sum
###Output
_____no_output_____
###Markdown
Calculate the classification accuracy on the validation-set.
###Code
def validation_accuracy():
# Get the array of booleans whether the classifications are correct
# for the validation-set.
# The function returns two values but we only need the first.
correct, _ = predict_cls_validation()
# Calculate the classification accuracy and return it.
return cls_accuracy(correct)
###Output
_____no_output_____
###Markdown
Helper-function for showing the performance Function for printing the classification accuracy on the test-set. It takes a while to compute the classification for all the images in the test-set, which is why the functions above are called directly from this function, so the classifications do not have to be recalculated by each function.
###Code
def print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False):
# For all the images in the test-set,
# calculate the predicted classes and whether they are correct.
correct, cls_pred = predict_cls_test()
# Classification accuracy and the number of correct classifications.
acc, num_correct = cls_accuracy(correct)
# Number of images being classified.
num_images = len(correct)
# Print the accuracy.
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, num_correct, num_images))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
###Output
_____no_output_____
###Markdown
Helper-function for plotting convolutional weights
###Code
def plot_conv_weights(weights, input_channel=0):
# Assume weights are TensorFlow ops for 4-dim variables
# e.g. weights_conv1 or weights_conv2.
# Retrieve the values of the weight-variables from TensorFlow.
# A feed-dict is not necessary because nothing is calculated.
w = session.run(weights)
# Print mean and standard deviation.
print("Mean: {0:.5f}, Stdev: {1:.5f}".format(w.mean(), w.std()))
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
# Number of filters used in the conv. layer.
num_filters = w.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot all the filter-weights.
for i, ax in enumerate(axes.flat):
# Only plot the valid filter-weights.
if i<num_filters:
# Get the weights for the i'th filter of the input channel.
# The format of this 4-dim tensor is determined by the
# TensorFlow API. See Tutorial #02 for more details.
img = w[:, :, input_channel, i]
# Plot image.
ax.imshow(img, vmin=w_min, vmax=w_max,
interpolation='nearest', cmap='seismic')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Performance before any optimization The accuracy on the test-set is very low because the model variables have only been initialized and not optimized at all, so the model just classifies the images randomly.
###Code
print_test_accuracy()
###Output
Accuracy on Test-Set: 11.1% (1106 / 10000)
###Markdown
The convolutional weights are random, but it can be difficult to tell them apart from the optimized weights shown below. The mean and standard deviation are shown as well, so we can check whether there is a difference.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
Mean: -0.02261, Stdev: 0.29167
###Markdown
Performance after 10,000 optimization iterations We now perform 10,000 optimization iterations, stopping early if the performance on the validation-set has not improved over 1000 iterations. An asterisk * indicates that the classification accuracy on the validation-set improved.
###Code
optimize(num_iterations=10000)
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
###Output
Accuracy on Test-Set: 98.5% (9851 / 10000)
Example errors:
###Markdown
The convolutional weights have now been optimized. Compare these to the random weights shown above; they appear to be almost identical. In fact, at first I thought there was a bug in the program because the weights looked so similar before and after optimization. But save the images and compare them side by side (you can right-click to save them) and you will notice small differences. The mean and standard deviation have also changed slightly, so the optimized weights must be different.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
Mean: -0.00057, Stdev: 0.30640
###Markdown
Initialize variables again Initialize all the variables of the neural network with random values once more.
###Code
init_variables()
###Output
_____no_output_____
###Markdown
This means the neural network again classifies the images completely randomly, so the classification accuracy is very low because it is just random guessing.
###Code
print_test_accuracy()
###Output
Accuracy on Test-Set: 10.2% (1021 / 10000)
###Markdown
The convolutional weights should now look different from the ones above.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
Mean: 0.02746, Stdev: 0.27970
###Markdown
Restore the best variables Re-load all the variables that were saved to file during optimization.
###Code
saver.restore(sess=session, save_path=save_path)
###Output
INFO:tensorflow:Restoring parameters from checkpoints/best_validation
###Markdown
The classification accuracy is high again when using the previously saved variables. Note that the accuracy may be slightly higher or lower than reported earlier, because the variables in the file were chosen to maximize the classification accuracy on the validation-set, and 1000 further optimization iterations were performed after the file was saved, so these are the results of two slightly different sets of variables. Sometimes this leads to better or worse performance on the test-set.
###Code
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
###Output
Accuracy on Test-Set: 98.6% (9855 / 10000)
Example errors:
###Markdown
The convolutional weights are also almost identical to the ones shown earlier, although not completely identical because 1000 more optimization iterations were performed.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
Mean: -0.00115, Stdev: 0.30464
###Markdown
Close TensorFlow session We are now done using TensorFlow, so we close the session to release its resources.
###Code
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
###Output
_____no_output_____ |
DAPri-Damage Assessment Prioritization.ipynb | ###Markdown
Damage Assessment Prioritization (DAPRI) DSI-8 August 2019 Team: Sade Ekulona, Nick Minaie, Jeremy Opacich, Andrew Picart Problem StatementGiven a list of properties including the address, lat/long, damage level, damage comments, and the applicant's estimate of the damage (in $$), and other information, prioritize the properties for site visit and damage assessment. The idea is to prioritize properties that are in higher need of assessment for receiving funds sooner. Summary of the DAPri AlgorithmThe DAPri algorithm works in the steps described below. For each property, a `Priority Index` is calculated as below: Priority Index = (Damage Level) * (Home Safe To Live) * (Level of Access to Utilities) * (Level of Insurance Coverage) * {(Application Estimate) / (Estimated Home Value)} Damage Level (scale 0-3):When looking at property damage levels, it should be noted that according to FEMA there are four levels of damage (shown in the table below). Damage levels are assigned on a scale of 0~3 according to these levels.| FEMA Damage Level | Score ||-------------------|------------------------|| Destroyed | 3|| Major | 2 || Minor | 1 ||Affected with an Inaccessible category for the homes that cannot be reached for assessment| 0|**NOTE:** The reason inaccessible properties are scored at 0 for their damage level is that they cannot be reached for assessment, and until they become accessible they are prioritized as '0' (or low priority) for the time being. This means these properties will be pushed to the end of the list until they become accessible, after which the prioritization should be run with the revised damage levels. Home Safe to Live (scale 1-2):This takes into account whether the damaged home is safe to live in or not. Some levels of damage may or may not lead to unsafe situations. Those properties that are no longer safe to live in get higher priority.|Safe to Live | Score ||------------|-------||Yes| 1||No|2| Access to Utilities (scale 1-3):Those properties that have lost access to basic utilities, or have only partial access to utilities, get higher priority.|Level of Access to Utilities | Score||-----------------------------|------|| Full access to all utilities| 1||Partial access to utilities | 2|| No access to utilities | 3| Level of Insurance Coverage (scale 0-2):Those properties that have full or partial coverage get lower priority than those which do not have any insurance coverage for the type of disaster.|Level of Insurance Coverage| Score||-----------------------------|------|| Full Coverage| 0||Partial Coverage | 1|| No Coverage | 2| Damage Estimate over Estimated Home ValueThis metric is the ratio of the level of damage (in monetary value) provided by the applicant over the total home value. Properties that have suffered significant damage compared to their total value get higher priority. The reason the damage estimate is divided by home value is to remove any bias toward wealthy neighborhoods, where minor damage may cost more than an entire home in an under-privileged area. Therefore, the focus will be on the portion of the home value that was affected by the damage, as opposed to the total damage cost. Steps for Prioritization:- Calculate a `Priority Index` (PI) for each property- Cluster properties based on their lat/long (DBSCAN)- Calculate average of PI for each cluster- Sort clusters based on their average PI- Sort properties in each cluster based on their individual PIs- Save the list of prioritized properties to a `.csv` file
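As a quick sanity check, the cell below (added for illustration, with hypothetical numbers that are not from the dataset) walks through the Priority Index for a single property.
###Code
# Hypothetical property: Major damage (2), not safe to live in (2),
# no access to utilities (3), partial insurance coverage (1),
# applicant estimate of $50,000 on a home valued at $200,000.
priority_index = 2 * 2 * 3 * 1 * (50000 / 200000)
print(priority_index) # -> 3.0
###Output
_____no_output_____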
###Code
# Imports
import googlemaps
from datetime import datetime
import pandas as pd
import matplotlib.pyplot as plt
import gmaps as gm
import numpy as np
# Muting warnings
import warnings
warnings.filterwarnings("ignore")
# Reading API key
with open('/Users/nick/dsi/google_api.txt') as f:
api_key = f.readline()[:-1]
# the with-block closes the file automatically
# Instantiate Google Maps Extension for Jupyter Notebooks for inline maps
gm.configure(api_key=api_key)
# import data
df = pd.read_csv('./data/applications.csv')
df.head()
# Plotting the heatmap and color coded location dots
# Creating a map centered at the midpoint of the lat/long bounding box
map_center = ((df['lat'].min()+df['lat'].max())/2,
(df['long'].min()+df['long'].max())/2)
# Calculating the appropriate level of zoom based on lat and long
zoom= round(df['long'].max()-df['long'].min(), 1)
# Creating a map figure
fig = gm.figure(center=map_center, zoom_level=zoom, map_type='ROADMAP')
# Creating a tuple of markers lat and long
markers = []
for i in df.index:
markers.append((df.iloc[i,1], df.iloc[i,2]))
# Creating the layer for symbols
symbol = gm.symbol_layer(markers,
fill_color=tuple(df['damage'].map({3:'#B22222', #FireBrick
2:'#FFA500', #Orange
1:'#FFFF00', #Yellow
0:'#000000'}).values),
stroke_color=None,
stroke_opacity=0.0,
scale=5,
info_box_content=df['address'],
display_info_box=True)
# Creating the layer for heatmap
heatmap = gm.heatmap_layer(markers,
weights=df['damage'],
point_radius=40,
opacity=0.6)
#Add the layers
fig.add_layer(heatmap)
fig.add_layer(symbol)
fig
###Output
_____no_output_____
###Markdown
Splitting the PropertiesWe are going to split the properties into two groups:- Accessible properties AND with no coverage or partial insurance coverage- Inaccessible properties OR those with full insurance coverageThe reason for this is to do the prioritization only on the properties that do not have full insurance or are accessible. Once the prioritization is done, other properties will be added to the end of the list. Once inaccessible properties become accessible, the prioritization need to be run again to take those properties into account.
###Code
# The subset of properties for prioritization
df_prio = df[(df['damage']!=0) & (df['insured']!=0)]
df_prio.shape
# Those that will not be prioritized
df_no_prio = df[(df['damage']==0) | (df['insured']==0)]
df_no_prio.shape
###Output
_____no_output_____
###Markdown
Calculating Priority Index
###Code
# Refer to the problem statement section for details of the priority index calculation
df_prio['priority_index']=(df_prio['damage'])*\
(df_prio['safe_to_live'])*\
(df_prio['utils_on'])*\
(df_prio['insured'])*\
(df_prio['app_estimate']/df_prio['est_home_value'])
df_no_prio['priority_index'] = 0
# reset index for both dataframes
df_prio.reset_index(inplace=True)
df_no_prio.reset_index(inplace=True)
df_prio.head()
###Output
_____no_output_____
###Markdown
Unsupervised Clustering of Addresses
###Code
from sklearn.cluster import DBSCAN
# Getting dimensions of the entire area where all properties are located
len_area = round(df_prio['lat'].max()-df_prio['lat'].min(), 1)
wid_area = round(df_prio['long'].max()-df_prio['long'].min(),1)
dia_area = np.sqrt(len_area**2+wid_area**2)
# Instantiate DBSCAN
# epsilon is defined based on the area of interest
db = DBSCAN(eps=dia_area/30, min_samples=2)
# Fit DBSCAN on data
db.fit(df_prio[['lat', 'long']])
# Save labels to cluster column
df_prio['cluster'] = db.labels_+1 # adding +1 to all clusters so they start from 1 instead of 0
# Cluster 0 are outliers
df_prio['cluster'].unique()
# assign unique cluster numbers to the outliers (cluster 0 after the +1 shift above)
count = df_prio['cluster'].unique().max()+1
for i in range(len(df_prio)):
if df_prio.loc[i, 'cluster'] == 0:
df_prio.loc[i, 'cluster'] = count
count += 1
else:
pass
# Plot clusters, color coded for cluster, and size based on priority index
# Just for visualizing the location and severity of damages
plt.scatter(data=df_prio,
x='long',
y='lat',
c='cluster',
s=df_prio['priority_index']*15,
alpha=0.7);
plt.xlabel('longitude')
plt.ylabel('latitude')
plt.title('Location of Addresses - Colors: Clusters - Size: Priority Index');
###Output
_____no_output_____
###Markdown
Sorting Addresses
###Code
# Get the order of clusters based on mean of prioritization score
cluster_prio = list(df_prio.groupby('cluster').mean()['priority_index'].sort_values(ascending=False).index)
# Showing cluster priority orders
cluster_prio
# Create the dictionary that defines the order for sorting
cluster_prio_indexer = dict(zip(cluster_prio,range(len(cluster_prio))))
# Generate a rank column that will be used to sort the dataframe numerically
df_prio['cluster_ranked'] = df_prio['cluster'].map(cluster_prio_indexer)
# Sort the DF based on cluster_ranked (ascending), and then
# priority_index (descending)
df_prio = df_prio.sort_values(by=['cluster_ranked', 'priority_index'],
ascending=[True, False])
###Output
_____no_output_____
###Markdown
Preparing No-Priority DataFrame for Concatenation with Priority DataFrame
###Code
# Adding columns to match df_prio
df_no_prio['priority_index'] = 0
df_no_prio['cluster'] = 0
df_no_prio['cluster_ranked'] = 0
# Creating a new column as the ratio of app_estimate and est_home_value
df_no_prio['damage_ratio'] = df_no_prio['app_estimate'] / df_no_prio['est_home_value']
# Sorting based on damage_ratio
df_no_prio = df_no_prio.sort_values(by='damage_ratio', ascending=False)
df_no_prio.drop('damage_ratio', axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
Concatenating the two DataFrames
###Code
df = pd.concat([df_prio, df_no_prio])
df.head()
df.drop(columns=['index','cluster', 'cluster_ranked']).to_csv('./data/apps_sorted.csv',
index=False)
###Output
_____no_output_____ |
daytrade-finder.ipynb | ###Markdown
daytrade target finder 20190312
###Code
import datetime
import numpy as np
import pandas as pd
import pandas_datareader.data as web
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import matplotlib.dates as mdates
import seaborn as sns
import plotly as py
import plotly.offline as plyo
from plotly.graph_objs import Contours, Histogram2dContour, Marker, Scatter
import cufflinks as cf # cannot be installed on Google Colaboratory
plt.style.use('seaborn')
#!pip install jsm # necessary for google colaboratory
import jsm
#!pip install mpl_finance # necessary for google colaboratory
import mpl_finance
# if an error occurs for importing cufflinks, do
# pip install --upgrade plotly
# pip install --upgrade cufflinks
# pip install --upgrade ipywidgets
import os, sys
module_path = os.path.abspath(os.path.join('./utils'))
if module_path not in sys.path:
sys.path.append(module_path)
from read_nikkei import get_jstock
###Output
_____no_output_____
###Markdown
analyze
###Code
from IPython.display import HTML, display, Math
import tabulate
today = datetime.datetime.now().strftime("%Y%m%d")
print("today is {}".format(today))
codes = [8892, 9685, 3926]
q = jsm.Quotes()
start = datetime.date(2018, 9, 1)
end = datetime.date(2019, 1, 18)
data = {}
for c in codes:
data[str(c)] = q.get_finance(c)
def get_table_by(code):
if type(code) == int:
code = str(code)
return [["時価総額", data[code].market_cap],
["発行済株式数", data[code].shares_issued],
["配当利回り", data[code].dividend_yield],
["1株配当", data[code].dividend_one],
["株価収益率", data[code].per],
["純資産倍率", data[code].pbr],
["1株利益", data[code].eps],
["1株純資産", data[code].bps],
["最低購入代金", data[code].price_min],
["単元株数", data[code].round_lot],
["年初来高値", data[code].years_high],
["年初来安値", data[code].years_low]]
brand = q.get_brand() # TAKES TOO LONG!!!!!
from pickle_handler import save_dict, load_dict
save_dict(brand, '20190122-brand')
category = [
'0050', # Fishery, agriculture & forestry
'1050', # Mining
'2050', # Construction
'3050', # Foods
'3100', # Textiles & apparel
'3150', # Pulp & paper
'3200', # Chemicals
'3250', # Pharmaceuticals
'3300', # Oil & coal products
'3350', # Rubber products
'3400', # Glass & ceramics products
'3450', # Iron & steel
'3500', # Nonferrous metals
'3550', # Metal products
'3600', # Machinery
'3650', # Electric appliances
'3700', # Transportation equipment
'3750', # Precision instruments
'3800', # Other products
'4050', # Electric power & gas
'5050', # Land transportation
'5100', # Marine transportation
'5150', # Air transportation
'5200', # Warehousing & harbor transportation
'5250', # Information & communication
'6050', # Wholesale trade
'6100', # Retail trade
'7050', # Banks
'7100', # Securities & commodity futures
'7150', # Insurance
'7200', # Other financing business
'8050', # Real estate
'9050' # Services
]
def show_company_info_by_code(code):
for ct in category:
#print("for {}, # of entries is {}".format(c, len(brand[c])))
for n in brand[ct]:
if n.ccode == str(code):
print("----------")
print("{}".format(n.ccode))
print("{}".format(n.name))
print("{}".format(n.market))
print("{}".format(n.info))
print("----------")
break
for c in codes:
show_company_info_by_code(c)
display(HTML(tabulate.tabulate(get_table_by(c), tablefmt='html', floatfmt=".2f")))
###Output
----------
8892
(株)日本エスコン
東証1部
総合不動産。マンション分譲から商業施設やホテル開発など業容拡大。中部電力が筆頭株主
----------
###Markdown
getting data
###Code
dfs={}
for c in codes:
key = str(c)
dfs[key] = get_jstock(c, end=pd.Timestamp(today), periods=300)
dfs[key].head(5).append(dfs[key].tail(5)) # preview the first and last 5 rows
dfs[key] = dfs[key].reset_index()
dfs[key].Date = pd.to_datetime(dfs[key].Date)
print("for {}".format(c))
dfs[key].head()
###Output
Get data from 2018-03-30 to 2019-01-24
###Markdown
plot
###Code
# One figure per ticker: closing price on top, volume below.
for i in range(len(codes)):
key = str(codes[i])
fig, (ax1, ax2) = plt.subplots(nrows=2, ncols=1, figsize=(12, 4), sharex=True) #, gridspec_kw={'height_ratios': [2,1]})
fig.suptitle("{}".format(codes[i]))
ax1.plot(dfs[key].Date, dfs[key].Close)
ax1.grid()
ax2.bar(dfs[key].Date, dfs[key].Volume)
plt.show()
###Output
_____no_output_____
###Markdown
Position with SMA A simple moving-average crossover strategy: compute a fast 42-day SMA and a slow 100-day SMA, go long when the fast SMA is above the slow one, and short otherwise.
###Code
SMA1 = 42
SMA2 = 100
key = str(codes[2])
data = (
pd.DataFrame(dfs[key]).dropna()
)
data['SMA1'] = data.Close.rolling(SMA1).mean()
data['SMA2'] = data.Close.rolling(SMA2).mean()
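# crossover rule: long (+1) when the fast SMA is above the slow SMA, otherwise short (-1)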
data['Position'] = np.where(data['SMA1'] > data['SMA2'], 1, -1)
data.tail()
fig, ax1 = plt.subplots(figsize=(12, 5))
ax2 = ax1.twinx()
plt.title("strategy for {}".format(key))
ax1.plot(data.Date, data.Close)
ax1.plot(data.Date, data.SMA1)
ax1.plot(data.Date, data.SMA2)
ax2.plot(data.Date, data.Position, color='r')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
vectorized backtesting
###Code
data['Returns'] = np.log(data['Close'] / data['Close'].shift(1))
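# shift(1): today's return is earned with yesterday's position, avoiding look-ahead bias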
data['Strategy'] = data['Position'].shift(1) * data['Returns']
data.round(4).head()
data.dropna(inplace=True)
np.exp(data[['Returns', 'Strategy']].sum())
###Output
_____no_output_____
###Markdown
Make sure to understand the following equation: it annualizes the daily volatility by multiplying the standard deviation of the daily log returns by the square root of 252, the approximate number of trading days in a year.
###Code
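# annualized volatility: std of daily log returns scaled by sqrt(252 trading days)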
data[['Returns', 'Strategy']].std() * 252 ** 0.5
ax = data[['Returns', 'Strategy']].cumsum().apply(np.exp).plot(figsize=(10, 6))
data['Position'].plot(ax=ax, secondary_y='Position', style='--')
ax.get_legend().set_bbox_to_anchor((0.25, 0.85))
###Output
_____no_output_____
###Markdown
optimization (using brute force) SKIPPED! Linear OLS Regression
###Code
raw = pd.read_csv('./data/tr_eikon_eod_data.csv', index_col=0, parse_dates=True).dropna()
raw.columns
symbol = 'EUR='
data = pd.DataFrame(raw[symbol])
data['returns'] = np.log(data / data.shift(1))
data.dropna(inplace=True)
data['direction'] = np.sign(data['returns']).astype(int)
data.head()
data['returns'].hist(bins=35, figsize=(10, 6))
lags = 2
def create_lags(data):
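# add `lags` shifted-return columns (lag_1, lag_2, ...) to use as regression features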
global cols
cols = []
for lag in range(1, lags + 1):
col = 'lag_{}'.format(lag)
data[col] = data['returns'].shift(lag)
cols.append(col)
create_lags(data)
data.head()
data.dropna(inplace=True)
data.plot.scatter(x='lag_1', y='lag_2', c='returns', cmap='coolwarm', figsize=(10, 6), colorbar=True)
plt.axvline(0, c='r', ls='--')
plt.axhline(0, c='r', ls='--')
###Output
_____no_output_____
###Markdown
regression
###Code
from sklearn.linear_model import LinearRegression
model = LinearRegression()
data['pos_ols_1'] = model.fit(data[cols], data['returns']).predict(data[cols])
data['pos_ols_2'] = model.fit(data[cols], data['direction']).predict(data[cols])
data[['pos_ols_1', 'pos_ols_2']].head()
data[['pos_ols_1', 'pos_ols_2']] = np.where(data[['pos_ols_1', 'pos_ols_2']] > 0, 1, -1)
data['pos_ols_1'].value_counts()
data['pos_ols_2'].value_counts()
(data['pos_ols_1'].diff() != 0).sum()
(data['pos_ols_2'].diff() != 0).sum()
data['strat_ols_1'] = data['pos_ols_1'] * data['returns']
data['strat_ols_2'] = data['pos_ols_2'] * data['returns']
data[['returns', 'strat_ols_1', 'strat_ols_2']].sum().apply(np.exp)
(data['direction'] == data['pos_ols_1']).value_counts()
(data['direction'] == data['pos_ols_2']).value_counts()
data[['returns', 'strat_ols_1', 'strat_ols_2']].cumsum().apply(np.exp).plot(figsize=(10, 6))
###Output
_____no_output_____
###Markdown
clustering
###Code
from sklearn.cluster import KMeans
model = KMeans(n_clusters=2, random_state=0)
model.fit(data[cols])
data['pos_clus'] = model.predict(data[cols])
data['pos_clus'] = np.where(data['pos_clus'] == 1, -1, 1)
data['pos_clus'].values
plt.figure(figsize=(10, 6))
plt.scatter(data[cols].iloc[:, 0], data[cols].iloc[:, 1], c=data['pos_clus'], cmap='coolwarm')
data['strat_clus'] = data['pos_clus'] * data['returns']
data[['returns', 'strat_clus']].sum().apply(np.exp)
(data['direction'] == data['pos_clus']).value_counts()
data[['returns', 'strat_clus']].cumsum().apply(np.exp).plot(figsize=(10, 6))
###Output
_____no_output_____
###Markdown
my trade history
###Code
history = pd.read_csv('./data/tradehistory(JP)_20190117.csv', encoding="shift-jis")
history.tail()
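# map the trade-type column (売買区分): '買付' (buy) -> 'bid', everything else -> 'ask'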
history['売買区分'] = history['売買区分'].apply(lambda x : 'bid' if x == '買付' else 'ask')
history.head()
date_contract = history['約定日']
code = history['銘柄コード']
name = history['銘柄名']
action = history['売買区分']
price = history['受渡金額[円]']
mh = pd.DataFrame(
{
'date': pd.to_datetime(date_contract),
'code': code,
'name': name,
'action': action,
'price': price
}
)
mh.head()
mh.price = pd.to_numeric(mh.price.str.replace(',', ''))
mh.head()
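# use the last ticker code `c` from the loop above to split buys (bid) and sells (ask)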
df1_bid = mh[(mh.code == c) & (mh.action == 'bid')]
df1_ask = mh[(mh.code == c) & (mh.action == 'ask')]
df1_bid.head()
# `df` below is the last ticker's price history; define it explicitly
# (in the original it was implicit session state)
df = dfs[str(c)]
num_stocks = 100
plt.figure(figsize=(15, 6))
plt.plot(df.Date, df.Close)
plt.plot(df1_bid.date, df1_bid.price / num_stocks, color='blue', marker='o')
plt.plot(df1_ask.date, df1_ask.price / num_stocks, color='red', marker='o')
plt.show()
qf = cf.QuantFig(df)
qf.add_bollinger_bands()
qf.add_volume()
qf.add_macd()
qf.iplot()
###Output
_____no_output_____
###Markdown
percent change plot with nikkei255
###Code
nikkei = web.DataReader('^N225', 'yahoo', start, end)
plt.figure(figsize=(15, 6))
plt.plot(nikkei.Close.pct_change(), label='nikei')
plt.plot(df.Close.pct_change(), label=str(c))
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
logarithmic returns
###Code
rets_nikkei = np.log(nikkei.Close / nikkei.Close.shift(1))
#rets_nikkei.head()
rets = np.log(df.Close / df.Close.shift(1))
plt.subplots_adjust(wspace=0.4, hspace=0.6)
plt.subplot(3,1,1)
#plt.figure(figsize=(15,6))
plt.plot(nikkei.Close, label='nikkei')
plt.title('nikkei close')
plt.legend()
plt.subplot(3,1,2)
plt.plot(df.Close, label='8892')
plt.title("{} close".format(str(c)))
plt.legend()
plt.subplot(3,1,3)
#plt.figure(figsize=(15, 6))
plt.plot(rets_nikkei, label='rets nikkei')
plt.plot(rets, label="rets {}".format(str(c)))
plt.legend()
#plt.show()
###Output
_____no_output_____
###Markdown
scattering plot for logarithmic returns
###Code
tmp = np.log(nikkei.Close / nikkei.Close.shift(1))
df.replace([np.inf, -np.inf], np.nan).dropna(axis=1)
type(tmp)
df_compare = pd.merge(nikkei, df)
df_compare.head()
#df_compare.describe()
df.head()
merged = pd.merge(
rets_nikkei.to_frame().reset_index(),
rets.to_frame().reset_index(),
on='Date'
)
pd.plotting.scatter_matrix(merged, alpha=0.2)
###Output
_____no_output_____
###Markdown
stochastics
###Code
import math
import numpy as np
import numpy.random as npr
from pylab import plt, mpl
plt.style.use('seaborn')
import scipy.stats as scs
###Output
_____no_output_____
###Markdown
Black-Scholes-Merton
###Code
S0 = 100
r = 0.05 # constant riskless short rate
sigma = 0.25 # constant volatility
T = 2.0
I = 10000 # number of simulations
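# terminal index level under BSM: S_T = S0 * exp((r - 0.5*sigma^2)*T + sigma*sqrt(T)*z)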
ST1 = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * math.sqrt(T) * npr.standard_normal(I))
plt.figure(figsize=(10,6))
plt.hist(ST1,bins=50)
plt.xlabel('index level')
plt.ylabel('frequency')
ST2 = S0 * npr.lognormal((r - 0.5 * sigma ** 2) * T, sigma * math.sqrt(T), size=I)
plt.figure(figsize=(10,6))
plt.hist(ST2, bins=50)
plt.xlabel('index level')
plt.ylabel('frequency')
###Output
_____no_output_____
###Markdown
normality tests
###Code
import math
import numpy as np
import scipy.stats as scs
import statsmodels.api as sm
import matplotlib.pyplot as plt
plt.style.use('seaborn')
def gen_paths(S0, r, sigma, T, M, I):
"""location 17163 generate sample Monte Carlo paths for geometric brownian motion"""
dt = T / M
paths = np.zeros((M + 1, I))
paths[0] = S0
for t in range(1, M + 1):
rand = np.random.standard_normal(I)
rand = (rand - rand.mean()) / rand.std()
paths[t] = paths[t - 1] * np.exp((r - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * rand)
return paths
S0 = 100.
r = 0.05
sigma = 0.2
T = 1.0
M = 50
I = 250000
np.random.seed(1000)
paths = gen_paths(S0, r, sigma, T, M, I)
S0 * math.exp(r * T)
paths[-1].mean()
plt.figure(figsize=(10,6))
plt.plot(paths[:,:10])
plt.xlabel('time steps')
plt.ylabel('index level')
paths[:,0].round(4)
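# step-to-step log returns along each simulated path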
log_returns = np.log(paths[1:] / paths[:-1])
log_returns[:,1].round(4)
def print_statistics(array):
''' Prints selected statistics.
Parameters
==========
array: ndarray
object to generate statistics on
'''
sta = scs.describe(array)
print('%14s %15s' % ('statistic', 'value'))
print(30 * '-')
print('%14s %15.5f' % ('size', sta[0]))
print('%14s %15.5f' % ('min', sta[1][0]))
print('%14s %15.5f' % ('max', sta[1][1]))
print('%14s %15.5f' % ('mean', sta[2]))
print('%14s %15.5f' % ('std', np.sqrt(sta[3])))
print('%14s %15.5f' % ('skew', sta[4]))
print('%14s %15.5f' % ('kurtosis', sta[5]))
print_statistics(log_returns.flatten())
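# sanity check: the annualized mean (plus 0.5*sigma^2) should be close to r, the annualized std close to sigma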
log_returns.mean() * M + 0.5 * sigma ** 2
log_returns.std() * math.sqrt(M)
###Output
_____no_output_____
###Markdown
from Qiitahttps://qiita.com/innovation1005/items/199df28af6fc0d60a4b0
###Code
DJ=['AAPL','AXP','BA','CAT','CSCO','CVX','DIS','DWDP','GS','HD',
'IBM','INTC','JNJ','JPM','KO','MCD','MMM','MRK','MSFT','NKE',
'PFE','PG','TRV','UNH','UTX','V','VZ','WBA','WMT','XOM']
m = [] # holds average for each
v = [] # holds std for each
for i in range(len(DJ)):
tsd = web.DataReader(DJ[i], "yahoo", '2009/1/1')
lntsd = np.log(tsd.iloc[:,5])
m.append((lntsd.diff().dropna().mean() + 1 )** 250 - 1) # annualized return: compound the mean daily log return over ~250 trading days
v.append(lntsd.diff().dropna().std() * np.sqrt(250))
print('{0: 03d}'.format(i+1),'{0:7s}'.format(DJ[i]),'mean {0:5.2f}'.format(m[i]),
'volatility {0:5.2f}'.format(v[i]),'m/v {0:5.2f}'.format(m[i]/v[i]),
' n_obs {0:10d}'.format(len(tsd)))
v_m = pd.DataFrame({'v':v, 'm':m})
sns.jointplot(x='v', y='m', data=v_m, color="g")
###Output
/Users/sishida/.pyenv/versions/3.6.3/envs/btc3.6/lib/python3.6/site-packages/scipy/stats/stats.py:1713: FutureWarning:
Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
###Markdown
https://qiita.com/ynakayama/items/1801d374224d6914a382
###Code
def get_px(stock, start, end):
return web.get_data_yahoo(stock, start, end)['Adj Close']
names = ['AAPL', 'GOOG', 'MSFT', 'DELL', 'GS', 'MS', 'BAC', 'C']
px = pd.DataFrame({n: get_px(n, '1/1/2010', '1/17/2019') for n in names})
px = px.asfreq('B').fillna(method='pad')
rets = px.pct_change()
result = ((1 + rets).cumprod() - 1) # cumulative compounded return relative to the start
plt.figure()
result.plot()
plt.show()
###Output
_____no_output_____ |
zindiWheatChallenge.ipynb | ###Markdown
Imports
###Code
! nvidia-smi
import warnings
warnings.filterwarnings(action='ignore')
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchsummary import summary
import torchvision
from torchvision import models, transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader, Dataset, SubsetRandomSampler
from torchsummary import summary
from PIL import Image
import sklearn
from sklearn.metrics import roc_curve, auc, log_loss, precision_score, f1_score, recall_score, confusion_matrix
import matplotlib as mplb
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import sys
from tensorflow.keras.utils import to_categorical
import os
import zipfile
import shutil
from tqdm.notebook import tqdm
torch.__version__
###Output
_____no_output_____
###Markdown
config
###Code
base_dir = '/content'
seed_val = 2020
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# set seed
np.random.seed(seed=seed_val)
torch.manual_seed(seed=seed_val)
if torch.cuda.is_available():
torch.cuda.manual_seed(seed=seed_val)
torch.cuda.manual_seed_all(seed=seed_val)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
device.type
data_transforms = {
'train': transforms.Compose([
transforms.Resize(300),
transforms.CenterCrop(224),
transforms.RandomHorizontalFlip(p=.2),
#transforms.RandomRotation(degrees=35),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
]),
'test': transforms.Compose([
transforms.Resize(300),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
]),
}
IMG_SIZE = 224
IMG_SHAPE = (3, 224, 224)
BATCH_SIZE = 32
class_names = ['Crown_root', 'Tillering', "XXXXX", "Booting", "Heading", "Anthesis", "Milking"]
###Output
_____no_output_____
###Markdown
Getting data
###Code
! curl "https://zindpublic.blob.core.windows.net/private/uploads/competition_datafile/file/674/Train.csv?sp=r&sv=2015-04-05&sr=b&st=2020-08-30T03"%"3A57"%"3A03Z&se=2020-08-30T04"%"3A13"%"3A03Z&sig=rtK68ONxD0y04OgUpMR05"%"2BlbqDMVTj5bJxYnT2ZZ8ak"%"3D" -H "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:79.0) Gecko/20100101 Firefox/79.0" -H "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8" -H "Accept-Language: fr,fr-FR;q=0.8,en-US;q=0.5,en;q=0.3" --compressed -H "Referer: https://zindi.africa/competitions/cgiar-wheat-growth-stage-challenge/data" -H "Connection: keep-alive" -H "Upgrade-Insecure-Requests: 1" -o Train.csv
! curl "https://zindpublic.blob.core.windows.net/private/uploads/competition_datafile/file/673/SampleSubmission.csv?sp=r&sv=2015-04-05&sr=b&st=2020-08-30T03"%"3A57"%"3A31Z&se=2020-08-30T04"%"3A13"%"3A31Z&sig=HPoIucHcwV8JE8b9ScNGaegis"%"2FLx6osGfRQfrg34rbY"%"3D" -H "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:79.0) Gecko/20100101 Firefox/79.0" -H "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8" -H "Accept-Language: fr,fr-FR;q=0.8,en-US;q=0.5,en;q=0.3" --compressed -H "Referer: https://zindi.africa/competitions/cgiar-wheat-growth-stage-challenge/data" -H "Connection: keep-alive" -H "Upgrade-Insecure-Requests: 1" -o SampleSubmission.csv
! curl "https://zindpublic.blob.core.windows.net/private/uploads/competition_datafile/file/675/Images.zip?sp=r&sv=2015-04-05&sr=b&st=2020-08-30T03"%"3A55"%"3A56Z&se=2020-08-30T04"%"3A11"%"3A56Z&sig=9fokH"%"2BuaGegmcXj5vSLQTQzbmQggcNPL9na7MsnoVS0"%"3D" -H "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:79.0) Gecko/20100101 Firefox/79.0" -H "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8" -H "Accept-Language: fr,fr-FR;q=0.8,en-US;q=0.5,en;q=0.3" --compressed -H "Referer: https://zindi.africa/competitions/cgiar-wheat-growth-stage-challenge/data" -H "Connection: keep-alive" -H "Upgrade-Insecure-Requests: 1" -o images.zip
train_csv_path = os.path.join(base_dir, "Train.csv")
sample_sub_path = os.path.join(base_dir, "SampleSubmission.csv")
train_df = pd.read_csv(train_csv_path)
sample_sub = pd.read_csv(sample_sub_path)
###Output
_____no_output_____
###Markdown
Data management
###Code
train_df.head()
train_df.describe()
sample_sub.head()
train_images_list = train_df['UID'].to_list()
test_images_list = sample_sub['UID'].to_list()
len(train_images_list), len(test_images_list)
! rm -r /content/dataset/Images
! rm -r /content/dataset/train && rm -r /content/dataset/test
! mkdir /content/dataset/train && mkdir /content/dataset/test
! mkdir /content/dataset/train/{class_names[0]} && mkdir /content/dataset/train/{class_names[1]} && mkdir /content/dataset/train/{class_names[2]} && mkdir /content/dataset/train/{class_names[3]} && mkdir /content/dataset/train/{class_names[4]} && mkdir /content/dataset/train/{class_names[5]} && mkdir /content/dataset/train/{class_names[6]}
train_images_dir = os.path.join(base_dir, "dataset/train")
test_images_dir = os.path.join(base_dir, "dataset/test")
train_df['growth_stage'].unique()
train_df['growth_stage'].value_counts()
def extract_images(dest="."):
try:
with zipfile.ZipFile(os.path.join(base_dir, "images.zip"), "r") as zipF:
zipF.extractall(path=dest)
print('[INFO] Done !')
except Exception as ex:
print(ex)
extract_images(dest=os.path.join(base_dir, "dataset/"))
dataset_path = os.path.join(base_dir, 'dataset/Images')
for imag in tqdm(os.listdir(dataset_path), desc='Moving files into their specific folders'):
img_path = os.path.join(dataset_path, imag)
src = img_path
try:
label = train_df.loc[train_df['UID'] == imag.split('.')[0]]['growth_stage'].values[0]
dest = os.path.join(train_images_dir, class_names[label-1])
shutil.move(src, dest)
except Exception as ex:
dest = test_images_dir
shutil.move(src, dest)
! ls /content/dataset/train/
len(os.listdir(train_images_dir)), len(os.listdir(test_images_dir))
class ZindiWheatDataset(Dataset):
def __init__(self, csv_path=train_csv_path, task='train', transform=None, num_classes=7):
super(ZindiWheatDataset, self).__init__()
self.csv_file = pd.read_csv(csv_path)
self.task = task
self.num_classes = num_classes
self.transform = transform
def __getitem__(self, index):
img_name = self.csv_file.iloc[index, 0]+'.jpeg'
try:
if self.task=='train':
img_array = Image.open(os.path.join(train_images_dir, img_name)).convert("RGB")
if self.transform is not None:
img_array = self.transform(img_array)
else:
img_array = Image.open(os.path.join(test_images_dir, img_name)).convert("RGB")
if self.transform is not None:
img_array = self.transform(img_array)
except Exception as ex:
print(f'[ERROR] {ex}')
if self.task == 'train':
# assumption: column 2 holds growth_stage values 1..7, shifted to 0..6 for one-hot encoding
label = torch.tensor(to_categorical(self.csv_file.iloc[index, 2] - 1, self.num_classes), dtype=torch.float)
output = (img_array, label)
else:
output = img_array
return output
def __len__(self):
return len(self.csv_file)
test_dataset = ZindiWheatDataset(csv_path=sample_sub_path, task='test', transform=data_transforms['test'], num_classes=7)
def target_to_one_hot(target):
target = torch.tensor(to_categorical(target, num_classes=7), dtype=torch.float)
return target
#using pytorch based way
train_set = ImageFolder(root=train_images_dir,
transform=data_transforms['train'])
#target_transform=target_to_one_hot)
len(train_set), len(test_dataset)
test_dataset[15].shape
# Creating data indices for training and validation splits:
validation_split = 0.15
shuffle_dataset = True
dataset_size = len(train_set)
indices = list(range(dataset_size))
split = int(np.floor(validation_split * dataset_size))
if shuffle_dataset :
np.random.seed(seed_val)
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
# Creating PT data samplers and loaders:
train_sampler = SubsetRandomSampler(train_indices)
valid_sampler = SubsetRandomSampler(val_indices)
len(train_indices), len(val_indices)
# dataloaders
train_loader = torch.utils.data.DataLoader(train_set, batch_size=BATCH_SIZE,
sampler=train_sampler)
validation_loader = torch.utils.data.DataLoader(train_set, batch_size=BATCH_SIZE,
sampler=valid_sampler, shuffle=False)
test_data_loader = DataLoader(dataset=test_dataset,
batch_size=BATCH_SIZE,
shuffle=False,
num_workers=2)
###Output
_____no_output_____
###Markdown
Visualization
###Code
imgs, labs = next(iter(train_loader))
train_set.classes[labs[2].item()] # ImageFolder labels index the alphabetically-sorted class folders
def visualize_sample(From = train_loader):
"""Imshow for Tensor."""
images, labels = next(iter(From))
plt.figure(figsize=(18, 10))
num_to_show = min(BATCH_SIZE, 25)
for i in range(num_to_show):
if num_to_show < 25:
l, c = int(round(np.sqrt(BATCH_SIZE))), int(round(np.sqrt(BATCH_SIZE)))
ax = plt.subplot(l, c, i + 1)
img = images[i].numpy().transpose((1, 2, 0))
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
img = std * img + mean
img = np.clip(img, 0, 1)
plt.imshow(img)
# ImageFolder labels are integer indices into the alphabetically-sorted class folders
label = labels[i].item()
plt.title(train_set.classes[label])
plt.axis("off")
else:
ax = plt.subplot(5, 5, i + 1)
img = images[i].numpy().transpose((1, 2, 0))
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
img = std * img + mean
img = np.clip(img, 0, 1)
plt.imshow(img)
# ImageFolder labels are integer indices into the alphabetically-sorted class folders
label = labels[i].item()
plt.title(train_set.classes[label])
plt.axis("off")
visualize_sample()
###Output
_____no_output_____
###Markdown
Modeling
###Code
class MyModel(nn.Module):
def __init__(self, based_on="", out_size=2, fc_size=512):
super(MyModel, self).__init__()
self.architecture_name = based_on
self.archi = getattr(torchvision.models, self.architecture_name)(pretrained=True)
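# freeze the pretrained backbone so that only the newly added classifier head is trained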
for param in self.archi.parameters():
param.requires_grad = False
if "resnet" in self.architecture_name:
self.classifier_name = "fc"
self.num_ftrs_in = getattr(self.archi, self.classifier_name).in_features
else:
self.classifier_name = "classifier"
self.num_ftrs_in = getattr(self.archi, self.classifier_name).in_features
self.classifier = nn.Linear(in_features=self.num_ftrs_in, out_features=out_size)
setattr(self.archi, self.classifier_name, self.classifier)
torch.nn.init.xavier_normal_(getattr(getattr(self.archi, self.classifier_name), 'weight'))
# forward pass
def forward(self, x):
x = self.archi(x.view(-1, 3, IMG_SIZE, IMG_SIZE))
return x
architecture_name = "resnet50"
model = MyModel(based_on=architecture_name, out_size=7, fc_size=512)
model.to(device)
summary(model=model, input_size=IMG_SHAPE, device=device.type)
###Output
Downloading: "https://download.pytorch.org/models/resnet50-19c8e357.pth" to /root/.cache/torch/hub/checkpoints/resnet50-19c8e357.pth
###Markdown
Training pipeline
###Code
import time, copy

# Generic train/val loop (expects `dataloaders` and `dataset_sizes` dicts);
# kept for reference -- the training below uses `training_loop` instead.
def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
since = time.time()
best_model_wts = copy.deepcopy(model.state_dict())
best_acc = 0.0
for epoch in range(num_epochs):
print('Epoch {}/{}'.format(epoch+1, num_epochs))
print('-' * 10)
# Each epoch has a training and validation phase
for phase in ['train', 'val']:
if phase == 'train':
model.train() # Set model to training mode
else:
model.eval() # Set model to evaluate mode
running_loss = 0.0
running_corrects = 0
# Iterate over data.
for inputs, labels in tqdm(dataloaders[phase], desc=f"{phase}"):
inputs = inputs.to(device)
labels = labels.squeeze(dim=1)
labels = labels.to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward
# track history if only in train
with torch.set_grad_enabled(phase == 'train'):
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)
# backward + optimize only if in training phase
if phase == 'train':
loss.backward()
optimizer.step()
# statistics
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data[:, 1])
if phase == 'train':
scheduler.step()
epoch_loss = running_loss / dataset_sizes[phase]
epoch_acc = running_corrects.double() / dataset_sizes[phase]
print('{} Loss: {:.4f} Acc: {:.4f}'.format(
phase, epoch_loss, epoch_acc))
# deep copy the model
if phase == 'val' and epoch_acc > best_acc:
best_acc = epoch_acc
best_model_wts = copy.deepcopy(model.state_dict())
print()
time_elapsed = time.time() - since
print('\nTraining complete in {:.0f}m {:.0f}s'.format(
time_elapsed // 60, time_elapsed % 60))
print('Best val Acc: {:4f}'.format(best_acc))
# load best model weights
model.load_state_dict(best_model_wts)
return model
def training_loop(model, epochs, train_loader, val_loader):
train_losses = []
val_losses = []
train_accs = []
val_accs = []
for epoch in range(epochs):
train_loss = 0
val_loss = 0
train_accuracy = 0
val_accuracy = 0
# Training the model
model.train()
counter = 0
for inputs, labels in tqdm(train_loader, desc=f"training | Epoch {epoch+1}/{epochs} "):
# Move to device
inputs, labels = inputs.to(device), labels.to(device)
# Clear optimizers
optimizer.zero_grad()
# Forward pass
output = model.forward(inputs)
# Loss
loss = criterion(output, labels)
# Calculate gradients (backpropogation)
loss.backward()
# Adjust parameters based on gradients
optimizer.step()
# Add the loss to the training set's rnning loss
train_loss += loss.item()*inputs.size(0)
# Get the top class of the output
# See how many of the classes were correct?
# Calculate the mean (get the accuracy for this batch)
# and add it to the running accuracy for this epoch
_, preds = torch.max(output.data, 1)
train_accuracy += torch.sum(preds == labels.data)
# Evaluating the model
model.eval()
counter = 0
# Tell torch not to calculate gradients
with torch.no_grad():
for inputs, labels in tqdm(val_loader, desc=f"Validation | Epoch {epoch+1}/{epochs} "):
# Move to device
inputs, labels = inputs.to(device), labels.to(device)
# Forward pass
output = model.forward(inputs)
# Calculate Loss
valloss = criterion(output, labels)
# Add loss to the validation set's running loss
val_loss += valloss.item()*inputs.size(0)
# See how many of the classes were correct?
# Calculate the mean (get the accuracy for this batch)
# and add it to the running accuracy for this epoch
_, preds = torch.max(output.data, 1)
val_accuracy += torch.sum(preds == labels.data)
# Get the average loss for the entire epoch
train_loss = train_loss/len(train_loader.dataset)
valid_loss = val_loss/len(val_loader.dataset)
# store results
train_losses.append(train_loss)
val_losses.append(valid_loss)
# Print out the information
print(f'\nTrain accuracy: {(train_accuracy/len(train_loader))*100:.6f}')
print(f'Val accuracy: {(val_accuracy/len(val_loader))*100:.6f}')
print('Training Loss: {:.6f} \tValidation Loss: {:.6f}\n'.format(train_loss, valid_loss))
# save accuracies
train_accs.append(train_accuracy/len(train_loader))
val_accs.append(val_accuracy/len(val_loader))
return model, train_losses, val_losses, train_accs, val_accs
def binary_acc(y_pred, y_test):
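# accuracy in percent from raw logits: argmax over log-softmax picks the predicted class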
y_pred_tag = torch.log_softmax(y_pred, dim = 1)
_, y_pred_tags = torch.max(y_pred_tag, dim = 1)
correct_results_sum = (y_pred_tags == y_test).sum().float()
acc = correct_results_sum/y_test.shape[0]
acc = torch.round(acc * 100)
return acc
outs = model(imgs.to(device))
outs.shape
labs.shape
labs
outs
###Output
_____no_output_____
###Markdown
training params
###Code
learning_rate = 9.9e-5
#optimizer
optimizer = optim.AdamW(params=model.parameters(), lr=learning_rate)
#loss
criterion = nn.CrossEntropyLoss()
criterion(outs, labs.to(device))
# train/eval
num_epochs = 15
model, train_losses, val_losses, train_accs, val_accs = training_loop(model=model,
epochs=num_epochs,
train_loader=train_loader,
val_loader=validation_loader)
###Output
_____no_output_____
###Markdown
Results
###Code
plt.figure(figsize=(16,6))
plt.subplot(1, 2, 1)
plt.plot(train_losses, label='training loss')
plt.plot(val_losses, 'orange', label='validation loss')
plt.title("Losses results", size=16)
plt.xlabel("epoch")
plt.ylabel('Loss')
plt.legend()
plt.subplot(1, 2, 2)
plt.plot(train_accs, label='training acc')
plt.plot(val_accs, 'orange', label='validation acc')
plt.title("Accuracies results", size=16)
plt.xlabel("epoch")
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Saving model
###Code
# save model
model_path = f'mask_{architecture_name}_based.pt'
torch.save(model.state_dict(), os.path.join(base_dir, model_path))
###Output
_____no_output_____
###Markdown
Evaluation and submissions
###Code
###Output
_____no_output_____ |
notebook/ML_SP.ipynb | ###Markdown
**Note: Given the nature of the data, this notebook must be run in an environment with 25 GB of RAM or more to work correctly.** **Data Science and Visualization****Final Project - Deliverable 03**Students: Gleyson Roberto do Nascimento. RA: 043801. Electrical Engineering. Negli René Gallardo Alvarado. RA: 234066. Health. Rafael Vinícius da Silveira. RA: 137382. Physics. Sérgio Sevileanu. RA: 941095. Electrical Engineering. In this Google Colaboratory notebook, machine learning will be performed on the data of the State of São Paulo for the years 2008 through 2018, according to the [SIHSUS](https://bigdata-metadados.icict.fiocruz.br/dataset/sistema-de-informacoes-hospitalares-do-sus-sihsus/resource/ae85ac54-6734-43b8-a820-6129a854e1ff) database. A few initial definitions and a disclaimer are therefore necessary for this project: A **mistaken diagnosis (category 0 of variable v258)** is defined as a case in which there was more than one CID10 (ICD-10) diagnosis, but the diagnoses belong to the same group, so the mistake is plausible given the similarity of symptoms among the CID10 codes; A **diagnostic failure (category 1 of variable v258)** is defined as a case in which there was more than one CID10 diagnosis and the diagnoses belong to different groups, so that although similar symptoms may exist among the CID10 codes, a deeper analysis by the professional would have been warranted before the diagnosis. The **correct diagnosis** (a case with only one CID10 diagnosis, unchanged up to discharge) was removed from the analysis because its extremely high percentage of occurrence biased the results; **Disclaimer**: Considering the nature of the SIHSUS database, that is, a Big Data system in which countless employees of the Brazilian Unified Health System (SUS) have access and enter the data manually under quite different realities and conditions, there is a serious possibility of systematic error; therefore, the accuracy of this work should be taken with reservations. Installing RAPIDS on Google Colab Checking whether a GPU is available
###Code
!nvidia-smi
###Output
Thu Jun 24 00:14:52 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 465.27 Driver Version: 460.32.03 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla P100-PCIE... Off | 00000000:00:04.0 Off | 0 |
| N/A 43C P0 26W / 250W | 0MiB / 16280MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
###Markdown
Setup:
The setup script:
1. Updates gcc in Colab
1. Installs Conda
1. Installs RAPIDS' current stable version of its libraries, as well as some external libraries, including:
  1. cuDF
  1. cuML
  1. cuGraph
  1. cuSpatial
  1. cuSignal
  1. BlazingSQL
  1. xgboost
1. Copies RAPIDS .so files into the current working directory, a necessary workaround for RAPIDS+Colab integration.
###Code
# This get the RAPIDS-Colab install files and test check your GPU. Run this and the next cell only.
# Please read the output of this cell. If your Colab Instance is not RAPIDS compatible, it will warn you and give you remediation steps.
!git clone https://github.com/rapidsai/rapidsai-csp-utils.git
!python rapidsai-csp-utils/colab/env-check.py
# This will update the Colab environment and restart the kernel. Don't run the next cell until you see the session crash.
!bash rapidsai-csp-utils/colab/update_gcc.sh
import os
os._exit(00)
# This will install CondaColab. This will restart your kernel one last time. Run this cell by itself and only run the next cell once you see the session crash.
import condacolab
condacolab.install()
# you can now run the rest of the cells as normal
import condacolab
condacolab.check()
# Installing RAPIDS is now 'python rapidsai-csp-utils/colab/install_rapids.py <release> <packages>'
# The <release> options are 'stable' and 'nightly'. Leaving it blank or adding any other words will default to stable.
# The <packages> option is blank by default or 'core'. By default, we install RAPIDSAI and BlazingSQL. The 'core' option will install only RAPIDSAI and will not include BlazingSQL.
!python rapidsai-csp-utils/colab/install_rapids.py stable
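# Optional sanity check (assumption, not in the original): after the install finishes,
# the RAPIDS libraries should import cleanly, e.g.:
# import cudf; print(cudf.__version__)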
###Output
Installing RAPIDS Stable 21.06
Starting the RAPIDS+BlazingSQL install on Colab. This will take about 15 minutes.
Collecting package metadata (current_repodata.json): ...working... done
Solving environment: ...working... failed with initial frozen solve. Retrying with flexible solve.
Solving environment: ...working... failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): ...working... done
Solving environment: ...working... done
## Package Plan ##
environment location: /usr/local
added / updated specs:
- cudatoolkit=11.0
- gcsfs
- llvmlite
- openssl
- python=3.7
- rapids-blazing=21.06
The following packages will be downloaded:
package | build
---------------------------|-----------------
abseil-cpp-20210324.1 | h9c3ff4c_0 1015 KB conda-forge
aiohttp-3.7.4.post0 | py37h5e8e339_0 625 KB conda-forge
anyio-3.2.0 | py37h89c1867_0 138 KB conda-forge
appdirs-1.4.4 | pyh9f0ad1d_0 13 KB conda-forge
argon2-cffi-20.1.0 | py37h5e8e339_2 47 KB conda-forge
arrow-cpp-1.0.1 |py37haa335b2_40_cuda 21.1 MB conda-forge
arrow-cpp-proc-3.0.0 | cuda 24 KB conda-forge
async-timeout-3.0.1 | py_1000 11 KB conda-forge
async_generator-1.10 | py_0 18 KB conda-forge
attrs-21.2.0 | pyhd8ed1ab_0 44 KB conda-forge
aws-c-cal-0.5.11 | h95a6274_0 37 KB conda-forge
aws-c-common-0.6.2 | h7f98852_0 168 KB conda-forge
aws-c-event-stream-0.2.7 | h3541f99_13 47 KB conda-forge
aws-c-io-0.10.5 | hfb6a706_0 121 KB conda-forge
aws-checksums-0.1.11 | ha31a3da_7 50 KB conda-forge
aws-sdk-cpp-1.8.186 | hb4091e7_3 4.6 MB conda-forge
backcall-0.2.0 | pyh9f0ad1d_0 13 KB conda-forge
backports-1.0 | py_2 4 KB conda-forge
backports.functools_lru_cache-1.6.4| pyhd8ed1ab_0 9 KB conda-forge
blazingsql-21.06.00 |cuda_11.0_py37_g95ff589f8_0 190.2 MB rapidsai
bleach-3.3.0 | pyh44b312d_0 111 KB conda-forge
blinker-1.4 | py_1 13 KB conda-forge
bokeh-2.2.3 | py37h89c1867_0 7.0 MB conda-forge
boost-1.72.0 | py37h48f8a5e_1 339 KB conda-forge
boost-cpp-1.72.0 | h9d3c048_4 16.3 MB conda-forge
brotli-1.0.9 | h9c3ff4c_4 389 KB conda-forge
ca-certificates-2021.5.30 | ha878542_0 136 KB conda-forge
cachetools-4.2.2 | pyhd8ed1ab_0 12 KB conda-forge
cairo-1.16.0 | h6cf1ce9_1008 1.5 MB conda-forge
certifi-2021.5.30 | py37h89c1867_0 141 KB conda-forge
cfitsio-3.470 | hb418390_7 1.3 MB conda-forge
click-7.1.2 | pyh9f0ad1d_0 64 KB conda-forge
click-plugins-1.1.1 | py_0 9 KB conda-forge
cligj-0.7.2 | pyhd8ed1ab_0 10 KB conda-forge
cloudpickle-1.6.0 | py_0 22 KB conda-forge
colorcet-2.0.6 | pyhd8ed1ab_0 1.5 MB conda-forge
conda-4.10.1 | py37h89c1867_0 3.1 MB conda-forge
cudatoolkit-11.0.221 | h6bb024c_0 953.0 MB nvidia
cudf-21.06.01 |cuda_11.0_py37_g101fc0fda4_2 108.4 MB rapidsai
cudf_kafka-21.06.01 |py37_g101fc0fda4_2 1.7 MB rapidsai
cugraph-21.06.00 | py37_gf9ffd2de_0 65.0 MB rapidsai
cuml-21.06.02 |cuda11.0_py37_g7dfbf8d9e_0 78.9 MB rapidsai
cupy-9.0.0 | py37h4fdb0f7_0 50.3 MB conda-forge
curl-7.77.0 | hea6ffbf_0 149 KB conda-forge
cusignal-21.06.00 | py38_ga78207b_0 1.0 MB rapidsai
cuspatial-21.06.00 | py37_g37798cd_0 15.2 MB rapidsai
custreamz-21.06.01 |py37_g101fc0fda4_2 32 KB rapidsai
cuxfilter-21.06.00 | py37_g9459467_0 136 KB rapidsai
cycler-0.10.0 | py_2 9 KB conda-forge
cyrus-sasl-2.1.27 | h230043b_2 224 KB conda-forge
cytoolz-0.11.0 | py37h5e8e339_3 403 KB conda-forge
dask-2021.5.0 | pyhd8ed1ab_0 4 KB conda-forge
dask-core-2021.5.0 | pyhd8ed1ab_0 735 KB conda-forge
dask-cuda-21.06.00 | py37_0 110 KB rapidsai
dask-cudf-21.06.01 |py37_g101fc0fda4_2 103 KB rapidsai
datashader-0.11.1 | pyh9f0ad1d_0 14.0 MB conda-forge
datashape-0.5.4 | py_1 49 KB conda-forge
decorator-4.4.2 | py_0 11 KB conda-forge
defusedxml-0.7.1 | pyhd8ed1ab_0 23 KB conda-forge
distributed-2021.5.0 | py37h89c1867_0 1.1 MB conda-forge
dlpack-0.5 | h9c3ff4c_0 12 KB conda-forge
entrypoints-0.3 | pyhd8ed1ab_1003 8 KB conda-forge
expat-2.4.1 | h9c3ff4c_0 182 KB conda-forge
faiss-proc-1.0.0 | cuda 24 KB rapidsai
fastavro-1.4.1 | py37h5e8e339_0 496 KB conda-forge
fastrlock-0.6 | py37hcd2ae1e_0 31 KB conda-forge
fiona-1.8.20 | py37ha0cc35a_0 1.1 MB conda-forge
fontconfig-2.13.1 | hba837de_1005 357 KB conda-forge
freetype-2.10.4 | h0708190_1 890 KB conda-forge
freexl-1.0.6 | h7f98852_0 48 KB conda-forge
fsspec-2021.6.0 | pyhd8ed1ab_0 79 KB conda-forge
future-0.18.2 | py37h89c1867_3 714 KB conda-forge
gcsfs-2021.6.0 | pyhd8ed1ab_0 23 KB conda-forge
gdal-3.2.2 | py37hb0e9ad2_0 1.5 MB conda-forge
geopandas-0.9.0 | pyhd8ed1ab_1 5 KB conda-forge
geopandas-base-0.9.0 | pyhd8ed1ab_1 950 KB conda-forge
geos-3.9.1 | h9c3ff4c_2 1.1 MB conda-forge
geotiff-1.6.0 | hcf90da6_5 296 KB conda-forge
gettext-0.19.8.1 | h0b5b191_1005 3.6 MB conda-forge
gflags-2.2.2 | he1b5a44_1004 114 KB conda-forge
giflib-5.2.1 | h36c2ea0_2 77 KB conda-forge
glog-0.5.0 | h48cff8f_0 104 KB conda-forge
google-auth-1.30.2 | pyh6c4a22f_0 77 KB conda-forge
google-auth-oauthlib-0.4.4 | pyhd8ed1ab_0 19 KB conda-forge
google-cloud-cpp-1.28.0 | hbd34f9f_0 9.3 MB conda-forge
greenlet-1.1.0 | py37hcd2ae1e_0 83 KB conda-forge
grpc-cpp-1.38.0 | h2519f57_0 3.6 MB conda-forge
hdf4-4.2.15 | h10796ff_3 950 KB conda-forge
hdf5-1.10.6 |nompi_h6a2412b_1114 3.1 MB conda-forge
heapdict-1.0.1 | py_0 7 KB conda-forge
importlib-metadata-4.5.0 | py37h89c1867_0 31 KB conda-forge
ipykernel-5.5.5 | py37h085eea5_0 167 KB conda-forge
ipython-7.24.1 | py37h085eea5_0 1.1 MB conda-forge
ipython_genutils-0.2.0 | py_1 21 KB conda-forge
ipywidgets-7.6.3 | pyhd3deb0d_0 101 KB conda-forge
jedi-0.18.0 | py37h89c1867_2 923 KB conda-forge
jinja2-3.0.1 | pyhd8ed1ab_0 99 KB conda-forge
joblib-1.0.1 | pyhd8ed1ab_0 206 KB conda-forge
jpeg-9d | h36c2ea0_0 264 KB conda-forge
jpype1-1.3.0 | py37h2527ec5_0 482 KB conda-forge
json-c-0.15 | h98cffda_0 274 KB conda-forge
jsonschema-3.2.0 | pyhd8ed1ab_3 45 KB conda-forge
jupyter-server-proxy-3.0.2 | pyhd8ed1ab_0 27 KB conda-forge
jupyter_client-6.1.12 | pyhd8ed1ab_0 79 KB conda-forge
jupyter_core-4.7.1 | py37h89c1867_0 72 KB conda-forge
jupyter_server-1.8.0 | pyhd8ed1ab_0 255 KB conda-forge
jupyterlab_pygments-0.1.2 | pyh9f0ad1d_0 8 KB conda-forge
jupyterlab_widgets-1.0.0 | pyhd8ed1ab_1 130 KB conda-forge
kealib-1.4.14 | hcc255d8_2 186 KB conda-forge
kiwisolver-1.3.1 | py37h2527ec5_1 78 KB conda-forge
krb5-1.19.1 | hcc1bbae_0 1.4 MB conda-forge
lcms2-2.12 | hddcbb42_0 443 KB conda-forge
libblas-3.9.0 | 9_openblas 11 KB conda-forge
libcblas-3.9.0 | 9_openblas 11 KB conda-forge
libcrc32c-1.1.1 | h9c3ff4c_2 20 KB conda-forge
libcudf-21.06.01 |cuda11.0_g101fc0fda4_2 187.7 MB rapidsai
libcudf_kafka-21.06.01 | g101fc0fda4_2 125 KB rapidsai
libcugraph-21.06.00 |cuda11.0_gf9ffd2de_0 213.6 MB rapidsai
libcuml-21.06.02 |cuda11.0_g7dfbf8d9e_0 95.2 MB rapidsai
libcumlprims-21.06.00 |cuda11.0_gfda2e6c_0 1.1 MB nvidia
libcurl-7.77.0 | h2574ce0_0 334 KB conda-forge
libcuspatial-21.06.00 |cuda11.0_g37798cd_0 7.6 MB rapidsai
libdap4-3.20.6 | hd7c4107_2 11.3 MB conda-forge
libevent-2.1.10 | hcdb4288_3 1.1 MB conda-forge
libfaiss-1.7.0 |cuda110h8045045_8_cuda 67.0 MB conda-forge
libgcrypt-1.9.3 | h7f98852_1 677 KB conda-forge
libgdal-3.2.2 | h804b7da_0 13.2 MB conda-forge
libgfortran-ng-9.3.0 | hff62375_19 22 KB conda-forge
libgfortran5-9.3.0 | hff62375_19 2.0 MB conda-forge
libglib-2.68.3 | h3e27bee_0 3.1 MB conda-forge
libgpg-error-1.42 | h9c3ff4c_0 278 KB conda-forge
libgsasl-1.8.0 | 2 125 KB conda-forge
libhwloc-2.3.0 | h5e5b7d1_1 2.7 MB conda-forge
libkml-1.3.0 | hd79254b_1012 640 KB conda-forge
liblapack-3.9.0 | 9_openblas 11 KB conda-forge
libllvm10-10.0.1 | he513fc3_3 26.4 MB conda-forge
libnetcdf-4.7.4 |nompi_h56d31a8_107 1.3 MB conda-forge
libntlm-1.4 | h7f98852_1002 32 KB conda-forge
libopenblas-0.3.15 |pthreads_h8fe5266_1 9.2 MB conda-forge
libpng-1.6.37 | h21135ba_2 306 KB conda-forge
libpq-13.3 | hd57d9b9_0 2.7 MB conda-forge
libprotobuf-3.16.0 | h780b84a_0 2.5 MB conda-forge
librdkafka-1.5.3 | hc49e61c_1 11.2 MB conda-forge
librmm-21.06.00 |cuda11.0_gee432a0_0 57 KB rapidsai
librttopo-1.1.0 | h1185371_6 235 KB conda-forge
libsodium-1.0.18 | h36c2ea0_1 366 KB conda-forge
libspatialindex-1.9.3 | h9c3ff4c_3 4.6 MB conda-forge
libspatialite-5.0.1 | h20cb978_4 4.4 MB conda-forge
libthrift-0.14.1 | he6d91bd_2 4.5 MB conda-forge
libtiff-4.2.0 | hbd63e13_2 639 KB conda-forge
libutf8proc-2.6.1 | h7f98852_0 95 KB conda-forge
libuuid-2.32.1 | h7f98852_1000 28 KB conda-forge
libuv-1.41.0 | h7f98852_0 1.0 MB conda-forge
libwebp-1.2.0 | h3452ae3_0 85 KB conda-forge
libwebp-base-1.2.0 | h7f98852_2 815 KB conda-forge
libxcb-1.13 | h7f98852_1003 395 KB conda-forge
libxgboost-1.4.2dev.rapidsai21.06| cuda11.0_0 115.3 MB rapidsai
libxml2-2.9.12 | h72842e0_0 772 KB conda-forge
llvmlite-0.36.0 | py37h9d7f4d0_0 2.7 MB conda-forge
locket-0.2.0 | py_2 6 KB conda-forge
mapclassify-2.4.2 | pyhd8ed1ab_0 36 KB conda-forge
markdown-3.3.4 | pyhd8ed1ab_0 67 KB conda-forge
markupsafe-2.0.1 | py37h5e8e339_0 22 KB conda-forge
matplotlib-base-3.4.2 | py37hdd32ed1_0 7.2 MB conda-forge
matplotlib-inline-0.1.2 | pyhd8ed1ab_2 11 KB conda-forge
mistune-0.8.4 |py37h5e8e339_1003 54 KB conda-forge
msgpack-python-1.0.2 | py37h2527ec5_1 91 KB conda-forge
multidict-5.1.0 | py37h5e8e339_1 67 KB conda-forge
multipledispatch-0.6.0 | py_0 12 KB conda-forge
munch-2.5.0 | py_0 12 KB conda-forge
nbclient-0.5.3 | pyhd8ed1ab_0 67 KB conda-forge
nbconvert-6.1.0 | py37h89c1867_0 548 KB conda-forge
nbformat-5.1.3 | pyhd8ed1ab_0 47 KB conda-forge
nccl-2.9.9.1 | h96e36e3_0 82.3 MB conda-forge
nest-asyncio-1.5.1 | pyhd8ed1ab_0 9 KB conda-forge
netifaces-0.10.9 |py37h5e8e339_1003 17 KB conda-forge
networkx-2.5.1 | pyhd8ed1ab_0 1.2 MB conda-forge
nlohmann_json-3.9.1 | h9c3ff4c_1 122 KB conda-forge
nodejs-14.15.4 | h92b4a50_1 15.7 MB conda-forge
notebook-6.4.0 | pyha770c72_0 6.1 MB conda-forge
numba-0.53.1 | py37hb11d6e1_1 3.7 MB conda-forge
numpy-1.21.0 | py37h038b26d_0 6.1 MB conda-forge
nvtx-0.2.3 | py37h5e8e339_0 55 KB conda-forge
oauthlib-3.1.1 | pyhd8ed1ab_0 87 KB conda-forge
olefile-0.46 | pyh9f0ad1d_1 32 KB conda-forge
openjdk-8.0.282 | h7f98852_0 99.3 MB conda-forge
openjpeg-2.4.0 | hb52868f_1 444 KB conda-forge
openssl-1.1.1k | h7f98852_0 2.1 MB conda-forge
orc-1.6.7 | h89a63ab_2 751 KB conda-forge
packaging-20.9 | pyh44b312d_0 35 KB conda-forge
pandas-1.2.5 | py37h219a48f_0 11.8 MB conda-forge
pandoc-2.14.0.3 | h7f98852_0 12.0 MB conda-forge
pandocfilters-1.4.2 | py_1 9 KB conda-forge
panel-0.10.3 | pyhd8ed1ab_0 6.1 MB conda-forge
param-1.10.1 | pyhd3deb0d_0 64 KB conda-forge
parquet-cpp-1.5.1 | 2 3 KB conda-forge
parso-0.8.2 | pyhd8ed1ab_0 68 KB conda-forge
partd-1.2.0 | pyhd8ed1ab_0 18 KB conda-forge
pcre-8.45 | h9c3ff4c_0 253 KB conda-forge
pexpect-4.8.0 | pyh9f0ad1d_2 47 KB conda-forge
pickle5-0.0.11 | py37h5e8e339_0 173 KB conda-forge
pickleshare-0.7.5 | py_1003 9 KB conda-forge
pillow-8.2.0 | py37h4600e1f_1 684 KB conda-forge
pixman-0.40.0 | h36c2ea0_0 627 KB conda-forge
poppler-21.03.0 | h93df280_0 15.9 MB conda-forge
poppler-data-0.4.10 | 0 3.8 MB conda-forge
postgresql-13.3 | h2510834_0 5.3 MB conda-forge
proj-8.0.0 | h277dcde_0 3.1 MB conda-forge
prometheus_client-0.11.0 | pyhd8ed1ab_0 46 KB conda-forge
prompt-toolkit-3.0.19 | pyha770c72_0 244 KB conda-forge
protobuf-3.16.0 | py37hcd2ae1e_0 342 KB conda-forge
psutil-5.8.0 | py37h5e8e339_1 342 KB conda-forge
pthread-stubs-0.4 | h36c2ea0_1001 5 KB conda-forge
ptyprocess-0.7.0 | pyhd3deb0d_0 16 KB conda-forge
py-xgboost-1.4.2dev.rapidsai21.06| cuda11.0py37_0 151 KB rapidsai
pyarrow-1.0.1 |py37hb63ea2f_40_cuda 2.4 MB conda-forge
pyasn1-0.4.8 | py_0 53 KB conda-forge
pyasn1-modules-0.2.7 | py_0 60 KB conda-forge
pyct-0.4.6 | py_0 3 KB conda-forge
pyct-core-0.4.6 | py_0 13 KB conda-forge
pydeck-0.5.0 | pyh9f0ad1d_0 3.6 MB conda-forge
pyee-7.0.4 | pyh9f0ad1d_0 14 KB conda-forge
pygments-2.9.0 | pyhd8ed1ab_0 754 KB conda-forge
pyhive-0.6.4 | pyhd8ed1ab_0 39 KB conda-forge
pyjwt-2.1.0 | pyhd8ed1ab_0 17 KB conda-forge
pynvml-11.0.0 | pyhd8ed1ab_0 39 KB conda-forge
pyparsing-2.4.7 | pyh9f0ad1d_0 60 KB conda-forge
pyppeteer-0.2.2 | py_1 104 KB conda-forge
pyproj-3.0.1 | py37h2bb2a07_1 484 KB conda-forge
pyrsistent-0.17.3 | py37h5e8e339_2 89 KB conda-forge
python-confluent-kafka-1.5.0| py37h8f50634_0 122 KB conda-forge
python-dateutil-2.8.1 | py_0 220 KB conda-forge
python_abi-3.7 | 2_cp37m 4 KB conda-forge
pytz-2021.1 | pyhd8ed1ab_0 239 KB conda-forge
pyu2f-0.1.5 | pyhd8ed1ab_0 31 KB conda-forge
pyviz_comms-2.0.2 | pyhd8ed1ab_0 25 KB conda-forge
pyyaml-5.4.1 | py37h5e8e339_0 189 KB conda-forge
pyzmq-22.1.0 | py37h336d617_0 500 KB conda-forge
rapids-21.06.00 |cuda11.0_py37_ge3c8282_427 5 KB rapidsai
rapids-blazing-21.06.00 |cuda11.0_py37_ge3c8282_427 5 KB rapidsai
rapids-xgboost-21.06.00 |cuda11.0_py37_ge3c8282_427 4 KB rapidsai
re2-2021.04.01 | h9c3ff4c_0 218 KB conda-forge
readline-8.1 | h46c0cb4_0 295 KB conda-forge
requests-oauthlib-1.3.0 | pyh9f0ad1d_0 21 KB conda-forge
rmm-21.06.00 |cuda_11.0_py37_gee432a0_0 7.0 MB rapidsai
rsa-4.7.2 | pyh44b312d_0 28 KB conda-forge
rtree-0.9.7 | py37h0b55af0_1 45 KB conda-forge
s2n-1.0.10 | h9b69904_0 442 KB conda-forge
sasl-0.3a1 | py37hcd2ae1e_0 74 KB conda-forge
scikit-learn-0.24.2 | py37h18a542f_0 7.5 MB conda-forge
scipy-1.6.3 | py37h29e03ee_0 20.5 MB conda-forge
send2trash-1.7.1 | pyhd8ed1ab_0 17 KB conda-forge
shapely-1.7.1 | py37h2d1e849_5 438 KB conda-forge
simpervisor-0.4 | pyhd8ed1ab_0 9 KB conda-forge
snappy-1.1.8 | he1b5a44_3 32 KB conda-forge
sniffio-1.2.0 | py37h89c1867_1 15 KB conda-forge
sortedcontainers-2.4.0 | pyhd8ed1ab_0 26 KB conda-forge
spdlog-1.8.5 | h4bd325d_0 353 KB conda-forge
sqlalchemy-1.4.19 | py37h5e8e339_0 2.3 MB conda-forge
streamz-0.6.2 | pyh44b312d_0 59 KB conda-forge
tblib-1.7.0 | pyhd8ed1ab_0 15 KB conda-forge
terminado-0.10.1 | py37h89c1867_0 26 KB conda-forge
testpath-0.5.0 | pyhd8ed1ab_0 86 KB conda-forge
threadpoolctl-2.1.0 | pyh5ca1d4c_0 15 KB conda-forge
thrift-0.13.0 | py37hcd2ae1e_2 120 KB conda-forge
thrift_sasl-0.4.2 | py37h8f50634_0 14 KB conda-forge
tiledb-2.2.9 | h91fcb0e_0 4.0 MB conda-forge
toolz-0.11.1 | py_0 46 KB conda-forge
tornado-6.1 | py37h5e8e339_1 646 KB conda-forge
traitlets-5.0.5 | py_0 81 KB conda-forge
treelite-1.3.0 | py37hfdac9b6_0 2.7 MB conda-forge
typing-extensions-3.10.0.0 | hd8ed1ab_0 8 KB conda-forge
typing_extensions-3.10.0.0 | pyha770c72_0 28 KB conda-forge
tzcode-2021a | h7f98852_1 68 KB conda-forge
tzdata-2021a | he74cb21_0 121 KB conda-forge
ucx-1.9.0+gcd9efd3 | cuda11.0_0 8.2 MB rapidsai
ucx-proc-1.0.0 | gpu 9 KB rapidsai
ucx-py-0.20.0 | py37_gcd9efd3_0 294 KB rapidsai
wcwidth-0.2.5 | pyh9f0ad1d_2 33 KB conda-forge
webencodings-0.5.1 | py_1 12 KB conda-forge
websocket-client-0.57.0 | py37h89c1867_4 59 KB conda-forge
websockets-8.1 | py37h5e8e339_3 90 KB conda-forge
widgetsnbextension-3.5.1 | py37h89c1867_4 1.8 MB conda-forge
xarray-0.18.2 | pyhd8ed1ab_0 599 KB conda-forge
xerces-c-3.2.3 | h9d8b166_2 1.8 MB conda-forge
xgboost-1.4.2dev.rapidsai21.06| cuda11.0py37_0 17 KB rapidsai
xorg-kbproto-1.0.7 | h7f98852_1002 27 KB conda-forge
xorg-libice-1.0.10 | h7f98852_0 58 KB conda-forge
xorg-libsm-1.2.3 | hd9c2040_1000 26 KB conda-forge
xorg-libx11-1.7.2 | h7f98852_0 941 KB conda-forge
xorg-libxau-1.0.9 | h7f98852_0 13 KB conda-forge
xorg-libxdmcp-1.1.3 | h7f98852_0 19 KB conda-forge
xorg-libxext-1.3.4 | h7f98852_1 54 KB conda-forge
xorg-libxrender-0.9.10 | h7f98852_1003 32 KB conda-forge
xorg-renderproto-0.11.1 | h7f98852_1002 9 KB conda-forge
xorg-xextproto-7.3.0 | h7f98852_1002 28 KB conda-forge
xorg-xproto-7.0.31 | h7f98852_1007 73 KB conda-forge
yarl-1.6.3 | py37h5e8e339_1 141 KB conda-forge
zeromq-4.3.4 | h9c3ff4c_0 352 KB conda-forge
zict-2.0.0 | py_0 10 KB conda-forge
zipp-3.4.1 | pyhd8ed1ab_0 11 KB conda-forge
------------------------------------------------------------
Total: 2.67 GB
The following NEW packages will be INSTALLED:
abseil-cpp conda-forge/linux-64::abseil-cpp-20210324.1-h9c3ff4c_0
aiohttp conda-forge/linux-64::aiohttp-3.7.4.post0-py37h5e8e339_0
anyio conda-forge/linux-64::anyio-3.2.0-py37h89c1867_0
appdirs conda-forge/noarch::appdirs-1.4.4-pyh9f0ad1d_0
argon2-cffi conda-forge/linux-64::argon2-cffi-20.1.0-py37h5e8e339_2
arrow-cpp conda-forge/linux-64::arrow-cpp-1.0.1-py37haa335b2_40_cuda
arrow-cpp-proc conda-forge/linux-64::arrow-cpp-proc-3.0.0-cuda
async-timeout conda-forge/noarch::async-timeout-3.0.1-py_1000
async_generator conda-forge/noarch::async_generator-1.10-py_0
attrs conda-forge/noarch::attrs-21.2.0-pyhd8ed1ab_0
aws-c-cal conda-forge/linux-64::aws-c-cal-0.5.11-h95a6274_0
aws-c-common conda-forge/linux-64::aws-c-common-0.6.2-h7f98852_0
aws-c-event-stream conda-forge/linux-64::aws-c-event-stream-0.2.7-h3541f99_13
aws-c-io conda-forge/linux-64::aws-c-io-0.10.5-hfb6a706_0
aws-checksums conda-forge/linux-64::aws-checksums-0.1.11-ha31a3da_7
aws-sdk-cpp conda-forge/linux-64::aws-sdk-cpp-1.8.186-hb4091e7_3
backcall conda-forge/noarch::backcall-0.2.0-pyh9f0ad1d_0
backports conda-forge/noarch::backports-1.0-py_2
backports.functoo~ conda-forge/noarch::backports.functools_lru_cache-1.6.4-pyhd8ed1ab_0
blazingsql rapidsai/linux-64::blazingsql-21.06.00-cuda_11.0_py37_g95ff589f8_0
bleach conda-forge/noarch::bleach-3.3.0-pyh44b312d_0
blinker conda-forge/noarch::blinker-1.4-py_1
bokeh conda-forge/linux-64::bokeh-2.2.3-py37h89c1867_0
boost conda-forge/linux-64::boost-1.72.0-py37h48f8a5e_1
boost-cpp conda-forge/linux-64::boost-cpp-1.72.0-h9d3c048_4
brotli conda-forge/linux-64::brotli-1.0.9-h9c3ff4c_4
cachetools conda-forge/noarch::cachetools-4.2.2-pyhd8ed1ab_0
cairo conda-forge/linux-64::cairo-1.16.0-h6cf1ce9_1008
cfitsio conda-forge/linux-64::cfitsio-3.470-hb418390_7
click conda-forge/noarch::click-7.1.2-pyh9f0ad1d_0
click-plugins conda-forge/noarch::click-plugins-1.1.1-py_0
cligj conda-forge/noarch::cligj-0.7.2-pyhd8ed1ab_0
cloudpickle conda-forge/noarch::cloudpickle-1.6.0-py_0
colorcet conda-forge/noarch::colorcet-2.0.6-pyhd8ed1ab_0
cudatoolkit nvidia/linux-64::cudatoolkit-11.0.221-h6bb024c_0
cudf rapidsai/linux-64::cudf-21.06.01-cuda_11.0_py37_g101fc0fda4_2
cudf_kafka rapidsai/linux-64::cudf_kafka-21.06.01-py37_g101fc0fda4_2
cugraph rapidsai/linux-64::cugraph-21.06.00-py37_gf9ffd2de_0
cuml rapidsai/linux-64::cuml-21.06.02-cuda11.0_py37_g7dfbf8d9e_0
cupy conda-forge/linux-64::cupy-9.0.0-py37h4fdb0f7_0
curl conda-forge/linux-64::curl-7.77.0-hea6ffbf_0
cusignal rapidsai/noarch::cusignal-21.06.00-py38_ga78207b_0
cuspatial rapidsai/linux-64::cuspatial-21.06.00-py37_g37798cd_0
custreamz rapidsai/linux-64::custreamz-21.06.01-py37_g101fc0fda4_2
cuxfilter rapidsai/linux-64::cuxfilter-21.06.00-py37_g9459467_0
cycler conda-forge/noarch::cycler-0.10.0-py_2
cyrus-sasl conda-forge/linux-64::cyrus-sasl-2.1.27-h230043b_2
cytoolz conda-forge/linux-64::cytoolz-0.11.0-py37h5e8e339_3
dask conda-forge/noarch::dask-2021.5.0-pyhd8ed1ab_0
dask-core conda-forge/noarch::dask-core-2021.5.0-pyhd8ed1ab_0
dask-cuda rapidsai/linux-64::dask-cuda-21.06.00-py37_0
dask-cudf rapidsai/linux-64::dask-cudf-21.06.01-py37_g101fc0fda4_2
datashader conda-forge/noarch::datashader-0.11.1-pyh9f0ad1d_0
datashape conda-forge/noarch::datashape-0.5.4-py_1
decorator conda-forge/noarch::decorator-4.4.2-py_0
defusedxml conda-forge/noarch::defusedxml-0.7.1-pyhd8ed1ab_0
distributed conda-forge/linux-64::distributed-2021.5.0-py37h89c1867_0
dlpack conda-forge/linux-64::dlpack-0.5-h9c3ff4c_0
entrypoints conda-forge/noarch::entrypoints-0.3-pyhd8ed1ab_1003
expat conda-forge/linux-64::expat-2.4.1-h9c3ff4c_0
faiss-proc rapidsai/linux-64::faiss-proc-1.0.0-cuda
fastavro conda-forge/linux-64::fastavro-1.4.1-py37h5e8e339_0
fastrlock conda-forge/linux-64::fastrlock-0.6-py37hcd2ae1e_0
fiona conda-forge/linux-64::fiona-1.8.20-py37ha0cc35a_0
fontconfig conda-forge/linux-64::fontconfig-2.13.1-hba837de_1005
freetype conda-forge/linux-64::freetype-2.10.4-h0708190_1
freexl conda-forge/linux-64::freexl-1.0.6-h7f98852_0
fsspec conda-forge/noarch::fsspec-2021.6.0-pyhd8ed1ab_0
future conda-forge/linux-64::future-0.18.2-py37h89c1867_3
gcsfs conda-forge/noarch::gcsfs-2021.6.0-pyhd8ed1ab_0
gdal conda-forge/linux-64::gdal-3.2.2-py37hb0e9ad2_0
geopandas conda-forge/noarch::geopandas-0.9.0-pyhd8ed1ab_1
geopandas-base conda-forge/noarch::geopandas-base-0.9.0-pyhd8ed1ab_1
geos conda-forge/linux-64::geos-3.9.1-h9c3ff4c_2
geotiff conda-forge/linux-64::geotiff-1.6.0-hcf90da6_5
gettext conda-forge/linux-64::gettext-0.19.8.1-h0b5b191_1005
gflags conda-forge/linux-64::gflags-2.2.2-he1b5a44_1004
giflib conda-forge/linux-64::giflib-5.2.1-h36c2ea0_2
glog conda-forge/linux-64::glog-0.5.0-h48cff8f_0
google-auth conda-forge/noarch::google-auth-1.30.2-pyh6c4a22f_0
google-auth-oauth~ conda-forge/noarch::google-auth-oauthlib-0.4.4-pyhd8ed1ab_0
google-cloud-cpp conda-forge/linux-64::google-cloud-cpp-1.28.0-hbd34f9f_0
greenlet conda-forge/linux-64::greenlet-1.1.0-py37hcd2ae1e_0
grpc-cpp conda-forge/linux-64::grpc-cpp-1.38.0-h2519f57_0
hdf4 conda-forge/linux-64::hdf4-4.2.15-h10796ff_3
hdf5 conda-forge/linux-64::hdf5-1.10.6-nompi_h6a2412b_1114
heapdict conda-forge/noarch::heapdict-1.0.1-py_0
importlib-metadata conda-forge/linux-64::importlib-metadata-4.5.0-py37h89c1867_0
ipykernel conda-forge/linux-64::ipykernel-5.5.5-py37h085eea5_0
ipython conda-forge/linux-64::ipython-7.24.1-py37h085eea5_0
ipython_genutils conda-forge/noarch::ipython_genutils-0.2.0-py_1
ipywidgets conda-forge/noarch::ipywidgets-7.6.3-pyhd3deb0d_0
jedi conda-forge/linux-64::jedi-0.18.0-py37h89c1867_2
jinja2 conda-forge/noarch::jinja2-3.0.1-pyhd8ed1ab_0
joblib conda-forge/noarch::joblib-1.0.1-pyhd8ed1ab_0
jpeg conda-forge/linux-64::jpeg-9d-h36c2ea0_0
jpype1 conda-forge/linux-64::jpype1-1.3.0-py37h2527ec5_0
json-c conda-forge/linux-64::json-c-0.15-h98cffda_0
jsonschema conda-forge/noarch::jsonschema-3.2.0-pyhd8ed1ab_3
jupyter-server-pr~ conda-forge/noarch::jupyter-server-proxy-3.0.2-pyhd8ed1ab_0
jupyter_client conda-forge/noarch::jupyter_client-6.1.12-pyhd8ed1ab_0
jupyter_core conda-forge/linux-64::jupyter_core-4.7.1-py37h89c1867_0
jupyter_server conda-forge/noarch::jupyter_server-1.8.0-pyhd8ed1ab_0
jupyterlab_pygmen~ conda-forge/noarch::jupyterlab_pygments-0.1.2-pyh9f0ad1d_0
jupyterlab_widgets conda-forge/noarch::jupyterlab_widgets-1.0.0-pyhd8ed1ab_1
kealib conda-forge/linux-64::kealib-1.4.14-hcc255d8_2
kiwisolver conda-forge/linux-64::kiwisolver-1.3.1-py37h2527ec5_1
lcms2 conda-forge/linux-64::lcms2-2.12-hddcbb42_0
libblas conda-forge/linux-64::libblas-3.9.0-9_openblas
libcblas conda-forge/linux-64::libcblas-3.9.0-9_openblas
libcrc32c conda-forge/linux-64::libcrc32c-1.1.1-h9c3ff4c_2
libcudf rapidsai/linux-64::libcudf-21.06.01-cuda11.0_g101fc0fda4_2
libcudf_kafka rapidsai/linux-64::libcudf_kafka-21.06.01-g101fc0fda4_2
libcugraph rapidsai/linux-64::libcugraph-21.06.00-cuda11.0_gf9ffd2de_0
libcuml rapidsai/linux-64::libcuml-21.06.02-cuda11.0_g7dfbf8d9e_0
libcumlprims nvidia/linux-64::libcumlprims-21.06.00-cuda11.0_gfda2e6c_0
libcuspatial rapidsai/linux-64::libcuspatial-21.06.00-cuda11.0_g37798cd_0
libdap4 conda-forge/linux-64::libdap4-3.20.6-hd7c4107_2
libevent conda-forge/linux-64::libevent-2.1.10-hcdb4288_3
libfaiss conda-forge/linux-64::libfaiss-1.7.0-cuda110h8045045_8_cuda
libgcrypt conda-forge/linux-64::libgcrypt-1.9.3-h7f98852_1
libgdal conda-forge/linux-64::libgdal-3.2.2-h804b7da_0
libgfortran-ng conda-forge/linux-64::libgfortran-ng-9.3.0-hff62375_19
libgfortran5 conda-forge/linux-64::libgfortran5-9.3.0-hff62375_19
libglib conda-forge/linux-64::libglib-2.68.3-h3e27bee_0
libgpg-error conda-forge/linux-64::libgpg-error-1.42-h9c3ff4c_0
libgsasl conda-forge/linux-64::libgsasl-1.8.0-2
libhwloc conda-forge/linux-64::libhwloc-2.3.0-h5e5b7d1_1
libkml conda-forge/linux-64::libkml-1.3.0-hd79254b_1012
liblapack conda-forge/linux-64::liblapack-3.9.0-9_openblas
libllvm10 conda-forge/linux-64::libllvm10-10.0.1-he513fc3_3
libnetcdf conda-forge/linux-64::libnetcdf-4.7.4-nompi_h56d31a8_107
libntlm conda-forge/linux-64::libntlm-1.4-h7f98852_1002
libopenblas conda-forge/linux-64::libopenblas-0.3.15-pthreads_h8fe5266_1
libpng conda-forge/linux-64::libpng-1.6.37-h21135ba_2
libpq conda-forge/linux-64::libpq-13.3-hd57d9b9_0
libprotobuf conda-forge/linux-64::libprotobuf-3.16.0-h780b84a_0
librdkafka conda-forge/linux-64::librdkafka-1.5.3-hc49e61c_1
librmm rapidsai/linux-64::librmm-21.06.00-cuda11.0_gee432a0_0
librttopo conda-forge/linux-64::librttopo-1.1.0-h1185371_6
libsodium conda-forge/linux-64::libsodium-1.0.18-h36c2ea0_1
libspatialindex conda-forge/linux-64::libspatialindex-1.9.3-h9c3ff4c_3
libspatialite conda-forge/linux-64::libspatialite-5.0.1-h20cb978_4
libthrift conda-forge/linux-64::libthrift-0.14.1-he6d91bd_2
libtiff conda-forge/linux-64::libtiff-4.2.0-hbd63e13_2
libutf8proc conda-forge/linux-64::libutf8proc-2.6.1-h7f98852_0
libuuid conda-forge/linux-64::libuuid-2.32.1-h7f98852_1000
libuv conda-forge/linux-64::libuv-1.41.0-h7f98852_0
libwebp conda-forge/linux-64::libwebp-1.2.0-h3452ae3_0
libwebp-base conda-forge/linux-64::libwebp-base-1.2.0-h7f98852_2
libxcb conda-forge/linux-64::libxcb-1.13-h7f98852_1003
libxgboost rapidsai/linux-64::libxgboost-1.4.2dev.rapidsai21.06-cuda11.0_0
llvmlite conda-forge/linux-64::llvmlite-0.36.0-py37h9d7f4d0_0
locket conda-forge/noarch::locket-0.2.0-py_2
mapclassify conda-forge/noarch::mapclassify-2.4.2-pyhd8ed1ab_0
markdown conda-forge/noarch::markdown-3.3.4-pyhd8ed1ab_0
markupsafe conda-forge/linux-64::markupsafe-2.0.1-py37h5e8e339_0
matplotlib-base conda-forge/linux-64::matplotlib-base-3.4.2-py37hdd32ed1_0
matplotlib-inline conda-forge/noarch::matplotlib-inline-0.1.2-pyhd8ed1ab_2
mistune conda-forge/linux-64::mistune-0.8.4-py37h5e8e339_1003
msgpack-python conda-forge/linux-64::msgpack-python-1.0.2-py37h2527ec5_1
multidict conda-forge/linux-64::multidict-5.1.0-py37h5e8e339_1
multipledispatch conda-forge/noarch::multipledispatch-0.6.0-py_0
munch conda-forge/noarch::munch-2.5.0-py_0
nbclient conda-forge/noarch::nbclient-0.5.3-pyhd8ed1ab_0
nbconvert conda-forge/linux-64::nbconvert-6.1.0-py37h89c1867_0
nbformat conda-forge/noarch::nbformat-5.1.3-pyhd8ed1ab_0
nccl conda-forge/linux-64::nccl-2.9.9.1-h96e36e3_0
nest-asyncio conda-forge/noarch::nest-asyncio-1.5.1-pyhd8ed1ab_0
netifaces conda-forge/linux-64::netifaces-0.10.9-py37h5e8e339_1003
networkx conda-forge/noarch::networkx-2.5.1-pyhd8ed1ab_0
nlohmann_json conda-forge/linux-64::nlohmann_json-3.9.1-h9c3ff4c_1
nodejs conda-forge/linux-64::nodejs-14.15.4-h92b4a50_1
notebook conda-forge/noarch::notebook-6.4.0-pyha770c72_0
numba conda-forge/linux-64::numba-0.53.1-py37hb11d6e1_1
numpy conda-forge/linux-64::numpy-1.21.0-py37h038b26d_0
nvtx conda-forge/linux-64::nvtx-0.2.3-py37h5e8e339_0
oauthlib conda-forge/noarch::oauthlib-3.1.1-pyhd8ed1ab_0
olefile conda-forge/noarch::olefile-0.46-pyh9f0ad1d_1
openjdk conda-forge/linux-64::openjdk-8.0.282-h7f98852_0
openjpeg conda-forge/linux-64::openjpeg-2.4.0-hb52868f_1
orc conda-forge/linux-64::orc-1.6.7-h89a63ab_2
packaging conda-forge/noarch::packaging-20.9-pyh44b312d_0
pandas conda-forge/linux-64::pandas-1.2.5-py37h219a48f_0
pandoc conda-forge/linux-64::pandoc-2.14.0.3-h7f98852_0
pandocfilters conda-forge/noarch::pandocfilters-1.4.2-py_1
panel conda-forge/noarch::panel-0.10.3-pyhd8ed1ab_0
param conda-forge/noarch::param-1.10.1-pyhd3deb0d_0
parquet-cpp conda-forge/noarch::parquet-cpp-1.5.1-2
parso conda-forge/noarch::parso-0.8.2-pyhd8ed1ab_0
partd conda-forge/noarch::partd-1.2.0-pyhd8ed1ab_0
pcre conda-forge/linux-64::pcre-8.45-h9c3ff4c_0
pexpect conda-forge/noarch::pexpect-4.8.0-pyh9f0ad1d_2
pickle5 conda-forge/linux-64::pickle5-0.0.11-py37h5e8e339_0
pickleshare conda-forge/noarch::pickleshare-0.7.5-py_1003
pillow conda-forge/linux-64::pillow-8.2.0-py37h4600e1f_1
pixman conda-forge/linux-64::pixman-0.40.0-h36c2ea0_0
poppler conda-forge/linux-64::poppler-21.03.0-h93df280_0
poppler-data conda-forge/noarch::poppler-data-0.4.10-0
postgresql conda-forge/linux-64::postgresql-13.3-h2510834_0
proj conda-forge/linux-64::proj-8.0.0-h277dcde_0
prometheus_client conda-forge/noarch::prometheus_client-0.11.0-pyhd8ed1ab_0
prompt-toolkit conda-forge/noarch::prompt-toolkit-3.0.19-pyha770c72_0
protobuf conda-forge/linux-64::protobuf-3.16.0-py37hcd2ae1e_0
psutil conda-forge/linux-64::psutil-5.8.0-py37h5e8e339_1
pthread-stubs conda-forge/linux-64::pthread-stubs-0.4-h36c2ea0_1001
ptyprocess conda-forge/noarch::ptyprocess-0.7.0-pyhd3deb0d_0
py-xgboost rapidsai/linux-64::py-xgboost-1.4.2dev.rapidsai21.06-cuda11.0py37_0
pyarrow conda-forge/linux-64::pyarrow-1.0.1-py37hb63ea2f_40_cuda
pyasn1 conda-forge/noarch::pyasn1-0.4.8-py_0
pyasn1-modules conda-forge/noarch::pyasn1-modules-0.2.7-py_0
pyct conda-forge/noarch::pyct-0.4.6-py_0
pyct-core conda-forge/noarch::pyct-core-0.4.6-py_0
pydeck conda-forge/noarch::pydeck-0.5.0-pyh9f0ad1d_0
pyee conda-forge/noarch::pyee-7.0.4-pyh9f0ad1d_0
pygments conda-forge/noarch::pygments-2.9.0-pyhd8ed1ab_0
pyhive conda-forge/noarch::pyhive-0.6.4-pyhd8ed1ab_0
pyjwt conda-forge/noarch::pyjwt-2.1.0-pyhd8ed1ab_0
pynvml conda-forge/noarch::pynvml-11.0.0-pyhd8ed1ab_0
pyparsing conda-forge/noarch::pyparsing-2.4.7-pyh9f0ad1d_0
pyppeteer conda-forge/noarch::pyppeteer-0.2.2-py_1
pyproj conda-forge/linux-64::pyproj-3.0.1-py37h2bb2a07_1
pyrsistent conda-forge/linux-64::pyrsistent-0.17.3-py37h5e8e339_2
python-confluent-~ conda-forge/linux-64::python-confluent-kafka-1.5.0-py37h8f50634_0
python-dateutil conda-forge/noarch::python-dateutil-2.8.1-py_0
pytz conda-forge/noarch::pytz-2021.1-pyhd8ed1ab_0
pyu2f conda-forge/noarch::pyu2f-0.1.5-pyhd8ed1ab_0
pyviz_comms conda-forge/noarch::pyviz_comms-2.0.2-pyhd8ed1ab_0
pyyaml conda-forge/linux-64::pyyaml-5.4.1-py37h5e8e339_0
pyzmq conda-forge/linux-64::pyzmq-22.1.0-py37h336d617_0
rapids rapidsai/linux-64::rapids-21.06.00-cuda11.0_py37_ge3c8282_427
rapids-blazing rapidsai/linux-64::rapids-blazing-21.06.00-cuda11.0_py37_ge3c8282_427
rapids-xgboost rapidsai/linux-64::rapids-xgboost-21.06.00-cuda11.0_py37_ge3c8282_427
re2 conda-forge/linux-64::re2-2021.04.01-h9c3ff4c_0
requests-oauthlib conda-forge/noarch::requests-oauthlib-1.3.0-pyh9f0ad1d_0
rmm rapidsai/linux-64::rmm-21.06.00-cuda_11.0_py37_gee432a0_0
rsa conda-forge/noarch::rsa-4.7.2-pyh44b312d_0
rtree conda-forge/linux-64::rtree-0.9.7-py37h0b55af0_1
s2n conda-forge/linux-64::s2n-1.0.10-h9b69904_0
sasl conda-forge/linux-64::sasl-0.3a1-py37hcd2ae1e_0
scikit-learn conda-forge/linux-64::scikit-learn-0.24.2-py37h18a542f_0
scipy conda-forge/linux-64::scipy-1.6.3-py37h29e03ee_0
send2trash conda-forge/noarch::send2trash-1.7.1-pyhd8ed1ab_0
shapely conda-forge/linux-64::shapely-1.7.1-py37h2d1e849_5
simpervisor conda-forge/noarch::simpervisor-0.4-pyhd8ed1ab_0
snappy conda-forge/linux-64::snappy-1.1.8-he1b5a44_3
sniffio conda-forge/linux-64::sniffio-1.2.0-py37h89c1867_1
sortedcontainers conda-forge/noarch::sortedcontainers-2.4.0-pyhd8ed1ab_0
spdlog conda-forge/linux-64::spdlog-1.8.5-h4bd325d_0
sqlalchemy conda-forge/linux-64::sqlalchemy-1.4.19-py37h5e8e339_0
streamz conda-forge/noarch::streamz-0.6.2-pyh44b312d_0
tblib conda-forge/noarch::tblib-1.7.0-pyhd8ed1ab_0
terminado conda-forge/linux-64::terminado-0.10.1-py37h89c1867_0
testpath conda-forge/noarch::testpath-0.5.0-pyhd8ed1ab_0
threadpoolctl conda-forge/noarch::threadpoolctl-2.1.0-pyh5ca1d4c_0
thrift conda-forge/linux-64::thrift-0.13.0-py37hcd2ae1e_2
thrift_sasl conda-forge/linux-64::thrift_sasl-0.4.2-py37h8f50634_0
tiledb conda-forge/linux-64::tiledb-2.2.9-h91fcb0e_0
toolz conda-forge/noarch::toolz-0.11.1-py_0
tornado conda-forge/linux-64::tornado-6.1-py37h5e8e339_1
traitlets conda-forge/noarch::traitlets-5.0.5-py_0
treelite conda-forge/linux-64::treelite-1.3.0-py37hfdac9b6_0
typing-extensions conda-forge/noarch::typing-extensions-3.10.0.0-hd8ed1ab_0
typing_extensions conda-forge/noarch::typing_extensions-3.10.0.0-pyha770c72_0
tzcode conda-forge/linux-64::tzcode-2021a-h7f98852_1
tzdata conda-forge/noarch::tzdata-2021a-he74cb21_0
ucx rapidsai/linux-64::ucx-1.9.0+gcd9efd3-cuda11.0_0
ucx-proc rapidsai/linux-64::ucx-proc-1.0.0-gpu
ucx-py rapidsai/linux-64::ucx-py-0.20.0-py37_gcd9efd3_0
wcwidth conda-forge/noarch::wcwidth-0.2.5-pyh9f0ad1d_2
webencodings conda-forge/noarch::webencodings-0.5.1-py_1
websocket-client conda-forge/linux-64::websocket-client-0.57.0-py37h89c1867_4
websockets conda-forge/linux-64::websockets-8.1-py37h5e8e339_3
widgetsnbextension conda-forge/linux-64::widgetsnbextension-3.5.1-py37h89c1867_4
xarray conda-forge/noarch::xarray-0.18.2-pyhd8ed1ab_0
xerces-c conda-forge/linux-64::xerces-c-3.2.3-h9d8b166_2
xgboost rapidsai/linux-64::xgboost-1.4.2dev.rapidsai21.06-cuda11.0py37_0
xorg-kbproto conda-forge/linux-64::xorg-kbproto-1.0.7-h7f98852_1002
xorg-libice conda-forge/linux-64::xorg-libice-1.0.10-h7f98852_0
xorg-libsm conda-forge/linux-64::xorg-libsm-1.2.3-hd9c2040_1000
xorg-libx11 conda-forge/linux-64::xorg-libx11-1.7.2-h7f98852_0
xorg-libxau conda-forge/linux-64::xorg-libxau-1.0.9-h7f98852_0
xorg-libxdmcp conda-forge/linux-64::xorg-libxdmcp-1.1.3-h7f98852_0
xorg-libxext conda-forge/linux-64::xorg-libxext-1.3.4-h7f98852_1
xorg-libxrender conda-forge/linux-64::xorg-libxrender-0.9.10-h7f98852_1003
xorg-renderproto conda-forge/linux-64::xorg-renderproto-0.11.1-h7f98852_1002
xorg-xextproto conda-forge/linux-64::xorg-xextproto-7.3.0-h7f98852_1002
xorg-xproto conda-forge/linux-64::xorg-xproto-7.0.31-h7f98852_1007
yarl conda-forge/linux-64::yarl-1.6.3-py37h5e8e339_1
zeromq conda-forge/linux-64::zeromq-4.3.4-h9c3ff4c_0
zict conda-forge/noarch::zict-2.0.0-py_0
zipp conda-forge/noarch::zipp-3.4.1-pyhd8ed1ab_0
The following packages will be UPDATED:
ca-certificates 2020.12.5-ha878542_0 --> 2021.5.30-ha878542_0
certifi 2020.12.5-py37h89c1867_1 --> 2021.5.30-py37h89c1867_0
conda 4.9.2-py37h89c1867_0 --> 4.10.1-py37h89c1867_0
krb5 1.17.2-h926e7f8_0 --> 1.19.1-hcc1bbae_0
libcurl 7.75.0-hc4aaa36_0 --> 7.77.0-h2574ce0_0
libxml2 2.9.10-h72842e0_3 --> 2.9.12-h72842e0_0
openssl 1.1.1j-h7f98852_0 --> 1.1.1k-h7f98852_0
python_abi 3.7-1_cp37m --> 3.7-2_cp37m
readline 8.0-he28a2e2_2 --> 8.1-h46c0cb4_0
Downloading and Extracting Packages
(per-package download and extraction progress bars omitted)
defusedxml-0.7.1 | 23 KB | ########## | 100%
prompt-toolkit-3.0.1 | 244 KB | | 0%
prompt-toolkit-3.0.1 | 244 KB | ########## | 100%
prompt-toolkit-3.0.1 | 244 KB | ########## | 100%
zeromq-4.3.4 | 352 KB | | 0%
zeromq-4.3.4 | 352 KB | ########## | 100%
click-plugins-1.1.1 | 9 KB | | 0%
click-plugins-1.1.1 | 9 KB | ########## | 100%
click-7.1.2 | 64 KB | | 0%
click-7.1.2 | 64 KB | ########## | 100%
thrift-0.13.0 | 120 KB | | 0%
thrift-0.13.0 | 120 KB | ########## | 100%
rapids-21.06.00 | 5 KB | | 0%
rapids-21.06.00 | 5 KB | ########## | 100%
rapids-21.06.00 | 5 KB | ########## | 100%
pandocfilters-1.4.2 | 9 KB | | 0%
pandocfilters-1.4.2 | 9 KB | ########## | 100%
cudatoolkit-11.0.221 | 953.0 MB | | 0%
cudatoolkit-11.0.221 | 953.0 MB | ########## | 100%
cudatoolkit-11.0.221 | 953.0 MB | ########## | 100%
send2trash-1.7.1 | 17 KB | | 0%
send2trash-1.7.1 | 17 KB | ########## | 100%
simpervisor-0.4 | 9 KB | | 0%
simpervisor-0.4 | 9 KB | ########## | 100%
future-0.18.2 | 714 KB | | 0%
future-0.18.2 | 714 KB | ########## | 100%
future-0.18.2 | 714 KB | ########## | 100%
libcuml-21.06.02 | 95.2 MB | | 0%
libcuml-21.06.02 | 95.2 MB | | 0%
libcuml-21.06.02 | 95.2 MB | | 0%
libcuml-21.06.02 | 95.2 MB | | 0%
libcuml-21.06.02 | 95.2 MB | 1 | 2%
libcuml-21.06.02 | 95.2 MB | 5 | 6%
libcuml-21.06.02 | 95.2 MB | # | 10%
libcuml-21.06.02 | 95.2 MB | #4 | 14%
libcuml-21.06.02 | 95.2 MB | #8 | 19%
libcuml-21.06.02 | 95.2 MB | ##3 | 24%
libcuml-21.06.02 | 95.2 MB | ##8 | 29%
libcuml-21.06.02 | 95.2 MB | ###4 | 34%
libcuml-21.06.02 | 95.2 MB | ###9 | 40%
libcuml-21.06.02 | 95.2 MB | ####4 | 45%
libcuml-21.06.02 | 95.2 MB | ##### | 50%
libcuml-21.06.02 | 95.2 MB | #####5 | 56%
libcuml-21.06.02 | 95.2 MB | ######1 | 61%
libcuml-21.06.02 | 95.2 MB | ######6 | 67%
libcuml-21.06.02 | 95.2 MB | #######3 | 73%
libcuml-21.06.02 | 95.2 MB | #######8 | 79%
libcuml-21.06.02 | 95.2 MB | ########4 | 85%
libcuml-21.06.02 | 95.2 MB | ########9 | 90%
libcuml-21.06.02 | 95.2 MB | #########3 | 94%
libcuml-21.06.02 | 95.2 MB | #########7 | 98%
libcuml-21.06.02 | 95.2 MB | ########## | 100%
google-auth-1.30.2 | 77 KB | | 0%
google-auth-1.30.2 | 77 KB | ########## | 100%
async-timeout-3.0.1 | 11 KB | | 0%
async-timeout-3.0.1 | 11 KB | ########## | 100%
cligj-0.7.2 | 10 KB | | 0%
cligj-0.7.2 | 10 KB | ########## | 100%
librdkafka-1.5.3 | 11.2 MB | | 0%
librdkafka-1.5.3 | 11.2 MB | ###### | 61%
librdkafka-1.5.3 | 11.2 MB | ########## | 100%
librdkafka-1.5.3 | 11.2 MB | ########## | 100%
pickle5-0.0.11 | 173 KB | | 0%
pickle5-0.0.11 | 173 KB | ########## | 100%
libkml-1.3.0 | 640 KB | | 0%
libkml-1.3.0 | 640 KB | ########## | 100%
libkml-1.3.0 | 640 KB | ########## | 100%
libwebp-1.2.0 | 85 KB | | 0%
libwebp-1.2.0 | 85 KB | ########## | 100%
abseil-cpp-20210324. | 1015 KB | | 0%
abseil-cpp-20210324. | 1015 KB | ########## | 100%
abseil-cpp-20210324. | 1015 KB | ########## | 100%
glog-0.5.0 | 104 KB | | 0%
glog-0.5.0 | 104 KB | ########## | 100%
python-confluent-kaf | 122 KB | | 0%
python-confluent-kaf | 122 KB | ########## | 100%
ucx-py-0.20.0 | 294 KB | | 0%
ucx-py-0.20.0 | 294 KB | 5 | 5%
ucx-py-0.20.0 | 294 KB | ##7 | 27%
ucx-py-0.20.0 | 294 KB | ########## | 100%
ucx-py-0.20.0 | 294 KB | ########## | 100%
pillow-8.2.0 | 684 KB | | 0%
pillow-8.2.0 | 684 KB | ########## | 100%
pillow-8.2.0 | 684 KB | ########## | 100%
cupy-9.0.0 | 50.3 MB | | 0%
cupy-9.0.0 | 50.3 MB | #5 | 16%
cupy-9.0.0 | 50.3 MB | ###9 | 39%
cupy-9.0.0 | 50.3 MB | #####7 | 58%
cupy-9.0.0 | 50.3 MB | #######5 | 75%
cupy-9.0.0 | 50.3 MB | #########8 | 98%
cupy-9.0.0 | 50.3 MB | ########## | 100%
parso-0.8.2 | 68 KB | | 0%
parso-0.8.2 | 68 KB | ########## | 100%
geopandas-base-0.9.0 | 950 KB | | 0%
geopandas-base-0.9.0 | 950 KB | ########## | 100%
geopandas-base-0.9.0 | 950 KB | ########## | 100%
protobuf-3.16.0 | 342 KB | | 0%
protobuf-3.16.0 | 342 KB | ########## | 100%
protobuf-3.16.0 | 342 KB | ########## | 100%
boost-cpp-1.72.0 | 16.3 MB | | 0%
boost-cpp-1.72.0 | 16.3 MB | ####7 | 48%
boost-cpp-1.72.0 | 16.3 MB | ########## | 100%
boost-cpp-1.72.0 | 16.3 MB | ########## | 100%
psutil-5.8.0 | 342 KB | | 0%
psutil-5.8.0 | 342 KB | ########## | 100%
psutil-5.8.0 | 342 KB | ########## | 100%
ucx-proc-1.0.0 | 9 KB | | 0%
ucx-proc-1.0.0 | 9 KB | ########## | 100%
ucx-proc-1.0.0 | 9 KB | ########## | 100%
toolz-0.11.1 | 46 KB | | 0%
toolz-0.11.1 | 46 KB | ########## | 100%
xorg-libxdmcp-1.1.3 | 19 KB | | 0%
xorg-libxdmcp-1.1.3 | 19 KB | ########## | 100%
websockets-8.1 | 90 KB | | 0%
websockets-8.1 | 90 KB | ########## | 100%
grpc-cpp-1.38.0 | 3.6 MB | | 0%
grpc-cpp-1.38.0 | 3.6 MB | ########## | 100%
grpc-cpp-1.38.0 | 3.6 MB | ########## | 100%
jedi-0.18.0 | 923 KB | | 0%
jedi-0.18.0 | 923 KB | ########## | 100%
jedi-0.18.0 | 923 KB | ########## | 100%
cudf-21.06.01 | 108.4 MB | | 0%
cudf-21.06.01 | 108.4 MB | | 0%
cudf-21.06.01 | 108.4 MB | | 0%
cudf-21.06.01 | 108.4 MB | | 1%
cudf-21.06.01 | 108.4 MB | 2 | 3%
cudf-21.06.01 | 108.4 MB | 6 | 6%
cudf-21.06.01 | 108.4 MB | 9 | 10%
cudf-21.06.01 | 108.4 MB | #3 | 13%
cudf-21.06.01 | 108.4 MB | #6 | 17%
cudf-21.06.01 | 108.4 MB | ## | 20%
cudf-21.06.01 | 108.4 MB | ##3 | 24%
cudf-21.06.01 | 108.4 MB | ##6 | 27%
cudf-21.06.01 | 108.4 MB | ###1 | 31%
cudf-21.06.01 | 108.4 MB | ###6 | 36%
cudf-21.06.01 | 108.4 MB | #### | 41%
cudf-21.06.01 | 108.4 MB | ####5 | 46%
cudf-21.06.01 | 108.4 MB | ####9 | 50%
cudf-21.06.01 | 108.4 MB | #####3 | 53%
cudf-21.06.01 | 108.4 MB | #####7 | 57%
cudf-21.06.01 | 108.4 MB | ######1 | 61%
cudf-21.06.01 | 108.4 MB | ######5 | 65%
cudf-21.06.01 | 108.4 MB | ######8 | 69%
cudf-21.06.01 | 108.4 MB | #######2 | 73%
cudf-21.06.01 | 108.4 MB | #######6 | 77%
cudf-21.06.01 | 108.4 MB | ######## | 81%
cudf-21.06.01 | 108.4 MB | ########4 | 85%
cudf-21.06.01 | 108.4 MB | ########9 | 89%
cudf-21.06.01 | 108.4 MB | #########3 | 93%
cudf-21.06.01 | 108.4 MB | #########6 | 97%
cudf-21.06.01 | 108.4 MB | ########## | 100%
dask-cuda-21.06.00 | 110 KB | | 0%
dask-cuda-21.06.00 | 110 KB | #4 | 15%
dask-cuda-21.06.00 | 110 KB | ########7 | 87%
dask-cuda-21.06.00 | 110 KB | ########## | 100%
dask-2021.5.0 | 4 KB | | 0%
dask-2021.5.0 | 4 KB | ########## | 100%
rmm-21.06.00 | 7.0 MB | | 0%
rmm-21.06.00 | 7.0 MB | | 0%
rmm-21.06.00 | 7.0 MB | 4 | 5%
rmm-21.06.00 | 7.0 MB | ##4 | 24%
rmm-21.06.00 | 7.0 MB | #######9 | 79%
rmm-21.06.00 | 7.0 MB | ########## | 100%
rmm-21.06.00 | 7.0 MB | ########## | 100%
fastavro-1.4.1 | 496 KB | | 0%
fastavro-1.4.1 | 496 KB | ########## | 100%
fastavro-1.4.1 | 496 KB | ########## | 100%
panel-0.10.3 | 6.1 MB | | 0%
panel-0.10.3 | 6.1 MB | ########## | 100%
panel-0.10.3 | 6.1 MB | ########## | 100%
libopenblas-0.3.15 | 9.2 MB | | 0%
libopenblas-0.3.15 | 9.2 MB | #####7 | 58%
libopenblas-0.3.15 | 9.2 MB | ########## | 100%
libopenblas-0.3.15 | 9.2 MB | ########## | 100%
llvmlite-0.36.0 | 2.7 MB | | 0%
llvmlite-0.36.0 | 2.7 MB | ########## | 100%
llvmlite-0.36.0 | 2.7 MB | ########## | 100%
fastrlock-0.6 | 31 KB | | 0%
fastrlock-0.6 | 31 KB | ########## | 100%
libcuspatial-21.06.0 | 7.6 MB | | 0%
libcuspatial-21.06.0 | 7.6 MB | | 0%
libcuspatial-21.06.0 | 7.6 MB | #######6 | 76%
libcuspatial-21.06.0 | 7.6 MB | ########## | 100%
libcuspatial-21.06.0 | 7.6 MB | ########## | 100%
pixman-0.40.0 | 627 KB | | 0%
pixman-0.40.0 | 627 KB | ########## | 100%
pixman-0.40.0 | 627 KB | ########## | 100%
notebook-6.4.0 | 6.1 MB | | 0%
notebook-6.4.0 | 6.1 MB | ########## | 100%
notebook-6.4.0 | 6.1 MB | ########## | 100%
geotiff-1.6.0 | 296 KB | | 0%
geotiff-1.6.0 | 296 KB | ########## | 100%
xgboost-1.4.2dev.rap | 17 KB | | 0%
xgboost-1.4.2dev.rap | 17 KB | #########5 | 96%
xgboost-1.4.2dev.rap | 17 KB | ########## | 100%
nvtx-0.2.3 | 55 KB | | 0%
nvtx-0.2.3 | 55 KB | ########## | 100%
readline-8.1 | 295 KB | | 0%
readline-8.1 | 295 KB | ########## | 100%
readline-8.1 | 295 KB | ########## | 100%
google-auth-oauthlib | 19 KB | | 0%
google-auth-oauthlib | 19 KB | ########## | 100%
cairo-1.16.0 | 1.5 MB | | 0%
cairo-1.16.0 | 1.5 MB | ########## | 100%
cairo-1.16.0 | 1.5 MB | ########## | 100%
libcrc32c-1.1.1 | 20 KB | | 0%
libcrc32c-1.1.1 | 20 KB | ########## | 100%
json-c-0.15 | 274 KB | | 0%
json-c-0.15 | 274 KB | ########## | 100%
json-c-0.15 | 274 KB | ########## | 100%
entrypoints-0.3 | 8 KB | | 0%
entrypoints-0.3 | 8 KB | ########## | 100%
faiss-proc-1.0.0 | 24 KB | | 0%
faiss-proc-1.0.0 | 24 KB | ######7 | 68%
faiss-proc-1.0.0 | 24 KB | ########## | 100%
tblib-1.7.0 | 15 KB | | 0%
tblib-1.7.0 | 15 KB | ########## | 100%
libpq-13.3 | 2.7 MB | | 0%
libpq-13.3 | 2.7 MB | ########## | 100%
libpq-13.3 | 2.7 MB | ########## | 100%
sortedcontainers-2.4 | 26 KB | | 0%
sortedcontainers-2.4 | 26 KB | ########## | 100%
zipp-3.4.1 | 11 KB | | 0%
zipp-3.4.1 | 11 KB | ########## | 100%
libsodium-1.0.18 | 366 KB | | 0%
libsodium-1.0.18 | 366 KB | ########## | 100%
libglib-2.68.3 | 3.1 MB | | 0%
libglib-2.68.3 | 3.1 MB | ########## | 100%
libglib-2.68.3 | 3.1 MB | ########## | 100%
orc-1.6.7 | 751 KB | | 0%
orc-1.6.7 | 751 KB | ########## | 100%
orc-1.6.7 | 751 KB | ########## | 100%
backports.functools_ | 9 KB | | 0%
backports.functools_ | 9 KB | ########## | 100%
cuxfilter-21.06.00 | 136 KB | | 0%
cuxfilter-21.06.00 | 136 KB | #1 | 12%
cuxfilter-21.06.00 | 136 KB | ########## | 100%
pyarrow-1.0.1 | 2.4 MB | | 0%
pyarrow-1.0.1 | 2.4 MB | ########9 | 89%
pyarrow-1.0.1 | 2.4 MB | ########## | 100%
typing_extensions-3. | 28 KB | | 0%
typing_extensions-3. | 28 KB | ########## | 100%
colorcet-2.0.6 | 1.5 MB | | 0%
colorcet-2.0.6 | 1.5 MB | ########## | 100%
colorcet-2.0.6 | 1.5 MB | ########## | 100%
libuuid-2.32.1 | 28 KB | | 0%
libuuid-2.32.1 | 28 KB | ########## | 100%
re2-2021.04.01 | 218 KB | | 0%
re2-2021.04.01 | 218 KB | ########## | 100%
libllvm10-10.0.1 | 26.4 MB | | 0%
libllvm10-10.0.1 | 26.4 MB | ###3 | 33%
libllvm10-10.0.1 | 26.4 MB | #######4 | 75%
libllvm10-10.0.1 | 26.4 MB | ########## | 100%
libllvm10-10.0.1 | 26.4 MB | ########## | 100%
multipledispatch-0.6 | 12 KB | | 0%
multipledispatch-0.6 | 12 KB | ########## | 100%
xorg-libice-1.0.10 | 58 KB | | 0%
xorg-libice-1.0.10 | 58 KB | ########## | 100%
sniffio-1.2.0 | 15 KB | | 0%
sniffio-1.2.0 | 15 KB | ########## | 100%
libgpg-error-1.42 | 278 KB | | 0%
libgpg-error-1.42 | 278 KB | ########## | 100%
s2n-1.0.10 | 442 KB | | 0%
s2n-1.0.10 | 442 KB | ########## | 100%
s2n-1.0.10 | 442 KB | ########## | 100%
mistune-0.8.4 | 54 KB | | 0%
mistune-0.8.4 | 54 KB | ########## | 100%
nbclient-0.5.3 | 67 KB | | 0%
nbclient-0.5.3 | 67 KB | ########## | 100%
gdal-3.2.2 | 1.5 MB | | 0%
gdal-3.2.2 | 1.5 MB | ########## | 100%
gdal-3.2.2 | 1.5 MB | ########## | 100%
nlohmann_json-3.9.1 | 122 KB | | 0%
nlohmann_json-3.9.1 | 122 KB | ########## | 100%
nbformat-5.1.3 | 47 KB | | 0%
nbformat-5.1.3 | 47 KB | ########## | 100%
librmm-21.06.00 | 57 KB | | 0%
librmm-21.06.00 | 57 KB | ##8 | 28%
librmm-21.06.00 | 57 KB | ########## | 100%
numpy-1.21.0 | 6.1 MB | | 0%
numpy-1.21.0 | 6.1 MB | ########## | 100%
numpy-1.21.0 | 6.1 MB | ########## | 100%
cuspatial-21.06.00 | 15.2 MB | | 0%
cuspatial-21.06.00 | 15.2 MB | | 0%
cuspatial-21.06.00 | 15.2 MB | 1 | 1%
cuspatial-21.06.00 | 15.2 MB | 5 | 6%
cuspatial-21.06.00 | 15.2 MB | ##3 | 23%
cuspatial-21.06.00 | 15.2 MB | ##### | 50%
cuspatial-21.06.00 | 15.2 MB | #######7 | 78%
cuspatial-21.06.00 | 15.2 MB | ########## | 100%
cuspatial-21.06.00 | 15.2 MB | ########## | 100%
libntlm-1.4 | 32 KB | | 0%
libntlm-1.4 | 32 KB | ########## | 100%
networkx-2.5.1 | 1.2 MB | | 0%
networkx-2.5.1 | 1.2 MB | ########## | 100%
networkx-2.5.1 | 1.2 MB | ########## | 100%
tzdata-2021a | 121 KB | | 0%
tzdata-2021a | 121 KB | ########## | 100%
libprotobuf-3.16.0 | 2.5 MB | | 0%
libprotobuf-3.16.0 | 2.5 MB | ########## | 100%
libprotobuf-3.16.0 | 2.5 MB | ########## | 100%
brotli-1.0.9 | 389 KB | | 0%
brotli-1.0.9 | 389 KB | ########## | 100%
multidict-5.1.0 | 67 KB | | 0%
multidict-5.1.0 | 67 KB | ########## | 100%
treelite-1.3.0 | 2.7 MB | | 0%
treelite-1.3.0 | 2.7 MB | ########## | 100%
treelite-1.3.0 | 2.7 MB | ########## | 100%
google-cloud-cpp-1.2 | 9.3 MB | | 0%
google-cloud-cpp-1.2 | 9.3 MB | #######4 | 75%
google-cloud-cpp-1.2 | 9.3 MB | ########## | 100%
google-cloud-cpp-1.2 | 9.3 MB | ########## | 100%
rsa-4.7.2 | 28 KB | | 0%
rsa-4.7.2 | 28 KB | ########## | 100%
pandas-1.2.5 | 11.8 MB | | 0%
pandas-1.2.5 | 11.8 MB | #####7 | 57%
pandas-1.2.5 | 11.8 MB | ########## | 100%
pandas-1.2.5 | 11.8 MB | ########## | 100%
xorg-libxau-1.0.9 | 13 KB | | 0%
xorg-libxau-1.0.9 | 13 KB | ########## | 100%
python_abi-3.7 | 4 KB | | 0%
python_abi-3.7 | 4 KB | ########## | 100%
terminado-0.10.1 | 26 KB | | 0%
terminado-0.10.1 | 26 KB | ########## | 100%
argon2-cffi-20.1.0 | 47 KB | | 0%
argon2-cffi-20.1.0 | 47 KB | ########## | 100%
greenlet-1.1.0 | 83 KB | | 0%
greenlet-1.1.0 | 83 KB | ########## | 100%
streamz-0.6.2 | 59 KB | | 0%
streamz-0.6.2 | 59 KB | ########## | 100%
libthrift-0.14.1 | 4.5 MB | | 0%
libthrift-0.14.1 | 4.5 MB | ########## | 100%
libthrift-0.14.1 | 4.5 MB | ########## | 100%
tiledb-2.2.9 | 4.0 MB | | 0%
tiledb-2.2.9 | 4.0 MB | ########## | 100%
tiledb-2.2.9 | 4.0 MB | ########## | 100%
tzcode-2021a | 68 KB | | 0%
tzcode-2021a | 68 KB | ########## | 100%
lcms2-2.12 | 443 KB | | 0%
lcms2-2.12 | 443 KB | ########## | 100%
lcms2-2.12 | 443 KB | ########## | 100%
thrift_sasl-0.4.2 | 14 KB | | 0%
thrift_sasl-0.4.2 | 14 KB | ########## | 100%
pcre-8.45 | 253 KB | | 0%
pcre-8.45 | 253 KB | ########## | 100%
bokeh-2.2.3 | 7.0 MB | | 0%
bokeh-2.2.3 | 7.0 MB | ########## | 100%
bokeh-2.2.3 | 7.0 MB | ########## | 100%
pandoc-2.14.0.3 | 12.0 MB | | 0%
pandoc-2.14.0.3 | 12.0 MB | ###7 | 37%
pandoc-2.14.0.3 | 12.0 MB | ########## | 100%
pandoc-2.14.0.3 | 12.0 MB | ########## | 100%
nest-asyncio-1.5.1 | 9 KB | | 0%
nest-asyncio-1.5.1 | 9 KB | ########## | 100%
libxgboost-1.4.2dev. | 115.3 MB | | 0%
libxgboost-1.4.2dev. | 115.3 MB | | 0%
libxgboost-1.4.2dev. | 115.3 MB | | 0%
libxgboost-1.4.2dev. | 115.3 MB | | 0%
libxgboost-1.4.2dev. | 115.3 MB | 1 | 2%
libxgboost-1.4.2dev. | 115.3 MB | 4 | 5%
libxgboost-1.4.2dev. | 115.3 MB | 8 | 8%
libxgboost-1.4.2dev. | 115.3 MB | #1 | 12%
libxgboost-1.4.2dev. | 115.3 MB | #5 | 15%
libxgboost-1.4.2dev. | 115.3 MB | #8 | 19%
libxgboost-1.4.2dev. | 115.3 MB | ##2 | 22%
libxgboost-1.4.2dev. | 115.3 MB | ##5 | 26%
libxgboost-1.4.2dev. | 115.3 MB | ##9 | 30%
libxgboost-1.4.2dev. | 115.3 MB | ###3 | 34%
libxgboost-1.4.2dev. | 115.3 MB | ###7 | 38%
libxgboost-1.4.2dev. | 115.3 MB | ####2 | 42%
libxgboost-1.4.2dev. | 115.3 MB | ####6 | 46%
libxgboost-1.4.2dev. | 115.3 MB | ##### | 50%
libxgboost-1.4.2dev. | 115.3 MB | #####4 | 55%
libxgboost-1.4.2dev. | 115.3 MB | #####8 | 59%
libxgboost-1.4.2dev. | 115.3 MB | ######3 | 63%
libxgboost-1.4.2dev. | 115.3 MB | ######7 | 68%
libxgboost-1.4.2dev. | 115.3 MB | #######1 | 72%
libxgboost-1.4.2dev. | 115.3 MB | #######5 | 75%
libxgboost-1.4.2dev. | 115.3 MB | #######9 | 80%
libxgboost-1.4.2dev. | 115.3 MB | ########4 | 84%
libxgboost-1.4.2dev. | 115.3 MB | ########8 | 88%
libxgboost-1.4.2dev. | 115.3 MB | #########1 | 92%
libxgboost-1.4.2dev. | 115.3 MB | #########6 | 96%
libxgboost-1.4.2dev. | 115.3 MB | ########## | 100%
cachetools-4.2.2 | 12 KB | | 0%
cachetools-4.2.2 | 12 KB | ########## | 100%
pickleshare-0.7.5 | 9 KB | | 0%
pickleshare-0.7.5 | 9 KB | ########## | 100%
libspatialite-5.0.1 | 4.4 MB | | 0%
libspatialite-5.0.1 | 4.4 MB | ########## | 100%
libspatialite-5.0.1 | 4.4 MB | ########## | 100%
matplotlib-base-3.4. | 7.2 MB | | 0%
matplotlib-base-3.4. | 7.2 MB | ########2 | 83%
matplotlib-base-3.4. | 7.2 MB | ########## | 100%
oauthlib-3.1.1 | 87 KB | | 0%
oauthlib-3.1.1 | 87 KB | ########## | 100%
pyee-7.0.4 | 14 KB | | 0%
pyee-7.0.4 | 14 KB | ########## | 100%
libgfortran5-9.3.0 | 2.0 MB | | 0%
libgfortran5-9.3.0 | 2.0 MB | ########## | 100%
libgfortran5-9.3.0 | 2.0 MB | ########## | 100%
gcsfs-2021.6.0 | 23 KB | | 0%
gcsfs-2021.6.0 | 23 KB | ########## | 100%
nccl-2.9.9.1 | 82.3 MB | | 0%
nccl-2.9.9.1 | 82.3 MB | 7 | 8%
nccl-2.9.9.1 | 82.3 MB | #7 | 18%
nccl-2.9.9.1 | 82.3 MB | ##8 | 28%
nccl-2.9.9.1 | 82.3 MB | ###9 | 40%
nccl-2.9.9.1 | 82.3 MB | ##### | 51%
nccl-2.9.9.1 | 82.3 MB | ######2 | 62%
nccl-2.9.9.1 | 82.3 MB | #######3 | 74%
nccl-2.9.9.1 | 82.3 MB | ########5 | 85%
nccl-2.9.9.1 | 82.3 MB | #########5 | 96%
nccl-2.9.9.1 | 82.3 MB | ########## | 100%
pyjwt-2.1.0 | 17 KB | | 0%
pyjwt-2.1.0 | 17 KB | ########## | 100%
aws-c-cal-0.5.11 | 37 KB | | 0%
aws-c-cal-0.5.11 | 37 KB | ########## | 100%
datashader-0.11.1 | 14.0 MB | | 0%
datashader-0.11.1 | 14.0 MB | ####2 | 43%
datashader-0.11.1 | 14.0 MB | ########## | 100%
datashader-0.11.1 | 14.0 MB | ########## | 100%
widgetsnbextension-3 | 1.8 MB | | 0%
widgetsnbextension-3 | 1.8 MB | ########## | 100%
widgetsnbextension-3 | 1.8 MB | ########## | 100%
kiwisolver-1.3.1 | 78 KB | | 0%
kiwisolver-1.3.1 | 78 KB | ########## | 100%
jupyter_client-6.1.1 | 79 KB | | 0%
jupyter_client-6.1.1 | 79 KB | ########## | 100%
ucx-1.9.0+gcd9efd3 | 8.2 MB | | 0%
ucx-1.9.0+gcd9efd3 | 8.2 MB | | 0%
ucx-1.9.0+gcd9efd3 | 8.2 MB | 1 | 1%
ucx-1.9.0+gcd9efd3 | 8.2 MB | 4 | 5%
ucx-1.9.0+gcd9efd3 | 8.2 MB | #9 | 20%
ucx-1.9.0+gcd9efd3 | 8.2 MB | ######6 | 66%
ucx-1.9.0+gcd9efd3 | 8.2 MB | ########## | 100%
ucx-1.9.0+gcd9efd3 | 8.2 MB | ########## | 100%
geos-3.9.1 | 1.1 MB | | 0%
geos-3.9.1 | 1.1 MB | ########## | 100%
geos-3.9.1 | 1.1 MB | ########## | 100%
hdf5-1.10.6 | 3.1 MB | | 0%
hdf5-1.10.6 | 3.1 MB | ########## | 100%
hdf5-1.10.6 | 3.1 MB | ########## | 100%
ipython-7.24.1 | 1.1 MB | | 0%
ipython-7.24.1 | 1.1 MB | ########## | 100%
ipython-7.24.1 | 1.1 MB | ########## | 100%
Preparing transaction: ...working... done
Verifying transaction: ...working... done
Executing transaction: ...working... By downloading and using the CUDA Toolkit conda packages, you accept the terms and conditions of the CUDA End User License Agreement (EULA): https://docs.nvidia.com/cuda/eula/index.html
Enabling notebook extension jupyter-js-widgets/extension...
Paths used for configuration of notebook:
/usr/local/etc/jupyter/nbconfig/notebook.d/plotlywidget.json
/usr/local/etc/jupyter/nbconfig/notebook.d/pydeck.json
/usr/local/etc/jupyter/nbconfig/notebook.d/widgetsnbextension.json
/usr/local/etc/jupyter/nbconfig/notebook.json
Paths used for configuration of notebook:
/usr/local/etc/jupyter/nbconfig/notebook.d/plotlywidget.json
/usr/local/etc/jupyter/nbconfig/notebook.d/pydeck.json
/usr/local/etc/jupyter/nbconfig/notebook.d/widgetsnbextension.json
- Validating: OK
Paths used for configuration of notebook:
/usr/local/etc/jupyter/nbconfig/notebook.d/plotlywidget.json
/usr/local/etc/jupyter/nbconfig/notebook.d/pydeck.json
/usr/local/etc/jupyter/nbconfig/notebook.d/widgetsnbextension.json
/usr/local/etc/jupyter/nbconfig/notebook.json
done
RAPIDS conda installation complete. Updating Colab's libraries...
Copying /usr/local/lib/libcudf.so to /usr/lib/libcudf.so
Copying /usr/local/lib/libnccl.so to /usr/lib/libnccl.so
Copying /usr/local/lib/libcuml.so to /usr/lib/libcuml.so
Copying /usr/local/lib/libcugraph.so to /usr/lib/libcugraph.so
Copying /usr/local/lib/libxgboost.so to /usr/lib/libxgboost.so
Copying /usr/local/lib/libcuspatial.so to /usr/lib/libcuspatial.so
Copying /usr/local/lib/libgeos.so to /usr/lib/libgeos.so
###Markdown
Installing the Required Libraries
###Code
%matplotlib inline
%load_ext google.colab.data_table
import matplotlib.pyplot as plt
import numpy as np
import gc
import pandas as pd
import pickle
import dask
import dask_cudf
import cudf
from datetime import datetime
from dask import dataframe as dd
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from google.colab import files
from oauth2client.client import GoogleCredentials
pd.set_option('display.max_columns', None)
pd.options.display.precision = 2
pd.options.display.max_rows = 50
import seaborn as sns
import missingno as msno
import matplotlib as mpl
from matplotlib import rcParams
from numba import jit, njit
mpl.rc('figure', max_open_warning = 0)
from sklearn import preprocessing
###Output
_____no_output_____
###Markdown
Creating a Dask Client
###Code
from dask.distributed import Client,wait
client = Client()
#client = Client(n_workers=2, threads_per_worker=4)
client.cluster
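# Optional check (a sketch, not part of the original run): ask dask to compare
# package versions across client, scheduler, and workers, raising on mismatch
# instead of only warning as below.
# client.get_versions(check=True)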
###Output
/usr/local/lib/python3.7/site-packages/distributed/client.py:1148: VersionMismatchWarning: Mismatched versions found
+---------+--------+-----------+---------+
| Package | client | scheduler | workers |
+---------+--------+-----------+---------+
| numpy | 1.19.5 | 1.19.5 | 1.21.0 |
| tornado | 5.1.1 | 5.1.1 | 6.1 |
+---------+--------+-----------+---------+
warnings.warn(version_module.VersionMismatchWarning(msg[0]["warning"]))
###Markdown
Authenticating with Google, importing the files from Google Drive, and creating Dask dataframes with RAM cleanup (garbage collection).
###Code
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
ids = ['1Oyd1VdQo3fHJD5LGXNgi5kZBS812MiKN','183SF0fxXbTVXYfko-BOyuwAB2BmZQAmK']
estados = ['BR','BRPRO']
arquivo = ['brasil.pkl','brasilprocessed.pkl']
dflist=[]
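# For each file id: download the pickle from Drive, load it with pandas, convert
# it to a dask dataframe (245 partitions), and trigger garbage collection between
# steps to keep RAM usage down.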
for i in range(len(ids)):
    fileDownloaded = drive.CreateFile({'id': ids[i]})
    fileDownloaded.GetContentFile(arquivo[i])
    globals()[estados[i]] = dd.from_pandas(pd.read_pickle(arquivo[i]), npartitions=245)
    n = gc.collect()
    globals()[estados[i]] = (globals()[estados[i]]).reset_index(drop=True)
    n = gc.collect()
    dflist.append(eval(estados[i]))
    n = gc.collect()
dflist[0].head()
###Output
_____no_output_____
###Markdown
Performing Machine Learning with Parallel Computing. Installing Dask Machine Learning.
###Code
!pip install dask-ml
###Output
Collecting dask-ml
Downloading dask_ml-1.9.0-py3-none-any.whl (143 kB)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/site-packages (from dask-ml) (1.6.3)
Requirement already satisfied: pandas>=0.24.2 in /usr/local/lib/python3.7/site-packages (from dask-ml) (1.2.5)
Collecting dask-glm>=0.2.0
Downloading dask_glm-0.2.0-py2.py3-none-any.whl (12 kB)
Requirement already satisfied: distributed>=2.4.0 in /usr/local/lib/python3.7/site-packages (from dask-ml) (2021.5.0)
Requirement already satisfied: packaging in /usr/local/lib/python3.7/site-packages (from dask-ml) (20.9)
Requirement already satisfied: multipledispatch>=0.4.9 in /usr/local/lib/python3.7/site-packages (from dask-ml) (0.6.0)
Requirement already satisfied: numpy>=1.17.3 in /usr/local/lib/python3.7/site-packages (from dask-ml) (1.21.0)
Requirement already satisfied: scikit-learn>=0.23 in /usr/local/lib/python3.7/site-packages (from dask-ml) (0.24.2)
Requirement already satisfied: dask[array,dataframe]>=2.4.0 in /usr/local/lib/python3.7/site-packages (from dask-ml) (2021.5.0)
Requirement already satisfied: numba>=0.51.0 in /usr/local/lib/python3.7/site-packages (from dask-ml) (0.53.1)
Requirement already satisfied: cloudpickle>=0.2.2 in /usr/local/lib/python3.7/site-packages (from dask-glm>=0.2.0->dask-ml) (1.6.0)
Requirement already satisfied: toolz>=0.8.2 in /usr/local/lib/python3.7/site-packages (from dask[array,dataframe]>=2.4.0->dask-ml) (0.11.1)
Requirement already satisfied: fsspec>=0.6.0 in /usr/local/lib/python3.7/site-packages (from dask[array,dataframe]>=2.4.0->dask-ml) (2021.6.0)
Requirement already satisfied: partd>=0.3.10 in /usr/local/lib/python3.7/site-packages (from dask[array,dataframe]>=2.4.0->dask-ml) (1.2.0)
Requirement already satisfied: pyyaml in /usr/local/lib/python3.7/site-packages (from dask[array,dataframe]>=2.4.0->dask-ml) (5.4.1)
Requirement already satisfied: setuptools in /usr/local/lib/python3.7/site-packages (from distributed>=2.4.0->dask-ml) (49.6.0.post20210108)
Requirement already satisfied: click>=6.6 in /usr/local/lib/python3.7/site-packages (from distributed>=2.4.0->dask-ml) (7.1.2)
Requirement already satisfied: sortedcontainers!=2.0.0,!=2.0.1 in /usr/local/lib/python3.7/site-packages (from distributed>=2.4.0->dask-ml) (2.4.0)
Requirement already satisfied: tblib>=1.6.0 in /usr/local/lib/python3.7/site-packages (from distributed>=2.4.0->dask-ml) (1.7.0)
Requirement already satisfied: tornado>=5 in /usr/local/lib/python3.7/site-packages (from distributed>=2.4.0->dask-ml) (6.1)
Requirement already satisfied: zict>=0.1.3 in /usr/local/lib/python3.7/site-packages (from distributed>=2.4.0->dask-ml) (2.0.0)
Requirement already satisfied: psutil>=5.0 in /usr/local/lib/python3.7/site-packages (from distributed>=2.4.0->dask-ml) (5.8.0)
Requirement already satisfied: msgpack>=0.6.0 in /usr/local/lib/python3.7/site-packages (from distributed>=2.4.0->dask-ml) (1.0.2)
Requirement already satisfied: six in /usr/local/lib/python3.7/site-packages (from multipledispatch>=0.4.9->dask-ml) (1.15.0)
Requirement already satisfied: llvmlite<0.37,>=0.36.0rc1 in /usr/local/lib/python3.7/site-packages (from numba>=0.51.0->dask-ml) (0.36.0)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/site-packages (from pandas>=0.24.2->dask-ml) (2.8.1)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.7/site-packages (from pandas>=0.24.2->dask-ml) (2021.1)
Requirement already satisfied: locket in /usr/local/lib/python3.7/site-packages (from partd>=0.3.10->dask[array,dataframe]>=2.4.0->dask-ml) (0.2.0)
Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.7/site-packages (from scikit-learn>=0.23->dask-ml) (2.1.0)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/site-packages (from scikit-learn>=0.23->dask-ml) (1.0.1)
Requirement already satisfied: heapdict in /usr/local/lib/python3.7/site-packages (from zict>=0.1.3->distributed>=2.4.0->dask-ml) (1.0.1)
Requirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.7/site-packages (from packaging->dask-ml) (2.4.7)
Installing collected packages: dask-glm, dask-ml
Successfully installed dask-glm-0.2.0 dask-ml-1.9.0
###Markdown
Installing the Required Sklearn Libraries
###Code
import sklearn
from sklearn.metrics import mean_squared_error
from sklearn.metrics import classification_report
from sklearn import preprocessing, metrics
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.neighbors import NearestNeighbors
from sklearn.metrics import confusion_matrix
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
import joblib
from dask_ml.model_selection import train_test_split
import warnings
###Output
_____no_output_____
###Markdown
Preparing the Data for Machine Learning
###Code
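# Drop columns excluded from modeling, reorder the remaining ones so the label
# columns used in the train/test split below come last, and keep only rows
# whose v0 code falls in the studied range.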
t1 = (dflist[1]).drop(['v49','v82','v104','v105','v225','v226','v227','v228','v229','v230','v231','v232','v253','v254','v255','v256','v257'], axis=1)
n=gc.collect()
t1 = t1[['v0','v1','v2','v3','v4','v5','v6','v7','v8','v9','v10','v11','v12','v13','v14','v15','v19','v20','v21','v22','v23','v27','v28','v29','v30','v31','v32','v33','v43','v44','v45','v46','v47','v48','v50','v51','v52','v54','v55','v56','v57','v59','v60','v61','v62','v63','v64','v65','v66','v67','v68','v69','v70','v72','v73','v76','v77','v78','v79','v80','v83','v85','v87','v88','v89','v90','v91','v92','v93','v94','v96','v97','v98','v100','v101','v106','v107','v108','v109','v112','v113','v114','v115','v116','v117','v118','v121','v122','v123','v124','v125','v126','v127','v128','v137','v138','v139','v141','v143','v145','v147','v149','v151','v153','v155','v156','v157','v158','v159','v160','v161','v162','v163','v164','v165','v166','v167','v168','v169','v170','v171','v172','v173','v174','v175','v176','v177','v178','v179','v180','v181','v182','v183','v184','v185','v186','v187','v188','v189','v192','v193','v194','v195','v196','v197','v198','v199','v200','v201','v202','v203','v204','v205','v206','v207','v208','v209','v210','v211','v212','v213','v215','v216','v218','v219','v220','v221','v222','v223','v224','v234','v235','v237','v238','v239','v241','v242','v243','v244','v245','v246','v247','v248','v249','v250','v261','v251','v252','v260','v262','v259','v258']]
n=gc.collect()
t1 = t1.loc[t1['v0'].between(350000, 356000,inclusive=True)]
t1.head()
###Output
_____no_output_____
###Markdown
Splitting the Data into Training (70%) and Test (30%) Sets.
###Code
xtreino, xteste, ytreino, yteste = train_test_split((t1.iloc[:,0:191]),(t1.iloc[:,191:]), test_size = 0.3,random_state=66,shuffle=True)
n=gc.collect()
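# Optional sanity check (a sketch, not part of the original run): dask shapes
# are lazy, so force a compute() to confirm the 70/30 split row counts.
# print(xtreino.shape[0].compute(), xteste.shape[0].compute())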
###Output
_____no_output_____
###Markdown
Model 0: Logistic Regression.
###Code
model = LogisticRegression(C=30000, dual=False, max_iter=3000000)
from joblib import parallel_backend
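# parallel_backend('dask') routes scikit-learn's joblib-parallel work to the dask cluster.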
with parallel_backend('dask'):
model.fit(xtreino,ytreino)
n=gc.collect()
with parallel_backend('dask'):
print('\033[1m'+'Training accuracy:'+'\033[0m',model.score(xtreino,ytreino),'\033[1m'+'Test accuracy:'+'\033[0m',model.score(xteste,yteste))
print('\033[1m'+'Intercept:'+'\033[0m',model.intercept_)
print('\033[1m'+'Training MSE:'+'\033[0m',mean_squared_error(ytreino, model.predict(xtreino)),'\033[1m'+'Test MSE:'+'\033[0m',mean_squared_error(yteste, model.predict(xteste)))
print('\n')
print('\033[1m'+'Training Data Report - SP'+'\033[0m')
print(classification_report(ytreino, model.predict(xtreino)))
print('\n')
plt.figure(figsize=(10, 5))
cnf_matrix_treino = confusion_matrix(ytreino, model.predict(xtreino))
sns.heatmap(pd.DataFrame(cnf_matrix_treino), annot=True, cmap="YlGnBu" ,fmt='g')
plt.suptitle('Confusion Matrix SP - Training Data',y=1,fontsize=18)
plt.show()
print('\n')
print('\033[1m'+'Test Data Report - SP'+'\033[0m')
print(classification_report(yteste, model.predict(xteste)))
print('\n')
plt.figure(figsize=(10, 5))
cnf_matrix_teste = confusion_matrix(yteste, model.predict(xteste))
sns.heatmap(pd.DataFrame(cnf_matrix_teste), annot=True, cmap="YlGnBu" ,fmt='g')
plt.suptitle('Confusion Matrix SP - Test Data',y=1,fontsize=18)
plt.show()
print('\n')
metrics.plot_roc_curve(model,xteste,yteste)
plt.show()
print('\n')
with parallel_backend('dask'):
importancia01a = pd.DataFrame(model.coef_).T
t2= t1[['v0','v1','v2','v3','v4','v5','v6','v7','v8','v9','v10','v11','v12','v13','v14','v15','v19','v20','v21','v22','v23','v27','v28','v29','v30','v31','v32','v33','v43','v44','v45','v46','v47','v48','v50','v51','v52','v54','v55','v56','v57','v59','v60','v61','v62','v63','v64','v65','v66','v67','v68','v69','v70','v72','v73','v76','v77','v78','v79','v80','v83','v85','v87','v88','v89','v90','v91','v92','v93','v94','v96','v97','v98','v100','v101','v106','v107','v108','v109','v112','v113','v114','v115','v116','v117','v118','v121','v122','v123','v124','v125','v126','v127','v128','v137','v138','v139','v141','v143','v145','v147','v149','v151','v153','v155','v156','v157','v158','v159','v160','v161','v162','v163','v164','v165','v166','v167','v168','v169','v170','v171','v172','v173','v174','v175','v176','v177','v178','v179','v180','v181','v182','v183','v184','v185','v186','v187','v188','v189','v192','v193','v194','v195','v196','v197','v198','v199','v200','v201','v202','v203','v204','v205','v206','v207','v208','v209','v210','v211','v212','v213','v215','v216','v218','v219','v220','v221','v222','v223','v224','v234','v235','v237','v238','v239','v241','v242','v243','v244','v245','v246','v247','v248','v249','v250','v261','v251','v252','v260','v262','v259']]
importancia01a['02'] = pd.DataFrame(t2.columns)
importancia01 = importancia01a.sort_values(by=0,ascending=False)
importancia01.head(200)
n=gc.collect()
###Output
_____no_output_____
###Markdown
Model 1: Decision Tree.
###Code
model1 = DecisionTreeClassifier(max_depth=2, random_state=18)
with parallel_backend('dask'):
model1.fit(xtreino,ytreino)
n=gc.collect()
with parallel_backend('dask'):
print('\033[1m'+'Training accuracy:'+'\033[0m',model1.score(xtreino,ytreino),'\033[1m'+'Test accuracy:'+'\033[0m',model1.score(xteste,yteste))
print('\033[1m'+'Training MSE:'+'\033[0m',mean_squared_error(ytreino, model1.predict(xtreino)),'\033[1m'+'Test MSE:'+'\033[0m',mean_squared_error(yteste, model1.predict(xteste)))
print('\n')
print('\033[1m'+'Training Data Report - SP'+'\033[0m')
print(classification_report(ytreino, model1.predict(xtreino)))
print('\n')
plt.figure(figsize=(10, 5))
cnf_matrix_treino = confusion_matrix(ytreino, model1.predict(xtreino))
sns.heatmap(pd.DataFrame(cnf_matrix_treino), annot=True, cmap="YlGnBu" ,fmt='g')
plt.suptitle('Confusion Matrix SP - Training Data',y=1,fontsize=18)
plt.show()
print('\n')
print('\033[1m'+'Test Data Report - SP'+'\033[0m')
print(classification_report(yteste, model1.predict(xteste)))
print('\n')
plt.figure(figsize=(10, 5))
cnf_matrix_teste = confusion_matrix(yteste, model1.predict(xteste))
sns.heatmap(pd.DataFrame(cnf_matrix_teste), annot=True, cmap="YlGnBu" ,fmt='g')
plt.suptitle('Confusion Matrix SP - Test Data',y=1,fontsize=18)
plt.show()
print('\n')
metrics.plot_roc_curve(model1,xteste,yteste)
plt.show()
print('\n')
with parallel_backend('dask'):
importancia01a = pd.DataFrame(model1.feature_importances_ )
t2= t1[['v0','v1','v2','v3','v4','v5','v6','v7','v8','v9','v10','v11','v12','v13','v14','v15','v19','v20','v21','v22','v23','v27','v28','v29','v30','v31','v32','v33','v43','v44','v45','v46','v47','v48','v50','v51','v52','v54','v55','v56','v57','v59','v60','v61','v62','v63','v64','v65','v66','v67','v68','v69','v70','v72','v73','v76','v77','v78','v79','v80','v83','v85','v87','v88','v89','v90','v91','v92','v93','v94','v96','v97','v98','v100','v101','v106','v107','v108','v109','v112','v113','v114','v115','v116','v117','v118','v121','v122','v123','v124','v125','v126','v127','v128','v137','v138','v139','v141','v143','v145','v147','v149','v151','v153','v155','v156','v157','v158','v159','v160','v161','v162','v163','v164','v165','v166','v167','v168','v169','v170','v171','v172','v173','v174','v175','v176','v177','v178','v179','v180','v181','v182','v183','v184','v185','v186','v187','v188','v189','v192','v193','v194','v195','v196','v197','v198','v199','v200','v201','v202','v203','v204','v205','v206','v207','v208','v209','v210','v211','v212','v213','v215','v216','v218','v219','v220','v221','v222','v223','v224','v234','v235','v237','v238','v239','v241','v242','v243','v244','v245','v246','v247','v248','v249','v250','v261','v251','v252','v260','v262','v259']]
importancia01a['02'] = pd.DataFrame(t2.columns)
importancia01 = importancia01a.sort_values(by=0,ascending=False)
importancia01.head(200)
n=gc.collect()
###Output
_____no_output_____
###Markdown
Model 2: AdaBoost.
###Code
model2 = AdaBoostClassifier(n_estimators=50)
with parallel_backend('dask'):
model2.fit(xtreino,ytreino)
n=gc.collect()
with parallel_backend('dask'):
print('\033[1m'+'Training accuracy:'+'\033[0m',model2.score(xtreino,ytreino),'\033[1m'+'Test accuracy:'+'\033[0m',model2.score(xteste,yteste))
print('\033[1m'+'Training MSE:'+'\033[0m',mean_squared_error(ytreino, model2.predict(xtreino)),'\033[1m'+'Test MSE:'+'\033[0m',mean_squared_error(yteste, model2.predict(xteste)))
print('\n')
print('\033[1m'+'Training Data Report - SP'+'\033[0m')
print(classification_report(ytreino, model2.predict(xtreino)))
print('\n')
plt.figure(figsize=(10, 5))
cnf_matrix_treino = confusion_matrix(ytreino, model2.predict(xtreino))
sns.heatmap(pd.DataFrame(cnf_matrix_treino), annot=True, cmap="YlGnBu" ,fmt='g')
plt.suptitle('Confusion Matrix SP - Training Data',y=1,fontsize=18)
plt.show()
print('\n')
print('\033[1m'+'Test Data Report - SP'+'\033[0m')
print(classification_report(yteste, model2.predict(xteste)))
print('\n')
plt.figure(figsize=(10, 5))
cnf_matrix_teste = confusion_matrix(yteste, model2.predict(xteste))
sns.heatmap(pd.DataFrame(cnf_matrix_teste), annot=True, cmap="YlGnBu" ,fmt='g')
plt.suptitle('Confusion Matrix SP - Test Data',y=1,fontsize=18)
plt.show()
print('\n')
metrics.plot_roc_curve(model2,xteste,yteste)
plt.show()
print('\n')
with parallel_backend('dask'):
importancia01a = pd.DataFrame(model2.feature_importances_ )
t2= t1[['v0','v1','v2','v3','v4','v5','v6','v7','v8','v9','v10','v11','v12','v13','v14','v15','v19','v20','v21','v22','v23','v27','v28','v29','v30','v31','v32','v33','v43','v44','v45','v46','v47','v48','v50','v51','v52','v54','v55','v56','v57','v59','v60','v61','v62','v63','v64','v65','v66','v67','v68','v69','v70','v72','v73','v76','v77','v78','v79','v80','v83','v85','v87','v88','v89','v90','v91','v92','v93','v94','v96','v97','v98','v100','v101','v106','v107','v108','v109','v112','v113','v114','v115','v116','v117','v118','v121','v122','v123','v124','v125','v126','v127','v128','v137','v138','v139','v141','v143','v145','v147','v149','v151','v153','v155','v156','v157','v158','v159','v160','v161','v162','v163','v164','v165','v166','v167','v168','v169','v170','v171','v172','v173','v174','v175','v176','v177','v178','v179','v180','v181','v182','v183','v184','v185','v186','v187','v188','v189','v192','v193','v194','v195','v196','v197','v198','v199','v200','v201','v202','v203','v204','v205','v206','v207','v208','v209','v210','v211','v212','v213','v215','v216','v218','v219','v220','v221','v222','v223','v224','v234','v235','v237','v238','v239','v241','v242','v243','v244','v245','v246','v247','v248','v249','v250','v261','v251','v252','v260','v262','v259']]
importancia01a['02'] = pd.DataFrame(t2.columns)
importancia01 = importancia01a.sort_values(by=0,ascending=False)
importancia01.head(200)
n=gc.collect()
###Output
_____no_output_____
###Markdown
Model 3: Gradient Boosting.
###Code
model3 = GradientBoostingClassifier(n_estimators=300)
with parallel_backend('dask'):
model3.fit(xtreino,ytreino)
n=gc.collect()
with parallel_backend('dask'):
print('\033[1m'+'Training accuracy:'+'\033[0m',model3.score(xtreino,ytreino),'\033[1m'+'Test accuracy:'+'\033[0m',model3.score(xteste,yteste))
print('\033[1m'+'Training MSE:'+'\033[0m',mean_squared_error(ytreino, model3.predict(xtreino)),'\033[1m'+'Test MSE:'+'\033[0m',mean_squared_error(yteste, model3.predict(xteste)))
print('\n')
print('\033[1m'+'Training Data Report - SP'+'\033[0m')
print(classification_report(ytreino, model3.predict(xtreino)))
print('\n')
plt.figure(figsize=(10, 5))
cnf_matrix_treino = confusion_matrix(ytreino, model3.predict(xtreino))
sns.heatmap(pd.DataFrame(cnf_matrix_treino), annot=True, cmap="YlGnBu" ,fmt='g')
plt.suptitle('Confusion Matrix SP - Training Data',y=1,fontsize=18)
plt.show()
print('\n')
print('\033[1m'+'Test Data Report - SP'+'\033[0m')
print(classification_report(yteste, model3.predict(xteste)))
print('\n')
plt.figure(figsize=(10, 5))
cnf_matrix_teste = confusion_matrix(yteste, model3.predict(xteste))
sns.heatmap(pd.DataFrame(cnf_matrix_teste), annot=True, cmap="YlGnBu" ,fmt='g')
plt.suptitle('Confusion Matrix SP - Test Data',y=1,fontsize=18)
plt.show()
print('\n')
metrics.plot_roc_curve(model3,xteste,yteste)
plt.show()
print('\n')
with parallel_backend('dask'):
importancia01a = pd.DataFrame(model3.feature_importances_ )
t2= t1[['v0','v1','v2','v3','v4','v5','v6','v7','v8','v9','v10','v11','v12','v13','v14','v15','v19','v20','v21','v22','v23','v27','v28','v29','v30','v31','v32','v33','v43','v44','v45','v46','v47','v48','v50','v51','v52','v54','v55','v56','v57','v59','v60','v61','v62','v63','v64','v65','v66','v67','v68','v69','v70','v72','v73','v76','v77','v78','v79','v80','v83','v85','v87','v88','v89','v90','v91','v92','v93','v94','v96','v97','v98','v100','v101','v106','v107','v108','v109','v112','v113','v114','v115','v116','v117','v118','v121','v122','v123','v124','v125','v126','v127','v128','v137','v138','v139','v141','v143','v145','v147','v149','v151','v153','v155','v156','v157','v158','v159','v160','v161','v162','v163','v164','v165','v166','v167','v168','v169','v170','v171','v172','v173','v174','v175','v176','v177','v178','v179','v180','v181','v182','v183','v184','v185','v186','v187','v188','v189','v192','v193','v194','v195','v196','v197','v198','v199','v200','v201','v202','v203','v204','v205','v206','v207','v208','v209','v210','v211','v212','v213','v215','v216','v218','v219','v220','v221','v222','v223','v224','v234','v235','v237','v238','v239','v241','v242','v243','v244','v245','v246','v247','v248','v249','v250','v261','v251','v252','v260','v262','v259']]
importancia01a['02'] = pd.DataFrame(t2.columns)
importancia01 = importancia01a.sort_values(by=0,ascending=False)
importancia01.head(200)
n=gc.collect()
###Output
_____no_output_____
###Markdown
Model 4: Bagging.
###Code
model4 = BaggingClassifier(n_estimators=1)
with parallel_backend('dask'):
model4.fit(xtreino,ytreino)
n=gc.collect()
with parallel_backend('dask'):
print('\033[1m'+'Training accuracy:'+'\033[0m',model4.score(xtreino,ytreino),'\033[1m'+'Test accuracy:'+'\033[0m',model4.score(xteste,yteste))
print('\033[1m'+'Training MSE:'+'\033[0m',mean_squared_error(ytreino, model4.predict(xtreino)),'\033[1m'+'Test MSE:'+'\033[0m',mean_squared_error(yteste, model4.predict(xteste)))
print('\n')
print('\033[1m'+'Training Data Report - SP'+'\033[0m')
print(classification_report(ytreino, model4.predict(xtreino)))
print('\n')
plt.figure(figsize=(10, 5))
cnf_matrix_treino = confusion_matrix(ytreino, model4.predict(xtreino))
sns.heatmap(pd.DataFrame(cnf_matrix_treino), annot=True, cmap="YlGnBu" ,fmt='g')
plt.suptitle('Confusion Matrix SP - Training Data',y=1,fontsize=18)
plt.show()
print('\n')
print('\033[1m'+'Test Data Report - SP'+'\033[0m')
print(classification_report(yteste, model4.predict(xteste)))
print('\n')
plt.figure(figsize=(10, 5))
cnf_matrix_teste = confusion_matrix(yteste, model4.predict(xteste))
sns.heatmap(pd.DataFrame(cnf_matrix_teste), annot=True, cmap="YlGnBu" ,fmt='g')
plt.suptitle('Confusion Matrix SP - Test Data',y=1,fontsize=18)
plt.show()
print('\n')
metrics.plot_roc_curve(model4,xteste,yteste)
plt.show()
print('\n')
n=gc.collect()
###Output
_____no_output_____
###Markdown
Model 5: Random Forest.
###Code
model5 = RandomForestClassifier(n_estimators=2)
with parallel_backend('dask'):
model5.fit(xtreino,ytreino)
n=gc.collect()
with parallel_backend('dask'):
print('\033[1m'+'Training accuracy:'+'\033[0m',model5.score(xtreino,ytreino),'\033[1m'+'Test accuracy:'+'\033[0m',model5.score(xteste,yteste))
print('\033[1m'+'Training MSE:'+'\033[0m',mean_squared_error(ytreino, model5.predict(xtreino)),'\033[1m'+'Test MSE:'+'\033[0m',mean_squared_error(yteste, model5.predict(xteste)))
print('\n')
print('\033[1m'+'Training Data Report - SP'+'\033[0m')
print(classification_report(ytreino, model5.predict(xtreino)))
print('\n')
plt.figure(figsize=(10, 5))
cnf_matrix_treino = confusion_matrix(ytreino, model5.predict(xtreino))
sns.heatmap(pd.DataFrame(cnf_matrix_treino), annot=True, cmap="YlGnBu" ,fmt='g')
plt.suptitle('Confusion Matrix SP - Training Data',y=1,fontsize=18)
plt.show()
print('\n')
print('\033[1m'+'Test Data Report - SP'+'\033[0m')
print(classification_report(yteste, model5.predict(xteste)))
print('\n')
plt.figure(figsize=(10, 5))
cnf_matrix_teste = confusion_matrix(yteste, model5.predict(xteste))
sns.heatmap(pd.DataFrame(cnf_matrix_teste), annot=True, cmap="YlGnBu" ,fmt='g')
plt.suptitle('Confusion Matrix SP - Test Data',y=1,fontsize=18)
plt.show()
print('\n')
metrics.plot_roc_curve(model5,xteste,yteste)
plt.show()
print('\n')
with parallel_backend('dask'):
importancia01a = pd.DataFrame(model5.feature_importances_ )
t2= t1[['v0','v1','v2','v3','v4','v5','v6','v7','v8','v9','v10','v11','v12','v13','v14','v15','v19','v20','v21','v22','v23','v27','v28','v29','v30','v31','v32','v33','v43','v44','v45','v46','v47','v48','v50','v51','v52','v54','v55','v56','v57','v59','v60','v61','v62','v63','v64','v65','v66','v67','v68','v69','v70','v72','v73','v76','v77','v78','v79','v80','v83','v85','v87','v88','v89','v90','v91','v92','v93','v94','v96','v97','v98','v100','v101','v106','v107','v108','v109','v112','v113','v114','v115','v116','v117','v118','v121','v122','v123','v124','v125','v126','v127','v128','v137','v138','v139','v141','v143','v145','v147','v149','v151','v153','v155','v156','v157','v158','v159','v160','v161','v162','v163','v164','v165','v166','v167','v168','v169','v170','v171','v172','v173','v174','v175','v176','v177','v178','v179','v180','v181','v182','v183','v184','v185','v186','v187','v188','v189','v192','v193','v194','v195','v196','v197','v198','v199','v200','v201','v202','v203','v204','v205','v206','v207','v208','v209','v210','v211','v212','v213','v215','v216','v218','v219','v220','v221','v222','v223','v224','v234','v235','v237','v238','v239','v241','v242','v243','v244','v245','v246','v247','v248','v249','v250','v261','v251','v252','v260','v262','v259']]
importancia01a['02'] = pd.DataFrame(t2.columns)
importancia01 = importancia01a.sort_values(by=0,ascending=False)
importancia01.head(200)
n=gc.collect()
###Output
_____no_output_____
###Markdown
Consolidating the ROC Curves
###Code
with parallel_backend('dask'):
    classifiers = [model, model1, model2, model3, model4, model5]
    ax = plt.gca()
    for i in classifiers:
        metrics.plot_roc_curve(i, xteste, yteste, ax=ax)
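# Render the combined ROC comparison for all six models on a single axis.
plt.show()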
###Output
_____no_output_____ |
Notebooks/7.Bagging.ipynb | ###Markdown
Import Modules
###Code
import pandas as pd
import numpy as np
from common7 import file_exists, record_results
from common7 import X_adasyn_mean, y_adasyn, X_resampled_mean, y_resampled, X_smoted_mean, y_smoted, mean_train_scaled, median_train_scaled, y_train
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import BaggingClassifier
###Output
_____no_output_____
###Markdown
Optimize Parameters
###Code
obs = y_adasyn.shape
unique, counts = np.unique(y_adasyn, return_counts=True)
bal = dict(zip(unique, counts))
params = {
'n_estimators': [1, 2, 4, 8, 16, 32, 64, 100, 200],
'max_samples': [1, 5, 10, 20, 40, 80, 150, 300],
'max_features': [1, 2, 4, 6, 8, 10, 15, 20],
'bootstrap': [True, False],
'random_state': [200]
}
grid = GridSearchCV(BaggingClassifier(), param_grid=params, scoring='roc_auc', cv = 10)
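# Note: this grid is exhaustive -- 9*8*8*2 = 1,152 candidates x 10 CV folds = 11,520 fits.
# A cheaper alternative (a sketch, not what was run here) would sample the space:
# from sklearn.model_selection import RandomizedSearchCV
# grid = RandomizedSearchCV(BaggingClassifier(), param_distributions=params,
#                           n_iter=100, scoring='roc_auc', cv=10, random_state=200)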
grid.fit(X_adasyn_mean, y_adasyn)
grid.best_estimator_
auc_score = roc_auc_score(y_adasyn, grid.predict_proba(X_adasyn_mean)[:, 1])
grid.best_params_
grid.best_score_
results = {
'Model': 'Bagging',
'Hyperparameters': grid.best_params_,
'Target': 'coup',
'Features': 23,
'Observations': obs,
'Train Balance': bal,
'Train_AUC': auc_score,
'CV_AUC': grid.best_score_,
'Notes': 'Round 4: Missing data imputed with global mean. Classes balanced via ADASYN.'
}
record_results(results)
###Output
_____no_output_____ |
Code/jupyter-labs-eda-dataviz.ipynb | ###Markdown
**SpaceX Falcon 9 First Stage Landing Prediction** Assignment: Exploring and Preparing Data Estimated time needed: **70** minutes In this assignment, we will predict if the Falcon 9 first stage will land successfully. SpaceX advertises Falcon 9 rocket launches on its website at a cost of 62 million dollars; other providers cost upward of 165 million dollars each, and much of the savings is due to the fact that SpaceX can reuse the first stage. In this lab, you will perform Exploratory Data Analysis and Feature Engineering. Falcon 9 first stage will land successfully  Several examples of an unsuccessful landing are shown here:  Most unsuccessful landings are planned. SpaceX performs a controlled landing in the oceans. Objectives Perform exploratory Data Analysis and Feature Engineering using `Pandas` and `Matplotlib`: * Exploratory Data Analysis * Preparing Data Feature Engineering *** Import Libraries and Define Auxiliary Functions We will import the following libraries for the lab
###Code
# pandas is a software library written for the Python programming language for data manipulation and analysis.
import pandas as pd
#NumPy is a library for the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays
import numpy as np
# Matplotlib is a plotting library for python and pyplot gives us a MatLab like plotting framework. We will use this in our plotter function to plot data.
import matplotlib.pyplot as plt
#Seaborn is a Python data visualization library based on matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics
import seaborn as sns
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis First, let's read the SpaceX dataset into a Pandas dataframe and print its summary
###Code
df=pd.read_csv("https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/datasets/dataset_part_2.csv")
# If you were unable to complete the previous lab correctly you can uncomment and load this csv
# df = pd.read_csv('https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DS0701EN-SkillsNetwork/api/dataset_part_2.csv')
df.head(5)
###Output
_____no_output_____
###Markdown
First, let's try to see how the `FlightNumber` (indicating the continuous launch attempts) and `Payload` variables would affect the launch outcome. We can plot the FlightNumber vs. PayloadMass and overlay the outcome of the launch. We see that as the flight number increases, the first stage is more likely to land successfully. The payload mass is also important; it seems the more massive the payload, the less likely the first stage will return.
###Code
sns.catplot(y="PayloadMass", x="FlightNumber", hue="Class", data=df, aspect = 5)
plt.xlabel("Flight Number",fontsize=20)
plt.ylabel("Pay load Mass (kg)",fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
We see that different launch sites have different success rates. CCAFS LC-40 has a success rate of 60%, while KSC LC-39A and VAFB SLC 4E have a success rate of 77%. Next, let's drill down to each site and visualize its detailed launch records. TASK 1: Visualize the relationship between Flight Number and Launch Site Use the function catplot to plot FlightNumber vs LaunchSite: set the x parameter to FlightNumber, set the y parameter to LaunchSite, and set the hue parameter to 'class'
###Code
# Plot a scatter point chart with x axis to be Flight Number and y axis to be the launch site, and hue to be the class value
sns.catplot(x='FlightNumber',y='LaunchSite',hue = 'Class',data = df,aspect = 5)
plt.xlabel("Flight Number",fontsize=20)
plt.ylabel("Launch Site",fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
Now try to explain the patterns you found in the Flight Number vs. Launch Site scatter point plots. CCAFS SLC-40 had more first stages returning, while VAFB SLC 4E seems not to have had the first stage return (especially for flight numbers > 20). For KSC LC 39A, flight numbers > 40 had fewer first stages returning. TASK 2: Visualize the relationship between Payload and Launch Site We also want to observe if there is any relationship between launch sites and their payload mass.
###Code
# Plot a scatter point chart with x axis to be Pay Load Mass (kg) and y axis to be the launch site, and hue to be the class value
sns.catplot(x='PayloadMass',y = 'LaunchSite',hue = 'Class',data = df,aspect = 5)
plt.xlabel('Pay Load Mass',fontsize = 20)
plt.ylabel('Launch Site',fontsize = 20)
plt.show()
###Output
_____no_output_____
###Markdown
Now, if you observe the Payload vs. Launch Site scatter point chart, you will find that for the VAFB-SLC launch site there are no rockets launched for heavy payload mass (greater than 10000 kg). TASK 3: Visualize the relationship between success rate and orbit type Next, we want to visually check if there is any relationship between success rate and orbit type. Let's create a `bar chart` for the success rate of each orbit
###Code
df
###Output
_____no_output_____
###Markdown
Analyze the plotted bar chart and try to find which orbits have a high success rate.
###Code
df.groupby("Orbit").mean()['Class'].plot(kind='bar')
###Output
_____no_output_____
###Markdown
TASK 4: Visualize the relationship between FlightNumber and Orbit type For each orbit, we want to see if there is any relationship between FlightNumber and Orbit type.
###Code
# Plot a scatter point chart with x axis to be FlightNumber and y axis to be the Orbit, and hue to be the class value
sns.catplot(x='FlightNumber', y='Orbit',hue = 'Class',data = df, aspect = 5,palette="deep")
plt.xlabel('Flight Number',fontsize = 20)
plt.ylabel('Orbit',fontsize = 20)
plt.show()
###Output
_____no_output_____
###Markdown
You should see that in the LEO orbit, success appears related to the number of flights; on the other hand, there seems to be no relationship between flight number and success in the GTO orbit. TASK 5: Visualize the relationship between Payload and Orbit type Similarly, we can plot the Payload vs. Orbit scatter point charts to reveal the relationship between Payload and Orbit type
###Code
# Plot a scatter point chart with x axis to be Payload and y axis to be the Orbit, and hue to be the class value
sns.catplot(x='PayloadMass', y='Orbit', hue='Class', data=df, aspect=5, palette="deep")
plt.xlabel('Payload Mass (kg)', fontsize=20)
plt.ylabel('Orbit', fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
With heavy payloads, successful landings are more frequent for the Polar, LEO and ISS orbits. However, for GTO we cannot distinguish this well, as both successful and unsuccessful landings are present there. TASK 6: Visualize the launch success yearly trend You can plot a line chart with x axis as Year and y axis as average success rate to get the average launch success trend. The following function will help you get the year from the date:
###Code
# A function to extract the year from each date string (format: YYYY-MM-DD)
def extract_year(dates):
    return [d.split("-")[0] for d in dates]
# Plot a line chart with x axis to be the extracted year and y axis to be the success rate
year = extract_year(df["Date"])
df1 = pd.DataFrame({'Date': df['Date'], 'Orbit': df['Orbit'], 'Outcome': df['Outcome'],
                    'Class': df['Class'], 'Year': year})
df1
df1.groupby('Year')['Class'].mean().plot(kind='line')
###Output
_____no_output_____
###Markdown
You can observe that the success rate kept increasing from 2013 until 2020. Features Engineering By now you should have obtained some preliminary insights about how each important variable affects the success rate; we will now select the features that will be used in success prediction in the future module.
###Code
features = df[['FlightNumber', 'PayloadMass', 'Orbit', 'LaunchSite', 'Flights', 'GridFins', 'Reused', 'Legs', 'LandingPad', 'Block', 'ReusedCount', 'Serial']]
features.head()
###Output
_____no_output_____
###Markdown
TASK 7: Create dummy variables for categorical columns Use the function get_dummies on the features dataframe to apply one-hot encoding to the columns Orbit, LaunchSite, LandingPad, and Serial. Assign the result to the variable features_one_hot and display it using the method head. Your resulting dataframe must include all features, including the encoded ones.
###Code
# HINT: Use the get_dummies() function; passing columns= keeps the numeric features alongside the encoded ones
features_one_hot = pd.get_dummies(features, columns=["Orbit", "LaunchSite", "LandingPad", "Serial"])
features_one_hot.head()
###Output
_____no_output_____
###Markdown
TASK 8: Cast all numeric columns to `float64` Now that our features_one_hot dataframe contains only numbers, cast the entire dataframe to the type float64
###Code
# HINT: use astype function
features_one_hot = features_one_hot.astype("float64")
# features_one_hot already contains the numeric features (thanks to columns= above),
# so no further concatenation is needed
df2 = features_one_hot
df2.head()
###Output
_____no_output_____ |
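###Markdown
As a quick sanity check (an optional step, not part of the original lab), we can confirm the cast succeeded: counting the dtypes of `df2` should report only `float64`.
###Code
# After the cast, every column of df2 should be float64
df2.dtypes.value_counts()
###Output
_____no_output_____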
examples/notebooks/49_split_control.ipynb | ###Markdown
[](https://githubtocolab.com/giswqs/leafmap/blob/master/examples/notebooks/49_split_control.ipynb)[](https://gishub.org/leafmap-binder)**Creating a split-panel map**This notebook demonstrates how to add a split-panel map with leafmap anf folium. It also supports streamlit. Note that the ipyleaflet SplitControl does not support streamlit. Uncomment the following line to install [leafmap](https://leafmap.org) if needed.
###Code
# !pip install leafmap
import folium
import leafmap.foliumap as leafmap
###Output
_____no_output_____
###Markdown
The split-panel map requires two layers: `left_layer` and `right_layer`. The layer instance can be a string representing a basemap, or an HTTP URL to a Cloud Optimized GeoTIFF (COG), or a folium TileLayer instance. **Using basemaps**
###Code
m = leafmap.Map(height=500)
m.split_map(left_layer='TERRAIN', right_layer='OpenTopoMap')
m
###Output
_____no_output_____
###Markdown
Show available basemaps.
###Code
# leafmap.basemaps.keys()
###Output
_____no_output_____
###Markdown
**Using COG**
###Code
m = leafmap.Map(height=600, center=[39.4948, -108.5492], zoom=12)
url = 'https://opendata.digitalglobe.com/events/california-fire-2020/pre-event/2018-02-16/pine-gulch-fire20/1030010076004E00.tif'
url2 = 'https://opendata.digitalglobe.com/events/california-fire-2020/post-event/2020-08-14/pine-gulch-fire20/10300100AAC8DD00.tif'
m.split_map(url, url2)
m
###Output
_____no_output_____
###Markdown
**Using folium TileLayer**
###Code
m = leafmap.Map(center=[40, -100], zoom=4)
url1 = 'https://www.mrlc.gov/geoserver/mrlc_display/NLCD_2001_Land_Cover_L48/wms?'
url2 = 'https://www.mrlc.gov/geoserver/mrlc_display/NLCD_2019_Land_Cover_L48/wms?'
left_layer = folium.WmsTileLayer(
url=url1,
layers='NLCD_2001_Land_Cover_L48',
name='NLCD 2001',
attr='MRLC',
fmt="image/png",
transparent=True,
)
right_layer = folium.WmsTileLayer(
url=url2,
layers='NLCD_2019_Land_Cover_L48',
name='NLCD 2019',
attr='MRLC',
fmt="image/png",
transparent=True,
)
m.split_map(left_layer, right_layer)
m
###Output
_____no_output_____ |
01_table_builders.ipynb | ###Markdown
Table Builders A table builder is a function or class whose main purpose is to create a specific table out of one or more input tables (a usage sketch follows the definition below).
###Code
# export
import pandas as pd
# export
def payroll_issues_builder(payern):
    """Create payroll issues table

    Args:
        payern (DataFrame): pay earnings table (or snapshot)
    """
    payroll_issues = payern.pivot_table(
        values=['is_finalized', 'is_late'],
        aggfunc={'is_finalized': ['count', 'sum', 'mean'],
                 'is_late': ['sum', 'mean'],
                 },
        index=['EMPLID', 'EMPL_RCD', 'SUB_PAY_END_DT'])
    return payroll_issues
###Output
_____no_output_____ |
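###Markdown
A minimal usage sketch (illustrative only; the toy frame and its values below are made up) showing how the builder is meant to be called on a small pay earnings table:
###Code
# Build a toy pay earnings frame with the columns the builder expects
toy_payern = pd.DataFrame({
    'EMPLID': ['E1', 'E1', 'E2'],
    'EMPL_RCD': [0, 0, 0],
    'SUB_PAY_END_DT': ['2021-01-15', '2021-01-31', '2021-01-15'],
    'is_finalized': [True, False, True],
    'is_late': [False, True, False],
})
payroll_issues_builder(toy_payern)
###Output
_____no_output_____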
Applied Analytics/Design of Experiments/Experimental Design Basics/Week 2/Pooled t-test.ipynb | ###Markdown
Define the problem
###Code
from scipy import stats
import numpy as np
modified = [16.85,16.4,17.21,16.35,16.52,17.04,16.96,17.15,16.59,16.57]
unmodified = [16.62,16.75,17.37,17.12,16.98,16.87,17.34,17.02,17.08,17.27]
###Output
_____no_output_____
###Markdown
Calculate basic statistics
###Code
n = len(modified)
# Calculate mean, std and variance
modified_mean, unmodified_mean = np.mean(modified), np.mean(unmodified)
modified_var, unmodified_var = np.var(modified, ddof=1), np.var(unmodified, ddof=1)
modified_std, unmodified_std = np.std(modified, ddof=1), np.std(unmodified, ddof=1)
modified_mean, unmodified_mean
modified_std, unmodified_std
###Output
_____no_output_____
###Markdown
Assuming we know the variance
###Code
# If we suppose a known common standard deviation of 0.3 (i.e. variance 0.09)
z = (modified_mean - unmodified_mean) / np.sqrt(0.3**2/n + 0.3**2/n)
z
# Find the p-vallue
p_value = (1 - stats.norm(0,1).cdf(np.abs(z))) * 2
p_value
###Output
_____no_output_____
###Markdown
Pooled t-test
###Code
pooled_var = ((n-1)*modified_var + (n-1)*unmodified_var) / (n+n-2)
pooled_var
pooled_std = np.sqrt(pooled_var)
pooled_std
t = (modified_mean - unmodified_mean) / (pooled_std*np.sqrt(1/n+1/n))
t
df = 2*n-2
t_dist = stats.t(df)
critical_region = t_dist.ppf(0.025)
critical_region
from matplotlib import pyplot as plt
x = np.linspace(-4,4,100)
plt.plot(x, t_dist.pdf(x), label='t distribution')
plt.plot(x, stats.norm(0,1).pdf(x), label='normal distribution')
# Critical region
critical_x = x[x < critical_region]
plt.fill_between(critical_x, t_dist.pdf(critical_x), color="lightblue", label='critical region')
critical_x = x[x > -critical_region]
plt.fill_between(critical_x, t_dist.pdf(critical_x), color="lightblue")
# t value
plt.vlines(t, 0, 0.04, 'b')
plt.legend()
# t falls inside the (lower-tail) critical region, hence we can reject the null hypothesis
t < critical_region
###Output
_____no_output_____
###Markdown
Calculate the p-value
###Code
2*t_dist.cdf(t)
###Output
_____no_output_____
###Markdown
Normal plot
###Code
stats.probplot(modified, dist='norm', plot=plt)
stats.probplot(unmodified, dist='norm', plot=plt)
plt.show()
###Output
_____no_output_____
###Markdown
Confidence interval The $100(1-\alpha)\%$ confidence interval for the difference in means is $\bar{y}_1 - \bar{y}_2 \pm t_{\alpha/2,\,n_1+n_2-2}\, S_p \sqrt{1/n_1 + 1/n_2}$
###Code
# Upper limit of the 95% CI (critical_region is the negative lower-tail t quantile)
modified_mean - unmodified_mean - (critical_region * pooled_std * np.sqrt(1/n + 1/n))
# Lower limit of the 95% CI
modified_mean - unmodified_mean + (critical_region * pooled_std * np.sqrt(1/n + 1/n))
###Output
_____no_output_____
###Markdown
Test if we get the same result combining the datasets
###Code
# Test if we get the same result combining the datasets
pooled_data = modified + unmodified
np.var(pooled_data, ddof=1)
###Output
_____no_output_____
###Markdown
The variance of the combined sample is not the same as the pooled variance: pooling averages the two within-sample variances, whereas combining the samples also picks up the difference between the two means. Using scipy ttest_ind
###Code
from scipy.stats import ttest_ind
ttest_ind(modified, unmodified, axis=0, equal_var=True)
###Output
_____no_output_____ |
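###Markdown
If we were not willing to assume equal variances, the same scipy function also implements Welch's t-test via `equal_var=False` (shown here as an optional extra, not part of the original analysis):
###Code
# Welch's t-test: does not pool the variances
ttest_ind(modified, unmodified, equal_var=False)
###Output
_____no_output_____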
code/experiments/misc/ert_plots.ipynb | ###Markdown
mJADE 10^3
###Code
# load experiment 0 data from file
path = "F:/FHV/Masterthesis/data_backup/MA_Data/experiment_0_10_3/"
cipde0a_ex0_10_3 = []
for i in range(20):
cipde0a_ex0_10_3.append(pp.loadExpObjectFast(path + "cipde0a_mj_rep_" + str(i) + ".json"))
cipde0b_ex0_10_3 = []
for i in range(20):
cipde0b_ex0_10_3.append(pp.loadExpObjectFast(path + "cipde0b_mj_rep_" + str(i) + ".json"))
cipde1_ex0_10_3 = []
for i in range(20):
cipde1_ex0_10_3.append(pp.loadExpObjectFast(path + "cipde1_mj_rep_" + str(i) + ".json"))
cipde2_ex0_10_3 = []
for i in range(20):
cipde2_ex0_10_3.append(pp.loadExpObjectFast(path + "cipde2_mj_rep_" + str(i) + ".json"))
cipde3_ex0_10_3 = []
for i in range(20):
cipde3_ex0_10_3.append(pp.loadExpObjectFast(path + "cipde3_mj_rep_" + str(i) + ".json"))
cipde4_ex0_10_3 = []
for i in range(20):
cipde4_ex0_10_3.append(pp.loadExpObjectFast(path + "cipde4_mj_rep_" + str(i) + ".json"))
cipde5_ex0_10_3 = []
for i in range(20):
cipde5_ex0_10_3.append(pp.loadExpObjectFast(path + "cipde5_mj_rep_" + str(i) + ".json"))
cipde6_ex0_10_3 = []
for i in range(20):
cipde6_ex0_10_3.append(pp.loadExpObjectFast(path + "cipde6_mj_rep_" + str(i) + ".json"))
cipde7_ex0_10_3 = []
for i in range(20):
cipde7_ex0_10_3.append(pp.loadExpObjectFast(path + "cipde7_mj_rep_" + str(i) + ".json"))
cipde8_ex0_10_3 = []
for i in range(20):
cipde8_ex0_10_3.append(pp.loadExpObjectFast(path + "cipde8_mj_rep_" + str(i) + ".json"))
cipde9_ex0_10_3 = []
for i in range(20):
cipde9_ex0_10_3.append(pp.loadExpObjectFast(path + "cipde9_mj_rep_" + str(i) + ".json"))
cipde0a_ex0_10_3_ert_data = []
for c in cipde0a_ex0_10_3:
cipde0a_ex0_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde0b_ex0_10_3_ert_data = []
for c in cipde0b_ex0_10_3:
cipde0b_ex0_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde1_ex0_10_3_ert_data = []
for c in cipde1_ex0_10_3:
cipde1_ex0_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde2_ex0_10_3_ert_data = []
for c in cipde2_ex0_10_3:
cipde2_ex0_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde3_ex0_10_3_ert_data = []
for c in cipde3_ex0_10_3:
cipde3_ex0_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde4_ex0_10_3_ert_data = []
for c in cipde4_ex0_10_3:
cipde4_ex0_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde5_ex0_10_3_ert_data = []
for c in cipde5_ex0_10_3:
cipde5_ex0_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde6_ex0_10_3_ert_data = []
for c in cipde6_ex0_10_3:
cipde6_ex0_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde7_ex0_10_3_ert_data = []
for c in cipde7_ex0_10_3:
cipde7_ex0_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde8_ex0_10_3_ert_data = []
for c in cipde8_ex0_10_3:
cipde8_ex0_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde9_ex0_10_3_ert_data = []
for c in cipde9_ex0_10_3:
cipde9_ex0_10_3_ert_data.append([[c["normL2"]],[10**3]])
###Output
_____no_output_____
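###Markdown
The remaining blocks repeat this loading pattern for every problem, solver and budget. A minimal loop-based sketch of the same pattern (assuming the directory layout above, the file naming used in these cells, and that `pp` is the post-processing module already imported elsewhere in the notebook) could collapse the repetition:
###Code
# Hypothetical helper: load all repetitions of one problem/solver pair and
# pair each L2 norm with the evaluation budget, as done cell by cell above
def load_ert_data(path, problem, solver, budget, n_reps=20):
    runs = [pp.loadExpObjectFast(path + problem + "_" + solver + "_rep_" + str(i) + ".json")
            for i in range(n_reps)]
    return [[[r["normL2"]], [budget]] for r in runs]

# Example call producing the same list as the cells above:
# cipde0a_ex0_10_3_ert_data = load_ert_data(
#     "F:/FHV/Masterthesis/data_backup/MA_Data/experiment_0_10_3/",
#     "cipde0a", "mj", 10**3)
###Output
_____no_output_____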
###Markdown
mJADE 10^4
###Code
# load experiment 0 data from file
path = "F:/FHV/Masterthesis/data_backup/MA_Data/time_experiment_0/"
cipde0a_ex0_10_4 = []
for i in range(20):
cipde0a_ex0_10_4.append(pp.loadExpObjectFast(path + "cipde0a_mj_rep_" + str(i) + ".json"))
cipde0b_ex0_10_4 = []
for i in range(20):
cipde0b_ex0_10_4.append(pp.loadExpObjectFast(path + "cipde0b_mj_rep_" + str(i) + ".json"))
cipde1_ex0_10_4 = []
for i in range(20):
cipde1_ex0_10_4.append(pp.loadExpObjectFast(path + "cipde1_mj_rep_" + str(i) + ".json"))
cipde2_ex0_10_4 = []
for i in range(20):
cipde2_ex0_10_4.append(pp.loadExpObjectFast(path + "cipde2_mj_rep_" + str(i) + ".json"))
cipde3_ex0_10_4 = []
for i in range(20):
cipde3_ex0_10_4.append(pp.loadExpObjectFast(path + "cipde3_mj_rep_" + str(i) + ".json"))
cipde4_ex0_10_4 = []
for i in range(20):
cipde4_ex0_10_4.append(pp.loadExpObjectFast(path + "cipde4_mj_rep_" + str(i) + ".json"))
cipde5_ex0_10_4 = []
for i in range(20):
cipde5_ex0_10_4.append(pp.loadExpObjectFast(path + "cipde5_mj_rep_" + str(i) + ".json"))
cipde6_ex0_10_4 = []
for i in range(20):
cipde6_ex0_10_4.append(pp.loadExpObjectFast(path + "cipde6_mj_rep_" + str(i) + ".json"))
cipde7_ex0_10_4 = []
for i in range(20):
cipde7_ex0_10_4.append(pp.loadExpObjectFast(path + "cipde7_mj_rep_" + str(i) + ".json"))
cipde8_ex0_10_4 = []
for i in range(20):
cipde8_ex0_10_4.append(pp.loadExpObjectFast(path + "cipde8_mj_rep_" + str(i) + ".json"))
cipde9_ex0_10_4 = []
for i in range(20):
cipde9_ex0_10_4.append(pp.loadExpObjectFast(path + "cipde9_mj_rep_" + str(i) + ".json"))
cipde0a_ex0_10_4_ert_data = []
for c in cipde0a_ex0_10_4:
cipde0a_ex0_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde0b_ex0_10_4_ert_data = []
for c in cipde0b_ex0_10_4:
cipde0b_ex0_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde1_ex0_10_4_ert_data = []
for c in cipde1_ex0_10_4:
cipde1_ex0_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde2_ex0_10_4_ert_data = []
for c in cipde2_ex0_10_4:
cipde2_ex0_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde3_ex0_10_4_ert_data = []
for c in cipde3_ex0_10_4:
cipde3_ex0_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde4_ex0_10_4_ert_data = []
for c in cipde4_ex0_10_4:
cipde4_ex0_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde5_ex0_10_4_ert_data = []
for c in cipde5_ex0_10_4:
cipde5_ex0_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde6_ex0_10_4_ert_data = []
for c in cipde6_ex0_10_4:
cipde6_ex0_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde7_ex0_10_4_ert_data = []
for c in cipde7_ex0_10_4:
cipde7_ex0_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde8_ex0_10_4_ert_data = []
for c in cipde8_ex0_10_4:
cipde8_ex0_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde9_ex0_10_4_ert_data = []
for c in cipde9_ex0_10_4:
cipde9_ex0_10_4_ert_data.append([[c["normL2"]],[10**4]])
###Output
_____no_output_____
###Markdown
mJADE 10^5
###Code
# load experiment 0 data from file
path = "F:/FHV/Masterthesis/data_backup/MA_Data/experiment_0_10_5/"
cipde0a_ex0_10_5 = []
for i in range(20):
cipde0a_ex0_10_5.append(pp.loadExpObjectFast(path + "cipde0a_mj_rep_" + str(i) + ".json"))
cipde0b_ex0_10_5 = []
for i in range(20):
cipde0b_ex0_10_5.append(pp.loadExpObjectFast(path + "cipde0b_mj_rep_" + str(i) + ".json"))
cipde1_ex0_10_5 = []
for i in range(20):
cipde1_ex0_10_5.append(pp.loadExpObjectFast(path + "cipde1_mj_rep_" + str(i) + ".json"))
cipde2_ex0_10_5 = []
for i in range(20):
cipde2_ex0_10_5.append(pp.loadExpObjectFast(path + "cipde2_mj_rep_" + str(i) + ".json"))
cipde3_ex0_10_5 = []
for i in range(20):
cipde3_ex0_10_5.append(pp.loadExpObjectFast(path + "cipde3_mj_rep_" + str(i) + ".json"))
cipde4_ex0_10_5 = []
for i in range(20):
cipde4_ex0_10_5.append(pp.loadExpObjectFast(path + "cipde4_mj_rep_" + str(i) + ".json"))
cipde5_ex0_10_5 = []
for i in range(20):
cipde5_ex0_10_5.append(pp.loadExpObjectFast(path + "cipde5_mj_rep_" + str(i) + ".json"))
cipde6_ex0_10_5 = []
for i in range(20):
cipde6_ex0_10_5.append(pp.loadExpObjectFast(path + "cipde6_mj_rep_" + str(i) + ".json"))
cipde7_ex0_10_5 = []
for i in range(20):
cipde7_ex0_10_5.append(pp.loadExpObjectFast(path + "cipde7_mj_rep_" + str(i) + ".json"))
cipde8_ex0_10_5 = []
for i in range(20):
cipde8_ex0_10_5.append(pp.loadExpObjectFast(path + "cipde8_mj_rep_" + str(i) + ".json"))
cipde9_ex0_10_5 = []
for i in range(20):
cipde9_ex0_10_5.append(pp.loadExpObjectFast(path + "cipde9_mj_rep_" + str(i) + ".json"))
cipde0a_ex0_10_5_ert_data = []
for c in cipde0a_ex0_10_5:
cipde0a_ex0_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde0b_ex0_10_5_ert_data = []
for c in cipde0b_ex0_10_5:
cipde0b_ex0_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde1_ex0_10_5_ert_data = []
for c in cipde1_ex0_10_5:
cipde1_ex0_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde2_ex0_10_5_ert_data = []
for c in cipde2_ex0_10_5:
cipde2_ex0_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde3_ex0_10_5_ert_data = []
for c in cipde3_ex0_10_5:
cipde3_ex0_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde4_ex0_10_5_ert_data = []
for c in cipde4_ex0_10_5:
cipde4_ex0_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde5_ex0_10_5_ert_data = []
for c in cipde5_ex0_10_5:
cipde5_ex0_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde6_ex0_10_5_ert_data = []
for c in cipde6_ex0_10_5:
cipde6_ex0_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde7_ex0_10_5_ert_data = []
for c in cipde7_ex0_10_5:
cipde7_ex0_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde8_ex0_10_5_ert_data = []
for c in cipde8_ex0_10_5:
cipde8_ex0_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde9_ex0_10_5_ert_data = []
for c in cipde9_ex0_10_5:
cipde9_ex0_10_5_ert_data.append([[c["normL2"]],[10**5]])
###Output
_____no_output_____
###Markdown
mJADE 10^6
###Code
# load experiment 0 data from file
path = "F:/FHV/Masterthesis/data_backup/MA_Data/experiment_0/"
cipde0a_ex0_10_6 = []
for i in range(20):
cipde0a_ex0_10_6.append(pp.loadExpObjectFast(path + "cipde0a_mj_rep_" + str(i) + ".json"))
cipde0b_ex0_10_6 = []
for i in range(20):
cipde0b_ex0_10_6.append(pp.loadExpObjectFast(path + "cipde0b_mj_rep_" + str(i) + ".json"))
cipde1_ex0_10_6 = []
for i in range(20):
cipde1_ex0_10_6.append(pp.loadExpObjectFast(path + "cipde1_mj_rep_" + str(i) + ".json"))
cipde2_ex0_10_6 = []
for i in range(20):
cipde2_ex0_10_6.append(pp.loadExpObjectFast(path + "cipde2_mj_rep_" + str(i) + ".json"))
cipde3_ex0_10_6 = []
for i in range(20):
cipde3_ex0_10_6.append(pp.loadExpObjectFast(path + "cipde3_mj_rep_" + str(i) + ".json"))
cipde4_ex0_10_6 = []
for i in range(20):
cipde4_ex0_10_6.append(pp.loadExpObjectFast(path + "cipde4_mj_rep_" + str(i) + ".json"))
cipde5_ex0_10_6 = []
for i in range(20):
cipde5_ex0_10_6.append(pp.loadExpObjectFast(path + "cipde5_mj_rep_" + str(i) + ".json"))
cipde6_ex0_10_6 = []
for i in range(20):
cipde6_ex0_10_6.append(pp.loadExpObjectFast(path + "cipde6_mj_rep_" + str(i) + ".json"))
cipde7_ex0_10_6 = []
for i in range(20):
cipde7_ex0_10_6.append(pp.loadExpObjectFast(path + "cipde7_mj_rep_" + str(i) + ".json"))
cipde8_ex0_10_6 = []
for i in range(20):
cipde8_ex0_10_6.append(pp.loadExpObjectFast(path + "cipde8_mj_rep_" + str(i) + ".json"))
cipde9_ex0_10_6 = []
for i in range(20):
cipde9_ex0_10_6.append(pp.loadExpObjectFast(path + "cipde9_mj_rep_" + str(i) + ".json"))
cipde0a_ex0_10_6_ert_data = []
for c in cipde0a_ex0_10_6:
cipde0a_ex0_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde0b_ex0_10_6_ert_data = []
for c in cipde0b_ex0_10_6:
cipde0b_ex0_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde1_ex0_10_6_ert_data = []
for c in cipde1_ex0_10_6:
cipde1_ex0_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde2_ex0_10_6_ert_data = []
for c in cipde2_ex0_10_6:
cipde2_ex0_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde3_ex0_10_6_ert_data = []
for c in cipde3_ex0_10_6:
cipde3_ex0_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde4_ex0_10_6_ert_data = []
for c in cipde4_ex0_10_6:
cipde4_ex0_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde5_ex0_10_6_ert_data = []
for c in cipde5_ex0_10_6:
cipde5_ex0_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde6_ex0_10_6_ert_data = []
for c in cipde6_ex0_10_6:
cipde6_ex0_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde7_ex0_10_6_ert_data = []
for c in cipde7_ex0_10_6:
cipde7_ex0_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde8_ex0_10_6_ert_data = []
for c in cipde8_ex0_10_6:
cipde8_ex0_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde9_ex0_10_6_ert_data = []
for c in cipde9_ex0_10_6:
cipde9_ex0_10_6_ert_data.append([[c["normL2"]],[10**6]])
###Output
_____no_output_____
###Markdown
mpJADE 10^3
###Code
# load experiment 1 data from file
path = "F:/FHV/Masterthesis/data_backup/MA_Data/experiment_1_10_3/"
cipde0a_ex1_10_3 = []
for i in range(20):
cipde0a_ex1_10_3.append(pp.loadExpObjectFast(path + "cipde0a_mpj_rep_" + str(i) + ".json"))
cipde0b_ex1_10_3 = []
for i in range(20):
cipde0b_ex1_10_3.append(pp.loadExpObjectFast(path + "cipde0b_mpj_rep_" + str(i) + ".json"))
cipde1_ex1_10_3 = []
for i in range(20):
cipde1_ex1_10_3.append(pp.loadExpObjectFast(path + "cipde1_mpj_rep_" + str(i) + ".json"))
cipde2_ex1_10_3 = []
for i in range(20):
cipde2_ex1_10_3.append(pp.loadExpObjectFast(path + "cipde2_mpj_rep_" + str(i) + ".json"))
cipde3_ex1_10_3 = []
for i in range(20):
cipde3_ex1_10_3.append(pp.loadExpObjectFast(path + "cipde3_mpj_rep_" + str(i) + ".json"))
cipde4_ex1_10_3 = []
for i in range(20):
cipde4_ex1_10_3.append(pp.loadExpObjectFast(path + "cipde4_mpj_rep_" + str(i) + ".json"))
cipde5_ex1_10_3 = []
for i in range(20):
cipde5_ex1_10_3.append(pp.loadExpObjectFast(path + "cipde5_mpj_rep_" + str(i) + ".json"))
cipde6_ex1_10_3 = []
for i in range(20):
cipde6_ex1_10_3.append(pp.loadExpObjectFast(path + "cipde6_mpj_rep_" + str(i) + ".json"))
cipde7_ex1_10_3 = []
for i in range(20):
cipde7_ex1_10_3.append(pp.loadExpObjectFast(path + "cipde7_mpj_rep_" + str(i) + ".json"))
cipde8_ex1_10_3 = []
for i in range(20):
cipde8_ex1_10_3.append(pp.loadExpObjectFast(path + "cipde8_mpj_rep_" + str(i) + ".json"))
cipde9_ex1_10_3 = []
for i in range(20):
cipde9_ex1_10_3.append(pp.loadExpObjectFast(path + "cipde9_mpj_rep_" + str(i) + ".json"))
cipde0a_ex1_10_3_ert_data = []
for c in cipde0a_ex1_10_3:
cipde0a_ex1_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde0b_ex1_10_3_ert_data = []
for c in cipde0b_ex1_10_3:
cipde0b_ex1_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde1_ex1_10_3_ert_data = []
for c in cipde1_ex1_10_3:
cipde1_ex1_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde2_ex1_10_3_ert_data = []
for c in cipde2_ex1_10_3:
cipde2_ex1_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde3_ex1_10_3_ert_data = []
for c in cipde3_ex1_10_3:
cipde3_ex1_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde4_ex1_10_3_ert_data = []
for c in cipde4_ex1_10_3:
cipde4_ex1_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde5_ex1_10_3_ert_data = []
for c in cipde5_ex1_10_3:
cipde5_ex1_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde6_ex1_10_3_ert_data = []
for c in cipde6_ex1_10_3:
cipde6_ex1_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde7_ex1_10_3_ert_data = []
for c in cipde7_ex1_10_3:
cipde7_ex1_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde8_ex1_10_3_ert_data = []
for c in cipde8_ex1_10_3:
cipde8_ex1_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde9_ex1_10_3_ert_data = []
for c in cipde9_ex1_10_3:
cipde9_ex1_10_3_ert_data.append([[c["normL2"]],[10**3]])
###Output
_____no_output_____
###Markdown
mpJADE 10^4
###Code
# load experiment 1 data from file
path = "F:/FHV/Masterthesis/data_backup/MA_Data/time_experiment_1/"
cipde0a_ex1_10_4 = []
for i in range(20):
cipde0a_ex1_10_4.append(pp.loadExpObjectFast(path + "cipde0a_mpj_rep_" + str(i) + ".json"))
cipde0b_ex1_10_4 = []
for i in range(20):
cipde0b_ex1_10_4.append(pp.loadExpObjectFast(path + "cipde0b_mpj_rep_" + str(i) + ".json"))
cipde1_ex1_10_4 = []
for i in range(20):
cipde1_ex1_10_4.append(pp.loadExpObjectFast(path + "cipde1_mpj_rep_" + str(i) + ".json"))
cipde2_ex1_10_4 = []
for i in range(20):
cipde2_ex1_10_4.append(pp.loadExpObjectFast(path + "cipde2_mpj_rep_" + str(i) + ".json"))
cipde3_ex1_10_4 = []
for i in range(20):
cipde3_ex1_10_4.append(pp.loadExpObjectFast(path + "cipde3_mpj_rep_" + str(i) + ".json"))
cipde4_ex1_10_4 = []
for i in range(20):
cipde4_ex1_10_4.append(pp.loadExpObjectFast(path + "cipde4_mpj_rep_" + str(i) + ".json"))
cipde5_ex1_10_4 = []
for i in range(20):
cipde5_ex1_10_4.append(pp.loadExpObjectFast(path + "cipde5_mpj_rep_" + str(i) + ".json"))
cipde6_ex1_10_4 = []
for i in range(20):
cipde6_ex1_10_4.append(pp.loadExpObjectFast(path + "cipde6_mpj_rep_" + str(i) + ".json"))
cipde7_ex1_10_4 = []
for i in range(20):
cipde7_ex1_10_4.append(pp.loadExpObjectFast(path + "cipde7_mpj_rep_" + str(i) + ".json"))
cipde8_ex1_10_4 = []
for i in range(20):
cipde8_ex1_10_4.append(pp.loadExpObjectFast(path + "cipde8_mpj_rep_" + str(i) + ".json"))
cipde9_ex1_10_4 = []
for i in range(20):
cipde9_ex1_10_4.append(pp.loadExpObjectFast(path + "cipde9_mpj_rep_" + str(i) + ".json"))
cipde0a_ex1_10_4_ert_data = []
for c in cipde0a_ex1_10_4:
cipde0a_ex1_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde0b_ex1_10_4_ert_data = []
for c in cipde0b_ex1_10_4:
cipde0b_ex1_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde1_ex1_10_4_ert_data = []
for c in cipde1_ex1_10_4:
cipde1_ex1_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde2_ex1_10_4_ert_data = []
for c in cipde2_ex1_10_4:
cipde2_ex1_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde3_ex1_10_4_ert_data = []
for c in cipde3_ex1_10_4:
cipde3_ex1_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde4_ex1_10_4_ert_data = []
for c in cipde4_ex1_10_4:
cipde4_ex1_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde5_ex1_10_4_ert_data = []
for c in cipde5_ex1_10_4:
cipde5_ex1_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde6_ex1_10_4_ert_data = []
for c in cipde6_ex1_10_4:
cipde6_ex1_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde7_ex1_10_4_ert_data = []
for c in cipde7_ex1_10_4:
cipde7_ex1_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde8_ex1_10_4_ert_data = []
for c in cipde8_ex1_10_4:
cipde8_ex1_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde9_ex1_10_4_ert_data = []
for c in cipde9_ex1_10_4:
cipde9_ex1_10_4_ert_data.append([[c["normL2"]],[10**4]])
###Output
_____no_output_____
###Markdown
mpJADE 10^5
###Code
# load experiment 1 data from file
path = "F:/FHV/Masterthesis/data_backup/MA_Data/experiment_1_10_5/"
cipde0a_ex1_10_5 = []
for i in range(20):
cipde0a_ex1_10_5.append(pp.loadExpObjectFast(path + "cipde0a_mpj_rep_" + str(i) + ".json"))
cipde0b_ex1_10_5 = []
for i in range(20):
cipde0b_ex1_10_5.append(pp.loadExpObjectFast(path + "cipde0b_mpj_rep_" + str(i) + ".json"))
cipde1_ex1_10_5 = []
for i in range(20):
cipde1_ex1_10_5.append(pp.loadExpObjectFast(path + "cipde1_mpj_rep_" + str(i) + ".json"))
cipde2_ex1_10_5 = []
for i in range(20):
cipde2_ex1_10_5.append(pp.loadExpObjectFast(path + "cipde2_mpj_rep_" + str(i) + ".json"))
cipde3_ex1_10_5 = []
for i in range(20):
cipde3_ex1_10_5.append(pp.loadExpObjectFast(path + "cipde3_mpj_rep_" + str(i) + ".json"))
cipde4_ex1_10_5 = []
for i in range(20):
cipde4_ex1_10_5.append(pp.loadExpObjectFast(path + "cipde4_mpj_rep_" + str(i) + ".json"))
cipde5_ex1_10_5 = []
for i in range(20):
cipde5_ex1_10_5.append(pp.loadExpObjectFast(path + "cipde5_mpj_rep_" + str(i) + ".json"))
cipde6_ex1_10_5 = []
for i in range(20):
cipde6_ex1_10_5.append(pp.loadExpObjectFast(path + "cipde6_mpj_rep_" + str(i) + ".json"))
cipde7_ex1_10_5 = []
for i in range(20):
cipde7_ex1_10_5.append(pp.loadExpObjectFast(path + "cipde7_mpj_rep_" + str(i) + ".json"))
cipde8_ex1_10_5 = []
for i in range(20):
cipde8_ex1_10_5.append(pp.loadExpObjectFast(path + "cipde8_mpj_rep_" + str(i) + ".json"))
cipde9_ex1_10_5 = []
for i in range(20):
cipde9_ex1_10_5.append(pp.loadExpObjectFast(path + "cipde9_mpj_rep_" + str(i) + ".json"))
cipde0a_ex1_10_5_ert_data = []
for c in cipde0a_ex1_10_5:
cipde0a_ex1_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde0b_ex1_10_5_ert_data = []
for c in cipde0b_ex1_10_5:
cipde0b_ex1_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde1_ex1_10_5_ert_data = []
for c in cipde1_ex1_10_5:
cipde1_ex1_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde2_ex1_10_5_ert_data = []
for c in cipde2_ex1_10_5:
cipde2_ex1_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde3_ex1_10_5_ert_data = []
for c in cipde3_ex1_10_5:
cipde3_ex1_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde4_ex1_10_5_ert_data = []
for c in cipde4_ex1_10_5:
cipde4_ex1_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde5_ex1_10_5_ert_data = []
for c in cipde5_ex1_10_5:
cipde5_ex1_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde6_ex1_10_5_ert_data = []
for c in cipde6_ex1_10_5:
cipde6_ex1_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde7_ex1_10_5_ert_data = []
for c in cipde7_ex1_10_5:
cipde7_ex1_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde8_ex1_10_5_ert_data = []
for c in cipde8_ex1_10_5:
cipde8_ex1_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde9_ex1_10_5_ert_data = []
for c in cipde9_ex1_10_5:
cipde9_ex1_10_5_ert_data.append([[c["normL2"]],[10**5]])
###Output
_____no_output_____
###Markdown
mpJADE 10^6
###Code
# load experiment 1 data from file
path = "F:/FHV/Masterthesis/data_backup/MA_Data/experiment_1/"
cipde0a_ex1_10_6 = []
for i in range(20):
cipde0a_ex1_10_6.append(pp.loadExpObjectFast(path + "cipde0a_mpj_rep_" + str(i) + ".json"))
cipde0b_ex1_10_6 = []
for i in range(20):
cipde0b_ex1_10_6.append(pp.loadExpObjectFast(path + "cipde0b_mpj_rep_" + str(i) + ".json"))
cipde1_ex1_10_6 = []
for i in range(20):
cipde1_ex1_10_6.append(pp.loadExpObjectFast(path + "cipde1_mpj_rep_" + str(i) + ".json"))
cipde2_ex1_10_6 = []
for i in range(20):
cipde2_ex1_10_6.append(pp.loadExpObjectFast(path + "cipde2_mpj_rep_" + str(i) + ".json"))
cipde3_ex1_10_6 = []
for i in range(20):
cipde3_ex1_10_6.append(pp.loadExpObjectFast(path + "cipde3_mpj_rep_" + str(i) + ".json"))
cipde4_ex1_10_6 = []
for i in range(20):
cipde4_ex1_10_6.append(pp.loadExpObjectFast(path + "cipde4_mpj_rep_" + str(i) + ".json"))
cipde5_ex1_10_6 = []
for i in range(20):
cipde5_ex1_10_6.append(pp.loadExpObjectFast(path + "cipde5_mpj_rep_" + str(i) + ".json"))
cipde6_ex1_10_6 = []
for i in range(20):
cipde6_ex1_10_6.append(pp.loadExpObjectFast(path + "cipde6_mpj_rep_" + str(i) + ".json"))
cipde7_ex1_10_6 = []
for i in range(20):
cipde7_ex1_10_6.append(pp.loadExpObjectFast(path + "cipde7_mpj_rep_" + str(i) + ".json"))
cipde8_ex1_10_6 = []
for i in range(20):
cipde8_ex1_10_6.append(pp.loadExpObjectFast(path + "cipde8_mpj_rep_" + str(i) + ".json"))
cipde9_ex1_10_6 = []
for i in range(20):
cipde9_ex1_10_6.append(pp.loadExpObjectFast(path + "cipde9_mpj_rep_" + str(i) + ".json"))
cipde0a_ex1_10_6_ert_data = []
for c in cipde0a_ex1_10_6:
cipde0a_ex1_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde0b_ex1_10_6_ert_data = []
for c in cipde0b_ex1_10_6:
cipde0b_ex1_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde1_ex1_10_6_ert_data = []
for c in cipde1_ex1_10_6:
cipde1_ex1_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde2_ex1_10_6_ert_data = []
for c in cipde2_ex1_10_6:
cipde2_ex1_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde3_ex1_10_6_ert_data = []
for c in cipde3_ex1_10_6:
cipde3_ex1_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde4_ex1_10_6_ert_data = []
for c in cipde4_ex1_10_6:
cipde4_ex1_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde5_ex1_10_6_ert_data = []
for c in cipde5_ex1_10_6:
cipde5_ex1_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde6_ex1_10_6_ert_data = []
for c in cipde6_ex1_10_6:
cipde6_ex1_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde7_ex1_10_6_ert_data = []
for c in cipde7_ex1_10_6:
cipde7_ex1_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde8_ex1_10_6_ert_data = []
for c in cipde8_ex1_10_6:
cipde8_ex1_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde9_ex1_10_6_ert_data = []
for c in cipde9_ex1_10_6:
cipde9_ex1_10_6_ert_data.append([[c["normL2"]],[10**6]])
###Output
_____no_output_____
###Markdown
mpJADEa 10^3
###Code
# load experiment 2 data from file
path = "F:/FHV/Masterthesis/data_backup/MA_Data/experiment_2_10_3/"
cipde0a_ex2_10_3 = []
for i in range(20):
cipde0a_ex2_10_3.append(pp.loadExpObjectFast(path + "cipde0a_mpja_rep_" + str(i) + ".json"))
cipde0b_ex2_10_3 = []
for i in range(20):
cipde0b_ex2_10_3.append(pp.loadExpObjectFast(path + "cipde0b_mpja_rep_" + str(i) + ".json"))
cipde1_ex2_10_3 = []
for i in range(20):
cipde1_ex2_10_3.append(pp.loadExpObjectFast(path + "cipde1_mpja_rep_" + str(i) + ".json"))
cipde2_ex2_10_3 = []
for i in range(20):
cipde2_ex2_10_3.append(pp.loadExpObjectFast(path + "cipde2_mpja_rep_" + str(i) + ".json"))
cipde3_ex2_10_3 = []
for i in range(20):
cipde3_ex2_10_3.append(pp.loadExpObjectFast(path + "cipde3_mpja_rep_" + str(i) + ".json"))
cipde4_ex2_10_3 = []
for i in range(20):
cipde4_ex2_10_3.append(pp.loadExpObjectFast(path + "cipde4_mpja_rep_" + str(i) + ".json"))
cipde5_ex2_10_3 = []
for i in range(20):
cipde5_ex2_10_3.append(pp.loadExpObjectFast(path + "cipde5_mpja_rep_" + str(i) + ".json"))
cipde6_ex2_10_3 = []
for i in range(20):
cipde6_ex2_10_3.append(pp.loadExpObjectFast(path + "cipde6_mpja_rep_" + str(i) + ".json"))
cipde7_ex2_10_3 = []
for i in range(20):
cipde7_ex2_10_3.append(pp.loadExpObjectFast(path + "cipde7_mpja_rep_" + str(i) + ".json"))
cipde8_ex2_10_3 = []
for i in range(20):
cipde8_ex2_10_3.append(pp.loadExpObjectFast(path + "cipde8_mpja_rep_" + str(i) + ".json"))
cipde9_ex2_10_3 = []
for i in range(20):
cipde9_ex2_10_3.append(pp.loadExpObjectFast(path + "cipde9_mpja_rep_" + str(i) + ".json"))
cipde0a_ex2_10_3_ert_data = []
for c in cipde0a_ex2_10_3:
cipde0a_ex2_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde0b_ex2_10_3_ert_data = []
for c in cipde0b_ex2_10_3:
cipde0b_ex2_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde1_ex2_10_3_ert_data = []
for c in cipde1_ex2_10_3:
cipde1_ex2_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde2_ex2_10_3_ert_data = []
for c in cipde2_ex2_10_3:
cipde2_ex2_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde3_ex2_10_3_ert_data = []
for c in cipde3_ex2_10_3:
cipde3_ex2_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde4_ex2_10_3_ert_data = []
for c in cipde4_ex2_10_3:
cipde4_ex2_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde5_ex2_10_3_ert_data = []
for c in cipde5_ex2_10_3:
cipde5_ex2_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde6_ex2_10_3_ert_data = []
for c in cipde6_ex2_10_3:
cipde6_ex2_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde7_ex2_10_3_ert_data = []
for c in cipde7_ex2_10_3:
cipde7_ex2_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde8_ex2_10_3_ert_data = []
for c in cipde8_ex2_10_3:
cipde8_ex2_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde9_ex2_10_3_ert_data = []
for c in cipde9_ex2_10_3:
cipde9_ex2_10_3_ert_data.append([[c["normL2"]],[10**3]])
###Output
_____no_output_____
###Markdown
mpJADEa 10^4
###Code
# load experiment 2 data from file
path = "F:/FHV/Masterthesis/data_backup/MA_Data/time_experiment_2/"
cipde0a_ex2_10_4 = []
for i in range(20):
cipde0a_ex2_10_4.append(pp.loadExpObjectFast(path + "cipde0a_mpja_rep_" + str(i) + ".json"))
cipde0b_ex2_10_4 = []
for i in range(20):
cipde0b_ex2_10_4.append(pp.loadExpObjectFast(path + "cipde0b_mpja_rep_" + str(i) + ".json"))
cipde1_ex2_10_4 = []
for i in range(20):
cipde1_ex2_10_4.append(pp.loadExpObjectFast(path + "cipde1_mpja_rep_" + str(i) + ".json"))
cipde2_ex2_10_4 = []
for i in range(20):
cipde2_ex2_10_4.append(pp.loadExpObjectFast(path + "cipde2_mpja_rep_" + str(i) + ".json"))
cipde3_ex2_10_4 = []
for i in range(20):
cipde3_ex2_10_4.append(pp.loadExpObjectFast(path + "cipde3_mpja_rep_" + str(i) + ".json"))
cipde4_ex2_10_4 = []
for i in range(20):
cipde4_ex2_10_4.append(pp.loadExpObjectFast(path + "cipde4_mpja_rep_" + str(i) + ".json"))
cipde5_ex2_10_4 = []
for i in range(20):
cipde5_ex2_10_4.append(pp.loadExpObjectFast(path + "cipde5_mpja_rep_" + str(i) + ".json"))
cipde6_ex2_10_4 = []
for i in range(20):
cipde6_ex2_10_4.append(pp.loadExpObjectFast(path + "cipde6_mpja_rep_" + str(i) + ".json"))
cipde7_ex2_10_4 = []
for i in range(20):
cipde7_ex2_10_4.append(pp.loadExpObjectFast(path + "cipde7_mpja_rep_" + str(i) + ".json"))
cipde8_ex2_10_4 = []
for i in range(20):
cipde8_ex2_10_4.append(pp.loadExpObjectFast(path + "cipde8_mpja_rep_" + str(i) + ".json"))
cipde9_ex2_10_4 = []
for i in range(20):
cipde9_ex2_10_4.append(pp.loadExpObjectFast(path + "cipde9_mpja_rep_" + str(i) + ".json"))
cipde0a_ex2_10_4_ert_data = []
for c in cipde0a_ex2_10_4:
cipde0a_ex2_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde0b_ex2_10_4_ert_data = []
for c in cipde0b_ex2_10_4:
cipde0b_ex2_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde1_ex2_10_4_ert_data = []
for c in cipde1_ex2_10_4:
cipde1_ex2_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde2_ex2_10_4_ert_data = []
for c in cipde2_ex2_10_4:
cipde2_ex2_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde3_ex2_10_4_ert_data = []
for c in cipde3_ex2_10_4:
cipde3_ex2_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde4_ex2_10_4_ert_data = []
for c in cipde4_ex2_10_4:
cipde4_ex2_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde5_ex2_10_4_ert_data = []
for c in cipde5_ex2_10_4:
cipde5_ex2_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde6_ex2_10_4_ert_data = []
for c in cipde6_ex2_10_4:
cipde6_ex2_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde7_ex2_10_4_ert_data = []
for c in cipde7_ex2_10_4:
cipde7_ex2_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde8_ex2_10_4_ert_data = []
for c in cipde8_ex2_10_4:
cipde8_ex2_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde9_ex2_10_4_ert_data = []
for c in cipde9_ex2_10_4:
cipde9_ex2_10_4_ert_data.append([[c["normL2"]],[10**4]])
###Output
_____no_output_____
###Markdown
mpJADEa 10^5
###Code
# load experiment 2 data from file
path = "F:/FHV/Masterthesis/data_backup/MA_Data/experiment_2_10_5/"
cipde0a_ex2_10_5 = []
for i in range(20):
cipde0a_ex2_10_5.append(pp.loadExpObjectFast(path + "cipde0a_mpja_rep_" + str(i) + ".json"))
cipde0b_ex2_10_5 = []
for i in range(20):
cipde0b_ex2_10_5.append(pp.loadExpObjectFast(path + "cipde0b_mpja_rep_" + str(i) + ".json"))
cipde1_ex2_10_5 = []
for i in range(20):
cipde1_ex2_10_5.append(pp.loadExpObjectFast(path + "cipde1_mpja_rep_" + str(i) + ".json"))
cipde2_ex2_10_5 = []
for i in range(20):
cipde2_ex2_10_5.append(pp.loadExpObjectFast(path + "cipde2_mpja_rep_" + str(i) + ".json"))
cipde3_ex2_10_5 = []
for i in range(20):
cipde3_ex2_10_5.append(pp.loadExpObjectFast(path + "cipde3_mpja_rep_" + str(i) + ".json"))
cipde4_ex2_10_5 = []
for i in range(20):
cipde4_ex2_10_5.append(pp.loadExpObjectFast(path + "cipde4_mpja_rep_" + str(i) + ".json"))
cipde5_ex2_10_5 = []
for i in range(20):
cipde5_ex2_10_5.append(pp.loadExpObjectFast(path + "cipde5_mpja_rep_" + str(i) + ".json"))
cipde6_ex2_10_5 = []
for i in range(20):
cipde6_ex2_10_5.append(pp.loadExpObjectFast(path + "cipde6_mpja_rep_" + str(i) + ".json"))
cipde7_ex2_10_5 = []
for i in range(20):
cipde7_ex2_10_5.append(pp.loadExpObjectFast(path + "cipde7_mpja_rep_" + str(i) + ".json"))
cipde8_ex2_10_5 = []
for i in range(20):
cipde8_ex2_10_5.append(pp.loadExpObjectFast(path + "cipde8_mpja_rep_" + str(i) + ".json"))
cipde9_ex2_10_5 = []
for i in range(20):
cipde9_ex2_10_5.append(pp.loadExpObjectFast(path + "cipde9_mpja_rep_" + str(i) + ".json"))
cipde0a_ex2_10_5_ert_data = []
for c in cipde0a_ex2_10_5:
cipde0a_ex2_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde0b_ex2_10_5_ert_data = []
for c in cipde0b_ex2_10_5:
cipde0b_ex2_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde1_ex2_10_5_ert_data = []
for c in cipde1_ex2_10_5:
cipde1_ex2_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde2_ex2_10_5_ert_data = []
for c in cipde2_ex2_10_5:
cipde2_ex2_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde3_ex2_10_5_ert_data = []
for c in cipde3_ex2_10_5:
cipde3_ex2_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde4_ex2_10_5_ert_data = []
for c in cipde4_ex2_10_5:
cipde4_ex2_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde5_ex2_10_5_ert_data = []
for c in cipde5_ex2_10_5:
cipde5_ex2_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde6_ex2_10_5_ert_data = []
for c in cipde6_ex2_10_5:
cipde6_ex2_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde7_ex2_10_5_ert_data = []
for c in cipde7_ex2_10_5:
cipde7_ex2_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde8_ex2_10_5_ert_data = []
for c in cipde8_ex2_10_5:
cipde8_ex2_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde9_ex2_10_5_ert_data = []
for c in cipde9_ex2_10_5:
cipde9_ex2_10_5_ert_data.append([[c["normL2"]],[10**5]])
###Output
_____no_output_____
###Markdown
mpJADEa 10^6
###Code
# load experiment 2 data from file
path = "F:/FHV/Masterthesis/data_backup/MA_Data/experiment_2/"
cipde0a_ex2_10_6 = []
for i in range(20):
cipde0a_ex2_10_6.append(pp.loadExpObjectFast(path + "cipde0a_mpja_rep_" + str(i) + ".json"))
cipde0b_ex2_10_6 = []
for i in range(20):
cipde0b_ex2_10_6.append(pp.loadExpObjectFast(path + "cipde0b_mpja_rep_" + str(i) + ".json"))
cipde1_ex2_10_6 = []
for i in range(20):
cipde1_ex2_10_6.append(pp.loadExpObjectFast(path + "cipde1_mpja_rep_" + str(i) + ".json"))
cipde2_ex2_10_6 = []
for i in range(20):
cipde2_ex2_10_6.append(pp.loadExpObjectFast(path + "cipde2_mpja_rep_" + str(i) + ".json"))
cipde3_ex2_10_6 = []
for i in range(20):
cipde3_ex2_10_6.append(pp.loadExpObjectFast(path + "cipde3_mpja_rep_" + str(i) + ".json"))
cipde4_ex2_10_6 = []
for i in range(20):
cipde4_ex2_10_6.append(pp.loadExpObjectFast(path + "cipde4_mpja_rep_" + str(i) + ".json"))
cipde5_ex2_10_6 = []
for i in range(20):
cipde5_ex2_10_6.append(pp.loadExpObjectFast(path + "cipde5_mpja_rep_" + str(i) + ".json"))
cipde6_ex2_10_6 = []
for i in range(20):
cipde6_ex2_10_6.append(pp.loadExpObjectFast(path + "cipde6_mpja_rep_" + str(i) + ".json"))
cipde7_ex2_10_6 = []
for i in range(20):
cipde7_ex2_10_6.append(pp.loadExpObjectFast(path + "cipde7_mpja_rep_" + str(i) + ".json"))
cipde8_ex2_10_6 = []
for i in range(20):
cipde8_ex2_10_6.append(pp.loadExpObjectFast(path + "cipde8_mpja_rep_" + str(i) + ".json"))
cipde9_ex2_10_6 = []
for i in range(20):
cipde9_ex2_10_6.append(pp.loadExpObjectFast(path + "cipde9_mpja_rep_" + str(i) + ".json"))
cipde0a_ex2_10_6_ert_data = []
for c in cipde0a_ex2_10_6:
cipde0a_ex2_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde0b_ex2_10_6_ert_data = []
for c in cipde0b_ex2_10_6:
cipde0b_ex2_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde1_ex2_10_6_ert_data = []
for c in cipde1_ex2_10_6:
cipde1_ex2_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde2_ex2_10_6_ert_data = []
for c in cipde2_ex2_10_6:
cipde2_ex2_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde3_ex2_10_6_ert_data = []
for c in cipde3_ex2_10_6:
cipde3_ex2_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde4_ex2_10_6_ert_data = []
for c in cipde4_ex2_10_6:
cipde4_ex2_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde5_ex2_10_6_ert_data = []
for c in cipde5_ex2_10_6:
cipde5_ex2_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde6_ex2_10_6_ert_data = []
for c in cipde6_ex2_10_6:
cipde6_ex2_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde7_ex2_10_6_ert_data = []
for c in cipde7_ex2_10_6:
cipde7_ex2_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde8_ex2_10_6_ert_data = []
for c in cipde8_ex2_10_6:
cipde8_ex2_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde9_ex2_10_6_ert_data = []
for c in cipde9_ex2_10_6:
cipde9_ex2_10_6_ert_data.append([[c["normL2"]],[10**6]])
###Output
_____no_output_____
###Markdown
mpJADEGsk 10^3
###Code
# load experiment 3 data from file
path = "F:/FHV/Masterthesis/data_backup/MA_Data/experiment_3_10_3/"
cipde0a_ex3_10_3 = []
for i in range(20):
cipde0a_ex3_10_3.append(pp.loadExpObjectFast(path + "cipde0a_mpjgsk_rep_" + str(i) + ".json"))
cipde0b_ex3_10_3 = []
for i in range(20):
cipde0b_ex3_10_3.append(pp.loadExpObjectFast(path + "cipde0b_mpjgsk_rep_" + str(i) + ".json"))
cipde1_ex3_10_3 = []
for i in range(20):
cipde1_ex3_10_3.append(pp.loadExpObjectFast(path + "cipde1_mpjgsk_rep_" + str(i) + ".json"))
cipde2_ex3_10_3 = []
for i in range(20):
cipde2_ex3_10_3.append(pp.loadExpObjectFast(path + "cipde2_mpjgsk_rep_" + str(i) + ".json"))
cipde3_ex3_10_3 = []
for i in range(20):
cipde3_ex3_10_3.append(pp.loadExpObjectFast(path + "cipde3_mpjgsk_rep_" + str(i) + ".json"))
cipde4_ex3_10_3 = []
for i in range(20):
cipde4_ex3_10_3.append(pp.loadExpObjectFast(path + "cipde4_mpjgsk_rep_" + str(i) + ".json"))
cipde5_ex3_10_3 = []
for i in range(20):
cipde5_ex3_10_3.append(pp.loadExpObjectFast(path + "cipde5_mpjgsk_rep_" + str(i) + ".json"))
cipde6_ex3_10_3 = []
for i in range(20):
cipde6_ex3_10_3.append(pp.loadExpObjectFast(path + "cipde6_mpjgsk_rep_" + str(i) + ".json"))
cipde7_ex3_10_3 = []
for i in range(20):
cipde7_ex3_10_3.append(pp.loadExpObjectFast(path + "cipde7_mpjgsk_rep_" + str(i) + ".json"))
cipde8_ex3_10_3 = []
for i in range(20):
cipde8_ex3_10_3.append(pp.loadExpObjectFast(path + "cipde8_mpjgsk_rep_" + str(i) + ".json"))
cipde9_ex3_10_3 = []
for i in range(20):
cipde9_ex3_10_3.append(pp.loadExpObjectFast(path + "cipde9_mpjgsk_rep_" + str(i) + ".json"))
cipde0a_ex3_10_3_ert_data = []
for c in cipde0a_ex3_10_3:
cipde0a_ex3_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde0b_ex3_10_3_ert_data = []
for c in cipde0b_ex3_10_3:
cipde0b_ex3_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde1_ex3_10_3_ert_data = []
for c in cipde1_ex3_10_3:
cipde1_ex3_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde2_ex3_10_3_ert_data = []
for c in cipde2_ex3_10_3:
cipde2_ex3_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde3_ex3_10_3_ert_data = []
for c in cipde3_ex3_10_3:
cipde3_ex3_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde4_ex3_10_3_ert_data = []
for c in cipde4_ex3_10_3:
cipde4_ex3_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde5_ex3_10_3_ert_data = []
for c in cipde5_ex3_10_3:
cipde5_ex3_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde6_ex3_10_3_ert_data = []
for c in cipde6_ex3_10_3:
cipde6_ex3_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde7_ex3_10_3_ert_data = []
for c in cipde7_ex3_10_3:
cipde7_ex3_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde8_ex3_10_3_ert_data = []
for c in cipde8_ex3_10_3:
cipde8_ex3_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde9_ex3_10_3_ert_data = []
for c in cipde9_ex3_10_3:
cipde9_ex3_10_3_ert_data.append([[c["normL2"]],[10**3]])
###Output
_____no_output_____
###Markdown
mpJADEGsk 10^4
###Code
# load experiment 3 data from file
path = "F:/FHV/Masterthesis/data_backup/MA_Data/time_experiment_3/"
cipde0a_ex3_10_4 = []
for i in range(20):
cipde0a_ex3_10_4.append(pp.loadExpObjectFast(path + "cipde0a_mpjgsk_rep_" + str(i) + ".json"))
cipde0b_ex3_10_4 = []
for i in range(20):
cipde0b_ex3_10_4.append(pp.loadExpObjectFast(path + "cipde0b_mpjgsk_rep_" + str(i) + ".json"))
cipde1_ex3_10_4 = []
for i in range(20):
cipde1_ex3_10_4.append(pp.loadExpObjectFast(path + "cipde1_mpjgsk_rep_" + str(i) + ".json"))
cipde2_ex3_10_4 = []
for i in range(20):
cipde2_ex3_10_4.append(pp.loadExpObjectFast(path + "cipde2_mpjgsk_rep_" + str(i) + ".json"))
cipde3_ex3_10_4 = []
for i in range(20):
cipde3_ex3_10_4.append(pp.loadExpObjectFast(path + "cipde3_mpjgsk_rep_" + str(i) + ".json"))
cipde4_ex3_10_4 = []
for i in range(20):
cipde4_ex3_10_4.append(pp.loadExpObjectFast(path + "cipde4_mpjgsk_rep_" + str(i) + ".json"))
cipde5_ex3_10_4 = []
for i in range(20):
cipde5_ex3_10_4.append(pp.loadExpObjectFast(path + "cipde5_mpjgsk_rep_" + str(i) + ".json"))
cipde6_ex3_10_4 = []
for i in range(20):
cipde6_ex3_10_4.append(pp.loadExpObjectFast(path + "cipde6_mpjgsk_rep_" + str(i) + ".json"))
cipde7_ex3_10_4 = []
for i in range(20):
cipde7_ex3_10_4.append(pp.loadExpObjectFast(path + "cipde7_mpjgsk_rep_" + str(i) + ".json"))
cipde8_ex3_10_4 = []
for i in range(20):
cipde8_ex3_10_4.append(pp.loadExpObjectFast(path + "cipde8_mpjgsk_rep_" + str(i) + ".json"))
cipde9_ex3_10_4 = []
for i in range(20):
cipde9_ex3_10_4.append(pp.loadExpObjectFast(path + "cipde9_mpjgsk_rep_" + str(i) + ".json"))
cipde0a_ex3_10_4_ert_data = []
for c in cipde0a_ex3_10_4:
cipde0a_ex3_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde0b_ex3_10_4_ert_data = []
for c in cipde0b_ex3_10_4:
cipde0b_ex3_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde1_ex3_10_4_ert_data = []
for c in cipde1_ex3_10_4:
cipde1_ex3_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde2_ex3_10_4_ert_data = []
for c in cipde2_ex3_10_4:
cipde2_ex3_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde3_ex3_10_4_ert_data = []
for c in cipde3_ex3_10_4:
cipde3_ex3_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde4_ex3_10_4_ert_data = []
for c in cipde4_ex3_10_4:
cipde4_ex3_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde5_ex3_10_4_ert_data = []
for c in cipde5_ex3_10_4:
cipde5_ex3_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde6_ex3_10_4_ert_data = []
for c in cipde6_ex3_10_4:
cipde6_ex3_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde7_ex3_10_4_ert_data = []
for c in cipde7_ex3_10_4:
cipde7_ex3_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde8_ex3_10_4_ert_data = []
for c in cipde8_ex3_10_4:
cipde8_ex3_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde9_ex3_10_4_ert_data = []
for c in cipde9_ex3_10_4:
cipde9_ex3_10_4_ert_data.append([[c["normL2"]],[10**4]])
###Output
_____no_output_____
###Markdown
mpJADEGsk 10^5
###Code
# load experiment 3 data from file
path = "F:/FHV/Masterthesis/data_backup/MA_Data/experiment_3_10_5/"
cipde0a_ex3_10_5 = []
for i in range(20):
cipde0a_ex3_10_5.append(pp.loadExpObjectFast(path + "cipde0a_mpjgsk_rep_" + str(i) + ".json"))
cipde0b_ex3_10_5 = []
for i in range(20):
cipde0b_ex3_10_5.append(pp.loadExpObjectFast(path + "cipde0b_mpjgsk_rep_" + str(i) + ".json"))
cipde1_ex3_10_5 = []
for i in range(20):
cipde1_ex3_10_5.append(pp.loadExpObjectFast(path + "cipde1_mpjgsk_rep_" + str(i) + ".json"))
cipde2_ex3_10_5 = []
for i in range(20):
cipde2_ex3_10_5.append(pp.loadExpObjectFast(path + "cipde2_mpjgsk_rep_" + str(i) + ".json"))
cipde3_ex3_10_5 = []
for i in range(20):
cipde3_ex3_10_5.append(pp.loadExpObjectFast(path + "cipde3_mpjgsk_rep_" + str(i) + ".json"))
cipde4_ex3_10_5 = []
for i in range(20):
cipde4_ex3_10_5.append(pp.loadExpObjectFast(path + "cipde4_mpjgsk_rep_" + str(i) + ".json"))
cipde5_ex3_10_5 = []
for i in range(20):
cipde5_ex3_10_5.append(pp.loadExpObjectFast(path + "cipde5_mpjgsk_rep_" + str(i) + ".json"))
cipde6_ex3_10_5 = []
for i in range(20):
cipde6_ex3_10_5.append(pp.loadExpObjectFast(path + "cipde6_mpjgsk_rep_" + str(i) + ".json"))
cipde7_ex3_10_5 = []
for i in range(20):
cipde7_ex3_10_5.append(pp.loadExpObjectFast(path + "cipde7_mpjgsk_rep_" + str(i) + ".json"))
cipde8_ex3_10_5 = []
for i in range(20):
cipde8_ex3_10_5.append(pp.loadExpObjectFast(path + "cipde8_mpjgsk_rep_" + str(i) + ".json"))
cipde9_ex3_10_5 = []
for i in range(20):
cipde9_ex3_10_5.append(pp.loadExpObjectFast(path + "cipde9_mpjgsk_rep_" + str(i) + ".json"))
cipde0a_ex3_10_5_ert_data = []
for c in cipde0a_ex3_10_5:
cipde0a_ex3_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde0b_ex3_10_5_ert_data = []
for c in cipde0b_ex3_10_5:
cipde0b_ex3_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde1_ex3_10_5_ert_data = []
for c in cipde1_ex3_10_5:
cipde1_ex3_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde2_ex3_10_5_ert_data = []
for c in cipde2_ex3_10_5:
cipde2_ex3_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde3_ex3_10_5_ert_data = []
for c in cipde3_ex3_10_5:
cipde3_ex3_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde4_ex3_10_5_ert_data = []
for c in cipde4_ex3_10_5:
cipde4_ex3_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde5_ex3_10_5_ert_data = []
for c in cipde5_ex3_10_5:
cipde5_ex3_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde6_ex3_10_5_ert_data = []
for c in cipde6_ex3_10_5:
cipde6_ex3_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde7_ex3_10_5_ert_data = []
for c in cipde7_ex3_10_5:
cipde7_ex3_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde8_ex3_10_5_ert_data = []
for c in cipde8_ex3_10_5:
cipde8_ex3_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde9_ex3_10_5_ert_data = []
for c in cipde9_ex3_10_5:
cipde9_ex3_10_5_ert_data.append([[c["normL2"]],[10**5]])
###Output
_____no_output_____
###Markdown
mpJADEGsk 10^6
###Code
# load experiment 3 data from file
path = "F:/FHV/Masterthesis/data_backup/MA_Data/experiment_3/"
cipde0a_ex3_10_6 = []
for i in range(20):
cipde0a_ex3_10_6.append(pp.loadExpObjectFast(path + "cipde0a_mpjgsk_rep_" + str(i) + ".json"))
cipde0b_ex3_10_6 = []
for i in range(20):
cipde0b_ex3_10_6.append(pp.loadExpObjectFast(path + "cipde0b_mpjgsk_rep_" + str(i) + ".json"))
cipde1_ex3_10_6 = []
for i in range(20):
cipde1_ex3_10_6.append(pp.loadExpObjectFast(path + "cipde1_mpjgsk_rep_" + str(i) + ".json"))
cipde2_ex3_10_6 = []
for i in range(20):
cipde2_ex3_10_6.append(pp.loadExpObjectFast(path + "cipde2_mpjgsk_rep_" + str(i) + ".json"))
cipde3_ex3_10_6 = []
for i in range(20):
cipde3_ex3_10_6.append(pp.loadExpObjectFast(path + "cipde3_mpjgsk_rep_" + str(i) + ".json"))
cipde4_ex3_10_6 = []
for i in range(20):
cipde4_ex3_10_6.append(pp.loadExpObjectFast(path + "cipde4_mpjgsk_rep_" + str(i) + ".json"))
cipde5_ex3_10_6 = []
for i in range(20):
cipde5_ex3_10_6.append(pp.loadExpObjectFast(path + "cipde5_mpjgsk_rep_" + str(i) + ".json"))
cipde6_ex3_10_6 = []
for i in range(20):
cipde6_ex3_10_6.append(pp.loadExpObjectFast(path + "cipde6_mpjgsk_rep_" + str(i) + ".json"))
cipde7_ex3_10_6 = []
for i in range(20):
cipde7_ex3_10_6.append(pp.loadExpObjectFast(path + "cipde7_mpjgsk_rep_" + str(i) + ".json"))
cipde8_ex3_10_6 = []
for i in range(20):
cipde8_ex3_10_6.append(pp.loadExpObjectFast(path + "cipde8_mpjgsk_rep_" + str(i) + ".json"))
cipde9_ex3_10_6 = []
for i in range(20):
cipde9_ex3_10_6.append(pp.loadExpObjectFast(path + "cipde9_mpjgsk_rep_" + str(i) + ".json"))
cipde0a_ex3_10_6_ert_data = []
for c in cipde0a_ex3_10_6:
cipde0a_ex3_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde0b_ex3_10_6_ert_data = []
for c in cipde0b_ex3_10_6:
cipde0b_ex3_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde1_ex3_10_6_ert_data = []
for c in cipde1_ex3_10_6:
cipde1_ex3_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde2_ex3_10_6_ert_data = []
for c in cipde2_ex3_10_6:
cipde2_ex3_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde3_ex3_10_6_ert_data = []
for c in cipde3_ex3_10_6:
cipde3_ex3_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde4_ex3_10_6_ert_data = []
for c in cipde4_ex3_10_6:
cipde4_ex3_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde5_ex3_10_6_ert_data = []
for c in cipde5_ex3_10_6:
cipde5_ex3_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde6_ex3_10_6_ert_data = []
for c in cipde6_ex3_10_6:
cipde6_ex3_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde7_ex3_10_6_ert_data = []
for c in cipde7_ex3_10_6:
cipde7_ex3_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde8_ex3_10_6_ert_data = []
for c in cipde8_ex3_10_6:
cipde8_ex3_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde9_ex3_10_6_ert_data = []
for c in cipde9_ex3_10_6:
cipde9_ex3_10_6_ert_data.append([[c["normL2"]],[10**6]])
###Output
_____no_output_____
###Markdown
mpJADEaGsk 10^3
###Code
# load experiment 3a data from file
path = "F:/FHV/Masterthesis/data_backup/MA_Data/experiment_3a_10_3/"
cipde0a_ex3a_10_3 = []
for i in range(20):
cipde0a_ex3a_10_3.append(pp.loadExpObjectFast(path + "cipde0a_mpjagsk_rep_" + str(i) + ".json"))
cipde0b_ex3a_10_3 = []
for i in range(20):
cipde0b_ex3a_10_3.append(pp.loadExpObjectFast(path + "cipde0b_mpjagsk_rep_" + str(i) + ".json"))
cipde1_ex3a_10_3 = []
for i in range(20):
cipde1_ex3a_10_3.append(pp.loadExpObjectFast(path + "cipde1_mpjagsk_rep_" + str(i) + ".json"))
cipde2_ex3a_10_3 = []
for i in range(20):
cipde2_ex3a_10_3.append(pp.loadExpObjectFast(path + "cipde2_mpjagsk_rep_" + str(i) + ".json"))
cipde3_ex3a_10_3 = []
for i in range(20):
cipde3_ex3a_10_3.append(pp.loadExpObjectFast(path + "cipde3_mpjagsk_rep_" + str(i) + ".json"))
cipde4_ex3a_10_3 = []
for i in range(20):
cipde4_ex3a_10_3.append(pp.loadExpObjectFast(path + "cipde4_mpjagsk_rep_" + str(i) + ".json"))
cipde5_ex3a_10_3 = []
for i in range(20):
cipde5_ex3a_10_3.append(pp.loadExpObjectFast(path + "cipde5_mpjagsk_rep_" + str(i) + ".json"))
cipde6_ex3a_10_3 = []
for i in range(20):
cipde6_ex3a_10_3.append(pp.loadExpObjectFast(path + "cipde6_mpjagsk_rep_" + str(i) + ".json"))
cipde7_ex3a_10_3 = []
for i in range(20):
cipde7_ex3a_10_3.append(pp.loadExpObjectFast(path + "cipde7_mpjagsk_rep_" + str(i) + ".json"))
cipde8_ex3a_10_3 = []
for i in range(20):
cipde8_ex3a_10_3.append(pp.loadExpObjectFast(path + "cipde8_mpjagsk_rep_" + str(i) + ".json"))
cipde9_ex3a_10_3 = []
for i in range(20):
cipde9_ex3a_10_3.append(pp.loadExpObjectFast(path + "cipde9_mpjagsk_rep_" + str(i) + ".json"))
cipde0a_ex3a_10_3_ert_data = []
for c in cipde0a_ex3a_10_3:
cipde0a_ex3a_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde0b_ex3a_10_3_ert_data = []
for c in cipde0b_ex3a_10_3:
cipde0b_ex3a_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde1_ex3a_10_3_ert_data = []
for c in cipde1_ex3a_10_3:
cipde1_ex3a_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde2_ex3a_10_3_ert_data = []
for c in cipde2_ex3a_10_3:
cipde2_ex3a_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde3_ex3a_10_3_ert_data = []
for c in cipde3_ex3a_10_3:
cipde3_ex3a_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde4_ex3a_10_3_ert_data = []
for c in cipde4_ex3a_10_3:
cipde4_ex3a_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde5_ex3a_10_3_ert_data = []
for c in cipde5_ex3a_10_3:
cipde5_ex3a_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde6_ex3a_10_3_ert_data = []
for c in cipde6_ex3a_10_3:
cipde6_ex3a_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde7_ex3a_10_3_ert_data = []
for c in cipde7_ex3a_10_3:
cipde7_ex3a_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde8_ex3a_10_3_ert_data = []
for c in cipde8_ex3a_10_3:
cipde8_ex3a_10_3_ert_data.append([[c["normL2"]],[10**3]])
cipde9_ex3a_10_3_ert_data = []
for c in cipde9_ex3a_10_3:
cipde9_ex3a_10_3_ert_data.append([[c["normL2"]],[10**3]])
###Output
_____no_output_____
###Markdown
mpJADEaGsk 10^4
###Code
# load experiment 3a data from file
path = "F:/FHV/Masterthesis/data_backup/MA_Data/time_experiment_3a/"
cipde0a_ex3a_10_4 = []
for i in range(20):
cipde0a_ex3a_10_4.append(pp.loadExpObjectFast(path + "cipde0a_mpjagsk_rep_" + str(i) + ".json"))
cipde0b_ex3a_10_4 = []
for i in range(20):
cipde0b_ex3a_10_4.append(pp.loadExpObjectFast(path + "cipde0b_mpjagsk_rep_" + str(i) + ".json"))
cipde1_ex3a_10_4 = []
for i in range(20):
cipde1_ex3a_10_4.append(pp.loadExpObjectFast(path + "cipde1_mpjagsk_rep_" + str(i) + ".json"))
cipde2_ex3a_10_4 = []
for i in range(20):
cipde2_ex3a_10_4.append(pp.loadExpObjectFast(path + "cipde2_mpjagsk_rep_" + str(i) + ".json"))
cipde3_ex3a_10_4 = []
for i in range(20):
cipde3_ex3a_10_4.append(pp.loadExpObjectFast(path + "cipde3_mpjagsk_rep_" + str(i) + ".json"))
cipde4_ex3a_10_4 = []
for i in range(20):
cipde4_ex3a_10_4.append(pp.loadExpObjectFast(path + "cipde4_mpjagsk_rep_" + str(i) + ".json"))
cipde5_ex3a_10_4 = []
for i in range(20):
cipde5_ex3a_10_4.append(pp.loadExpObjectFast(path + "cipde5_mpjagsk_rep_" + str(i) + ".json"))
cipde6_ex3a_10_4 = []
for i in range(20):
cipde6_ex3a_10_4.append(pp.loadExpObjectFast(path + "cipde6_mpjagsk_rep_" + str(i) + ".json"))
cipde7_ex3a_10_4 = []
for i in range(20):
cipde7_ex3a_10_4.append(pp.loadExpObjectFast(path + "cipde7_mpjagsk_rep_" + str(i) + ".json"))
cipde8_ex3a_10_4 = []
for i in range(20):
cipde8_ex3a_10_4.append(pp.loadExpObjectFast(path + "cipde8_mpjagsk_rep_" + str(i) + ".json"))
cipde9_ex3a_10_4 = []
for i in range(20):
cipde9_ex3a_10_4.append(pp.loadExpObjectFast(path + "cipde9_mpjagsk_rep_" + str(i) + ".json"))
cipde0a_ex3a_10_4_ert_data = []
for c in cipde0a_ex3a_10_4:
cipde0a_ex3a_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde0b_ex3a_10_4_ert_data = []
for c in cipde0b_ex3a_10_4:
cipde0b_ex3a_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde1_ex3a_10_4_ert_data = []
for c in cipde1_ex3a_10_4:
cipde1_ex3a_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde2_ex3a_10_4_ert_data = []
for c in cipde2_ex3a_10_4:
cipde2_ex3a_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde3_ex3a_10_4_ert_data = []
for c in cipde3_ex3a_10_4:
cipde3_ex3a_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde4_ex3a_10_4_ert_data = []
for c in cipde4_ex3a_10_4:
cipde4_ex3a_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde5_ex3a_10_4_ert_data = []
for c in cipde5_ex3a_10_4:
cipde5_ex3a_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde6_ex3a_10_4_ert_data = []
for c in cipde6_ex3a_10_4:
cipde6_ex3a_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde7_ex3a_10_4_ert_data = []
for c in cipde7_ex3a_10_4:
cipde7_ex3a_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde8_ex3a_10_4_ert_data = []
for c in cipde8_ex3a_10_4:
cipde8_ex3a_10_4_ert_data.append([[c["normL2"]],[10**4]])
cipde9_ex3a_10_4_ert_data = []
for c in cipde9_ex3a_10_4:
cipde9_ex3a_10_4_ert_data.append([[c["normL2"]],[10**4]])
###Output
_____no_output_____
###Markdown
mpJADEaGsk 10^5
###Code
# load experiment 3a data from file
path = "F:/FHV/Masterthesis/data_backup/MA_Data/experiment_3a_10_5/"
cipde0a_ex3a_10_5 = []
for i in range(20):
cipde0a_ex3a_10_5.append(pp.loadExpObjectFast(path + "cipde0a_mpjagsk_rep_" + str(i) + ".json"))
cipde0b_ex3a_10_5 = []
for i in range(20):
cipde0b_ex3a_10_5.append(pp.loadExpObjectFast(path + "cipde0b_mpjagsk_rep_" + str(i) + ".json"))
cipde1_ex3a_10_5 = []
for i in range(20):
cipde1_ex3a_10_5.append(pp.loadExpObjectFast(path + "cipde1_mpjagsk_rep_" + str(i) + ".json"))
cipde2_ex3a_10_5 = []
for i in range(20):
cipde2_ex3a_10_5.append(pp.loadExpObjectFast(path + "cipde2_mpjagsk_rep_" + str(i) + ".json"))
cipde3_ex3a_10_5 = []
for i in range(20):
cipde3_ex3a_10_5.append(pp.loadExpObjectFast(path + "cipde3_mpjagsk_rep_" + str(i) + ".json"))
cipde4_ex3a_10_5 = []
for i in range(20):
cipde4_ex3a_10_5.append(pp.loadExpObjectFast(path + "cipde4_mpjagsk_rep_" + str(i) + ".json"))
cipde5_ex3a_10_5 = []
for i in range(20):
cipde5_ex3a_10_5.append(pp.loadExpObjectFast(path + "cipde5_mpjagsk_rep_" + str(i) + ".json"))
cipde6_ex3a_10_5 = []
for i in range(20):
cipde6_ex3a_10_5.append(pp.loadExpObjectFast(path + "cipde6_mpjagsk_rep_" + str(i) + ".json"))
cipde7_ex3a_10_5 = []
for i in range(20):
cipde7_ex3a_10_5.append(pp.loadExpObjectFast(path + "cipde7_mpjagsk_rep_" + str(i) + ".json"))
cipde8_ex3a_10_5 = []
for i in range(20):
cipde8_ex3a_10_5.append(pp.loadExpObjectFast(path + "cipde8_mpjagsk_rep_" + str(i) + ".json"))
cipde9_ex3a_10_5 = []
for i in range(20):
cipde9_ex3a_10_5.append(pp.loadExpObjectFast(path + "cipde9_mpjagsk_rep_" + str(i) + ".json"))
cipde0a_ex3a_10_5_ert_data = []
for c in cipde0a_ex3a_10_5:
cipde0a_ex3a_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde0b_ex3a_10_5_ert_data = []
for c in cipde0b_ex3a_10_5:
cipde0b_ex3a_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde1_ex3a_10_5_ert_data = []
for c in cipde1_ex3a_10_5:
cipde1_ex3a_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde2_ex3a_10_5_ert_data = []
for c in cipde2_ex3a_10_5:
cipde2_ex3a_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde3_ex3a_10_5_ert_data = []
for c in cipde3_ex3a_10_5:
cipde3_ex3a_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde4_ex3a_10_5_ert_data = []
for c in cipde4_ex3a_10_5:
cipde4_ex3a_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde5_ex3a_10_5_ert_data = []
for c in cipde5_ex3a_10_5:
cipde5_ex3a_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde6_ex3a_10_5_ert_data = []
for c in cipde6_ex3a_10_5:
cipde6_ex3a_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde7_ex3a_10_5_ert_data = []
for c in cipde7_ex3a_10_5:
cipde7_ex3a_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde8_ex3a_10_5_ert_data = []
for c in cipde8_ex3a_10_5:
cipde8_ex3a_10_5_ert_data.append([[c["normL2"]],[10**5]])
cipde9_ex3a_10_5_ert_data = []
for c in cipde9_ex3a_10_5:
cipde9_ex3a_10_5_ert_data.append([[c["normL2"]],[10**5]])
###Output
_____no_output_____
###Markdown
mpJADEaGsk 10^6
###Code
# load experiment 3a data from file
path = "F:/FHV/Masterthesis/data_backup/MA_Data/experiment_3a/"
cipde0a_ex3a_10_6 = []
for i in range(20):
cipde0a_ex3a_10_6.append(pp.loadExpObjectFast(path + "cipde0a_mpjagsk_rep_" + str(i) + ".json"))
cipde0b_ex3a_10_6 = []
for i in range(20):
cipde0b_ex3a_10_6.append(pp.loadExpObjectFast(path + "cipde0b_mpjagsk_rep_" + str(i) + ".json"))
cipde1_ex3a_10_6 = []
for i in range(20):
cipde1_ex3a_10_6.append(pp.loadExpObjectFast(path + "cipde1_mpjagsk_rep_" + str(i) + ".json"))
cipde2_ex3a_10_6 = []
for i in range(20):
cipde2_ex3a_10_6.append(pp.loadExpObjectFast(path + "cipde2_mpjagsk_rep_" + str(i) + ".json"))
cipde3_ex3a_10_6 = []
for i in range(20):
cipde3_ex3a_10_6.append(pp.loadExpObjectFast(path + "cipde3_mpjagsk_rep_" + str(i) + ".json"))
cipde4_ex3a_10_6 = []
for i in range(20):
cipde4_ex3a_10_6.append(pp.loadExpObjectFast(path + "cipde4_mpjagsk_rep_" + str(i) + ".json"))
cipde5_ex3a_10_6 = []
for i in range(20):
cipde5_ex3a_10_6.append(pp.loadExpObjectFast(path + "cipde5_mpjagsk_rep_" + str(i) + ".json"))
cipde6_ex3a_10_6 = []
for i in range(20):
cipde6_ex3a_10_6.append(pp.loadExpObjectFast(path + "cipde6_mpjagsk_rep_" + str(i) + ".json"))
cipde7_ex3a_10_6 = []
for i in range(20):
cipde7_ex3a_10_6.append(pp.loadExpObjectFast(path + "cipde7_mpjagsk_rep_" + str(i) + ".json"))
cipde8_ex3a_10_6 = []
for i in range(20):
cipde8_ex3a_10_6.append(pp.loadExpObjectFast(path + "cipde8_mpjagsk_rep_" + str(i) + ".json"))
cipde9_ex3a_10_6 = []
for i in range(20):
cipde9_ex3a_10_6.append(pp.loadExpObjectFast(path + "cipde9_mpjagsk_rep_" + str(i) + ".json"))
cipde0a_ex3a_10_6_ert_data = []
for c in cipde0a_ex3a_10_6:
cipde0a_ex3a_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde0b_ex3a_10_6_ert_data = []
for c in cipde0b_ex3a_10_6:
cipde0b_ex3a_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde1_ex3a_10_6_ert_data = []
for c in cipde1_ex3a_10_6:
cipde1_ex3a_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde2_ex3a_10_6_ert_data = []
for c in cipde2_ex3a_10_6:
cipde2_ex3a_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde3_ex3a_10_6_ert_data = []
for c in cipde3_ex3a_10_6:
cipde3_ex3a_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde4_ex3a_10_6_ert_data = []
for c in cipde4_ex3a_10_6:
cipde4_ex3a_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde5_ex3a_10_6_ert_data = []
for c in cipde5_ex3a_10_6:
cipde5_ex3a_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde6_ex3a_10_6_ert_data = []
for c in cipde6_ex3a_10_6:
cipde6_ex3a_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde7_ex3a_10_6_ert_data = []
for c in cipde7_ex3a_10_6:
cipde7_ex3a_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde8_ex3a_10_6_ert_data = []
for c in cipde8_ex3a_10_6:
cipde8_ex3a_10_6_ert_data.append([[c["normL2"]],[10**6]])
cipde9_ex3a_10_6_ert_data = []
for c in cipde9_ex3a_10_6:
cipde9_ex3a_10_6_ert_data.append([[c["normL2"]],[10**6]])
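
# --- editor's sketch (hypothetical helper, not part of the original run) ---
# the twenty near-identical load-and-label loops above could be generated by one
# function; only pp.loadExpObjectFast, the "normL2" key and the file-name layout
# are taken from this notebook - the helper name itself is made up
def load_ert_data(path, problem, algo_tag, budget, n_reps=20):
    runs = [pp.loadExpObjectFast(path + problem + "_" + algo_tag + "_rep_" + str(i) + ".json")
            for i in range(n_reps)]
    return [[[c["normL2"]], [budget]] for c in runs]
# e.g.: cipde0a_ex3a_10_6_ert_data = load_ert_data(path, "cipde0a", "mpjagsk", 10**6)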
mj_ert_data = []
temp = []
for i in [0.05, 0.01, 0.005, 0.001]:
temp.append([pp.calcSingleERT(cipde0a_ex0_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde0b_ex0_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde1_ex0_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde2_ex0_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde3_ex0_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde4_ex0_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde5_ex0_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde6_ex0_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde7_ex0_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde8_ex0_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde9_ex0_10_3_ert_data, i)[1]])
mj_ert_data.append(np.mean(temp))
temp = []
for i in [0.05, 0.01, 0.005, 0.001]:
temp.append([pp.calcSingleERT(cipde0a_ex0_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde0b_ex0_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde1_ex0_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde2_ex0_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde3_ex0_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde4_ex0_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde5_ex0_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde6_ex0_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde7_ex0_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde8_ex0_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde9_ex0_10_4_ert_data, i)[1]])
mj_ert_data.append(np.mean(temp))
temp = []
for i in [0.05, 0.01, 0.005, 0.001]:
temp.append([pp.calcSingleERT(cipde0a_ex0_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde0b_ex0_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde1_ex0_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde2_ex0_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde3_ex0_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde4_ex0_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde5_ex0_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde6_ex0_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde7_ex0_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde8_ex0_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde9_ex0_10_5_ert_data, i)[1]])
mj_ert_data.append(np.mean(temp))
temp = []
for i in [0.05, 0.01, 0.005, 0.001]:
temp.append([pp.calcSingleERT(cipde0a_ex0_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde0b_ex0_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde1_ex0_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde2_ex0_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde3_ex0_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde4_ex0_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde5_ex0_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde6_ex0_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde7_ex0_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde8_ex0_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde9_ex0_10_6_ert_data, i)[1]])
mj_ert_data.append(np.mean(temp))
mpj_ert_data = []
temp = []
for i in [0.05, 0.01, 0.005, 0.001]:
temp.append([pp.calcSingleERT(cipde0a_ex1_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde0b_ex1_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde1_ex1_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde2_ex1_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde3_ex1_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde4_ex1_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde5_ex1_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde6_ex1_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde7_ex1_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde8_ex1_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde9_ex1_10_3_ert_data, i)[1]])
mpj_ert_data.append(np.mean(temp))
temp = []
for i in [0.05, 0.01, 0.005, 0.001]:
temp.append([pp.calcSingleERT(cipde0a_ex1_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde0b_ex1_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde1_ex1_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde2_ex1_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde3_ex1_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde4_ex1_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde5_ex1_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde6_ex1_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde7_ex1_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde8_ex1_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde9_ex1_10_4_ert_data, i)[1]])
mpj_ert_data.append(np.mean(temp))
temp = []
for i in [0.05, 0.01, 0.005, 0.001]:
temp.append([pp.calcSingleERT(cipde0a_ex1_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde0b_ex1_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde1_ex1_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde2_ex1_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde3_ex1_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde4_ex1_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde5_ex1_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde6_ex1_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde7_ex1_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde8_ex1_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde9_ex1_10_5_ert_data, i)[1]])
mpj_ert_data.append(np.mean(temp))
temp = []
for i in [0.05, 0.01, 0.005, 0.001]:
temp.append([pp.calcSingleERT(cipde0a_ex1_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde0b_ex1_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde1_ex1_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde2_ex1_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde3_ex1_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde4_ex1_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde5_ex1_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde6_ex1_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde7_ex1_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde8_ex1_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde9_ex1_10_6_ert_data, i)[1]])
mpj_ert_data.append(np.mean(temp))
mpja_ert_data = []
temp = []
for i in [0.05, 0.01, 0.005, 0.001]:
temp.append([pp.calcSingleERT(cipde0a_ex2_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde0b_ex2_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde1_ex2_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde2_ex2_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde3_ex2_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde4_ex2_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde5_ex2_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde6_ex2_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde7_ex2_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde8_ex2_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde9_ex2_10_3_ert_data, i)[1]])
mpja_ert_data.append(np.mean(temp))
temp = []
for i in [0.05, 0.01, 0.005, 0.001]:
temp.append([pp.calcSingleERT(cipde0a_ex2_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde0b_ex2_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde1_ex2_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde2_ex2_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde3_ex2_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde4_ex2_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde5_ex2_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde6_ex2_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde7_ex2_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde8_ex2_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde9_ex2_10_4_ert_data, i)[1]])
mpja_ert_data.append(np.mean(temp))
temp = []
for i in [0.05, 0.01, 0.005, 0.001]:
temp.append([pp.calcSingleERT(cipde0a_ex2_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde0b_ex2_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde1_ex2_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde2_ex2_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde3_ex2_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde4_ex2_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde5_ex2_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde6_ex2_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde7_ex2_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde8_ex2_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde9_ex2_10_5_ert_data, i)[1]])
mpja_ert_data.append(np.mean(temp))
temp = []
for i in [0.05, 0.01, 0.005, 0.001]:
temp.append([pp.calcSingleERT(cipde0a_ex2_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde0b_ex2_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde1_ex2_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde2_ex2_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde3_ex2_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde4_ex2_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde5_ex2_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde6_ex2_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde7_ex2_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde8_ex2_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde9_ex2_10_6_ert_data, i)[1]])
mpja_ert_data.append(np.mean(temp))
mpjgsk_ert_data = []
temp = []
for i in [0.05, 0.01, 0.005, 0.001]:
temp.append([pp.calcSingleERT(cipde0a_ex3_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde0b_ex3_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde1_ex3_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde2_ex3_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde3_ex3_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde4_ex3_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde5_ex3_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde6_ex3_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde7_ex3_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde8_ex3_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde9_ex3_10_3_ert_data, i)[1]])
mpjgsk_ert_data.append(np.mean(temp))
temp = []
for i in [0.05, 0.01, 0.005, 0.001]:
temp.append([pp.calcSingleERT(cipde0a_ex3_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde0b_ex3_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde1_ex3_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde2_ex3_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde3_ex3_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde4_ex3_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde5_ex3_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde6_ex3_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde7_ex3_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde8_ex3_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde9_ex3_10_4_ert_data, i)[1]])
mpjgsk_ert_data.append(np.mean(temp))
temp = []
for i in [0.05, 0.01, 0.005, 0.001]:
temp.append([pp.calcSingleERT(cipde0a_ex3_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde0b_ex3_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde1_ex3_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde2_ex3_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde3_ex3_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde4_ex3_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde5_ex3_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde6_ex3_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde7_ex3_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde8_ex3_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde9_ex3_10_5_ert_data, i)[1]])
mpjgsk_ert_data.append(np.mean(temp))
temp = []
for i in [0.05, 0.01, 0.005, 0.001]:
temp.append([pp.calcSingleERT(cipde0a_ex3_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde0b_ex3_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde1_ex3_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde2_ex3_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde3_ex3_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde4_ex3_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde5_ex3_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde6_ex3_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde7_ex3_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde8_ex3_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde9_ex3_10_6_ert_data, i)[1]])
mpjgsk_ert_data.append(np.mean(temp))
mpjagsk_ert_data = []
temp = []
for i in [0.05, 0.01, 0.005, 0.001]:
    temp.append([pp.calcSingleERT(cipde0a_ex3a_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde0b_ex3a_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde1_ex3a_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde2_ex3a_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde3_ex3a_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde4_ex3a_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde5_ex3a_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde6_ex3a_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde7_ex3a_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde8_ex3a_10_3_ert_data, i)[1],
pp.calcSingleERT(cipde9_ex3a_10_3_ert_data, i)[1]])
mpjagsk_ert_data.append(np.mean(temp))
temp = []
for i in [0.05, 0.01, 0.005, 0.001]:
    temp.append([pp.calcSingleERT(cipde0a_ex3a_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde0b_ex3a_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde1_ex3a_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde2_ex3a_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde3_ex3a_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde4_ex3a_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde5_ex3a_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde6_ex3a_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde7_ex3a_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde8_ex3a_10_4_ert_data, i)[1],
pp.calcSingleERT(cipde9_ex3a_10_4_ert_data, i)[1]])
mpjagsk_ert_data.append(np.mean(temp))
temp = []
for i in [0.05, 0.01, 0.005, 0.001]:
    temp.append([pp.calcSingleERT(cipde0a_ex3a_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde0b_ex3a_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde1_ex3a_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde2_ex3a_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde3_ex3a_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde4_ex3a_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde5_ex3a_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde6_ex3a_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde7_ex3a_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde8_ex3a_10_5_ert_data, i)[1],
pp.calcSingleERT(cipde9_ex3a_10_5_ert_data, i)[1]])
mpjagsk_ert_data.append(np.mean(temp))
temp = []
for i in [0.05, 0.01, 0.005, 0.001]:
    temp.append([pp.calcSingleERT(cipde0a_ex3a_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde0b_ex3a_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde1_ex3a_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde2_ex3a_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde3_ex3a_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde4_ex3a_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde5_ex3a_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde6_ex3a_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde7_ex3a_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde8_ex3a_10_6_ert_data, i)[1],
pp.calcSingleERT(cipde9_ex3a_10_6_ert_data, i)[1]])
mpjagsk_ert_data.append(np.mean(temp))
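
# --- editor's sketch (hypothetical helper, same caveats as above) ---
# each value appended to the *_ert_data lists above is the mean of calcSingleERT
# over four precisions and eleven problems; that pattern condenses to:
def mean_ert(ert_datasets, precisions=(0.05, 0.01, 0.005, 0.001)):
    return np.mean([pp.calcSingleERT(d, eps)[1] for eps in precisions for d in ert_datasets])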
fig = plt.figure()
plt.plot([r"$10^3$", r"$10^4$", r"$10^5$", r"$10^6$"], mj_ert_data, label="serial JADE", marker="o")
plt.plot([r"$10^3$", r"$10^4$", r"$10^5$", r"$10^6$"], mpj_ert_data, label="parallel JADE", marker=".", linestyle="--")
plt.plot([r"$10^3$", r"$10^4$", r"$10^5$", r"$10^6$"], mpja_ert_data, label="adaptive scheme", marker="^")
plt.plot([r"$10^3$", r"$10^4$", r"$10^5$", r"$10^6$"], mpjgsk_ert_data, label="Gauss Sine Kernel", marker="*")
# plt.plot([r"$10^3$", r"$10^4$", r"$10^5$", r"$10^6$"], mpjagsk_ert_data, label="mpjagsk", marker="x")
plt.legend()
plt.xlabel("#FE")
plt.ylabel("Proportion of PDE")
plt.title("Empirical Runtime Distribution on Testbed")
plt.grid()
plt.savefig("./ert_plot.pdf", bbox_inches='tight')
###Output
_____no_output_____ |
SINet/SINet_ONNX.ipynb | ###Markdown
**Install ONNX and ONNX Simplifier**
###Code
!sudo apt-get install protobuf-compiler libprotoc-dev
!pip install onnx
!pip install onnx-simplifier
!pip install torch-summary
###Output
_____no_output_____
###Markdown
**Load And Convert The Model to ONNX Format**

Copy the model checkpoint weights (**model_296.pth**) and the model class file (**SINet.py**) into the current directory, and finally export the PyTorch model to ONNX format.
###Code
import torch
from SINet import *
config = [[[3, 1], [5, 1]], [[3, 1], [3, 1]],
[[3, 1], [5, 1]], [[3, 1], [3, 1]], [[5, 1], [3, 2]], [[5, 2], [3, 4]],
[[3, 1], [3, 1]], [[5, 1], [5, 1]], [[3, 2], [3, 4]], [[3, 1], [5, 2]]]
model = SINet(classes=2, p=2, q=8, config=config,
chnn=1)
model.load_state_dict(torch.load('/content/model_296.pth'))
model.eval()
dummy_input = torch.randn(1, 3, 320, 320)
input_names = [ "data" ]
output_names = [ "classifier/1" ]
torch.onnx.export(model, dummy_input, "SINet_320.onnx", verbose=True, input_names=input_names, output_names=output_names, opset_version=11, export_params=True, do_constant_folding=True)
###Output
_____no_output_____
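###Markdown
Editor's aside (hypothetical sketch, not part of the original pipeline): the note below mentions replacing **ReduceMax** with reshape and maxpool layers. One way to check that a channel-wise max can be expressed that way:
###Code
# A channel-wise torch.max exports as a ReduceMax node; the reshape + max_pool2d
# form below is numerically identical and exports as Reshape + MaxPool instead.
# Shapes here are illustrative only.
import torch
import torch.nn.functional as F
x = torch.randn(1, 8, 40, 40)
ref, _ = torch.max(x, dim=1, keepdim=True)
n, c, h, w = [int(s) for s in x.size()]
alt = F.max_pool2d(x.view(n, 1, c, h * w), kernel_size=(c, 1)).view(n, 1, h, w)
print(torch.allclose(ref, alt)) # True
###Output
_____no_output_____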
###Markdown
**Note:** The original SINet class file was modified at line 116, i.e. **x.size()** was changed to **[int(s) for s in x.size()]**. This makes the size values static and prevents errors during ONNX conversion. Also, some layers like **ReduceMax** may not be fully supported in the **OpenCV ONNX** runtime, so replace this operation with appropriate reshape and **maxpool** layers (see the sketch above).

**Add Softmax Layer And Save Model**

Save another version of the model with a **softmax output**.
###Code
from torchsummary import summary
soft_model = nn.Sequential(
model,
nn.Softmax(1)
)
# Export softmax model
dummy_input = torch.randn(1, 3, 320, 320)
soft_model.eval()
input_names = [ "data" ]
output_names = [ "Softmax/1" ]
torch.onnx.export(soft_model, dummy_input, "SINet_320_Softmax.onnx", verbose=True, input_names=input_names, output_names=output_names, opset_version=11, export_params=True, do_constant_folding=True)
###Output
_____no_output_____
###Markdown
**Optimize The Models With ONNX Simplifier**
###Code
!python3 -m onnxsim SINet_320_Softmax.onnx SINet_320_optim_Softmax.onnx
!python3 -m onnxsim SINet_320.onnx SINet_320_optim.onnx
###Output
_____no_output_____
###Markdown
**Run Inference Using ONNX-Runtime**
###Code
import numpy as np
import cv2
import onnxruntime as rt
img = cv2.imread('obama.jpg')
img = cv2.resize(img, (320,320))
img = img.astype(np.float32)
# Preprocess images based on the original training/inference code
mean = [102.890434, 111.25247, 126.91212 ]
std = [62.93292, 62.82138, 66.355705]
img=(img-mean)/std
img /= 255
img = img.transpose((2, 0, 1))
img = img[np.newaxis,...]
# Perform inference using the ONNX runtime
sess = rt.InferenceSession("/content/SINet_320_optim.onnx")
input_name = sess.get_inputs()[0].name
pred_onx = sess.run(None, {input_name: img.astype(np.float32)})[0]
res=np.argmax(pred_onx[0], axis=0)[...,np.newaxis]
###Output
_____no_output_____
###Markdown
Perform PyTorch model inference on the **GPU** and **compare** the results with the **ONNX** model output.
###Code
# Enable gpu mode, if cuda available
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Load the model and inputs into GPU
model.to(device)
inputs=torch.from_numpy(img).float().to(device)
# Perform prediction and plot results
with torch.no_grad():
torch_res = model(inputs)
_, mask = torch.max(torch_res, 1)
torch_res = torch_res.cpu().numpy()
# Compare the outputs of onnx and pytorch models
np.allclose(pred_onx,torch_res, rtol=1e-03, atol=1e-05)
###Output
_____no_output_____
###Markdown
**Note:**
* On a dual-core 2.2GHz CPU, the inference time of the ONNX model is about **0.064 seconds** (i.e. **15 fps**) without any additional optimizations.
* On a **Tesla T4 GPU**, the avg. execution time of the **PyTorch model** was around 0.010s (**100 fps**), whereas on CPU it was around 0.144s.

**Plot The Results Using Matplotlib**
###Code
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import ImageGrid
import numpy as np
# Read input and background images
image = cv2.imread('obama.jpg')
image = cv2.resize(image, (320,320))
background = cv2.imread('whitehouse.jpeg')
background = cv2.resize(background, (320,320))
# Crop image using mask and blend with background
output = res*image + (1-res)*background
output = cv2.cvtColor(output.astype(np.uint8), cv2.COLOR_BGR2RGB)
# Plot the results using matplotlib
im1 = cv2.cvtColor(image.astype(np.uint8), cv2.COLOR_BGR2RGB)
im2 = cv2.cvtColor(background.astype(np.uint8), cv2.COLOR_BGR2RGB)
im3 = res.squeeze()*255
im4 = output
fig = plt.figure(figsize=(10., 10.))
grid = ImageGrid(fig, 111, # similar to subplot(111)
nrows_ncols=(2, 2), # creates 2x2 grid of axes
axes_pad=0.2, # pad between axes in inch.
)
for ax, im in zip(grid, [im1, im2, im3, im4]):
# Iterating over the grid returns the Axes.
ax.imshow(im)
plt.show()
###Output
_____no_output_____
###Markdown
**PyTorch To CoreML (Experimental)**

Install the latest coremltools:
###Code
!pip install --upgrade coremltools
###Output
_____no_output_____
###Markdown
Load the saved PyTorch model and directly convert it to **CoreML** format.
###Code
from SINet import *
import coremltools as ct
import torch
import torchvision
# Load pytorch model
config = [[[3, 1], [5, 1]], [[3, 1], [3, 1]],
[[3, 1], [5, 1]], [[3, 1], [3, 1]], [[5, 1], [3, 2]], [[5, 2], [3, 4]],
[[3, 1], [3, 1]], [[5, 1], [5, 1]], [[3, 2], [3, 4]], [[3, 1], [5, 2]]]
model = SINet(classes=2, p=2, q=8, config=config,
chnn=1)
model.load_state_dict(torch.load('/content/model_296.pth'))
model.eval()
# Get a pytorch model and save it as a *.pt file
pytorch_model = model
pytorch_model.eval()
example_input = torch.rand(1, 3, 320, 320)
traced_model = torch.jit.trace(pytorch_model, example_input)
traced_model.save("sinet.pt")
# Convert the saved PyTorch model to Core ML
mlmodel = ct.convert("sinet.pt",
inputs=[ct.TensorType(shape=(1, 3, 320, 320))])
# Save the coreml model
mlmodel.save("SINet.mlmodel")
###Output
_____no_output_____ |
notebooks/backup/Version-1.4/3.0-data-processing.ipynb | ###Markdown
Script Name :
Description :
Args :
Author : Nikhil Rao (in R), converted to Python by Nor Raymond
Email : [email protected]
###Code
import os
import glob
import pandas as pd
import numpy as np
import yaml
from IPython.core.display import display, HTML
# Function to load yaml configuration file
def load_config(config_name):
with open(os.path.join(config_path, config_name), 'r') as file:
config = yaml.safe_load(file)
return config
config_path = "conf/base"
try:
# load yaml catalog configuration file
config = load_config("catalog.yml")
os.chdir(config["project_path"])
root_path = os.getcwd()
except:
os.chdir('..')
# load yaml catalog configuration file
config = load_config("catalog.yml")
os.chdir(config["project_path"])
root_path = os.getcwd()
# import data_cleaning module
import src.data.data_cleaning as data_cleaning
###Output
_____no_output_____
###Markdown
Functions to initialize data ingestion
###Code
def group_files_by_language(data_path, files, file_initials):
    # Group file names by their language prefix (the text before the first '_').
    # Note: data_path and file_initials are currently unused but kept for a uniform signature.
    file_groups = {}
    for x in files:
        key = x.split('_')[0]  # e.g. "English_RC_rep_0.xlsx" -> "English"
        group = file_groups.get(key, [])
        group.append(x)
        file_groups[key] = group
    return file_groups
def create_file_exists_df(files, file_initials):
    file_exists = []
    for fname in files:
        for key in file_initials:
            if key in fname:
                file_exists.append((key, fname))
    file_exists = pd.DataFrame(file_exists, columns=['Keyword', 'Filename'])
    return file_exists
def data_ingestion_initialize(root_path, run_value, run_value_2):
# Function to load yaml configuration file
def load_config(config_name):
with open(os.path.join(config_path, config_name), 'r') as file:
config = yaml.safe_load(file)
return config
# load yaml catalog configuration file
config = load_config("catalog.yml")
# define reference file paths
ref_path = os.path.join(root_path, config["data_path"]["ref"])
ref_filepath = os.path.join(ref_path, config["filenames"]["rc_col_ref"])
ref_data = pd.read_excel(io = ref_filepath, sheet_name="threshold_raters", header=None)
if len(ref_data) != 0:
ref_data_cols = ref_data[0].tolist()
else:
ref_data_cols = []
print("Initialize data ingestion and file checking...\n")
if run_value == 'Deployment':
# define data input paths
data_path = os.path.join(root_path, config["data_path"]["output"], 'Deployment')
survey_path = ''
else:
# define data input paths
data_path = os.path.join(root_path, config["data_path"]["output"], run_value, run_value_2)
survey_path = os.path.join(root_path, config["data_path"]["survey"])
# get the list of files in raw folder
files = os.listdir(data_path)
files = [f for f in files if f[-5:] == '.xlsx']
file_initials = ['RC', 'Vocab_1', 'Vocab_2']
languages = []
for file in files:
for file_initial in file_initials:
lang = file.split('_' + file_initial)[0]
if not lang.endswith((".xlsx")):
languages.append(lang)
languages = pd.DataFrame(languages, columns = ['Language'])
file_groups = group_files_by_language(data_path, files, file_initials)
file_exists = create_file_exists_df(files, file_initials)
return data_path, files, languages, file_groups, file_exists, ref_data_cols, survey_path
###Output
_____no_output_____
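###Markdown
A quick illustration of the grouping helper above (editor's addition; these file names are made up for demonstration):
###Code
# hypothetical demo of group_files_by_language - illustrative file names only
demo_files = ["English_RC_rep_0.xlsx", "English_Vocab_1_rep_0.xlsx", "French_RC_rep_0.xlsx"]
group_files_by_language("", demo_files, ["RC", "Vocab_1", "Vocab_2"])
# expected: {'English': ['English_RC_rep_0.xlsx', 'English_Vocab_1_rep_0.xlsx'],
#            'French': ['French_RC_rep_0.xlsx']}
###Output
_____no_output_____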
###Markdown
Functions for data processing - DEPLOY
###Code
file_initials = ['RC', 'Vocab_1', 'Vocab_2']
def obtain_file_summary_df(file_initials, file_exists, data_path):
    df_summary = []
    for k in file_initials:
        selected_files = file_exists[file_exists['Keyword'] == k]
        selected_filenames = selected_files['Filename'].tolist()
        # collect the 'Summary' sheet of every matching file (pd.concat replaces the deprecated DataFrame.append)
        frames = [pd.read_excel(os.path.join(data_path, f), 'Summary') for f in selected_filenames]
        df = pd.concat(frames) if frames else pd.DataFrame()
        df_summary.append(df)
    return df_summary
def obtain_file_data_df(file_initials, file_exists, data_path):
    df_data = []
    for k in file_initials:
        selected_files = file_exists[file_exists['Keyword'] == k]
        selected_filenames = selected_files['Filename'].tolist()
        # collect the 'Data' sheet of every matching file (pd.concat replaces the deprecated DataFrame.append)
        frames = [pd.read_excel(os.path.join(data_path, f), 'Data') for f in selected_filenames]
        df = pd.concat(frames) if frames else pd.DataFrame()
        df_data.append(df)
    return df_data
def obtain_distinct_raters(df_summary, ref_data_cols):
r1 = df_summary[0] # Joined data for Summary sheet from RC
r2 = df_summary[1] # Joined data for Summary page from Vocab_1
r3 = df_summary[2] # Joined data for Summary page from Vocab_2
raters = pd.concat([r1,r2,r3], ignore_index=True)
raters = raters[['_worker_id', 'Grouping', 'Market', 'Language']]
raters = raters.drop_duplicates()
if len(ref_data_cols) != 0:
threshold_raters = ref_data_cols
raters = raters[raters['_worker_id'].isin(threshold_raters)]
# obtain languages from r1 and create a dataframe
languages = r1.Language.unique().tolist()
languages = pd.DataFrame(languages, columns = ['Language'])
return raters, r1, r2, r3, languages
def merge_raters_to_df_data(df_data, raters):
rc = df_data[0] # Joined data for Data sheet from RC
v1 = df_data[1] # Joined data for Data page from Vocab_1
v2 = df_data[2] # Joined data for Data page from Vocab_2
# Merge raters to v1, v2, and rc
rc = pd.merge(rc, raters, how='left', on=['_worker_id', 'Language'])
v1 = pd.merge(v1, raters, how='left', on=['_worker_id', 'Language'])
v2 = pd.merge(v2, raters, how='left', on=['_worker_id', 'Language'])
# Convert _created_at and _started_at to date-time
rc[['_created_at','_started_at']] = rc[['_created_at','_started_at']].apply(pd.to_datetime, format='%m/%d/%Y %H:%M:%S')
v1[['_created_at','_started_at']] = v1[['_created_at','_started_at']].apply(pd.to_datetime, format='%m/%d/%Y %H:%M:%S')
v2[['_created_at','_started_at']] = v2[['_created_at','_started_at']].apply(pd.to_datetime, format='%m/%d/%Y %H:%M:%S')
return rc, v1, v2
###Output
_____no_output_____
###Markdown
Functions for data processing - PILOT
###Code
def survey_selection(root_path, config):
survey_path = os.path.join(root_path, config["data_path"]["survey"])
# get the list of files in raw folder
files = os.listdir(survey_path)
files = [f for f in files if f[-5:] == '.xlsx']
survey_files = pd.DataFrame(files, columns = ['Survey Filename'])
print(survey_files)
    while True:
        try:
            survey_index = int(input("\nPlease select the number of the survey filename for your pilot run: "))
        except ValueError:
            print("\nYou must enter numerical values only... Please try again")
            continue
        if survey_index < min(survey_files.index) or survey_index > max(survey_files.index):
            print(f"\nYou must enter numbers between {min(survey_files.index)} - {max(survey_files.index)}... Please try again")
            continue
        print(f"\nYou have selected {survey_index} for '{survey_files.iloc[survey_index, 0]}'\n")
        survey_selected = survey_files.iloc[survey_index, 0]
        break
return survey_selected, survey_files
def obtain_survey_fluency(survey_data):
    # map the years-of-use survey options to fluency labels; unknown options map to ''
    fluency_map = {
        'over_15_years': 'Fluent',
        '1015_years': 'Fluent',
        '510_years': 'Intermediate',
        '03_years': 'Not Fluent',
    }
    return [fluency_map.get(opt, '') for opt in survey_data['31_language_1']]
def obtain_survey_data(survey_path, survey_selected):
survey_data = pd.read_excel(os.path.join(survey_path, survey_selected), 'Sheet1')
try:
survey_data = survey_data.drop('Unnamed: 42', axis = 1)
survey_data[['_created_at','_started_at']] = survey_data[['_created_at','_started_at']].apply(pd.to_datetime, format='%m/%d/%Y %H:%M:%S')
survey_data = survey_data.rename(columns = {"_created_at" : "survey_created_at", "_started_at" : "survey_started_at"})
survey_data = survey_data[['_worker_id', '31_language_1', 'survey_created_at', 'survey_started_at']]
survey_data['Fluency'] = obtain_survey_fluency(survey_data)
except:
survey_data = survey_data
survey_data[['_created_at','_started_at']] = survey_data[['_created_at','_started_at']].apply(pd.to_datetime, format='%m/%d/%Y %H:%M:%S')
survey_data = survey_data.rename(columns = {"_created_at" : "survey_created_at", "_started_at" : "survey_started_at"})
survey_data = survey_data[['_worker_id', '31_language_1', 'survey_created_at', 'survey_started_at']]
survey_data['Fluency'] = obtain_survey_fluency(survey_data)
return survey_data
def merge_to_survey_data(df_data, raters, survey_data):
rc = df_data[0] # Joined data for Data sheet from RC
v1 = df_data[1] # Joined data for Data page from Vocab_1
v2 = df_data[2] # Joined data for Data page from Vocab_2
# Merge raters data to v1, v2, and rc
rc = pd.merge(rc, raters, how='left', left_on=['_worker_id'], right_on=['_worker_id'])
v1 = pd.merge(v1, raters, how='left', left_on=['_worker_id'], right_on=['_worker_id'])
v2 = pd.merge(v2, raters, how='left', left_on=['_worker_id'], right_on=['_worker_id'])
# Merge raters data to v1, v2, and rc
rc = pd.merge(rc, survey_data, how='left', left_on=['_worker_id'], right_on=['_worker_id'])
v1 = pd.merge(v1, survey_data, how='left', left_on=['_worker_id'], right_on=['_worker_id'])
v2 = pd.merge(v2, survey_data, how='left', left_on=['_worker_id'], right_on=['_worker_id'])
    # Drop duplicate columns
rc = rc.drop(['Language_y', 'Market'], axis = 1)
v1 = v1.drop(['Language_y', 'Market'], axis = 1)
v2 = v2.drop(['Language_y', 'Market'], axis = 1)
rc = rc.rename(columns = {"Language_x":"Language"})
v1 = v1.rename(columns = {"Language_x":"Language"})
v2 = v2.rename(columns = {"Language_x":"Language"})
rc['Fluency'] = np.where(rc['Grouping'] == 'GT', 'GT', rc['Fluency'])
v1['Fluency'] = np.where(v1['Grouping'] == 'GT', 'GT', v1['Fluency'])
v2['Fluency'] = np.where(v2['Grouping'] == 'GT', 'GT', v2['Fluency'])
rc['Fluency'] = np.where(rc['Fluency'].isna(), 'Fluent', rc['Fluency'])
v1['Fluency'] = np.where(v1['Fluency'].isna(), 'Fluent', v1['Fluency'])
v2['Fluency'] = np.where(v2['Fluency'].isna(), 'Fluent', v2['Fluency'])
# Convert _created_at and _started_at to date-time
rc[['_created_at','_started_at']] = rc[['_created_at','_started_at']].apply(pd.to_datetime, format='%m/%d/%Y %H:%M:%S')
v1[['_created_at','_started_at']] = v1[['_created_at','_started_at']].apply(pd.to_datetime, format='%m/%d/%Y %H:%M:%S')
v2[['_created_at','_started_at']] = v2[['_created_at','_started_at']].apply(pd.to_datetime, format='%m/%d/%Y %H:%M:%S')
return rc, v1, v2
def main():
file_initials = ['RC', 'Vocab_1', 'Vocab_2']
language, market, run_value, run_value_2 = data_cleaning.main()
data_path, files, languages, file_groups, file_exists, ref_data_cols, survey_path = data_ingestion_initialize(root_path, run_value, run_value_2)
df_summary = obtain_file_summary_df(file_initials, file_exists, data_path)
df_data = obtain_file_data_df(file_initials, file_exists, data_path)
raters, r1, r2, r3, languages = obtain_distinct_raters(df_summary, ref_data_cols)
    survey_selected, survey_files = None, None  # only set on the pilot path; avoids a NameError at the return below
    if run_value == 'Deployment':
rc, v1, v2 = merge_raters_to_df_data(df_data, raters)
else:
survey_selected, survey_files = survey_selection(root_path, config)
survey_data = obtain_survey_data(survey_path, survey_selected)
rc, v1, v2 = merge_to_survey_data(df_data, raters, survey_data)
return raters, r1, r2, r3, languages, rc, v1, v2, run_value, run_value_2, survey_selected, survey_files
if __name__ == "__main__":
raters, r1, r2, r3, languages, rc, v1, v2, run_value, run_value_2, survey_selected, survey_files = main()
print(languages)
print('\nAutomated data processing completed.')
###Output
Initialize data ingestion and file checking...
PASS: All files exists!
|
examples/visualization/plot_parallel_coordinate.ipynb | ###Markdown
Visualizing High-dimensional Parameter Relationships in Jupyter Notebook

This notebook demonstrates a visualization utility of Optuna. After optimizing the hyperparameters of neural networks, `plot_parallel_coordinate()` plots high-dimensional parameter relationships in a study.

**Note:** If a parameter contains missing values, a trial with missing values is not plotted.

Setting up MNIST Dataset
###Code
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
mnist = fetch_openml('mnist_784', version=1)
classes = list(set(mnist.target))
x_train, x_test, y_train, y_test = train_test_split(mnist.data, mnist.target)
###Output
_____no_output_____
###Markdown
Defining Objective Function
###Code
from sklearn.neural_network import MLPClassifier
def objective(trial):
layers = []
n_layers = trial.suggest_int('n_layers', 1, 4)
for i in range(n_layers):
layers.append(trial.suggest_int('n_units_l{}'.format(i), 1, 128))
clf = MLPClassifier(hidden_layer_sizes=tuple(layers))
for step in range(100):
clf.partial_fit(x_train, y_train, classes=classes)
        intermediate_value = clf.score(x_test, y_test)  # Report the intermediate value in the same direction as the maximized objective, so the pruner ranks trials consistently.
trial.report(intermediate_value, step)
if trial.should_prune(step):
raise optuna.structs.TrialPruned() # Handle pruning based on the intermediate value.
return clf.score(x_test, y_test)
###Output
_____no_output_____
###Markdown
Running Optimization
###Code
import optuna
optuna.logging.set_verbosity(optuna.logging.WARNING) # This verbosity change is just to simplify the notebook output.
study = optuna.create_study(
direction='maximize',
pruner=optuna.pruners.SuccessiveHalvingPruner()
)
study.optimize(objective, n_trials=25)
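
# Editor's aside (optional): before plotting, the best trial can be inspected with
# print(study.best_value, study.best_params)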
###Output
_____no_output_____
###Markdown
High-dimensional Parameter Relationships of Study
###Code
from optuna.visualization import plot_parallel_coordinate
plot_parallel_coordinate(study)
###Output
_____no_output_____
###Markdown
Select parameters to Visualize
###Code
plot_parallel_coordinate(study, params=["n_layers", "n_units_l1"])
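
# Editor's note (assuming the default plotly backend): the returned figure can be
# saved for sharing, e.g.
# plot_parallel_coordinate(study).write_html("parallel_coordinate.html")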
###Output
_____no_output_____ |
doc/nb/XShooter_UVB.ipynb | ###Markdown
XShooter UVB Wavelengths
###Code
# imports
import os
from astropy.io import fits
###Output
_____no_output_____
###Markdown
Load up
###Code
path = '/scratch/REDUX/VLT/XShooter/UVB/Wavecal'
###Output
_____no_output_____
###Markdown
merge2d
###Code
merge2d_file = os.path.join(path, 'CAL_SLIT_MERGE2D_UVB.fits')
merge2d = fits.open(merge2d_file)
merge2d.info()
head0_merge2d = merge2d[0].header
head0_merge2d
###Output
_____no_output_____
###Markdown
coadd2d
###Code
coadd2d_file = os.path.join(path, 'CAL_SLIT_ORDER2D_UVB.fits')
coadd2d = fits.open(coadd2d_file)
coadd2d.info()
###Output
Filename: /scratch/REDUX/VLT/XShooter/UVB/Wavecal/CAL_SLIT_ORDER2D_UVB.fits
No. Name Ver Type Cards Dimensions Format
0 ORD13_FLUX 1 PrimaryHDU 487 (454, 73) float32
1 ORD13_ERRS 1 ImageHDU 33 (454, 73) float32
2 ORD13_QUAL 1 ImageHDU 33 (454, 73) float32
3 ORD14_FLUX 1 ImageHDU 348 (1780, 73) float32
4 ORD14_ERRS 1 ImageHDU 34 (1780, 73) float32
5 ORD14_QUAL 1 ImageHDU 34 (1780, 73) float32
6 ORD15_FLUX 1 ImageHDU 348 (1667, 73) float32
7 ORD15_ERRS 1 ImageHDU 34 (1667, 73) float32
8 ORD15_QUAL 1 ImageHDU 34 (1667, 73) float32
9 ORD16_FLUX 1 ImageHDU 348 (1562, 73) float32
10 ORD16_ERRS 1 ImageHDU 34 (1562, 73) float32
11 ORD16_QUAL 1 ImageHDU 34 (1562, 73) float32
12 ORD17_FLUX 1 ImageHDU 348 (1447, 73) float32
13 ORD17_ERRS 1 ImageHDU 34 (1447, 73) float32
14 ORD17_QUAL 1 ImageHDU 34 (1447, 73) float32
15 ORD18_FLUX 1 ImageHDU 348 (1350, 73) float32
16 ORD18_ERRS 1 ImageHDU 34 (1350, 73) float32
17 ORD18_QUAL 1 ImageHDU 34 (1350, 73) float32
18 ORD19_FLUX 1 ImageHDU 348 (1250, 73) float32
19 ORD19_ERRS 1 ImageHDU 34 (1250, 73) float32
20 ORD19_QUAL 1 ImageHDU 34 (1250, 73) float32
21 ORD20_FLUX 1 ImageHDU 348 (1164, 73) float32
22 ORD20_ERRS 1 ImageHDU 34 (1164, 73) float32
23 ORD20_QUAL 1 ImageHDU 34 (1164, 73) float32
24 ORD21_FLUX 1 ImageHDU 348 (1096, 73) float32
25 ORD21_ERRS 1 ImageHDU 34 (1096, 73) float32
26 ORD21_QUAL 1 ImageHDU 34 (1096, 73) float32
27 ORD22_FLUX 1 ImageHDU 348 (1015, 73) float32
28 ORD22_ERRS 1 ImageHDU 34 (1015, 73) float32
29 ORD22_QUAL 1 ImageHDU 34 (1015, 73) float32
30 ORD23_FLUX 1 ImageHDU 348 (939, 73) float32
31 ORD23_ERRS 1 ImageHDU 34 (939, 73) float32
32 ORD23_QUAL 1 ImageHDU 34 (939, 73) float32
33 ORD24_FLUX 1 ImageHDU 348 (844, 73) float32
34 ORD24_ERRS 1 ImageHDU 34 (844, 73) float32
35 ORD24_QUAL 1 ImageHDU 34 (844, 73) float32
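###Markdown
Aside (editor's sketch, not from the original notebook): if an order extension carries a linear wavelength WCS, its grid can be rebuilt from standard header keywords. The keyword names below are an assumption about these products, hence the guard.
###Code
import numpy as np
hdr13 = coadd2d['ORD13_FLUX'].header
if all(k in hdr13 for k in ('CRVAL1', 'CDELT1', 'NAXIS1')):
    wave13 = hdr13['CRVAL1'] + hdr13['CDELT1'] * np.arange(hdr13['NAXIS1'])
    print(wave13[:5])
else:
    print('No linear wavelength WCS found in ORD13_FLUX')
###Output
_____no_output_____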
|
examples/notebook-examples/data/pytorch/pytorch_yolov5.ipynb | ###Markdown
PyTorch & Pre-trained YOLOv5

Welcome to PrimeHub! In this quickstart, we will show you how to use a pre-trained YOLOv5 model for Object Detection.

Make sure the requirements are installed
###Code
!pip install -qr https://raw.githubusercontent.com/ultralytics/yolov5/master/requirements.txt
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import os
import torch
from IPython.core.display import Image, display
###Output
_____no_output_____
###Markdown
Load Pre-trained Model from PyTorch Hub
###Code
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
###Output
_____no_output_____
###Markdown
Load Image and Get the Object Detection Result
###Code
# Load images from COCO dataset
imgs = [
'https://farm7.staticflickr.com/6179/6269442280_4766a3a534_z.jpg',
'http://farm4.staticflickr.com/3052/2749731432_f5be57f30e_z.jpg',
'http://farm3.staticflickr.com/2785/4484897643_8d6604cf12_z.jpg',
'http://farm1.staticflickr.com/50/150324104_97d2122a44_z.jpg',
'http://farm7.staticflickr.com/6148/5951052391_232853ce8b_z.jpg'
] # batch of images
print("Image Inputs:")
for i in imgs:
display(Image(url=i, width=600, unconfined=True))
# Inference
print("Result:")
results = model(imgs)
# Result
resultPath = './inferences'
! rm -rf ./inferences/*
results.print()
results.save(resultPath)
print("Image outputs:")
for f in os.listdir(resultPath):
display(Image(filename=os.path.join(resultPath,f), width=600, unconfined=True))
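
# Editor's aside (hypothetical addition, not part of the original quickstart): the hub
# results object can also be inspected as a table; results.pandas().xyxy[i] is a
# DataFrame of detections (xmin, ymin, xmax, ymax, confidence, class, name) per image.
print(results.pandas().xyxy[0].head())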
###Output
_____no_output_____ |
01 Neural Networks and Deep Learning/C1W2/C1W2 Logistic Regression with a Neural Network mindset.ipynb | ###Markdown
---

>Disclaimer:
>I did not pay for and do not own any of this.
>All of the material was found on YouTube or in other GitHub projects - including the code solutions.
>
>YouTube resources:
>- C1W1L1 ~ C1W2L18 - https://www.youtube.com/watch?v=CS4cs9xVecg&list=PLkDaE6sCZn6Ec-XTbcX1uRg2_u4xOEky0&index=1
>- Lecture 1 - Class Introduction and Logistics - https://www.youtube.com/watch?v=PySo_6S4ZAg&list=PLoROMvodv4rOABXSygHTsbvUz4G_YQhOb
>- 3b1b AI series - https://www.youtube.com/watch?v=aircAruvnKk&list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi
>  - (in this C1W2 exercise, we work on a single-layered network of a single neuron)
>
>GitHub project resources:
>- https://github.com/Kulbear/deep-learning-coursera (main notebook + solutions)
>- https://github.com/andersy005/deep-learning-specialization-coursera (datasets)
>
>What I did:
>Learnt about this subject by re-implementing the solutions in another language (Rust - https://github.com/google/evcxr/tree/master/evcxr_jupyter).
>Some aspects of the solutions are different (longer, slower) because this is an opportunity (for me) to better understand the concepts.
>For this reason, I sometimes changed terms, names or relations from the original text so that they better correspond to my actual re-implementation.
>(To sum it up, this is an "exploratory exercise", not a "serious implementation")

---

Logistic Regression with a Neural Network mindset

Welcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your intuitions about deep learning.

**Instructions:**
- Do not use loops (for/while) in your code, unless the instructions explicitly ask you to do so.

**You will learn to:**
- Build the general architecture of a learning algorithm, including:
    - Initializing parameters
    - Calculating the cost function and its gradient
    - Using an optimization algorithm (gradient descent)
- Gather all three functions above into a main model function, in the right order.

0 - Optimization

Use :help to see the commands available in evcxr.
Before any code is executed, use :opt to set the optimization level (run it until it's set to 2 = speed).
###Code
:opt
:opt
###Output
_____no_output_____
###Markdown
1 - Packages

First, let's run the cell below to import all the packages that you will need during this assignment.
###Code
// Cargo.toml - fine configuration of dependencies (and their versions)
// https://github.com/google/evcxr (version = "0.3.3" )
:dep base64 = "0.10.1"
// # pacman -S hdf5
:dep hdf5 = "0.5.1"
// n-dimensional arrays, with serde (de/serialization) and rayon (parallelization) support
:dep ndarray = { version = "0.12", features = ["serde-1"] }
// https://docs.rs/ndarray/0.12.1/ndarray/doc/ndarray_for_numpy_users/index.html
// https://docs.rs/ndarray/0.12.1/ndarray/struct.ArrayBase.html
// image manipulation and display
:dep image = "0.20.1"
:dep evcxr_image = "1.0.0"
// evcxr_image depends on image ^0.20.0
// has useful numeric traits such as "has a One value", "has a Zero value", etc
:dep num-traits = "0.2.6"
"done" // this takes ~ 5 minutes to compile
// external and scoped libs
extern crate base64;
extern crate hdf5;
extern crate ndarray;
extern crate image;
extern crate evcxr_image;
extern crate num_traits;
"done"
###Output
_____no_output_____
###Markdown
---

2 - Overview of the Problem set

**Problem Statement**: You are given a dataset ("datasets/train_catvnoncat.h5") containing:
- a training set of m_train images labeled as cat (y=1) or non-cat (y=0)
- a test set of m_test images labeled as cat or non-cat
- each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Thus, each image is square (height = num_px) and (width = num_px).

You will build a simple image-recognition algorithm that can correctly classify pictures as cat or non-cat.

Let's get more familiar with the dataset. Load the data by running the following code.
###Code
// scope import
use ndarray::prelude::*;
"done"
// structures definition and conversion implementations
// this is optional Rust stuff that I decided to try, to see how much overhead it would cause
// (instead of working with u8 data directly, I decided to convert triples into a Color structure)
#[derive(hdf5::H5Type, Clone, PartialEq, Debug)]
#[repr(C)]
pub struct Color {
pub red: u8,
pub green: u8,
pub blue: u8,
}
#[derive(hdf5::H5Type, Clone, PartialEq, Debug)]
#[repr(u8)]
pub enum Class {
NotCat = 0,
Cat = 1,
}
impl From<&[u8]> for Color {
fn from(rgb: &[u8]) -> Self {
match rgb {
&[red, green, blue] => Color {red, green, blue},
otherwise => panic!("incorrect color u8 len {}, expected {} (rgb)",
otherwise.len(),
3
),
}
}
}
impl From<ArrayView1<'_, u8>> for Color {
fn from(rgb: ArrayView1<'_, u8>) -> Self {
match rgb.as_slice()
.or_else(|| rgb.as_slice_memory_order())
.or_else(|| rgb.into_slice()) {
Some(slice) => Color::from(slice),
None => {
// extra allocation is needed
// because memory layout wasn't contiguous
// or wasn't standard
let rgb = rgb.to_vec();
Color::from(rgb.as_slice())
}
}
}
}
impl From<(u8, u8, u8)> for Color {
fn from((red, green, blue): (u8, u8, u8)) -> Self {
Color { red, green, blue }
}
}
impl From<&u8> for Class {
fn from(may_cat: &u8) -> Self {
match may_cat {
0 => Class::NotCat,
1 => Class::Cat,
other => panic!("unexpected Cat label {}: should be 0 or 1", other),
}
}
}
"done"
let (train_x, train_y) = {
let file = hdf5::File::open("datasets/train_catvnoncat.h5", "r")
.unwrap();
let x = file.dataset("train_set_x")
.unwrap()
.read::<u8, Ix4>() // shape = [209, 64, 64, 3], element = u8
.unwrap()
.lanes(Axis(3))
.into_iter()
.map(Color::from)
.collect::<Vec<Color>>();
let x = ArrayView::<Color, _>
::from_shape((209, 64, 64), &x) // shape = [209, 64, 64], element = Color
.unwrap();
let y = file.dataset("train_set_y")
.unwrap()
.read::<u8, Ix1>() // shape = [209], element = u8
.unwrap()
.into_iter()
.map(Class::from)
.collect::<Vec<Class>>();
let y = ArrayView::<Class, _>
::from_shape((209,), &y) // shape = [209], element = Class
.unwrap();
(x.to_owned(), y.to_owned())
};
let (test_x, test_y) = {
let file = hdf5::File::open("datasets/test_catvnoncat.h5", "r")
.unwrap();
let x = file.dataset("test_set_x")
.unwrap()
.read::<u8, Ix4>() // shape = [50, 64, 64, 3], element = u8
.unwrap()
.lanes(Axis(3))
.into_iter()
.map(Color::from)
.collect::<Vec<Color>>();
let x = ArrayView::<Color, _>
::from_shape((50, 64, 64), &x) // shape = [50, 64, 64], element = Color
.unwrap();
let y = file.dataset("test_set_y")
.unwrap()
.read::<u8, Ix1>() // shape = [50], element = u8
.unwrap()
.into_iter()
.map(Class::from)
.collect::<Vec<Class>>();
let y = ArrayView::<Class, _>
::from_shape((50,), &y) // shape = [50], element = Class
.unwrap();
(x.to_owned(), y.to_owned())
};
"done"
###Output
_____no_output_____
###Markdown
After preprocessing, we will end up with train_x/y and test_x/y. Each line of your train_x and test_x is an array representing an image. You can visualize an example by running the following code. Feel free also to change the `index` value and re-run to see other images.
###Code
// scoped traits/structs/etc
// <image::RgbImage>::evcxr_display() for image displaying
use evcxr_image::ImageDisplay;
"done"
// one image display
let index = 25;
let (width, height) = (64, 64);
let img_subview = train_x.index_axis(Axis(0), index).to_owned();
use ndarray::s; // slice macro
// test image displaying
let img = image::ImageBuffer::from_fn(width, height, |x, y| {
let color = img_subview.slice(s![y as usize, x as usize]).to_owned(); // shape: [] (zero-dimension), element = Color.
// so this is not an actual array, but a single element. But we treat as an array anyway and get the "first" element.
let color = &color.as_slice().unwrap()[0];
image::Rgb([color.red, color.green, color.blue])
});
img.evcxr_display() // the only output on the cell must be a single image
let index_y = train_y
.index_axis(Axis(0), index)
.as_slice()
.unwrap()[0]
.clone();
println!("index {} is a {:?} image", index, index_y.clone());
match index_y {
Class::Cat => "done nyaa!",
Class::NotCat => "done",
}
###Output
index 25 is a Cat image
###Markdown
---

Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs.

**Exercise:** Find the values for:
- m_train (number of training examples)
- m_test (number of test examples)
- num_px (= height = width of a training image)

Remember that `train_set_x_orig` is a numpy-array of shape (m_train, num_px, num_px, 3). For instance, you can access `m_train` by writing `train_set_x_orig.shape[0]`. (In this Rust re-implementation the equivalent is `train_x`, of shape (m_train, num_px, num_px) with `Color` elements, so `m_train` is `train_x.shape()[0]`.)
###Code
let m_train = train_y.shape()[0];
let m_test = test_y.shape()[0];
let num_px = train_x.shape()[1];
"done"
println!("Number of training examples: m_train = {}", m_train);
println!("Number of testing examples: m_test = {}", m_test);
println!("Height/Width of each image: num_px = {}", num_px);
println!("Each image is of size: ({}, {}, {}) (as Colors)", num_px, num_px, 1);
println!("Each image is of size: ({}, {}, {}) (as bytes)", num_px, num_px, <Color as hdf5::H5Type>::type_descriptor().size());
println!("train_x shape: {:?}", train_x.shape());
println!("train_y shape: {:?}", train_y.shape());
println!("test_x shape: {:?}", test_x.shape());
println!("test_y shape: {:?}", test_y.shape());
"done"
###Output
Number of training examples: m_train = 209
Number of testing examples: m_test = 50
Height/Width of each image: num_px = 64
Each image is of size: (64, 64, 1) (as Colors)
Each image is of size: (64, 64, 3) (as bytes)
train_x shape: [209, 64, 64]
train_y shape: [209]
test_x shape: [50, 64, 64]
test_y shape: [50]
###Markdown
---

For convenience, you should now reshape images of shape (height, width) (of Color elements) into a shape (height * width * 3) (of f32 elements). After this, our training (and test) dataset is a ndarray where each column represents a flattened image. Then there should be m_train (respectively m_test) columns - each for a different train set.

**Exercise:** Reshape the training/test data sets so that images of size (num_set, height, width) (of Color elements) are flattened into single vectors of shape (height * width * 3, num_set) (of f32 elements).

A trick when you want to flatten a matrix X of shape (a,b,c,d) to a matrix X_flatten of shape (b * c * d, a) is to view the matrix through the `a` axis - as (b, c, d, a) (of Color) - and then reshape it from a normal iteration.

> (I anticipated the u8 -> f32 conversion for color information)

To represent color images, the red, green and blue channels (RGB) must be specified for each pixel, and so the pixel value is actually a vector of three numbers ranging from 0 to 255.

One common preprocessing step in machine learning is to center and standardize your dataset, meaning that you subtract the mean of the whole numpy array from each example, and then divide each example by the standard deviation of the whole numpy array. But for picture datasets, it is simpler and more convenient and works almost as well to just divide every row of the dataset by 255 (the maximum value of a pixel channel). Let's standardize our dataset.
###Code
let (train_x_flat, test_x_flat) = {
let train = train_x // shape = [209, 64, 64], element = Color
.lanes(Axis(0))
.into_iter()
.flat_map(|sets_color| {
let (reds, greens, blues) = sets_color
.iter()
.map(|color| (
color.red.clone() as f32 / 255.0,
color.green.clone() as f32 / 255.0,
color.blue.clone() as f32 / 255.0,
))
.fold((Vec::new(), Vec::new(), Vec::new()),
|(mut reds, mut greens, mut blues),
(red, green, blue)| {
reds.push(red);
greens.push(green);
blues.push(blue);
(reds, greens, blues)
});
let mut serial = reds;
serial.extend(greens);
serial.extend(blues);
serial
})
.collect::<Vec<f32>>(); // len = 209 * 64 * 64 * 3 = 2568192
// layout per pixel: one row of red_0, ..., red_208, then a row of green_0, ..., green_208, then a row of blue_0, ..., blue_208
// each row holds one channel of one pixel position across all images
let train = ArrayView::<f32, _>
::from_shape((64 * 64 * 3, 209), &train) // shape = [64 * 64 * 3, 209], element = f32 (0.0~1.0)
.unwrap();
let test = test_x // shape = [50, 64, 64], element = Color
.lanes(Axis(0))
.into_iter()
// iterating over 1 pixel color from each test
.flat_map(|sets_color| {
let (reds, greens, blues) = sets_color
.iter()
.map(|color| (
color.red.clone() as f32 / 255.0,
color.green.clone() as f32 / 255.0,
color.blue.clone() as f32 / 255.0,
))
.fold((Vec::new(), Vec::new(), Vec::new()),
|(mut reds, mut greens, mut blues),
(red, green, blue)| {
reds.push(red);
greens.push(green);
blues.push(blue);
(reds, greens, blues)
});
let mut serial = reds;
serial.extend(greens);
serial.extend(blues);
serial
})
.collect::<Vec<f32>>(); // len = 50 * 64 * 64 * 3 = 614400
// layout per pixel: one row of red_0, ..., red_49, then a row of green_0, ..., green_49, then a row of blue_0, ..., blue_49
// each row holds one channel of one pixel position across all images
let test = ArrayView::<f32, _>
::from_shape((64 * 64 * 3, 50), &test) // shape = [64 * 64 * 3, 50], element = f32 (0.0~1.0)
.unwrap();
(train.to_owned(), test.to_owned())
};
"done"
// outputs (y) are structured in enum (Class), but they should be transformed back to a "rawer" data (such as u8)
// outputs are also one dimensional, but the algo requires that they are a 2D Matrix
// shape: [209], element = Class
let train_y = train_y.map(|class| match class {
Class::Cat => 1u8,
Class::NotCat => 0u8,
});
let train_y = train_y.insert_axis(Axis(0)).to_owned(); // shape: [1, 209], element = u8
// shape: [50], element = Class
let test_y = test_y.map(|class| match class {
Class::Cat => 1u8,
Class::NotCat => 0u8,
});
let test_y = test_y.insert_axis(Axis(0)).to_owned(); // shape: [1, 50], element = u8
// so now both inputs (x) and outputs (y) are 2D matrices with the same number of columns
"done"
println!("train_x_flat shape: {:?}", train_x_flat.shape());
println!("train_y shape: {:?}", train_y.shape());
println!("test_x_flat shape: {:?}", test_x_flat.shape());
println!("test_y shape: {:?}", test_y.shape());
println!("sanity check after reshaping: {:?}", train_x_flat.slice(s![0..5, 0]));
"done"
###Output
train_x_flat shape: [12288, 209]
train_y shape: [1, 209]
test_x_flat shape: [12288, 50]
test_y shape: [1, 50]
sanity check after reshaping: [0.06666667, 0.12156863, 0.21960784, 0.08627451, 0.12941177] shape=[5], strides=[209], layout=Custom (0x0), const ndim=1
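###Markdown
The text above also describes the more general mean/standard-deviation standardization. A minimal sketch of how that could look on the flattened data with ndarray (my own illustration, including a small epsilon guard against constant features; the rest of the assignment keeps using the /255 scaling):
###Code
// per-feature mean over the 209 example columns
let m = train_x_flat.shape()[1] as f32;
let mean = train_x_flat.sum_axis(Axis(1)) / m; // shape = [12288]
// center: broadcast the (12288, 1) mean column over all examples
let centered = &train_x_flat - &mean.insert_axis(Axis(1));
// per-feature standard deviation, with an epsilon guard against division by zero
let std = (centered.mapv(|v| v * v).sum_axis(Axis(1)) / m)
.mapv(|v| f32::sqrt(v) + 1e-8);
let standardized = &centered / &std.insert_axis(Axis(1));
println!("standardized shape: {:?}", standardized.shape());
"done"
###Output
_____no_output_____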
###Markdown
What you need to remember:Common steps for pre-processing a new dataset are:- Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...)- Reshape the datasets such that each example is now a vector of size (num_px \* num_px \* 3, 1)- "Standardize" the data --- 3 - General Architecture of the learning algorithm It's time to design a simple algorithm to distinguish cat images from non-cat images.You will build a Logistic Regression, using a Neural Network mindset. The following Figure explains why **Logistic Regression is actually a very simple Neural Network!****Mathematical expression of the algorithm**:For one example $x^{(i)}$:$$z^{(i)} = w^T x^{(i)} + b \tag{1}$$$$\hat{y}^{(i)} = a^{(i)} = sigmoid(z^{(i)})\tag{2}$$ $$ \mathcal{L}(a^{(i)}, y^{(i)}) = - y^{(i)} \log(a^{(i)}) - (1-y^{(i)} ) \log(1-a^{(i)})\tag{3}$$The cost is then computed by summing over all training examples:$$ J = \frac{1}{m} \sum_{i=1}^m \mathcal{L}(a^{(i)}, y^{(i)})\tag{6}$$**Key steps**:In this exercise, you will carry out the following steps: - Initialize the parameters of the model - Learn the parameters for the model by minimizing the cost - Use the learned parameters to make predictions (on the test set) - Analyse the results and conclude--- 4 - Building the parts of our algorithm The main steps for building a Neural Network are:1. Define the model structure (such as number of input features) 2. Initialize the model's parameters3. Loop: - Calculate current loss (forward propagation) - Calculate current gradient (backward propagation) - Update parameters (gradient descent)You often build 1-3 separately and integrate them into one function we call `model()`. 4.1 - Helper functions**Exercise**: Using your code from "Python Basics", implement `sigmoid()`. As you've seen in the figure above, you need to compute $sigmoid( w^T x + b)$ to make predictions.
###Code
/// Has `e^self` method.
pub trait Exp {
/// The output of the computation.
type Output;
/// Computes `e^self`.
fn exp(self) -> Self::Output;
}
impl Exp for f32 {
type Output = Self;
fn exp(self) -> Self::Output {
Self::exp(self) // calls f32::exp
}
}
impl Exp for f64 {
type Output = Self;
fn exp(self) -> Self::Output {
Self::exp(self) // calls f64::exp
}
}
use num_traits::{One, Zero, Inv};
use std::ops;
/// Has `1 / (1 + e^(-self))` method.
pub trait Sigmoid {
/// calculates `1 / (1 + e^(-self))`.
fn sigmoid(self) -> Self;
}
// implements for all types that meet the requirements.
// note that the output of `-self` does not need to be of type Self
// similarly, the output of e^(-self) does not need to be of type Self
// similarly, the output of (e^(-self) + 1) does not need to be of type Self
// and finally, the output of (1 / (e^(-self) + 1)) DOES NEED to be of type Self
impl<T> Sigmoid for T
where T: Sized + ops::Neg,
<Self as ops::Neg>
::Output: Exp,
<<Self as ops::Neg>
::Output as Exp>
::Output: One + ops::Add,
<<<Self as ops::Neg>
::Output as Exp>
::Output as ops::Add>
::Output: Inv<Output = Self>,
{
fn sigmoid(self) -> Self {
let neg: <Self as ops::Neg>
::Output
= ops::Neg::neg(self);
let exp: <<Self as ops::Neg>
::Output as Exp>
::Output
= Exp::exp(neg);
let add: <<<Self as ops::Neg>
::Output as Exp>
::Output as ops::Add>
::Output
= exp + One::one();
add.inv()
}
}
"done"
println!("sigmoid(0.0) = (f32) {}", 0f32.sigmoid());
println!("sigmoid(0.0) = (f64) {}", 0f64.sigmoid());
println!("sigmoid(9.2) = (f32) {}", f32::sigmoid(9.2));
println!("sigmoid(9.2) = (f64) {}", f64::sigmoid(9.2));
println!("---");
println!("sigmoid(+16) = (f32) {}", f32::sigmoid(16.0));
println!("sigmoid(+16) = (f64) {}", f64::sigmoid(16.0));
println!("sigmoid(+17) = (f32) {}", f32::sigmoid(17.0)); // around +17 = maximum f32 positive range
println!("sigmoid(+17) = (f64) {}", f64::sigmoid(17.0));
println!("---");
println!("sigmoid(-88) = (f32) {}", Sigmoid::sigmoid(-88f32));
println!("sigmoid(-88) = (f64) {}", Sigmoid::sigmoid(-88f64));
println!("sigmoid(-89) = (f32) {}", Sigmoid::sigmoid(-89f32)); // around -89 = maximum f32 negative range
println!("sigmoid(-89) = (f64) {}", Sigmoid::sigmoid(-89f64));
"done"
###Output
sigmoid(0.0) = (f32) 0.5
sigmoid(0.0) = (f64) 0.5
sigmoid(9.2) = (f32) 0.9998989
sigmoid(9.2) = (f64) 0.9998989708060922
---
sigmoid(+16) = (f32) 0.9999999
sigmoid(+16) = (f64) 0.9999998874648379
sigmoid(+17) = (f32) 1
sigmoid(+17) = (f64) 0.9999999586006244
---
sigmoid(-88) = (f32) 0.000000000000000000000000000000000000006054601
sigmoid(-88) = (f64) 0.000000000000000000000000000000000000006054601895401186
sigmoid(-89) = (f32) 0
sigmoid(-89) = (f64) 0.0000000000000000000000000000000000000022273635617957434
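###Markdown
The saturation thresholds probed above follow directly from f32 limits; a quick check makes this visible (a sketch whose printed values are not captured here):
###Code
// e^x overflows f32 once x exceeds ln(f32::MAX) ≈ 88.72, so sigmoid(-89)
// evaluates 1 / (1 + inf) = 0 in f32
println!("ln(f32::MAX) = {}", std::f32::MAX.ln());
// e^-17 is smaller than half of f32's epsilon, so 1 + e^-17 rounds to
// exactly 1.0 and sigmoid(17) saturates to 1 in f32
println!("e^-17 = {}, f32 epsilon = {}", (-17f32).exp(), std::f32::EPSILON);
"done"
###Output
_____no_output_____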
###Markdown
4.2 - Initializing parameters**Exercise:** Implement parameter initialization in the cell below. You have to initialize w as a vector of zeros. If you don't know what numpy function to use, look up np.zeros() in the Numpy library's documentation.
###Code
/// initialize `w` and `b`.
/// `w` is a 2D array with `dim` rows and 1 col.
/// `dim` is the number of features.
pub fn initialize_with_zeros<T>(dim: usize) -> (Array2<T>, T)
where T: Clone + Zero
{
let w = Array2::<T>::zeros((dim, 1));
let b = T::zero();
(w, b)
}
"done"
let dim = 2;
let (w, b) = initialize_with_zeros::<f32>(dim);
println!("w = {:?}", w);
println!("b = {:?}", b);
"done"
###Output
w = [[0.0],
[0.0]] shape=[2, 1], strides=[1, 1], layout=C (0x1), const ndim=2
b = 0.0
###Markdown
For image inputs, w will be of shape (num_px * num_px * 3, 1). 4.3 - Forward and Backward propagationNow that your parameters are initialized, you can do the "forward" and "backward" propagation steps for learning the parameters.**Exercise:** Implement a function `propagate()` that computes the cost function and its gradient.**Hints**:Forward Propagation:- You get X- You compute $A = \sigma(w^T X + b) = (a^{(1)}, a^{(2)}, ..., a^{(m)})$- You calculate the cost function: $J = -\frac{1}{m}\sum_{i=1}^{m}y^{(i)}\log(a^{(i)})+(1-y^{(i)})\log(1-a^{(i)})$Here are the two formulas you will be using: $$ \frac{\partial J}{\partial w} = \frac{1}{m}X(A-Y)^T\tag{7}$$$$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^m (a^{(i)}-y^{(i)})\tag{8}$$
###Code
/// Implement the cost function and its gradient for the propagation
/// explained above.
///
/// ## Parameters
///
/// - `w` is the weight, a 2D array (num_features, 1).
/// - `b` is the bias, a scalar.
/// - `x` is the data, a 2D array (num_features, num_cases).
/// - `y` is the "label" vector, a 2D array (1, num_cases).
/// - For the cat example, 0 means "non-cat", 1 means "cat".
///
/// ## Return
///
/// - `dw` is the gradient of the loss with respect to w, thus same shape
/// as w.
/// - `db` is the gradient of the loss with respect to b, thus same shape
/// as b.
/// - `cost` is the negative log-likelihood cost for logistic regression.
pub fn propagate(w: ArrayView2<f32>, b: f32, x: ArrayView2<f32>, y: ArrayView2<u8>) -> ((Array2<f32>, f32), f32) {
let (num_features, num_cases) = x.dim();
// assert shapes' coherence
{
let (num_features_w, one) = w.dim();
assert_eq!(one, 1);
assert_eq!(num_features, num_features_w);
let (one, num_cases_y) = y.dim();
assert_eq!(one, 1);
assert_eq!(num_cases, num_cases_y);
}
// temporarily converts y elements (u8) into f32
let y = y.mapv(f32::from);
// forward propagation (from x to cost)
let (a, cost): (Array2<f32>, f32) = {
// activation
// 𝐴 = 𝜎(𝑤^𝑇 𝑋 + 𝑏) = (𝑎(0), 𝑎(1),..., 𝑎(𝑚−1), 𝑎(𝑚))
let a: Array2<f32> = {
// 𝑧(𝑖) = 𝑤^𝑇 𝑥(𝑖) + 𝑏
let z =
// w has shape (num_features, 1)
w
// transposes into the shape (1, num_features)
.t()
// ((1, num_features) dot (num_features, num_cases))
//
// from https://docs.rs/ndarray/0.12.1/ndarray/struct.ArrayBase.html#method.dot
//
// If Rhs is two-dimensional,
// then the operation is matrix multiplication,
// where self is treated as a row vector.
// In this case, if self is shape (num_features)
// - which is for our case, (1, num_features) -,
// then rhs is shape (num_features, num_cases)
// and the result is shape (num_cases).
//
// for our case, results in shape (1, num_cases).
.dot(&x)
// vector-like 2Darray + scalar
//
// from https://docs.rs/ndarray/0.12.1/ndarray/struct.ArrayBase.html#impl-Add%3CB%3E
//
// Perform elementwise addition between self
// and the scalar x,
// and return the result (based on self).
//
// so the shape is maintained at (1, num_cases).
+ b;
// 𝑦̂(𝑖) = 𝑎(𝑖) = 𝑠𝑖𝑔𝑚𝑜𝑖𝑑 ( 𝑧(𝑖) )
// for each element, replaces it with a function of itself
// eg. [e1, e2, e3] becomes [f(e1), f(e2), f(e3)]
//
// the shape is maintained at (1, num_cases).
z.mapv(Sigmoid::sigmoid)
};
assert_eq!((1, num_cases), a.dim());
// 𝐽 = −1/𝑚
// (∑[𝑖=1;m]
// 𝑦(𝑖) log(𝑎(𝑖))
// + (1−𝑦(𝑖)) log(1−𝑎(𝑖))
// )
let cost: f32 =
// −1/𝑚 (...)
//
// scalar
(- 1.0 / num_cases as f32)
// multiplication of scalars
*
// since num_cases elements were added in the sum,
// this multiplication represents "taking the average"
// from those elements (since it divides by its length)
// −1/𝑚 ∑[𝑖=1;m] (...)
//
// element-wise cumulative addition of the elements
// results in a scalar
Array::sum(
// after the addition, results in
// (1, num_cases)
&(
// −1/𝑚 ∑[𝑖=1;m] 𝑦(𝑖) log(𝑎(𝑖)) + (...)
//
// ((1, num_cases) * (1, num_cases)), element-wise
// resulting shape is maintained at (1, num_cases)
(a.mapv(f32::ln) * &y)
// ps. the terms 𝑦(𝑖) and log(𝑎(𝑖)) are actually
// swapped so that y does not need to be moved.
// but since they have the same shape and since
// they are row-vectors, this should be allowed
// element-wise addition of two vectors and results in
// (1, num_cases)
+
// −1/𝑚 ∑[𝑖=1;m] (...) + (1−𝑦(𝑖)) log(1−𝑎(𝑖))
//
// the subtraction is element-wise and results in
// (1, num_cases)
//
// the mapping is also element-wise and results in
// (1, num_cases)
//
// the multiplication is element-wise and results in
// (1, num_cases)
((1.0 - &y) * a.mapv(|ai| f32::ln(1.0 - ai)))
)
);
(a, cost)
};
// backward propagation (to find grad)
let (dw, db): (Array2<f32>, f32) = {
// ∂𝐽/∂𝑤 = 1/𝑚 𝑋 ((𝐴−𝑌)^𝑇)
let dw: Array2<f32> =
// 1/𝑚 (...)
//
// scalar
(1.0 / num_cases as f32)
// multiplication of a scalar and a 2D array
// (num_features, 1) (after the dot product),
// which maintains the shape
// (num_features, 1)
*
// 1/𝑚 𝑋 (...)
//
// (num_features, num_cases)
x
// 1/𝑚 𝑋 (...)
//
// (num_features, num_cases) dot (num_cases, 1), results in
// (num_features, 1)
.dot(
// 1/𝑚 𝑋 (...)^𝑇
//
// (1, num_cases) transposed results in
// (num_cases, 1) - column vector
&Array2::t(
// 1/𝑚 𝑋 ((𝐴−𝑌)^𝑇)
//
// (1, num_cases) - (1, num_cases) results in
// (1, num_cases) - row vector
&(a.clone() - &y)
)
);
assert_eq!((num_features, 1), dw.dim());
// ∂𝐽/∂𝑏 = 1/𝑚 ∑[𝑖=1;𝑚] (𝑎(𝑖)−𝑦(𝑖))
let db: f32 =
// 1/𝑚 (...)
//
// scalar
(1.0 / num_cases as f32)
// 1/𝑚 (...)
//
// scalar and scalar multiplication, results in scalar
*
// 1/𝑚 ∑[𝑖=1;𝑚] (...)
//
// element-wise cumulative addition of the elements
// results in a scalar
Array::sum(
// 1/𝑚 ∑[𝑖=1;𝑚] (𝑎(𝑖)−𝑦(𝑖))
//
// (1, num_cases) - (1, num_cases) results in
// (1, num_cases) - row vector
&(a - &y)
);
(dw, db)
};
assert_eq!(dw.dim(), w.dim());
// return gradients and cost
((dw, db), cost)
}
"done"
use ndarray::arr2;
let w = arr2(&[[1.0], [2.0]]);
let b = 2.0;
let x = arr2(&[[1.0, 2.0], [3.0, 4.0]]);
let y = arr2(&[[1, 0]]);
let ((dw, db), cost) = propagate(w.view(), b, x.view(), y.view());
println!("dw = {:?}", dw);
println!("db = {:?}", db);
println!("cost = {:?}", cost);
"done"
###Output
dw = [[0.9999321],
[1.9998026]] shape=[2, 1], strides=[1, 1], layout=C (0x1), const ndim=2
db = 0.4999352
cost = 5.995632
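###Markdown
A quick numerical sanity check of the analytic gradient, reusing `propagate` and the small example above (central finite differences on `b`; the epsilon is an arbitrary choice, and f32 precision limits how closely the two values can agree):
###Code
let eps = 1e-3f32;
let (_, cost_plus) = propagate(w.view(), b + eps, x.view(), y.view());
let (_, cost_minus) = propagate(w.view(), b - eps, x.view(), y.view());
// should be close to the analytic db printed above
println!("numerical db ≈ {}", (cost_plus - cost_minus) / (2.0 * eps));
"done"
###Output
_____no_output_____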
###Markdown
d) Optimization- You have initialized your parameters.- You are also able to compute a cost function and its gradient.- Now, you want to update the parameters using gradient descent.**Exercise:** Write down the optimization function. The goal is to learn $w$ and $b$ by minimizing the cost function $J$. For a parameter $\theta$, the update rule is $ \theta = \theta - \alpha \text{ } d\theta$, where $\alpha$ is the learning rate.
###Code
/// This function optimizes w and b
/// by running a gradient descent algorithm
///
/// ## Parameters
///
/// - `w` is the weight, a 2D array (num_features, 1).
/// - `b` is the bias, a scalar.
/// - `x` is the data, a 2D array (num_features, num_cases).
/// - `y` is the "label" vector, a 2D array (1, num_cases).
/// - For the cat example, 0 means "non-cat", 1 means "cat".
/// - `num_iterations` is the number of iterations of
/// the optimization loop.
/// - `learning_rate` is the learning rate of
/// the gradient descent update rule.
/// - `print_cost` set to true means the loss is printed every 100 steps.
///
/// ## Return
///
/// The weights `w` and bias `b` are not returned; they are updated
/// in place through the mutable references (unlike the Python version
/// of this assignment, which returns them in a `params` dictionary).
/// The gradients are recomputed each iteration and are not returned.
/// `costs` is the list of the costs recorded every 100 iterations,
/// which can be used to plot the learning curve.
pub fn optimize(
mut w: ArrayViewMut2<f32>,
b: &mut f32,
x: ArrayView2<f32>,
y: ArrayView2<u8>,
num_iterations: usize,
learning_rate: f32,
print_cost: bool
) -> Vec<f32> {
// Tips:
// You basically need to write down two steps and iterate through them:
// 1. Calculate the cost and the gradient for the current parameters.
// Use propagate().
// 2. Update the parameters using gradient descent rule for w and b.
let mut costs = vec![];
for i in 0..num_iterations {
// cost and gradient calculation
let ((dw, db), cost) = propagate(w.view(), *b, x.view(), y.view());
// update rule
use ndarray::Zip;
// shape of both w and dw is (num_features, 1)
// w -= learning_rate * dw
Zip::from(w.lanes_mut(Axis(1)))
.and(dw.lanes(Axis(1)))
.apply(|mut wi, dwi| wi[0] -= learning_rate * dwi[0]);
*b -= learning_rate * db;
if i % 100 == 0 {
costs.push(cost);
}
// print the cost every 100 iterations
if print_cost && i % 100 == 0 {
println!("cost after iteration {}: {}", i, cost);
}
}
costs
}
"done"
// repeats a similar small-example definition
let mut w = arr2(&[[1.0], [2.0]]);
let mut b = 2.0;
let x = arr2(&[[1.0, 2.0], [3.0, 4.0]]);
let y = arr2(&[[1, 0]]);
// tests the function
let _costs = optimize(w.view_mut(), &mut b, x.view(), y.view(), 100, 0.009, false); // costs unused in this small check
println!("w = {:?}", &w);
println!("b = {:?}", &b);
"done"
###Output
w = [[0.112458035],
[0.23106757]] shape=[2, 1], strides=[1, 1], layout=C (0x1), const ndim=2
b = 1.5593045
###Markdown
**Exercise:** The previous function will output the learned w and b. We are able to use w and b to predict the labels for a dataset X. Implement the `predict()` function. There are two steps to computing predictions:1. Calculate $\hat{Y} = A = \sigma(w^T X + b)$2. Convert the entries of a into 0 (if activation <= 0.5) or 1 (if activation > 0.5), and store the predictions in a vector `Y_prediction`. If you wish, you can use an `if`/`else` statement in a `for` loop (though there is also a way to vectorize this).
###Code
// graded function: predict
/// Predict whether the label is 0 or 1
/// using learned logistic regression parameters (w, b)
///
/// ## Parameters
///
/// - `w` are the weights,
/// a 2D array of size (num_px * num_px * 3, 1)
/// - `b` is the bias, a scalar
/// - `x` is the data of size
/// (num_px * num_px * 3, number_of_examples)
///
/// ## Return
///
/// - `y_prediction` is a 2D array (row vector)
/// containing all predictions (0/1) for the examples in `x`
pub fn predict(
w: ArrayView2<f32>,
b: f32,
x: ArrayView2<f32>,
) -> Array2<f32> {
let (num_features, num_cases) = x.dim();
// note: predictions are produced directly by `mapv` below, so no zeroed buffer is needed
assert_eq!(w.dim(), (num_features, 1));
// compute vector "A" predicting the probabilities of
// a cat being present in the picture
let a = {
let z =
// w has shape (num_features, 1)
w
// transposes into the shape (1, num_features)
.t()
// ((1, num_features) dot (num_features, num_cases))
//
// from https://docs.rs/ndarray/0.12.1/ndarray/struct.ArrayBase.html#method.dot
//
// If Rhs is two-dimensional,
// then the operation is matrix multiplication,
// where self is treated as a row vector.
// In this case, if self is shape (num_features)
// - which is for our case, (1, num_features) -,
// then rhs is shape (num_features, num_cases)
// and the result is shape (num_cases).
//
// for our case, results in shape (1, num_cases).
.dot(&x)
// vector-like 2Darray + scalar
//
// from https://docs.rs/ndarray/0.12.1/ndarray/struct.ArrayBase.html#impl-Add%3CB%3E
//
// Perform elementwise addition between self
// and the scalar x,
// and return the result (based on self).
//
// so the shape is maintained at (1, num_cases).
+ b;
// 𝑦̂(𝑖) = 𝑎(𝑖) = 𝑠𝑖𝑔𝑚𝑜𝑖𝑑 ( 𝑧(𝑖) )
// for each element, replaces it with a function of itself
// eg. [e1, e2, e3] becomes [f(e1), f(e2), f(e3)]
//
// the shape is maintained at (1, num_cases).
z.mapv(Sigmoid::sigmoid)
};
assert_eq!(a.dim(), (1, num_cases));
// convert probabilities a[0,i]
// to actual predictions p[0,i]
a.mapv(|ai| if ai > 0.5 { 1.0 } else { 0.0 })
}
"done"
// repeats a similar small-example definition
let mut w = arr2(&[[1.0], [2.0]]);
let mut b = 2.0;
let x = arr2(&[[1.0, 2.0], [3.0, 4.0]]);
let y = arr2(&[[1, 0]]);
let _costs = optimize(w.view_mut(), &mut b, x.view(), y.view(), 100, 0.009, false);
println!("predictions = {:?}", predict(w.view(), b, x.view()));
"done"
###Output
predictions = [[1.0, 1.0]] shape=[1, 2], strides=[2, 1], layout=C (0x1), const ndim=2
###Markdown
**What to remember:**You've implemented several functions that:- Initialize (w,b)- Optimize the loss iteratively to learn parameters (w,b): - computing the cost and its gradient - updating the parameters using gradient descent- Use the learned (w,b) to predict the labels for a given set of examples--- 5 - Merge all functions into a model You will now see how the overall model is structured by putting all the building blocks (functions implemented in the previous parts) together, in the right order.**Exercise:** Implement the model function. Use the following notation: - Y_prediction for your predictions on the test set - Y_prediction_train for your predictions on the train set - w, costs, grads for the outputs of optimize()
###Code
// graded function: model
/// Builds the logistic regression model
/// by calling the functions you've implemented previously.
///
/// ## Parameters
///
/// - `x_train` is the training set represented by
/// an array of shape (num_px * num_px * 3, m_train).
/// - `y_train` is the training labels represented by
/// an array (vector) of shape (1, m_train).
/// - `x_test` is the test set represented by an array
/// of shape (num_px * num_px * 3, m_test).
/// - `y_test` is the test labels represented by
/// an array (vector) of shape (1, m_test).
/// - `num_iterations` is the hyperparameter representing
/// the number of iterations to optimize the parameters.
/// - `learning_rate` is the hyperparameter representing
/// the learning rate used in the update rule of optimize().
/// - `print_cost` is set to true to print
/// the cost every 100 iterations.
///
/// ## Return
///
/// - `costs`.
/// - `y_prediction_test`.
/// - `y_prediction_train`.
/// - `w`.
/// - `b`.
pub fn model(
x_train: ArrayView2<f32>,
y_train: ArrayView2<u8>,
x_test: ArrayView2<f32>,
y_test: ArrayView2<f32>,
num_iterations: usize,
learning_rate: f32,
print_cost: bool,
) -> (
std::vec::Vec<f32>,
Array2<f32>,
Array2<f32>,
Array2<f32>,
f32
)
{
// initialize parameters with zeros
let (num_train_features, _num_train_cases) =
x_train.dim();
let (mut w, mut b): (_, f32) =
initialize_with_zeros(num_train_features);
// gradient descent
let costs = optimize(
w.view_mut(),
&mut b,
x_train,
y_train,
num_iterations,
learning_rate,
print_cost
);
// predict test/train set examples
let y_prediction_train = predict(w.view(), b, x_train);
let y_prediction_test = predict(w.view(), b, x_test);
// print train/test Errors
println!("train accuracy: {} %",
100.0
- (&y_prediction_train - &y_train.mapv(f32::from))
.mapv(f32::abs)
.mean_axis(Axis(1))
* 100.0
);
println!("test accuracy: {} %",
100.0
- (&y_prediction_test - &y_test.mapv(f32::from))
.mapv(f32::abs)
.mean_axis(Axis(1))
* 100.0
);
(
costs,
y_prediction_test,
y_prediction_train,
w,
b,
)
}
"done"
// use :help for seeing commands from evcxr
// for this last cell, restart the kernel and use the command :opt until it's set to 2 (optimized for speed)
// (this is already done in the first cell)
//
// without optimization this gets ~100x slower, which is impractical
let d = model(
train_x_flat.view(),
train_y.view(),
test_x_flat.view(),
test_y.mapv(f32::from).view(),
2000, // num_iterations
0.005, // learning_rate
true // print_cost
);
"done"
###Output
cost after iteration 0: 0.6931474
cost after iteration 100: 0.58450836
cost after iteration 200: 0.46694902
cost after iteration 300: 0.37600684
cost after iteration 400: 0.33146322
cost after iteration 500: 0.30327305
cost after iteration 600: 0.27987954
cost after iteration 700: 0.26004213
cost after iteration 800: 0.24294066
cost after iteration 900: 0.22800423
cost after iteration 1000: 0.2148195
cost after iteration 1100: 0.20307821
cost after iteration 1200: 0.19254427
cost after iteration 1300: 0.18303333
cost after iteration 1400: 0.17439857
cost after iteration 1500: 0.16652139
cost after iteration 1600: 0.15930453
cost after iteration 1700: 0.15266733
cost after iteration 1800: 0.14654224
cost after iteration 1900: 0.14087206
train accuracy: [99.04306] %
test accuracy: [70] %
###Markdown
**Comment**: Training accuracy is close to 100%. This is a good sanity check: your model is working and has high enough capacity to fit the training data. Test accuracy is 70%. It is actually not bad for this simple model, given the small dataset we used and that logistic regression is a linear classifier. But no worries, you'll build an even better classifier next week!Also, you see that the model is clearly overfitting the training data. Later in this specialization you will learn how to reduce overfitting, for example by using regularization. Using the code below (and changing the `index` variable) you can look at predictions on pictures of the test set.
###Code
// one image display
let index = 5;
let (width, height) = (64, 64);
let img_subview = test_x.index_axis(Axis(0), index).to_owned();
// use ndarray::s; // slice macro // already imported
// test image displaying
let img = image::ImageBuffer::from_fn(width, height, |x, y| {
let color = img_subview.slice(s![y as usize, x as usize]).to_owned(); // shape: [] (zero-dimension), element = Color.
// so this is not an actual array, but a single element; we treat it as an array anyway and take the "first" element.
let color = &color.as_slice().unwrap()[0];
image::Rgb([color.red, color.green, color.blue])
});
img.evcxr_display() // the only output on the cell must be a single image
// example of a picture that was wrongly classified.
let index = 5;
println!(
"y = {} but you predicted that it was a {:?} picture.",
test_y[[0, index]],
Class::from(&(d.1[[0, index]] as u8))
);
"done"
###Output
y = 0 but you predicted that it was a Cat picture.
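###Markdown
The interpretation above mentions regularization as one way to reduce overfitting. A minimal sketch of how an L2 penalty could be folded into the cost and gradient, evaluated at the trained parameters returned by `model()` (the lambda value is an illustrative choice; this is not part of the original assignment): the cost gains lambda/(2m) * ||w||^2 and dw gains lambda/m * w.
###Code
let lambda = 0.1f32; // illustrative regularization strength
let m = train_x_flat.shape()[1] as f32;
let w_trained = &d.3;
let b_trained = d.4;
let ((dw, _db), cost) = propagate(w_trained.view(), b_trained, train_x_flat.view(), train_y.view());
// L2-regularized cost and weight gradient
let cost_l2 = cost + (lambda / (2.0 * m)) * w_trained.mapv(|wi| wi * wi).sum();
let dw_l2 = dw + &w_trained.mapv(|wi| (lambda / m) * wi);
println!("unregularized cost = {}, L2 cost = {}, dw_l2 shape = {:?}", cost, cost_l2, dw_l2.dim());
"done"
###Output
_____no_output_____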
###Markdown
Let's also plot the cost function and the gradients.
###Code
// plot learning curve (with costs)
"TODO (involves plotting..)"
###Output
_____no_output_____
###Markdown
**Interpretation**:You can see the cost decreasing. It shows that the parameters are being learned. However, you see that you could train the model even more on the training set. Try to increase the number of iterations in the cell above and rerun the cells. You might see that the training set accuracy goes up, but the test set accuracy goes down. This is called overfitting. --- 6 - Further analysis (optional/ungraded exercise) Congratulations on building your first image classification model. Let's analyze it further, and examine possible choices for the learning rate $\alpha$. Choice of learning rate **Reminder**:In order for Gradient Descent to work you must choose the learning rate wisely. The learning rate $\alpha$ determines how rapidly we update the parameters. If the learning rate is too large we may "overshoot" the optimal value. Similarly, if it is too small we will need too many iterations to converge to the best values. That's why it is crucial to use a well-tuned learning rate.Let's compare the learning curve of our model with several choices of learning rates. Run the cell below. This should take about 1 minute. Feel free also to try different values than the three we have initialized the `learning_rates` variable to contain, and see what happens.
###Code
"TODO (involves plotting..)"
###Output
_____no_output_____
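###Markdown
Until the plotting TODO is filled in, a minimal plot-free stand-in can compare the rates numerically. The three rates below are the ones the original assignment initializes `learning_rates` with; the shorter iteration count is an arbitrary choice to keep this quick:
###Code
for &lr in [0.01f32, 0.001, 0.0001].iter() {
let (costs, _, _, _, _) = model(
train_x_flat.view(),
train_y.view(),
test_x_flat.view(),
test_y.mapv(f32::from).view(),
500, // fewer iterations than above, for speed
lr,
false,
);
println!("learning rate {}: last recorded cost = {:?}", lr, costs.last());
}
"done"
###Output
_____no_output_____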
###Markdown
**Interpretation**: - Different learning rates give different costs and thus different predictions results.- If the learning rate is too large (0.01), the cost may oscillate up and down. It may even diverge (though in this example, using 0.01 still eventually ends up at a good value for the cost). - A lower cost doesn't mean a better model. You have to check if there is possibly overfitting. It happens when the training accuracy is a lot higher than the test accuracy.- In deep learning, we usually recommend that you: - Choose the learning rate that better minimizes the cost function. - If your model overfits, use other techniques to reduce overfitting. (We'll talk about this in later videos.) --- 7 - Test with your own image (optional/ungraded exercise) Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Change your image's name in the following code 4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!
###Code
"TODO (involves some setup..)"
###Output
_____no_output_____
###Markdown
**What to remember from this assignment:**1. Preprocessing the dataset is important.2. You implemented each function separately: initialize(), propagate(), optimize(). Then you built a model().3. Tuning the learning rate (which is an example of a "hyperparameter") can make a big difference to the algorithm. You will see more examples of this later in this course! Finally, if you'd like, we invite you to try different things on this Notebook. Make sure you submit before trying anything. Once you submit, things you can play with include: - Play with the learning rate and the number of iterations - Try different initialization methods and compare the results - Test other preprocessings (center the data, or divide each row by its standard deviation) Bibliography:- http://www.wildml.com/2015/09/implementing-a-neural-network-from-scratch/- https://stats.stackexchange.com/questions/211436/why-do-we-normalize-images-by-subtracting-the-datasets-image-mean-and-not-the-c
###Code
"end of C1W2"
###Output
_____no_output_____ |
notebook/mapplot.ipynb | ###Markdown
Mapplot routes
###Code
import numpy as np
from pytsp.util.data_generator import DataGenerator
from pytsp.util.plot import Mapplot
figure_height = 1000
figure_width = 1000
def plot_cities(num_cities, height=figure_height,width=figure_width):
stores = DataGenerator(
num_cities=num_cities
).selected_cities
stores_to_plot = np.array(list(stores["coordinates"]) + [stores["coordinates"][0]]) # append the first city to close the tour
return Mapplot.plot_map(
coordinates=stores_to_plot,
height=height,
width=width)
###Output
_____no_output_____
###Markdown
Plotting 10 cities
###Code
selected_starbucks_stores = DataGenerator().selected_cities
selected_starbucks_stores
coordinates_to_plot = np.array(list(selected_starbucks_stores["coordinates"]) + [selected_starbucks_stores["coordinates"][0]]) # append the first city to close the tour
Mapplot.plot_map(coordinates_to_plot, height=figure_height,width=figure_width)
###Output
_____no_output_____
###Markdown
Plotting 50
###Code
plot_cities(50)
###Output
_____no_output_____
###Markdown
Plotting 100
###Code
plot_cities(100)
###Output
_____no_output_____ |
06_KG_Load_Data/06_KG_Load_Data.ipynb | ###Markdown
Notebook for loading data into Neo4j to construct a Knowledge Graph
###Code
import json
import pandas as pd
import os
import _pickle
from neo4j import GraphDatabase
driver = GraphDatabase.driver(uri = "bolt://localhost:7687",\
auth = ("neo4j", "pinglab"))
class LoadKG():
def __init__(self,import_file_path,reactome_file_path,driver):
self.session = driver.session()
self.import_extension = os.path.basename(os.path.normpath(import_file_path))
self.import_data = import_file_path
self.pathway_data = reactome_file_path
def parse_reactome_file(self):
ppw_df = pd.read_csv(self.pathway_data)
for i, plist in enumerate(ppw_df['Submitted entities found']):
# .at avoids pandas chained-assignment warnings when writing a list into a cell
ppw_df.at[i, 'Submitted entities found'] = plist.split(';')
return ppw_df
def create_constraints(self):
query = ["CREATE CONSTRAINT UniqueProteinIdConstraint ON (p:Protein) ASSERT p.uniprot_id IS UNIQUE",\
"CREATE CONSTRAINT UniqueDrugIdConstraint ON (d:Drug) ASSERT d.drugbank_id IS UNIQUE",\
"CREATE CONSTRAINT UniqueDrugPwConstraint ON (p:Pathway) ASSERT p.smpdb_id IS UNIQUE",\
"CREATE CONSTRAINT UniqueProteinPwConstraint ON (p:Pathway) ASSERT p.reactome_id IS UNIQUE",\
"CREATE CONSTRAINT UniqueMeSHConstraint ON (m:MeSH) ASSERT m.name IS UNIQUE",\
"CREATE CONSTRAINT UniqueDocConstraint ON (d:Document) ASSERT d.pmid IS UNIQUE"]
for constraint in query:
self.session.run(constraint)
def create_protein_node(self,entity):
import_data = self.import_data
entity=entity
def tx_function(tx,import_data,entity):
if entity=='UniProt':
query = "WITH '" + import_data + "' as url \
CALL apoc.load.json(url) YIELD value \
UNWIND value." + entity + " as p \
MERGE (pn:Protein {uniprot_id:p})"
else:
query = "WITH '" + import_data + "' as url \
CALL apoc.load.json(url) YIELD value \
UNWIND value." + entity + " as p \
MERGE (pn:Protein {uniprot_id:p.uniprot_id})"
tx.run(query,import_data=import_data,entity=entity)
self.session.write_transaction(tx_function,import_data,entity)
def update_protein_node(self,entity=None):
import_data = self.import_data
entity = entity
def tx_function(tx,import_data,entity):
if self.import_extension.lower().endswith('.json'):
query = "WITH '" + import_data + "' as url \
CALL apoc.load.json(url) YIELD value \
UNWIND value." + entity + " as p \
MATCH (pn:Protein {uniprot_id:p.uniprot_id}) \
SET pn.name=p.name,\
pn.drugbank_id=p.drugbank_id,\
pn.group_name=p.group_name"
if self.import_extension.lower().endswith('.csv'):
query = "LOAD CSV WITH HEADERS FROM '" + import_data + "' AS p \
MATCH (pn:Protein {uniprot_id:p.ID}) \
SET pn.CM=p.CM,\
pn.ARR=p.ARR,\
pn.CHD=p.CHD,\
pn.VD=p.VD,\
pn.IHD=p.IHD,\
pn.CCS=p.CCS,\
pn.VOO=p.VOO,\
pn.OHD=p.OHD"
tx.run(query,import_data=import_data,entity=entity)
self.session.write_transaction(tx_function,import_data,entity)
def create_drug_node(self):
import_data = self.import_data
def tx_function(tx,import_data):
query = "WITH '" + import_data + "' as url \
CALL apoc.load.json(url) YIELD value \
MERGE (d:Drug {drugbank_id:value.drugbank_id}) \
ON CREATE SET d.name=value.name, \
d.type='CVD', \
d.synonyms=value.synonyms, \
d.description=value.descriptions, \
d.categories=value.categories, \
d.atc_code=value.`ATC code`, \
d.indication=value.indication"
tx.run(query,import_data=import_data)
self.session.write_transaction(tx_function,import_data)
def create_drugpw_node(self):
import_data = self.import_data
def tx_function(tx,import_data):
query = "WITH '" + import_data + "' as url \
CALL apoc.load.json(url) YIELD value \
UNWIND value.pathways as pway \
MERGE (pw:Pathway {smpdb_id:pway.smpdb_id})\
ON CREATE SET pw.name=pway.name, \
pw.category=pway.category"
tx.run(query,import_data=import_data)
self.session.write_transaction(tx_function,import_data)
def create_proteinpw_node(self):
ppw_df = self.parse_reactome_file()
def tx_function(tx,pw_id,pw_name):
query = "MERGE (pw:Pathway{reactome_id:$pw_id}) \
ON CREATE SET pw.name=$pw_name"
tx.run(query,pw_id=pw_id,pw_name=pw_name)
for pw_id,pw_name in zip(ppw_df["Pathway identifier"],ppw_df["Pathway name"]):
self.session.write_transaction(tx_function,pw_id,pw_name)
def create_mesh_node(self):
import_data = self.import_data
def tx_function(tx,import_data):
query = "WITH '" + import_data + "' as url \
CALL apoc.load.json(url) YIELD value \
UNWIND value.MeSH as m \
MERGE (:MeSH {name:m})"
tx.run(query,import_data=import_data)
self.session.write_transaction(tx_function,import_data)
def create_doc_node(self):
import_data = self.import_data
def tx_function(tx,import_data):
query = "WITH '" + import_data + "' as url \
CALL apoc.load.json(url) YIELD value \
MERGE (:Document {pmid:value.PMID})"
tx.run(query,import_data=import_data)
self.session.write_transaction(tx_function,import_data)
def create_protein2doc_edge(self):
import_data = self.import_data
def tx_function(tx,import_data):
query = "WITH '" + import_data + "' as url \
CALL apoc.load.json(url) YIELD value \
UNWIND value.PMIDs as pmid \
WITH value, pmid \
MATCH (p:Protein {uniprot_id:value.UniProt}) \
MATCH (d:Document {pmid:pmid}) \
MERGE (p)-[:STUDIED_IN]->(d)"
tx.run(query,import_data=import_data)
self.session.write_transaction(tx_function,import_data)
def create_doc2mesh_edge(self):
import_data = self.import_data
def tx_function(tx,import_data):
query = "WITH '" + import_data + "' as url \
CALL apoc.load.json(url) YIELD value \
UNWIND value.MeSH as mesh \
WITH value, mesh \
MATCH (d:Document {pmid:value.PMID}) \
MATCH (m:MeSH {name:mesh}) \
MERGE (d)-[:STUDIES]->(m)"
tx.run(query,import_data=import_data)
self.session.write_transaction(tx_function,import_data)
def create_drug2protein_edge(self, entity):
entity = entity
import_data = self.import_data
def tx_function(tx,import_data,entity):
if entity == 'targets':
query = "WITH '" + import_data + "' as url \
CALL apoc.load.json(url) YIELD value \
UNWIND value." + entity + " as ent \
WITH ent, value \
MATCH (p:Protein {uniprot_id:ent.uniprot_id}) \
MATCH (d:Drug {drugbank_id:value.drugbank_id}) \
MERGE (d)-[t:TARGETS]->(p) \
SET t.actions=ent.actions, \
t.group_actions=ent.actions_of_group"
elif entity in ['carriers','transporters','enzymes']:
ent = entity[:len(entity)-1].upper()
query = "WITH '" + import_data + "' as url \
CALL apoc.load.json(url) YIELD value \
UNWIND value." + entity + " as ent \
WITH ent, value \
MATCH (p:Protein {uniprot_id:ent.uniprot_id}) \
MATCH (d:Drug {drugbank_id:value.drugbank_id}) \
MERGE (d)-[e:RELATED_" + ent +"]->(p) \
SET e.actions=ent.actions, \
e.group_actions=ent.actions_of_group"
else:
raise Exception('entity must be one of the following:\n' +
'targets\n' +
'carriers\n' +
'transporters\n' +
'enzymes')
tx.run(query,import_data=import_data,entity=entity)
self.session.write_transaction(tx_function,import_data,entity)
def create_drug2pw_edge(self):
import_data = self.import_data
def tx_function(tx,import_data):
query = "WITH '" + import_data + "' as url \
CALL apoc.load.json(url) YIELD value \
UNWIND value.pathways as pway \
WITH pway, value \
MATCH (pw:Pathway {smpdb_id:pway.smpdb_id}) \
MATCH (d:Drug {drugbank_id:value.drugbank_id}) \
MERGE (d)-[:INVOLVED_IN]->(pw)"
tx.run(query,import_data=import_data)
self.session.write_transaction(tx_function,import_data)
def create_pw2protein_edge(self):
ppw_df = self.parse_reactome_file()
def tx_function(tx,p,pw_id,pw_name):
query = "MATCH (pw:Pathway{reactome_id:$pw_id}) \
MATCH (p:Protein{uniprot_id:$p}) \
MERGE (pw)-[:CANDIDATE]->(p)"
tx.run(query,p=p,pw_id=pw_id,pw_name=pw_name)
for pw_id,pw_name,plist in zip(ppw_df["Pathway identifier"],ppw_df["Pathway name"],ppw_df["Submitted entities found"]):
for p in plist:
self.session.write_transaction(tx_function,p,pw_id,pw_name)
oKG = LoadKG(import_file_path='file://cvdrug_ent_drugpw.json',\
reactome_file_path='C:\\Users\\ttran\\OneDrive\\Desktop\\COVID-CDV-DATA\\covidii_KG\\reactome\\result.csv',\
driver=driver)
oKG.create_constraints()
oKG.create_protein_node('targets')
oKG.update_protein_node('targets')
oKG.create_drug_node()
oKG.create_drug2protein_edge('targets')
oKG.create_protein_node('enzymes')
oKG.update_protein_node('enzymes')
oKG.create_drug2protein_edge('enzymes')
oKG.create_protein_node('carriers')
oKG.update_protein_node('carriers')
oKG.create_drug2protein_edge('carriers')
oKG.create_protein_node('transporters')
oKG.update_protein_node('transporters')
oKG.create_drug2protein_edge('transporters')
oKG.create_drugpw_node()
oKG.create_drug2pw_edge()
oKG.create_proteinpw_node()
oKG = LoadKG(import_file_path='file://pmid_to_mesh.json',\
reactome_file_path='C:\\Users\\ttran\\OneDrive\\Desktop\\COVID-CDV-DATA\\covidii_KG\\reactome\\result.csv',\
driver=driver)
oKG.create_mesh_node()
oKG.create_doc_node()
oKG.create_doc2mesh_edge()
oKG = LoadKG(import_file_path='file://uniprot_mesh_pub.json',\
reactome_file_path='C:\\Users\\ttran\\OneDrive\\Desktop\\COVID-CDV-DATA\\covidii_KG\\reactome\\result.csv',\
driver=driver)
oKG.create_protein_node('UniProt')
oKG.create_protein2doc_edge()
oKG = LoadKG(import_file_path='file:///protein-nodes.csv',\
reactome_file_path='C:\\Users\\ttran\\OneDrive\\Desktop\\COVID-CDV-DATA\\covidii_KG\\reactome\\result.csv',\
driver=driver)
oKG.update_protein_node()
oKG.create_pw2protein_edge()
driver.close()
###Output
_____no_output_____ |
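###Markdown
As a quick sanity check after loading, you can count the nodes per label. A minimal sketch (it opens a fresh connection with the same URI and credentials used above, since the previous driver was already closed):
###Code
from neo4j import GraphDatabase
check_driver = GraphDatabase.driver(uri="bolt://localhost:7687", auth=("neo4j", "pinglab"))
with check_driver.session() as session:
result = session.run("MATCH (n) RETURN labels(n) AS labels, count(*) AS count")
for record in result:
print(record["labels"], record["count"])
check_driver.close()
###Output
_____no_output_____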
Concept_in_Python_Andre.ipynb | ###Markdown
Create a class
###Code
class Car:
pass
class MyClass:
X = 40
###Output
_____no_output_____
###Markdown
Create an object
###Code
class Car:
def __init__(self,name,color):
self.name = name
self.color = color
def description(self):
return "The " + self.name + " has a color " + self.color
def show(self):
print("The " + self.name + " has a color " + self.color)
car1 = Car("Honda Civic", "silver gray")
car1.show()
###Output
_____no_output_____
###Markdown
Object methods
###Code
class Person:
def __init__(self,name, age):
self.name = name
self.age = age
def myFunction(self):
print("Hello! My name is", self.name)
print("I am", self.name)
p1 = Person("Andre Isaac", 18)
p1.myFunction()
###Output
Hello! My name is Andre Isaac
I am Andre Isaac
###Markdown
Modify an Object Property
###Code
class Car():
def __init__(self,name,color):
self.name = name
self.color = color
def description(self):
return self.name + self.color
def show(self):
print("The name and color of the car is", self.description())
obj1 = Car("Honda Civic"," silver gray")
obj1.show()
###Output
The name and color of the car is Honda Civic silver gray
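###Markdown
To actually modify a property after the object is created, assign to the attribute directly (a minimal sketch reusing `obj1` from above):
###Code
obj1.color = " midnight blue" # overwrite the attribute on the existing instance
obj1.show()
# a property can also be removed entirely with: del obj1.color
###Output
_____no_output_____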
###Markdown
Application 1 - Write a program that computes for the Area and Perimeter of a Square, and create a class name Square with side as its attribute
###Code
# Area of a Square = s*s
# Perimeter of a Square = 4*s = s+s+s+s
class Square:
def __init__(self, side):
self.side = side
def Area(self):
return self.side * self.side
def Perimeter(self):
return 4*(self.side)
def display(self):
print("The area of a square is", self.Area())
print("The perimeter of a square is", self.Perimeter())
sq1 = Square(2)
sq1.display()
###Output
The area of a square is 4
The perimeter of a square is 8
###Markdown
Application 2 - Write a Python program that displays your student no. and Fullname (Surname, First Name, MI) and create a class name OOP_58001
###Code
class Person:
def __init__(self,student,number):
self.student = student
self.number = number
def myFunction(self):
print("I am",self.student,"and my student number is", self.number)
print("Section - OOP_58001")
p1= Person("Fulgencio, Andre Isaac C.", 202115427)
p1.myFunction()
###Output
I am Fulgencio, Andre Isaac C. and my student number is 202115427
Section - OOP_58001
|
multi_model_inference/catboost_xgboost_script_mode_local_training_and_serving.ipynb | ###Markdown
CatBoost XGBoost Script Mode Training and Serving This is a sample Python program that trains a simple CatBoost model and an XGBoost model using the SageMaker XGBoost Docker image, and then performs inference. This implementation will work on your *local computer* or in the *AWS Cloud*. Prerequisites:1. Install required Python packages: `pip install -r requirements.txt`2. Docker Desktop installed and running on your computer: `docker ps`3. You should have AWS credentials configured on your local machine in order to be able to pull the docker image from ECR.
###Code
import os
import sagemaker
import pandas as pd
from sagemaker.predictor import csv_serializer
from sagemaker.xgboost import XGBoost
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
sess = sagemaker.Session()
bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
prefix = "xgboost_catboost"
###Output
_____no_output_____
###Markdown
Downloading DataDownload training and eval data
###Code
local_train = './data/train/boston_train.csv'
local_validation = './data/validation/boston_validation.csv'
local_test = './data/test/boston_test.csv'
if os.path.isfile(local_train) and \
os.path.isfile(local_validation) and \
os.path.isfile(local_test):
print('Training dataset exist. Skipping Download')
else:
print('Downloading training dataset')
os.makedirs("./data", exist_ok=True)
os.makedirs("./data/train", exist_ok=True)
os.makedirs("./data/validation", exist_ok=True)
os.makedirs("./data/test", exist_ok=True)
data = load_boston()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, test_size=0.25, random_state=45)
X_val, X_test, y_val, y_test = train_test_split(X_test, y_test, test_size=0.5, random_state=45)
trainX = pd.DataFrame(X_train, columns=data.feature_names)
trainX['target'] = y_train
valX = pd.DataFrame(X_val, columns=data.feature_names) # use the validation split, not the test split
valX['target'] = y_val
testX = pd.DataFrame(X_test, columns=data.feature_names)
trainX.to_csv(local_train, header=None, index=False)
valX.to_csv(local_validation, header=None, index=False)
testX.to_csv(local_test, header=None, index=False)
print('Downloading completed')
###Output
Training dataset exist. Skipping Download
###Markdown
Model TrainingStarting model training using **local mode**. Note: if launching for the first time in local mode, container image download might take a few minutes to complete.
###Code
training_instance_type = "ml.m5.xlarge"
train_location = sess.upload_data(
local_train, key_prefix="{}/data/{}".format(prefix, "train")
)
validation_location = sess.upload_data(
local_validation, key_prefix="{}/data/{}".format(prefix, "validation")
)
hyperparameters = {"num_round": 6}
estimator_parameters = {
"entry_point": "multi_model_deploy.py",
"source_dir": "code",
"dependencies": ["my_custom_library"],
"instance_type": training_instance_type,
"instance_count": 1,
"hyperparameters": hyperparameters,
"role": role,
"base_job_name": "xgboost-model",
"framework_version": "1.0-1",
"py_version": "py3",
}
estimator = XGBoost(**estimator_parameters)
estimator.fit({'train': train_location, 'validation': validation_location})
print('Completed model training')
model_data = estimator.model_data
model_data
###Output
_____no_output_____
###Markdown
Deploying trained model We can also deploy the trained model and perform invocation uncomment the below cell if you would like to deploy directly from the estimator object.
###Code
# endpoint_name = "xgboost-catboost-endpoint"
# predictor = estimator.deploy(
# initial_instance_count=1, instance_type="ml.m5.xlarge", endpoint_name=endpoint_name
# )
###Output
_____no_output_____
###Markdown
If you already have a model trained previously, you can use the model s3 uri in the model_data field and create a model object for deployment. No need to retrain the model using the estimator.
###Code
from sagemaker.xgboost.model import XGBoostModel
inference_model = XGBoostModel(
model_data=model_data,
role=role,
entry_point="multi_model_deploy.py",
framework_version="1.0-1",
dependencies=["my_custom_library"],
source_dir="code",
)
###Output
_____no_output_____
###Markdown
The entry point script "multi_model_deploy.py" will handle the multiple models in the model artifacts and perform inference against each model. The results will be the mean of each inference output. This is a simple demonstration of how to work with multiple models, but you can design the model ensemble as you need.
###Code
predictor = inference_model.deploy(
initial_instance_count=1,
instance_type="ml.m5.xlarge",
)
from sagemaker.serializers import NumpySerializer, JSONSerializer, CSVSerializer
from sagemaker.deserializers import NumpyDeserializer, JSONDeserializer
predictor.serializer = CSVSerializer()
predictor.deserializer = JSONDeserializer()
with open(local_test, 'r') as f:
payload = f.read().strip()
predictions = predictor.predict(payload)
print('predictions: {}'.format(predictions))
###Output
predictions: [14.613732250744945, 6.962288966510727, 8.509503902178903, 13.653751169637694, 14.4809041819816, 10.950596754285558, 12.75489476110096, 12.860061052753862, 13.370978134023789, 12.781238049928602, 19.381960726603, 12.637171388576272, 12.342439986571616, 12.34379449229714, 12.742601145707347, 25.343351307891936, 18.26171883917135, 16.017435536833347, 17.724286971386697, 17.138903624278267, 12.167699802883755, 11.377891533820002, 15.634961368394507, 14.983961505461675, 6.394251938354945, 7.309393511752864, 10.49425297549001, 14.02178997586292, 12.455626175923042, 23.10296230497474, 9.848035070241453, 12.381957848097404, 14.504886544687285, 4.84468595075827, 13.234406309181658, 10.994238485286514, 25.807487982378284, 11.571759114652789, 9.309522345120484, 14.226494447635211, 5.236289432440608, 17.680829294308253, 11.036644747800972, 6.685870846321972, 11.831237074596912, 6.107978025842288, 23.58894916584692, 4.798365969192604, 9.72145147923905, 6.645664091897018, 25.54999967949498, 12.733042298006314, 20.955854028562634, 11.014646054768274, 12.804176050352085, 7.518356270137302, 17.616387568471552, 4.448178416393886, 15.811580253329923, 15.993594821748578, 16.189651866095915, 6.432431033817089, 8.24452138273289, 12.69777988037961]
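###Markdown
The entry point script itself is not shown in this notebook, but the averaging it describes could look roughly like the following hypothetical helper (the name `ensemble_predict` and its structure are assumptions for illustration, not the actual contents of `multi_model_deploy.py`):
###Code
import numpy as np
def ensemble_predict(models, input_data):
# hypothetical sketch: run every loaded model and return the element-wise mean
predictions = [model.predict(input_data) for model in models]
return np.mean(np.stack(predictions), axis=0)
###Output
_____no_output_____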
###Markdown
Clean up resourcesDelete the endpoint that was deployed locally
###Code
predictor.delete_endpoint(predictor.endpoint)
###Output
The endpoint attribute has been renamed in sagemaker>=2.
See: https://sagemaker.readthedocs.io/en/stable/v2.html for details.
|
analysis/submitted/.ipynb_checkpoints/milestone3-checkpoint.ipynb | ###Markdown
Task 3 Method Chaining and Python Programs
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import pandas_profiling as pdp
from pandas_profiling import ProfileReport
import matplotlib.pyplot as plt
# Load data, rename columns and drop missing data - testing
load = (
pd.read_csv("../../data/raw/adult.data",header = None)
.rename(columns={0:"Age",1:"Workclass",2:"Final Weight",3:"Education",4:"Education Num",5:"Marital Status",6:"Occupation",7:"Relationship",
8:"Race",9:"Sex",10:"Capital Gain",11:"Capital Loss",12:"Hours per Week",13:"Native Country",14:"Salary"})
.dropna()
.rename(index = lambda x: x + 1)
)
# combining all the previous functions
def load_and_process(path):
load = (
pd.read_csv(path,header = None)
.rename(columns={0:"Age",1:"Workclass",2:"Final Weight",3:"Education",4:"Education Num",5:"Marital Status",6:"Occupation",7:"Relationship",
8:"Race",9:"Sex",10:"Capital Gain",11:"Capital Loss",12:"Hours per Week",13:"Native Country",14:"Salary"})
.dropna()
.rename(index = lambda x: x + 1)
)
return load
load_and_process("../../data/raw/adult.data")
###Output
_____no_output_____
###Markdown
--- Task 4 Importing and loading data, and initial visualization
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from scripts.project_functions import *
df = load_and_process("../../data/raw/adult.data")
df.head()
###Output
_____no_output_____
###Markdown
Kyle's EDA
###Code
order = df['Education'].value_counts(ascending=False).index
sns.countplot(y="Education",order=order ,data=df).set_title("Amount of Working Adults by Education")
order = df['Race'].value_counts(ascending=False).index
sns.countplot(y="Race",order=order ,data=df).set_title("Amount of Working Adults by Race")
h = sns.displot(df, x = "Age",binwidth=1,hue="Race",multiple = "stack",common_norm=False)
plt.title("Reported Working Adult Age by Race")
plt.xlabel("Age")
plt.ylabel("Race")
sns.countplot(y="Workclass" ,data=df, hue = "Salary").set_title("Amount of Working Adults by Working Class")
sns.countplot(y="Education" ,data=df, hue = "Salary").set_title("Amount of Working Adults by Education")
sns.countplot(y="Marital Status" ,data=df, hue = "Salary").set_title("Amount of Working Adults by Marital Status")
sns.countplot(y="Occupation" ,data=df, hue = "Salary").set_title("Amount of Working Adults by Occupation")
sns.countplot(y="Relationship" ,data=df, hue = "Salary").set_title("Amount of Working Adults by Relationship")
sns.countplot(y="Race" ,data=df, hue = "Salary").set_title("Amount of Working Adults by Race")
sns.countplot(y="Sex" ,data=df, hue = "Salary").set_title("Amount of Working Adults by Sex")
sns.countplot(y="Native Country" ,data=df, hue = "Salary").set_title("Amount of Working Adults by Native Country")
###Output
_____no_output_____
###Markdown
Emiel's EDA
###Code
plt.figure(figsize=(20,6))
plot=sns.countplot(x="Education", hue="Salary", data=df)
plt.title("Adult Salary by Education", size=20)
###Output
_____no_output_____
###Markdown
Noah's EDA
###Code
# Histogram of age and salary distribution. This shows that the majority of people have <=50K salaries,
# and those that do make >50K are generally older population. Data has outliers and needs more cleaning.
sns.histplot(
df,
x="Age",
hue="Salary",
element="step",
common_norm=False,
)
plt.title("Salary of Different Aged Adults", size=20)
# Grouped bar chart of salaries counts grouped by workclass. This plot's groups are cleaned up and replotted later in the EDA.
sns.catplot(
x="Age",
y="Workclass",
hue="Salary",
kind="bar",
data=df
)
plt.title("Adult Salary by Workclass", size=20)
# Grouped bar chart of the salaries of people in each level of education. Some grouping of variables
# would be useful for 'Education Level', such as a group of all highschool grades.
sns.catplot(
y="Education",
hue="Salary",
kind="count",
palette="pastel",
edgecolor=".6",
data=df
)
plt.title("Salary of Different Education Levels", size=20)
###Output
_____no_output_____
###Markdown
--- Task 5 Does Classification Play A Major Role In Determining The Annual Income Of An Adult Worker? Grouping like values together Group together like workclasses into several more specific groups:
###Code
find_and_replace(df, 'Workclass', 'State-gov|Federal-gov|Local-gov', 'Government')
find_and_replace(df, 'Workclass', 'Never-worked|Without-pay|Other', '?')
find_and_replace(df, 'Workclass', 'Self-emp-not-inc|Self-emp-inc', 'Self-employed')
find_and_replace(df, 'Workclass', '?', 'Other')
###Output
_____no_output_____
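###Markdown
`find_and_replace` is imported from `scripts.project_functions`; judging by how it is called, it presumably performs a regex replacement on one column. A sketch of the assumed behaviour, under a different name so it does not shadow the real helper (this is a guess, not the actual project code):
###Code
def find_and_replace_sketch(df, column, pattern, replacement):
# assumed behaviour: regex-replace matching values in the given column, in place
df[column] = df[column].replace(to_replace=pattern, value=replacement, regex=True)
return df
###Output
_____no_output_____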
###Markdown
Group together similar educations:
###Code
find_and_replace(df, 'Education', '11th|9th|7th-8th|5th-6th|10th|1st-4th|12th|Preschool|Replaced', 'Didnt-grad-HS')
find_and_replace(df, 'Education', 'Some-college', 'Bachelors')
###Output
_____no_output_____
###Markdown
Group together married people into several more defined groups:
###Code
find_and_replace(df, 'Marital Status', 'Married-spouse-absent', 'Married-civ-spouse')
find_and_replace(df, 'Marital Status', 'Married-AF-spouse', 'Married-civ-spouse')
find_and_replace(df, 'Marital Status', 'Married-civ-spouse', 'Married')
###Output
_____no_output_____
###Markdown
Group together like occupations:
###Code
find_and_replace(df, 'Occupation', '?', 'Other')
find_and_replace(df, 'Occupation', 'Other-service', 'Other')
find_and_replace(df, 'Occupation', 'Armed-Forces', 'Protective-serv')
###Output
_____no_output_____
###Markdown
Here we can group together workclass and salary so we can visualize how many are in each category:
###Code
grouped=df.groupby(['Workclass','Salary']).size()
grouped
grouped2=df.groupby(['Sex','Salary']).size().unstack()
grouped2
###Output
_____no_output_____
###Markdown
Initial visualizations
###Code
sns.catplot(
x="Workclass",
hue="Salary",
kind="count",
data=df,
).set_xticklabels(rotation=50)
plt.title("Salary of Different Workclasses", size=20)
###Output
_____no_output_____
###Markdown
AnalysisFrom the visualization above we can see some interesting information. If you want a salary of >50k, your best odds are being self-employed. We can also see that the majority of the workforce works for private companies, and that no matter which working class you are in, a salary of >50k is possible according to our data.
###Code
sns.catplot(
x="Education",
hue="Salary",
kind="count",
data=df
).set_xticklabels(rotation=50)
plt.title("The distribution of Salary by Education",size=20)
###Output
_____no_output_____
###Markdown
AnalysisHere we can see that at almost every education level the majority earn less than 50 thousand dollars. Only high levels of education, such as a masters degree and above, have a majority earning more than 50 thousand dollars annually. However, the number of people at these education levels is relatively small. Hence, it appears that higher education is harder to attain, and those who do attain it tend to earn higher incomes.
###Code
sns.catplot(
x="Marital Status",
hue="Salary",
kind="count",
data=df
).set_xticklabels(rotation=70)
plt.title("Dividing Salary by Different Marital Status'", size=20)
###Output
_____no_output_____
###Markdown
AnalysisHere we can see the distribution of salary based on marital status. For every marital status, the majority earn a lower income (less than or equal to 50 thousand dollars). However, the group of people who are married or have a spouse contains the most people earning more than 50 thousand dollars.
###Code
sns.catplot(
x="Relationship",
hue="Salary",
kind="count",
data=df
).set_xticklabels(rotation=70)
plt.title("Salaries Varying with Relationship Status", size=20)
###Output
_____no_output_____
###Markdown
This visualization supports the previous graph by giving a more detailed explanation.
###Code
sns.catplot(
x="Occupation",
hue="Salary",
kind="count",
data=df,
).set_xticklabels(rotation=70)
plt.title("Dividing Salary by Different Occupations", size=20)
sns.despine()
###Output
_____no_output_____
###Markdown
AnalysisEvery type of occupation has a majority of people earning less than 50 thousand dollars. However, the gap between the two salary groups is much smaller for Executive-managerial and Prof-specialty occupations than for the other jobs.
###Code
plt.figure(figsize=(10,5))
plot=sns.countplot(x="Race", hue="Salary", data=df)
plt.title("Salaries Divided by Different Race Categories", size=20)
sns.despine()
###Output
_____no_output_____
###Markdown
AnalysisThere is no direct correlation between race and the salary. All races seem to be dominated by <=50K salaries, and for some there may be inaccuracies due to small sample sizes.
###Code
plt.figure(figsize=(10,5))
plot=sns.countplot(x="Sex", hue="Salary", data=df)
plt.title("Dividing Salary by Sex", size=20)
sns.despine()
###Output
_____no_output_____
###Markdown
AnalysisAlthough both sexes have a majority of people earning less than 50 thousand dollars a year, the percentage of women with an income above 50 thousand dollars is much smaller than that of their male counterparts.
###Code
sns.boxplot(x='Workclass', y='Hours per Week', data=df)
plt.title("Comparing the Different Salaries by the Hours Per Week", size=20)
sns.despine()
###Output
_____no_output_____
###Markdown
AnalysisFrom this plot we can see that while self-employed workers tend to have the most salaries above $50K, they also show the greatest variance. Every hours bracket has a majority of people earning less than 50 thousand dollars a year, but the gap between the two salary groups appears relatively smaller for those working 50 or 60 hours per week.
###Code
sns.violinplot(data=df, x='Salary', hue='Sex', split=True, y='Hours per Week')
plt.title("Salaries as Hours Worked per Week by Sex", size=20)
sns.despine()
###Output
_____no_output_____
###Markdown
AnalysisThis plot examines how the salaries of men and women vary with how many hours they work per week. For the most part, women and men have the same salaries if they work the same number of hours.
###Code
sns.histplot(data=df, x='Age', hue='Sex', multiple='stack', palette='colorblind', edgecolor='.3', linewidth=.5)
sns.despine()
plt.title("Number of Adults of Each Age", size=20)
###Output
_____no_output_____ |
81_bayesian_deep_learning.ipynb | ###Markdown
bayesian_deep_learning> API details. Bayesian Neural Networks in PyMC3
###Code
%matplotlib inline
import theano
import numpy as np
import pandas as pd
import pymc3 as pm
import seaborn as sns
import matplotlib.pyplot as plt
from warnings import filterwarnings
filterwarnings('ignore')
sns.set_style('white')
import sklearn
from sklearn import datasets
from sklearn.preprocessing import scale
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_moons
###Output
WARNING (theano.tensor.blas): Using NumPy C-API based implementation for BLAS functions.
###Markdown
Generating data
###Code
X, Y = make_moons(n_samples=1000, noise=0.2, random_state=0)
X = scale(X)
X = X.astype(float)
Y = Y.astype(float)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.5)
X_train.shape, Y_train.shape, X_test.shape, Y_test.shape
X[Y==0, 0].shape
# Plot data
fig, ax = plt.subplots(figsize=(12,8))
ax.scatter(X[Y==0, 0], X[Y==0, 1], label='Class 0')
ax.scatter(X[Y==1, 0], X[Y==1, 1], label='Class 1', color='r')
sns.despine()
ax.legend()
ax.set(xlabel='X', ylabel='Y', title='Toy binary classification dataset')
###Output
_____no_output_____
###Markdown
Model specificationHere we will use 2 hidden layers with 5 neurons each
###Code
def construct_nn(ann_input, ann_output):
n_hidden = 5
# Initialize random weights between each layer
init_1 = np.random.randn(X.shape[1], n_hidden).astype(float)
init_2 = np.random.randn(n_hidden, n_hidden).astype(float)
init_out = np.random.randn(n_hidden).astype(float)
with pm.Model() as neural_network:
# Weights from input to hidden layers
weights_in_1 = pm.Normal('w_in_1', 0, sd=1,
shape=(X.shape[1], n_hidden),
testval=init_1)
weights_1_2 = pm.Normal('w_1_2', 0, sd=1,
shape=(n_hidden, n_hidden),
testval=init_2)
weights_2_out = pm.Normal('w_2_out', 0, sd=1,
shape=(n_hidden,),
testval=init_out)
# Build neural-network using tanh activation
act_1 = pm.math.tanh(pm.math.dot(ann_input, weights_in_1))
act_2 = pm.math.tanh(pm.math.dot(act_1, weights_1_2))
act_out = pm.math.sigmoid(pm.math.dot(act_2, weights_2_out))
# Binary classification -> Bernoulli likelihood
out = pm.Bernoulli('out', act_out, observed=ann_output,
total_size=Y_train.shape[0]) # Important for minibatches
return neural_network
%%time
ann_input = theano.shared(X_train)
ann_output = theano.shared(Y_train)
neural_network = construct_nn(ann_input, ann_output)
###Output
CPU times: user 422 ms, sys: 227 ms, total: 649 ms
Wall time: 17 s
###Markdown
Variational Inference: Scaling model complexity
###Code
# !pip uninstall theano-pymc # run a few times until it says not installed
# !pip install "pymc3<3.10" "theano==1.0.5"
# Better using:
# !conda install "CPython==3.7.7" "pymc3==3.9.0" "theano==1.0.4"
from pymc3.theanof import set_tt_rng, MRG_RandomStreams
set_tt_rng(MRG_RandomStreams(42))
%%time
with neural_network:
inference = pm.ADVI()
approx = pm.fit(n=50000, method=inference)
trace = approx.sample(draws=5000)
plt.plot(-inference.hist)
plt.ylabel('ELBO')
plt.xlabel('iteration')
###Output
_____no_output_____
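###Markdown
The `total_size` argument set inside `construct_nn` only matters when training on minibatches rather than the full arrays; a hedged sketch of that variant, following the usual PyMC3 minibatch pattern (the batch size of 50 is an arbitrary choice), would be:
###Code
# Stream the training data in minibatches instead of passing full arrays
minibatch_x = pm.Minibatch(X_train, batch_size=50)
minibatch_y = pm.Minibatch(Y_train, batch_size=50)
neural_network_minibatch = construct_nn(minibatch_x, minibatch_y)
with neural_network_minibatch:
    approx_mb = pm.fit(40000, method=pm.ADVI())
###Output
_____no_output_____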
###Markdown
Prediction
###Code
# Replace arrays our NN references with test data
ann_input.set_value(X_test)
ann_output.set_value(Y_test)
with neural_network:
ppc = pm.sample_posterior_predictive(trace, samples=500, progressbar=False)
# Use probability of > 0.5 to assume prediction of class 1
pred = ppc['out'].mean(axis=0) > 0.5
# Plot predictions
fig, ax = plt.subplots()
ax.scatter(X_test[pred==0, 0], X_test[pred==0, 1])
ax.scatter(X_test[pred==1, 0], X_test[pred==1, 1], color='r')
sns.despine()
ax.set(title='Predicted labels in testing set', xlabel='X', ylabel='Y');
print('Accuracy = {}%'.format((Y_test == pred).mean() * 100))
###Output
Accuracy = 95.0%
###Markdown
What the classifier has learned
###Code
grid = pm.floatX(np.mgrid[-3:3:100j,-3:3:100j])
grid_2d = grid.reshape(2, -1).T
dummy_out = np.ones(grid.shape[1], dtype=np.int8)
ann_input.set_value(grid_2d)
ann_output.set_value(dummy_out)
with neural_network:
ppc = pm.sample_posterior_predictive(trace, samples=500, progressbar=False)
###Output
_____no_output_____
###Markdown
Probability surface
###Code
cmap = sns.diverging_palette(250, 12, s=85, l=25, as_cmap=True)
fig, ax = plt.subplots(figsize=(14, 8))
contour = ax.contourf(grid[0], grid[1], ppc['out'].mean(axis=0).reshape(100, 100), cmap=cmap)
ax.scatter(X_test[pred==0, 0], X_test[pred==0, 1])
ax.scatter(X_test[pred==1, 0], X_test[pred==1, 1], color='r')
cbar = plt.colorbar(contour, ax=ax)
_ = ax.set(xlim=(-3, 3), ylim=(-3, 3), xlabel='X', ylabel='Y');
cbar.ax.set_ylabel('Posterior predictive mean probability of class label = 0');
###Output
_____no_output_____
###Markdown
Uncertainty in predicted value
###Code
cmap = sns.cubehelix_palette(light=1, as_cmap=True)
fig, ax = plt.subplots(figsize=(14, 8))
contour = ax.contourf(grid[0], grid[1], ppc['out'].std(axis=0).reshape(100, 100), cmap=cmap)
ax.scatter(X_test[pred==0, 0], X_test[pred==0, 1])
ax.scatter(X_test[pred==1, 0], X_test[pred==1, 1], color='r')
cbar = plt.colorbar(contour, ax=ax)
_ = ax.set(xlim=(-3, 3), ylim=(-3, 3), xlabel='X', ylabel='Y');
cbar.ax.set_ylabel('Uncertainty (posterior predictive standard deviation)');
###Output
_____no_output_____ |
notebooks/add non-linearities.ipynb | ###Markdown
Imports
###Code
!pip install sklearn --upgrade
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
import os
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LinearRegression,LogisticRegression
from sklearn.metrics import mean_squared_error, f1_score
from sklearn.linear_model import Ridge, RidgeCV, Lasso, LassoCV, RidgeClassifier, RidgeClassifierCV
###Output
_____no_output_____
###Markdown
Loading data
###Code
employee = pd.read_csv('C:/py/data/attrition/employee_process.csv')
y=employee['y']
X=employee.drop(columns=['y'])
###Output
_____no_output_____
###Markdown
Add non-linearities
###Code
augmentation = []
for var in X.columns:
    # skip binary (dummy) columns; only continuous variables get polynomial terms
    if X[var].unique().tolist() in ([0, 1], [1, 0]):
        pass
    else:
        augmentation.append(var)
for var in augmentation :
X[var+'_squared'] = X[var]**2
X[var+'_cube'] = X[var]**3
'''for i in range(len(augmentation)-1):
X[augmentation[i]+'_'+augmentation[i+1]] = X[augmentation[i]]*X[augmentation[i+1]]'''
###Output
_____no_output_____
###Markdown
Split into train and test subsample
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
X_train = pd.DataFrame(X_train, columns=X.columns)
X_test = pd.DataFrame(X_test, columns=X.columns)
###Output
_____no_output_____
###Markdown
Definition of the metrics function
###Code
from sklearn.metrics import confusion_matrix
from sklearn.metrics import recall_score
from sklearn.metrics import precision_score
from sklearn.metrics import f1_score,accuracy_score
from sklearn.metrics import precision_recall_curve, PrecisionRecallDisplay
def recall_precision(y_test,y_pred,model):
precision, recall, thresholds = precision_recall_curve(y_test,y_pred)
# convert to f score
fscore = (2 * precision * recall) / (precision + recall)
fscore[np.isnan(fscore)] = 0
#print(fscore)
# locate the index of the largest f score
ix = np.argmax(fscore)
optimal_threshold = thresholds[ix]
# plot the recall precision curve for the model
plt.plot(recall, precision, marker='.', label=model)
no_skill = len(y_test[y_test==1]) / len(y_test)
plt.plot([0,1], [no_skill,no_skill], linestyle='--', label='No Skill')
plt.scatter(recall[ix], precision[ix], marker='o', color='black', label='Best')
# axis labels
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.legend()
# show the plot
plt.show()
print("\nTEST")
print('Best Threshold=%f, F-Score=%.3f' % (optimal_threshold, fscore[ix]))
y_pred2= np.where(y_pred <= optimal_threshold , 0, 1)
cm = confusion_matrix(y_test, y_pred2)
print("Confusion matrix \n" , cm)
precision = cm[1,1]/sum(cm[:,1])
print("\nPrecision : " + str(precision))
recall = cm[1,1]/sum(cm[1,:])
print("Recall : " + str(recall))
f1 = 2*((precision*recall)/(precision+recall))
print("F1 : " + str(f1))
residual = (y_test - y_pred)
print("\n Residuals distribution : \n", residual.describe())
return optimal_threshold
###Output
_____no_output_____
###Markdown
OLS
###Code
model = LinearRegression().fit(X_train, y_train)
y_ols = model.predict(X_test)
Tols = recall_precision(y_test,y_ols,'Ordinary least square')
y_ols2= np.where(y_ols <= Tols , 0, 1)
sort = abs(model.coef_).argsort()
sort = sort[-20:]
plt.barh(X.columns[sort], model.coef_[sort])
plt.xlabel("Feature Importance")
plt.show()
###Output
_____no_output_____
###Markdown
Logistic regression
###Code
clf = LogisticRegression().fit(X_train, y_train)
y_log = clf.predict_proba(X_test)[:,1]
recall_precision(y_test,y_log,'Logistic regression')
y_log2 = np.where(y_log <= 0.489009 , 0, 1)
sort = abs(clf.coef_[0]).argsort()
sort = sort[-20:]
plt.barh(X.columns[sort], clf.coef_[0][sort])
plt.xlabel("Feature Importance")
plt.show()
###Output
_____no_output_____
###Markdown
Ridge
###Code
alphas = 10**np.linspace(5,0,1000)*0.5  # 1000 log-spaced penalties, from 5e4 down to 0.5
# Find the best alpha by cross-validation
ridgecv = RidgeCV(alphas = alphas,
cv=10,
scoring = 'neg_mean_squared_error')
ridgecv.fit(X_train, y_train)
print("The optimal alpha seems to be :",round(ridgecv.alpha_,4))
# Train the model
Rclf = Ridge(alpha = ridgecv.alpha_)
trained_Rclf = Rclf.fit(X_train,y_train)
# Prediction
y_ridge = trained_Rclf.predict(X_test)
Tridge = recall_precision(y_test,y_ridge,'Ridge')
y_ridge2 = np.where(y_ridge <= Tridge , 0, 1)
print('The automatic selection method Ridge selected', sum(trained_Rclf.coef_ > 0.001), 'variables in a pool of 95.')
sort = abs(Rclf.coef_).argsort()
sort = sort[-10:]
plt.barh(X.columns[sort], Rclf.coef_[sort])
plt.xlabel("Feature Importance")
plt.show()
sort_ = abs(Rclf.coef_).argsort()
sort_ = sort_[-48:]
selection = pd.DataFrame({
'Variables' : X.columns[sort_],
'Coefficient' : Rclf.coef_[sort_]
})
selection['trie'] = abs(selection['Coefficient'])
selection = selection.sort_values(by=['trie'],ascending=False)
selection = selection.drop(columns=['trie'])
var_ridge = selection['Variables']
selection
#lassocv = LassoCV(alphas = alphas,cv= 10)
lassocv = LassoCV(alphas = None,
cv = 10,
random_state = 0,
max_iter = 10000)
lassocv.fit(X_train, y_train)
# Optimal alpha
print("The optimal alpha seems to be : ", round(lassocv.alpha_,4))
lasso = Lasso(max_iter = 100000)
# Train the model
lasso.set_params(alpha=lassocv.alpha_)
lasso.fit(X_train, y_train)
# Prediction
y_lasso = lasso.predict(X_test)
Tlasso = recall_precision(y_test,y_lasso,'Lasso')
y_lasso2 = np.where(y_lasso <= Tlasso , 0, 1)
print('The automatic selection method lasso selected', sum(lasso.coef_ > 0.001), 'variables in a pool of 117.')
sort = abs(lasso.coef_).argsort()
sort = sort[-10:]
plt.barh(X.columns[sort], lasso.coef_[sort])
plt.xlabel("Feature Importance")
plt.show()
sort_ = abs(lasso.coef_).argsort()
sort_ = sort_[-35:]
selection = pd.DataFrame({
'Variables' : X.columns[sort_],
'Coefficient' : lasso.coef_[sort_]
})
selection['trie'] = abs(selection['Coefficient'])
selection = selection.sort_values(by=['trie'],ascending=False)
selection = selection.drop(columns=['trie'])
var_lasso = selection['Variables']
selection
import statsmodels.api as sm
X_ols = X[var_lasso]
X_ols = sm.add_constant(X_ols)
model = sm.OLS(y,X_ols)
results = model.fit()
results.summary()
###Output
_____no_output_____
###Markdown
Model comparisons Residuals analysis
###Code
sns.set(style="ticks", color_codes=True)
sns.distplot((y_test-y_ols), hist=False, kde=True, kde_kws = {'shade': False, 'linewidth' : 1},
bins=50, color = 'orange',
hist_kws={'edgecolor':'black'})
sns.distplot((y_test-y_ridge), hist=False, kde=True, kde_kws = {'shade': False, 'linewidth' : 1},
bins=50, color = 'blue',
hist_kws={'edgecolor':'black'})
sns.distplot((y_test-y_lasso), hist=False, kde=True, kde_kws = {'shade': False, 'linewidth' : 1},
bins=50, color = 'green',
hist_kws={'edgecolor':'black'})
plt.xlabel('Residuals')
plt.ylabel('Density')
plt.legend(labels=["OLS","Ridge", "Lasso"])
plt.title('Residuals distribution')
###Output
C:\Users\Bruguet\anaconda3\lib\site-packages\seaborn\distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `kdeplot` (an axes-level function for kernel density plots).
warnings.warn(msg, FutureWarning)
C:\Users\Bruguet\anaconda3\lib\site-packages\seaborn\distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `kdeplot` (an axes-level function for kernel density plots).
warnings.warn(msg, FutureWarning)
C:\Users\Bruguet\anaconda3\lib\site-packages\seaborn\distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `kdeplot` (an axes-level function for kernel density plots).
warnings.warn(msg, FutureWarning)
###Markdown
Using PrecisionRecallDisplay.**from_estimator**
###Code
def recall (y_test,y_pred,model) :
precision, recall, thresholds = precision_recall_curve(y_test,y_pred)
# convert to f score
fscore = (2 * precision * recall) / (precision + recall)
# locate the index of the largest f score
ix = np.argmax(fscore)
# plot the roc curve for the model
recall_plot = plt.plot(recall, precision, marker='.', label=model)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.legend()
# show the plot
return(recall_plot)
recall_ols = recall(y_test,y_ols, 'Ordinary least square')
recall_rige = recall(y_test,y_ridge, 'Ridge')
recall_lasso = recall(y_test,y_lasso,'Lasso')
f_score = pd.DataFrame({
'Algorithm' : ['OLS','Ridge','Lasso'],
'y_pred' : [y_ols2,y_ridge2,y_lasso2]
})
f1_list = []
for index, row in f_score.iterrows():
f1 = f1_score(y_test, row['y_pred'], average='binary')
f1_list.append(f1)
acc_list = []
for index, row in f_score.iterrows():
acc = accuracy_score(y_test, row['y_pred'])
acc_list.append(acc)
f_score['f1']=f1_list
f_score['acc']=acc_list
f_score = f_score.drop(['y_pred'], axis=1)
f_score.sort_values(by=['f1'],ascending=False)
###Output
_____no_output_____ |
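###Markdown
The heading above refers to the scikit-learn display API (named `from_estimator`/`from_predictions`, available from scikit-learn 1.0), although the cell plots the curves manually; a hedged sketch of the same plot via that API would be:
###Code
# Assumes scikit-learn >= 1.0; y_lasso holds continuous scores
PrecisionRecallDisplay.from_predictions(y_test, y_lasso, name='Lasso')
plt.title('Precision-Recall curve (Lasso)')
plt.show()
###Output
_____no_output_____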
.ipynb_checkpoints/3D_House-checkpoint.ipynb | ###Markdown
Import Libraries
###Code
import os
import shutil
import folium
import rioxarray
import numpy as np
import plotly.graph_objects as go
from glob import glob
from osgeo import gdal
from pyproj import Transformer
from natsort import natsorted, ns
import geopy
from geopy.geocoders import Nominatim
import rasterio
from rasterio.plot import show
%matplotlib inline
# search_address = "Bolivarplaats 20, 2000 Antwerpen"
# search_address = "Steenplein 1, 2000 Antwerpen"
# search_address = "Bolivarplaats 20, 2000 Antwerpen"
print("Format example: Steenplein 1, 2000 Antwerpen")
search_address = input("Enter the address")
###Output
Format example: Steenplein 1, 2000 Antwerpen
Enter the addressSteenplein 1, 2000 Antwerpen
###Markdown
Generate directories for proper file management
###Code
main_dir = ['3D Image',"search address data", 'DSM', 'DTM']
for i in main_dir:
if not os.path.exists(i):
os.makedirs(i)
###Output
_____no_output_____
###Markdown
Search for all ".tif" files in the working directory and sort the files
###Code
def search_tif(path):
tif_files =[]
# using glob library to get all the file with .tif
files = glob(path,recursive = True)
for file in files:
tif_files.append(file)
# ascending order sort files with number in the file
tif_files = natsorted(tif_files, alg=ns.IGNORECASE)
return tif_files
DSM_vla_tif = search_tif('.\\DSM\\**\\*.tif')
DTM_vla_tif = search_tif('.\\DTM\\**\\*.tif')
###Output
_____no_output_____
###Markdown
Geo Location of Single Address---* "User_Agent" is an HTTP request header sent with each request. Nominatim requires this value to be set to your application name, so that it can limit the number of requests per application. The "geocode" method resolves a location from a string, and a "reverse" method resolves a pair of coordinates to an address.* Folium makes it easy to visualize map data interactively: the 'Map' class draws maps that can be saved as an image or as interactive HTML, and "Marker" sets the marker type.* "pyproj.Transformer" can perform 2D, 3D, and 4D (time) transformations. Transform Geo to Bel Geo coordinates---* Receive the X and Y values in EPSG:31370 to compare with the .tif files and find which file contains the location of the x and y Function to create Map Location using GeoPy
###Code
# to get the longtitude and latitude of the address entered & plot address on a map
def lat_long(address):
# GeoPy to get longtitude and latitude
geolocator = Nominatim(user_agent="Address_GeoLocator")
location = geolocator.geocode(address)
house_lat_long = [location.latitude, location.longitude]
return house_lat_long
def house_locate(func):
# to plot address
house_locate = folium.Map(location=func,zoom_start=18)
folium.Marker(location=func,
popup=list(func),
icon=folium.Icon(color='green', icon='location-arrow', prefix='fa') # Customize Icon
).add_to(house_locate)
#house_locate.save(address+".html")
return house_locate
def EPSG_Bel(lon, lat):
# transform to Belgium 'EPSG:31370' coordinate
transformer = Transformer.from_crs("EPSG:4326",
crs_to = 'EPSG:31370',
always_xy=True) #output coordinates using the traditional GIS order
x, y = transformer.transform(lon, lat)
return x,y
lat,lon = lat_long(search_address)
x, y = EPSG_Bel(lon, lat)
print((x, y) , (lat, lon))
###Output
(152001.76442827538, 212535.85022036918) (51.22276185, 4.397410773371294)
###Markdown
Getting Longitude and Latitude from the ".tif" file___* In a GIS raster dataset, every pixel is contained within a spatial bounding box
###Code
# create all bounding box from tifs
def bounding_box(tifs):
bounds = []
for i in tifs:
src = rasterio.open(i) # open the source file
bounds.append(src.bounds) # grab the bounding box corordinates
return bounds
# Locate the tif that contains the location and get the matching file paths
def check_tif(x,y):
    for i,b_box in enumerate(bounding_box_cordinates,1): # number the bounding boxes
        if (x >= b_box[0] and x <= b_box[2]) & (y >= b_box[1] and y <= b_box[3]): # condition to filter the coordinates
            if i in range(1,10): # add '0' for single digit numbers to get the correct files
i = "0" + str(i)
else:
i = str(i)
dsm_path = f'./DSM/DHMVIIDSMRAS1m_k{i}/GeoTIFF/DHMVIIDSMRAS1m_k{i}.tif'
dtm_path = f'./DTM/DHMVIIDTMRAS1m_k{i}/GeoTIFF/DHMVIIDTMRAS1m_k{i}.tif'
print('DSM File :', f'DHMVIIDSMRAS1m_k{i}.tif')
print('DTM File :', f'DHMVIIDTMRAS1m_k{i}.tif')
return dsm_path, dtm_path
bounding_box_cordinates = bounding_box(DSM_vla_tif)
tif_path = check_tif(x,y)
print()
dsm_location = tif_path[0]
print(dsm_location)
dtm_location = tif_path[1]
print(dtm_location)
###Output
DSM File : DHMVIIDSMRAS1m_k15.tif
DTM File : DHMVIIDTMRAS1m_k15.tif
./DSM/DHMVIIDSMRAS1m_k15/GeoTIFF/DHMVIIDSMRAS1m_k15.tif
./DTM/DHMVIIDTMRAS1m_k15/GeoTIFF/DHMVIIDTMRAS1m_k15.tif
###Markdown
Function to clip the house model
###Code
"""
Detail documentaion: xarray - Clip
https://corteva.github.io/rioxarray/stable/examples/clip_geom.html?highlight=rio%20clip
"""
def clip_tif(path,window_size=30):
# work with any file that rasterio can open, generate 2D coordinates from the file’s attributes
da = rioxarray.open_rasterio(path,masked=True) # masked=True will convert from integer to float64 and fill with NaN
# Filter which file is function working with 'DSM' or 'DTM'
data_file = path[2:5].lower()
# set window size
ws = window_size # meters
# create coordinates and geometries
c_1 = [(x-ws),(y+ws)]
c_2 = [(x+ws),(y+ws)]
c_3 = [(x+ws),(y-ws)]
c_4 = [(x-ws),(y-ws)]
geometries = [{'type': 'Polygon', 'coordinates': [[c_1,c_2, c_3,c_4,c_1]]}]
# clip the image as per the geometries size
clipped = da.rio.clip(geometries)
# save clip for Canopy Height Model
clip = clipped.rio.to_raster(f"{search_address}_clipped_{data_file}.tif", dtype="int32", tiled=True)
shutil.move(f"{search_address}_clipped_{data_file}.tif", f"./search address data/{search_address}_clipped_{data_file}.tif")
return clipped.plot();
# processing speed
clip_tif(dsm_location,100);
clip_tif(dtm_location,100);
###Output
_____no_output_____
###Markdown
Function for Canopy Height Model
###Code
"""
Detail documentaion: Numpy masked arrays
https://rasterio.readthedocs.io/en/latest/topics/masks.html?highlight=read(1%2C%20masked%3DTrue)#numpy-masked-arrays
"""
def chm_tif():
# open the digital terrain model
with rasterio.open(f'search address data/{search_address}_clipped_dtm.tif') as src_dataset:
dtm_narr = src_dataset.read(1, masked=True) #read dataset bands as numpy masked arrays
dtm_profile = src_dataset.profile
src_dataset.close()
# open the digital surface model
with rasterio.open(f'search address data/{search_address}_clipped_dsm.tif') as src_dataset:
dsm_narr = src_dataset.read(1, masked=True) # read dataset bands as numpy masked arrays
dsm_profile = src_dataset.profile
src_dataset.close()
# calculate canopy height model
chm_narr = dsm_narr - dtm_narr
# save chm clipped
with rasterio.open(f'search address data/{search_address}_clipped_chm.tif', 'w', **dsm_profile) as dst_dataset:
# Write data to the destination dataset.
dst_dataset.write(chm_narr,1)
dst_dataset.close()
chm_tif = f'search address data/{search_address}_clipped_chm.tif'
return chm_tif
ds = gdal.Open(chm_tif())
data = ds.ReadAsArray()
data = data.astype(np.float32)
ds = None # Close gdal.open
house_locate(lat_long(search_address))
fig = go.Figure(data=[go.Surface(z=data)])
fig.update_traces(contours_z=dict(show=True, usecolormap=True, highlightcolor="limegreen", project_z=True))
fig.update_layout(title=search_address)
fig.show()
fig.write_image(f"./{main_dir[0]}/{search_address}.png")
# to view actual house on google map
import webbrowser
url = 'https://www.google.com.my/maps/place/'+str(lat)+','+str(lon)
webbrowser.open(url)
###Output
_____no_output_____ |
Machine Learning/Sample solutions/U2/Clustering_Solution.ipynb | ###Markdown
First of all, we are going to introduce the dataset that we will apply our clustering method to:
###Code
def twospirals(n_points, noise=.5):
"""
Returns the two spirals dataset.
"""
epsilon = 0.1
n = (np.random.rand(n_points,1)+epsilon) * 780 * (2*np.pi)/360
d1x = -np.cos(n)*n + np.random.rand(n_points,1) * noise
d1y = np.sin(n)*n + np.random.rand(n_points,1) * noise
# hstack/vstack stacks data on top of each other (print shape to see what I mean)
C_1 = np.hstack((d1x,d1y))
C_2 = np.hstack((-d1x,-d1y))
return np.vstack((C_1, C_2))
###Output
_____no_output_____
###Markdown
This is a dataset consisting of clusters twisting around each other. You don't need to understand the mathematics behind it, but you can play around with it if you like (make sure to train on the original dataset, not one you created)
###Code
np.random.seed(10)
data_size = 500
dataset = twospirals(data_size)
labels = np.hstack((np.zeros(data_size),np.ones(data_size)))
# scatter makes a 2D scatter plot. Unfortunately you have to seperate the x-dim from the y-dim
# the labels are helpful for coloring. The algorithm does not use them, since this is unsupervised
plt.scatter(dataset[:,0], dataset[:,1], c = labels)
plt.axis('equal')
plt.show()
###Output
_____no_output_____
###Markdown
a) Implement the DBSCAN algorithm to classify points of the two clusters.b) Plot a scatter plot highlighting the clusters that were found after finding good hyperparameter values eps and minPts.c) Print accuracies for different data_size values.d) For what kind of data_size values does the algorithm fail and why? What would you say are disadvantages of DBSCAN?
###Code
def euclidean_distance(x_1, x_2):
return np.sqrt(np.sum((x_1-x_2)**2, axis = 1))
def rangeQuery(x, eps = 0.5):
distances = euclidean_distance(x, dataset)
neighbor_booleans = distances <= eps
N = dataset[neighbor_booleans]
Positions = np.arange(2*data_size)[neighbor_booleans]
return set(Positions)
minPts = 2
eps = 1.7
labels = {}
C = 0
for i, x in enumerate(dataset):
if i in labels:
continue
N_Positions = rangeQuery(x, eps)
if len(N_Positions) < minPts:
labels[i] = 0 # 0 is the noise label
continue
C = C + 1
labels[i] = C
S_Positions = N_Positions.difference({i})
S_list = list(S_Positions)
for j in S_list:
if j in labels and labels[j] == 0:
labels[j] = labels[i]
if j in labels:
continue
labels[j] = labels[i]
N_Positions = rangeQuery(dataset[j], eps)
if len(N_Positions) >= minPts:
S_append = S_Positions.union(N_Positions).difference(S_Positions)
S_list.extend(list(S_append))
clusters = np.zeros(2*data_size)
print(np.array(list(labels.keys())).astype(int).shape)
clusters[np.array(list(labels.keys())).astype(int)] = np.array(list(labels.values()))
print(C)
plt.scatter(dataset[:,0], dataset[:,1], c = clusters)
plt.axis('equal')
plt.show()
###Output
(1000,)
2
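###Markdown
Part c) asks to print accuracies, which the solution above never does. Since DBSCAN's cluster IDs (1 and 2 here) are arbitrary and the name `labels` was reused for the dict, a hedged sketch rebuilds the ground truth and scores both possible ID-to-class mappings (noise points, labeled 0, simply count as one of the classes):
###Code
true_labels = np.hstack((np.zeros(data_size), np.ones(data_size)))  # rebuild ground truth
acc = max(np.mean((clusters == 1) == (true_labels == 0)),
          np.mean((clusters == 1) == (true_labels == 1)))
print('accuracy:', acc)
###Output
_____no_output_____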
|
notebooks/county_health_rankings.ipynb | ###Markdown
Outcomes & Factors Rankings
###Code
cols = [
'fips', 'state', 'county', 'num_of_ranked_counties', 'health_outcomes_rank',
'health_outcomes_quartile', 'health_factors_rank', 'health_factors_quartile'
]
hr['Outcomes & Factors Rankings'].columns = cols
hr['Outcomes & Factors Rankings'] = hr['Outcomes & Factors Rankings'].loc[1:, [c for c in cols if c not in ['state', 'county']]]
county_rankings = counties.merge(hr['Outcomes & Factors Rankings'], on='fips', how='left')
outcomes_and_factors_syn = syn.setProvenance(
syn.store(Table(
Schema(
name='County Health Rankings (Summarized)',
columns=as_table_columns(county_rankings), parent='syn16816579'), county_rankings)
),
activity=Activity(
name='County Health Rankings',
description='Overall inner-state county health rankings.',
used=[
dict(
name='County Health Rankings and Roadmaps',
url='http://www.countyhealthrankings.org/explore-health-rankings/rankings-data-documentation'
)
]
)
)
###Output
_____no_output_____
###Markdown
Outcomes & Factors SubRankings
###Code
cols = [
'fips', 'state', 'county', 'num_of_ranked_counties',
'length_of_life_rank', 'length_of_life_quartile',
'quality_of_life_rank', 'quality_of_life_quartile',
'health_behaviors_rank', 'health_behaviors_quartile',
'clinical_care_rank', 'clinical_care_quartile',
'social_and_economic_factors_rank', 'social_and_economic_factors_quartile',
'physical_environment_rank', 'physical_environment_quartile'
]
hr['Outcomes & Factors SubRankings'].columns = cols
hr['Outcomes & Factors SubRankings'] = hr['Outcomes & Factors SubRankings'].loc[1:, [c for c in cols if c not in ['state', 'county']]]
county_rankings_sub = counties.merge(hr['Outcomes & Factors SubRankings'], on='fips', how='left')
subrankings = syn.setProvenance(
syn.store(Table(
Schema(
name='County Health Rankings (SubMeasures)',
columns=as_table_columns(county_rankings_sub), parent='syn16816579'), county_rankings_sub)
),
activity=Activity(
name='Parse Into Synapse Table',
description='Extract Excel sheets from original data source into Synapse table.',
used=[
dict(
name='County Health Rankings and Roadmaps',
url='http://www.countyhealthrankings.org/explore-health-rankings/rankings-data-documentation'
)
]
)
)
###Output
_____no_output_____
###Markdown
Ranked Measure Data
###Code
base_cols = [
'fips', 'state', 'county'
]
health_cols = [
'premature_death_yopllr', 'premature_death_yopllr_cilow', 'premature_death_yopllr_ciup', 'premature_death_yopllr_quartile',
'premature_death_yopllr_black', 'premature_death_yopllr_hispanic', 'premature_death_yopllr_white',
'poor_or_fair_health', 'poor_or_fair_health_cilow', 'poor_or_fair_health_ciup', 'poor_or_fair_health_quartile',
'physically_unhealthy_days', 'physically_unhealthy_days_cilow', 'physically_unhealthy_days_ciup', 'physically_unhealthy_days_quartile',
'mentally_unhealthy_days', 'mentally_unhealthy_days_cilow', 'mentally_unhealthy_days_ciup', 'mentally_unhealthy_days_quartile',
'low_birthweight_unreliable', 'low_birthweight', 'low_birthweight_cilow', 'low_birthweight_ciup', 'low_birthweight_quartile',
'low_birthweight_black', 'low_birthweight_hispanic', 'low_birthweight_white',
'adult_smokers', 'adult_smokers_cilow', 'adult_smokers_ciup', 'adult_smokers_quartile',
'adult_obesity', 'adult_obesity_cilow', 'adult_obesity_ciup', 'adult_obesity_quartile',
'food_environment_index', 'food_environment_index_quartile',
'physically_inactive', 'physically_inactive_cilow', 'physically_inactive_ciup', 'physically_inactive_quartile',
'access_to_exercise', 'access_to_exercise_quartile',
'excessive_drinking', 'excessive_drinking_cilow', 'excessive_drinking_ciup', 'excessive_drinking_quartile',
'num_alchohol_impaired_driving_deaths', 'num_driving_deaths', 'perc_alchohol_impaired', 'perc_alchohol_impaired_cilow', 'perc_alchohol_impaired_ciup', 'perc_alchohol_impaired_quartile',
'num_chlamydia_cases', 'chlamydia_rate', 'chlamydia_quartile',
'teen_birth_rate', 'teen_birth_rate_cilow', 'teen_birth_rate_ciup', 'teen_birth_rate_quartile',
'teen_birth_rate_black', 'teen_birth_rate_white', 'teen_birth_rate_hispanic',
'num_uninsured', 'perc_uninsured', 'perc_uninsured_cilow', 'perc_uninsured_ciup', 'perc_uninsured_quartile',
'num_primary_care_physicians', 'pcp_rate', 'pcp_ratio', 'pcp_quartile',
'num_dentists', 'dentist_rate', 'dentist_ratio', 'dentist_quartile',
'num_mental_health_providers', 'mhp_rate', 'mhp_ratio', 'mhp_quartile',
'preventable_hospital_rate_num_medicare_enrollees', 'preventable_hospital_rate', 'preventable_hospital_rate_cilow', 'preventable_hospital_rate_ciup', 'preventable_hospital_rate_quartile',
'num_diabetics', 'perc_of_diabetics_receiving_hba1c', 'perc_of_diabetics_receiving_hba1c_cilow', 'perc_of_diabetics_receiving_hba1c_ciup', 'perc_of_diabetics_receiving_hba1c_quartile',
'perc_of_diabetics_receiving_hba1c_black', 'perc_of_diabetics_receiving_hba1c_white',
'mammography_screening_num_medicare_enrollees', 'perc_mammography', 'perc_mammography_cilow', 'perc_mammography_ciup', 'perc_mammography_quartile',
'perc_mammography_black', 'perc_mammography_white'
]
education_cols = [
'high_school_grad_cohort_size', 'high_school_grad_rate', 'high_school_grad_quartile',
'num_some_college', 'population', 'perc_some_college', 'perc_some_college_cilow', 'perc_some_college_ciup', 'perc_some_college_quartile',
]
social_factor_cols = [
'num_unemployed', 'labor_force', 'perc_unemployed', 'perc_unemployed_quartile',
'perc_children_in_poverty', 'perc_children_in_poverty_cilow', 'perc_children_in_poverty_ciup', 'perc_children_in_poverty_quartile',
'perc_children_in_poverty_black', 'perc_children_in_poverty_hispanic', 'perc_children_in_poverty_white',
'80th_percentile_income', '20th_percentile_income', 'income_inequality_ratio', 'income_inquality_quartile',
'num_single_parent_households', 'num_households', 'perc_single_parent_households', 'perc_single_parent_households_cilow', 'perc_single_parent_households_ciup', 'perc_single_parent_households_quartile',
'num_social_associations', 'social_association_rate', 'social_association_quartile',
'num_violent_crimes', 'violent_crime_rate', 'violent_crime_quartile',
'num_injury_deaths', 'injury_death_rate', 'injury_death_rate_cilow', 'injury_death_rate_ciup', 'injury_death_rate_quartile',
'average_daily_pm2p5', 'average_daily_pm2p5_quartile',
'presence_of_drinking_water_violation', 'presence_of_drinking_water_violation_quartile',
'num_households_with_severe_housing_problems', 'perc_of_households_with_severe_housing_problems', 'perc_of_households_with_severe_housing_problems_cilow',
'perc_of_households_with_severe_housing_problems_ciup', 'perc_of_households_with_severe_housing_problems_quartile',
'perc_drive_alone_to_work', 'perc_drive_alone_to_work_cilow', 'perc_drive_alone_to_work_ciup', 'perc_drive_alone_to_work_quartile',
'perc_drive_alone_to_work_black', 'perc_drive_alone_to_work_hispanic', 'perc_drive_alone_to_work_white',
'num_of_workers_who_drive_alone', 'perc_of_long_commutes_alone', 'perc_of_long_commutes_alone_cilow', 'perc_of_long_commutes_alone_ciup', 'perc_of_long_commutes_alone_quartile'
]
all_cols = base_cols + health_cols + education_cols + social_factor_cols
hr['Ranked Measure Data'].columns = all_cols
hr['Ranked Measure Data'] = hr['Ranked Measure Data'].loc[1:, [c for c in all_cols if c not in ['state', 'name']]]
chr_measures = counties.merge(hr['Ranked Measure Data'], on='fips', how='left')
def fx(x):
if isnum(x):
return np.round(x, 2)
else:
return x
for c in chr_measures.columns:
if c in ['name', 'state', 'stabbr', 'fips', 'county']:
continue
chr_measures[c] = chr_measures[c].apply(fx)
#chr_health = chr_measures.loc[:, base_cols + health_cols]
chr_education = chr_measures.loc[:, base_cols + education_cols]
#chr_social = chr_measures.loc[:, base_cols + social_factor_cols]
# chr_health = syn.setProvenance(
# syn.store(Table(
# Schema(
# name='County Health Rankings (Health Measures)',
# columns=as_table_columns(chr_health), parent='syn16816579'), chr_health)
# ),
# activity=Activity(
# name='Parse Into Synapse Table',
# description='Extract Excel sheets from original data source into Synapse table.',
# used=[
# dict(
# name='County Health Rankings and Roadmaps',
# url='http://www.countyhealthrankings.org/explore-health-rankings/rankings-data-documentation'
# )
# ]
# )
# )
chr_education = syn.setProvenance(
syn.store(Table(
Schema(
name='County Health Rankings (Education Measures)',
columns=as_table_columns(chr_education), parent='syn16816579'), chr_education)
),
activity=Activity(
name='Parse Into Synapse Table',
description='Extract Excel sheets from original data source into Synapse table.',
used=[
dict(
name='County Health Rankings and Roadmaps',
url='http://www.countyhealthrankings.org/explore-health-rankings/rankings-data-documentation'
)
]
)
)
# chr_social = syn.setProvenance(
# syn.store(Table(
# Schema(
# name='County Health Rankings (Social Measures)',
# columns=as_table_columns(chr_social), parent='syn16816579'), chr_social)
# ),
# activity=Activity(
# name='Parse Into Synapse Table',
# description='Extract Excel sheets from original data source into Synapse table.',
# used=[
# dict(
# name='County Health Rankings and Roadmaps',
# url='http://www.countyhealthrankings.org/explore-health-rankings/rankings-data-documentation'
# )
# ]
# )
# )
###Output
_____no_output_____ |
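###Markdown
The `fx` helper above calls `isnum`, which is not defined anywhere in this excerpt; a plausible sketch (an assumption, not the author's code) is:
###Code
def isnum(x):
    # treat real-valued Python/NumPy scalars as numeric; exclude bools
    return isinstance(x, (int, float, np.integer, np.floating)) and not isinstance(x, bool)
###Output
_____no_output_____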
Aula 2 - Python/Curiosidades sobre o Python.ipynb | ###Markdown
Curiosities about Python Introspection* This is our Help
###Code
b = [1,2,3]
b?
print?
###Output
_____no_output_____
###Markdown
Example of function documentation* If it is a function or an instance method, the docstring, if defined, will also be displayed.
###Code
def soma_numeros(a,b):
    """
    Adds 2 numbers
    Return
    -------
    soma_numeros : a numeric value, b numeric value
    """
    return a+b
soma_numeros?
###Output
_____no_output_____
###Markdown
If you want to inspect the source code
###Code
soma_numeros??
###Output
_____no_output_____ |
Modelado TFM - Machine Learning-corregido cat con one hot.ipynb | ###Markdown
--- Modeling Loading libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestClassifier
from sklearn import naive_bayes
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
from sklearn.feature_selection import VarianceThreshold
from sklearn.metrics import accuracy_score, auc, confusion_matrix, f1_score, precision_score, recall_score, roc_curve
from sklearn.model_selection import cross_val_score
from sklearn.tree import plot_tree
from sklearn.tree import export_graphviz
from sklearn.tree import export_text
from sklearn.metrics import mean_squared_error
from pandas.plotting import scatter_matrix
from datetime import datetime,timedelta
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LinearRegression
###Output
_____no_output_____
###Markdown
Let's read our database
###Code
import io
import requests
url="https://raw.githubusercontent.com/TFM123456/Big_Data_and_Data_Science_UCM/main/datos_galicia_limpio.csv"
s=requests.get(url).content
datos_galicia=pd.read_csv(io.StringIO(s.decode('ISO-8859-1')))
datos_galicia.head()
###Output
_____no_output_____
###Markdown
Let's remove the id and the Unnamed column
###Code
datos_galicia = datos_galicia.drop(columns=["Unnamed: 0"])
datos_galicia = datos_galicia.drop(columns=["id"])
datos_galicia.describe()
datos_galicia.hist(figsize = (12, 12));
datos_galicia.columns
datos_galicia.shape
###Output
_____no_output_____
###Markdown
Let's check whether the data types migrated correctly from R
###Code
datos_galicia.dtypes
###Output
_____no_output_____
###Markdown
Let's change the data type of the date column
###Code
datos_galicia.dtypes
###Output
_____no_output_____
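###Markdown
The cell above only inspects the dtypes; the actual conversion happens further down with `pd.to_datetime`. A sketch of that step (assuming the `fecha` column holds parseable date strings) is:
###Code
# Unparseable values become NaT rather than raising
datos_galicia['fecha'] = pd.to_datetime(datos_galicia['fecha'], errors='coerce')
datos_galicia['fecha'].dtype
###Output
_____no_output_____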
###Markdown
First of all, we create our target variables: losses (perdidas) as the numeric variable and cause (causa) as the categorical one. We start with the numeric one -> Losses
###Code
datos_galicia_num = datos_galicia.copy()
datos_galicia_num['target']=datos_galicia_num['perdidas']
datos_galicia_num.columns
datos_galicia_num = datos_galicia_num.drop(columns=['perdidas'])
###Output
_____no_output_____
###Markdown
We remove our other target variable
###Code
datos_galicia_num = datos_galicia_num.drop(columns=['causa'])
datos_galicia_num.columns
datos_galicia_num.shape
###Output
_____no_output_____
###Markdown
Finally, we have a dataset of 8504 rows and 28 columns. We check that there are no NAs
###Code
datos_galicia_num.isnull().sum()
###Output
_____no_output_____
###Markdown
Now, let's separate the categorical variables from the numeric ones
###Code
lista_numericas=datos_galicia_num._get_numeric_data()
lista_categoricas=datos_galicia_num.select_dtypes(include = ["object"])
###Output
_____no_output_____
###Markdown
We check
###Code
len(lista_categoricas.columns)
len(lista_numericas.columns)
###Output
_____no_output_____
###Markdown
All columns have been included correctly. Let's see what each list contains
###Code
lista_categoricas.columns
lista_numericas.columns
###Output
_____no_output_____
###Markdown
Let's see how the values are distributed across the categorical variables
###Code
for i in lista_categoricas:
print(datos_galicia_num[i].value_counts())
###Output
2002-09-02 42
2001-09-18 42
2006-08-09 37
2006-08-06 33
2002-08-31 33
..
2007-03-08 1
2013-04-19 1
2006-04-18 1
2005-04-11 1
2002-06-02 1
Name: fecha, Length: 1970, dtype: int64
Ourense 2983
A Coruña 2415
Pontevedra 2029
Lugo 1077
Name: idprovincia, dtype: int64
VIANA DO BOLO 216
SANTA COMBA 170
MANZANEDA 164
RODEIRO 147
MUIÑOS 128
...
BERGONDO 1
CARIÑO 1
BEADE 1
CORCUBIÓN 1
LOURENZÁ 1
Name: idmunicipio, Length: 268, dtype: int64
NO INFO 6331
1K-10K 1436
< 1K 541
10K-100K 192
> 100K 4
Name: gastos, dtype: int64
Superior a 125 2983
Inferior a 80 2415
Entre 100-125 2029
NO INFO 1077
Name: ALTITUD, dtype: int64
< 2 m/s 4719
2-4 m/s 2619
4-6 m/s 816
6-8 m/s 248
> 8 m/s 102
Name: VELMEDIA, dtype: int64
Q3 3579
Q1 2070
Q2 2062
Q4 793
Name: Trimestre, dtype: int64
agosto 1526
septiembre 1282
marzo 1205
abril 953
febrero 785
julio 771
junio 655
octubre 486
mayo 454
diciembre 199
noviembre 108
enero 80
Name: Mes, dtype: int64
N 1921
NE 1811
W 1585
NW 1158
E 921
SW 439
S 438
SE 231
Name: DIR_VIENTO, dtype: int64
###Markdown
We drop idmunicipio because it has too many categories
###Code
datos_galicia_num = datos_galicia_num.drop(columns=['idmunicipio'])
datos_galicia_num.shape
###Output
_____no_output_____
###Markdown
We transform the categorical variables -> one-hot encoding This method consists of creating a new binary variable for each category of the original variable, where the observations belonging to that category get a 1 and all others get a 0. In many tasks, such as linear regression, it is common to use k-1 binary variables instead of k, where k is the total number of categories. This is because we would otherwise be adding a redundant extra variable that is nothing but a linear combination of the others, which will most likely hurt the model's performance. Moreover, by dropping one variable we lose no information: if all the remaining categories contain a 0, the observation is understood to belong to the dropped category.
###Code
dummies= pd.get_dummies(datos_galicia_num['idprovincia'], drop_first = True)
dummies.head()
datos_galicia_num = pd.concat([datos_galicia_num, dummies], axis = 1)
###Output
_____no_output_____
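###Markdown
A small illustration of the k vs k-1 point above, on a toy three-category column (the column and values are made up for the example):
###Code
demo = pd.DataFrame({'color': ['red', 'green', 'blue']})
print(pd.get_dummies(demo['color']))                   # k = 3 binary columns
print(pd.get_dummies(demo['color'], drop_first=True))  # k - 1 = 2 columns
###Output
_____no_output_____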
###Markdown
We should now have three more variables
###Code
datos_galicia_num.shape
datos_galicia_num.columns
###Output
_____no_output_____
###Markdown
We drop idprovincia
###Code
datos_galicia_num = datos_galicia_num.drop(columns=['idprovincia'])
datos_galicia_num.shape
###Output
_____no_output_____
###Markdown
We do the same with the rest of the categorical variables
###Code
dummies2= pd.get_dummies(datos_galicia_num['gastos'], drop_first = True)
datos_galicia_num = pd.concat([datos_galicia_num, dummies2], axis = 1)
dummies3= pd.get_dummies(datos_galicia_num['ALTITUD'], drop_first = True)
datos_galicia_num = pd.concat([datos_galicia_num, dummies3], axis = 1)
dummies4= pd.get_dummies(datos_galicia_num['Trimestre'], drop_first = True)
datos_galicia_num = pd.concat([datos_galicia_num, dummies4], axis = 1)
dummies5= pd.get_dummies(datos_galicia_num['DIR_VIENTO'], drop_first = True)
datos_galicia_num = pd.concat([datos_galicia_num, dummies5], axis = 1)
dummies6= pd.get_dummies(datos_galicia_num['VELMEDIA'], drop_first = True)
datos_galicia_num = pd.concat([datos_galicia_num, dummies6], axis = 1)
dummies7= pd.get_dummies(datos_galicia_num['Mes'], drop_first = True)
datos_galicia_num = pd.concat([datos_galicia_num, dummies7], axis = 1)
len(dummies2.columns)+len(dummies3.columns)+len(dummies4.columns)+len(dummies5.columns)+len(dummies6.columns)+len(dummies7.columns)
###Output
_____no_output_____
###Markdown
Now we should have 60
###Code
datos_galicia_num.shape
###Output
_____no_output_____
###Markdown
Now we drop the original variables
###Code
datos_galicia_num = datos_galicia_num.drop(columns=['gastos'])
datos_galicia_num = datos_galicia_num.drop(columns=['ALTITUD'])
datos_galicia_num = datos_galicia_num.drop(columns=['Trimestre'])
datos_galicia_num = datos_galicia_num.drop(columns=['DIR_VIENTO'])
datos_galicia_num = datos_galicia_num.drop(columns=['VELMEDIA'])
datos_galicia_num = datos_galicia_num.drop(columns=['Mes'])
#There are 6, so in the end we should have 54 variables
datos_galicia_num.shape
###Output
_____no_output_____
###Markdown
Modeling We split the data into Train and Test and separate each into x -> inputs (explanatory variables) and y -> outputs (target variable). The Train set is used for training; on Test we will evaluate the results of our predictions.
###Code
datos_galicia_num['fecha'] = pd.to_datetime(datos_galicia_num['fecha'])
datos_galicia_num = datos_galicia_num.drop('fecha', axis = 'columns')
X_train, X_test, y_train, y_test = train_test_split(
datos_galicia_num.drop('target', axis = 'columns'),
datos_galicia_num['target'],
train_size = 0.8,
random_state = 1234,
shuffle = True)
###Output
_____no_output_____
###Markdown
We check the dimensions of Train and Test
###Code
X_train.shape
y_train.shape
X_test.shape
y_test.shape
###Output
_____no_output_____
###Markdown
We normalize
###Code
import tensorflow as tf
from tensorflow import keras
import numpy as np
X_train = np.asarray(X_train).astype(np.float32)
norm= tf.keras.layers.experimental.preprocessing.Normalization(axis = -1,dtype=None,mean = None,variance=None)
norm.adapt(X_train)
x_train_norm = norm(X_train)
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
First we are going to build several types of models with all the variables to see which model fits our data best Multiple linear regression These algorithms learn to predict the value of a continuous variable from one or more explanatory variables.
###Code
from sklearn import linear_model
X_train.shape
model = LinearRegression()
model.fit(X_train, y_train)
model.score(X_train,y_train)
model.score(X_test,y_test)
print('Coefficients: \n', model.coef_)
y_pred = model.predict(X_test)
x_ax = range(len(y_test))
plt.plot(x_ax, y_test, label="original")
plt.plot(x_ax, y_pred, label="predicted")
plt.title("Boston test and predicted data")
plt.xlabel('X-axis')
plt.ylabel('Y-axis')
plt.legend(loc='best',fancybox=True, shadow=True)
plt.grid(True)
plt.show()
###Output
_____no_output_____
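###Markdown
`mean_squared_error` was imported at the top but never used; a brief sketch of an RMSE check on the linear model's test predictions (`y_pred` from the cell above) would be:
###Code
rmse = np.sqrt(mean_squared_error(y_test, y_pred))
print('Linear regression test RMSE:', rmse)
###Output
_____no_output_____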
###Markdown
Decision tree In the regression case, instead of using Gini as the impurity measure, we use MSE, the mean squared error. For this problem, if we use a decision tree of depth 2, we obtain the following tree.
###Code
modelo_arbol = DecisionTreeRegressor()
modelo_arbol.fit(X_train, y_train)
score = modelo_arbol.score(X_train, y_train)
print("R-squared:", score)
modelo_arbol.score(X_train,y_train)
modelo_arbol.score(X_test,y_test)
y_pred = modelo_arbol.predict(X_test)
###Output
_____no_output_____
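###Markdown
The paragraph above mentions a depth-2 tree, while the cell fits an unconstrained one; a sketch of the depth-2 variant, drawn with the `plot_tree` imported at the top, would be:
###Code
arbol_d2 = DecisionTreeRegressor(max_depth=2).fit(X_train, y_train)
plt.figure(figsize=(12, 6))
plot_tree(arbol_d2, filled=True)
plt.show()
###Output
_____no_output_____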
###Markdown
We plot our error
###Code
x_ax = range(len(y_test))
plt.plot(x_ax, y_test, label="original")
plt.plot(x_ax, y_pred, label="predicción")
plt.title("Pérdidas test y predicción de los datos")
plt.xlabel('X-axis')
plt.ylabel('Y-axis')
plt.legend(loc='best',fancybox=True, shadow=True)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Random Forest
###Code
from sklearn.ensemble import RandomForestRegressor
from sklearn.datasets import make_regression
modelo_randfor = RandomForestRegressor().fit(X_train, y_train)
y_pred = modelo_randfor.predict(X_test)
errors = abs(y_pred - y_test)
print('Mean Absolute Error:', round(np.mean(errors), 2), 'degrees.')
modelo_randfor.score(X_train,y_train)
modelo_randfor.score(X_test,y_test)
###Output
_____no_output_____
###Markdown
Naive Bayes It computes the probability by treating the predictor variables as independent, as if each one acted individually on the target variable.
###Code
modelo_bayes = naive_bayes.GaussianNB().fit(X_train, y_train)
y_pred = modelo_bayes.predict(X_test)
print(modelo_bayes.score(X_train, y_train))
print(modelo_bayes.score(X_test, y_test))
###Output
0.004703115814226925
###Markdown
Gradient Boosting
###Code
from sklearn.ensemble import GradientBoostingRegressor
model_Gboost = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
model_Gboost.score(X_train, y_train)
model_Gboost.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
We continue with the categorical target -> Cause
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from pandas.plotting import scatter_matrix
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import accuracy_score, auc, confusion_matrix, f1_score, precision_score, recall_score, roc_curve
import pickle
###Output
_____no_output_____
###Markdown
We create a dataset just like the initial one
###Code
datos_galicia.columns
datos_galicia.shape
datos_galicia_cat = datos_galicia.copy()
datos_galicia_cat.dtypes
datos_galicia_cat.shape
datos_galicia_cat['target']=datos_galicia_cat['causa']
datos_galicia_cat = datos_galicia_cat.drop(columns=['causa'])
datos_galicia_cat.shape
datos_galicia_cat.columns
###Output
_____no_output_____
###Markdown
We remove the other predictive variable
###Code
datos_galicia_cat = datos_galicia_cat.drop(columns=['perdidas'])
datos_galicia_cat.shape
###Output
_____no_output_____
###Markdown
Let's see how the categorical variable is distributed
###Code
print(datos_galicia_cat.groupby('target').size())
###Output
target
causa desconocida 667
fuego reproducido 157
intencionado 7158
negligencia 441
rayo 81
dtype: int64
###Markdown
We separate categorical from numeric variables
###Code
lista_numericas=datos_galicia_cat._get_numeric_data()
lista_categoricas=datos_galicia_cat.select_dtypes(include = ["object"])
lista_numericas.columns
lista_categoricas.columns
###Output
_____no_output_____
###Markdown
We drop idmunicipio due to its excess of categories We transform the categorical variables -> One-Hot First we look at the categories that make up each of them
###Code
for i in lista_categoricas:
print(datos_galicia_cat[i].value_counts())
datos_galicia_cat = datos_galicia_cat.drop(columns=['fecha'])
datos_galicia_cat = datos_galicia_cat.drop(columns=['idmunicipio'])
datos_galicia_cat.shape
dummies= pd.get_dummies(datos_galicia_cat['idprovincia'], drop_first = True)
datos_galicia_cat = pd.concat([datos_galicia_cat, dummies], axis = 1)
dummies2= pd.get_dummies(datos_galicia_cat['gastos'], drop_first = True)
datos_galicia_cat = pd.concat([datos_galicia_cat, dummies2], axis = 1)
dummies3= pd.get_dummies(datos_galicia_cat['ALTITUD'], drop_first = True)
datos_galicia_cat = pd.concat([datos_galicia_cat, dummies3], axis = 1)
dummies4= pd.get_dummies(datos_galicia_cat['Trimestre'], drop_first = True)
datos_galicia_cat = pd.concat([datos_galicia_cat, dummies4], axis = 1)
dummies5= pd.get_dummies(datos_galicia_cat['DIR_VIENTO'], drop_first = True)
datos_galicia_cat = pd.concat([datos_galicia_cat, dummies5], axis = 1)
dummies6= pd.get_dummies(datos_galicia_cat['VELMEDIA'], drop_first = True)
datos_galicia_cat = pd.concat([datos_galicia_cat, dummies6], axis = 1)
dummies7= pd.get_dummies(datos_galicia_cat['Mes'], drop_first = True)
datos_galicia_cat = pd.concat([datos_galicia_cat, dummies7], axis = 1)
len(dummies.columns)+len(dummies2.columns)+len(dummies3.columns)+len(dummies4.columns)+len(dummies5.columns)+len(dummies6.columns)+len(dummies7.columns)
25+35
datos_galicia_cat.shape
datos_galicia_cat = datos_galicia_cat.drop(columns=['idprovincia'])
datos_galicia_cat = datos_galicia_cat.drop(columns=['gastos'])
datos_galicia_cat = datos_galicia_cat.drop(columns=['ALTITUD'])
datos_galicia_cat = datos_galicia_cat.drop(columns=['Trimestre'])
datos_galicia_cat = datos_galicia_cat.drop(columns=['DIR_VIENTO'])
datos_galicia_cat = datos_galicia_cat.drop(columns=['VELMEDIA'])
datos_galicia_cat = datos_galicia_cat.drop(columns=['Mes'])
60-7
##we should have 53 variables
datos_galicia_cat.shape
datos_galicia_cat.dtypes
def saca_metricas(y1, y2):
    print('confusion matrix')
print(confusion_matrix(y1, y2))
print('accuracy')
print(accuracy_score(y1, y2))
print('precision')
print(precision_score(y1, y2))
print('recall')
print(recall_score(y1, y2))
print('f1')
print(f1_score(y1, y2))
false_positive_rate, recall, thresholds = roc_curve(y1, y2)
roc_auc = auc(false_positive_rate, recall)
print('AUC')
print(roc_auc)
plt.plot(false_positive_rate, recall, 'b')
plt.plot([0, 1], [0, 1], 'r--')
plt.title('AUC = %0.2f' % roc_auc)
X_train, X_test, y_train, y_test = train_test_split(
datos_galicia_cat.drop('target', axis = 'columns'),
datos_galicia_cat['target'],
train_size = 0.8,
random_state = 1234,
shuffle = True)
datos_galicia_cat.columns
!pip uninstall scikit-learn -y
!pip install scikit-learn==0.24.1
!pip3 install "scikit_learn==0.22.2.post1"
import sklearn
from sklearn.linear_model import LogisticRegression
import numpy as np
from sklearn.preprocessing import LabelBinarizer
y = datos_galicia_cat['target']
y_dense = LabelBinarizer().fit_transform(y)
print(y_dense)
from scipy import sparse
y_sparse = sparse.csr_matrix(y_dense)
classifier = RandomForestClassifier().fit(X_train, y_train)
y_pred = classifier.predict(X_test)
classifier.score(X_test, y_test)
classifier.score(X_train, y_train)
from sklearn.datasets import make_classification
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.utils import shuffle
from sklearn.tree import DecisionTreeClassifier
dtree_model = DecisionTreeClassifier(max_depth = 2).fit(X_train, y_train)
dtree_predictions = dtree_model.predict(X_test)
# creating a confusion matrix
cm = confusion_matrix(y_test, dtree_predictions)
cm
dtree_model.score(X_train, y_train)
dtree_model.score(X_test, y_test)
from sklearn.naive_bayes import GaussianNB
gnb = GaussianNB().fit(X_train, y_train)
gnb_predictions = gnb.predict(X_test)
# accuracy on X_test
accuracy = gnb.score(X_test, y_test)
print(accuracy)
# creating a confusion matrix
cm = confusion_matrix(y_test, gnb_predictions)
accuracy2 = gnb.score(X_train, y_train)
print(accuracy2)
cm
###Output
_____no_output_____ |
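###Markdown
`saca_metricas` above assumes a binary target, while `causa` has five classes and the function is never called; a hedged sketch of a multiclass summary for the fitted random forest would be:
###Code
from sklearn.metrics import classification_report
print(classification_report(y_test, classifier.predict(X_test)))
###Output
_____no_output_____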
docs/jupyter/.ipynb_checkpoints/draw-box-checkpoint.ipynb | ###Markdown
Canvas Drawing Test
###Code
from ipycanvas import Canvas
canvas = Canvas(width=200, height=200)
canvas.fill_rect(25, 25, 100, 100)
canvas.clear_rect(45, 45, 60, 60)
canvas.stroke_rect(50, 50, 50, 50)
canvas
###Output
_____no_output_____ |
ESPNStatScraper.ipynb | ###Markdown
**Outline:**The purpose of this notebook is to scrape the ESPN website for international rugby player statistics. The script could easily be adapted for other sports and for stats about the games themselves, but I've not tested that, as all I want at this point in time are stats for players featuring in the Six Nations championship.
###Code
import time, bs4, requests
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException, ElementNotVisibleException
import re
from pprint import pprint
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = [12, 12]
###Output
_____no_output_____
###Markdown
The numbers below show the Six Nations teams as identified in the ESPN system. Finding other teams should be as simple as going to their main page and checking the address bar.
###Code
england = "1"
scotland = "2"
ireland = "3"
wales = "4"
france = "9"
italy = "20"
lions = "32" #2017
###Output
_____no_output_____
###Markdown
First we go to the page summarising all the games that a team has played in a given year. We use a webdriver and Beautiful Soup to parse this table, which gives us the date of each match, the teams that played, the score, and a link to the report on the game where we can find more detailed stats.
###Code
# Set your team
teamNo = "20"
year = "2015"
teamSeason ="http://www.espn.co.uk/rugby/results/_/team/"+teamNo+"/season/"+year
browser = webdriver.Chrome(r"C:\Users\Maurice\Desktop\Python\chromedriver_win32\chromedriver.exe")
browser.get(teamSeason)
seasonHTML = browser.page_source
browser.close()
# Use beautiful soup to search the HTML for the main table containing the results
seasonSoup = bs4.BeautifulSoup(seasonHTML, "html.parser")
schedule = seasonSoup.find("div", {"id": "sched-container"})
tables = schedule.select('table')
# Function takes a table and converts it into an array of data.
def makeList(table):
result = []
allrows = table.findAll('tr')
for row in allrows:
result.append([])
allcols = row.findAll('td')
for col in allcols:
thestrings = [s for s in col.findAll(text=True)]
thetext = ''.join(thestrings)
result[-1].append(thestrings)
return result
# We use the above function in the one below to collect all fixtures for the season
# Function returns a single flattened list of fixture rows
def teamFixtures(tableSet):
teams = []
for t in tableSet:
teams.append(makeList(t)[1:])
teams = [i for sublist in teams for i in sublist]
return teams
# Applying the function to our table from the results page
arrays= teamFixtures(tables)
# Below function creates a date format that is useable in a Pandas dataframe
monthDict = {"Feb":"02", "Mar":"03", "May":"05", "Jun":"06", "Jul":"07", "Aug":"08", "Sep":"09", "Oct":"10", "Nov":"11", "Dec":"12"}
def dateToNum(date):
monDay = date.split(", ")[1]
mon, day = monDay.split(" ")
if len(day) == 1:
day = "0"+day
return year+monthDict[mon]+day
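# Quick sanity check of dateToNum with a hypothetical ESPN date string
# (year is "2015" at this point in the notebook):
assert dateToNum("Sun, Oct 11") == "20151011"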
# We don't want all the information from the arrays table, just the date, teams and score
namesScore = []
for m in arrays:
namesScore.append([dateToNum(m[0][0]), m[1][1], m[2][1], m[1][2]])
#We also search through the table for links to the more detailed Full Time report
links = []
for link in schedule.findAll('a', href=True, text='FT'):
links.append(link['href'])
# Some games only have a brief summary and no stats so we'll ditch those from our final list.
# The below segment copies the indices of good links to filter our links and data arrays below.
duds = []
goodLinks = []
for i, t in enumerate(links):
if 'report' not in t:
duds.append(i)
else:
goodLinks.append(i)
# We're not going to follow the link to the full report. We just want the last segment which identifies the game.
links = [i.split("?")[1] for i in links]
full_reports = [list(zip(namesScore,links))[i] for i in goodLinks]
pprint(full_reports)
testGame = full_reports[1]
testPage = "http://www.espn.co.uk/rugby/playerstats?"+testGame[1]
browser = webdriver.Chrome(r"C:\Users\Maurice\Desktop\Python\chromedriver_win32\chromedriver.exe")
tabs = ["Scoring", "Attacking", "Defending", "Discipline"]
#Function goes to an ESPN page of player stats for a game and cycles through four tabs of stats.
#It returns a full page of HTML for each
def scrapePage(address):
while True:
browser.get(address)
try:
browser.find_element_by_xpath("//*[contains(text(),'Yes')]").click()
except ElementNotVisibleException:
pass
pages = []
count = 0
while count < 4:
try:
browser.find_element_by_xpath("//*[contains(text(),'"+tabs[count]+"')]").click()
pages.append(browser.page_source)
count+=1
except NoSuchElementException:
break
if count > 0:
break
else:
pass
browser.close()
return pages
#pageSet = scrapePage(testPage)
#print(len(pageSet))
# Function takes the HTML of a set of pages and returns the first two tables on each.
# We already know these are the player stats.
def gettables(sources):
output=[]
for s in sources:
soup = bs4.BeautifulSoup(s, "html.parser")
tables = soup.select('table')
output.append(tables[0:2])
return output
#tables = gettables(pageSet)
# Function takes a table and converts it into an array of data.
def makeList(table):
result = []
allrows = table.findAll('tr')
for row in allrows:
result.append([])
allcols = row.findAll('td')
for col in allcols:
thestrings = [s for s in col.findAll(text=True)]
thetext = ''.join(thestrings)
result[-1].append(thestrings)
return result
# We use the above function in the one below to organise the stats by team
# Function returns a list with two sublists
def teamTables(tableSet):
team1 = []
team2 = []
for t in tableSet:
team1.append(makeList(t[0]))
team2.append(makeList(t[1]))
teams = [team1, team2]
return teams
#tablePair = teamTables(tables)
# Due to an issue with the text scraping this function is required
# It runs over the arrays and splits the player names from their positions
def nameSplit(teamStats):
result=[]
for table in teamStats:
newtable = []
for row in table:
if len(row)<1:
newtable.append(row)
else:
try:
newtable.append([row[0][0], row[0][1]]+[i for sublist in row[1:len(row)] for i in sublist])
except IndexError:
newtable.append([row[0][0], "R"]+[i for sublist in row[1:len(row)] for i in sublist])
result.append(newtable)
return result
#team1 = nameSplit(tablePair[0])
scoringHeaders = ["Name", "Position", "Try", "Try Assist", "Conversion", "Penalty", "Drop Goal", "Points"]
attackingHeaders = ["Name", "Position", "Blank", "Passes", "Runs", "Meters Run", "Clean Breaks", "Defenders Beaten", "Offloads", "Blank"]
defendingHeaders = ["Name", "Position", "Turnovers Conceeded", "Tackles", "Missed Tackles", "Lineouts Won"]
disciplineHeaders = ["Name", "Position", "Penalties", "Yellow Cards", "Red Cards"]
headers = [scoringHeaders, attackingHeaders, defendingHeaders, disciplineHeaders]
# Function adds the matching header row to each table in the set
def addHeaders(tableset):
for i, t in enumerate(tableset):
t[0] = headers[i]
#addHeaders(team1)
#addHeaders(team2)
#pprint(team1)
import pandas as pd
from functools import reduce
# Function takes one of our arrays of player stats and makes a dataframe
def tableToDF(table):
df = pd.DataFrame(table)
df.columns = df.iloc[0]
#df.set_index('Name', inplace=True)
df.drop([0], axis=0, inplace =True)
return df
# Function takes all our tables for a team, converts them to DataFrames and merges them on name and position
# Function also converts all numeric columns to numbers and drops some blank columns
def tablesToDFs(tables):
dfs = []
for t in tables:
dfs.append(tableToDF(t))
df_final = reduce(lambda left,right: pd.merge(left,right, on=['Name', 'Position']), dfs)
cols = list(df_final.columns)
cols.remove('Name')
cols.remove('Position')
for col in cols:
df_final[col]=df_final[col].apply(pd.to_numeric, errors='coerce')
df_final.drop(['Blank'], axis=1, inplace=True)
return df_final
#teamDF1 = tablesToDFs(team1)
#addCols = testGame[0]
#print(addCols)
# The two functions below read the game metadata from the global addCols list, set once per game
def homeTeamsData(teamDF):
teamDF["Team"] = addCols[1]
teamDF["Opposition"] = addCols[2]
teamDF["Points For"] = addCols[3].split(" ")[0]
teamDF["Points Against"] = addCols[3].split(" ")[2]
teamDF["Home/Away"] = "Home"
teamDF["Date"] = addCols[0]
def awayTeamsData(teamDF):
teamDF["Team"] = addCols[2]
teamDF["Opposition"] = addCols[1]
teamDF["Points For"] = addCols[3].split(" ")[2]
teamDF["Points Against"] = addCols[3].split(" ")[0]
teamDF["Home/Away"] = "Away"
teamDF["Date"] = addCols[0]
#homeTeamsData(teamDF1)
#teamDF1.to_csv('testFile.csv')
#print(teamDF1.head(3))
pprint(full_reports)
###Output
[(['20151011', 'ROM', 'ITALY', '22 - 32'], 'gameId=182008&league=164205'),
(['20151004', 'IRE', 'ITALY', '16 - 9'], 'gameId=181996&league=164205'),
(['20150926', 'ITALY', 'CAN', '23 - 18'], 'gameId=181984&league=164205'),
(['20150919', 'FRA', 'ITALY', '32 - 10'], 'gameId=181972&league=164205'),
(['20150905', 'WALES', 'ITALY', '23 - 19'], 'gameId=252323&league=252321'),
(['20150829', 'SCOT', 'ITALY', '48 - 7'], 'gameId=263333&league=252321'),
(['20150823', 'ITALY', 'SCOT', '12 - 16'], 'gameId=263325&league=248937'),
(['20150321', 'ITALY', 'WALES', '20 - 61'], 'gameId=180691&league=180659'),
(['20150315', 'ITALY', 'FRA', '0 - 29'], 'gameId=180690&league=180659'),
(['20150228', 'SCOT', 'ITALY', '19 - 22'], 'gameId=180685&league=180659'),
(['20150214', 'ENG', 'ITALY', '47 - 17'], 'gameId=180682&league=180659'),
(['20150207', 'ITALY', 'IRE', '3 - 26'], 'gameId=180680&league=180659')]
###Markdown
The below cell takes all the above functions and puts them together to save CSVs of all the games a team has played in a given year.
###Code
select_reports = full_reports[1:]
for m in select_reports:
testPage = "http://www.espn.co.uk/rugby/playerstats?"+m[1]
browser = webdriver.Chrome(r"C:\Users\Maurice\Desktop\Python\chromedriver_win32\chromedriver.exe")
pageSet = scrapePage(testPage)
tables = gettables(pageSet)
tablePair = teamTables(tables)
teamDFs = []
for t in tablePair:
team = nameSplit(t)
addHeaders(team)
teamDFs.append(tablesToDFs(team))
addCols = m[0]
homeTeamsData(teamDFs[0])
awayTeamsData(teamDFs[1])
result = pd.concat(teamDFs)
result.to_csv(addCols[0]+" "+addCols[1]+"-"+addCols[2]+'.csv')
###Output
_____no_output_____ |
notebooks/2_multiple_linear_regression/2_multiple_linear_regression.ipynb | ###Markdown
Data Dictionary for data on Savings
###Code
Image(filename='images/data_dictionary_savings.png')
def rename_cols_and_save(xls_name):
df = pd.read_excel("../../data/{0}.xls".format(xls_name), index_col=None, header=None)
if xls_name == 'hprice1':
names_dict = {0:'price',
1:'assess',
2:'bdrms',
3:'lotsize',
4:'sqrft',
5:'colonial',
6:'lprice',
7:'lassess',
8:'llotsize',
9:'lsqrft',
}
elif xls_name == 'saving':
names_dict = {0:'sav',
1:'inc',
2:'size',
3:'edu',
4:'age',
5:'black',
6:'cons',
}
df.rename(columns = names_dict, inplace = True)
df.to_csv("../../data/{0}.csv".format(xls_name), index=False)
return df
df = rename_cols_and_save(xls_name='saving')
df.head()
df.describe().T
###Output
_____no_output_____
###Markdown
Distribution Plots
###Code
def dist_plot(df, var, save_as):
    sns_plot = sns.distplot(df[var], color='b').get_figure()
    sns_plot.savefig("images/dist_plot_{0}.png".format(save_as))
    return None
dist_plot(df=df, var='sav', save_as='sav')
dist_plot(df=df, var='inc', save_as='inc')
dist_plot(df=df, var='edu', save_as='edu')
dist_plot(df=df, var='age', save_as='age')
dist_plot(df=df, var='cons', save_as='cons')
###Output
_____no_output_____
###Markdown
Create 2d Scatterplots
###Code
def create_scatter_matrix(df, list_of_vars, save_as):
sns_plot = sns.pairplot(df[list_of_vars])
sns_plot.savefig("images/scatter_matrix_{0}.png".format(save_as))
return sns_plot
create_scatter_matrix(df=df, list_of_vars=['inc', 'edu', 'sav', 'size', 'age'], save_as='saving_income_education')
###Output
_____no_output_____
###Markdown
Create Box Plots to Analyze Savings Rate
###Code
def box_plot(df, var_x, var_y):
sns_plot = sns.boxplot(x=var_x, y=var_y, data=df).get_figure()
sns_plot.savefig("images/box_plot_{0}_{1}.png".format(var_x,var_y))
return None
box_plot(df=df, var_x='black', var_y='sav')
###Output
_____no_output_____
###Markdown
3D Visualization of Drivers of Savings
###Code
def create_3d_plot(df, x, y, z):
# Configure the trace.
trace = go.Scatter3d(
x=df[x], # <-- Put your data instead
y=df[y], # <-- Put your data instead
z=df[z], # <-- Put your data instead
mode='markers',
marker={
'size': 10,
'opacity': 0.8,
}
)
# Configure the layout.
layout = go.Layout(
margin={'l': 0, 'r': 0, 'b': 0, 't': 0}
)
data = [trace]
plot_figure = go.Figure(data=data, layout=layout)
# Render the plot.
return plotly.offline.iplot(plot_figure)
create_3d_plot(df=df, x='inc', y='edu', z='sav')
###Output
_____no_output_____
###Markdown
Multiple Regression Model
###Code
def regression_model(list_of_x, y, df):
X = df[list_of_x]
X = sm.add_constant(X)
y = df[y]
# Note the difference in argument order
model = sm.OLS(y, X).fit()
return model
model = regression_model(list_of_x=['edu', 'inc'], y='sav', df=df)
model.summary()
model = regression_model(list_of_x=['black','inc', 'edu', 'size', 'age'], y='sav', df=df)
model.summary()
def scatter_with_line(df, x, y):
sns_plot = sns.lmplot(x=x, y=y, data=df)
sns_plot.savefig("images/scatter_with_line_{0}_{1}.png".format(x, y))
return sns_plot
scatter_with_line(df=df, x='inc', y='sav')
###Output
_____no_output_____
###Markdown
Extending the Model with Non-Linear Term
###Code
df['age_sq'] = df['age']**2
model = regression_model(list_of_x=['black','inc', 'edu', 'size', 'age', 'age_sq'], y='sav', df=df)
model.summary()
###Output
_____no_output_____
###Markdown
Interaction Variables
###Code
df['inc_x_edu'] = df['inc']*df['edu']
model = regression_model(list_of_x=['black','inc', 'edu', 'size', 'age', 'age_sq', 'inc_x_edu'], y='sav', df=df)
model.summary()
###Output
_____no_output_____
###Markdown
Log-linear model
###Code
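# Keep only strictly positive savings so the np.log transform below is well-defined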
df = df[df['sav'] > 0]
df['lsav'] = np.log(df['sav'])
model = regression_model(list_of_x=['black','inc', 'edu', 'size', 'age', 'age_sq', 'inc_x_edu'], y='lsav', df=df)
model.summary()
###Output
_____no_output_____ |
notebooks/attacker-analysis.ipynb | ###Markdown
Attacker Based Confidence and Success Rate
###Code
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
from scipy.stats.distributions import norm
plt.style.use("fast")
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
matplotlib.rcParams.update({'font.size': 14})
fig, axes = plt.subplots(1,1,figsize=(4,3))
colors = plt.cm.RdBu(np.linspace(0.0,0.9,20))
epsilon = 1
delta = 1e-6
delta_f = 1
def get_classic_sigma(epsilon, delta, delta_f):
return delta_f*np.sqrt(2*np.log(1.25/delta))/epsilon
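# Illustrative value: get_classic_sigma(1, 1e-6, 1) is about 5.30, i.e. a
# unit-sensitivity query at (1, 1e-6)-DP needs Gaussian noise with sigma ~ 5.3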
sigma = get_classic_sigma(epsilon,delta, delta_f)
x_1 = np.linspace(-200,200,1000)
y_1 = norm.pdf(x_1,loc=0,scale=sigma)
y_2 = norm.pdf(x_1,loc=1,scale=sigma)
belief_1 = y_1 / (y_1+y_2)
belief_2 = y_2 / (y_1+y_2)
axes.plot(x_1,belief_1, label=r"$\beta(\mathcal{D})$", c=colors[3])
axes.plot(x_1,belief_2, label=r"$\beta(\mathcal{D}')$", c=colors[-2])
axes.set_xlabel("Output $r$ of $\mathcal{M}_{Gau}(\cdot)$")
axes.grid(ls="dashed")
#axes.set_ylim(-0.02,0.5)
axes.set_xlim(-200,200)
#axes.set_yticks([0,0.1,0.2,0.3,0.4,0.5])
plt.legend(loc='upper center', bbox_to_anchor=(0.5, -0.25), ncol=2, facecolor="white")
plt.savefig("gauss_limit.pdf", bbox_inches='tight')
plt.show()
from scipy.stats import laplace
plt.style.use("fast")
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
matplotlib.rcParams.update({'font.size': 14})
fig, axes = plt.subplots(1,1,figsize=(4,3))
colors = plt.cm.RdBu(np.linspace(0.0,0.9,20))
epsilon = 1
delta = 1e-6
delta_f = 1
def get_classic_sigma(epsilon, delta, delta_f):
return delta_f*np.sqrt(2*np.log(1.25/delta))/epsilon
sigma = get_classic_sigma(epsilon,delta, delta_f)
x_1 = np.linspace(-4,4,1000)
y_1 = laplace.pdf(x_1,loc=0,scale=delta_f/epsilon)
y_2 = laplace.pdf(x_1,loc=1,scale=delta_f/epsilon)
belief_1 = y_1 / (y_1+y_2)
belief_2 = y_2 / (y_1+y_2)
axes.plot(x_1,belief_1, label=r"$\beta(\mathcal{D})$", c=colors[3])
axes.plot(x_1,belief_2, label=r"$\beta(\mathcal{D}')$", c=colors[-2])
axes.set_xlabel("Output $r$ of $\mathcal{M}_{Lap}(\cdot)$")
axes.grid(ls="dashed")
axes.set_ylim(-0.02,1.02)
axes.set_xlim(-4,4)
axes.set_yticks(np.linspace(0.0,1.0,5))
plt.legend(loc='upper center', bbox_to_anchor=(0.5, -0.25), ncol=2, facecolor="white")
plt.savefig("laplace_limit.pdf", bbox_inches='tight')
plt.show()
plt.style.use("fast")
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
matplotlib.rcParams.update({'font.size': 14})
fig, axes = plt.subplots(1,1,figsize=(4,3))
colors = plt.cm.RdBu(np.linspace(0.0,0.9,20))
epsilon = 6
delta = 1e-6
delta_f = 1
def get_classic_sigma(epsilon, delta, delta_f):
return delta_f*np.sqrt(2*np.log(1.25/delta))/epsilon
sigma = get_classic_sigma(epsilon,delta, delta_f)
x_1 = np.linspace(-4,4,1000)
y_1 = norm.pdf(x_1,loc=0,scale=sigma)
y_2 = norm.pdf(x_1,loc=1,scale=sigma)
axes.plot(x_1,y_1, label=r'PDF of $\mathcal{M}(\mathcal{D})$, $(6,1e^{-6})$-DP', c=colors[3])
axes.plot(x_1,y_2, label=r"PDF of $\mathcal{M}(\mathcal{D}')$, $(6,1e^{-6})$-DP", c=colors[-2])
epsilon = 3
delta = 1e-6
delta_f = 1
def get_classic_sigma(epsilon, delta, delta_f):
return delta_f*np.sqrt(2*np.log(1.25/delta))/epsilon
sigma = get_classic_sigma(epsilon,delta, delta_f)
y_1 = norm.pdf(x_1,loc=0,scale=sigma)
y_2 = norm.pdf(x_1,loc=1,scale=sigma)
axes.plot(x_1,y_1, label=r'PDF of $\mathcal{M}(\mathcal{D})$, $(3,1e^{-6})$-DP', c=colors[3], linestyle="--")
axes.plot(x_1,y_2, label=r"PDF of $\mathcal{M}(\mathcal{D}')$, $(3,1e^{-6})$-DP", c=colors[-2], linestyle="--")
axes.set_xlabel("Output $r$ of $\mathcal{M}(\cdot)$")
img = [[0.1]*9+[0.8]*7]
plt.imshow(img, aspect='auto', extent=[-4.0, 4.0, -0.02,0.5], cmap=plt.cm.RdBu,vmin=0,vmax=1, alpha=0.15)
axes.grid(ls="dashed")
axes.set_ylim(-0.02,0.5)
axes.set_xlim(-4,4)
axes.set_yticks([0,0.1,0.2,0.3,0.4,0.5])
plt.legend(loc='upper center', bbox_to_anchor=(0.5, -0.25), ncol=1, facecolor="white")
plt.savefig("decision_boundary_pdfs.pdf", bbox_inches='tight')
plt.show()
plt.style.use("fast")
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
matplotlib.rcParams.update({'font.size': 14})
fig, axes = plt.subplots(1,1,figsize=(4,3))
colors = plt.cm.RdBu(np.linspace(0.0,0.9,20))
x_min, x_max = -10,10
epsilon = 6
delta = 1e-6
delta_f = 1
def get_classic_sigma(epsilon, delta, delta_f):
return delta_f*np.sqrt(2*np.log(1.25/delta))/epsilon
sigma = get_classic_sigma(epsilon,delta, delta_f)
x_1 = np.linspace(x_min,x_max,1000)
y_1 = norm.pdf(x_1,loc=0,scale=sigma)
y_2 = norm.pdf(x_1,loc=1,scale=sigma)
rho_D = y_1/(y_1+y_2)
rho_D_prime = y_2/(y_1+y_2)
axes.plot(x_1,rho_D, label=r'$\beta(\mathcal{D})$ of $\mathcal{A}_{DI}$, $(6,1e^{-6})$-DP', c=colors[3])
axes.plot(x_1,rho_D_prime, label=r"$\beta(\mathcal{D}')$ of $\mathcal{A}_{DI}$, $(6,1e^{-6})$-DP", c=colors[-2])
epsilon = 3
delta = 1e-6
delta_f = 1
def get_classic_sigma(epsilon, delta, delta_f):
return delta_f*np.sqrt(2*np.log(1.25/delta))/epsilon
sigma = get_classic_sigma(epsilon,delta, delta_f)
y_1 = norm.pdf(x_1,loc=0,scale=sigma)
y_2 = norm.pdf(x_1,loc=1,scale=sigma)
rho_D = y_1/(y_1+y_2)
rho_D_prime = y_2/(y_1+y_2)
axes.plot(x_1,rho_D, label=r'$\beta(\mathcal{D})$ of $\mathcal{A}_{DI}$, $(3,1e^{-6})$-DP', c=colors[3], linestyle="--")
axes.plot(x_1,rho_D_prime, label=r"$\beta(\mathcal{D}')$ of $\mathcal{A}_{DI}$, $(3,1e^{-6})$-DP", c=colors[-2],linestyle="--")
axes.set_xlabel(r"Output $r$ of $\mathcal{M}(\cdot)$")
#axes.set_ylabel(r"$\beta(\cdot)$")
img = [[0.1]*21+[0.8]*19]
plt.imshow(img, aspect='auto', extent=[x_min, x_max, -0.02,1], cmap=plt.cm.RdBu,vmin=0,vmax=1, alpha=0.15)
axes.grid(ls="dashed")
#axes.set_ylim(-0.02,0.5)
#axes.set_xlim(-4,4)
#axes.set_yticks([0,0.1,0.2,0.3,0.4,0.5])
plt.legend(loc='upper center', bbox_to_anchor=(0.5, -0.25), ncol=1, facecolor="white")
plt.savefig("decision_boundary_belief.pdf", bbox_inches='tight')
plt.show()
plt.style.use("fast")
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
matplotlib.rcParams.update({'font.size': 14})
fig, axes = plt.subplots(1,1,figsize=(4,3))
colors = plt.cm.RdBu(np.linspace(0.0,0.9,20))
epsilon = 3
delta = 1e-6
delta_f = 1
def get_classic_sigma(epsilon, delta, delta_f):
return delta_f*np.sqrt(2*np.log(1.25/delta))/epsilon
sigma = get_classic_sigma(epsilon,delta, delta_f)
x_1 = np.linspace(-4,4,1000)
y_1 = norm.pdf(x_1,loc=0,scale=sigma)
y_2 = norm.pdf(x_1,loc=1,scale=sigma)
#ticks = np.copy(x_1)
#ticks = np.array(["%.2f" % number for number in ticks])
#ticks[500] = "f(D)"
plt.xticks(rotation=45)
plt.xticks(np.linspace(-4,4,9), ['-4','-3','-2','-1','f(D)=0','f(D\')=1', '2','3','4'])
axes.plot(x_1,y_1, label=r'$g_{X_1}$', c=colors[3])
axes.plot(x_1,y_2, label=r"$g_{X_0}$", c=colors[-2])
x_1 = np.linspace(-4,0.5,1000)
x_2 = np.linspace(0.5,4,1000)
y_1 = norm.pdf(x_1,loc=1,scale=sigma)
y_2 = norm.pdf(x_2,loc=0,scale=sigma)
axes.fill_between(x_1,y_1, hatch="...", facecolor=None, alpha=0)
axes.fill_between(x_2,y_2, hatch="...", facecolor=None, alpha=0)
img = [[0.1]*9+[0.8]*7]
plt.imshow(img, aspect='auto', extent=[-4.0, 4.0, -0.02,0.5], cmap=plt.cm.RdBu,vmin=0,vmax=1, alpha=0.15)
axes.set_xlabel(r"Output $r$ of $\mathcal{M}(\cdot)$")
axes.grid(ls="dashed")
axes.set_ylim(-0.02,0.5)
axes.set_xlim(-4,4)
axes.set_yticks([0,0.1,0.2,0.3,0.4,0.5])
plt.legend(loc='upper center', bbox_to_anchor=(0.5, -0.5), ncol=2, facecolor="white")
plt.savefig("error_rate_pdfs_eps3.pdf", bbox_inches='tight')
plt.show()
def adaptive_success_rate(epsilon_i, delta_i, iterations):
return 1 - norm.cdf(-np.sqrt(iterations)*epsilon_i/(2*np.sqrt(2*np.log(1.25/delta_i))))
def success_rate(epsilon, delta):
return 1- norm.cdf(-epsilon/(2*np.sqrt(2*np.log(1.25/delta))))
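# Illustrative value for the unit-sensitivity setting used throughout this
# notebook: success_rate(3, 1e-6) is roughly 0.61
assert abs(success_rate(3, 1e-6) - 0.611) < 0.01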
from scipy.special import binom
def majority_voting_success_rate(epsilon_i, delta_i, iterations):
p = 0
prob = success_rate(epsilon_i, delta_i)
n = iterations
k = int(np.ceil(n/2))
while k <= n:
p += binom(n, k)*(prob**k)*((1-prob)**(n-k))
k += 1
return p
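# Sanity check: with a single iteration the majority vote reduces to the
# base per-query success rate
assert np.isclose(majority_voting_success_rate(3, 1e-6, 1), success_rate(3, 1e-6))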
plt.style.use("fast")
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
matplotlib.rcParams.update({'font.size': 14})
fig, axes = plt.subplots(1,1,figsize=(4,3))
colors = plt.cm.RdBu(np.linspace(0.0,0.9,20))
epsilon = 3
delta = 1e-6
delta_f = 1
iterations_list = np.linspace(1,10,10)
majority = []
adaptive = []
for iterations in iterations_list:
epsilon_i = epsilon/iterations
delta_i = delta/iterations
print(majority_voting_success_rate(epsilon_i, delta_i, iterations))
majority.append(majority_voting_success_rate(epsilon_i, delta_i, iterations))
adaptive.append(adaptive_success_rate(epsilon_i, delta_i, iterations))
axes.plot(iterations_list, majority, label="Majority Voting", c=colors[-2])
axes.plot(iterations_list, adaptive, label="Adaptive Voting", c=colors[3])
plt.legend()
plt.show()
from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes
from mpl_toolkits.axes_grid1.inset_locator import mark_inset
plt.style.use("fast")
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
matplotlib.rcParams.update({'font.size': 14})
fig, axes = plt.subplots(1,1,figsize=(4,3))
colors = plt.cm.RdBu(np.linspace(0.0,0.9,21))
epsilons = np.linspace(0.001,10,1000)
deltas = [1e-1,1e-2,1e-4,1e-6,1e-10,0]
def confidence(eps):
return 1./(1+np.power(np.e,-eps))
def get_expected_confidence_bound(eps,delta):
p_c = confidence(eps)
return p_c + delta*(1-p_c)
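# confidence(eps) matches the limiting belief ratio 1/(1+e^-eps) seen in the
# Laplace plot above; the delta term only nudges the bound upward.
# Illustrative value: get_expected_confidence_bound(1, 1e-6) is roughly 0.731
assert abs(get_expected_confidence_bound(1, 1e-6) - 0.731) < 0.001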
for i, delta in enumerate(deltas):
expected_conf = get_expected_confidence_bound(epsilons,delta)
axes.plot(epsilons,expected_conf,label=r"$\delta=$"+str(delta), c=colors[i])
axes.set_ylabel(r"$\rho_c$")
axes.set_xlabel(r"$\epsilon$")
axes.grid(ls="dashed")
#axins = zoomed_inset_axes(axes, 5, loc=4) # zoom = 6
#epsilons_in = np.linspace(0.001,1,1000)
#compute new confidences for the detail view
#for i, delta in enumerate(deltas):
# expected_conf = get_expected_confidence_bound(epsilons_in,delta)
# axins.plot(epsilons_in,expected_conf, c=colors[i])
# sub region of the original image
#x1, x2, y1, y2 = -0.01, 0.5, 0.49, 0.58
#axins.set_xlim(x1, x2)
#axins.set_ylim(y1, y2)
axes.set_xscale('log')
#plt.xticks(visible=False)
#plt.yticks(visible=False)
axes.set_ylim([0.48,1.02])
axes.set_xlim([0,6])
axes.set_yticks([0.5,0.6,0.7,0.8,0.9,1])
axes.set_xticks([1e-3,1e-2,1e-1,1e0,1e1])
axes.set_xticklabels([1e-3,1e-2,1e-1,1e0,1e1])
# draw a bbox of the region of the inset axes in the parent axes and
# connecting lines between the bbox and the inset axes area
#mark_inset(axes, axins, loc1=2, loc2=3)
axes.legend(loc='upper center', bbox_to_anchor=(0.5, -0.25), ncol=2, facecolor="white")
plt.savefig("expected_confidence_bound.pdf", bbox_inches='tight')
plt.show()
from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes
from mpl_toolkits.axes_grid1.inset_locator import mark_inset
from scipy.spatial.distance import mahalanobis
from dpa.attacker import GaussianAttacker
plt.style.use("fast")
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
matplotlib.rcParams.update({'font.size': 14})
fig, axes = plt.subplots(1,1,figsize=(4,3))
adversary = GaussianAttacker()
colors = plt.cm.RdBu(np.linspace(0.0,0.9,21))
epsilons = np.linspace(0.001,100,1000)
deltas = [1e-1,1e-2,1e-4,1e-6,1e-10,1e-20]#[1e-1,1e-2,1e-4,1e-6,1e-10,1e-20, 1e-50, 1e-100,1e-200, 1e-250, 1e-300]
dimensions = 1
def get_classic_sigma(epsilon, delta, delta_f):
return delta_f*np.sqrt(2*np.log(1.25/delta))/epsilon
def get_expected_success_rate(eps, delta, dimensions):
    mu_1 = np.repeat(0, dimensions)
    mu_2 = np.repeat(1, dimensions)
    delta_f = np.sqrt(np.dot(mu_1-mu_2, mu_1-mu_2))
    sigmas = get_classic_sigma(eps, delta, delta_f)
    distances = np.array([np.sqrt(np.dot(mu_1-mu_2, mu_1-mu_2))/sigma for sigma in sigmas])
    # use the eps argument (not the global epsilons array); this closed form
    # agrees with 1-norm.cdf(-distances/2)
    return 1-norm.cdf(-eps/(2*np.sqrt(2*np.log(1.25/delta))))
def advantage(eps, delta, dimensions):
    mu_1 = np.repeat(0, dimensions)
    mu_2 = np.repeat(1, dimensions)
    delta_f = np.sqrt(np.dot(mu_1-mu_2, mu_1-mu_2))
    sigmas = get_classic_sigma(eps, delta, delta_f)
    distances = np.array([np.sqrt(np.dot(mu_1-mu_2, mu_1-mu_2))/sigma for sigma in sigmas])
    # use the eps argument (not the global epsilons array)
    alt = norm.cdf(eps/(2*np.sqrt(2*np.log(1.25/delta)))) - norm.cdf(-eps/(2*np.sqrt(2*np.log(1.25/delta))))
    return alt
    #return np.abs(1-2*1/2*(1+erf(np.sqrt(k)*1/(2*noise_mult*np.sqrt(2)))))
for i, delta in enumerate(deltas):
nps = []
#print(epsilons)
#for eps in epsilons:
# print("eps ", eps, "delta ", delta)
# print(adversary.get_noise_multiplier_for_rdp_eps(eps, delta, 2))
    # use a new name so we don't shadow the success_rate() function defined above
    rates = advantage(epsilons, delta, dimensions)
    print(rates)
    axes.plot(epsilons, rates, label=r"$\delta=$"+str(delta), c=colors[i])
axes.set_ylabel(r"$\rho_a$")
axes.set_xlabel(r"$\epsilon$")
axes.grid(ls="dashed")
axes.set_xscale('log')
axes.set_ylim([0,1.02])
axes.set_yticks([0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1])
axes.set_xticks([1e-3,1e-2,1e-1,1e0,1e1,1e2])
#axes.set_xticks([0,1,2,3,4,5,6])
axes.legend(loc='upper center', bbox_to_anchor=(0.5, -0.25), ncol=2, facecolor="white")
plt.savefig("expected_success_rate.pdf", bbox_inches='tight')
plt.show()
###Output
[1.77501374e-04 1.79437164e-02 3.57010359e-02 5.34406657e-02
 7.11538378e-02 8.88318237e-02 1.06465947e-01 1.24047596e-01
 ... 9.99999999e-01 1.00000000e+00 ... 1.00000000e+00]
[1.28379986e-04 1.29785286e-02 2.58253111e-02 3.86653972e-02
 5.14954619e-02 6.43121880e-02 7.71122685e-02 8.98924096e-02
 ... 9.99999999e-01 1.00000000e+00 ... 1.00000000e+00]
[9.18457384e-05 9.28531184e-03 1.84775452e-02 2.76673259e-02
 3.68534348e-02 4.60346543e-02 5.52097688e-02 6.43775650e-02
 ... (printed arrays truncated: each success-rate array climbs monotonically from ~0 toward 1.0 as epsilon grows)]
9.99352260e-01 9.99379100e-01 9.99404903e-01 9.99429704e-01
9.99453540e-01 9.99476446e-01 9.99498454e-01 9.99519597e-01
9.99539906e-01 9.99559412e-01 9.99578144e-01 9.99596130e-01
9.99613398e-01 9.99629974e-01 9.99645883e-01 9.99661151e-01
9.99675801e-01 9.99689857e-01 9.99703341e-01 9.99716274e-01
9.99728677e-01 9.99740571e-01 9.99751975e-01 9.99762907e-01
9.99773386e-01 9.99783428e-01 9.99793052e-01 9.99802273e-01
9.99811107e-01 9.99819569e-01 9.99827674e-01 9.99835435e-01
9.99842867e-01 9.99849981e-01 9.99856792e-01 9.99863311e-01
9.99869549e-01 9.99875519e-01 9.99881230e-01 9.99886694e-01
9.99891920e-01 9.99896917e-01 9.99901697e-01 9.99906266e-01
9.99910634e-01 9.99914810e-01 9.99918800e-01 9.99922614e-01
9.99926258e-01 9.99929739e-01 9.99933064e-01 9.99936240e-01
9.99939273e-01 9.99942169e-01 9.99944934e-01 9.99947573e-01
9.99950092e-01 9.99952496e-01 9.99954790e-01 9.99956979e-01
9.99959067e-01 9.99961058e-01 9.99962958e-01 9.99964769e-01
9.99966496e-01 9.99968142e-01 9.99969711e-01 9.99971207e-01
9.99972632e-01 9.99973990e-01 9.99975284e-01 9.99976516e-01
9.99977690e-01 9.99978807e-01 9.99979872e-01 9.99980885e-01
9.99981850e-01 9.99982768e-01 9.99983642e-01 9.99984473e-01
9.99985264e-01 9.99986017e-01 9.99986732e-01 9.99987413e-01
9.99988060e-01 9.99988676e-01 9.99989261e-01 9.99989817e-01
9.99990346e-01 9.99990848e-01 9.99991326e-01 9.99991779e-01
9.99992210e-01 9.99992619e-01 9.99993007e-01 9.99993376e-01
9.99993727e-01 9.99994059e-01 9.99994375e-01 9.99994674e-01
9.99994958e-01 9.99995228e-01 9.99995484e-01 9.99995727e-01
9.99995957e-01 9.99996175e-01 9.99996382e-01 9.99996578e-01
9.99996764e-01 9.99996940e-01 9.99997107e-01 9.99997265e-01
9.99997415e-01 9.99997557e-01 9.99997692e-01 9.99997819e-01
9.99997940e-01 9.99998054e-01 9.99998162e-01 9.99998264e-01
9.99998361e-01 9.99998453e-01 9.99998539e-01 9.99998621e-01
9.99998699e-01 9.99998772e-01 9.99998842e-01 9.99998907e-01
9.99998969e-01 9.99999028e-01 9.99999083e-01 9.99999136e-01
9.99999185e-01 9.99999232e-01 9.99999276e-01 9.99999318e-01
9.99999357e-01 9.99999394e-01 9.99999429e-01 9.99999463e-01
9.99999494e-01 9.99999523e-01 9.99999551e-01 9.99999577e-01
9.99999602e-01 9.99999626e-01 9.99999648e-01 9.99999669e-01
9.99999688e-01 9.99999707e-01 9.99999724e-01 9.99999740e-01
9.99999756e-01 9.99999771e-01 9.99999784e-01 9.99999797e-01
9.99999809e-01 9.99999821e-01 9.99999832e-01 9.99999842e-01
9.99999851e-01 9.99999861e-01 9.99999869e-01 9.99999877e-01
9.99999885e-01 9.99999892e-01 9.99999898e-01 9.99999904e-01
9.99999910e-01 9.99999916e-01 9.99999921e-01 9.99999926e-01
9.99999931e-01 9.99999935e-01 9.99999939e-01 9.99999943e-01
9.99999946e-01 9.99999950e-01 9.99999953e-01 9.99999956e-01
9.99999959e-01 9.99999961e-01 9.99999964e-01 9.99999966e-01
9.99999968e-01 9.99999970e-01 9.99999972e-01 9.99999974e-01
9.99999976e-01 9.99999977e-01 9.99999979e-01 9.99999980e-01
9.99999981e-01 9.99999982e-01 9.99999984e-01 9.99999985e-01
9.99999986e-01 9.99999987e-01 9.99999987e-01 9.99999988e-01
9.99999989e-01 9.99999990e-01 9.99999990e-01 9.99999991e-01
9.99999992e-01 9.99999992e-01 9.99999993e-01 9.99999993e-01
9.99999994e-01 9.99999994e-01 9.99999994e-01 9.99999995e-01
9.99999995e-01 9.99999996e-01 9.99999996e-01 9.99999996e-01
9.99999996e-01 9.99999997e-01 9.99999997e-01 9.99999997e-01
9.99999997e-01 9.99999997e-01 9.99999998e-01 9.99999998e-01
9.99999998e-01 9.99999998e-01 9.99999998e-01 9.99999998e-01
9.99999998e-01 9.99999999e-01 9.99999999e-01 9.99999999e-01
9.99999999e-01 9.99999999e-01 9.99999999e-01 9.99999999e-01
9.99999999e-01 9.99999999e-01 9.99999999e-01 9.99999999e-01
9.99999999e-01 9.99999999e-01 9.99999999e-01 9.99999999e-01
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00]
[7.52891390e-05 7.61154868e-03 1.51471292e-02 2.26813585e-02
3.02135649e-02 3.77430770e-02 4.52692245e-02 5.27913376e-02
6.03087478e-02 6.78207879e-02 7.53267920e-02 8.28260959e-02
9.03180370e-02 9.78019550e-02 1.05277192e-01 1.12743090e-01
1.20198998e-01 1.27644264e-01 1.35078240e-01 1.42500281e-01
1.49909744e-01 1.57305992e-01 1.64688389e-01 1.72056304e-01
1.79409110e-01 1.86746182e-01 1.94066901e-01 2.01370652e-01
2.08656824e-01 2.15924812e-01 2.23174013e-01 2.30403831e-01
2.37613675e-01 2.44802959e-01 2.51971101e-01 2.59117526e-01
2.66241665e-01 2.73342953e-01 2.80420832e-01 2.87474751e-01
2.94504162e-01 3.01508526e-01 3.08487309e-01 3.15439986e-01
3.22366035e-01 3.29264943e-01 3.36136203e-01 3.42979316e-01
3.49793789e-01 3.56579136e-01 3.63334879e-01 3.70060547e-01
3.76755678e-01 3.83419813e-01 3.90052507e-01 3.96653317e-01
4.03221812e-01 4.09757566e-01 4.16260162e-01 4.22729191e-01
4.29164253e-01 4.35564955e-01 4.41930912e-01 4.48261748e-01
4.54557095e-01 4.60816594e-01 4.67039893e-01 4.73226651e-01
4.79376532e-01 4.85489211e-01 4.91564371e-01 4.97601704e-01
5.03600911e-01 5.09561699e-01 5.15483787e-01 5.21366901e-01
5.27210777e-01 5.33015157e-01 5.38779795e-01 5.44504452e-01
5.50188899e-01 5.55832913e-01 5.61436283e-01 5.66998805e-01
5.72520284e-01 5.78000535e-01 5.83439379e-01 5.88836648e-01
5.94192182e-01 5.99505830e-01 6.04777449e-01 6.10006904e-01
6.15194071e-01 6.20338832e-01 6.25441079e-01 6.30500711e-01
6.35517637e-01 6.40491773e-01 6.45423045e-01 6.50311386e-01
6.55156736e-01 6.59959047e-01 6.64718275e-01 6.69434387e-01
6.74107355e-01 6.78737162e-01 6.83323797e-01 6.87867257e-01
6.92367547e-01 6.96824680e-01 7.01238675e-01 7.05609561e-01
7.09937371e-01 7.14222149e-01 7.18463944e-01 7.22662812e-01
7.26818817e-01 7.30932029e-01 7.35002527e-01 7.39030393e-01
7.43015720e-01 7.46958604e-01 7.50859150e-01 7.54717468e-01
7.58533674e-01 7.62307892e-01 7.66040251e-01 7.69730885e-01
7.73379935e-01 7.76987549e-01 7.80553878e-01 7.84079080e-01
7.87563319e-01 7.91006763e-01 7.94409587e-01 7.97771970e-01
8.01094096e-01 8.04376154e-01 8.07618338e-01 8.10820846e-01
8.13983883e-01 8.17107656e-01 8.20192377e-01 8.23238262e-01
8.26245533e-01 8.29214414e-01 8.32145133e-01 8.35037922e-01
8.37893019e-01 8.40710663e-01 8.43491097e-01 8.46234567e-01
8.48941324e-01 8.51611620e-01 8.54245712e-01 8.56843859e-01
8.59406323e-01 8.61933368e-01 8.64425261e-01 8.66882272e-01
8.69304675e-01 8.71692741e-01 8.74046750e-01 8.76366978e-01
8.78653708e-01 8.80907221e-01 8.83127802e-01 8.85315737e-01
8.87471314e-01 8.89594821e-01 8.91686550e-01 8.93746791e-01
8.95775838e-01 8.97773986e-01 8.99741528e-01 9.01678761e-01
9.03585982e-01 9.05463488e-01 9.07311577e-01 9.09130547e-01
9.10920699e-01 9.12682331e-01 9.14415743e-01 9.16121234e-01
9.17799106e-01 9.19449658e-01 9.21073189e-01 9.22670000e-01
9.24240391e-01 9.25784661e-01 9.27303108e-01 9.28796032e-01
9.30263730e-01 9.31706500e-01 9.33124639e-01 9.34518443e-01
9.35888207e-01 9.37234227e-01 9.38556794e-01 9.39856203e-01
9.41132745e-01 9.42386710e-01 9.43618388e-01 9.44828067e-01
9.46016034e-01 9.47182574e-01 9.48327972e-01 9.49452511e-01
9.50556471e-01 9.51640134e-01 9.52703777e-01 9.53747676e-01
9.54772108e-01 9.55777345e-01 9.56763659e-01 9.57731320e-01
9.58680595e-01 9.59611752e-01 9.60525055e-01 9.61420765e-01
9.62299144e-01 9.63160450e-01 9.64004939e-01 9.64832867e-01
9.65644484e-01 9.66440043e-01 9.67219790e-01 9.67983972e-01
9.68732833e-01 9.69466614e-01 9.70185555e-01 9.70889894e-01
9.71579865e-01 9.72255701e-01 9.72917633e-01 9.73565888e-01
9.74200693e-01 9.74822272e-01 9.75430846e-01 9.76026633e-01
9.76609850e-01 9.77180712e-01 9.77739430e-01 9.78286214e-01
9.78821271e-01 9.79344807e-01 9.79857022e-01 9.80358118e-01
9.80848292e-01 9.81327739e-01 9.81796653e-01 9.82255223e-01
9.82703638e-01 9.83142084e-01 9.83570744e-01 9.83989799e-01
9.84399428e-01 9.84799806e-01 9.85191109e-01 9.85573508e-01
9.85947171e-01 9.86312266e-01 9.86668958e-01 9.87017409e-01
9.87357778e-01 9.87690225e-01 9.88014903e-01 9.88331967e-01
9.88641567e-01 9.88943852e-01 9.89238968e-01 9.89527061e-01
9.89808271e-01 9.90082740e-01 9.90350603e-01 9.90611998e-01
9.90867058e-01 9.91115914e-01 9.91358695e-01 9.91595528e-01
9.91826539e-01 9.92051850e-01 9.92271583e-01 9.92485856e-01
9.92694787e-01 9.92898491e-01 9.93097080e-01 9.93290666e-01
9.93479358e-01 9.93663263e-01 9.93842487e-01 9.94017134e-01
9.94187304e-01 9.94353099e-01 9.94514615e-01 9.94671950e-01
9.94825198e-01 9.94974452e-01 9.95119803e-01 9.95261340e-01
9.95399152e-01 9.95533324e-01 9.95663941e-01 9.95791085e-01
9.95914838e-01 9.96035280e-01 9.96152489e-01 9.96266541e-01
9.96377511e-01 9.96485474e-01 9.96590500e-01 9.96692662e-01
9.96792029e-01 9.96888667e-01 9.96982644e-01 9.97074025e-01
9.97162873e-01 9.97249251e-01 9.97333221e-01 9.97414842e-01
9.97494173e-01 9.97571270e-01 9.97646191e-01 9.97718991e-01
9.97789722e-01 9.97858438e-01 9.97925190e-01 9.97990029e-01
9.98053003e-01 9.98114161e-01 9.98173550e-01 9.98231216e-01
9.98287204e-01 9.98341558e-01 9.98394321e-01 9.98445534e-01
9.98495240e-01 9.98543478e-01 9.98590287e-01 9.98635705e-01
9.98679771e-01 9.98722519e-01 9.98763987e-01 9.98804209e-01
9.98843218e-01 9.98881048e-01 9.98917732e-01 9.98953301e-01
9.98987785e-01 9.99021215e-01 9.99053620e-01 9.99085029e-01
9.99115469e-01 9.99144969e-01 9.99173553e-01 9.99201250e-01
9.99228082e-01 9.99254076e-01 9.99279255e-01 9.99303642e-01
9.99327261e-01 9.99350133e-01 9.99372280e-01 9.99393724e-01
9.99414484e-01 9.99434580e-01 9.99454033e-01 9.99472861e-01
9.99491082e-01 9.99508715e-01 9.99525777e-01 9.99542285e-01
9.99558255e-01 9.99573704e-01 9.99588647e-01 9.99603100e-01
9.99617077e-01 9.99630593e-01 9.99643662e-01 9.99656298e-01
9.99668514e-01 9.99680322e-01 9.99691736e-01 9.99702767e-01
9.99713428e-01 9.99723730e-01 9.99733684e-01 9.99743300e-01
9.99752591e-01 9.99761565e-01 9.99770234e-01 9.99778606e-01
9.99786690e-01 9.99794497e-01 9.99802035e-01 9.99809313e-01
9.99816339e-01 9.99823120e-01 9.99829666e-01 9.99835984e-01
9.99842080e-01 9.99847963e-01 9.99853639e-01 9.99859115e-01
9.99864397e-01 9.99869493e-01 9.99874408e-01 9.99879148e-01
9.99883719e-01 9.99888126e-01 9.99892376e-01 9.99896473e-01
9.99900422e-01 9.99904229e-01 9.99907898e-01 9.99911434e-01
9.99914842e-01 9.99918125e-01 9.99921289e-01 9.99924336e-01
9.99927272e-01 9.99930100e-01 9.99932824e-01 9.99935447e-01
9.99937973e-01 9.99940405e-01 9.99942747e-01 9.99945001e-01
9.99947171e-01 9.99949260e-01 9.99951270e-01 9.99953205e-01
9.99955066e-01 9.99956858e-01 9.99958581e-01 9.99960239e-01
9.99961834e-01 9.99963368e-01 9.99964843e-01 9.99966262e-01
9.99967626e-01 9.99968938e-01 9.99970199e-01 9.99971411e-01
9.99972577e-01 9.99973697e-01 9.99974773e-01 9.99975808e-01
9.99976802e-01 9.99977757e-01 9.99978675e-01 9.99979556e-01
9.99980403e-01 9.99981216e-01 9.99981998e-01 9.99982748e-01
9.99983468e-01 9.99984159e-01 9.99984823e-01 9.99985461e-01
9.99986072e-01 9.99986660e-01 9.99987223e-01 9.99987764e-01
9.99988283e-01 9.99988780e-01 9.99989258e-01 9.99989716e-01
9.99990156e-01 9.99990577e-01 9.99990981e-01 9.99991369e-01
9.99991740e-01 9.99992097e-01 9.99992438e-01 9.99992766e-01
9.99993080e-01 9.99993381e-01 9.99993669e-01 9.99993945e-01
9.99994210e-01 9.99994463e-01 9.99994706e-01 9.99994939e-01
9.99995162e-01 9.99995375e-01 9.99995580e-01 9.99995776e-01
9.99995963e-01 9.99996143e-01 9.99996314e-01 9.99996479e-01
9.99996636e-01 9.99996787e-01 9.99996931e-01 9.99997069e-01
9.99997201e-01 9.99997327e-01 9.99997448e-01 9.99997563e-01
9.99997674e-01 9.99997780e-01 9.99997881e-01 9.99997977e-01
9.99998070e-01 9.99998158e-01 9.99998243e-01 9.99998323e-01
9.99998400e-01 9.99998474e-01 9.99998545e-01 9.99998612e-01
9.99998676e-01 9.99998738e-01 9.99998796e-01 9.99998853e-01
9.99998906e-01 9.99998957e-01 9.99999006e-01 9.99999053e-01
9.99999097e-01 9.99999140e-01 9.99999180e-01 9.99999219e-01
9.99999256e-01 9.99999291e-01 9.99999325e-01 9.99999357e-01
9.99999388e-01 9.99999417e-01 9.99999445e-01 9.99999471e-01
9.99999497e-01 9.99999521e-01 9.99999544e-01 9.99999566e-01
9.99999587e-01 9.99999607e-01 9.99999626e-01 9.99999644e-01
9.99999661e-01 9.99999678e-01 9.99999694e-01 9.99999709e-01
9.99999723e-01 9.99999737e-01 9.99999749e-01 9.99999762e-01
9.99999774e-01 9.99999785e-01 9.99999795e-01 9.99999806e-01
9.99999815e-01 9.99999824e-01 9.99999833e-01 9.99999841e-01
9.99999849e-01 9.99999857e-01 9.99999864e-01 9.99999871e-01
9.99999877e-01 9.99999884e-01 9.99999889e-01 9.99999895e-01
9.99999900e-01 9.99999905e-01 9.99999910e-01 9.99999915e-01
9.99999919e-01 9.99999923e-01 9.99999927e-01 9.99999931e-01
9.99999934e-01 9.99999938e-01 9.99999941e-01 9.99999944e-01
9.99999947e-01 9.99999950e-01 9.99999952e-01 9.99999955e-01
9.99999957e-01 9.99999959e-01 9.99999961e-01 9.99999963e-01
9.99999965e-01 9.99999967e-01 9.99999969e-01 9.99999971e-01
9.99999972e-01 9.99999974e-01 9.99999975e-01 9.99999976e-01
9.99999978e-01 9.99999979e-01 9.99999980e-01 9.99999981e-01
9.99999982e-01 9.99999983e-01 9.99999984e-01 9.99999985e-01
9.99999985e-01 9.99999986e-01 9.99999987e-01 9.99999988e-01
9.99999988e-01 9.99999989e-01 9.99999990e-01 9.99999990e-01
9.99999991e-01 9.99999991e-01 9.99999992e-01 9.99999992e-01
9.99999993e-01 9.99999993e-01 9.99999993e-01 9.99999994e-01
9.99999994e-01 9.99999994e-01 9.99999995e-01 9.99999995e-01
9.99999995e-01 9.99999996e-01 9.99999996e-01 9.99999996e-01
9.99999996e-01 9.99999996e-01 9.99999997e-01 9.99999997e-01
9.99999997e-01 9.99999997e-01 9.99999997e-01 9.99999997e-01
9.99999998e-01 9.99999998e-01 9.99999998e-01 9.99999998e-01
9.99999998e-01 9.99999998e-01 9.99999998e-01 9.99999998e-01
9.99999999e-01 9.99999999e-01 9.99999999e-01 9.99999999e-01
9.99999999e-01 9.99999999e-01 9.99999999e-01 9.99999999e-01
9.99999999e-01 9.99999999e-01 9.99999999e-01 9.99999999e-01
9.99999999e-01 9.99999999e-01 9.99999999e-01 9.99999999e-01
9.99999999e-01 9.99999999e-01 9.99999999e-01 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00]
[5.85049996e-05 5.91474858e-03 1.17706735e-02 1.76259644e-02
2.34803059e-02 2.93333829e-02 3.51848803e-02 4.10344835e-02
4.68818780e-02 5.27267498e-02 5.85687853e-02 6.44076713e-02
7.02430951e-02 7.60747446e-02 8.19023082e-02 8.77254751e-02
9.35439352e-02 9.93573791e-02 1.05165498e-01 1.10967985e-01
1.16764532e-01 1.22554834e-01 1.28338586e-01 1.34115484e-01
1.39885226e-01 1.45647509e-01 1.51402034e-01 1.57148501e-01
1.62886613e-01 1.68616072e-01 1.74336585e-01 1.80047856e-01
1.85749595e-01 1.91441509e-01 1.97123311e-01 2.02794711e-01
2.08455426e-01 2.14105169e-01 2.19743659e-01 2.25370616e-01
2.30985759e-01 2.36588812e-01 2.42179501e-01 2.47757550e-01
2.53322691e-01 2.58874652e-01 2.64413167e-01 2.69937972e-01
2.75448802e-01 2.80945398e-01 2.86427500e-01 2.91894852e-01
2.97347201e-01 3.02784294e-01 3.08205882e-01 3.13611719e-01
3.19001559e-01 3.24375160e-01 3.29732283e-01 3.35072690e-01
3.40396147e-01 3.45702422e-01 3.50991285e-01 3.56262510e-01
3.61515872e-01 3.66751149e-01 3.71968124e-01 3.77166579e-01
3.82346302e-01 3.87507082e-01 3.92648712e-01 3.97770985e-01
4.02873701e-01 4.07956661e-01 4.13019667e-01 4.18062527e-01
4.23085050e-01 4.28087049e-01 4.33068339e-01 4.38028738e-01
4.42968069e-01 4.47886156e-01 4.52782826e-01 4.57657910e-01
4.62511242e-01 4.67342658e-01 4.72151999e-01 4.76939106e-01
4.81703827e-01 4.86446011e-01 4.91165510e-01 4.95862179e-01
5.00535877e-01 5.05186465e-01 5.09813810e-01 5.14417779e-01
5.18998243e-01 5.23555076e-01 5.28088157e-01 5.32597367e-01
5.37082589e-01 5.41543711e-01 5.45980623e-01 5.50393219e-01
5.54781395e-01 5.59145052e-01 5.63484093e-01 5.67798425e-01
5.72087957e-01 5.76352601e-01 5.80592275e-01 5.84806896e-01
5.88996388e-01 5.93160675e-01 5.97299687e-01 6.01413355e-01
6.05501615e-01 6.09564403e-01 6.13601662e-01 6.17613336e-01
6.21599372e-01 6.25559721e-01 6.29494336e-01 6.33403173e-01
6.37286193e-01 6.41143359e-01 6.44974635e-01 6.48779991e-01
6.52559399e-01 6.56312833e-01 6.60040270e-01 6.63741693e-01
6.67417083e-01 6.71066428e-01 6.74689717e-01 6.78286942e-01
6.81858098e-01 6.85403184e-01 6.88922200e-01 6.92415149e-01
6.95882038e-01 6.99322876e-01 7.02737674e-01 7.06126448e-01
7.09489214e-01 7.12825992e-01 7.16136805e-01 7.19421677e-01
7.22680637e-01 7.25913715e-01 7.29120943e-01 7.32302357e-01
7.35457994e-01 7.38587895e-01 7.41692102e-01 7.44770661e-01
7.47823618e-01 7.50851024e-01 7.53852931e-01 7.56829393e-01
7.59780467e-01 7.62706212e-01 7.65606689e-01 7.68481961e-01
7.71332094e-01 7.74157155e-01 7.76957215e-01 7.79732344e-01
7.82482617e-01 7.85208109e-01 7.87908898e-01 7.90585065e-01
7.93236690e-01 7.95863857e-01 7.98466651e-01 8.01045160e-01
8.03599473e-01 8.06129680e-01 8.08635875e-01 8.11118150e-01
8.13576602e-01 8.16011328e-01 8.18422428e-01 8.20810001e-01
8.23174151e-01 8.25514981e-01 8.27832596e-01 8.30127103e-01
8.32398609e-01 8.34647225e-01 8.36873061e-01 8.39076228e-01
8.41256841e-01 8.43415014e-01 8.45550863e-01 8.47664505e-01
8.49756058e-01 8.51825641e-01 8.53873376e-01 8.55899382e-01
8.57903784e-01 8.59886705e-01 8.61848268e-01 8.63788600e-01
8.65707828e-01 8.67606077e-01 8.69483477e-01 8.71340156e-01
8.73176245e-01 8.74991873e-01 8.76787173e-01 8.78562275e-01
8.80317313e-01 8.82052420e-01 8.83767730e-01 8.85463377e-01
8.87139497e-01 8.88796226e-01 8.90433698e-01 8.92052051e-01
8.93651423e-01 8.95231950e-01 8.96793770e-01 8.98337022e-01
8.99861844e-01 9.01368375e-01 9.02856755e-01 9.04327123e-01
9.05779618e-01 9.07214381e-01 9.08631551e-01 9.10031270e-01
9.11413677e-01 9.12778913e-01 9.14127119e-01 9.15458435e-01
9.16773002e-01 9.18070961e-01 9.19352453e-01 9.20617618e-01
9.21866597e-01 9.23099532e-01 9.24316561e-01 9.25517827e-01
9.26703469e-01 9.27873627e-01 9.29028441e-01 9.30168051e-01
9.31292598e-01 9.32402219e-01 9.33497054e-01 9.34577242e-01
9.35642922e-01 9.36694232e-01 9.37731309e-01 9.38754292e-01
9.39763317e-01 9.40758521e-01 9.41740041e-01 9.42708013e-01
9.43662571e-01 9.44603853e-01 9.45531991e-01 9.46447121e-01
9.47349376e-01 9.48238889e-01 9.49115792e-01 9.49980220e-01
9.50832301e-01 9.51672169e-01 9.52499953e-01 9.53315783e-01
9.54119789e-01 9.54912098e-01 9.55692840e-01 9.56462142e-01
9.57220130e-01 9.57966931e-01 9.58702670e-01 9.59427473e-01
9.60141462e-01 9.60844761e-01 9.61537494e-01 9.62219782e-01
9.62891747e-01 9.63553508e-01 9.64205187e-01 9.64846901e-01
9.65478769e-01 9.66100910e-01 9.66713439e-01 9.67316472e-01
9.67910125e-01 9.68494513e-01 9.69069748e-01 9.69635944e-01
9.70193212e-01 9.70741665e-01 9.71281411e-01 9.71812562e-01
9.72335225e-01 9.72849509e-01 9.73355520e-01 9.73853365e-01
9.74343150e-01 9.74824978e-01 9.75298954e-01 9.75765180e-01
9.76223758e-01 9.76674790e-01 9.77118376e-01 9.77554615e-01
9.77983606e-01 9.78405446e-01 9.78820233e-01 9.79228062e-01
9.79629028e-01 9.80023226e-01 9.80410750e-01 9.80791690e-01
9.81166140e-01 9.81534190e-01 9.81895930e-01 9.82251448e-01
9.82600834e-01 9.82944174e-01 9.83281555e-01 9.83613062e-01
9.83938780e-01 9.84258793e-01 9.84573184e-01 9.84882035e-01
9.85185427e-01 9.85483442e-01 9.85776157e-01 9.86063652e-01
9.86346006e-01 9.86623294e-01 9.86895593e-01 9.87162979e-01
9.87425526e-01 9.87683308e-01 9.87936397e-01 9.88184866e-01
9.88428785e-01 9.88668226e-01 9.88903257e-01 9.89133948e-01
9.89360367e-01 9.89582580e-01 9.89800654e-01 9.90014655e-01
9.90224647e-01 9.90430694e-01 9.90632860e-01 9.90831207e-01
9.91025796e-01 9.91216689e-01 9.91403946e-01 9.91587626e-01
9.91767787e-01 9.91944488e-01 9.92117785e-01 9.92287736e-01
9.92454395e-01 9.92617818e-01 9.92778059e-01 9.92935171e-01
9.93089207e-01 9.93240220e-01 9.93388260e-01 9.93533378e-01
9.93675625e-01 9.93815049e-01 9.93951700e-01 9.94085625e-01
9.94216871e-01 9.94345486e-01 9.94471514e-01 9.94595002e-01
9.94715995e-01 9.94834535e-01 9.94950667e-01 9.95064434e-01
9.95175877e-01 9.95285039e-01 9.95391960e-01 9.95496680e-01
9.95599239e-01 9.95699677e-01 9.95798032e-01 9.95894342e-01
9.95988645e-01 9.96080977e-01 9.96171374e-01 9.96259873e-01
9.96346509e-01 9.96431316e-01 9.96514328e-01 9.96595579e-01
9.96675103e-01 9.96752931e-01 9.96829096e-01 9.96903629e-01
9.96976561e-01 9.97047923e-01 9.97117745e-01 9.97186056e-01
9.97252886e-01 9.97318263e-01 9.97382215e-01 9.97444770e-01
9.97505955e-01 9.97565797e-01 9.97624322e-01 9.97681557e-01
9.97737526e-01 9.97792254e-01 9.97845767e-01 9.97898088e-01
9.97949242e-01 9.97999251e-01 9.98048138e-01 9.98095927e-01
9.98142638e-01 9.98188295e-01 9.98232919e-01 9.98276530e-01
9.98319149e-01 9.98360797e-01 9.98401493e-01 9.98441257e-01
9.98480109e-01 9.98518067e-01 9.98555150e-01 9.98591375e-01
9.98626762e-01 9.98661327e-01 9.98695088e-01 9.98728062e-01
9.98760264e-01 9.98791713e-01 9.98822423e-01 9.98852411e-01
9.98881691e-01 9.98910279e-01 9.98938190e-01 9.98965439e-01
9.98992039e-01 9.99018005e-01 9.99043350e-01 9.99068088e-01
9.99092232e-01 9.99115796e-01 9.99138791e-01 9.99161231e-01
9.99183127e-01 9.99204492e-01 9.99225338e-01 9.99245675e-01
9.99265516e-01 9.99284871e-01 9.99303751e-01 9.99322167e-01
9.99340130e-01 9.99357649e-01 9.99374734e-01 9.99391396e-01
9.99407645e-01 9.99423488e-01 9.99438936e-01 9.99453998e-01
9.99468683e-01 9.99482999e-01 9.99496954e-01 9.99510558e-01
9.99523818e-01 9.99536743e-01 9.99549339e-01 9.99561616e-01
9.99573579e-01 9.99585237e-01 9.99596597e-01 9.99607666e-01
9.99618450e-01 9.99628957e-01 9.99639193e-01 9.99649165e-01
9.99658878e-01 9.99668339e-01 9.99677554e-01 9.99686529e-01
9.99695270e-01 9.99703782e-01 9.99712071e-01 9.99720142e-01
9.99728001e-01 9.99735653e-01 9.99743103e-01 9.99750355e-01
9.99757415e-01 9.99764287e-01 9.99770977e-01 9.99777488e-01
9.99783825e-01 9.99789992e-01 9.99795993e-01 9.99801833e-01
9.99807516e-01 9.99813045e-01 9.99818425e-01 9.99823659e-01
9.99828751e-01 9.99833705e-01 9.99838523e-01 9.99843210e-01
9.99847768e-01 9.99852202e-01 9.99856513e-01 9.99860707e-01
9.99864784e-01 9.99868749e-01 9.99872604e-01 9.99876352e-01
9.99879996e-01 9.99883539e-01 9.99886983e-01 9.99890330e-01
9.99893584e-01 9.99896747e-01 9.99899821e-01 9.99902808e-01
9.99905711e-01 9.99908532e-01 9.99911273e-01 9.99913937e-01
9.99916524e-01 9.99919038e-01 9.99921481e-01 9.99923853e-01
9.99926158e-01 9.99928397e-01 9.99930571e-01 9.99932683e-01
9.99934734e-01 9.99936725e-01 9.99938659e-01 9.99940537e-01
9.99942361e-01 9.99944131e-01 9.99945850e-01 9.99947519e-01
9.99949138e-01 9.99950711e-01 9.99952237e-01 9.99953718e-01
9.99955156e-01 9.99956551e-01 9.99957905e-01 9.99959219e-01
9.99960494e-01 9.99961731e-01 9.99962931e-01 9.99964096e-01
9.99965225e-01 9.99966321e-01 9.99967384e-01 9.99968415e-01
9.99969415e-01 9.99970385e-01 9.99971326e-01 9.99972238e-01
9.99973123e-01 9.99973980e-01 9.99974812e-01 9.99975618e-01
9.99976400e-01 9.99977158e-01 9.99977892e-01 9.99978604e-01
9.99979295e-01 9.99979964e-01 9.99980612e-01 9.99981240e-01
9.99981849e-01 9.99982439e-01 9.99983011e-01 9.99983565e-01
9.99984102e-01 9.99984622e-01 9.99985126e-01 9.99985614e-01
9.99986087e-01 9.99986545e-01 9.99986988e-01 9.99987418e-01
9.99987833e-01 9.99988236e-01 9.99988626e-01 9.99989004e-01
9.99989370e-01 9.99989724e-01 9.99990067e-01 9.99990399e-01
9.99990720e-01 9.99991031e-01 9.99991332e-01 9.99991623e-01
9.99991905e-01 9.99992178e-01 9.99992442e-01 9.99992698e-01
9.99992945e-01 9.99993184e-01 9.99993416e-01 9.99993640e-01
9.99993856e-01 9.99994066e-01 9.99994268e-01 9.99994465e-01
9.99994654e-01 9.99994838e-01 9.99995015e-01 9.99995187e-01
9.99995352e-01 9.99995513e-01 9.99995668e-01 9.99995818e-01
9.99995963e-01 9.99996103e-01 9.99996238e-01 9.99996369e-01
9.99996496e-01 9.99996618e-01 9.99996736e-01 9.99996851e-01
9.99996961e-01 9.99997068e-01 9.99997171e-01 9.99997271e-01
9.99997367e-01 9.99997460e-01 9.99997550e-01 9.99997636e-01
9.99997720e-01 9.99997801e-01 9.99997879e-01 9.99997955e-01
9.99998028e-01 9.99998098e-01 9.99998166e-01 9.99998232e-01
9.99998295e-01 9.99998357e-01 9.99998416e-01 9.99998473e-01
9.99998528e-01 9.99998581e-01 9.99998632e-01 9.99998682e-01
9.99998730e-01 9.99998776e-01 9.99998820e-01 9.99998863e-01
9.99998905e-01 9.99998945e-01 9.99998983e-01 9.99999020e-01
9.99999056e-01 9.99999091e-01 9.99999124e-01 9.99999157e-01
9.99999188e-01 9.99999218e-01 9.99999247e-01 9.99999275e-01
9.99999301e-01 9.99999327e-01 9.99999352e-01 9.99999376e-01
9.99999400e-01 9.99999422e-01 9.99999444e-01 9.99999465e-01
9.99999485e-01 9.99999504e-01 9.99999523e-01 9.99999541e-01
9.99999558e-01 9.99999575e-01 9.99999591e-01 9.99999606e-01
9.99999621e-01 9.99999635e-01 9.99999649e-01 9.99999663e-01
9.99999675e-01 9.99999688e-01 9.99999700e-01 9.99999711e-01
9.99999722e-01 9.99999733e-01 9.99999743e-01 9.99999753e-01
9.99999762e-01 9.99999772e-01 9.99999780e-01 9.99999789e-01
9.99999797e-01 9.99999805e-01 9.99999812e-01 9.99999820e-01
9.99999827e-01 9.99999834e-01 9.99999840e-01 9.99999846e-01
9.99999852e-01 9.99999858e-01 9.99999864e-01 9.99999869e-01
9.99999874e-01 9.99999879e-01 9.99999884e-01 9.99999888e-01
9.99999893e-01 9.99999897e-01 9.99999901e-01 9.99999905e-01
9.99999909e-01 9.99999912e-01 9.99999916e-01 9.99999919e-01
9.99999922e-01 9.99999926e-01 9.99999929e-01 9.99999931e-01
9.99999934e-01 9.99999937e-01 9.99999939e-01 9.99999942e-01
9.99999944e-01 9.99999946e-01 9.99999949e-01 9.99999951e-01
9.99999953e-01 9.99999955e-01 9.99999956e-01 9.99999958e-01
9.99999960e-01 9.99999961e-01 9.99999963e-01 9.99999965e-01
9.99999966e-01 9.99999967e-01 9.99999969e-01 9.99999970e-01
9.99999971e-01 9.99999972e-01 9.99999974e-01 9.99999975e-01
9.99999976e-01 9.99999977e-01 9.99999978e-01 9.99999979e-01
9.99999979e-01 9.99999980e-01 9.99999981e-01 9.99999982e-01
9.99999983e-01 9.99999983e-01 9.99999984e-01 9.99999985e-01
9.99999985e-01 9.99999986e-01 9.99999987e-01 9.99999987e-01
9.99999988e-01 9.99999988e-01 9.99999989e-01 9.99999989e-01
9.99999990e-01 9.99999990e-01 9.99999991e-01 9.99999991e-01
9.99999991e-01 9.99999992e-01 9.99999992e-01 9.99999992e-01
9.99999993e-01 9.99999993e-01 9.99999993e-01 9.99999994e-01
9.99999994e-01 9.99999994e-01 9.99999994e-01 9.99999995e-01
9.99999995e-01 9.99999995e-01 9.99999995e-01 9.99999995e-01
9.99999996e-01 9.99999996e-01 9.99999996e-01 9.99999996e-01
9.99999996e-01 9.99999997e-01 9.99999997e-01 9.99999997e-01
9.99999997e-01 9.99999997e-01 9.99999997e-01 9.99999997e-01
9.99999997e-01 9.99999998e-01 9.99999998e-01 9.99999998e-01
9.99999998e-01 9.99999998e-01 9.99999998e-01 9.99999998e-01
9.99999998e-01 9.99999998e-01 9.99999998e-01 9.99999998e-01
9.99999999e-01 9.99999999e-01 9.99999999e-01 9.99999999e-01
9.99999999e-01 9.99999999e-01 9.99999999e-01 9.99999999e-01
9.99999999e-01 9.99999999e-01 9.99999999e-01 9.99999999e-01
9.99999999e-01 9.99999999e-01 9.99999999e-01 9.99999999e-01
9.99999999e-01 9.99999999e-01 9.99999999e-01 9.99999999e-01
9.99999999e-01 9.99999999e-01 9.99999999e-01 9.99999999e-01
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00
1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00]
[4.14689061e-05 4.19244976e-03 8.34331714e-03 1.24939587e-02
1.66442621e-02 2.07941152e-02 2.49434055e-02 2.90920209e-02
3.32398493e-02 3.73867785e-02 4.15326966e-02 4.56774916e-02
4.98210516e-02 5.39632649e-02 5.81040199e-02 6.22432050e-02
6.63807089e-02 7.05164202e-02 7.46502280e-02 7.87820212e-02
8.29116890e-02 8.70391209e-02 9.11642063e-02 9.52868350e-02
9.94068971e-02 1.03524283e-01 1.07638882e-01 1.11750585e-01
1.15859284e-01 1.19964869e-01 1.24067232e-01 1.28166264e-01
1.32261857e-01 1.36353903e-01 1.40442295e-01 1.44526925e-01
1.48607687e-01 1.52684473e-01 1.56757177e-01 1.60825694e-01
1.64889917e-01 1.68949741e-01 1.73005062e-01 1.77055774e-01
1.81101773e-01 1.85142956e-01 1.89179219e-01 1.93210458e-01
1.97236571e-01 2.01257456e-01 2.05273011e-01 2.09283134e-01
2.13287725e-01 2.17286682e-01 2.21279906e-01 2.25267296e-01
2.29248754e-01 2.33224181e-01 2.37193479e-01 2.41156549e-01
2.45113294e-01 2.49063618e-01 2.53007425e-01 2.56944617e-01
2.60875101e-01 2.64798781e-01 2.68715564e-01 2.72625354e-01
2.76528060e-01 2.80423589e-01 2.84311848e-01 2.88192747e-01
2.92066193e-01 2.95932098e-01 2.99790370e-01 3.03640922e-01
3.07483664e-01 3.11318509e-01 3.15145369e-01 3.18964157e-01
3.22774788e-01 3.26577175e-01 3.30371234e-01 3.34156881e-01
3.37934032e-01 3.41702605e-01 3.45462516e-01 3.49213685e-01
3.52956030e-01 3.56689471e-01 3.60413928e-01 3.64129323e-01
3.67835577e-01 3.71532613e-01 3.75220353e-01 3.78898722e-01
3.82567645e-01 3.86227045e-01 3.89876850e-01 3.93516985e-01
3.97147379e-01 4.00767959e-01 4.04378654e-01 4.07979393e-01
4.11570108e-01 4.15150728e-01 4.18721186e-01 4.22281414e-01
4.25831345e-01 4.29370914e-01 4.32900055e-01 4.36418703e-01
4.39926795e-01 4.43424267e-01 4.46911059e-01 4.50387107e-01
4.53852351e-01 4.57306732e-01 4.60750191e-01 4.64182668e-01
4.67604106e-01 4.71014449e-01 4.74413641e-01 4.77801626e-01
4.81178350e-01 4.84543759e-01 4.87897800e-01 4.91240421e-01
4.94571572e-01 4.97891201e-01 5.01199258e-01 5.04495696e-01
5.07780465e-01 5.11053519e-01 5.14314811e-01 5.17564295e-01
5.20801927e-01 5.24027662e-01 5.27241457e-01 5.30443270e-01
5.33633058e-01 5.36810782e-01 5.39976401e-01 5.43129876e-01
5.46271168e-01 5.49400240e-01 5.52517055e-01 5.55621577e-01
5.58713771e-01 5.61793602e-01 5.64861036e-01 5.67916042e-01
5.70958586e-01 5.73988639e-01 5.77006168e-01 5.80011146e-01
5.83003542e-01 5.85983330e-01 5.88950481e-01 5.91904970e-01
5.94846771e-01 5.97775859e-01 6.00692210e-01 6.03595801e-01
6.06486609e-01 6.09364613e-01 6.12229792e-01 6.15082126e-01
6.17921596e-01 6.20748183e-01 6.23561870e-01 6.26362639e-01
6.29150475e-01 6.31925362e-01 6.34687285e-01 6.37436230e-01
6.40172186e-01 6.42895138e-01 6.45605076e-01 6.48301989e-01
6.50985867e-01 6.53656700e-01 6.56314480e-01 6.58959200e-01
6.61590852e-01 6.64209429e-01 6.66814927e-01 6.69407340e-01
6.71986665e-01 6.74552898e-01 6.77106035e-01 6.79646077e-01
6.82173020e-01 6.84686866e-01 6.87187613e-01 6.89675262e-01
6.92149817e-01 6.94611278e-01 6.97059649e-01 6.99494933e-01
7.01917135e-01 7.04326260e-01 7.06722313e-01 7.09105302e-01
7.11475233e-01 7.13832113e-01 7.16175953e-01 7.18506759e-01
7.20824543e-01 7.23129314e-01 7.25421084e-01 7.27699864e-01
7.29965666e-01 7.32218504e-01 7.34458391e-01 7.36685341e-01
7.38899369e-01 7.41100491e-01 7.43288722e-01 7.45464079e-01
7.47626579e-01 7.49776240e-01 7.51913081e-01 7.54037120e-01
7.56148376e-01 7.58246871e-01 7.60332625e-01 7.62405659e-01
7.64465994e-01 7.66513654e-01 7.68548661e-01 7.70571038e-01
7.72580809e-01 7.74578000e-01 7.76562635e-01 7.78534739e-01
7.80494339e-01 7.82441462e-01 7.84376133e-01 7.86298382e-01
7.88208235e-01 7.90105723e-01 7.91990872e-01 7.93863714e-01
7.95724278e-01 7.97572595e-01 7.99408695e-01 8.01232611e-01
8.03044373e-01 8.04844013e-01 8.06631566e-01 8.08407064e-01
8.10170540e-01 8.11922028e-01 8.13661563e-01 8.15389180e-01
8.17104914e-01 8.18808800e-01 8.20500875e-01 8.22181174e-01
8.23849736e-01 8.25506596e-01 8.27151792e-01 8.28785363e-01
8.30407346e-01 8.32017780e-01 8.33616705e-01 8.35204159e-01
8.36780181e-01 8.38344813e-01 8.39898094e-01 8.41440065e-01
8.42970766e-01 8.44490240e-01 8.45998527e-01 8.47495669e-01
8.48981709e-01 8.50456689e-01 8.51920652e-01 8.53373640e-01
8.54815697e-01 8.56246866e-01 8.57667191e-01 8.59076716e-01
8.60475486e-01 8.61863545e-01 8.63240937e-01 8.64607708e-01
8.65963902e-01 8.67309566e-01 8.68644744e-01 8.69969483e-01
8.71283829e-01 8.72587828e-01 8.73881526e-01 8.75164970e-01
8.76438207e-01 8.77701284e-01 8.78954248e-01 8.80197146e-01
8.81430026e-01 8.82652935e-01 8.83865922e-01 8.85069034e-01
8.86262319e-01 8.87445825e-01 8.88619602e-01 8.89783697e-01
8.90938159e-01 8.92083036e-01 8.93218379e-01 8.94344234e-01
8.95460652e-01 8.96567682e-01 8.97665373e-01 8.98753774e-01
8.99832935e-01 9.00902905e-01 9.01963734e-01 9.03015471e-01
9.04058166e-01 9.05091869e-01 9.06116629e-01 9.07132497e-01
9.08139522e-01 9.09137755e-01 9.10127245e-01 9.11108042e-01
9.12080197e-01 9.13043759e-01 9.13998779e-01 9.14945307e-01
9.15883392e-01 9.16813085e-01 9.17734437e-01 9.18647496e-01
9.19552314e-01 9.20448941e-01 9.21337426e-01 9.22217819e-01
9.23090172e-01 9.23954533e-01 9.24810953e-01 9.25659482e-01
9.26500170e-01 9.27333066e-01 9.28158221e-01 9.28975685e-01
9.29785506e-01 9.30587736e-01 9.31382423e-01 9.32169617e-01
9.32949368e-01 9.33721725e-01 9.34486738e-01 9.35244456e-01
9.35994928e-01 9.36738203e-01 9.37474330e-01 9.38203360e-01
9.38925339e-01 9.39640318e-01 9.40348344e-01 9.41049467e-01
9.41743735e-01 9.42431197e-01 9.43111900e-01 9.43785893e-01
9.44453224e-01 9.45113942e-01 9.45768093e-01 9.46415725e-01
9.47056887e-01 9.47691626e-01 9.48319988e-01 9.48942022e-01
9.49557774e-01 9.50167291e-01 9.50770620e-01 9.51367808e-01
9.51958901e-01 9.52543945e-01 9.53122988e-01 9.53696074e-01
9.54263250e-01 9.54824561e-01 9.55380054e-01 9.55929773e-01
9.56473764e-01 9.57012072e-01 9.57544742e-01 9.58071818e-01
9.58593346e-01 9.59109370e-01 9.59619933e-01 9.60125081e-01
9.60624857e-01 9.61119305e-01 9.61608468e-01 9.62092389e-01
9.62571113e-01 9.63044682e-01 9.63513138e-01 9.63976525e-01
9.64434885e-01 9.64888260e-01 9.65336692e-01 9.65780224e-01
9.66218896e-01 9.66652751e-01 9.67081829e-01 9.67506171e-01
9.67925820e-01 9.68340814e-01 9.68751196e-01 9.69157004e-01
9.69558280e-01 9.69955063e-01 9.70347393e-01 9.70735309e-01
9.71118850e-01 9.71498056e-01 9.71872965e-01 9.72243617e-01
9.72610049e-01 9.72972300e-01 9.73330408e-01 9.73684410e-01
9.74034344e-01 9.74380248e-01 9.74722159e-01 9.75060113e-01
9.75394148e-01 9.75724300e-01 9.76050605e-01 9.76373099e-01
9.76691819e-01 9.77006799e-01 9.77318076e-01 9.77625684e-01
9.77929659e-01 9.78230035e-01 9.78526848e-01 9.78820130e-01
9.79109917e-01 9.79396242e-01 9.79679139e-01 9.79958641e-01
9.80234783e-01 9.80507596e-01 9.80777113e-01 9.81043368e-01
9.81306393e-01 9.81566219e-01 9.81822879e-01 9.82076404e-01
9.82326826e-01 9.82574176e-01 9.82818486e-01 9.83059786e-01
9.83298106e-01 9.83533478e-01 9.83765930e-01 9.83995495e-01
9.84222200e-01 9.84446076e-01 9.84667152e-01 9.84885457e-01
9.85101020e-01 9.85313870e-01 9.85524034e-01 9.85731543e-01
9.85936422e-01 9.86138701e-01 9.86338406e-01 9.86535566e-01
9.86730207e-01 9.86922357e-01 9.87112042e-01 9.87299288e-01
9.87484123e-01 9.87666571e-01 9.87846660e-01 9.88024415e-01
9.88199861e-01 9.88373023e-01 9.88543927e-01 9.88712598e-01
9.88879060e-01 9.89043338e-01 9.89205456e-01 9.89365437e-01
9.89523307e-01 9.89679089e-01 9.89832805e-01 9.89984480e-01
9.90134137e-01 9.90281797e-01 9.90427485e-01 9.90571222e-01
9.90713030e-01 9.90852933e-01 9.90990951e-01 9.91127107e-01
9.91261421e-01 9.91393916e-01 9.91524612e-01 9.91653530e-01
9.91780691e-01 9.91906116e-01 9.92029824e-01 9.92151837e-01
9.92272173e-01 9.92390854e-01 9.92507898e-01 9.92623325e-01
9.92737154e-01 9.92849404e-01 9.92960095e-01 9.93069245e-01
9.93176871e-01 9.93282994e-01 9.93387631e-01 9.93490800e-01
9.93592518e-01 9.93692804e-01 9.93791676e-01 9.93889149e-01
9.93985242e-01 9.94079971e-01 9.94173354e-01 9.94265407e-01
9.94356146e-01 9.94445588e-01 9.94533749e-01 9.94620645e-01
9.94706291e-01 9.94790704e-01 9.94873899e-01 9.94955892e-01
9.95036696e-01 9.95116329e-01 9.95194804e-01 9.95272136e-01
9.95348339e-01 9.95423429e-01 9.95497420e-01 9.95570325e-01
9.95642158e-01 9.95712934e-01 9.95782666e-01 9.95851368e-01
9.95919052e-01 9.95985733e-01 9.96051423e-01 9.96116136e-01
9.96179884e-01 9.96242679e-01 9.96304535e-01 9.96365464e-01
9.96425478e-01 9.96484589e-01 9.96542809e-01 9.96600150e-01
9.96656625e-01 9.96712243e-01 9.96767018e-01 9.96820960e-01
9.96874080e-01 9.96926390e-01 9.96977900e-01 9.97028622e-01
9.97078566e-01 9.97127743e-01 9.97176163e-01 9.97223836e-01
9.97270774e-01 9.97316985e-01 9.97362480e-01 9.97407269e-01
9.97451361e-01 9.97494767e-01 9.97537495e-01 9.97579556e-01
9.97620958e-01 9.97661711e-01 9.97701823e-01 9.97741305e-01
9.97780164e-01 9.97818409e-01 9.97856050e-01 9.97893094e-01
9.97929551e-01 9.97965428e-01 9.98000733e-01 9.98035476e-01
9.98069664e-01 9.98103304e-01 9.98136405e-01 9.98168975e-01
9.98201021e-01 9.98232551e-01 9.98263572e-01 9.98294092e-01
9.98324118e-01 9.98353656e-01 9.98382715e-01 9.98411302e-01
9.98439422e-01 9.98467084e-01 9.98494293e-01 9.98521057e-01
9.98547381e-01 9.98573274e-01 ... (long printed array truncated; values increase monotonically toward 1.0) ... 9.99999792e-01 9.99999798e-01]
###Markdown
Comparing Bhattacharyya with other bounds
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import mahalanobis
from scipy.stats import norm
from scipy.stats import multivariate_normal as mvn
# Note: mahalanobis(u, v, VI) takes the *inverse* covariance as its third argument
print(mahalanobis(0,1,1/4))
sigma=3
print(1-norm.cdf(-mahalanobis(0,1,1/sigma)/2, scale=sigma))
print(1-norm.cdf(0.5, loc=1, scale=sigma))
def bhattacharyya_upper(mu1,mu2,Sigma_inv):
    # Bhattacharyya upper bound on the Bayes error for two equal-covariance
    # Gaussians: Pe <= sqrt(p1*p2) * exp(-D_B), with D_B = (1/8) * d_M^2
    part1 = (1/8)*mahalanobis(mu1,mu2,Sigma_inv)**2
    part2 = 0.5*0.5  # product of the equal priors, p1*p2 = 1/4
    return np.e**(-part1)*np.sqrt(part2)
def bhattacharyya_lower(mu1,mu2,Sigma_inv):
    # Matching lower bound: Pe >= (1 - sqrt(1 - 4*p1*p2*exp(-2*D_B))) / 2
    part1 = (1/8)*mahalanobis(mu1,mu2,Sigma_inv)**2
    part2 = 0.5*0.5  # product of the equal priors, p1*p2 = 1/4
    return (1-np.sqrt(1-4*part2*np.e**(-2*part1)))/2
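# Added sanity check (a sketch, not part of the original analysis): for equal
# priors the two bounds above should bracket the exact Bayes error
# Pe = Phi(-d_M/2). The numbers below are illustrative (sigma = 3 in 1-D).
_mu1, _mu2 = np.array([0.0]), np.array([1.0])
_VI = np.array([[1/9]])  # inverse covariance for sigma = 3
_d = mahalanobis(_mu1, _mu2, _VI)
print(bhattacharyya_lower(_mu1, _mu2, _VI),  # lower bound on Pe
      norm.cdf(-_d/2),                       # exact Pe
      bhattacharyya_upper(_mu1, _mu2, _VI))  # upper bound on Pe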
def empirical_success_rate(mu1, mu2, sigma, dims, num_samples):
size = [num_samples,dims]
samples = norm.rvs(size=size,loc=0,scale=sigma)
pdf_1 = mvn.pdf(samples, mean=mu1, cov=sigma**2)
pdf_2 = mvn.pdf(samples, mean=mu2, cov=sigma**2)
pdf_sum = pdf_1 + pdf_2
belief_1 = pdf_1 /pdf_sum
belief_2 = pdf_2 /pdf_sum
beliefs = np.vstack([belief_1,belief_2]).T
decision = np.argmax(beliefs, axis=1)
_, counts = np.unique(decision,return_counts=True)
return counts[0]/size[0]
def empirical_lda_transformed_success_rate(mu1, mu2, sigma, dims, num_samples, enable_plot=False):
size = [num_samples//2,dims]
samples_1 = norm.rvs(size=size,loc=mu1,scale=sigma)
samples_2 = norm.rvs(size=size,loc=mu2,scale=sigma)
samples = np.concatenate([samples_1, samples_2])
sigma_inv = 1/(sigma**2)
alpha = (mu1-mu2)*sigma_inv
y = np.matmul(alpha, np.transpose(samples))
if enable_plot:
bins = 50
plt.hist(samples_1, color="red", bins=bins, alpha=0.6)
plt.hist(samples_2, color="blue", bins=bins, alpha=0.6)
plt.show()
sigma_inv = 1/(sigma**2) *np.identity(dims)
muy1 = np.dot((mu1-mu2)*1/(sigma**2),mu1)
muy2 = np.dot((mu1-mu2)*1/(sigma**2),mu2)
sigy = mahalanobis(mu1,mu2,sigma_inv)**2
xs = np.linspace(np.min(y), np.max(y),1000)
fig, axes = plt.subplots()
axes.plot(xs,norm.pdf(xs,loc=muy1, scale=np.sqrt(sigy)),c="red")
axes.plot(xs,norm.pdf(xs,loc=muy2, scale=np.sqrt(sigy)),c="blue")
ax2 = axes.twinx()
ax2.hist(y[:num_samples//2], color="red", bins=bins, alpha=0.3)
ax2.hist(y[num_samples//2:], color="blue", bins=bins, alpha=0.3)
plt.show()
xs = np.linspace(np.min(y), np.max(y),1000)
fig, axes = plt.subplots()
axes.plot(xs,norm.pdf(xs,loc=muy1, scale=np.sqrt(sigy)),c="red")
axes.plot(xs,norm.pdf(xs,loc=muy2, scale=np.sqrt(sigy)),c="blue")
axes.plot(xs,norm.pdf(xs,loc=mu1, scale=sigma),c="green")
axes.plot(xs,norm.pdf(xs,loc=mu2, scale=sigma),c="black")
plt.show()
sigma_inv = 1/(sigma**2)
condition = 0.5 * sigma_inv * np.dot(mu1-mu2,mu1+mu2)
decision = y >= condition
return (np.sum(decision[:num_samples//2])+np.sum(~decision[num_samples//2:]))/num_samples
empirical_lda_transformed_success_rate(np.array([0]),np.array([1]),5,1,20000,True)
def empirical_lda_success_rate(mu1, mu2, sigma, dims, num_samples):
size = [num_samples//2,dims]
samples_1 = norm.rvs(size=size,loc=mu1,scale=sigma)
samples_2 = norm.rvs(size=size,loc=mu2,scale=sigma)
samples = np.concatenate([samples_1, samples_2])
alpha = (1/sigma)*(mu1-mu2)
mu = (mu1+mu2)/2
mu = np.repeat([mu],num_samples,axis=0)
h = np.matmul(alpha,np.transpose(samples-mu))
decision = h > 0
return (np.sum(decision[:num_samples//2])+np.sum(~decision[num_samples//2:]))/num_samples
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.metrics import accuracy_score
def empirical_lda_sklearn_success_rate(mu1, mu2, sigma, dims, num_samples):
size = [num_samples//2,dims]
samples_1 = norm.rvs(size=size,loc=mu1,scale=sigma)
samples_2 = norm.rvs(size=size,loc=mu2,scale=sigma)
samples = np.concatenate([samples_1, samples_2])
labels = np.concatenate([np.zeros(num_samples//2),np.ones(num_samples//2)])
classifier = LDA()
classifier.fit(samples,labels)
predictions = classifier.predict(samples)
return accuracy_score(labels, predictions)
sigmas =np.linspace(0.0001,5,100)
y_1 = []
dim = 1
mu1 = np.array([0]*dim)
mu2 = np.array([1]*dim)
y_2 = []
y_3 = []
y_4 = []
y_5 = []
for sigma in sigmas:
Sigma_inv = 1/(sigma**2) *np.identity(dim)
muy1 = np.dot((mu1-mu2)*1/(sigma**2),mu1)
muy2 = np.dot((mu1-mu2)*1/(sigma**2),mu2)
sigy = mahalanobis(mu1,mu2,Sigma_inv)**2
y_1.append(1-norm.cdf(muy2+sigy/2,loc=muy1, scale=np.sqrt(sigy)))
y_2.append(1-norm.cdf(-mahalanobis(mu1,mu2,Sigma_inv)/2, loc = 0, scale=1))
y_3.append(1-bhattacharyya_lower(mu1,mu2,Sigma_inv))
y_4.append(1-bhattacharyya_upper(mu1,mu2,Sigma_inv))
y_5.append(empirical_success_rate(mu1,mu2,sigma, dim, 20000))
plt.plot(sigmas, y_2, label="Suspicious Analytical Solution")
plt.plot(sigmas,y_1, label="exact",ls="--")
plt.plot(sigmas,y_5, label="Simulated exact")
plt.plot(sigmas, y_3,label="Bhattacharyya Upper")
plt.plot(sigmas, y_4,label="Bhattacharyya Lower")
plt.xlabel(r"$\sigma$")
plt.ylabel(r"$\rho_s$")
plt.grid(ls="dashed")
plt.legend()
plt.show()
y_5
from scipy.stats import multivariate_normal as mvn
sigma = 2
dim=10
size = [20000,dim]
samples = norm.rvs(size=size,loc=0,scale=sigma)
pdf_1 = mvn.pdf(samples, mean=[0]*dim, cov=sigma**2)  # cov expects the variance, not the std
pdf_2 = mvn.pdf(samples, mean=[1]*dim, cov=sigma**2)
pdf_sum = pdf_1 + pdf_2
belief_1 = pdf_1 /pdf_sum
belief_2 = pdf_2 /pdf_sum
beliefs = np.vstack([belief_1,belief_2]).T
decision = np.argmax(beliefs, axis=1)
_, (n_correct, n_wrong) = np.unique(decision,return_counts=True)  # all samples come from class 1, so decision 0 is correct
print("Empirical success rate: {:.2f}".format( n_correct/size[0]))
n = 10000
mu1 = 0
mu2 = 1
delta_f = np.linalg.norm(mu1-mu2)
delta = 1e-6
epsilon = 2
std = delta_f*np.sqrt(2*np.log(1.25/delta))/epsilon
samples_1 = norm.rvs(size=n,loc=mu1,scale=std)
samples_2 = norm.rvs(size=n,loc=mu2,scale=std)
samples = np.concatenate([samples_1,samples_2])
labels = np.concatenate([np.zeros(n),np.ones(n)])
belief_1 = norm.pdf(samples,loc=mu1,scale=std)/(norm.pdf(samples,loc=mu1,scale=std)+norm.pdf(samples,loc=mu2,scale=std))
belief_2 = norm.pdf(samples,loc=mu2,scale=std)/(norm.pdf(samples,loc=mu1,scale=std)+norm.pdf(samples,loc=mu2,scale=std))
beliefs = np.vstack([belief_1,belief_2]).T
max_belief = np.max(beliefs)
success_rate = np.sum(np.argmax(beliefs[:n],axis=1) == 0)/(n)
print(max_belief, success_rate)
###Output
0.8323356713559683 0.5696
###Markdown
Empirical Attacker Analysis
###Code
from scipy.stats import multivariate_normal as mvn
def run_attack(D_1, D_2, query, sensitivity, epsilon, delta, iterations):
"""
:D_1: true data set
:D_2: alternative data set
"""
std = sensitivity*np.sqrt(2*np.log(1.25/delta))/epsilon
mu_1 = query(D_1)
mu_2 = query(D_2)
noisy_results = []
prior_1, prior_2 = 0.5, 0.5
dim = np.shape(mu_1)
    # Alternative deterministic output (only used by the commented-out line below)
    magnitude = epsilon*(std)**2/sensitivity - sensitivity/2
    direction = (mu_1-mu_2)/np.linalg.norm(mu_1-mu_2)
    tails = direction*magnitude
    private = mu_1 + tails
noisy_results = norm.rvs(size=(iterations, )+dim,loc=mu_1,scale=std)
#noisy_results = [private]*iterations
unbiased_guesses = []
biased_guesses = []
unbiased_beliefs_1 = [0.5]
unbiased_beliefs_2 = [0.5]
biased_beliefs_1 = [0.5]
biased_beliefs_2 = [0.5]
#guess
for i, result in enumerate(noisy_results):
pdf_1 = mvn.pdf(result, mean=mu_1, cov=std**2)
pdf_2 = mvn.pdf(result, mean=mu_2, cov=std**2)
pdf_sum = prior_1*pdf_1 + prior_2*pdf_2
prior_1 = biased_belief_1 = prior_1 * pdf_1 / pdf_sum
prior_2 = biased_belief_2 = prior_2 * pdf_2 / pdf_sum
biased_beliefs_1.append(biased_belief_1)
biased_beliefs_2.append(biased_belief_2)
unbiased_belief_1 = pdf_1 / (pdf_1 + pdf_2)
unbiased_belief_2 = pdf_2 / (pdf_1 + pdf_2)
unbiased_beliefs_1.append(unbiased_belief_1)
unbiased_beliefs_2.append(unbiased_belief_2)
biased_guesses.append(np.argmax([biased_belief_1, biased_belief_2]))
unbiased_guesses.append(np.argmax([unbiased_belief_1, unbiased_belief_2]))
mean_result = np.mean(noisy_results)
pdf_1 = mvn.pdf(mean_result, mean=mu_1, cov=std**2/iterations)
pdf_2 = mvn.pdf(mean_result, mean=mu_2, cov=std**2/iterations)
mean_belief_1 = pdf_1 / (pdf_1 + pdf_2)
mean_belief_2 = pdf_2 / (pdf_1 + pdf_2)
mean_guess = np.argmax([mean_belief_1, mean_belief_2])
#print(mean_belief_1, mean_belief_2, biased_belief_1, biased_belief_2)
return unbiased_beliefs_1, unbiased_beliefs_2, unbiased_guesses, biased_beliefs_1,\
biased_beliefs_2, biased_guesses, mean_guess, noisy_results
D_1 = [5,10,2]
D_2 = [5,1,2]
query = np.sum
sensitivity = 9
iterations = 10
epsilon = 5
delta = 0.001
epsilon_i = epsilon/iterations
delta_i = delta/iterations
_,_,_,beliefs_1, beliefs_2, _ , _, noisy_results = run_attack(D_1,D_2, query,sensitivity,epsilon_i, delta_i, iterations)
plt.style.use("fast")
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
matplotlib.rcParams.update({'font.size': 14})
fig, axes = plt.subplots(1,1,figsize=(4,3))
colors = plt.cm.RdBu(np.linspace(0.0,0.9,20))
xs = np.arange(0,iterations+1)
mu_1, mu_2 = query(D_1), query(D_2)
line1 = axes.plot(xs[1:], noisy_results, label=r"$\mathcal{M}_{Gau}(\mathcal{D})$",color="grey", alpha=0.5)# ,ls="dotted")
axes.set_xlabel("Iterations")
axes.set_xlim([0,iterations])
axes.set_ylabel(r"Output of $\mathcal{M}_{Gau}$")
axes.set_yticks([-10000,-5000,0,5000,10000])
ax2 = axes.twinx()
line2 = ax2.plot(xs, beliefs_1, color=colors[3], label=r"$\beta(\mathcal{D})$")
line3 = ax2.plot(xs, beliefs_2, color=colors[-2], label=r"$\beta(\mathcal{D}')$")
ax2.set_ylabel(r"$\beta$")
ax2.set_yticks([0.42,0.46,0.5,0.54,0.58])
lines = line1+line2+line3
labels = [line.get_label() for line in lines]
plt.legend(lines, labels, loc='upper center', bbox_to_anchor=(0.5, -0.25), ncol=3, facecolor="white")
axes.grid(ls="dashed")
#plt.savefig("synthethic_attack_sample.pdf", bbox_inches='tight')
plt.show()
runs = 5
iterations=10
query = np.sum
sensitivity = 9
epsilon = 5
delta = 0.001
epsilon_i = epsilon/iterations
delta_i = delta/iterations
biased_decisions = []
majority_decisions = []
mean_decisions = []
success_rates = []
for i in range(runs):
_,_,unbiased_guesses,_,_, biased_guesses,mean_guess, noisy_results = run_attack(D_1,D_2, query,sensitivity,epsilon_i, delta_i, iterations)
biased_decisions.append(biased_guesses[-1])
_,counts = np.unique(unbiased_guesses, return_counts=True)
majority_decisions.append(np.argmax(counts))
mean_decisions.append(mean_guess)
plt.style.use("fast")
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
matplotlib.rcParams.update({'font.size': 14})
fig, (axes, axes2, axes3) = plt.subplots(1,3,figsize=(20,3.1))
colors = plt.cm.RdBu(np.linspace(0.0,0.9,20))
_, counts = np.unique(biased_decisions,return_counts=True)
print(counts[0]/runs)
axes.set_axisbelow(True)
axes.grid(ls="dashed")
bar1, bar2 = axes.bar([0,1], counts,width=0.5)
bar1.set_facecolor(colors[3])
bar1.set_label(r"$\mathcal{D}$")
bar2.set_facecolor(colors[-2])
bar2.set_label(r"$\mathcal{D}'$")
axes.set_xticks([0,1])
axes.set_xlim([-0.5,1.5])
axes.set_xticklabels(["Correct", "Wrong"])
axes.set_ylabel("Count")
#axes.set_yticks([0,100,200,300,400,500])
axes.set_xlabel("Guesses")
axes2.set_axisbelow(True)
axes2.grid(ls="dashed")
_, counts = np.unique(majority_decisions,return_counts=True)
print(counts[0]/runs)
bar1, bar2 = axes2.bar([0,1], counts,width=0.5)
bar1.set_facecolor(colors[3])
bar1.set_label(r"$\mathcal{D}$")
bar2.set_facecolor(colors[-2])
bar2.set_label(r"$\mathcal{D}'$")
axes2.set_xticks([0,1])
axes2.set_xlim([-0.5,1.5])
axes2.set_xticklabels(["Correct", "Wrong"])
axes2.set_ylabel("Count")
#axes.set_yticks([0,100,200,300,400,500])
axes2.set_xlabel("Guesses")
axes3.set_axisbelow(True)
axes3.grid(ls="dashed")
_, counts = np.unique(mean_decisions,return_counts=True)
print(counts[0]/runs)
bar1, bar2 = axes3.bar([0,1], counts,width=0.5)
bar1.set_facecolor(colors[3])
bar1.set_label(r"$\mathcal{D}$")
bar2.set_facecolor(colors[-2])
bar2.set_label(r"$\mathcal{D}'$")
axes3.set_xticks([0,1])
axes3.set_xlim([-0.5,1.5])
axes3.set_xticklabels(["Correct", "Wrong"])
axes3.set_ylabel("Count")
#axes.set_yticks([0,100,200,300,400,500])
axes3.set_xlabel("Guesses")
plt.legend(loc='upper center', bbox_to_anchor=(-1, -0.25), ncol=3, facecolor="white")
#plt.savefig("synthethic_attack_guesses.pdf", bbox_inches='tight')
plt.subplots_adjust(wspace=0.5)
plt.show()
def success_rate(epsilon, delta):
    # Analytical single-query success probability of the Bayesian attacker
    # against the Gaussian mechanism calibrated to (epsilon, delta)
    return 1- norm.cdf(-epsilon/(2*np.sqrt(2*np.log(1.25/delta))))
from scipy.special import binom
def majority_voting_success_rate(epsilon_i, delta_i, iterations):
    # P(majority correct) = sum_{k=ceil(n/2)}^{n} C(n,k) * p^k * (1-p)^(n-k)
    p = 0
    prob = success_rate(epsilon_i, delta_i)
    n = iterations
    k = int(np.ceil(n/2))
    while k <= n:
        p += binom(n, k)*(prob**k)*((1-prob)**(n-k))
        k += 1
    return p
def average_guess_success_rate(epsilon_i, delta_i, iterations):
    # Averaging n noisy answers shrinks the noise std by a factor of sqrt(n)
    return 1- norm.cdf(-(epsilon_i*np.sqrt(iterations))/(2*np.sqrt(2*np.log(1.25/delta_i))))
#epsilon = 5
#delta = 0.001
#iterations = 20
#epsilon_i = epsilon/iterations
#delta_i = delta/ iterations
print("One Iteration under composition: ", success_rate(epsilon_i, delta_i))
print("One iteration no composition: ", success_rate(epsilon, delta))
print("Composition and majority voting: ", majority_voting_success_rate(epsilon_i, delta_i, iterations))
print("Composition and average guessing: ", average_guess_success_rate(epsilon_i, delta_i, iterations))
###Output
One Iteration under composition: 0.9072458876540372
###Markdown
Another Synthetic Experiment: Belief Distribution
###Code
D_1 = [5,10,2]
D_2 = [5,1,2]
query = np.sum
sensitivity = 9
iterations = 1
epsilon = 10
delta = 0.001
runs=1000
epsilon_i = epsilon/iterations
delta_i = delta/iterations
max_beliefs = []
for run in range(runs):
_,_,_,beliefs_1, beliefs_2, _ , _, noisy_results = run_attack(D_1,D_2, query,sensitivity,epsilon_i, delta_i, iterations)
max_beliefs.append(np.max([beliefs_1[-1], beliefs_2[-1]]))
max_beliefs = np.array(max_beliefs)
plt.style.use("fast")
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
matplotlib.rcParams.update({'font.size': 14})
fig, axes = plt.subplots(1,1,figsize=(4,3))
colors = plt.cm.RdBu(np.linspace(0.0,0.9,20))
cm = plt.cm.get_cmap('RdYlBu_r')
# Plot histogram.
n, bins, patches = axes.hist(max_beliefs,bins=25,edgecolor='black')
bin_centers = 0.5 * (bins[:-1] + bins[1:])
# scale values to interval [0,1]
col = bin_centers - np.min(bin_centers)
col /= np.max(col)
for c, p in zip(col, patches):
plt.setp(p, 'facecolor', cm(c))
axes.set_xlabel(r"$\beta(\cdot)$ over {} runs".format(runs))
axes.set_ylabel(r"Output of $\mathcal{M}_{Gau}$")
#plt.legend( loc='upper center', bbox_to_anchor=(0.5, -0.25), ncol=3, facecolor="white")
axes.set_axisbelow(True)
axes.grid(ls="dashed")
plt.savefig("synthethic_attack_belief_distributions_eps10.pdf", bbox_inches='tight')
plt.show()
###Output
_____no_output_____
###Markdown
Multi-dimensional Gaussian Belief Simulation
###Code
mu1 = np.zeros(2)
mu2 = np.ones(2)
sensitivity = np.linalg.norm(mu1-mu2)
epsilon = 1
delta = 0.001
std = sensitivity*np.sqrt(2*np.log(1.25/delta))/epsilon
angles = np.linspace(0,1.9,20)*np.pi
radius = np.abs(epsilon*std**2/sensitivity - sensitivity/2)
xs = np.cos(angles)*radius
ys = np.sin(angles)*radius
results = np.transpose([xs, ys])
plt.scatter(results[:,0], results[:,1])
plt.show()
beliefs = []
for result in results:
pdf_1 = mvn.pdf(result, mean=mu1, cov=std**2)
pdf_2 = mvn.pdf(result, mean=mu2, cov=std**2)
pdf_sum = pdf_1 + pdf_2
beliefs.append(pdf_1 / pdf_sum)
plt.plot(angles/np.pi, beliefs)
plt.plot(angles/np.pi, [1/(1+np.e**(-epsilon))]*len(angles))  # differential-privacy bound on the posterior belief
plt.show()
# Logs, fixed_D, batch_10000
###Output
_____no_output_____ |
TheGerrymanderProject_v3.ipynb | ###Markdown
**Gerrymandering Project**

**Project Statement**

The following project represents a collaborative effort to explore dynamic programming through the problem of gerrymandering. Namely, our team sought to determine whether gerrymandering is possible. Each district is comprised of a number of precincts, and in every precinct some fraction of voters votes for party A (e.g. Republican), party B (e.g. Democrat), or other (e.g. independent).

---

*UVA CS 5012, August 2021*

Alexander DeLuca, [email protected]
Grant Farrell, [email protected]
Adel Kebaish, [email protected]
Sean Redfield, [email protected]
Jess Sachs, [email protected]
Sam Taucher, [email protected]

Introduction

***Gerrymandering***

The word gerrymandering is a portmanteau of "Gerry" and "mander". The former refers to Elbridge Gerry, later Vice President of the United States; the latter is based on a drawing by Elkanah Tisdale, who redrew the map of the district akin to a monster, as an exaggeration of the seemingly unnatural shapes of the voting district. A dinner guest remarked on its similarity to a salamander, to which Richard Alsop replied, "[No, a] Gerry Mander".

The name was coined during Elbridge Gerry's tenure as Governor of Massachusetts. Specifically, in 1812, Massachusetts devised and adopted electoral district boundaries under the auspices of new legislation. The Democratic-Republican party held the State Senate majority and devised the district lines to improve its control over legislative districts. Consequently, in order to secure that majority, the party drew the district lines such that a majority of its voters were present in the resulting districts.

***Dynamic Programming***

In order to determine whether gerrymandering is possible for a particular district, our team constructed a dynamic programming algorithm. Greedy algorithms are a closely related family of methods, in which a sequence of choices is made and each choice selects the locally optimal option at that step (the optimal or "best" choice representing the greedy component). Some publications have referred to four greedy principles:

1. Best-Global: a partial solution is considered which is best with respect to a local optimality criterion; optimality is only guaranteed for completed solutions.
2. Better-Global: similar to Best-Global, but stronger, in the sense that, given two partial solutions, the first that is better than the second is selected.
3. Best-Local: during the running of the algorithm, the partial solution that is the best thus far, with respect to the local optimality criterion, remains the best after applying the greedy step.
4. Better-Local: the strongest of the four principles, in which, given the result of a construction step on the second solution, there is a way of performing a construction step on the first that is at least as good with respect to the local optimality criterion.

References:

https://www.smithsonianmag.com/history/where-did-term-gerrymander-come-180964118/
https://core.ac.uk/download/pdf/82042073.pdf

The Problem

Gerrymandering remains a highly controversial issue, having found its way to the Supreme Court on multiple occasions, most recently in 2019 (in which a 5-4 decision ruled that partisan redistricting is not reviewable by federal courts). Yet, even before the political ramifications of gerrymandering can be entertained, one should first consider the mathematical basis underlying the redrawing of districts. Namely, is gerrymandering in a set of districts even a mathematical possibility?
To accomplish this, our team set out to solve the following question: given *n* precincts, each containing *m* registered voters, which must be split between two districts, can dynamic programming be utilized to establish the possibility (boolean 'Yes' or 'No') of gerrymandering (i.e. the redrawing of districts to favor a particular political party)?

The Data

Voter registration data by precinct and district is maintained by the states. There is no central federal database that tracks the state breakdowns and voter registrations. This makes sense because voting is executed by each state, and states have leeway to maintain voting data as they see necessary; but it makes our goal a little more difficult to reach.

Not all states maintain voter registration data by party. In fact, not all states require a party declaration from voters during registration.

For the scope of our project, we have chosen to target states where data is available with the following breakdown:

* Precinct
* District
* Registered Republicans
* Registered Democrats

Storage

For data storage and retrieval we are using SQLite. Here, we establish a connection to the database and define a cursor to be used throughout the project.
###Code
import sqlite3 # https://docs.python.org/3/library/sqlite3.html
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import scipy.stats as stats
import math
import numpy as np
## Establish a connection to our database
conn = sqlite3.connect('gerrymander.db')
## Create a cursor to execute commands through the connection
cursor = conn.cursor()
###Output
_____no_output_____
###Markdown
Redeploy the Database every time

To make it easier to rebuild and deploy to new environments, we have provided a "recreate" flag. When recreate is True, we drop existing tables and recreate them from scratch. We also prefer to recreate for an easier delivery of the .ipynb file; anyone can deploy the entire database on their preferred notebook platform.

Our approach for inserting data is efficient and fast, so rebuilding is clean, quick, and easy.
###Code
## When recreate is True, we drop all database tables and recreate them for an updated, clean deployment.
recreate = True
if recreate == True:
cursor.execute("DROP TABLE IF EXISTS precinct")
cursor.execute("DROP TABLE IF EXISTS party")
cursor.execute("DROP VIEW IF EXISTS for_algo")
conn.commit()
# Quick verification to make sure everything was dropped
cursor.execute("SELECT name FROM sqlite_master WHERE type='table'")
cursor.fetchall()
###Output
_____no_output_____
###Markdown
Talk to GitHub

We store the scripts for building the database, including the data and schema, in a GitHub repository. We are using Python's urllib3 library to communicate over HTTPS. In this step, as required by urllib3, we define a pool manager to communicate over HTTPS with our GitHub repo.
###Code
## Our SQL Scripts are in Github
## prepare to read from github
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
gitread = urllib3.PoolManager()
###Output
_____no_output_____
###Markdown
Parameters Used

Prior to data procurement, we needed to develop a team-wide understanding of the requisite algorithmic inputs. This paradigm allowed us to set a minimum data standard for all state data sets; the lack of a federal database necessitated a state-first approach. With Britannica defining gerrymandering as "the practice of drawing the boundaries of electoral districts in a way that gives one political party an unfair advantage over its rivals", we could safely conclude that we needed district-level data and their composite precincts. These district and precinct variables configure the boundary and, along with Democratic and Republican voter counts, are the primary inputs we need.

Democrat and Republican voter counts represent a topic worthy of further discussion. Though the American political system is largely defined by extreme bipartisanship, there are various third parties with significant registration numbers and voter turnouts. However, since these third parties have neither the political heft nor the voter base to indulge in gerrymandering, they can be safely removed from consideration in the algorithm. From a data standpoint, this means we only need Democrat and Republican counts with their cumulative total.

The real point of contention revolved around whether to use voter registration numbers or voter turnout numbers. The consensus was that, in heavily red or heavily blue precincts, having the outcome as a foregone conclusion could introduce a biased suppressant of voter turnout. This inclination is widely supported by research: "there is far greater variation in primary election turnout rates, depending on how many seriously contested races are on the ballot". Due to this difficult-to-account-for bias, we thought registration numbers were a more apt depiction of a precinct's voting potential. There was a downside to this, unfortunately: although there was a wealth of voter turnout data, voter registration data was far sparser and not consistently compiled state to state.

**Parameters**

Given:

* n = the number of precincts
* m = the number of registered voters per precinct
* $A_{1}, A_{2}, \ldots, A_{n}$, where $A_{j}$ is the number of voters in precinct $j$ registered for party A; in a two-party system, the remaining $m - A_{j}$ voters of precinct $j$ belong to party B

Output: a split of the precincts into 2 districts, $D_{1}$ and $D_{2}$, such that:

1. $|D_{1}| = |D_{2}|$, i.e. both districts contain the same number of precincts.
2. We want to determine whether A can hold a majority in both districts, i.e. more than a quarter of the total $m \cdot n$ voters in each: A($D_{1}$) > m·n/4 and A($D_{2}$) > m·n/4.

Assumption: the n precincts are split evenly between exactly two districts.

**Algorithm Schema**

$S_{j,k,x,y} = S_{j-1,\,k-1,\,x-A_{j},\,y} \;\lor\; S_{j-1,\,k,\,x,\,y-A_{j}}$

(precinct $j$ is assigned either to $D_{1}$, contributing $A_{j}$ to the count $x$, or to $D_{2}$, contributing $A_{j}$ to the count $y$)

```
for j = 1 to n
  for k = 1 to n/2
    for x = 1 to m*j
      for y = 1 to m*j
```

$S_{j,k,x,y}$ = there is a split of the first j precincts in which $|D_{1}| = k$, x people in $D_{1}$ vote A, and y people in $D_{2}$ vote A.

Gerrymandering Algorithm (code)
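To make the recurrence concrete before the implementation, here is a tiny hand-worked instance (the numbers are illustrative only). Suppose $n = 2$ precincts with $m = 4$ voters each and party-A counts $A_1 = 3$, $A_2 = 1$. Starting from the base case $S_{0,0,0,0} = 1$: assigning precinct 1 to $D_1$ makes $S_{1,1,3,0} = 1$, and then assigning precinct 2 to $D_2$ makes $S_{2,1,3,1} = 1$. This final state has $|D_1| = 1 = n/2$, and the majority test asks for $x > mn/4 = 2$ and $y > 2$; here $x = 3 > 2$ but $y = 1 \le 2$, so this particular split fails. Since A holds only 4 of the 8 votes in total, no reachable state can pass both tests, and the algorithm would return False for this instance.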
###Code
class NDSparseMatrix:
    # Sparse boolean DP table: only reachable states (value 1) are stored.
    def __init__(self):
        self.elements = {}
    def addValue(self, key, value):
        self.elements[key] = value
    def readValue(self, key):
        # States that were never reached default to 0
        return self.elements.get(key, 0)
SuperMatrix = NDSparseMatrix()
SuperMatrix.addValue((0,0,0,0), 1)  # base case: the empty split is reachable
def GerryManderingIdentifier(df):
    # Reset the shared DP table so that reachable states from a previous
    # dataset cannot leak into this call.
    SuperMatrix.elements = {(0,0,0,0): 1}
    Percent_Done_List = ["25", "50", "75"]
    i = 0
    Number_of_Precincts = len(df.index) - 1  # row 0 is a dummy row
    Total_Votes = df['Total_Votes'].sum().astype(int)
    Half_Precincts = math.ceil(Number_of_Precincts/2)
    Total_Matrix_Size = Number_of_Precincts * Number_of_Precincts * Total_Votes * Total_Votes
    count = 0
    Percent_Done = .25 * Total_Matrix_Size
    for j in range(1, Number_of_Precincts + 1):
        for k in range(1, Number_of_Precincts + 1):
            for x in range(0, Total_Votes + 1):
                for y in range(0, Total_Votes + 1):
                    count = count + 1
                    if count > Percent_Done and i < 3:
                        print(Percent_Done_List[i],"% Done")
                        Percent_Done = Percent_Done + (.25 * Total_Matrix_Size)
                        i = i + 1
                    # State (j,k,x,y) is reachable if precinct j joins D1
                    # (from state (j-1, k-1, x-A_j, y)) or joins D2
                    # (from state (j-1, k, x, y-A_j)), with A_j = REP_VOTES of precinct j.
                    if SuperMatrix.readValue((j - 1,k - 1, x - df['REP_VOTES'][j],y)) == 1 or SuperMatrix.readValue((j - 1,k,x,y - df['REP_VOTES'][j])) == 1:
                        SuperMatrix.addValue((j, k, x, y), 1)
                    # Success: every precinct placed, equal-sized districts, and
                    # party A strictly above a quarter of all votes in each district.
                    if j == (Number_of_Precincts) and k == (Half_Precincts) and x > Total_Votes/4 and y > Total_Votes/4 and SuperMatrix.readValue((j, k, x, y)) == 1:
                        print("final J", j)
                        print("final K", k)
                        print("final X", x)
                        print("final Y", y)
                        return True
    return False
###Output
_____no_output_____
###Markdown
**Algorithmic Analysis**

In terms of time complexity, due to the series of nested loops, the run-time is approximately O($n^4m^2$). As can be seen in the graphs below, the algorithm is bound primarily by the quartic term $n^4$: when plotting $n^4$ against $m^2$, the curve for the $n^4$ term dwarfs that of the $m^2$ term. In the second graph, the product $n^4m^2$ is compared against $n^4$ and $m^2$ respectively, and grows far faster than either term on its own. This is explained by the product of the four nested loops of the algorithm: the first loop (over precincts $j$) is O(n), the second (over district sizes $k$) is O(n), and the third and fourth (over the vote counts $x$ and $y$, each bounded by $m \cdot n$) are O(m·n) each.

1. O(n)
2. O(n)
3. O(m·n)
4. O(m·n)
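As a concrete check, take the toy example used later in this notebook: $n = 4$ precincts of $m = 100$ voters each, so the total vote count is $mn = 400$ and the table visits roughly $n \cdot n \cdot (mn) \cdot (mn) = 4 \cdot 4 \cdot 400 \cdot 400 = 2{,}560{,}000$ states, which matches the `Total_Matrix_Size` computed inside `GerryManderingIdentifier`.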
###Code
import seaborn as sns
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
n_terms = list(range(0, 101))
m_terms = list(range(0,101))
n_terms_to_the_fourth = np.array([x**4 for x in n_terms])
m_terms_to_the_second = np.array([y**2 for y in m_terms])
def time_complexity_conversion(n_arr, m_arr):
i = 0
m_n_list = []
while i < len(n_arr):
m_n_term = n_terms_to_the_fourth[i] * m_terms_to_the_second[i]
m_n_list.append(m_n_term)
i+=1
return m_n_list
m_n_terms = np.array(time_complexity_conversion(n_terms, m_terms))
df_2 = pd.DataFrame({'n': n_terms_to_the_fourth, 'm': m_terms_to_the_second})
df = pd.DataFrame({'n': n_terms_to_the_fourth, 'm': m_terms_to_the_second, 'mn': m_n_terms})
fig, ax =plt.subplots(1,2)
sns.lineplot(data = df, ax=ax[0])
sns.lineplot(data = df_2, ax=ax[1])
fig.show()
###Output
_____no_output_____
###Markdown
Example Dataset to Test GerryMandering
###Code
# Build a small test dataset. Row 0 is a dummy row because the algorithm
# indexes precincts starting from 1. (DataFrame.append was removed in
# pandas 2.0, so we use pd.concat instead.)
precinct_data = pd.concat([
    pd.DataFrame({"Precinct":"DUMMY ROW","District": 0,"REP_VOTES":0, "DEM_VOTES": 0, "Total_Votes": 0},index=[0]),
    pd.DataFrame({"Precinct":"1-99092","District": 1,"REP_VOTES":65, "DEM_VOTES": 35, "Total_Votes": 100},index=[0]),
    pd.DataFrame({"Precinct":"1-99093","District": 1,"REP_VOTES":60, "DEM_VOTES": 40, "Total_Votes": 100},index=[0]),
    pd.DataFrame({"Precinct":"1-99094","District": 2,"REP_VOTES":45, "DEM_VOTES": 55, "Total_Votes": 100},index=[0]),
    pd.DataFrame({"Precinct":"1-99095","District": 2,"REP_VOTES":47, "DEM_VOTES": 53, "Total_Votes": 100},index=[0]),
])
precinct_data.reset_index(inplace = True)
precinct_data.drop('index',axis=1,inplace=True)
LetsRun = GerryManderingIdentifier(precinct_data)
if LetsRun:
print("GerryMandering is possible")
else:
print("GerryMandering is not possible")
###Output
25 % Done
50 % Done
75 % Done
final J 4
final K 2
final X 110
final Y 107
GerryMandering is possible
###Markdown
Where Did We Get The Data

We ultimately opted for data across 5 states: Alaska, Arizona, Kentucky, North Carolina, and Rhode Island. A more pervasive data breadth would have theoretically unlocked more dynamic visualization tools, like a Heroku-hosted Python application, but the algorithm's speed performance precluded any such need. We explore our individual data sets below.

Alaska's data was procured from the state of Alaska's elections site, where the data is updated as of August 3. The format is quite amenable to data scraping and was fairly granular in the party breakdown, arranged in a wide format.

Arizona's data was neatly handed to us at the project kickoff and taken directly from Kaggle, with the caveat that it is slightly more dated data from Q1 2019. Interestingly, this data is presented in a long format and requires data reshaping/preprocessing. We also noticed that the party labels are (unsurprisingly) inconsistent state to state.

Kentucky's data set was downloaded in PDF from their state election site and is updated as of July 15. The data is presented in a wide format and the party labels largely mirror those of Arizona's data set. This dataset is supplemented with gender stratifications, which makes it an interesting candidate for further projects.

For North Carolina, the data is organized in long format.

Data for Rhode Island is in a long format and largely stripped of the granularity of the preceding datasets. Party identification labels are different and lumped into three categories: "Republican", "Democrat", and "Unaffiliated". The accuracy of this "unaffiliated" group comes into question and epitomizes the bipartisan political lens that drives most of the United States. As such, Rhode Island would likely represent the minimum state data collected and restrict any downstream analysis.

Build the tables

In this step we build the schema structure. The create statements are stored in scripts in GitHub, so this section shows executing the contents of the tables.sql script that we read from GitHub.

We have two tables in our schema:

* Precinct: Holds all data for precincts, districts, and number of voter registrations by party. There is a row for every party in each precinct, so precinct is not a unique key. Additionally, within states, precinct is not unique; it must be used with district.
* party: An id and party name, just to keep the party data consistent within our database. Party names and abbreviations change between states, but here we want them to be consistent. Party can be joined with precinct on precinct.party = party.id (see the sketch below).
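For reference, a query along that join convention would look like the following sketch. The `party` column names `id` and `name` are assumptions based on the description above (the schema script lives on GitHub); the analysis later in this notebook actually queries the `for_algo` view instead.

```sql
SELECT p.state, p.precinct, p.district, pa.name AS party_name, p.voters
FROM precinct p
JOIN party pa ON p.party = pa.id;
```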
###Code
## Build the table structure
## We have two tables: party and precinct
## The github url for the tables script
create_tables = 'https://raw.githubusercontent.com/Sartire/gerrymander/main/State_Data/tables.sql'
## GET contents of the tables.sql script from github
dat = gitread.request("GET", create_tables)
## Execute the table creation commands
cursor.executescript(dat.data.decode("utf-8"))
## Preprocess for algorithm to use
view_def = '''
CREATE VIEW for_algo AS
SELECT * FROM
((SELECT STATE, PRECINCT, DISTRICT, VOTERS as REP_VOTES
FROM precinct WHERE PARTY = 'REP') NATURAL JOIN (
SELECT STATE, PRECINCT, DISTRICT, SUM(VOTERS) as Total_Votes
FROM precinct
WHERE PARTY = 'REP' OR PARTY = 'DEM'
GROUP BY STATE, PRECINCT, DISTRICT))
'''
cursor.execute(view_def)
## Commit Schema Changes
conn.commit()
## Let's see the names of the tables we built
ourtables = cursor.execute("SELECT name FROM sqlite_master WHERE type='table'")
if ourtables:
print('\nTables in the Gerrymander Database\n')
for atable in ourtables:
print("\t"+atable[0])
sql = '''
SELECT * from for_algo
'''
Arizona = pd.read_sql_query(sql, conn)
print(Arizona)
###Output
Empty DataFrame
Columns: [STATE, PRECINCT, DISTRICT, REP_VOTES, Total_Votes]
Index: []
###Markdown
Arizona

Here, we load the data from Arizona into our database. The data is from Kaggle, and was suggested as our "stake in the sand" data. Since Arizona's data had an entry for every party for each precinct, all of our data will follow the same format, no matter its original layout.

[Arizona Data on Kaggle](https://www.kaggle.com/arizonaSecofState/arizona-voter-registration-by-precinct)
###Code
## Arizona!
cursor.execute("DELETE FROM precinct WHERE STATE = 'AZ'")
conn.commit()
az_url = 'https://raw.githubusercontent.com/Sartire/gerrymander/main/State_Data/az/az.insert.sql'
## GET contents of the script from a github url
dat = gitread.request("GET", az_url)
## INSERT Data using statements from the github insert script
cursor.executescript(dat.data.decode("utf-8"))
conn.commit()
## Quick verification that data was loaded for this state
cursor.execute("SELECT count(*) from precinct")
verify = cursor.fetchone()[0]
cursor.execute("SELECT sum(voters), party from precinct where state = 'AZ' group by party order by 1 DESC")
print(verify, cursor.fetchall())
###Output
7270 [(1308384, 'REP'), (1251984, 'OTH'), (1169259, 'DEM'), (32096, 'LBT'), (6535, 'GRN')]
###Markdown
Arizona GerryMandering Example
###Code
#ARIZONA
sql = '''
SELECT * from for_algo where state = 'AZ'
'''
Arizona = pd.read_sql_query(sql, conn)
# Some precincts have only 1 or 2 voters; keep precincts with more than 100 total DEM/REP voters
Arizona = Arizona[(Arizona["Total_Votes"] > 100)]
Arizona.sort_values(by=['Total_Votes'], inplace=True)
Arizona = Arizona.head(6)
# The algorithm expects a dummy row at index 0
empty_df = pd.DataFrame([[np.nan] * len(Arizona.columns)], columns=Arizona.columns)
Arizona = pd.concat([empty_df, Arizona], ignore_index=True)
Arizona = Arizona.reset_index(drop=True)
if GerryManderingIdentifier(Arizona):
print("GerryMandering Possible In Arizona District")
else:
print("GerryMandering Not Possible In Arizona District")
###Output
25 % Done
50 % Done
75 % Done
final J 6
final K 3
final X 170
final Y 195
GerryMandering Possible In Arizona District
###Markdown
Kentucky

The state of Kentucky updates and publishes voter registration on a regular basis. Here, we are using data from July 2021.

[Kentucky Data](https://elect.ky.gov/Resources/Documents/voterstatsprecinct-20210715-090237.pdf)
###Code
## Kentucky!
cursor.execute("DELETE FROM precinct WHERE STATE = 'KY'")
conn.commit()
ky_url = 'https://raw.githubusercontent.com/Sartire/gerrymander/main/State_Data/ky/ky.insert.sql'
## GET contents of the script from a github url
dat = gitread.request("GET", ky_url)
## INSERT Data using statements from the github insert script
cursor.executescript(dat.data.decode("utf-8"))
conn.commit()
## Quick verification that data was loaded for this state
cursor.execute("SELECT count(*) from precinct")
verify = cursor.fetchone()[0]
cursor.execute("SELECT sum(voters), party from precinct where state = 'KY' group by party order by 1 DESC")
print(verify, cursor.fetchall())
#Kentucky
sql = '''
SELECT * from for_algo where state = 'KY'
'''
Kentucky = pd.read_sql_query(sql, conn)
# Some precincts have only 1 or 2 voters; keep precincts with more than 100 total DEM/REP voters
Kentucky = Kentucky[(Kentucky["Total_Votes"] > 100)]
Kentucky.sort_values(by=['Total_Votes'], inplace=True)
Kentucky = Kentucky.head(6)
# The algorithm expects a dummy row at index 0
empty_df = pd.DataFrame([[np.nan] * len(Kentucky.columns)], columns=Kentucky.columns)
Kentucky = pd.concat([empty_df, Kentucky], ignore_index=True)
Kentucky = Kentucky.reset_index(drop=True)
if GerryManderingIdentifier(Kentucky):
print("GerryMandering Possible In Kentucky District")
else:
print("GerryMandering Not Possible In Kentucky District")
###Output
STATE_x PRECINCT DISTRICT_x ... DISTRICT_y PARTY_y VOTERS_y
0 KY A102 1-16-051-3 ... 1-16-051-3 DEM 70
1 KY A102 1-16-051-3 ... 1-16-051-3 REP 369
2 KY A102 1-16-051-3 ... 1-09-022-1 DEM 356
3 KY A102 1-16-051-3 ... 1-09-022-1 REP 872
4 KY A102 1-16-051-3 ... 6-07-053-5 DEM 711
... ... ... ... ... ... ... ...
555923 KY H103 6-07-056-5 ... 1-14-024-3 REP 144
555924 KY H103 6-07-056-5 ... 4-26-033-6 DEM 551
555925 KY H103 6-07-056-5 ... 4-26-033-6 REP 831
555926 KY H103 6-07-056-5 ... 6-07-056-5 DEM 508
555927 KY H103 6-07-056-5 ... 6-07-056-5 REP 613
[555928 rows x 9 columns]
25 % Done
50 % Done
75 % Done
final J 6
final K 3
final X 154
final Y 186
GerryMandering Possible In Kentucky District
###Markdown
Rhode Island

Rhode Island maintains a searchable database of voter information. This data is from August 2021.

[Rhode Island Voter Information](https://app.powerbigov.us/view?r=eyJrIjoiZmNjMDYyYzUtOTRjMS00OWUzLThlNzQtNTBhNjU0ZDdkMmQ5IiwidCI6IjJkMGYxZGI2LWRkNTktNDc3Mi04NjVmLTE5MTQxNzVkMDdjMiJ9)
###Code
## Rhode Island
## https://app.powerbigov.us/view?r=eyJrIjoiZmNjMDYyYzUtOTRjMS00OWUzLThlNzQtNTBhNjU0ZDdkMmQ5IiwidCI6IjJkMGYxZGI2LWRkNTktNDc3Mi04NjVmLTE5MTQxNzVkMDdjMiJ9
cursor.execute("DELETE FROM precinct WHERE STATE = 'RI'")
conn.commit()
ri_url = 'https://raw.githubusercontent.com/Sartire/gerrymander/main/State_Data/ri/riinsert.sql'
## GET contents of the script from a github url
dat = gitread.request("GET", ri_url)
## INSERT Data using statements from the github insert script
cursor.executescript(dat.data.decode("utf-8"))
conn.commit()
## Quick verification that data was loaded for this state
cursor.execute("SELECT count(*) from precinct")
verify = cursor.fetchone()[0]
cursor.execute("SELECT sum(voters), party from precinct where state = 'RI' group by party order by 1 DESC")
print(verify, cursor.fetchall())
cursor.execute("SELECT * from precinct where state = 'RI' and precinct='101'" )
cursor.fetchall()
#RhodeIsland
sql = '''
SELECT * from for_algo where state = 'RI'
'''
RhodeIsland = pd.read_sql_query(sql, conn)
# Some precincts have only 1 or 2 voters; keep precincts with more than 100 total DEM/REP voters
RhodeIsland = RhodeIsland[(RhodeIsland["Total_Votes"] > 100)]
RhodeIsland.sort_values(by=['Total_Votes'], inplace=True)
RhodeIsland = RhodeIsland.head(6)
# The algorithm expects a dummy row at index 0
empty_df = pd.DataFrame([[np.nan] * len(RhodeIsland.columns)], columns=RhodeIsland.columns)
RhodeIsland = pd.concat([empty_df, RhodeIsland], ignore_index=True)
RhodeIsland = RhodeIsland.reset_index(drop=True)
if GerryManderingIdentifier(RhodeIsland):
print("GerryMandering Possible In Rhode Island District")
else:
print("GerryMandering Not Possible In Rhode Island District")
###Output
25 % Done
50 % Done
75 % Done
final J 6
final K 3
final X 202
final Y 383
GerryMandering Possible In Rhode Island District
###Markdown
Alaska

Alaska publishes voter party affiliation by precinct and district on their elections website. This data is from August 2021.

[Alaska Voter Statistics](https://www.elections.alaska.gov/statistics/2021/AUG/VOTERS%20BY%20PARTY%20AND%20PRECINCT.htm)
###Code
## Alaska
## https://www.elections.alaska.gov/statistics/2021/AUG/VOTERS%20BY%20PARTY%20AND%20PRECINCT.htm
cursor.execute("DELETE FROM precinct WHERE STATE = 'AK'")
conn.commit()
ak_url = 'https://raw.githubusercontent.com/Sartire/gerrymander/main/State_Data/ak/ak.insert.sql'
## GET contents of the script from a github url
dat = gitread.request("GET", ak_url)
## INSERT Data using statements from the github insert script
cursor.executescript(dat.data.decode("utf-8"))
conn.commit()
## Quick verification that data was loaded for this state
cursor.execute("SELECT count(*) from precinct")
verify = cursor.fetchone()[0]
cursor.execute("SELECT sum(voters), party from precinct where state = 'AK' group by party order by 1 DESC")
print(verify, cursor.fetchall())
cursor.execute("SELECT * from precinct where state = 'AK' and precinct='36-690'" )
cursor.fetchall()
#Alaska
sql = '''
SELECT * from for_algo where state = 'AK'
'''
Alaska = pd.read_sql_query(sql, conn)
# Some precincts have only 1 or 2 voters; keep precincts with more than 100 total DEM/REP voters
Alaska = Alaska[(Alaska["Total_Votes"] > 100)]
Alaska.sort_values(by=['Total_Votes'], inplace=True)
Alaska = Alaska.head(6)
# The algorithm expects a dummy row at index 0
empty_df = pd.DataFrame([[np.nan] * len(Alaska.columns)], columns=Alaska.columns)
Alaska = pd.concat([empty_df, Alaska], ignore_index=True)
Alaska = Alaska.reset_index(drop=True)
if GerryManderingIdentifier(Alaska):
print("GerryMandering Possible In Alaska District")
else:
print("GerryMandering Not Possible In Alaska District")
###Output
25 % Done
50 % Done
75 % Done
final J 6
final K 3
final X 156
final Y 161
GerryMandering Possible In Alaska District
###Markdown
North Carolina

The North Carolina voter data was found through a Kaggle database and dates from the end of February 2020. While more recent data can be acquired through the NC Voter Board website as shown in the description of the Kaggle repository, it appears that data from 2021 does not include the precinct. For this reason, we stuck with the 2020 data from Kaggle.

[North Carolina Voter Information](https://www.kaggle.com/jerimee/north-carolina-voter-file)
###Code
## North Carolina
cursor.execute("DELETE FROM precinct WHERE STATE = 'NC'")
conn.commit()
nc_url = 'https://raw.githubusercontent.com/Sartire/gerrymander/main/State_Data/nc/ncinsert.sql'
## GET contents of the script from a github url
dat = gitread.request("GET", nc_url)
## INSERT Data using statements from the github insert script
cursor.executescript(dat.data.decode("utf-8"))
conn.commit()
## Quick verification that data was loaded for this state
cursor.execute("SELECT count(*) from precinct")
verify = cursor.fetchone()[0]
cursor.execute("SELECT sum(voters), precinct from precinct where state = 'NC' group by precinct order by 1 DESC")
print(cursor.fetchall())
#NorthCarolina
sql = '''
SELECT * from for_algo where state = 'NC'
'''
NorthCarolina = pd.read_sql_query(sql, conn)
# Some precincts have only 1 or 2 voters; keep precincts with more than 100 total DEM/REP voters
NorthCarolina = NorthCarolina[(NorthCarolina["Total_Votes"] > 100)]
NorthCarolina.sort_values(by=['Total_Votes'], inplace=True)
NorthCarolina = NorthCarolina.head(6)
# The algorithm expects a dummy row at index 0
empty_df = pd.DataFrame([[np.nan] * len(NorthCarolina.columns)], columns=NorthCarolina.columns)
NorthCarolina = pd.concat([empty_df, NorthCarolina], ignore_index=True)
NorthCarolina = NorthCarolina.reset_index(drop=True)
if GerryManderingIdentifier(NorthCarolina):
print("GerryMandering Possible In North Carolina District")
else:
print("GerryMandering Not Possible In North Carolina District")
## In real life we want to close the cursor
## But during development it is easier to manually close when the current session is complete.
## cursor.close()
###Output
_____no_output_____
###Markdown
Data Visualization

A few rudimentary plots are produced below for your convenience.
###Code
import plotly.express as px
select_con = '''select STATE, PARTY,SUM(VOTERS) AS "Registered Voters" from precinct where PARTY LIKE 'DEM%' or PARTY LIKE 'REP%' group by 1, 2'''
select_con = pd.read_sql(select_con, conn)
select_con
px.bar(select_con, x="STATE", y="Registered Voters", color='PARTY')
select_con = '''select STATE, PARTY,DISTRICT,SUM(VOTERS) AS "Registered Voters" from precinct where (PARTY LIKE 'DEM%' or PARTY LIKE 'REP%') and STATE = 'AZ' group by 1, 2,3'''
select_con = pd.read_sql(select_con, conn)
select_con
fig1 = px.pie(select_con[select_con["PARTY"] == "REP"], values='Registered Voters', names='DISTRICT', title='Arizona Registered Republican Voters By District')
fig2 = px.pie(select_con[select_con["PARTY"] != "REP"], values='Registered Voters', names='DISTRICT', title='Arizona Registered Democrat Voters By District')
fig1
fig2
###Output
_____no_output_____ |
00_Intro_Coursebook.ipynb | ###Markdown
🎉Welcome to the unpackAI AI Course for Business Professionals!

**Congratulations** on being part of the unpackAI Bootcamp. We are super excited to learn and progress with you throughout the upcoming weeks. This course will put you ahead of 99% of the rest of the world in AI. In the next 5 weeks we will dive into the main areas of ML applications (Computer Vision, Tabular Data, Recommender Systems, Natural Language Processing). In each area you will build your own model and understand what matters when it comes to building your own AI project.

🏰AI Program Structure

Below you can find out more about the entire Bootcamp structure divided into weeks. For every week we have a clear learning objective, paired with a class coursebook and workbook. In total you will have to invest around 10 hours per week.

|Week | Content | Learning Objectives|
|:--- |:--- | :--- |
|**0** | **Warm Up & Intro** | **Get to know more about your classmates, the mentors, and learn about the fundamental concepts of Machine Learning, how it works, its limitations, and potential.**|
|1 |Computer Vision | Dive into Computer Vision, and learn about how machines are able to derive insights and make predictions from visual data. Build your own computer vision application, by gathering your own images and training your own model.|
|2 |Predictive Analytics | Comprehend how AutoML can be applied to spreadsheet data such as sales, marketing, or customer data, learn how to deduce actionable insights for the future, and build your own classification or regression application.|
|3 |Recommender Systems | Learn more about Recommender Systems, and understand how TikTok, Youtube, and Netflix are able to recommend your next favorite piece of content. Choose a dataset to build your own model to predict and recommend.|
|4 | Natural Language Processing (NLP) | Apply AI & Machine Learning to text, discover Language Models, and go through the process of how an AI model is able to generate, summarize and classify text. Build your own NLP application to automatically generate movie reviews, or analyze sentiment.|

📕 Learning Objectives of This Coursebook

* Learn how to interact with and use Google Colab.
* Dive into a brief introduction of neural networks and machine learning.
* Build your first Image Classifier.

Introduction to Google Colab

Colaboratory, or "Colab" for short, allows you to write and execute Python in your browser, with

- Zero configuration required
- Free access to GPUs
- Easy sharing

Whether you're a **student**, a **data scientist** or an **AI researcher**, Colab can make your work easier. Watch [Introduction to Colab](https://www.youtube.com/watch?v=inN8seMm7UI) to learn more, or just get started below!

Getting started

The document you are reading is not a static web page, but an interactive environment called a **Colab notebook** that lets you write and execute code.

For example, here is a **code cell** with a short Python script that computes a value, stores it in a variable, and prints the result:
###Code
1+1
###Output
_____no_output_____
###Markdown
To execute the code in the above cell, select it with a click and then either:

* press the play button to the left of the code (the little triangle ▶️)
* use the keyboard shortcut Ctrl + Enter (for Windows) or Command + Enter (for Mac)
* use the keyboard shortcut Shift + Enter to execute and move to the next cell

To edit the code, just click the cell and start editing.

Variables that you define in one cell can later be used in other cells:
###Code
seconds_in_a_day = 86400
seconds_in_a_week = 7 * seconds_in_a_day
seconds_in_a_week
###Output
_____no_output_____
###Markdown
Colab notebooks allow you to combine **executable code** and **rich text** in a single document, along with **images**, **HTML**, **LaTeX** and more. When you create your own Colab notebooks, they are stored in your Google Drive account. You can easily share your Colab notebooks with co-workers or friends, allowing them to comment on your notebooks or even edit them. To learn more, see [Overview of Colab](/notebooks/basic_features_overview.ipynb). To create a new Colab notebook you can use the File menu above, or use the following link: [create a new Colab notebook](http://colab.research.google.com#create=true).

Colab notebooks are Jupyter notebooks that are hosted by Colab on Google Cloud. To learn more about the Jupyter project, see [jupyter.org](https://www.jupyter.org).

Some useful tricks that are absolutely important for a better Google Colab experience

**1. Save time with keyboard shortcuts**

You can access all the shortcuts by selecting ***"Tools" → "Keyboard Shortcuts"***.

**2. Activate your GPU**

The default hardware of Google Colab is CPU. In order to train computationally heavy Deep Learning models, however, you need to utilize GPU hardware, which is **free** for up to 12 hours of non-stop model training. Click on ***"Runtime" → "Change runtime type" → "Hardware accelerator"*** and select the desired hardware.

*In the content below, we will dive into why a GPU is important.*

You can easily check if the GPU is enabled by executing the following code (if it returns '' your GPU is not enabled):
###Code
import tensorflow as tf
tf.test.gpu_device_name()
###Output
_____no_output_____
###Markdown
**3. Open the table of contents**

Click on the **Table of contents** indicated by the three-lines symbol on the left to see the entire content.
###Code
###Output
_____no_output_____
###Markdown
**4. Light or Dark**

If you prefer, you can change Google Colab to a dark theme. Go to ***Tools → Settings → Site*** and under **Theme**, pick **dark**.

AI & Machine Learning - The Theory everyone needs to know.

In this coursebook, we will cover the theoretical foundation that you will need to practically dive into AI & Machine Learning. We will be utilizing the materials of [fast.ai](https://course.fast.ai/), [HuggingFace](https://huggingface.co/), other leading libraries and sources, and our own content that we adapted to further make AI & Machine Learning education more accessible to everyone.

AI is for Everyone

It is a general assumption that it is very difficult to make use of AI. However, that is not true. Even you will be able to make use of AI in a short amount of time. In the table below, you can find what is definitely not needed when intending to build an AI application.

Myth (don't need) | Truth
--- | ---
Lots of math | Just high school math is sufficient
Lots of data | We've seen record-breaking results with < 50 items of data
Lots of expensive computers | You can get what you need for state of the art work for free

What is Artificial Intelligence (AI)?

Artificial intelligence is the simulation of human intelligence processes by machines. At its simplest, artificial intelligence is a field which combines computer science and robust datasets to enable problem-solving. Humans utilize AI to ease, improve, or entirely outsource decision-making through the computational power that machines can offer.

You can see that AI is a very general field, and both Machine Learning (ML) and Deep Learning (DL) are sub-fields of artificial intelligence. Both have experienced recent breakthroughs and have become the most promising fields in AI. Thus, we will be focusing on those two fields.

What is Machine Learning (ML)?

Machine learning is a branch of artificial intelligence (AI) and computer science which focuses on the use of data and algorithms to imitate the way that humans learn, gradually improving its accuracy. The process of learning begins with observations or data, such as examples, direct experience, or instruction, in order to look for patterns in data and make better decisions in the future based on the examples that we provide. The primary aim is to allow the computers to learn automatically without human intervention or assistance and adjust actions accordingly.

This allows us to move away from rule-based systems, where every scenario has to be pre-defined and linked to an action. An independent system that becomes "smarter" the more data it has access to, and that can make predictions or decisions without being explicitly programmed to do so, is extremely powerful in our digital age.

Within Machine Learning there are two basic approaches for a machine to "learn" from data. These two approaches are called **supervised learning** and **unsupervised learning**.

* **Supervised Learning**: Supervised learning is a machine learning approach that's defined by its use of labeled datasets. That means that for every data input we also provide the result/output, so the program can compare its prediction with the actual result to adjust itself. Using labeled inputs and outputs, the model can measure its accuracy and learn over time.

> *Example: A program that receives an image of a piece of clothing as an input, and is expected to predict what type of clothing (i.e. t-shirt, shoes, etc.) it is.
It was trained on a labeled dataset of clothing images that tell the model which photos were t-shirts, shoes, jackets etc.** **Unsupervised Learning**: Unsupervised learning uses machine learning algorithms to analyze and cluster *unlabeled* data sets. These algorithms discover hidden patterns in data without the need for human intervention (hence, they are “unsupervised”).> *Example: A program that receives an unlabeled dataset of clothing images and is expected to find patterns within the images. Being unsupervised, it can find patterns in various images that ultimately represent a shoe, jacket, etc.**The difference between the will become more clear over the next weeks, when we will work with supervised and unsupervised machine learning applications.* What is Deep Learning (DL)? Deep learning is a subset of machine learning that utilizes computing systems called **neural networks** to learn and find patterns from data to perform tasks at high accuracy. What are Neural Networks? A neural network is a series of algorithms that process data and are able to flexibly adjust themselves to achieve the highest accuracy. Their name and structure are inspired by the human brain, mimicking the way that biological neurons signal to one another.In a neural network a "neuron" is a node, a computational unit that can process data. Each node has an associated weight, a parameter can flexibly change based on the incoming data which will also impact the entire network as a result. The entire network optimises itself to achieve better results.A neural consists of at least 3 layers:1. The **input layer** brings the initial data into the system for further processing by subsequent layers.2. The **hidden layer** is located between the input and output layer, and applies weights to the inputs and directs them to the output.3. The **output layer** is responsible for producing the final result. The output layer takes in the inputs which are passed in from the layer before it, and computes a decision out of the series of inputs.Data is passed through the neural network multiple times. What is a GPU? A **Graphics Processing Unit (GPU)**, Also known as a _graphics card_ is a processing unit that is being used when running your neural network and training your models.It is a special kind of processor in your computer that can handle thousands of single tasks at the same time, especially designed for displaying 3D environments on a computer for playing games. These same basic tasks are very similar to what neural networks do, such that GPUs can run neural networks hundreds of times faster than regular CPUs. All modern computers contain a GPU, but few contain the right kind of GPU necessary for deep learning.In this Bootcamp, we will utilize the free GPU resources provided by Colab. Areas of Application Within this Bootcamp we will dive into the 4 main applications of Machine & Deep Learning. In each field, there are many possibilities how to apply the technology to solve problems.|Field |Definition | Tasks ||:---| :--- | :--- |Computer vision| Enable computers and systems to derive meaningful information from digital images, videos and other visual inputs. |Satellite and drone imagery interpretation (e.g., for disaster resilience); face recognition; image captioning; reading traffic signs; locating pedestrians and vehicles in autonomous vehiclesTabular Data|Tabular data is data that is structured into rows, each of which contains information about some thing. 
What is a GPU?

A **Graphics Processing Unit (GPU)**, also known as a _graphics card_, is the processor used when running your neural network and training your models. It is a special kind of processor in your computer that can handle thousands of small tasks at the same time, originally designed for displaying 3D environments for playing games. These same basic tasks are very similar to what neural networks do, such that GPUs can run neural networks hundreds of times faster than regular CPUs. All modern computers contain a GPU, but few contain the right kind of GPU necessary for deep learning. In this Bootcamp, we will utilize the free GPU resources provided by Colab.

Areas of Application

Within this Bootcamp we will dive into the 4 main applications of Machine & Deep Learning. In each field, there are many possibilities for how to apply the technology to solve problems.

|Field |Definition | Tasks |
|:---| :--- | :--- |
|Computer vision| Enable computers and systems to derive meaningful information from digital images, videos and other visual inputs. |Satellite and drone imagery interpretation (e.g., for disaster resilience); face recognition; image captioning; reading traffic signs; locating pedestrians and vehicles in autonomous vehicles|
|Tabular data| Tabular data is data that is structured into rows, each of which contains information about some thing. |Sales forecasting; customer purchasing predictions; churn prediction; marketing budget optimization|
|Recommendation systems| Algorithms aimed at suggesting relevant items to users. |Web search; product recommendations; home page layout|
|Natural language processing (NLP)| Machines that understand and respond to text or voice data in much the same way humans do. |Answering questions; speech recognition; summarizing documents; classifying documents; finding names, dates, etc. in documents; searching for articles mentioning a concept|

Computer Vision and Natural Language Processing in particular are the two fields that have received the most attention when utilizing deep learning. Let's jump into some practice and train our first model.

Your First Deep Learning Model

As we said before, we will teach you how to do things before we explain why they work. Following this top-down approach, we will begin by actually training an image classifier to recognize dogs and cats with almost 100% accuracy. To train this model and run our experiments, you will need to do some initial setup. Don't worry, it's not as hard as it looks.

**An image classifier is a Computer Vision task that uses a supervised learning approach to predict the class (i.e. cat or dog) of an image.**

In order to run our own model, we'll be downloading a _dataset_ of dog and cat photos, and using that to _train a model_. A dataset is simply a bunch of data: it could be images, emails, financial indicators, sounds, or anything else. There are many datasets made freely available that are suitable for training models. Many of these datasets are created by academics to help advance research, many are made available for competitions (there are competitions where data scientists can compete to see who has the most accurate model!), and some are by-products of other processes (such as financial filings).

In our case, the dataset is called the [Oxford-IIIT Pet Dataset](http://www.robots.ox.ac.uk/~vgg/data/pets/). It contains 7,349 images of cats and dogs from 37 different breeds, and it will be downloaded from the fast.ai datasets collection to the GPU server you are using and then extracted. Please run all code cells below as well.
###Code
#@title Run this cell to install all libraries (packages of techniques) that we need.
!pip install -Uqq fastbook
!pip install fastai -Uqq --upgrade
from fastbook import *
from fastai.vision.all import *
#import fastbook
#fastbook.setup_book()
###Output
_____no_output_____
###Markdown
Let's now download the dataset that we described above. Don't worry, the dataset will not be permanently saved on your computer.
###Code
path = untar_data(URLs.PETS)/'images'
###Output
_____no_output_____
###Markdown
After downloading a dataset full of images, we will now have to prepare the data in a way that we can "feed" it to our model. Please run the cell below!
###Code
#@title Prepare My Data for my model
def is_dog(x): return x[0].islower()
dls = ImageDataLoaders.from_name_func(
path, get_image_files(path), valid_pct=0.2, seed=42,
label_func=is_dog, item_tfms=Resize(224))
###Output
_____no_output_____
###Markdown
Let's finally train our model. In the code below you will see `dls`, `resnet34` and `error_rate`.

`dls`: We packaged all our processed data into our "DataLoaders" (to load the data into our model), for which we use the shortcut `dls`.

`resnet34`: Resnet is a type of neural network that is pre-built and that we can easily use. The "34" stands for the total number of layers the neural network has.

`error_rate`: The `error_rate` is our defined metric to observe the performance of our model.
###Code
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
###Output
Downloading: "https://download.pytorch.org/models/resnet34-b627a593.pth" to /root/.cache/torch/hub/checkpoints/resnet34-b627a593.pth
###Markdown
Wow! We have achieved an `error_rate` of 0.6%. But how do we know if this model is any good? The error rate is the proportion of images that were incorrectly identified. It serves as our metric: our measure of model quality, chosen to be intuitive and comprehensible. As you can see, the model is nearly perfect, even though the training time was very short. In fact, the accuracy you've achieved already is far better than anybody had ever achieved just 10 years ago!

Finally, let's check that this model actually works. Go and get a photo of a dog or a cat; if you don't have one handy, just search Google Images and download an image that you find there. Now execute the cell with `uploader` defined. It will output a button: click on it, select the image you want to classify, and it will be uploaded.
###Code
# Upload an image from your computer
uploader = widgets.FileUpload()
uploader
###Output
_____no_output_____
###Markdown
If you have correctly uploaded an image above, you should be able to see it below.
###Code
img = PILImage.create(uploader.data[0])
img
###Output
_____no_output_____
###Markdown
Let's now use our trained model to predict whether our uploaded image is a cat or a dog.
###Code
is_dog, _, probs = learn.predict(img)
#print(f"Is this a dog?: {is_dog}.")
print(f"Probability it's a dog: {probs[1].item():.6f}")
print(f"probability it's a cat: {probs[0].item():.6f}")
###Output
_____no_output_____
###Markdown
Congratulations on your first classifier!But what does this mean? What did you actually do? In order to explain this, let's zoom out again to take in the big picture. What did my model do?
###Code
#@title
gv('''ordering=in
model[shape=box3d width=1 height=0.7 label="Neural Network + Parameters"]
inputs->model->predictions; labels->loss; predictions->loss
loss->model[constraint=false label=update]''')
###Output
_____no_output_____
###Markdown
The image above is a conceptualization of the process the model went through in order to achieve such a low `error_rate` and successfully predict whether the input image is a cat or a dog. Let's go through the process together.

1. Inputs: The input is the data from our downloaded dataset. Just like above, we had to prepare the data before "feeding" it into our neural network.
1. Neural Network: Just like we have learned above, the data is passed into the first input layer of the neural network and goes through each layer.
1. Prediction: Based on the output of the output layer, a prediction is produced.
1. Label: Aside from the prediction, the model also looks at the actual content of the image, which is described via the label.
1. Loss: Now that we have a prediction and a label, we compare them and calculate a loss. The more predictions are wrong, and the more wrong they are, the higher the loss.
1. Update: Based on the loss (high or low), the model updates its parameters to achieve a lower loss (and higher accuracy).
1. Parameters: The values that impact the prediction, which are improved with each iteration of the data through the model.

You will also find a glossary below that describes each keyword that is important in Deep Learning. Do not worry if you are not yet able to fully comprehend each term.

Deep Learning Jargon Glossary

|Term | Meaning |
|:--- | :--- |
|Label | The data that we're trying to predict, such as "dog" or "cat"|
|Neural Network | The existing _template_ of the series of algorithms that we're trying to adjust to reach our objective|
|Model | The combination of the neural network with a particular set of parameters/weights|
|Parameters/Weights | The values in the model that change what task it can do, and are updated through model training|
|Fit | Update the parameters of the model such that the predictions of the model using the input data match the target labels|
|Train | A synonym for _fit_|
|Pretrained model | A model that has already been trained, generally using a large dataset, and will be fine-tuned|
|Fine-tune | Update a pretrained model for a different task|
|Epoch | One complete pass through the input data|
|Loss | A measure of how good the model is, chosen to drive training via SGD|
|Metric | A measurement of how good the model is, using the validation set, chosen for human consumption|
|Validation set | A set of data held out from training, used only for measuring how good the model is|
|Training set | The data used for fitting the model; does not include any data from the validation set|
|Overfitting | Training a model in such a way that it _remembers_ specific features of the input data, rather than generalizing well to data not seen during training|
|CNN | Convolutional neural network; a type of neural network that works particularly well for computer vision tasks|

Limitations Inherent To Machine Learning

From this picture we can now see some fundamental things about training a deep learning model:

- A model cannot be created without data.
- A model can only learn to operate on the patterns seen in the input data used to train it.
- This learning approach only creates *predictions*, not recommended *actions*.
- It's not enough to just have examples of input data; we need *labels* for that data too (e.g., pictures of dogs and cats aren't enough to train a model; we need a label for each one, saying which ones are dogs and which are cats).

Generally speaking, we've seen that most organizations that say they don't have enough data actually mean they don't
have enough *labeled* data. If an organization is interested in doing something in practice with a model, then presumably they have some inputs they plan to run their model against. And presumably they've been doing that some other way for a while (e.g., manually, or with some heuristic program), so they have data from those processes! For instance, a radiology practice will almost certainly have an archive of medical scans (since they need to be able to check how their patients are progressing over time), but those scans may not have structured labels containing a list of diagnoses or interventions (since radiologists generally create free-text natural language reports, not structured data). We'll be discussing labeling approaches a lot in this book, because it's such an important issue in practice.

Since these kinds of machine learning models can only make *predictions* (i.e., attempt to replicate labels), this can result in a significant gap between organizational goals and model capabilities. For instance, in this book you'll learn how to create a *recommendation system* that can predict what products a user might purchase. This is often used in e-commerce, such as to customize products shown on a home page by showing the highest-ranked items. But such a model is generally created by looking at a user and their buying history (*inputs*) and what they went on to buy or look at (*labels*), which means that the model is likely to tell you about products the user already has or already knows about, rather than new products that they are most likely to be interested in hearing about. That's very different from what, say, an expert at your local bookseller might do, where they ask questions to figure out your taste, and then tell you about authors or series that you've never heard of before.

Another critical insight comes from considering how a model interacts with its environment. This can create *feedback loops*, as described here:

- A *predictive policing* model is created based on where arrests have been made in the past. In practice, this is not actually predicting crime, but rather predicting arrests, and is therefore partially simply reflecting biases in existing policing processes.
- Law enforcement officers then might use that model to decide where to focus their police activity, resulting in increased arrests in those areas.
- Data on these additional arrests would then be fed back in to retrain future versions of the model.

This is a *positive feedback loop*: the more the model is used, the more biased the data becomes, making the model even more biased, and so forth.

Feedback loops can also create problems in commercial settings. For instance, a video recommendation system might be biased toward recommending content consumed by the biggest watchers of video (e.g., conspiracy theorists and extremists tend to watch more online video content than the average), resulting in those users increasing their video consumption, resulting in more of those kinds of videos being recommended. We'll consider this topic in more detail later.

Machine Learning Workflow
###Code
#@title
gv('''ordering=in
problem[shape=box3d width=1 height=1 label="Translate a Business Goal\n to Machine Learning task"]
dataset[shape=box3d width=1 height=1 label="Collect and clean \n your dataset"]
transform[shape=box3d width=1 height=1 label="Data\n Transformation"]
train[shape=box3d width=1 height=1 label="Train \n your model"]
predict[shape=box3d width=1 height=1 label="Interpret the model \n and make \n predictions"]
problem->dataset->transform->train->predict''')
###Output
_____no_output_____ |
sci_py_lecture_scipy.ipynb | ###Markdown
Short intro- The *SciPy* **framework** builds on top of the *low-level NumPy* for multidimensional arrays,- and provides a large number of *higher-level* **scientific algorithms**. Some of the topics that *SciPy* covers are:- Special functions (```scipy.special```)- Integration (```scipy.integrate```)- Optimization (```scipy.optimize```)- Interpolation (```scipy.interpolate```)- Fourier Transforms (```scipy.fftpack```)- Signal Processing (```scipy.signal```)- Linear Algebra (```scipy.linalg```)- Sparse Eigenvalue Problems (```scipy.sparse```)- Statistics (```scipy.stats```)- Multi-dimensional image processing (```scipy.ndimage```)- File IO (```scipy.io```)
###Code
# Different kinds of import
# fetch all -- from scipy import * -- for REAL??
# part of          -- import scipy.linalg as la            -- hmm, reasonable
# part of part of  -- from scipy.special import jn, yn
from scipy import *  # using this can reduce a lot of typing (for me)
###Output
_____no_output_____
###Markdown
Special Functions

- *I have no idea what the* ```special functions``` *mean*.
- Here's a wiki [link](https://zh.wikipedia.org/wiki/%E8%B4%9D%E5%A1%9E%E5%B0%94%E5%87%BD%E6%95%B0) (the Chinese Wikipedia article on Bessel functions, which are the special functions used below: ```jn``` and ```yn``` are Bessel functions of the first and second kind).
###Code
from scipy.special import jn, yn, jn_zeros, yn_zeros
n = 0
x = 0.0
print(
"J_{:d}({:f}) = {:f}".format(n, x, jn(n, x))
)
x = 1.0
print(
"J_{:d}({:f}) = {:f}".format(n, x, jn(n, x))
)
x = linspace(0, 10, 100)
import matplotlib.pyplot as plt  # not pulled in by `from scipy import *`

fig, ax = plt.subplots()
for n in range(4):
ax.plot(x, jn(n, x), label=f"$J_{n}(x)$")
ax.legend()
n = 0
m = 4
jn_zeros(n, m)
###Output
_____no_output_____
###Markdown
Integration> It was called (numerical) ***quadrature*** as well.
###Code
from scipy.integrate import quad, dblquad, tplquad
###Output
_____no_output_____
###Markdown
- 0x01 - basic usage
###Code
def f(x):
return x
x_lower = 0
x_upper = 1
val, abs_err = quad(f, x_lower, x_upper)
val
abs_err
###Output
_____no_output_____
###Markdown
- 0x02 - wtf usage
###Code
def integrand(x, n):
"""
Bessel function of first kind and order n.
"""
    return jn(n, x)  # the extra argument n is supplied through quad's `args` below
x_lower = 0
x_upper = 1
val, abs_err = quad(integrand, x_lower, x_upper, args=(3,))
val
abs_err
###Output
_____no_output_____
###Markdown
- 0x03 - simple func
###Code
val, abs_err = quad(lambda x: exp(-x ** 2), -Inf, Inf) # use 'Inf' as integral limits (is fine)
val
abs_err
# the exact value of the integral of exp(-x**2) over the whole real line
# is sqrt(pi), so we can compare the numerical result against it
analytical = sqrt(pi)
analytical
###Output
_____no_output_____
###Markdown
- 0x04 - higher-dimen integration
###Code
def integrand(x, y):
return exp(-x**2 - y**2)
x_lower, x_upper = (0, 10)
y_lower, y_upper = (0, 10)
val, abs_err = dblquad(
integrand,
x_lower,
x_upper,
lambda x: y_lower,
lambda x: y_upper,
)
val
abs_err
###Output
_____no_output_____ |
docs/content/distributions.ipynb | ###Markdown
Distributions

Introduction

This section will begin to formalize the connection between random variables, probability density functions, and population parameters. We generally use language like "the random variable $X$ follows a named distribution", which has a probability density function defined by, possibly many, parameters. Hence, the word distribution is in some sense just the name that binds random variables, probability density functions, and parameters together.

Warm Up

Before we look at two common named population parameters, let's introduce a few new words that we'll use throughout this section.

- **support**: The set of values a random variable might assume, and equally the set of values a random variable's probability density function is defined over. For example, the support of $X \sim \text{Uniform}(1, 6)$ is the integers from $1$ to $6$ inclusive.
- **expected value**: A population-level measure of center for a random variable. For example, $3.5$ for a fair die.
- **variance**: A population-level measure of variability for a random variable.
- **standard deviation**: The square root of the variance.

Random Variables

A random variable is a function from a set of all possible outcomes, named the sample space, to exactly one real number. We often assume that random variables follow named distributions, e.g. $Y \sim \text{Uniform}(a, b)$ where $a < b$, or $X \sim \text{Bernoulli}(p)$ where $p \in [0, 1]$. Named distributions are common because they often abstractly represent processes in the world worth measuring. Based on the outcome of the process of interest, we calculate probabilities for a random variable that follows a specific distribution.

The Uniform distribution represents rolling dice well. Many of the probabilities surrounding gambling are found by assuming random variables follow various Uniform distributions. Ignoring payouts, roulette is essentially a random variable $X \sim \text{Uniform}(1, 36)$.

The Bernoulli distribution represents well any process that has two mutually exclusive outcomes with a fixed probability of "success." Anything from unfair coins to the outcomes of elections are modeled with Bernoulli random variables.

These are not the only random variables, nor are random variables restricted to countable outcomes. Discrete random variables are restricted to countable outcomes, and continuous random variables are the extension to uncountable outcomes. Discrete random variables take on non-negative mass or probability at single points in the support of the random variable, and thus have probability mass functions. On the other hand, continuous random variables have probability density functions, since zero mass occurs at distinct points in the support of the random variable. These lecture notes will only use the name probability density functions, even when referring to discrete random variables.

Before providing a long list of some other common named distributions, we will discuss the mean and variance of a random variable. These quantities describe a measure of center and a measure of spread of random variables. Recall, statistics uses data to estimate population parameters. The mean and variance of a random variable are two of the more commonly estimated quantities that describe a population. With a data set in hand, the sample mean (add up all the data and divide by the number of data points) is an approximation of the mean of a random variable. Likewise, with a data set in hand, the sample measure of spread called the variance is an approximation of the variance of a random variable.
###Code
import numpy as np
import pandas as pd
import bplot as bp
bp.LaTeX()
bp.dpi(300)
###Output
_____no_output_____
###Markdown
Mean of a Random Variable

Think back to our discrete random variable that represented rolling a single fair die, $X \sim \text{Uniform}(1, 6)$. We formalized the mathematical notation $P(X \in \{2,4,6\}) = 1/2$ by imagining rolling the same fair die an infinite number of times and dividing the number of times either $2, 4$, or $6$ turns up by the total number of rolls. Next, we will formalize, in a similar fashion, the idea of the mean of a random variable.

The expected value describes a measure of center of a random variable. This is related to, but not exactly the same thing as, the sample mean, where you add up all the numbers and divide by however many numbers there are. The expected value does not describe data. The expected value instead describes a measure of center of the probability density function for a random variable.

For the discrete random variable $X \sim \text{Uniform}(1, 6)$ the probability density function is displayed below. More generally, as uniform implies sameness, mathematically the probability density function is the same for all arguments

$$\text{uniform}(x|a, b) = \frac{1}{b - a + 1} $$

for $x \in \\{a, a+1, \ldots, b-1, b\\}$. Notice that the random variable is only defined for integer values between $a$ and $b$ inclusive. These values make up the **support** of the random variable. Think of the support as the values for which the probability density function is positive.
###Code
x = np.arange(1, 7)
fx = 1 / (6 - 1 + 1)
df = pd.DataFrame({'x': x, 'f': fx})
bp.point(df['x'], df['f'])
bp.labels(x='$x$', y='uniform$(x|1,6)$', size=18)
###Output
_____no_output_____
###Markdown
Example

Since the population mean describes a measure of center, and the probability density function takes on the same value $1/6$ at each value in the support $\{1, 2, 3, 4, 5, 6\}$, the expected value must be the value in the middle of the support, namely $3.5$. Formally, we read $\mathbb{E}(X) = 3.5$ as "the **expected value** of the random variable $X$ is $3.5$". As the sample mean is to data, the expected value is to a random variable. More formally, the expected value of $X \sim \text{Uniform}(a, b)$ is

$$ \mathbb{E}(X) = \sum_{x = a}^b x \cdot \text{uniform}(x|a,b) = \sum_{x = a}^b x \cdot \frac{1}{b - a + 1}. $$

In Python, we can apply this formula to $X \sim \text{Uniform}(1,6)$,
###Code
a = 1; b = 6
x = np.arange(1, 6 + 1)
fx = 1 / (b - a + 1)
sum(x * fx) # E(X)
###Output
_____no_output_____
###Markdown
Notice that we are simply weighting each value in the support of the random variable by the probability density function evaluated at each value in the support. The expected value is to be thought of as the value you'd get by taking the sample mean of the outcomes produced by infinitely rolling a fair die. Let's approximate this process in Python,
###Code
N = int(1e3)
die = np.random.choice(x, size=N)
flips = np.arange(1, N + 1)
df = pd.DataFrame({'flip': flips,
'm': np.cumsum(die)/flips})
bp.line(df['flip'], df['m'])
bp.line_h(y=3.5, color='black')
bp.labels(x='flip', y='$\hat{E}(X)$', size=18)
###Output
_____no_output_____
###Markdown
**DEFINITION**. Let $X \sim F$, where $F$ is the name of a distribution. The expected value of a random variable is

$$ \mathbb{E}(X) = \int_{\mathbb{R}} x\,d\text{F}(x).$$

The fancy integral here is just to remind you that for discrete random variables the integral becomes a sum, as above, and for continuous random variables the integral stays. In both cases, the sum/integral ranges over the support of the random variable and the summand/integrand is the product of $x$ and the probability density function.

Variance and Standard Deviation of a Random Variable

Where the mean is a measure of center of a random variable, the variance is a measure of spread. Specifically, the variance measures squared distance from the mean, again weighted by the probability density function.

**DEFINITION**. Let $X \sim F$, where $F$ is the name of a distribution function with mean $\mu = \mathbb{E}(X)$. The variance of $X$ is

$$ \mathbb{V}(X) = \int_{\mathbb{R}} (x - \mathbb{E}(X))^2 \, dF(x).$$

**DEFINITION**. Let $X \sim F$, where $F$ is the name of a distribution function with variance $\mathbb{V}(X)$. The standard deviation of $X$ is

$$ \mathbb{D}(X) = \sqrt{\mathbb{V}(X)}.$$

The standard deviation is another measure of spread, like the variance, but the standard deviation is in the same units as the mean.

Example

In Python, we can apply this formula to $X \sim \text{Uniform}(1,6)$ by first calculating the expected value $\mathbb{E}(X)$,
###Code
a = 1; b = 6
x = np.arange(1, 6 + 1)
fx = 1 / (b - a + 1)
m = sum(x * fx)
v = sum(np.power(x - m, 2) * fx)
np.sqrt(v)
###Output
_____no_output_____
###Markdown
Example

Assume $X \sim \text{Bernoulli}(p)$. If you work through the math for the variance, then you end up at an equation that we can almost make sense of:

$$ \mathbb{V}(X) = p(1 - p) $$

Let's plot this.
###Code
p = np.linspace(0, 1, 101)
fp = p * (1 - p)
bp.curve(p, fp)
bp.labels(x='$p$', y='$f(p)$', size=18)
###Output
_____no_output_____ |
Session 1/Colab/census analysis.ipynb | ###Markdown
We will start by loading the census data and viewing a few rows. Look at the values in this sample of rows and get an initial feel for what it contains.
###Code
import pandas as pd
census_data = pd.read_csv(data_dir + 'us_census_1994.txt', delimiter="\t")
census_data
###Output
_____no_output_____
###Markdown
Note: The fnlwgt ('Final Weight') column was devised by the statisticians based on demographic population sizes. It turns out not to be useful for our machine learning task, so you can ignore it for the purposes of this tutorial.

Next we run the describe command and get summaries of the numerical columns. Once again, pay attention to max and min values, but this time we will also examine the mean (average) and the interquartile range (25%, 50%, 75%) too.

**What does the min value for age tell you about this sample? Do you think America's oldest person is included? What is the average age? Do you think you can find someone in the dataset who is exactly average age?**

**What do you think the max value of hoursperweek tells us about that column?**

Interquartile ranges are a useful way to understand the distribution of the data. To calculate quartiles for a column we sort the data in numeric order (we don't actually have to do this, Python is doing that for us. Which is handy!). The 50% quartile is also known as the median and represents the midpoint of the data. So the age of the person right in the middle is 37 when we sort by age. Note this is different from, but quite close to, the mean.

**Compare the 50% with the means of the other columns.**

**What do you think the 75% point tells us about 'capital gain'?**

**What do you think the 25% point tells us about 'hours per week'?**

For more information on quartiles have a look at: https://www.mathsisfun.com/data/quartiles.html
###Code
census_data.describe()
###Output
_____no_output_____
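###Markdown
If you want the quartiles of a single column directly, pandas also exposes a `quantile` method; a small sketch for the 'age' column:
###Code
# 25th, 50th (median) and 75th percentiles of age
census_data['age'].quantile([0.25, 0.5, 0.75])
###Output
_____no_output_____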
###Markdown
If at any point you need a reminder of the column names in the dataset, just run this command:
###Code
census_data.columns
###Output
_____no_output_____
###Markdown
We will now look at the salary column. When we come to machine learning this will be the column we try to predict: given the values in all the other columns, does the person earn more than $50,000?

The "groupby" function, combined with "size", gives us a count of rows per value in a column. Run this for the "salary" column.

**Is the data evenly distributed?**
###Code
census_data.groupby(by=['salary']).size()
###Output
_____no_output_____
###Markdown
One useful way of exploring relationships between numerical columns is to use a 'pairs plot'. This plot creates a grid of graphs for selected columns. In this case we are colouring points or columns by salary (you can see in the legend on the right hand side which colour is which).

The plot itself needs some explanation. There are 9 plots here. We will start with the 3 along the diagonal. These are graphs of the values for each attribute, starting with 'age' on the left, which shows the distribution of ages in the dataset split by salary. The scale is at the bottom of the left hand column.

**Where do the peaks lie for each salary group?**

Compare the top left and bottom right plots. **Do they tell you the same thing about proportions of the two salary bands?**

The remaining charts can be put in pairs. The bottom left has 'age' on the x-axis and 'educationnum' on the y-axis, while the top right has the same variables but swapped around. In other words, if you flip the bottom left one around you will get the top right one. So you can ignore the graphs below (or above) the diagonal if you want to.

These other charts can be used to spot correlations between columns. A correlation is when two variables move in tandem: as one goes up, the other one goes up (or down). **Do any of these variables look correlated?**

Thinking about the Machine Learning task of prediction, **which of these variables would you use to "guess" someone's salary?**
###Code
# Pairs plot
import seaborn as sns
# Use a list (not a set) to select columns; sets are unordered and newer
# pandas versions reject them for indexing
sns.pairplot(census_data[['age', 'educationnum', 'capitalgain', 'salary']], hue="salary", kind='scatter', plot_kws={'alpha': 0.5})
###Output
_____no_output_____
###Markdown
For the remaining columns we can use grouping to create summary counts. This won't work so well for numerical columns where there are many values (especially if they have a decimal value, which we don't see in this dataset). In this case a graph would be better. But when we're dealing with 'categorical' values such as 'maritalstatus' it works well.

(Don't forget the columns function from above if you can't remember the column names.)

**What do you notice about the 'occupation' column?**

**Are there other columns with the same issue?**

**Is this a problem?**
###Code
census_data.groupby(by=['maritalstatus']).size()
###Output
_____no_output_____
###Markdown
We can also use groupby to group two (or more) columns at a time.**What does the output of the next command tell us about 'educationnum' and 'education'?**
###Code
census_data.groupby(by=['educationnum','education']).size()
###Output
_____no_output_____
###Markdown
This leads us to an important aspect of preparing data for Machine Learning. The majority of ML algorithms require numbers as input, but most of our columns contain labels.

Let's compare the 'education' column with 'workclass'.

**Could you put both of them in an order of superiority?**

**Could you assign a numeric value to 'education'?**

**What about to 'workclass'?**

One technique for converting categorical variables to numeric values is to use **'one hot encoding'**. This technique converts the values in a column to binary values by creating a new column per unique value in the column. We can do this with the 'get_dummies' function. Running this for the 'workclass' column returns a table with 9 columns, one for each unique value in the column. Compare with the output of the 'groupby' command above if you need reassurance.

This hasn't added the columns to our census data yet. First we need to do something with the question marks.
###Code
pd.get_dummies(census_data['workclass'])
###Output
_____no_output_____
###Markdown
What should we do with the question marks? There are a number of techniques available:

* Remove the rows altogether
* Assign a default value
* Assign an average value
* Work out a realistic value for each row

**Consider the advantages and disadvantages of each approach.**

For this exercise we will choose the second option and assign a default value of 'Unknown-Workclass'. Then check that it has worked by outputting a summary again. You should see that '?' has been replaced.
###Code
census_data.loc[census_data['workclass'].str.contains(r'\?'), 'workclass'] = 'Unknown-Workclass'
census_data.groupby(by=['workclass']).size()
###Output
_____no_output_____
###Markdown
Now we can return to creating 'one hot encodings'. First of all we save the new columns into a variable called new_columns. Then we drop the 'workclass' column from the census data table. When you run this it will output a few rows as before. Note that the number of columns has reduced by one.
###Code
new_columns = pd.get_dummies(census_data['workclass'])
census_data = census_data.drop(['workclass'], axis=1)
census_data
###Output
_____no_output_____
###Markdown
Now we will add our new columns into the table. There are now 23 columns in the table.

**Try and repeat this process for one other column which had question marks in it.**
###Code
census_data = census_data.join(new_columns)
census_data
###Output
_____no_output_____ |
week8/week8.ipynb | ###Markdown
Pandas

**pandas** is short for *Python Data Analysis Library*.
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
In pandas you need to work with DataFrames and Series. According to [the documentation of pandas](https://pandas.pydata.org/pandas-docs/stable/):

* **DataFrame**: Two-dimensional, size-mutable, potentially heterogeneous tabular data. Data structure also contains labeled axes (rows and columns). Arithmetic operations align on both row and column labels. Can be thought of as a dict-like container for Series objects. The primary pandas data structure.
* **Series**: One-dimensional ndarray with axis labels (including time series).
###Code
pd.Series([5, 6, 7, 8, 9, 10])
pd.DataFrame([1, 2, 3, 4, 5])
pd.DataFrame({'Student': ['1', '2'], 'Name': ['Alice', 'Michael'], 'Surname': ['Brown', 'Williams']})
pd.DataFrame([{'Student': '1', 'Name': 'Alice', 'Surname': 'Brown'},
{'Student': '2', 'Name': 'Anna', 'Surname': 'White'}])
###Output
_____no_output_____
###Markdown
Check how to create it with:

* ```pd.DataFrame.from_records()```
* ```pd.DataFrame.from_dict()```

A minimal sketch is shown below.
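###Code
# A minimal sketch of the two constructors mentioned above
records = [('Alice', 'Brown'), ('Anna', 'White')]
pd.DataFrame.from_records(records, columns=['Name', 'Surname'])
pd.DataFrame.from_dict({'Name': ['Alice', 'Anna'], 'Surname': ['Brown', 'White']})
###Output
_____no_output_____
###Markdown
This data set is too big for github, download it from [here](https://www.kaggle.com/START-UMD/gtd). You will need to register on Kaggle first.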
###Code
df = pd.read_csv('globalterrorismdb_0718dist.csv', encoding='ISO-8859-1')
###Output
/opt/anaconda3/lib/python3.8/site-packages/IPython/core/interactiveshell.py:3145: DtypeWarning: Columns (4,6,31,33,61,62,63,76,79,90,92,94,96,114,115,121) have mixed types.Specify dtype option on import or set low_memory=False.
has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
###Markdown
Let's explore this data set. How many rows and columns are there? General information on this data set:

Let's take a look at the dataset information. In .info(), you can pass additional parameters, including:

* **verbose**: whether to print information about the DataFrame in full (if the table is very large, then some information may be lost);
* **memory_usage**: whether to print memory consumption (the default is True, but you can put either False, which will remove memory consumption, or 'deep', which will calculate the memory consumption more accurately);
* **null_counts**: whether to count the number of empty elements (default is True).

A sketch is shown below.
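###Code
# A sketch of .info() with two of the parameters described above;
# 'deep' memory usage is slower but more accurate
df.info(verbose=True, memory_usage='deep')
###Output
_____no_output_____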
###Code
df.describe(include = ['object'])
###Output
_____no_output_____
###Markdown
The describe method shows the basic statistical characteristics of the data for each numeric feature (int64 and float64 types): the number of non-missing values, mean, standard deviation, range, median, and the 0.25 and 0.75 quartiles.

How to look only at the column names and the index:

How to look at the first 10 lines? How to look at the last 15 lines?

How to request only one particular row (by counting rows)? How to request only one particular row by its index?

Look only at the unique values of some columns. How many unique values are there in the ```city``` column? = On how many cities does this data set hold information on terrorist attacks?

In what years did the largest number of terrorist attacks occur (according only to this data set)?

How can we sort all rows by year in descending order?

Which data types do we have in each column?

How to check for missing values?
###Code
df.drop(['approxdate'], axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
Create a new variable ```casualties``` by summing up the values in ```Killed``` and ```Wounded``` (see the sketch after the renaming cell below). Rename the column ```iyear``` to ```Year```:
###Code
df.rename({'iyear' : 'Year'}, axis='columns', inplace=True)
###Output
_____no_output_____
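###Markdown
Below is a sketch of the ```casualties``` variable described above. It assumes the kill and wound counts live in columns named ```Killed``` and ```Wounded```; adjust the names to match your copy of the data.
###Code
# Row-wise sum of the two counts (column names assumed, not verified)
df['casualties'] = df['Killed'] + df['Wounded']
###Output
_____no_output_____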
###Markdown
How do we drop all missing values? How could we replace missing values with other values instead? (A fillna sketch is included in the cell below.)
###Code
df.dropna(inplace=True)
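
# Alternative (a sketch, not applied here): instead of dropping rows,
# fill every remaining missing value with a chosen placeholder, e.g.
# df.fillna(0, inplace=True)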
###Output
_____no_output_____ |
Array/0917/611. Valid Triangle Number.ipynb | ###Markdown
Description: Given an array of non-negative integers, your task is to count the number of triplets chosen from the array that can form a triangle if we take them as the side lengths of a triangle.

Example 1:
Input: [2,2,3,4]
Output: 3
Explanation: Valid combinations are:
2,3,4 (using the first 2)
2,3,4 (using the second 2)
2,2,3

Note:
1. The length of the given array won't exceed 1000.
2. The integers in the given array are in the range of [0, 1000].
###Code
class Solution:
def triangleNumber(self, nums) -> int:
if not nums: return 0
count = 0
nums.sort()
for i in range(len(nums) - 2):
one = nums[i]
for j in range(i+1, len(nums)-1):
two = nums[j]
for t in nums[j+1:]:
if 0 < t < one+two:
count += 1
elif t >= one+two:
break
return count
class Solution:
    def triangleNumber(self, nums) -> int:
        length = len(nums)
        t = 0
        nums.sort()
        for i in range(length - 2):
            k = i + 2
            for j in range(i+1, length-1):
                # M is the largest value the third side may take: it must
                # be strictly less than nums[i] + nums[j]
                M = nums[i] + nums[j] - 1
                if M < nums[j]:
                    continue
                # advance k past every element that can still close a triangle
                while k < length and nums[k] <= M:
                    k += 1
                # all indices in (j, k) work as the third side
                t += min(k, length) - (j + 1)
        return t
class Solution:
    def triangleNumber(self, nums) -> int:
        nums.sort()
        count = 0
        # Two-pointer approach: fix the largest side nums[i] and scan the
        # prefix with left/right pointers; stop when i reaches index 2
        for i in range(len(nums)-1, 1, -1):  # 3, 2, [1, 0]
            left = 0
            right = i - 1
            while left < right:
                if nums[left] + nums[right] > nums[i]:
                    # the array is sorted, so every index in [left, right)
                    # also pairs with `right` to form a valid triangle
                    count += right - left
                    right -= 1
                else:
                    left += 1
        return count
nums_ = [2,2,3,4]
solution = Solution()
solution.triangleNumber(nums_)
# Scratch demonstration of dict.setdefault: it returns the existing value
# for a key, or inserts the given default and returns that
a = {2: 3}
val = a.setdefault(3, 0)  # key 3 is missing, so a[3] becomes 0 and val == 0
a[3] += 1
a  # -> {2: 3, 3: 1}
###Output
_____no_output_____ |
homework/day_3/monte-carlo-LJ-xudong-hw3.ipynb | ###Markdown
Monte Carlo Simulation - Advanced

In this homework, we will work with the Lennard Jones potential, extended with a cutoff distance and periodic boundary conditions.

$$ U(r) = 4 \epsilon \left[\left(\frac{\sigma}{r}\right)^{12} -\left(\frac{\sigma}{r}\right)^{6} \right] $$

Reduced units:

$$ U^*\left(r^*_{ij} \right) = 4 \left[\left(\frac{1}{r^*_{ij}}\right)^{12} -\left(\frac{1}{r^*_{ij}}\right)^{6} \right] $$
###Code
import math, os
import matplotlib.pyplot as plt
%matplotlib notebook
def calculate_LJ(r_ij):
"""
The LJ interaction energy between two particles.
    Computes the pairwise Lennard Jones interaction energy based on the separation distance in reduced units.
Parameters
----------
r_ij : float
The distance between the particles in reduced units.
Returns
-------
pairwise_energy : float
The pairwise Lennard Jones interaction energy in reduced units.
"""
r6_term = math.pow(1/r_ij,6)
r12_term = math.pow(r6_term,2)
pairwise_energy = 4 * (r12_term - r6_term)
    # Note: this relies on a global matplotlib axis `ax` (created in a later
    # plotting cell) to record each evaluated point on the energy curve
    ax.plot(r_ij, pairwise_energy, 'ob')
return pairwise_energy
def calculate_distance(coord1,coord2,box_length=None):
"""
Calculate the distance between two 3D coordinates.
Parameters
----------
coord1, coord2 : list
The atomic coordinates [x, y, z]
box_length : float, optional
The box length. This function assumes box is a cube.
Returns
-------
distance : float
The distance between the two atoms.
"""
distance = 0
vector = [0,0,0]
for i in range(3):
vector[i] = coord1[i] -coord2[i]
if box_length is None:
pass
else:
if vector[i] > box_length/2:
vector[i] -= box_length
elif vector[i] < -box_length/2:
vector[i] += box_length
dim_dist = vector[i] ** 2
distance += dim_dist
distance = math.sqrt(distance)
return distance
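
# Quick check of the minimum image convention (our own sketch, assuming a
# cubic box of length 10): particles at z=0 and z=8 are only 2.0 apart
# once the periodic wrap is applied
assert calculate_distance([0, 0, 0], [0, 0, 8], box_length=10) == 2.0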
###Output
_____no_output_____
###Markdown
Tail CorrectionTruncating interactions using a cutoff removes contribution to the potential energy that might be non-negligible. The tail correction for our system makes a correction for use of the cutoff. We only have to calculate this once at the start of our simulation. The formula is:$$U_{tail} = \frac{8\pi N^2}{3V} \epsilon \sigma ^3 \left[\frac{1}{3}\left(\frac{\sigma}{r_c}\right)^9 - \left(\frac{\sigma}{r_c}\right)^3\right]$$In reduced units:$$U_{tail} = \frac{8\pi N^2}{3V} \left[\frac{1}{3}\left(\frac{1}{r_c}\right)^9 - \left(\frac{1}{r_c}\right)^3\right]$$
###Code
def calculate_tail_correction(cutoff, box_length, num_atoms):
"""
Calculate the tail correction.
Parameters
----------
cutoff : float
        The cutoff distance.
box_length : float
The length of the cell.
num_atoms : int
Number of atoms in a given system.
Returns
-------
tail_co_LJ : float
A float number that shows the value of tail correction energy for the given system.
"""
tail_co_LJ = 0
coeff = 0
r3 = math.pow(1/cutoff,3)
r9 = math.pow(r3,3)
coeff = 8 * math.pi * (num_atoms ** 2)/(3 * (box_length ** 3))
tail_co_LJ = coeff * (r9/3 - r3)
return tail_co_LJ
def calculate_total_energy(coordinates, cutoff=3, box_length=None):
"""
Calculate the total Lennard Jones energy of a system of particles.
Parameters
----------
coordinates : list
Nested list containing particle coordinates.
cutoff : float
        The cutoff distance; interactions beyond it are truncated.
box_length : float, optional
The box length. This function assumes box is a cube.
Returns
-------
total_energy : float
The total pairwise Lennard Jones energy of the system of particles.
"""
total_energy = 0
num_atoms = len(coordinates)
for i in range(num_atoms):
for j in range(i+1,num_atoms):
# print(F'Comparing atom number {i} with atom number {j}')
dist_ij = calculate_distance(coordinates[i], coordinates[j], box_length)
if dist_ij < cutoff:
interaction_energy = calculate_LJ(dist_ij)
total_energy += interaction_energy
return total_energy
def read_xyz(filepath):
"""
Reads coordinates from an xyz file.
Parameters
----------
filepath : str
The path to the xyz file to be processed.
Returns
-------
atomic_coordinates : list
A two dimensional list containing atomic coordinates
"""
with open(filepath) as f:
box_length = float(f.readline().split()[0])
num_atoms = float(f.readline())
coordinates = f.readlines()
atomic_coordinates = []
for atom in coordinates:
split_atoms = atom.split()
float_coords = []
# We split this way to get rid of the atom label.
for coord in split_atoms[1:]:
float_coords.append(float(coord))
atomic_coordinates.append(float_coords)
return atomic_coordinates, box_length
fig = plt.figure()
ax = fig.add_subplot(111)
fig.show()
plt.ylim(-1.1,0.1)
for i in range(1, 51):
r = i * 0.1
calculate_LJ(r)
###Output
_____no_output_____
###Markdown
From this graph, it is obvious that when $r^*_{ij} > 3$ the pairwise energy is almost 0 and the energy curve reaches a plateau. The common choice of a cutoff distance of 3$\sigma$ is therefore reasonable.
###Code
assert calculate_LJ(1) == 0
assert calculate_LJ(math.pow(2,(1/6))) == -1
file_path = os.path.join('lj_sample_configurations','lj_sample_config_periodic1.txt')
coordinates, box_length = read_xyz(file_path)
calculate_total_energy(coordinates)
calculate_total_energy(coordinates,box_length=10)
assert abs(calculate_total_energy(coordinates,3, box_length=10) - (-4351.5)) < 0.1
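
# Tail correction for this configuration (a sketch; the value depends on
# how many particles the sample file contains)
calculate_tail_correction(3, 10, len(coordinates))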
###Output
_____no_output_____
###Markdown
Flow of calculation

1. Generate an initial system state 'm'.
1. Choose an atom with uniform probability from state 'm'.
1. Propose a new state 'n' by translating the particle with a uniform random displacement in each direction.
1. Calculate the energy change for the particle.
1. Accept or reject the new state.
###Code
def accept_or_reject(delta_e, beta):
"""
Accept or reject based on change in energy and temperature.
Parameters
----------
delta_e : float
The change of the system's energy.
beta : float
1 over the temperature.
Returns
-------
accept : bool
Accept the move or not.
"""
if delta_e <= 0:
accept = True
else:
rand_num = random.random()
p_acc = math.exp(-beta * delta_e)
if rand_num < p_acc:
accept = True
else:
accept = False
return accept
delta_energy = -1
beta = 1
assert accept_or_reject(delta_energy,beta) is True
delta_energy = 0
beta = 1
assert accept_or_reject(delta_energy,beta) is True
import random
random.seed(0)
random.random()
delta_energy = 1
beta = 1
p_acc = math.exp(-beta * delta_energy)
print(p_acc)
random.seed(0)
delta_energy = 1
beta = 1
assert accept_or_reject(delta_energy,beta) is False
random.seed(1)
delta_energy = 1
beta = 1
assert accept_or_reject(delta_energy,beta) is True
# Unset random seed
random.seed()
def calculate_pair_energy(coordinates, i_particle, box_length, cutoff):
"""
Calculate the interaction energy of the particles with its environment (all other particles in the system)
Parameters
----------
coordinates : list
The coordinates for all the particles within the system.
i_particle : int
The particle index for which to calculate the energy.
box_length : float
The length of the simulation box.
cutoff : float
The simulation cutoff. Beyond this distance, interactions are not calculated.
Returns
-------
e_total : float
The pairwise interaction energy with the i_th particle with all other particles in the system.
"""
e_total = 0
num_atoms = len(coordinates)
for i in range(num_atoms):
# only consider the interactions with particles that is not the i_particle
if i != i_particle:
r_ij = calculate_distance(coordinates[i_particle], coordinates[i], box_length)
if r_ij < cutoff:
e_pair = calculate_LJ(r_ij)
e_total += e_pair
return e_total
coordinates = [[0, 0, 0], [0, math.pow(2, 1/6), 0], [0, 2*math.pow(2, 1/6), 0]]
assert calculate_pair_energy(coordinates, 1, 10, 3) == -2
assert calculate_pair_energy(coordinates, 0, 10, 3) == calculate_pair_energy(coordinates, 2, 10, 3)
###Output
_____no_output_____
###Markdown
Monte Carlo Simulation loop
###Code
import os
import random
# Set simulation parameters
reduced_temperature = 0.9
num_steps = 500
max_displacement = 0.1
cutoff = 3
# Reporting information
freq = 100
steps = []
energies = []
# Calculate quantities
beta = 1 / reduced_temperature
# Read initial coordinates
file_path = os.path.join('lj_sample_configurations','lj_sample_config_periodic1.txt')
coordinates, box_length = read_xyz(file_path)
num_particles = len(coordinates)
# Calculate based on the inputs
total_energy = calculate_total_energy(coordinates, cutoff, box_length)
total_energy += calculate_tail_correction(cutoff, box_length, num_particles)
for step in range(num_steps):
# 1. Randomly pick one particle in the num_particles particles.
random_particle = random.randrange(0,num_particles)
# 2. Calculate the interaction energy of the selected particles with the system and store this value.
current_energy = calculate_pair_energy(coordinates, random_particle, box_length, cutoff)
# 3. Generate a random displacement in x, y, z directions with range (-max_displacement, max_displacement).
x_rand = random.uniform(-max_displacement, max_displacement)
y_rand = random.uniform(-max_displacement, max_displacement)
z_rand = random.uniform(-max_displacement, max_displacement)
# 4. Modify the coordinate of the selected particle by generated displacement.
coordinates[random_particle][0] += x_rand
coordinates[random_particle][1] += y_rand
coordinates[random_particle][2] += z_rand
# 5. Calculate the new interaction energy of the new particle and store this value.
proposed_energy = calculate_pair_energy(coordinates, random_particle, box_length, cutoff)
# 6. Calculate energy change and decide if this move is accepted.
delta_energy = proposed_energy - current_energy
accept = accept_or_reject(delta_energy, beta)
# 7. If accepted, keep movement. Else, revert to the old position.
if accept == True:
total_energy += delta_energy
else:
# if rejected, roll back to the origin coordinates of the selected particle.
coordinates[random_particle][0] -= x_rand
coordinates[random_particle][1] -= y_rand
coordinates[random_particle][2] -= z_rand
# 8. Print the energy and store the coordinates at certain intervals.
if step % freq == 0:
print(step, total_energy/num_particles)
steps.append(step)
energies.append(total_energy/num_particles)
import matplotlib.pyplot as plt
%matplotlib notebook
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(steps, energies)
###Output
_____no_output_____
###Markdown
Generate initial configuration randomly
###Code
import random
def initial_config(box_volume, num_particles):
"""
Generate the initial configuration randomly by random()
Parameters
----------
box_volume : float
The volume of the simulation box.
        Here, the simulation box is regarded as a cubic box so that the box_length can be calculated directly.
num_particles : int
The number of particles within this system.
Returns
-------
box_length : float
The length of the cubic simulation box.
coordinates : list
The list that containing the coordinates of the atoms in the generated configuration.
"""
# Assume that the simulation box is a cubic box and calculate the box_length
box_length = box_volume ** (1/3)
# Create a new empty list to store the generated coordinates.
coordinates = []
# Generate num_particles of coordinates
for i in range(num_particles):
x_rand = random.uniform(0, box_length)
y_rand = random.uniform(0, box_length)
z_rand = random.uniform(0, box_length)
coordinate = [x_rand, y_rand, z_rand]
coordinates.append(coordinate)
return box_length, coordinates
# sanity check for initial_config
box_vol = 1000
num_atoms = 5
box_length, coordinates = initial_config(box_vol, num_atoms)
print(box_length)
print('\n')
print(coordinates)
###Output
_____no_output_____ |
Hand-on-Machine-Learning-with-Scikit-learning-and-Tensorflow/Chapter14/Recurrent Neural Networls.ipynb | ###Markdown
SetUp
###Code
def reset_graph(seed=42):
tf.reset_default_graph()
tf.set_random_seed(seed)
np.random.seed(seed)
###Output
_____no_output_____
###Markdown
Basic RNNs in Tensorflow
###Code
import tensorflow as tf
import numpy as np
reset_graph()
n_inputs = 3
n_neurons = 5
X0 = tf.placeholder(tf.float32, [None, n_inputs])
X1 = tf.placeholder(tf.float32, [None, n_inputs])
Wx = tf.Variable(tf.random_normal(shape=[n_inputs, n_neurons],dtype=tf.float32))
Wy = tf.Variable(tf.random_normal(shape=[n_neurons,n_neurons],dtype=tf.float32))
b = tf.Variable(tf.zeros([1, n_neurons], dtype=tf.float32))
Y0 = tf.tanh(tf.matmul(X0, Wx) + b)
Y1 = tf.tanh(tf.matmul(Y0, Wy) + tf.matmul(X1, Wx) + b)
init = tf.global_variables_initializer()
X0_batch = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 0, 1]]) # t = 0
X1_batch = np.array([[9, 8, 7], [0, 0, 0], [6, 5, 4], [3, 2, 1]]) # t = 1
with tf.Session() as sess:
init.run()
Y0_val,Y1_val = sess.run([Y0,Y1], feed_dict={X0:X0_batch,X1:X1_batch})
print(Y0_val)
print(Y1_val)
###Output
[[ 1. -1. -1. 0.40200216 -1. ]
[-0.12210433 0.62805319 0.96718419 -0.99371207 -0.25839335]
[ 0.99999827 -0.9999994 -0.9999975 -0.85943311 -0.9999879 ]
[ 0.99928284 -0.99999815 -0.99990582 0.98579615 -0.92205751]]
###Markdown
Using static_rnn
###Code
n_inputs= 3
n_neurons = 5
reset_graph()
X0 = tf.placeholder(tf.float32, [None, n_inputs])
X1 = tf.placeholder(tf.float32, [None, n_inputs])
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
output_seqs, state = tf.contrib.rnn.static_rnn(basic_cell, [X0, X1], dtype=tf.float32)
Y0, Y1 = output_seqs
init = tf.global_variables_initializer()
X0_batch = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 0, 1]])
X1_batch = np.array([[9, 8, 7], [0, 0, 0], [6, 5, 4], [3, 2, 1]])
with tf.Session() as sess:
init.run()
Y0_val, Y1_val = sess.run([Y0,Y1], feed_dict={X0:X0_batch,X1:X1_batch})
Y0_val
Y1_val
###Output
_____no_output_____
###Markdown
Using dynamic_rnn()
###Code
n_steps = 2
n_inputs = 3
n_neurons = 5
reset_graph()
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
outputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)
init = tf.global_variables_initializer()
X_batch = np.array([
[[0, 1, 2], [9, 8, 7]], # instance 1
[[3, 4, 5], [0, 0, 0]], # instance 2
[[6, 7, 8], [6, 5, 4]], # instance 3
[[9, 0, 1], [3, 2, 1]], # instance 4
])
with tf.Session() as sess:
init.run()
output_val = sess.run(outputs, feed_dict={X:X_batch})
print(output_val)
###Output
[[[ 0.90414059 0.49652389 -0.86023885 0.39286929 -0.30018684]
[ 0.99999994 0.76327085 -1. 0.99888641 -0.7229408 ]]
[[ 0.99988353 0.77785885 -0.99992859 0.9727248 -0.78886396]
[ 0.44762579 -0.06916652 -0.51665425 -0.84579295 0.88807124]]
[[ 0.99999976 0.91130525 -0.99999994 0.99912328 -0.94954252]
[ 0.9999842 0.20443429 -0.99999785 0.94190502 0.3501083 ]]
[[ 0.99490303 0.88642204 -0.99999577 0.99939179 0.97382319]
[ 0.95951742 0.73643577 -0.99815822 -0.26513484 0.06432986]]]
###Markdown
Setting the sequence lengths
###Code
n_steps = 2
n_inputs = 3
n_neurons = 5
reset_graph()
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
seq_length = tf.placeholder(tf.int32, [None])
outputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32,
sequence_length=seq_length)
init = tf.global_variables_initializer()
X_batch = np.array([
# step 0 step 1
[[0, 1, 2], [9, 8, 7]], # instance 1
[[3, 4, 5], [0, 0, 0]], # instance 2 (padded with zero vectors)
[[6, 7, 8], [6, 5, 4]], # instance 3
[[9, 0, 1], [3, 2, 1]], # instance 4
])
seq_length_batch = np.array([2, 1, 2, 2])
with tf.Session() as sess:
init.run()
outputs_val, states_val = sess.run(
[outputs, states], feed_dict={X: X_batch, seq_length: seq_length_batch})
print(outputs_val)
###Output
[[[-0.68579948 -0.25901747 -0.80249101 -0.18141513 -0.37491536]
[-0.99996698 -0.94501185 0.98072106 -0.9689762 0.99966913]]
[[-0.99099374 -0.64768541 -0.67801034 -0.7415446 0.7719509 ]
[ 0. 0. 0. 0. 0. ]]
[[-0.99978048 -0.85583007 -0.49696958 -0.93838578 0.98505187]
[-0.99951065 -0.89148796 0.94170523 -0.38407657 0.97499216]]
[[-0.02052618 -0.94588047 0.99935204 0.37283331 0.9998163 ]
[-0.91052347 0.05769409 0.47446665 -0.44611037 0.89394671]]]
###Markdown
Training a sequence classifier
###Code
reset_graph()
n_steps = 28
n_inputs = 28
n_neurons = 150
n_outputs = 10
learning_rate = 0.001
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.int32, [None])
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
outputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)
logits = tf.layers.dense(states, n_outputs)
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y,
logits=logits)
loss = tf.reduce_mean(xentropy)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/")
X_test = mnist.test.images.reshape((-1, n_steps, n_inputs))
y_test = mnist.test.labels
n_epochs = 10
batch_size = 150
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
X_batch = X_batch.reshape((-1, n_steps, n_inputs))
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test})
print(epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)
###Output
0 Train accuracy: 0.973333 Test accuracy: 0.8929
1 Train accuracy: 0.94 Test accuracy: 0.9401
2 Train accuracy: 0.953333 Test accuracy: 0.9513
3 Train accuracy: 0.973333 Test accuracy: 0.9661
4 Train accuracy: 0.98 Test accuracy: 0.9673
5 Train accuracy: 0.973333 Test accuracy: 0.9678
6 Train accuracy: 0.98 Test accuracy: 0.9709
7 Train accuracy: 0.986667 Test accuracy: 0.9699
8 Train accuracy: 0.98 Test accuracy: 0.9735
9 Train accuracy: 0.973333 Test accuracy: 0.9649
###Markdown
Multi-layer RNN
###Code
reset_graph()
n_steps = 28
n_inputs = 28
n_outputs = 10
learning_rate = 0.001
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.int32, [None])
n_neurons = 100
n_layers = 3
layers = [tf.contrib.rnn.BasicRNNCell(num_units=n_neurons,
activation=tf.nn.relu)
for layer in range(n_layers)]
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(layers)
outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
states_concat = tf.concat(axis=1, values=states)
logits = tf.layers.dense(states_concat, n_outputs)
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
n_epochs = 10
batch_size = 150
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
X_batch = X_batch.reshape((-1, n_steps, n_inputs))
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test})
print(epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)
###Output
0 Train accuracy: 0.94 Test accuracy: 0.9442
1 Train accuracy: 0.94 Test accuracy: 0.9669
2 Train accuracy: 0.966667 Test accuracy: 0.9703
3 Train accuracy: 0.986667 Test accuracy: 0.9647
4 Train accuracy: 0.973333 Test accuracy: 0.9726
5 Train accuracy: 0.98 Test accuracy: 0.975
6 Train accuracy: 0.993333 Test accuracy: 0.9751
7 Train accuracy: 0.986667 Test accuracy: 0.9801
8 Train accuracy: 0.966667 Test accuracy: 0.9762
9 Train accuracy: 1.0 Test accuracy: 0.9838
###Markdown
Time Series
###Code
import matplotlib.pyplot as plt
t_min, t_max = 0, 30
resolution = 0.1
def time_series(t):
return t * np.sin(t) / 3 + 2 * np.sin(t*5)
def next_batch(batch_size, n_steps):
t0 = np.random.rand(batch_size, 1) * (t_max - t_min - n_steps * resolution)
Ts = t0 + np.arange(0., n_steps + 1) * resolution
ys = time_series(Ts)
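    # inputs are the first n_steps values; targets are the same series
    # shifted one time step into the future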
return ys[:, :-1].reshape(-1, n_steps, 1), ys[:, 1:].reshape(-1, n_steps, 1)
t = np.linspace(t_min, t_max, int((t_max - t_min) / resolution))
n_steps = 20
t_instance = np.linspace(12.2, 12.2 + resolution * (n_steps + 1), n_steps + 1)
plt.figure(figsize=(11,4))
plt.subplot(121)
plt.title("A time series (generated)", fontsize=14)
plt.plot(t, time_series(t), label=r"$t \cdot \sin(t)/3 + 2\sin(5t)$")
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "b-", linewidth=3, label="A training instance")
plt.legend(loc="lower left", fontsize=14)
plt.axis([0, 30, -17, 13])
plt.xlabel("Time")
plt.ylabel("Value")
plt.subplot(122)
plt.title("A training instance", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.legend(loc="upper left")
plt.xlabel("Time")
plt.show()
X_batch, y_batch = next_batch(1, n_steps)
np.c_[X_batch, y_batch]
###Output
_____no_output_____
###Markdown
Using an OutputProjectionWrapper
###Code
reset_graph()
n_steps = 20
n_inputs = 1
n_neurons = 100
n_outputs = 1
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])
cell = tf.contrib.rnn.OutputProjectionWrapper(tf.contrib.rnn.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu),
output_size=n_outputs)
outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
learning_rate = 0.001
loss = tf.reduce_mean(tf.square(outputs - y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
n_iteration = 1500
batch_size = 50
saver = tf.train.Saver()
with tf.Session() as sess:
init.run()
for iteration in range(n_iteration):
X_batch, y_batch = next_batch(batch_size, n_steps)
sess.run(training_op, feed_dict={X:X_batch, y:y_batch})
if iteration % 100 == 0:
mse = loss.eval(feed_dict={X:X_batch, y:y_batch})
print(iteration, "\tMSE:", mse)
saver.save(sess, './my_time_series_model')
with tf.Session() as sess:
saver.restore(sess, "./my_time_series_model")
X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
y_pred = sess.run(outputs, feed_dict={X:X_new})
y_pred
plt.title("Testing the model", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.plot(t_instance[1:], y_pred[0,:,0], "r.", markersize=10, label="prediction")
plt.legend(loc="upper left")
plt.xlabel("Time")
plt.show()
###Output
_____no_output_____
###Markdown
Generative RNN
###Code
with tf.Session() as sess:
saver.restore(sess, './my_time_series_model')
sequence = [0.] * n_steps
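    # seed with zeros, then repeatedly feed the last n_steps values and
    # append the model's next-step prediction to grow the sequence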
for iteration in range(300):
X_batch = np.array(sequence[-n_steps:]).reshape(1,n_steps,1)
y_pred = sess.run(outputs, feed_dict={X:X_batch})
sequence.append(y_pred[0,-1,0])
plt.figure(figsize=(8,4))
plt.plot(np.arange(len(sequence)), sequence, "b-")
plt.plot(t[:n_steps], sequence[:n_steps], "b-", linewidth=3)
plt.xlabel("Time")
plt.ylabel("Value")
plt.show()
###Output
_____no_output_____
###Markdown
Deep RNNs
###Code
reset_graph()
n_inputs = 2
n_steps = 5
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
n_neurons = 100
n_layers = 3
layers = [tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
for _ in range(n_layers)]
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(layers)
output, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
init = tf.global_variables_initializer()
X_batch = np.random.rand(2, n_steps, n_inputs)
with tf.Session() as sess:
init.run()
    output_val, state_val = sess.run([output, states], feed_dict={X: X_batch})
output_val.shape
###Output
_____no_output_____
###Markdown
LSTM
###Code
reset_graph()
n_steps = 28
n_inputs = 28
n_neurons = 150
n_outputs = 10
n_layers = 3
learning_rate = 0.001
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.int32, [None])
lstm_cells = [tf.contrib.rnn.BasicLSTMCell(num_units=n_neurons)
for layer in range(n_layers)]
multi_cell = tf.contrib.rnn.MultiRNNCell(lstm_cells)
outputs, states = tf.nn.dynamic_rnn(multi_cell, X, dtype=tf.float32)
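# each layer's state is an LSTMStateTuple (c, h); states[-1][1] selects the
# short-term (hidden) state h of the top layer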
top_layer_h_state = states[-1][1]
logits = tf.layers.dense(top_layer_h_state, n_outputs, name="softmax")
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
states
n_epochs = 10
batch_size = 150
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
X_batch = X_batch.reshape((batch_size, n_steps, n_inputs))
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test})
print("Epoch", epoch, "Train accuracy =", acc_train, "Test accuracy =", acc_test)
###Output
Epoch 0 Train accuracy = 0.96 Test accuracy = 0.953
Epoch 1 Train accuracy = 0.973333 Test accuracy = 0.9674
Epoch 2 Train accuracy = 0.993333 Test accuracy = 0.9756
Epoch 3 Train accuracy = 0.993333 Test accuracy = 0.982
Epoch 4 Train accuracy = 0.986667 Test accuracy = 0.9815
Epoch 5 Train accuracy = 1.0 Test accuracy = 0.9846
Epoch 6 Train accuracy = 0.986667 Test accuracy = 0.9832
Epoch 7 Train accuracy = 0.98 Test accuracy = 0.9838
Epoch 8 Train accuracy = 1.0 Test accuracy = 0.9866
Epoch 9 Train accuracy = 0.993333 Test accuracy = 0.9839
|
unsupervised-learning/lesson 1/Feature Scaling - Solution.ipynb | ###Markdown
Feature Scaling - Solution
With any distance-based machine learning model (regularized regression methods, neural networks, and now k-means), you will want to scale your data. If you have some features that are on completely different scales, this can greatly impact the clusters you get when using k-means. In this notebook, you will get to see this first hand. To begin, let's read in the necessary libraries.
###Code
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
from sklearn import preprocessing as p
%matplotlib inline
plt.rcParams['figure.figsize'] = (16, 9)
import helpers2 as h
import tests as t
# Create the dataset for the notebook
data = h.simulate_data(200, 2, 4)
df = pd.DataFrame(data)
df.columns = ['height', 'weight']
df['height'] = np.abs(df['height']*100)
df['weight'] = df['weight'] + np.random.normal(50, 10, 200)
###Output
_____no_output_____
###Markdown
`1.` Next, take a look at the data to get familiar with it. The dataset has two columns, and it is stored in the **df** variable. It might be useful to get an idea of the spread in the current data, as well as a visual of the points.
###Code
df.describe()
plt.scatter(df['height'], df['weight']);
###Output
_____no_output_____
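###Markdown
Before scaling, it is worth seeing numerically why the current scales are a problem. The sketch below (illustrative only; it uses the first three rows of **df**) computes pairwise Euclidean distances: because `height` was multiplied by 100, it dominates the distance almost entirely, so k-means would effectively ignore `weight`.
###Code
# Minimal sketch: pairwise Euclidean distances on the raw features.
# The 'height' column (scaled by 100) dominates the distance computation.
sample = df.iloc[:3].values
for i in range(3):
    for j in range(i + 1, 3):
        total = np.linalg.norm(sample[i] - sample[j])
        height_only = np.abs(sample[i][0] - sample[j][0])
        print(i, j, 'distance:', total, 'height contribution:', height_only)
###Output
_____no_output_____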
###Markdown
Now that we have a dataset, let's look at the options for scaling it. There are two very common types of feature scaling that we should discuss:
**I. MinMaxScaler**
In some cases it is useful to think of your data in terms of the percentage of the maximum value. In these cases, you will want to use **MinMaxScaler**.
**II. StandardScaler**
Another very popular type of scaling is to scale data so that it has mean 0 and variance 1. In these cases, you will want to use **StandardScaler**.
It is probably more appropriate with this data to use **StandardScaler**. However, to get practice with feature scaling methods in python, we will perform both.
`2.` First let's fit the **StandardScaler** transformation to this dataset. I will do this one so you can see how to apply preprocessing in sklearn.
###Code
df_ss = p.StandardScaler().fit_transform(df) # Fit and transform the data
df_ss = pd.DataFrame(df_ss) #create a dataframe
df_ss.columns = ['height', 'weight'] #add column names again
plt.scatter(df_ss['height'], df_ss['weight']); # create a plot
###Output
_____no_output_____
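###Markdown
As a quick sanity check (a minimal sketch, not part of the exercise), we can reproduce what **StandardScaler** does by hand: subtract each column's mean and divide by its standard deviation.
###Code
# Manual z-scoring should match StandardScaler (up to the ddof convention;
# sklearn uses the population standard deviation, i.e. ddof=0)
df_manual = (df - df.mean()) / df.std(ddof=0)
print(np.allclose(df_manual.values, df_ss.values))
###Output
_____no_output_____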
###Markdown
`3.` Now it's your turn. Try fitting the **MinMaxScaler** transformation to this dataset. You should be able to use the previous example to assist.
###Code
df_mm = p.MinMaxScaler().fit_transform(df) # fit and transform
df_mm = pd.DataFrame(df_mm) #create a dataframe
df_mm.columns = ['height', 'weight'] #change the column names
plt.scatter(df_mm['height'], df_mm['weight']); #plot the data
###Output
_____no_output_____
###Markdown
`4.` Now let's take a look at how k-means divides the dataset into different groups for each of the different scalings of the data. Did you end up with different clusters when the data was scaled differently?
###Code
def fit_kmeans(data, centers):
'''
INPUT:
data = the dataset you would like to fit kmeans to (dataframe)
centers = the number of centroids (int)
OUTPUT:
labels - the labels for each datapoint to which group it belongs (nparray)
'''
kmeans = KMeans(centers)
labels = kmeans.fit_predict(data)
return labels
labels = fit_kmeans(df, 10) #fit kmeans to get the labels
# Plot the original data with clusters
plt.scatter(df['height'], df['weight'], c=labels, cmap='Set1');
labels = fit_kmeans(df_mm, 10) #fit kmeans to get the labels
#plot each of the scaled datasets
plt.scatter(df_mm['height'], df_mm['weight'], c=labels, cmap='Set1');
labels = fit_kmeans(df_ss, 10)
plt.scatter(df_ss['height'], df_ss['weight'], c=labels, cmap='Set1');
###Output
_____no_output_____ |
workshops/CSCI+6360-Data+Science-Workshops.ipynb | ###Markdown
First, let us discuss what exactly autoencoders are.
###Code
# Autoencoder using H2O
# CSCI 6360 H2O workshop
from IPython.display import Image,display
from IPython.core.display import HTML
import matplotlib.pyplot as plot
from h2o.estimators.deeplearning import H2ODeepLearningEstimator
from h2o.grid.grid_search import H2OGridSearch
#special thanks to wikipedia for the image
#Code available at https://github.com/CodeMaster001/CSCI6360
img = Image(url="images/autoencoder_structure.png")
display(img)
img_1 = Image(url="images/autoencoder_equation.png") #special thanks to wikipedia for the image
display(img_1)
img_1 = Image(url="images/autoencoder_network.png") #special thanks to ufld.stanford.edu for the image
display(img_1)
###Output
_____no_output_____
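###Markdown
To make the pictures above concrete, here is a minimal numpy sketch (illustrative only, with made-up weights) of the autoencoder idea: an encoder maps the input to a lower-dimensional code, a decoder maps it back, and training would minimize the reconstruction error.
###Code
import numpy as np
rng = np.random.RandomState(0)
x = rng.rand(4)                    # a 4-dimensional input
W_enc = rng.randn(2, 4) * 0.1      # encoder weights: 4 -> 2 (the bottleneck)
W_dec = rng.randn(4, 2) * 0.1      # decoder weights: 2 -> 4
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
code = sigmoid(W_enc @ x)          # compressed representation
x_hat = sigmoid(W_dec @ code)      # reconstruction
print('reconstruction error:', np.sum((x - x_hat) ** 2))
###Output
_____no_output_____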
###Markdown
This is a workshop on H2O, a library used extensively in production environments, notably in healthcare and finance. Its official website is https://www.h2o.ai
First, let's create a separate Anaconda environment for H2O and switch to it:
`conda create --name h2o-py python=3.5 h2o h2o-py`
Since I am currently on a Mac, I would like to use a UI, as I am a bit more comfortable with it; reducing complexity is nice!
What is H2O? A library for building machine learning models with ease on huge datasets. It supports mxnet, tensorflow and caffe. It is not an alternative to any of those; it just extends the backend (h2o.ai!!). Keras works in a similar way. We are now going to import H2O inside Python.
Advantages of H2O:
1. A notable variation in the stochastic gradient descent implementation: H2O's SGD algorithm is executed in parallel across all cores, the training set is distributed across all nodes, and at the end an average is taken of all the values. For more details see page 16 of http://docs.h2o.ai/h2o/latest-stable/h2o-docs/booklets/DeepLearningBooklet.pdf
Let's start H2O programming.
###Code
import h2o
h2o.init() #initialize h2o cluster
#Once h2o is initialized it actually automatically sets up the spark cluster if spark is configured as a backend,
#applies same for mxnet and tensorflow
h2o.init(ip="localhost", port=54323)
###Output
Checking whether there is an H2O instance running at http://localhost:54321. connected.
Warning: Your H2O cluster version is too old (5 months and 4 days)! Please download and install the latest version from http://h2o.ai/download/
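###Markdown
The parallel SGD scheme mentioned above can be pictured with a tiny sketch (illustrative only, with hypothetical weights): each node trains on its own shard of the data, and the resulting weights are averaged.
###Code
# Minimal sketch of model averaging across nodes (hypothetical weights)
import numpy as np
node_weights = [np.array([0.9, -0.2]),   # weights learned on node 1's shard
                np.array([1.1, -0.4]),   # node 2
                np.array([1.0, -0.3])]   # node 3
averaged = np.mean(node_weights, axis=0)
print('averaged weights:', averaged)
###Output
_____no_output_____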
###Markdown
Once H2O is initialized, it automatically sets up the Spark cluster if Spark is configured as a backend; the same applies for the mxnet, tensorflow and theano backends. We will discuss shortly how to use Spark. Please use Sparkling Water if you want to use H2O with Spark. Now let's look at our cluster status info.
###Code
h2o.cluster().show_status()
###Output
_____no_output_____
###Markdown
Let's import a file and see if it's added to the cluster.
###Code
h2o.ls() #list files
# Now let's import a file into the H2O cluster
h2o.import_file("LICENSE")
h2o.ls()  # check that the LICENSE file is now present in the cluster
h2o.remove("LICENSE")  # remove the LICENSE file from the cluster
h2o.ls()
help(h2o.import_file)
###Output
Help on function import_file in module h2o.h2o:
import_file(path=None, destination_frame=None, parse=True, header=0, sep=None, col_names=None, col_types=None, na_strings=None, pattern=None)
Import a dataset that is already on the cluster.
The path to the data must be a valid path for each node in the H2O cluster. If some node in the H2O cluster
cannot see the file, then an exception will be thrown by the H2O cluster. Does a parallel/distributed
multi-threaded pull of the data. The main difference between this method and :func:`upload_file` is that
the latter works with local files, whereas this method imports remote files (i.e. files local to the server).
If you running H2O server on your own maching, then both methods behave the same.
:param path: path(s) specifying the location of the data to import or a path to a directory of files to import
:param destination_frame: The unique hex key assigned to the imported file. If none is given, a key will be
automatically generated.
:param parse: If True, the file should be parsed after import.
:param header: -1 means the first line is data, 0 means guess, 1 means first line is header.
:param sep: The field separator character. Values on each line of the file are separated by
this character. If not provided, the parser will automatically detect the separator.
:param col_names: A list of column names for the file.
:param col_types: A list of types or a dictionary of column names to types to specify whether columns
should be forced to a certain type upon import parsing. If a list, the types for elements that are
one will be guessed. The possible types a column may have are:
- "unknown" - this will force the column to be parsed as all NA
- "uuid" - the values in the column must be true UUID or will be parsed as NA
- "string" - force the column to be parsed as a string
- "numeric" - force the column to be parsed as numeric. H2O will handle the compression of the numeric
data in the optimal manner.
- "enum" - force the column to be parsed as a categorical column.
- "time" - force the column to be parsed as a time column. H2O will attempt to parse the following
list of date time formats: (date) "yyyy-MM-dd", "yyyy MM dd", "dd-MMM-yy", "dd MMM yy", (time)
"HH:mm:ss", "HH:mm:ss:SSS", "HH:mm:ss:SSSnnnnnn", "HH.mm.ss" "HH.mm.ss.SSS", "HH.mm.ss.SSSnnnnnn".
Times can also contain "AM" or "PM".
:param na_strings: A list of strings, or a list of lists of strings (one list per column), or a dictionary
of column names to strings which are to be interpreted as missing values.
:param pattern: Character string containing a regular expression to match file(s) in the folder if `path` is a
directory.
:returns: a new :class:`H2OFrame` instance.
:examples:
>>> # Single file import
>>> iris = import_file("h2o-3/smalldata/iris.csv")
>>> # Return all files in the folder iris/ matching the regex r"iris_.*\.csv"
>>> iris_pattern = h2o.import_file(path = "h2o-3/smalldata/iris",
... pattern = "iris_.*\.csv")
###Markdown
Let's load the ECG training dataset.
###Code
train = h2o.import_file("data/ecg_discord_train.csv")
train.summary()
###Output
Parse progress: |█████████████████████████████████████████████████████████| 100%
###Markdown
Let's load the ECG test dataset, then train a deep autoencoder on the training set and validate it on the test set.
###Code
test = h2o.import_file("data/ecg_discord_test.csv")
model = H2ODeepLearningEstimator(activation="RectifierWithDropout",
hidden=[32,32,32],
autoencoder=True,input_dropout_ratio=0.2,sparse=True,l1=1e-5,epochs=10)
model.train(x=train.names,training_frame=train,validation_frame=test)
model.predict(test)
model = H2ODeepLearningEstimator(activation="RectifierWithDropout",
hidden=[32,32,32],
autoencoder=False,input_dropout_ratio=0.2,sparse=True,l1=1e-5,epochs=10)
model.train(x=train.names[:-1],y=train.names[-1],training_frame=train,validation_frame=test)
print(train.names[-1])
model_path = h2o.save_model(model = model,force = True)
print(model_path)
saved_model = h2o.load_model(model_path)
print(saved_model)
hyper_parameters = {'input_dropout_ratio':[0.1,0.2,0.5,0.7]}
h2o_gridSearch = H2OGridSearch(H2ODeepLearningEstimator(activation="RectifierWithDropout",
hidden=[50,40,30,20,10,5],
autoencoder=True,sparse=True,l1=1e-5,epochs=10),hyper_parameters)
h2o_gridSearch.train(x=train.names,training_frame=train,validation_frame=test)
print(h2o_gridSearch.get_grid(sort_by="mse"))
###Output
input_dropout_ratio \
0 0.2
1 0.5
2 0.7
3 0.1
model_ids \
0 Grid_DeepLearning_ecg_discord_train4.hex_model_python_150893959697...
1 Grid_DeepLearning_ecg_discord_train4.hex_model_python_150893959697...
2 Grid_DeepLearning_ecg_discord_train4.hex_model_python_150893959697...
3 Grid_DeepLearning_ecg_discord_train4.hex_model_python_150893959697...
mse
0 1.1177247737987623
1 1.1227460791988992
2 1.1277023677794682
3 1.1294747939986314
###Markdown
Now, which configuration is going to win: the one with more epochs, the one with lower dropout, or something else?
###Code
hyper_parameters = {'input_dropout_ratio':[0.1,0.2,0.5,0.7],'epochs':[10,20,30,40]}
h2o_gridSearch = H2OGridSearch(H2ODeepLearningEstimator(activation="RectifierWithDropout",
hidden=[32,32,32],
autoencoder=True,sparse=True,l1=1e-5,epochs=10),hyper_parameters)
h2o_gridSearch.train(x=train.names,training_frame=train,validation_frame=test)
print(h2o_gridSearch.get_grid(sort_by="mse"))
###Output
deeplearning Grid Build progress: |███████████████████████████████████████| 100%
epochs input_dropout_ratio \
0 10.0 0.7
1 20.0 0.1
2 30.0 0.7
3 30.0 0.1
4 30.0 0.5
5 10.0 0.2
6 40.0 0.1
7 20.0 0.7
8 40.0 0.2
9 10.0 0.1
10 40.0 0.5
11 40.0 0.7
12 30.0 0.2
13 10.0 0.5
14 20.0 0.2
15 20.0 0.5
model_ids \
0 Grid_DeepLearning_ecg_discord_train4.hex_model_python_150893959697...
1 Grid_DeepLearning_ecg_discord_train4.hex_model_python_150893959697...
2 Grid_DeepLearning_ecg_discord_train4.hex_model_python_150893959697...
3 Grid_DeepLearning_ecg_discord_train4.hex_model_python_150893959697...
4 Grid_DeepLearning_ecg_discord_train4.hex_model_python_150893959697...
5 Grid_DeepLearning_ecg_discord_train4.hex_model_python_150893959697...
6 Grid_DeepLearning_ecg_discord_train4.hex_model_python_150893959697...
7 Grid_DeepLearning_ecg_discord_train4.hex_model_python_150893959697...
8 Grid_DeepLearning_ecg_discord_train4.hex_model_python_150893959697...
9 Grid_DeepLearning_ecg_discord_train4.hex_model_python_150893959697...
10 Grid_DeepLearning_ecg_discord_train4.hex_model_python_150893959697...
11 Grid_DeepLearning_ecg_discord_train4.hex_model_python_150893959697...
12 Grid_DeepLearning_ecg_discord_train4.hex_model_python_150893959697...
13 Grid_DeepLearning_ecg_discord_train4.hex_model_python_150893959697...
14 Grid_DeepLearning_ecg_discord_train4.hex_model_python_150893959697...
15 Grid_DeepLearning_ecg_discord_train4.hex_model_python_150893959697...
mse
0 1.1535938547394573
1 1.1562329316103637
2 1.1566448588666485
3 1.159861698950193
4 1.1644009813475424
5 1.1645614739950838
6 1.1655196697103989
7 1.1656905475634471
8 1.1717198498945969
9 1.1720297722474604
10 1.1754004353240988
11 1.1760561973726498
12 1.1779863170373361
13 1.1793471264297541
14 1.192625404833221
15 1.1967470032123524
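###Markdown
Once the grid has been sorted, the best model can be pulled out of it. This is a sketch assuming the standard H2OGridSearch API, where `models` lists the trained models in the sorted order.
###Code
sorted_grid = h2o_gridSearch.get_grid(sort_by="mse")
best_model = sorted_grid.models[0]  # lowest MSE after ascending sort
print(best_model.model_id)
###Output
_____no_output_____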
|
cs231n/assignment/assignment1/softmax.ipynb | ###Markdown
Softmax exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*This exercise is analogous to the SVM exercise. You will:- implement a fully-vectorized **loss function** for the Softmax classifier- implement the fully-vectorized expression for its **analytic gradient**- **check your implementation** with numerical gradient- use a validation set to **tune the learning rate and regularization** strength- **optimize** the loss function with **SGD**- **visualize** the final learned weights
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000, num_dev=500):
"""
Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the linear classifier. These are the same steps as we used for the
SVM, but condensed to a single function.
"""
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
mask = np.random.choice(num_training, num_dev, replace=False)
X_dev = X_train[mask]
y_dev = y_train[mask]
# Preprocessing: reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_val = np.reshape(X_val, (X_val.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
X_dev = np.reshape(X_dev, (X_dev.shape[0], -1))
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis = 0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
X_dev -= mean_image
# add bias dimension and transform into columns
X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])
return X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev = get_CIFAR10_data()
print 'Train data shape: ', X_train.shape
print 'Train labels shape: ', y_train.shape
print 'Validation data shape: ', X_val.shape
print 'Validation labels shape: ', y_val.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
print 'dev data shape: ', X_dev.shape
print 'dev labels shape: ', y_dev.shape
###Output
Train data shape: (49000, 3073)
Train labels shape: (49000,)
Validation data shape: (1000, 3073)
Validation labels shape: (1000,)
Test data shape: (1000, 3073)
Test labels shape: (1000,)
dev data shape: (500, 3073)
dev labels shape: (500,)
###Markdown
Softmax ClassifierYour code for this section will all be written inside **cs231n/classifiers/softmax.py**.
###Code
# First implement the naive softmax loss function with nested loops.
# Open the file cs231n/classifiers/softmax.py and implement the
# softmax_loss_naive function.
from cs231n.classifiers.softmax import softmax_loss_naive
import time
# Generate a random softmax weight matrix and use it to compute the loss.
W = np.random.randn(3073, 10) * 0.0001
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)
# As a rough sanity check, our loss should be something close to -log(0.1).
print 'loss: %f' % loss
print 'sanity check: %f' % (-np.log(0.1))
###Output
loss: 2.397488
sanity check: 2.302585
###Markdown
Inline Question 1: Why do we expect our loss to be close to -log(0.1)? Explain briefly.
**Your answer:** When W is small, the scores WX are essentially 0, so the exponentiated score of every class is 1. In other words, every class receives the same score for each input; after normalization, each class has probability 1/N, where N is the total number of classes. The loss of each input sample is therefore -log(1/N), and averaging over samples still gives -log(1/N). With N = 10 classes this is -log(0.1).
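###Markdown
A quick numeric check of the answer above (a sketch, not part of the assignment): with all-zero scores, softmax assigns each of the 10 classes probability 0.1, so the cross-entropy loss equals -log(0.1).
###Code
scores = np.zeros(10)
probs = np.exp(scores) / np.sum(np.exp(scores))  # uniform: 0.1 per class
print 'uniform probability: %f' % probs[0]
print 'loss for any true class: %f' % -np.log(probs[0])
###Output
_____no_output_____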
###Code
# Complete the implementation of softmax_loss_naive and implement a (naive)
# version of the gradient that uses nested loops.
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)
# As we did for the SVM, use numeric gradient checking as a debugging tool.
# The numeric gradient should be close to the analytic gradient.
from cs231n.gradient_check import grad_check_sparse
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)
# similar to SVM case, do another gradient check with regularization
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 1e2)
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 1e2)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)
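# For reference, a sketch of what grad_check_sparse does internally: it uses
# the centered-difference approximation at a sampled entry (i, j),
#     dL/dW[i, j] ~ (L(W + h*E_ij) - L(W - h*E_ij)) / (2 * h)
# where E_ij is 1 at position (i, j) and 0 elsewhere, and h is a small step
# (e.g. 1e-5), then compares it against the analytic gradient.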
# Now that we have a naive implementation of the softmax loss function and its gradient,
# implement a vectorized version in softmax_loss_vectorized.
# The two versions should compute the same results, but the vectorized version should be
# much faster.
tic = time.time()
loss_naive, grad_naive = softmax_loss_naive(W, X_dev, y_dev, 0.00001)
toc = time.time()
print 'naive loss: %e computed in %fs' % (loss_naive, toc - tic)
from cs231n.classifiers.softmax import softmax_loss_vectorized
tic = time.time()
loss_vectorized, grad_vectorized = softmax_loss_vectorized(W, X_dev, y_dev, 0.00001)
toc = time.time()
print 'vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic)
# As we did for the SVM, we use the Frobenius norm to compare the two versions
# of the gradient.
grad_difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')
print 'Loss difference: %f' % np.abs(loss_naive - loss_vectorized)
print 'Gradient difference: %f' % grad_difference
# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of over 0.35 on the validation set.
from cs231n.classifiers import Softmax
results = {}
best_val = -1
best_softmax = None
#learning_rates = [1e-7, 5e-7]
learning_rates = [1e-7]
regularization_strengths = [4e3,5e3,6e3,7e3]
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained softmax classifer in best_softmax. #
################################################################################
#pass
for lr in learning_rates:
for reg in regularization_strengths:
softmax = Softmax()
softmax.train(X_train, y_train, learning_rate=lr, reg=reg, num_iters=3000, verbose=False)
y_predicted_train = softmax.predict(X_train)
y_predicted_val = softmax.predict(X_val)
train_accuracy = np.mean(y_predicted_train == y_train)
val_accuracy = np.mean(y_predicted_val == y_val)
if val_accuracy > best_val:
best_val = val_accuracy
best_softmax = softmax
results[(lr,reg)] = (train_accuracy, val_accuracy)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print 'lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy)
print 'best validation accuracy achieved during cross-validation: %f' % best_val
# evaluate on test set
# Evaluate the best softmax on test set
y_test_pred = best_softmax.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print 'softmax on raw pixels final test set accuracy: %f' % (test_accuracy, )
# Visualize the learned weights for each class
w = best_softmax.W[:-1,:] # strip out the bias
w = w.reshape(32, 32, 3, 10)
w_min, w_max = np.min(w), np.max(w)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in xrange(10):
plt.subplot(2, 5, i + 1)
# Rescale the weights to be between 0 and 255
wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)
plt.imshow(wimg.astype('uint8'))
plt.axis('off')
plt.title(classes[i])
###Output
_____no_output_____ |