path | concatenated_notebook
---|---|
Covid/materialize.ipynb | ###Markdown
Materializing the Covid dataset* **Author:** Anders Munk-Nielsen * **Output:** `covid.csv`: Each row is a `country` on a `date`. * **Source data:** * [OWID Covid dataset](https://ourworldindata.org/coronavirus-source-data): Covid dataset with deaths, tests, cases, and time-constant country information. * [Apple mobility data](https://covid19.apple.com/mobility): Daily data on mobility from Apple's devices, the variables `mobility_driving`, `mobility_transit`, `mobility_walking`. * [Climate data, from the US' NOAA](ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/daily/): Daily data from weather stations across the world. The raw data files are [`2020.csv.gz`](ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/daily/by_year/2020.csv.gz), [`2021.csv.gz`](ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/daily/by_year/2021.csv.gz). * Additionally, a list of NOAA country classifications (like ISO2) [`2020_countries.txt`](https://www1.ncdc.noaa.gov/pub/data/ghcn/daily/ghcnd-countries.txt) * [Google mobility data](https://www.google.com/covid19/mobility/): the variables `location_retail_and_recreation`,`location_grocery_and_pharmacy`,`location_parks`,`location_transit_stations`,`location_workplaces`,`location_residential` Setup
###Code
import pandas as pd
import numpy as np
import os
assert os.path.isdir('./Raw'), f'There must be a subfolder "Raw" (with the raw data) in this directory.'
files = os.listdir('./Raw')
print(f'Files located in ./Raw: {files}')
for f in ['Global_Mobility_Report.csv', 'country_iso2_iso3.txt', 'owid-covid-data.csv']:
assert f in files, f'File "{f}" not found in ./Raw: Please download it first.'
###Output
Files located in ./Raw: ['country_iso2_iso3.txt', '2020_countries.txt', '2020.csv', '2021.csv', '2020.csv.gz', 'owid-covid-data.csv', 'Global_Mobility_Report.csv', 'applemobilitytrends-2021-11-15.csv', '2021.csv.gz']
###Markdown
The Apple mobility data file name changes depending on the date on which it was downloaded.
###Code
tmp = [f for f in files if f.startswith('applemobilitytrends-') and f.endswith('.csv')]
assert len(tmp) == 1, f'There must be precisely one file starting with "applemobilitytrends-" and ending with ".csv"'
AppleFile = f'Raw/{tmp[0]}'
print(f'Found apple file: {AppleFile}')
###Output
Found apple file: Raw/applemobilitytrends-2021-11-15.csv
###Markdown
Unzip weather data. The raw files come in .gz format. The unzipped versions are extremely large, so we will delete them afterwards.
###Code
weather_files = [f for f in files if f.endswith('.csv.gz')]
years = [int(w[:4]) for w in weather_files]
print(f'Found weather data for the years: {years}')
# unzip the climate data files
import gzip
import shutil
for y in years:
with gzip.open(f'Raw/{y}.csv.gz', 'rb') as f_in:
with open(f'Raw/{y}.csv', 'wb') as f_out:
shutil.copyfileobj(f_in, f_out)
###Output
_____no_output_____
###Markdown
Read in **COVID data**
###Code
C = pd.read_csv('Raw/owid-covid-data.csv').rename(columns={'location':'country'})
C.date = pd.to_datetime(C.date)
# Drop a few observations
Ikeep = ((C.iso_code.isin(['OWID_KOS', 'OWID_WRL'])) | (C.iso_code.isnull())) == False
C = C.loc[Ikeep, :]
# list of all countries we have Covid data for: ISO3 code and name
countries_C = C[['iso_code', 'country']].drop_duplicates()
###Output
_____no_output_____
###Markdown
**Apple Mobility data**
###Code
A = pd.read_csv(AppleFile, low_memory=False) # low-memory option required since column 3 has mixed types... apparently
###Output
_____no_output_____
###Markdown
**Temperature data** (from NOAA). This data unfortunately stores country names in ISO2 format, so we will have to merge between ISO2 and ISO3. The dataset contains many additional weather-related variables which we will not use.
###Code
TT = []
for y in years:
# each file takes approx 45sec to read in to pandas
    fname = f'Raw/{y}.csv' # climate data for this year (unzipped earlier)
    # the flag_* columns contain detailed information about each sensor reading
    # (for instance, measurement, quality and source flags; some of them indicate errors)
T = pd.read_csv(fname, names=['station_id', 'date', 'statistic', 'value', 'flag_measurement', 'flag_quality','flag_source', 'flag_value'])
T.date = pd.to_datetime(T.date, format='%Y%m%d')
    # ISO2 code of the country in NOAA format
T['country_iso2'] = T.station_id.str[:2]
TT.append(T)
T = pd.concat(TT, axis=0)
T.head()
del TT # cleanup (each dataframe is quite large)
###Output
_____no_output_____
###Markdown
Only keep temperature information and convert to celsius
###Code
T = T[T.statistic == 'TAVG'].copy()
T.value = T.value / 10.0 # raw values are in tenths of degrees Celsius(!)
###Output
_____no_output_____
###Markdown
ISO2-to-name list: Fortunately, the NOAA provides a list of their ISO2 country name abbreviations. (***Note:*** Only reading in the list from 2020, assuming that no country names have changed...)
###Code
Tc = pd.read_csv('Raw/2020_countries.txt', names=['country_iso2','country']).set_index('country_iso2').sort_index()
Tc.country = Tc.country.str.strip()
###Output
_____no_output_____
###Markdown
Google data
###Code
goog = pd.read_csv('Raw/Global_Mobility_Report.csv', low_memory = False)
goog = goog[(goog.sub_region_1.isnull()) & (goog.metro_area.isnull())] # only entire countries
vv = [f'sub_region_{i}' for i in [1,2]]
for v in vv + ['metro_area', 'iso_3166_2_code', 'census_fips_code']:
assert goog[v].isnull().all() #
del goog[v]
goog.date = pd.to_datetime(goog.date)
###Output
_____no_output_____
###Markdown
Construct a key between country names in the COVID and temperature datasets
###Code
countries_C = pd.merge(countries_C, Tc.reset_index(), on='country', how='left')
I = countries_C.country_iso2.isnull()
print(f'There are {I.sum()} (of {I.shape[0]}) where we cannot find them (either the name is spelled differently or there is no station)')
###Output
There are 70 (of 235) where we cannot find them (either the name is spelled differently or there is no station)
###Markdown
Compute daily average temperatures across weather stations. The NOAA dataset contains many weather monitoring stations and reports data on a number of different statistics. The unit of observation is a `station_id`-`date`-`statistic` triplet. We are only interested in one statistic, namely `TAVG` (the daily average temperature). One might additionally consider dropping observations where `flag_quality` indicates a problem with the measurement (a sketch of this is included below). Compute the average of `TAVG` within a (country, date)-pair. NOTE: Temperatures are measured in *tenths of degrees Celsius*.
###Code
I = (T.statistic == 'TAVG')
meanT = T.loc[I, ['date', 'country_iso2', 'value']].groupby(['date', 'country_iso2']).value.mean().reset_index().rename(columns={'value':'temperature'})
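# Optional sketch (as noted above): one might also drop readings whose quality flag signals a
# problem before averaging. In the GHCN-Daily by_year files the quality flag is blank when no
# check failed, so blank/NaN is treated here as "no issue"; meanT_checked is not used further.
good = I & T.flag_quality.isnull()
meanT_checked = (T.loc[good, ['date', 'country_iso2', 'value']]
                 .groupby(['date', 'country_iso2']).value.mean()
                 .reset_index().rename(columns={'value': 'temperature'}))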
###Output
_____no_output_____
###Markdown
Merge on the country names from the Covid data
###Code
CandT = pd.merge(countries_C, meanT, on='country_iso2')
###Output
_____no_output_____
###Markdown
Example of temperatures
###Code
ax = CandT[CandT.country.isin(["Denmark", "Italy", "United Kingdom", "South Africa"])].groupby([pd.Grouper(key='date', freq='d'), 'country']).temperature.mean().unstack().plot();
###Output
_____no_output_____
###Markdown
Apple data The Apple mobility data contains indices for mobility, which are based on the relative search volume for trips in the Apple Maps platform (relative to January 13, 2020). It is a panel over location and date, where locations can be either entire countries, sub-regions, or cities. We will focus on entire countries and delete all the other information.
###Code
# only retain country-wide data
A = A[A.geo_type == 'country/region'].copy()
for v in ['geo_type', 'alternative_name', 'sub-region', 'country']:
del A[v]
###Output
_____no_output_____
###Markdown
Convert from wide to long. The dataset has dates in the columns rather than rows, i.e. it is in *wide* format.
###Code
# dictionaries to rename columns
# assumes that all columns from 2 and onwards are the date variables
# we have to insert a "v" in front of the variable name to force pandas
# to think of the variable as a string.
ren = {v:f'v{i}' for i,v in enumerate(A.columns[2:])}
ren_back = {i:v for i,v in enumerate(A.columns[2:])} # this will get us back
R = pd.wide_to_long(A.rename(columns=ren), stubnames='v', i=['region', 'transportation_type'], j='date')
R = R.reset_index(level='date') # make date a variable and not a part of the index
R.date = R.date.map(ren_back) # convert from {0,1,2,...} to the corresponding dates
R.date = pd.to_datetime(R.date) # convert from Object to Datetime
R = R.reset_index().rename(columns={'v':'mobility', 'region':'country'})
# long to wide: get one row per country, and different transportation types as variables
R = R.set_index(['country', 'date', 'transportation_type']).unstack()
# collapse multilevel column index
R.columns = [' '.join(col).strip() for col in R.columns.values]
R.sort_index(inplace=True)
# spaces in names are problematic: make underscores
R.columns = [x.replace(' ', '_') for x in R.columns.values]
# look at the beautiful frame
R
countries_C
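# Alternative sketch: the same wide-to-long reshape can be done with pd.melt, which avoids the
# v-prefix rename trick needed by pd.wide_to_long. R_alt should match R up to row ordering and
# is not used further below.
R_alt = (A.melt(id_vars=['region', 'transportation_type'], var_name='date', value_name='mobility')
         .assign(date=lambda d: pd.to_datetime(d['date']))
         .rename(columns={'region': 'country'})
         .set_index(['country', 'date', 'transportation_type'])['mobility']
         .unstack())
R_alt.columns = ['mobility_' + c for c in R_alt.columns]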
###Output
_____no_output_____
###Markdown
Merge Datasets temperatures
###Code
C2 = pd.merge(C, CandT[['iso_code', 'temperature', 'date']], on=['iso_code', 'date'], how='left')
###Output
_____no_output_____
###Markdown
mobility
###Code
C2 = C2.set_index(['country', 'date']).sort_index()
C2 = C2.join(R)
###Output
_____no_output_____
###Markdown
Print some details about the failed merges between the Apple and Covid datasets.
###Code
cc = [c for c in R.index.get_level_values('country').unique() if c not in countries_C.country.values]
print(f'No Covid data for these countries where we have Apple data: {cc}')
cc = [c for c in countries_C.country.values if c not in R.index.get_level_values('country').unique()]
print(f'There are {len(cc)} countries where we have Covid data but no Apple mobility data (e.g. {np.random.choice(cc, 3)})')
print('--- missing values in final dataset of apple mobility values ---')
C2[['mobility_driving', 'mobility_walking', 'mobility_transit']].isnull().mean()
###Output
--- missing values in final dataset of apple mobility values ---
###Markdown
Google data. The country region code does not conform to the NOAA codes, so it is not currently in use.
###Code
del goog['country_region_code']
###Output
_____no_output_____
###Markdown
The variable names are so long that they sometimes cause problems, so we shorten them a bit.
###Code
goog.rename(columns={"country_region":"country",
'retail_and_recreation_percent_change_from_baseline':'location_retail_and_recreation',
'grocery_and_pharmacy_percent_change_from_baseline':'location_grocery_and_pharmacy', 'parks_percent_change_from_baseline':'location_parks',
'transit_stations_percent_change_from_baseline':'location_transit_stations','workplaces_percent_change_from_baseline':'location_workplaces',
'residential_percent_change_from_baseline':'location_residential'}, inplace=True)
###Output
_____no_output_____
###Markdown
Find the rows in the Google data where we have an exact match on the country name.
###Code
I = goog.country.isin(C2.index.get_level_values('country'))
print(f'We match {100.*I.mean():5.2f}% of rows')
###Output
We match 96.35% of rows
###Markdown
Merge onto the remaining dataset
###Code
C2 = C2.join(goog[I].set_index(['country', 'date']))
###Output
_____no_output_____
###Markdown
Output
###Code
C2.to_csv('covid.csv')
# delete unzipped climate data files
for y in years:
os.remove(f'Raw/{y}.csv')
###Output
_____no_output_____ |
telem-NN-one-hot-encoding-breast-data.ipynb | ###Markdown
Example of one-hot-encoding from https://machinelearningmastery.com/one-hot-encoding-for-categorical-data/
###Code
from numpy import mean
from numpy import std
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
from sklearn import metrics
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import pickle
import logging
logging.basicConfig(level=logging.INFO)
lg = logging.getLogger('notebook')  # the 'lg' logger used below; assumed to have been configured in a shared setup cell originally
# define the location of the dataset
url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/breast-cancer.csv"
# load the dataset
dataset = read_csv(url, header=None)
# retrieve the array of data
data = dataset.values
# deploy any clean and subset methods
lg.info(f'cadprep run')
# separate into input and output columns
X = data[:, :-1].astype(str)
y = data[:, -1].astype(str)
# split the dataset into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
lg.info(f'raw training data: {type(X_train).__name__} {X_train.shape}')
X_train
y_train
# one-hot encode input variables
onehot_encoder = OneHotEncoder(sparse=False)
onehot_encoder.fit(X_train)
X_train_enc = onehot_encoder.transform(X_train)
X_test_enc = onehot_encoder.transform(X_test)
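# Note: OneHotEncoder defaults to handle_unknown='error', so a category value appearing only in
# the test split would raise here; OneHotEncoder(handle_unknown='ignore') is a common safeguard.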
lg.info(f'onehot encoding')
lg.info(f'encoded training data: {type(X_train_enc).__name__} {X_train_enc.shape}')
X_train_enc
# ordinal encode target variable
label_encoder = LabelEncoder()
label_encoder.fit(y_train)
y_train_enc = label_encoder.transform(y_train)
y_test_enc = label_encoder.transform(y_test)
y_train_enc
# define the model
model = keras.Sequential()
model.add(layers.Dense(10, input_dim=X_train_enc.shape[1], activation='relu', kernel_initializer='he_normal'))
model.add(layers.Dense(1, activation='sigmoid'))
# compile the keras model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit the keras model on the dataset
model.fit(X_train_enc, y_train_enc, validation_split=0.2, epochs=100, batch_size=16, verbose=2)  # hold out 20% of the training data so validation metrics are tracked for the plots below
model.summary()
# evaluate the keras model
_, accuracy = model.evaluate(X_test_enc, y_test_enc, verbose=0)
print('Accuracy: %.2f' % (accuracy*100))
# define the model
# model = LogisticRegression()
# # fit on the training set
# model.fit(X_train, y_train)
# predict on test set
lg.info(f'{model.name} run')
y_hat = model.predict(X_test_enc)
print(y_hat[:5])
print(y_test_enc[:5])
# yhat = model.predict(X_test_enc)
# # print(metrics.classification_report(y_test_enc, yhat))
y_test_enc
# yhat
# # evaluate predictions
# accuracy = metrics.accuracy_score(y_test_enc, yhat)
lg.info(f'accuracy: {accuracy*100:.2f}')
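# The plotting helpers called below are not defined in this notebook (they presumably lived in a
# shared utility cell); minimal stand-in sketches with the same call signatures:
import matplotlib.pyplot as plt
def plot_loss(loss, val_loss):
    plt.figure()
    plt.plot(loss, label='train loss')
    plt.plot(val_loss, label='validation loss')
    plt.xlabel('epoch')
    plt.ylabel('loss')
    plt.legend()
    plt.show()
def plot_accuracy(acc, val_acc):
    plt.figure()
    plt.plot(acc, label='train accuracy')
    plt.plot(val_acc, label='validation accuracy')
    plt.xlabel('epoch')
    plt.ylabel('accuracy')
    plt.legend()
    plt.show()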
plot_loss(model.history.history['loss'], model.history.history['val_loss'])
plot_accuracy(model.history.history['accuracy'], model.history.history['val_accuracy'])
# conf_mat = metrics.confusion_matrix(y_test, yhat)
# (tn, fp, fn, tp) = conf_mat.ravel()
# print(' | pred n', '| pred p')
# print('-------------------------')
# print('cond n | tn', tn, ' | fp', fp)
# print('cond p | fn', fn, ' | tp', tp)
# precision = tp/(tp+fp) # PPV
# recall = tp/(tp+fn) # sensitivity
# lg.info(f' precision: {precision:.2f}')
# lg.info(f' recall: {recall:.2f}')
# save the model to disk
nb_fname = 'telem-NN-one-hot-encoding-breast-data'  # notebook name, assumed here; originally set in a setup cell
pfilename = f'{nb_fname}.sav'
pickle.dump(model, open(pfilename, 'wb'))
print(pfilename)
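# Keras models are not always reliably picklable; the framework-native save is a safer
# alternative (sketch; the output path is illustrative).
model.save(f'{nb_fname}_keras_model')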
# # some time later...
# # load the model from disk
# loaded_model = pickle.load(open(pfilename, 'rb'))
# result = loaded_model.score(X_test, y_test)
# print(f'{result*100:.2f}')
###Output
_____no_output_____ |
Regression/Linear Models/PoissonRegressor_RobustScaler_PolynomialFeatures.ipynb | ###Markdown
PoissonRegressor with RobustScaler & Polynomial Features This code template is for regression analysis using a Poisson Regressor, with RobustScaler as the rescaling technique and PolynomialFeatures as the feature transformation technique. Required Packages
###Code
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as se
from sklearn.linear_model import PoissonRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler, PolynomialFeatures
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Initialization. Filepath of the CSV file
###Code
#filepath
file_path= ''
###Output
_____no_output_____
###Markdown
List of features which are required for model training.
###Code
#x_values
features = []
###Output
_____no_output_____
###Markdown
Target feature for prediction.
###Code
#y_value
target=''
###Output
_____no_output_____
###Markdown
Data Fetching. Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools. We will use the pandas library to read the CSV file from its storage path, and use the head function to display the initial rows.
###Code
df=pd.read_csv(file_path)
df.head()
###Output
_____no_output_____
###Markdown
Feature Selection. This is the process of reducing the number of input variables when developing a predictive model, used both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model. We will assign all the required input features to X and the target/outcome to Y.
###Code
X=df[features]
Y=df[target]
###Output
_____no_output_____
###Markdown
Data Preprocessing. Since the majority of the machine learning models in the scikit-learn library don't handle string categorical data or null values, we have to explicitly remove or replace them. The snippet below has functions which fill in null values, if any exist, and convert string categorical columns into dummy/indicator variables.
###Code
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
###Output
_____no_output_____
###Markdown
Calling preprocessing functions on the feature and target set.
###Code
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
###Output
_____no_output_____
###Markdown
Correlation Map. In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
###Code
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
###Output
_____no_output_____
###Markdown
Data Splitting. The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
###Code
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
###Output
_____no_output_____
###Markdown
Model. Poisson regression is a generalized linear model form of regression used to model count data and contingency tables. It assumes the response variable or target variable Y has a Poisson distribution, and assumes the logarithm of its expected value can be modeled by a linear combination of unknown parameters. It is sometimes known as a log-linear model, especially when used to model contingency tables. Model Tuning Parameters: > **alpha** -> Constant that multiplies the penalty term and thus determines the regularization strength. alpha = 0 is equivalent to unpenalized GLMs. > **tol** -> Stopping criterion. > **max_iter** -> The maximal number of iterations for the solver. Feature Transformation: Generate polynomial and interaction features, i.e. a new feature matrix consisting of all polynomial combinations of the features with degree less than or equal to the specified degree. For example, if an input sample is two dimensional and of the form [a, b], the degree-2 polynomial features are [1, a, b, a^2, ab, b^2]. Read more at [scikit-learn.org](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PolynomialFeatures.html) Data Scaling: Uses sklearn.preprocessing.RobustScaler, which scales features using statistics that are robust to outliers. This scaler removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range). The IQR is the range between the 1st quartile (25th quantile) and the 3rd quartile (75th quantile). Read more at [scikit-learn.org](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.RobustScaler.html)
###Code
model=make_pipeline(RobustScaler(),PolynomialFeatures(),PoissonRegressor())
model.fit(x_train,y_train)
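# Quick illustration (sketch) of the PolynomialFeatures step inside the pipeline: a degree-2
# expansion of one sample [a, b] yields [1, a, b, a^2, a*b, b^2].
print(PolynomialFeatures(degree=2).fit_transform([[2, 3]]))  # [[1. 2. 3. 4. 6. 9.]]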
###Output
_____no_output_____
###Markdown
Model Accuracy. We will use the trained model to make predictions on the test set, then use the predicted values to measure the accuracy of our model. > **score**: The **score** function returns the coefficient of determination R2 of the prediction.
###Code
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
###Output
Accuracy score 86.73 %
###Markdown
> **r2_score**: The **r2_score** function computes the proportion of the variability in the target that is explained by our model. > **mae**: The **mean absolute error** function calculates the total error (the average absolute distance between the real data and the predicted data) of our model. > **mse**: The **mean squared error** function averages the squared errors (penalizing the model for large errors).
###Code
y_pred=model.predict(x_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
###Output
R2 Score: 88.64 %
Mean Absolute Error 2717.21
Mean Squared Error 17362026.54
###Markdown
Prediction Plot. First, we plot the actual observations for the first 20 test records (record number on the x-axis, target value on the y-axis). We then overlay the model's predictions for the same records so the two curves can be compared.
###Code
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(x_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
###Output
_____no_output_____ |
Lists_Tutorial_One.ipynb | ###Markdown
Here's a couple more bite-sized tutorial chunks. Make sure you do the work along with the tutorial (e.g., type it out in your own copy of Jupyter... merely following along is nowhere near as efficient). Lists: How Do They Work? In our work with data, whether scraped from online or delivered to us with a bow, LISTS will be invaluable. If you are familiar with the British science fiction series *Dr. Who*, then it might be useful to think of LISTS as working rather like his Police Call Box in space: From the outside, the Doctor's time-and-space-traveling phonebooth looks just like any regular phonebooth. About ten feet high and four feet wide: "Just enough space," you say to yourself, "to store a single variable."But that's the genius of the booth's design: Once you open the door and go *inside*, it seems to extend forever in all directions. The INSIDE is bigger than the OUTSIDE by orders of magnitude."So it is with LISTS," he suggested, in a *segue* that seemed, well, *inevitable* by that point. And so it is indeed with LISTS. On the outside, they look like every other variable: Small, neat, compact. But internally, LISTS are something else altogether. Quick Refresher: Variables Variables keep track of values for us:
###Code
car_year = 2015
avg_mpg = 39.9
car_mfg = "MINI"
car_model = "Cooper"
###Output
_____no_output_____
###Markdown
Once those values are stored, I can either *look* at them or *perform* with them:
###Code
print(car_year)
###Output
2015
###Markdown
I can copy these values, too, and manipulate them:
###Code
new_mpg = avg_mpg * 1.1
print("Adjusted MPG: ",new_mpg) # Use a comma to separate values when you print()
###Output
Adjusted MPG: 43.89
###Markdown
I can join several values into one:
###Code
car_name = car_mfg + ' ' + car_model # the ' ' is just a space!
print("She drives a " + car_name + '.') # the '.' is just a period!
###Output
She drives a MINI Cooper.
###Markdown
If I want to *concatenate* numbers with words, though, I have to use the `str()` function. Python sees numbers as having intrinsic value (for example: "2015" implies a quantity of 2,015). We have to tell Python to treat "2015" as though it were a word or a symbol. Otherwise, those plus signs (+) will cause problems.
###Code
title = str(car_year) + ' ' + car_name
print("She still drives a " + title)
###Output
She still drives a 2015 MINI Cooper
###Markdown
Lists But what happens when there is more than a single value? This is, invariably, the case with everything computational. Its what makes compputation special. Multiplicity.Let's say we have not one car, but four:* Cooper* Rabbit* Scirroco* DeLoreanIt would be possible to store these values in four different variables, but tiresome and inefficient. To wit:
###Code
car1 = 'Cooper'
car2 = 'Rabbit'
car3 = 'Scirrocco'
car4 = 'DeLorean'
print(car1, car2, car3, car4)
###Output
Cooper Rabbit Scirrocco DeLorean
###Markdown
Instead, let's keep them all in the same LIST variable:
###Code
cars = ['Cooper','Rabbit','Scirroco','DeLorean'] # notice the square brackets!
print(cars) # this prints our whole list out.
###Output
['Cooper', 'Rabbit', 'Scirroco', 'DeLorean']
###Markdown
Now, of course, we can access each element of the list separately by referring to its index inside some square brackets:
###Code
print('My ' + cars[2] + ' is faster than my ' + cars[1])
###Output
My Scirroco is faster than my Rabbit
###Markdown
But we can also store that information in a way that simple variables won't allow.
###Code
slow_car = 2
fast_car = 3
print('My ' + cars[fast_car] + ' is faster than my ' + cars[slow_car])
slow_car = 1
fast_car = 0
print('My ' + cars[fast_car] + ' is faster than my ' + cars[slow_car])
###Output
My DeLorean is faster than my Scirroco
My Cooper is faster than my Rabbit
###Markdown
Looping Lists This, then, is the cool part: We can use LISTS to take advantage of the fact that computers excel at repetitive tasks. Take a list of paint colors, for example. Every time we write the name of a color to the screen, the computer is doing (almost) the exact same repetitive task. So we use a `for/in` statement to let the computer bear the burden of repetition.
###Code
color_palette = ['red', 'maple', 'berry', 'banana', 'creamsicle', 'patent leather black', 'fudge', 'neutral grey']
for paint_color in color_palette:
print ("One can of " + paint_color)
###Output
One can of red
One can of maple
One can of berry
One can of banana
One can of creamsicle
One can of patent leather black
One can of fudge
One can of neutral grey
###Markdown
The second line of that code is so important, and maybe a bit strange. Where did "paint_color" come from? How does it know what a paint_color is? The answer is it doesn't know: paint_color is just a placeholder, an iterator, a clicker -- it is how Python walks through the list, one item at a time, without losing its place. For us, though, that placeholder (paint_color) has the added benefit of capturing a specific value from the list. As Python moves from the first item to the last, it stores the current item from color_palette in paint_color. Which means our variable paint_color is automatically updated for as many different values as are contained in our list! Of course, paint_color doesn't have to be called paint_color. That's just a convenience for us. In conventional Python, a lot of programmers use the word 'item':
###Code
for item in color_palette:
print (item)
###Output
red
maple
berry
banana
creamsicle
patent leather black
fudge
neutral grey
###Markdown
Of course, we don't have to take every item from the list. We can ask that Python only show items that meet certain specifications. The function `len(String)`, for example, returns the number of letters in a String. Let's say we just want to see the long words. To do this, I'll add a CONDITIONAL (using the keyword "IF"). Every time that condition is met (is "TRUE"), Python will carry out the code indented below it. If it isn't TRUE, Python just loops back to the top.
###Code
for item in color_palette:
if len(item)>6:
print(item)
###Output
banana
creamsicle
patent leather black
neutral grey
###Markdown
Or we could do the same for, say, 5 letter words:
###Code
for item in color_palette:
if len(item) == 5:
print(item)
###Output
maple
berry
fudge
###Markdown
Note that the equals sign (above) is not what you're used to. Human beings use equals signs for so many different reasons that we've had to come up with some clever ways of telling the computer what we mean. So when we want to ASSIGN A VALUE to a variable, we tell Python:
###Code
alpha = 10
###Output
_____no_output_____
###Markdown
However, when we want to check on the value of a variable (say, the variable ALPHA), we're really asking Python to do very different work. So we use two equals signs, like this:
###Code
if alpha == 10:
print ("Yes, alpha is ten.")
###Output
Yes, alpha is ten.
|
Inspect your class.ipynb | ###Markdown
Introduction **What?** Inspect your class
###Code
dir()
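# dir() with no argument lists the names in the current scope. To inspect a class, pass the
# class (or an instance) to dir(); a minimal throwaway class as an illustration:
class Point:
    """Tiny example class."""
    def __init__(self, x, y):
        self.x, self.y = x, y
    def norm(self):
        return (self.x ** 2 + self.y ** 2) ** 0.5

print([name for name in dir(Point) if not name.startswith('_')])  # ['norm']
print(vars(Point(3, 4)))  # {'x': 3, 'y': 4}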
###Output
_____no_output_____ |
04_plot/03_bar_plots_histograms.ipynb | ###Markdown
Colors
###Code
# Setup assumed from earlier cells of the original notebook (not included in this excerpt)
import numpy as np
import matplotlib.pyplot as plt
nums = np.arange(1, 11)  # sample data used throughout this section (assumed values)
plt.barh(np.arange(10),nums,color='limegreen',edgecolor='maroon')
plt.barh(np.arange(10),nums,color='k',edgecolor='r')
###Output
_____no_output_____
###Markdown
Alignment
###Code
plt.bar(np.arange(10),nums,color='k',edgecolor='r',align='edge')
###Output
_____no_output_____
###Markdown
Hatch and Fill
###Code
plt.bar(np.arange(10),nums,color='w',edgecolor='r',hatch='o')
plt.bar(np.arange(10),nums,color='w',edgecolor='k',hatch='x')
###Output
_____no_output_____
###Markdown
Bar width
###Code
plt.bar(np.arange(10),nums,color='r',edgecolor='k',hatch='x',width=0.5)
plt.bar(np.arange(10),nums,color='r',edgecolor='k',hatch='x',width=5)
###Output
_____no_output_____
###Markdown
Spacing
###Code
plt.bar(np.arange(10),nums,color='r',edgecolor='k',hatch='x',bottom=1*nums)
###Output
_____no_output_____
###Markdown
Histograms
###Code
rands = np.random.normal(size=int(1e6))
plt.hist(rands)
plt.hist(rands,range=(0,4))
plt.hist(rands,range=(0,1))
plt.hist(rands,range=(0,0.1))
plt.hist(rands,bins='auto')
plt.show()
plt.hist(rands,bins=np.arange(2))
plt.show()
plt.hist(rands,bins='auto',histtype='step')
plt.show()
plt.hist(rands,bins='auto',cumulative=True) # cumulative: each bin includes all previous counts
plt.show()
plt.hist(rands,cumulative=True) # cumulative: each bin includes all previous counts
plt.show()
plt.hist((rands,rands*0.5),bins='auto',histtype='stepfilled') # two datasets in one histogram
plt.show()
plt.hist((rands,rands*0.5,rands*0.4,rands*1),bins='auto',histtype='stepfilled') # several datasets in one histogram
plt.show()
plt.hist((rands,rands*0.5,rands*0.4,rands*1),bins='auto',histtype='step') # several datasets in one histogram
plt.show()
plt.hist((rands,rands*0.5,rands*0.4,rands*1),bins='auto',histtype='barstacked') # several datasets in one histogram
plt.show()
plt.hist((rands,rands*0.5,rands*0.4),bins='auto',histtype='bar') # several datasets in one histogram
plt.show()
###Output
_____no_output_____
###Markdown
Bars
###Code
a = plt.bar(np.arange(10),nums,color='r',edgecolor='k',bottom=1*nums)
b = plt.bar(np.arange(10),nums,color='b',edgecolor='k',hatch='o')
a = plt.bar(np.arange(10),nums,color='r',edgecolor='k',bottom=1*nums)
b = plt.bar(np.arange(10),nums,color='b',edgecolor='k',hatch='o')
plt.legend((a[0],b[0]),('Hombres','Mujeres'))  # legend labels ('Men', 'Women')
plt.hist((rands,rands*0.5,rands*0.4),bins='auto',histtype='bar') # several datasets in one histogram
plt.show()
###Output
_____no_output_____ |
Chapter07/Exercise09/7_9.ipynb | ###Markdown
Exercise 7.9 This question uses the variables $\mathrm{dis}$ (the weighted mean of distances to five Boston employment centers) and $\mathrm{nox}$ (nitrogen oxides concentration in parts per 10 million) from the Boston data. We will treat $\mathrm{dis}$ as the predictor and $\mathrm{nox}$ as the response. Use the poly() function to fit a cubic polynomial regression to predict $\mathrm{nox}$ using $\mathrm{dis}$. Report the regression output, and plot the resulting data and polynomial fits. Plot the polynomial fits for a range of different polynomial degrees (say, from 1 to 10), and report the associated residual sum of squares. Perform cross-validation or another approach to select the optimal degree for the polynomial, and explain your results. Use the bs() function to fit a regression spline to predict $\mathrm{nox}$ using $\mathrm{dis}$. Report the output for the fit using four degrees of freedom. How did you choose the knots? Plot the resulting fit. Now fit a regression spline for a range of degrees of freedom, and plot the resulting fits and report the resulting RSS. Describe the results obtained. Perform cross-validation or another approach in order to select the best degrees of freedom for a regression spline on this data. Describe your results.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%run ../../customModules/usefulFunctions.ipynb
# https://stackoverflow.com/questions/34398054/ipython-notebook-cell-multiple-outputs
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
import statsmodels.api as sm
from sklearn.model_selection import LeaveOneOut
from patsy import dmatrix
df = pd.read_csv("../../DataSets/Boston/Boston.csv")
df = df[['dis', 'nox']]
df.head()
df_x = df[['dis']]
df_y = df[['nox']]
###Output
_____no_output_____
###Markdown
Exercise 7.9.1 Use the poly() function to fit a cubic polynomial regression to predict $\mathrm{nox}$ using $\mathrm{dis}$. Report the regression output, and plot the resulting data and polynomial fits.
###Code
total_degrees = 3
independent = df_x.columns[0]
polynomialMap = {independent: 1}
for i in range(2, total_degrees + 1):
variable_name = independent + '^' + str(i)
df_x[variable_name] = df_x[independent]**i
polynomialMap[variable_name] = i
df_x.insert(0, 'Intercept', 1)
model = sm.OLS(df_y, df_x)
fitted = model.fit()
fitted.summary()
createPolynomialLinearRegressionPlot(df_x, df_y, fitted_model=fitted, height=6, width=8, polynomialMap=polynomialMap)
###Output
_____no_output_____
###Markdown
Exercise 7.9.2 Plot the polynomial fits for a range of different polynomial degrees (say, from 1 to 10), and report the associated residual sum of squares.
###Code
df_x = df_x[['Intercept', 'dis']].copy()
total_degrees = 10
rss_train_arr = np.zeros((total_degrees, ))
independent = 'dis'
polynomialMap = {independent: 1}
for i in range(1, total_degrees + 1):
if i >= 2:
variable_name = independent + '^' + str(i)
df_x[variable_name] = df_x[independent]**i
polynomialMap[variable_name] = i
fitted = sm.OLS(df_y, df_x).fit()
rss_train_arr[i-1] = fitted.ssr
createPolynomialLinearRegressionPlot(df_x, df_y, fitted_model=fitted, height=6, width=8, polynomialMap=polynomialMap)
min_rss = np.amin(rss_train_arr)
min_idx = np.where(rss_train_arr == min_rss)[0][0]
rss_train_arr = np.delete(rss_train_arr, min_idx)
degrees_arr = np.arange(1, total_degrees + 1)
degrees_arr = np.delete(degrees_arr, min_idx)
fig, ax = plt.subplots(1, 1, constrained_layout=True, figsize=(8, 4))
_ = ax.scatter(
degrees_arr, rss_train_arr
)
_ = ax.scatter(min_idx + 1, min_rss, marker='x', label='smallest train RSS',
c=plt.rcParams['axes.prop_cycle'].by_key()['color'][0])
_ = ax.set_xlabel('degrees of polynomial fit')
_ = ax.set_ylabel('train RSS')
_ = ax.legend()
###Output
_____no_output_____
###Markdown
Exercise 7.9.3 Perform cross-validation or another approach to select the optimal degree for the polynomial, and explain your results.
###Code
descriptiveColumns = ['Intercept', 'dis']
total_degrees = 10
rss_test_arr = np.zeros((total_degrees, ))
independent = 'dis'
loocv = LeaveOneOut() # leave-one-out cross-validation
for i in range(1, total_degrees + 1):
if i >= 2:
variable_name = independent + '^' + str(i)
descriptiveColumns.append(variable_name)
RSS = 0
for train_index, test_index in loocv.split(df_x):
df_x_train, df_x_test = df_x[descriptiveColumns].iloc[train_index], df_x[descriptiveColumns].iloc[test_index]
df_y_train, df_y_test = df_y.iloc[train_index], df_y.iloc[test_index]
fitted = sm.OLS(df_y_train, df_x_train).fit()
Y_pred = fitted.predict(df_x_test.to_numpy())
RSS += (df_y_test.iloc[0, 0] - Y_pred[0])**2
rss_test_arr[i-1] = RSS
min_rss = np.amin(rss_test_arr)
min_idx = np.where(rss_test_arr == min_rss)[0][0]
rss_test_arr = np.delete(rss_test_arr, min_idx)
degrees_arr = np.arange(1, total_degrees + 1)
degrees_arr = np.delete(degrees_arr, min_idx)
fig, ax = plt.subplots(1, 1, constrained_layout=True, figsize=(8, 4))
_ = ax.scatter(
degrees_arr, rss_test_arr
)
_ = ax.scatter(min_idx + 1, min_rss, marker='x', label='smallest test RSS',
c=plt.rcParams['axes.prop_cycle'].by_key()['color'][0])
_ = ax.set_xlabel('degrees of polynomial fit')
_ = ax.set_ylabel('test RSS')
_ = ax.legend()
###Output
_____no_output_____
###Markdown
Exercise 7.9.4 Use the bs() function to fit a regression spline to predict $\mathrm{nox}$ using $\mathrm{dis}$. Report the output for the fit using four degrees of freedom. How did you choose the knots? Plot the resulting fit.
###Code
df = 4
df_X_transformed = dmatrix(f'bs(df_x["dis"], df={df}, include_intercept=True)',
{'df_x["dis"]': df_x["dis"]}, return_type='dataframe')
assert df_X_transformed.shape[1] == df + 1
fitted = sm.GLM(df_y, df_X_transformed).fit()
fitted.summary()
knots = df - 3
plotCubicSpines(df_x[['dis']], df_y, {knots: fitted})
###Output
_____no_output_____
###Markdown
Exercise 7.9.5 Now fit a regression spline for a range of degrees of freedom, and plot the resulting fits and report the resulting RSS. Describe the results obtained.
###Code
total_knots = 10
rss_train_arr = np.zeros((total_knots, ))
for knots in range(1, total_knots + 1):
df = knots + 3
df_X_transformed = dmatrix(f'bs(df_x["dis"], df={df}, include_intercept=True)',
{'df_x["dis"]': df_x["dis"]}, return_type='dataframe')
assert df_X_transformed.shape[1] == df + 1
fitted = sm.GLM(df_y, df_X_transformed).fit()
sr_Y_pred = fitted.predict(dmatrix_func(f'bs(df_x["dis"], df={df}, include_intercept=True)',
{'df_x["dis"]': df_x["dis"]}, return_type='dataframe'))
rss_train_arr[knots-1] = ((sr_Y_pred - df_y['nox'])**2).sum()
plotCubicSpines(df_x[['dis']], df_y, {knots: fitted})
min_rss = np.amin(rss_train_arr)
min_idx = np.where(rss_train_arr == min_rss)[0][0]
rss_train_arr = np.delete(rss_train_arr, min_idx)
knots_arr = np.arange(1, total_knots + 1)
knots_arr = np.delete(knots_arr, min_idx)
fig, ax = plt.subplots(1, 1, constrained_layout=True, figsize=(8, 4))
_ = ax.scatter(
knots_arr, rss_train_arr
)
_ = ax.scatter(min_idx + 1, min_rss, marker='x', label='smallest train RSS',
c=plt.rcParams['axes.prop_cycle'].by_key()['color'][0])
_ = ax.set_xlabel('number of knots')
_ = ax.set_ylabel('train RSS')
_ = ax.legend()
###Output
_____no_output_____
###Markdown
Exercise 7.9.6 Perform cross-validation or another approach in order to select the best degrees of freedom for a regression spline on this data. Describe your results.
###Code
total_knots = 10
rss_test_arr = np.zeros((total_knots, ))
for knots in range(1, total_knots + 1):
df = knots + 3
RSS = 0
for train_index, test_index in loocv.split(df_x):
df_x_train, df_x_test = df_x[['dis']].iloc[train_index], df_x[['dis']].iloc[test_index]
df_y_train, df_y_test = df_y.iloc[train_index], df_y.iloc[test_index]
fitted = sm.OLS(df_y_train, df_x_train).fit()
Y_pred = fitted.predict(df_x_test.to_numpy())
df_X_transformed_train = dmatrix(f'bs(df_x_train["dis"], df={df}, include_intercept=True)',
{'df_x_train["dis"]': df_x_train["dis"]}, return_type='dataframe')
assert df_X_transformed_train.shape[1] == df + 1
fitted = sm.GLM(df_y_train, df_X_transformed_train).fit()
sr_Y_pred = fitted.predict(dmatrix_func(f'bs(df_x_test["dis"], df={df}, include_intercept=True)',
{'df_x_test["dis"]': df_x_test["dis"]}, return_type='dataframe'))
RSS += (df_y_test.iloc[0, 0] - sr_Y_pred.iloc[0])**2
rss_test_arr[knots-1] = RSS
min_rss = np.amin(rss_test_arr)
min_idx = np.where(rss_test_arr == min_rss)[0][0]
rss_test_arr = np.delete(rss_test_arr, min_idx)
knots_arr = np.arange(1, total_knots + 1)
knots_arr = np.delete(knots_arr, min_idx)
fig, ax = plt.subplots(1, 1, constrained_layout=True, figsize=(8, 4))
_ = ax.scatter(
knots_arr, rss_test_arr
)
_ = ax.scatter(min_idx + 1, min_rss, marker='x', label='smallest test RSS',
c=plt.rcParams['axes.prop_cycle'].by_key()['color'][0])
_ = ax.set_xlabel('number of knots')
_ = ax.set_ylabel('test RSS')
_ = ax.legend()
###Output
_____no_output_____ |
notebooks/10. bitconnect_price.ipynb | ###Markdown
Bitconnect Price by: Widya Meiriska 1. Read Dataset
###Code
import csv
import pandas as pd
import numpy as np
df = pd.read_csv('../data/raw/bitcoin/bitconnect_price.csv', parse_dates = ['Date'])
df.tail()
###Output
_____no_output_____
###Markdown
2. Data Investigation
###Code
df.columns
df.count()
df.dtypes
###Output
_____no_output_____
###Markdown
There is no missing data here, but several columns are stored in a different format: some of the values are not in numeric format (they contain thousands separators).
###Code
# Change object to format number
df['Volume'] = df['Volume'].apply(lambda x: float(str(x).replace(',','')))
df['Market Cap'] = df['Market Cap'].apply(lambda x: float(str(x).replace(',','')))
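# Equivalent sketch: the thousands separators could also be handled at parse time with
# pd.read_csv(..., thousands=','), which keeps these columns numeric from the start.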
df.info()
df.isnull().sum()
# Cek missing data
missingdf = pd.DataFrame(df.isna().sum()).rename(columns = {0: 'total'})
missingdf['percent'] = missingdf['total'] / len(df)
missingdf
df.describe()
###Output
_____no_output_____
###Markdown
Now the data is clean: there are no null values and all columns have a consistent numeric format. 3. Data Visualization
###Code
# Set Date as its index
df.set_index('Date', inplace = True )
# Visualize the price and volume columns over time
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(25, 25))
plt.subplot(3,3,1)
plt.ylabel('Open')
df.Open.plot()
plt.title('Date vs Open')
plt.subplot(3,3,2)
plt.ylabel('Low')
df.Low.plot()
plt.title('Date vs Low')
plt.subplot(3,3,3)
plt.ylabel('High')
df.High.plot()
plt.title('Date vs High')
plt.subplot(3,3,4)
plt.ylabel('Close')
df.Close.plot()
plt.title('Date vs Close')
plt.subplot(3,3,5)
plt.ylabel('Volume')
df.Volume.plot()
plt.title('Date vs Volume')
plt.subplot(3,3,6)
plt.ylabel('Market Cap')
df['Market Cap'].plot()
plt.title('Date vs Market Cap')
###Output
_____no_output_____ |
9 google customer revenue prediction/using-classification-for-predictions.ipynb | ###Markdown
Introduction. I believe the main issue we have in this challenge is not to predict revenues but more to get these zeros right, since less than 1.3 % of the sessions have a non-zero revenue. The idea in this kernel is to classify non-zero transactions first and use that to help our regressor get better results. The kernel only presents one way of doing it. No special feature engineering or set of hyperparameters, just a code shell/structure ;-) Check file structure
###Code
import os
print(os.listdir("../input"))
###Output
_____no_output_____
###Markdown
Import packages
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import matplotlib.cbook as cbook
import seaborn as sns
from sklearn.metrics import mean_squared_error, roc_auc_score, log_loss
import gc
import time
from pandas.core.common import SettingWithCopyWarning
import warnings
import lightgbm as lgb
from sklearn.model_selection import KFold, GroupKFold
warnings.simplefilter('error', SettingWithCopyWarning)
gc.enable()
%matplotlib inline
###Output
_____no_output_____
###Markdown
Get data
###Code
train = pd.read_csv('../input/create-extracted-json-fields-dataset/extracted_fields_train.gz',
dtype={'date': str, 'fullVisitorId': str, 'sessionId':str}, nrows=None)
test = pd.read_csv('../input/create-extracted-json-fields-dataset/extracted_fields_test.gz',
dtype={'date': str, 'fullVisitorId': str, 'sessionId':str}, nrows=None)
train.shape, test.shape
###Output
_____no_output_____
###Markdown
Get targets
###Code
y_clf = (train['totals.transactionRevenue'].fillna(0) > 0).astype(np.uint8)
y_reg = train['totals.transactionRevenue'].fillna(0)
del train['totals.transactionRevenue']
y_clf.mean(), y_reg.mean()
###Output
_____no_output_____
###Markdown
Add date features
###Code
for df in [train, test]:
df['date'] = pd.to_datetime(df['date'])
df['vis_date'] = pd.to_datetime(df['visitStartTime'])
df['sess_date_dow'] = df['vis_date'].dt.dayofweek
df['sess_date_hours'] = df['vis_date'].dt.hour
df['sess_date_dom'] = df['vis_date'].dt.day
###Output
_____no_output_____
###Markdown
Create list of features
###Code
excluded_features = [
'date', 'fullVisitorId', 'sessionId', 'totals.transactionRevenue',
'visitId', 'visitStartTime', 'non_zero_proba', 'vis_date'
]
categorical_features = [
_f for _f in train.columns
if (_f not in excluded_features) & (train[_f].dtype == 'object')
]
if 'totals.transactionRevenue' in train.columns:
del train['totals.transactionRevenue']
if 'totals.transactionRevenue' in test.columns:
del test['totals.transactionRevenue']
###Output
_____no_output_____
###Markdown
Factorize categoricals
###Code
for f in categorical_features:
train[f], indexer = pd.factorize(train[f])
test[f] = indexer.get_indexer(test[f])
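# Note: test categories never seen in train get code -1 from get_indexer; LightGBM simply treats
# that as another integer value of the feature.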
###Output
_____no_output_____
###Markdown
Classify non-zero revenues
###Code
folds = GroupKFold(n_splits=5)
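# Grouping folds by fullVisitorId keeps all sessions of a visitor in the same fold, so the
# out-of-fold predictions are not contaminated by other sessions of the same visitor.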
train_features = [_f for _f in train.columns if _f not in excluded_features]
print(train_features)
oof_clf_preds = np.zeros(train.shape[0])
sub_clf_preds = np.zeros(test.shape[0])
for fold_, (trn_, val_) in enumerate(folds.split(y_clf, y_clf, groups=train['fullVisitorId'])):
trn_x, trn_y = train[train_features].iloc[trn_], y_clf.iloc[trn_]
val_x, val_y = train[train_features].iloc[val_], y_clf.iloc[val_]
clf = lgb.LGBMClassifier(
num_leaves=31,
learning_rate=0.03,
n_estimators=1000,
subsample=.9,
colsample_bytree=.9,
random_state=1
)
clf.fit(
trn_x, trn_y,
eval_set=[(val_x, val_y)],
early_stopping_rounds=50,
verbose=50
)
oof_clf_preds[val_] = clf.predict_proba(val_x, num_iteration=clf.best_iteration_)[:, 1]
print(roc_auc_score(val_y, oof_clf_preds[val_]))
sub_clf_preds += clf.predict_proba(test[train_features], num_iteration=clf.best_iteration_)[:, 1] / folds.n_splits
roc_auc_score(y_clf, oof_clf_preds)
###Output
_____no_output_____
###Markdown
Add classification to dataset
###Code
train['non_zero_proba'] = oof_clf_preds
test['non_zero_proba'] = sub_clf_preds
###Output
_____no_output_____
###Markdown
Predict revenues at session level
###Code
train_features = [_f for _f in train.columns if _f not in excluded_features] + ['non_zero_proba']
print(train_features)
oof_reg_preds = np.zeros(train.shape[0])
sub_reg_preds = np.zeros(test.shape[0])
importances = pd.DataFrame()
for fold_, (trn_, val_) in enumerate(folds.split(y_reg, y_reg, groups=train['fullVisitorId'])):
trn_x, trn_y = train[train_features].iloc[trn_], y_reg.iloc[trn_].fillna(0)
val_x, val_y = train[train_features].iloc[val_], y_reg.iloc[val_].fillna(0)
reg = lgb.LGBMRegressor(
num_leaves=31,
learning_rate=0.03,
n_estimators=1000,
subsample=.9,
colsample_bytree=.9,
random_state=1
)
reg.fit(
trn_x, np.log1p(trn_y),
eval_set=[(val_x, np.log1p(val_y))],
early_stopping_rounds=50,
verbose=50
)
imp_df = pd.DataFrame()
imp_df['feature'] = train_features
imp_df['gain'] = reg.booster_.feature_importance(importance_type='gain')
imp_df['fold'] = fold_ + 1
importances = pd.concat([importances, imp_df], axis=0, sort=False)
oof_reg_preds[val_] = reg.predict(val_x, num_iteration=reg.best_iteration_)
oof_reg_preds[oof_reg_preds < 0] = 0
_preds = reg.predict(test[train_features], num_iteration=reg.best_iteration_)
_preds[_preds < 0] = 0
sub_reg_preds += np.expm1(_preds) / folds.n_splits
mean_squared_error(np.log1p(y_reg.fillna(0)), oof_reg_preds) ** .5
import warnings
warnings.simplefilter('ignore', FutureWarning)
importances['gain_log'] = np.log1p(importances['gain'])
mean_gain = importances[['gain', 'feature']].groupby('feature').mean()
importances['mean_gain'] = importances['feature'].map(mean_gain['gain'])
plt.figure(figsize=(8, 12))
sns.barplot(x='gain_log', y='feature', data=importances.sort_values('mean_gain', ascending=False))
###Output
_____no_output_____
###Markdown
Save predictions. Maybe one day Kaggle will support file compression for submissions from kernels... I'm aware I sum the logs instead of summing the actual revenues...
###Code
test['PredictedLogRevenue'] = sub_reg_preds
test[['fullVisitorId', 'PredictedLogRevenue']].groupby('fullVisitorId').sum()['PredictedLogRevenue'].apply(np.log1p).reset_index()\
.to_csv('test_clf_reg_log_of_sum.csv', index=False)
###Output
_____no_output_____
###Markdown
Plot Actual Dollar estimates per date
###Code
# Go to actual revenues
train['PredictedRevenue'] = np.expm1(oof_reg_preds)
test['PredictedRevenue'] = sub_reg_preds
train['totals.transactionRevenue'] = y_reg
# Sum by date on train and test
trn_group = train[['date', 'PredictedRevenue', 'totals.transactionRevenue']].groupby('date').sum().reset_index()
sub_group = test[['date', 'PredictedRevenue']].groupby('date').sum().reset_index()
# Now plot all this
years = mdates.YearLocator() # every year
months = mdates.MonthLocator() # every month
yearsFmt = mdates.DateFormatter('%Y-%m')
fig, ax = plt.subplots(figsize=(15, 6))
ax.set_title('Actual Dollar Revenues - we are way off...', fontsize=15, fontweight='bold')
ax.plot(pd.to_datetime(trn_group['date']).values, trn_group['totals.transactionRevenue'].values)
ax.plot(pd.to_datetime(trn_group['date']).values, trn_group['PredictedRevenue'].values)
ax.plot(pd.to_datetime(sub_group['date']).values, sub_group['PredictedRevenue'].values)
# # format the ticks
ax.xaxis.set_major_locator(months)
ax.xaxis.set_major_formatter(yearsFmt)
ax.xaxis.set_minor_locator(months)
ax.format_xdata = mdates.DateFormatter('%Y-%m-%d')
# # ax.format_ydata = price
ax.grid(True)
# rotates and right aligns the x labels, and moves the bottom of the
# axes up to make room for them
fig.autofmt_xdate()
###Output
_____no_output_____
###Markdown
Display using np.log1p
###Code
# Go to actual revenues
train['PredictedRevenue'] = np.expm1(oof_reg_preds)
test['PredictedRevenue'] = sub_reg_preds
train['totals.transactionRevenue'] = y_reg
# Sum by date on train and test
trn_group = train[['date', 'PredictedRevenue', 'totals.transactionRevenue']].groupby('date').sum().reset_index()
sub_group = test[['date', 'PredictedRevenue']].groupby('date').sum().reset_index()
years = mdates.YearLocator() # every year
months = mdates.MonthLocator() # every month
yearsFmt = mdates.DateFormatter('%Y-%m')
fig, ax = plt.subplots(figsize=(15, 6))
ax.set_title('We are also off in logs... or am I just stupid ?', fontsize=15, fontweight='bold')
ax.plot(pd.to_datetime(trn_group['date']).values, np.log1p(trn_group['totals.transactionRevenue'].values))
ax.plot(pd.to_datetime(trn_group['date']).values, np.log1p(trn_group['PredictedRevenue'].values))
ax.plot(pd.to_datetime(sub_group['date']).values, np.log1p(sub_group['PredictedRevenue'].values))
# # format the ticks
ax.xaxis.set_major_locator(months)
ax.xaxis.set_major_formatter(yearsFmt)
ax.xaxis.set_minor_locator(months)
ax.format_xdata = mdates.DateFormatter('%Y-%m-%d')
# # ax.format_ydata = price
ax.grid(True)
# rotates and right aligns the x labels, and moves the bottom of the
# axes up to make room for them
fig.autofmt_xdate()
###Output
_____no_output_____
###Markdown
Using sum of logs - no really ?
###Code
# Keep amounts in logs
train['PredictedRevenue'] = oof_reg_preds
test['PredictedRevenue'] = np.log1p(sub_reg_preds)
train['totals.transactionRevenue'] = np.log1p(y_reg)
# You really mean summing up the logs ???
trn_group = train[['date', 'PredictedRevenue', 'totals.transactionRevenue']].groupby('date').sum().reset_index()
sub_group = test[['date', 'PredictedRevenue']].groupby('date').sum().reset_index()
years = mdates.YearLocator() # every year
months = mdates.MonthLocator() # every month
yearsFmt = mdates.DateFormatter('%Y-%m')
fig, ax = plt.subplots(figsize=(15, 6))
ax.set_title('Summing up logs looks a lot better !?! Is the challenge to find the correct metric ???', fontsize=15, fontweight='bold')
ax.plot(pd.to_datetime(trn_group['date']).values, trn_group['totals.transactionRevenue'].values)
ax.plot(pd.to_datetime(trn_group['date']).values, trn_group['PredictedRevenue'].values)
ax.plot(pd.to_datetime(sub_group['date']).values, sub_group['PredictedRevenue'].values)
# # format the ticks
ax.xaxis.set_major_locator(months)
ax.xaxis.set_major_formatter(yearsFmt)
ax.xaxis.set_minor_locator(months)
ax.format_xdata = mdates.DateFormatter('%Y-%m-%d')
# # ax.format_ydata = price
ax.grid(True)
# rotates and right aligns the x labels, and moves the bottom of the
# axes up to make room for them
fig.autofmt_xdate()
###Output
_____no_output_____ |
3. Python in Data Mining/normal_weight/solution.ipynb | ###Markdown
اضافه کردن کتابخانههای مورد نیاز
###Code
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
با کتابخانهی`numpy`آرایهی `population`را از فایل میخوانیم.
###Code
population = np.load('population.npy')
population
###Output
_____no_output_____
###Markdown
نمودار توزیع آن را رسم میکنیم
###Code
# hist
plt.figure(figsize = (16,6),dpi=200)
plt.hist(population,bins = 100)
plt.title("Weight Distribution of Sample")
plt.show()
###Output
_____no_output_____
###Markdown
تابع توزیع تجمعی آن را رسم میکنیم
###Code
population_sorted = np.sort(population)
y = np.arange(1 , len(population)+1) / len(population)
# cdf
plt.figure(figsize=(16,6),dpi=200)
plt.plot(population_sorted,y,'.',linestyle = 'none')
plt.title("Cumulative Distribution Function")
plt.show()
###Output
_____no_output_____
###Markdown
یک توزیع نرمال با میانگین و واریانسی برابر با `population`تولید میکنیم.(به وسیلهی `np.random.normal`)و در متغیر `normal_dist`ذخیره میکنیم
###Code
mu = np.mean(population)
sigma = np.var(population)
size = np.size(population)
print(mu, sigma, size)
normal_dist = np.random.normal(mu, sigma, size)
normal_dist
###Output
_____no_output_____
###Markdown
تابع توزیع تجمعی را برای`normal_dist`و`population`در کنار هم رسم میکنیم
###Code
normal_dist_sorted = np.sort(normal_dist)
y_normal_dist = np.arange(1 , len(normal_dist)+1) / len(normal_dist)
plt.figure(figsize = (16,6))
plt.plot(population_sorted,y,'.',c = 'blue',linestyle = 'none',label= 'Population')
plt.plot(normal_dist_sorted,y_normal_dist,'.',c = 'red',linestyle = 'none',label = 'Normal Distribution')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
میانگین اختلاف دو توزیع معیار مناسبی برای فاصلهی آنهاست.تابع زیر را طوری تکمیل کنید که دو توزیع را بگیرد و فاصلهی آنها را حساب کند.سپس آن را در فایل`dists_diff.py`ذخیره کنید.پس از تکمیل و اجرای تابع میتوانید کد زیر را در خط اول سلول پایین قرار دهید تا خودش کد آن را در فایل ثبت کند.`%%writefile dists_diff.py`
###Code
def dists_diff(dist_1,dist_2):
return np.mean(np.abs(dist_1 - dist_2))
###Output
_____no_output_____
###Markdown
تابع را امتحان میکنیم
###Code
print(dists_diff(population,normal_dist))
diff = dists_diff(population,normal_dist)
###Output
_____no_output_____
###Markdown
کد زیر را اجرا کنید تا `normal_dist`و`diff`را در فایل ذخیره کند و آن را ارسال کنید.
###Code
#dumper script
np.savez('result_dist.npz',normal_dist=normal_dist,diff=diff)
import zlib
import zipfile
def compress(file_names):
print("File Paths:")
print(file_names)
# Select the compression mode ZIP_DEFLATED for compression
# or zipfile.ZIP_STORED to just store the file
compression = zipfile.ZIP_DEFLATED
# create the zip file first parameter path/name, second mode
with zipfile.ZipFile("result.zip", mode="w") as zf:
for file_name in file_names:
# Add file to the zip file
# first parameter file to zip, second filename in zip
zf.write('./'+file_name, file_name, compress_type=compression)
file_names= ["result_dist.npz","dists_diff.py", "solution.ipynb"]
compress(file_names)
###Output
File Paths:
['result_dist.npz', 'dists_diff.py', 'solution.ipynb']
|
[MAC015]_Trabalho_03.ipynb | ###Markdown
Write a program that sizes for the user, in terms of its diameter (in mm), a circular shaft coupled to a motor that spins at a frequency of 120 rpm and dissipates 150 cv of power (1 cv = 736 W), considering the torsional loading. For this, consider a solid shaft made of the following materials: Steel: G = 75 GPa, allowable shear stress = 50 MPa; Brass: G = 40 GPa, allowable shear stress = 48 MPa; Aluminum: G = 25 GPa, allowable shear stress = 25 MPa. For the sizing, the program must take into account not only the allowable stress of each material but also a limit on the angle of twist, which must not exceed 1°. Consider the shaft length L as 20 times the diameter (for tubular profiles, 20 times the outer diameter). Also carry out the sizing for tubular steel profiles (consider the same properties already listed for the material). The table of available profiles must be read (for this exercise, Vallourec 141.3 mm profiles were chosen), whose geometric data relevant to the exercise were tabulated in Excel, and the available tubes must be checked against the stress and the angle of twist. Choose the most economical profile (lowest linear mass). Consider as input data the power dissipated by the motor and its rotation frequency. Provide as output a table indicating to the user: Material; Chosen section (in mm). Present a critical analysis of the relative weight of the stress limit versus the angle-of-twist limit in the sizing of the shafts. Analyze what happens to the sizing if: a) the motor power varies; b) the motor rotation frequency varies.
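For reference, the sizing in the code below follows the standard torsion relations (summarized here; $P$ is the power, $f$ the rotation frequency in Hz, $c$ the shaft radius, $\tau_{adm}$ the allowable shear stress, $J$ the polar moment of inertia, $G$ the shear modulus and $L$ the shaft length):
$$T = \frac{P}{2\pi f}, \qquad J = \frac{\pi c^4}{2}, \qquad \tau_{max} = \frac{T\,c}{J} \le \tau_{adm} \;\Rightarrow\; c = \sqrt[3]{\frac{2T}{\pi\,\tau_{adm}}}, \qquad \varphi = \frac{T\,L}{G\,J} \le 1^{\circ}$$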
###Code
# Importing the required libraries
import pandas as pd
import numpy as np
# Calculation of the torque (N.mm)
# Function that returns the diameter and the angle of twist
def retorna_dados(modulo_el, tensao_ad):
    frequencia = 2 # Frequency in hertz (120 rpm)
    potencia = 150*736 # Power of 150 cv, as stated in the problem, converted to W
    T = (potencia)/(frequencia*2*np.pi) # Torque (N.m)
    # Geometric properties. Since the shaft radius is unknown, we use the equations derived in class
    raio = np.cbrt((2*(T*10**3))/(np.pi*tensao_ad)) # Radius in mm
    diametro = (2*raio) # Diameter in mm
    diametro = round(diametro) # Round the diameter to the nearest integer (mm)
    momento_in = (np.pi/2)*(((diametro/2)/1000)**4) # Convert the radius to metres and compute the polar moment of inertia
    comprimento = (diametro*20)/1000 # Convert the diameter to metres and multiply by 20, as stated in the problem
    angulo = (T*comprimento)/((modulo_el*10**9)*momento_in) # Angle of twist in radians
    return diametro, np.degrees(angulo) # Returns the diameter in mm and the angle in degrees
# Material data (steel, brass and aluminium)
modulos_el = [75,40,25] # Shear moduli G (GPa)
tensoes_ad = [50,48,25] # Allowable shear stresses (MPa)
lista_diametros = [] # List used to store the diameters
lista_angulos = [] # List used to store the angles
for i in range(3):
    diametro, angulo = retorna_dados(modulos_el[i], tensoes_ad[i])
    lista_diametros.append(diametro)
    lista_angulos.append(angulo)
# Display the results
outputDf = pd.DataFrame(
    {
        "Material":["Steel","Brass","Aluminium"],
        "Diameter (mm)":lista_diametros,
        "Angle (degrees)":lista_angulos
    }
)
outputDf
# Limiting the angle of twist, which must not exceed 1°
# Calculation of the torque (N.mm)
# Function that returns the diameter and the angle of twist
def retorna_dados(modulo_el, tensao_ad):
    frequencia = 2 # Frequency in hertz (120 rpm)
    potencia = 150*736 # Power of 150 cv converted to W (1 cv = 736 W)
    T = (potencia)/(frequencia*2*np.pi) # Torque (N.m)
    # Geometric properties. Since the shaft radius is unknown, we use the equations derived in class
    raio = np.cbrt((2*(T*10**3))/(np.pi*tensao_ad)) # Radius in mm
    diametro = (2*raio) # Diameter in mm
    diametro = round(diametro) # Round the diameter to the nearest integer (mm)
    momento_in = (np.pi/2)*(((diametro/2)/1000)**4) # Convert the radius to metres and compute the polar moment of inertia
    comprimento = (diametro*20)/1000 # Convert the diameter to metres and multiply by 20, as stated in the problem
    angulo = (T*comprimento)/((modulo_el*10**9)*momento_in) # Angle of twist in radians
    return diametro, np.degrees(angulo) # Returns the diameter in mm and the angle in degrees
# Material data (steel, brass and aluminium)
modulos_el = [75,40,25] # Shear moduli G (GPa) [75,40,25]
tensoes_ad = [35,48,25] # Allowable shear stresses (MPa) [50,48,25]
# Range of allowable stresses to try for steel
int_aco = np.arange(1, 50, 2)
# Range of allowable stresses to try for brass
int_latao = np.arange(1, 48, 2)
# Range of allowable stresses to try for aluminium
int_aluminio = np.arange(1, 25, 2)
lista_Ndiametros = [] # List used to store the diameters
lista_Nangulos = [] # List used to store the angles
# Steel - loop that limits the angle of twist to 1°
angulo_aco = 0
for i in int_aco:
    if (angulo_aco<1):
        diametro, angulo_aco = retorna_dados(modulos_el[0], i)
# Keep the last diameter found with an angle below 1°
lista_Ndiametros.append(diametro)
lista_Nangulos.append(angulo_aco)
# Brass - loop that limits the angle of twist to 1°
angulo_latao = 0
for i in int_latao:
    if (angulo_latao<1):
        diametro, angulo_latao = retorna_dados(modulos_el[1], i)
# Keep the last diameter found with an angle below 1°
lista_Ndiametros.append(diametro)
lista_Nangulos.append(angulo_latao)
# Aluminium - loop that limits the angle of twist to 1°
angulo_aluminio = 0
for i in int_aluminio:
    if (angulo_aluminio<1):
        diametro, angulo_aluminio = retorna_dados(modulos_el[2], i)
# Keep the last diameter found with an angle below 1°
lista_Ndiametros.append(diametro)
lista_Nangulos.append(angulo_aluminio)
# Display the diameters when the angle of twist may not exceed 1°
outputDf = pd.DataFrame(
    {
        "Material":["Steel","Brass","Aluminium"],
        "Diameter (mm)":lista_Ndiametros,
        "Angle (degrees)":lista_Nangulos
    }
)
outputDf
###Output
_____no_output_____ |
blender_bot.ipynb | ###Markdown
Install Dependencies
###Code
#install pytorch
!pip3 install torch==1.9.1+cu111 torchvision==0.10.1+cu111 torchaudio===0.9.1 -f https://download.pytorch.org/whl/torch_stable.html
#install transformers
!pip install transformers
###Output
_____no_output_____
###Markdown
Import Model
###Code
#import model class and tokenizer
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration
#download and setup the model and tokenizer
model_name = 'facebook/blenderbot-400M-distill'
tokenizer = BlenderbotTokenizer.from_pretrained(model_name)
model = BlenderbotForConditionalGeneration.from_pretrained(model_name)
###Output
_____no_output_____
###Markdown
make conversation
###Code
#making an utterance
utterance = "My name is gold, I like football and coding"
#tokenize the utterance
inputs = tokenizer(utterance, return_tensors="pt")
inputs
#generate model results
result = model.generate(**inputs)
result
tokenizer.decode(result[0])
###Output
_____no_output_____ |
notebooks/.ipynb_checkpoints/pm2-2-quality-prediction-in-a-mining-process-checkpoint.ipynb | ###Markdown
Quality Prediction in a Mining Process by using RNNIn this notebook, we are going to predict how much impurity is in the ore concentrate. As this impurity is measured every hour, if we can predict how much silica (impurity) is in the ore concentrate, we can help the engineers, giving them early information to take actions. Hence, they will be able to take corrective actions in advance (reduce impurity, if it is the case) and also help the environment (reducing the amount of ore that goes to tailings as you reduce silica in the ore concentrate). To this end, we are going to use the dataset **Quality Prediction in a Mining Process Data** from [Kaggle](https://www.kaggle.com/edumagalhaes/quality-prediction-in-a-mining-process/home). In order to have a clean notebook, some functions are implemented in the file *utils.py* (e.g., plot_loss_and_accuracy). Summary: - [Data Pre-processing](data_preprocessing) - [Data Visualisation](data_viz) - [Data Normalisation](normalisation) - [Building the Models](models) - [Splitting the Data into Train and Test Sets](split) - [Gated Recurrent Unit (GRU)](gru) - [Long-short Term Memory (LSTM)](lstm) __All the libraries used in this notebook are Open Source__.
###Code
# Standard libraries - no deep learning yet
import numpy as np # written in C, is faster and robust library for numerical and matrix operations
import pandas as pd # data manipulation library, it is widely used for data analysis and relies on numpy library.
import matplotlib.pyplot as plt # for plotting
from datetime import datetime # supplies classes for manipulating dates and times in both simple and complex ways
from utils import *
# the following two lines tell the python kernel to always update the kernel for every utils.py
# modification, without the need of restarting the kernel.
# Of course, for every modification in utils.py, we need to reload this cell
%load_ext autoreload
%autoreload 2
%matplotlib inline
###Output
Using TensorFlow backend.
###Markdown
Data Pre-processing**First download the dataset (click [here](https://www.kaggle.com/edumagalhaes/quality-prediction-in-a-mining-process/downloads/quality-prediction-in-a-mining-process.zip/1)) and unzip quality-prediction-in-a-mining-process.zip**The **Quality Prediction in a Mining Process Data** includes ([Kaggle](https://www.kaggle.com/edumagalhaes/quality-prediction-in-a-mining-process/home)): - The first column shows the time and date range (from March 2017 until September 2017). Some columns were sampled every 20 seconds. Others were sampled on an hourly basis. *This makes the data processing harder; however, for this tutorial we will not re-sample the data*. - The second and third columns are quality measures of the iron ore pulp right before it is fed into the flotation plant. - Column 4 until column 8 are the most important variables that impact the ore quality at the end of the process. - From column 9 until column 22, we can see process data (level and air flow inside the flotation columns), which also impact the ore quality. - The last two columns are the final iron ore pulp quality measurements from the lab. The target is to predict the last column, which is the % of silica in the iron ore concentrate.We are going to use [Pandas](https://pandas.pydata.org/) for the data processing. The function [read_csv](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) is going to be used to read the csv file.
###Code
dataset = pd.read_csv('../data/MiningProcess_Flotation_Plant_Database.csv',index_col=0, decimal=",")
# Set the index name to 'date'
dataset.index.name = 'date'
dataset.head()
###Output
_____no_output_____
###Markdown
Given our time and computational resources restrictions, we are going to select the first 100,000 observations for this tutorial.
###Code
dataset = dataset.iloc[:100000,:]
###Output
_____no_output_____
###Markdown
Data Visualisation
###Code
# Plotting the Silica Concentrate
plt.figure(figsize = (15, 5))
plt.xlabel("x")
plt.ylabel("Silica Concentrate")
dataset['% Silica Concentrate'].plot()
###Output
_____no_output_____
###Markdown
Data NormalisationHere we are going to normalise all the features and transform the data into a supervised learning problem. The features to be predicted are removed, as we would like to predict just the *Silica Concentrate* (last element in every feature array). Transforming the data into a supervised learning problemThis step will involve framing the dataset as a **supervised learning problem**. As we would like to predict the "silica concentrate", we will set the corresponding column to be the output (label $y$).We would like to predict the silica concentrate ($y_t$) at the current time ($t$) given the measurements at the prior time steps (let's say $t-1, t-2, \dots t-n$, in which $n$ is the number of past observations to be used to forecast $y_t$).The function **create_window** (see _utils.py_) converts the time-series to a supervised learning problem. The new dataset is constructed as a **DataFrame**, with each column suitably named both by variable number and time step, for example, $var1(t-1)$ for **%Iron Feed** at the previous observation ($t-1$). This allows you to design a variety of different time step sequence type forecasting problems from a given univariate or multivariate time series.
###Code
# Scikit learn libraries
from sklearn.preprocessing import MinMaxScaler #Allows normalisation
# Convert the data to float
values = dataset.values.astype('float32')
# Normalise features
scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(values)
# Specify the number of lag
n_in = 5
n_features = 23
# Transform the time-series to a supervised learning problem representation
reframed = create_window(scaled, n_in = n_in, n_out = 1, drop_nan = True)
# Summarise the new frames (reframes)
print(reframed.head(1))
###Output
var1(t-5) var2(t-5) var3(t-5) var4(t-5) var5(t-5) var6(t-5) \
5 0.476715 0.502299 0.483124 0.665398 0.459145 0.641907
var7(t-5) var8(t-5) var9(t-5) var10(t-5) ... var14(t) var15(t) \
5 0.660635 0.377486 0.432691 0.4025 ... 0.382131 0.41283
var16(t) var17(t) var18(t) var19(t) var20(t) var21(t) var22(t) \
5 0.375913 0.4394 0.535916 0.559504 0.511396 0.516056 0.875677
var23(t)
5 0.114165
[1 rows x 138 columns]
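###Markdown
For reference, below is a minimal sketch of how such a windowing helper can be written with pandas `shift`. This is an assumption -- the real implementation lives in _utils.py_ and is not shown here -- so the hypothetical name `create_window_sketch` is used to avoid shadowing the imported function.
###Code
import pandas as pd

def create_window_sketch(data, n_in=1, n_out=1, drop_nan=True):
    # Frame a (multivariate) time series as a supervised learning table
    df = pd.DataFrame(data)
    n_vars = df.shape[1]
    cols, names = [], []
    # lagged inputs: t-n_in, ..., t-1
    for i in range(n_in, 0, -1):
        cols.append(df.shift(i))
        names += ['var%d(t-%d)' % (j + 1, i) for j in range(n_vars)]
    # outputs: t, t+1, ..., t+n_out-1
    for i in range(0, n_out):
        cols.append(df.shift(-i))
        names += ['var%d(t)' % (j + 1) if i == 0 else 'var%d(t+%d)' % (j + 1, i) for j in range(n_vars)]
    framed = pd.concat(cols, axis=1)
    framed.columns = names
    if drop_nan:
        framed.dropna(inplace=True)
    return framed
###Output
_____no_output_____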
###Markdown
Building the ModelsSo far, we have just preprocessed the dataset. Now, we are going to build the following sequential models: - [Gated Recurrent Unit (GRU)](gru) - [Long Short-Term Memory (LSTM)](lstm) The models consist of a **many_to_one** architecture, in which the input is a **sequence** of the past observations and the output is the predicted value (in this case with dimension equal to 1).
###Code
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.layers import LSTM, GRU
from sklearn.metrics import mean_squared_error # allows compute the mean square error to performance analysis
###Output
_____no_output_____
###Markdown
Splitting the Data into Train and Test Sets
###Code
# split into train and test sets
values = reframed.values
# We will use 80% of the data for training and 20% for testing
n_train = round(0.8 * dataset.shape[0])
train = values[:n_train, :]
test = values[n_train:, :]
# Split into input and outputs
n_obs = n_in * n_features # the number of total features is given by the number of past
# observations * number of features. In this case we have
# 5 past observations and 23 features, so the number of total
# features is 115.
x_train, y_train = train[:, :n_obs], train[:, n_features-1] # note that for y_train, we are removing
# just the last observation of the
# silica concentrate
x_test, y_test = test[:, :n_obs], test[:, n_features-1]
print('Number of total features (n_obs): ', x_train.shape[1])
print('Number of samples in training set: ', x_train.shape[0])
print('Number of samples in testing set: ', x_test.shape[0])
# Reshape input to be 3D [samples, timesteps, features]
x_train = x_train.reshape((x_train.shape[0], n_in, n_features))
x_test = x_test.reshape((x_test.shape[0], n_in, n_features))
###Output
Number of total features (n_obs): 115
Number of samples in training set: 80000
Number of samples in testing set: 19995
###Markdown
Gated Recurrent Unit (GRU)To build the model, we are going to use the following components from Keras: - [Sequential](https://keras.io/models/sequential/): allows us to create models layer-by-layer. - [GRU](https://keras.io/layers/recurrent/): provides a GRU architecture - [Dense](https://keras.io/layers/core/): provides a regular fully-connected layer - [Activation](https://keras.io/activations/): defines the activation function to be usedBasically, we can define the sequence of the model by using _Sequential()_:```python model = Sequential() model.add(GRU(...)) ...```where the function _add(...)_ stacks the layers. Once the model is created, we can configure the training by using the function [compile](https://keras.io/models/model/). Here we need to define the [loss](https://keras.io/losses/) function (mean squared error, mean absolute error, cosine proximity, among others) and the [optimizer](https://keras.io/optimizers/) (stochastic gradient descent, RMSprop, adam, among others), as follows:```python model.compile(loss = "...", optimizer = "...")```Also, we have the option to see a summary representation of the model by using the function [summary](https://keras.io/models/about-keras-models/about-keras-models). This function summarises the model and tells us the number of parameters that we need to tune.
###Code
# Define the model.
model_gru = Sequential()
# the input_shape is the number of past observations (n_in) and the number of features
# per past observations (23)
model_gru.add(GRU(input_shape=(x_train.shape[1], x_train.shape[2]),
units = 128,
return_sequences = False))
model_gru.add(Dense(units=1))
# We compile the model by defining the mean absolute error (denoted by mae) as loss function and
# adam as optimizer
model_gru.compile(loss = "mae",
optimizer = "adam")
# just print the model
model_gru.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
gru_1 (GRU) (None, 128) 58368
_________________________________________________________________
dense_1 (Dense) (None, 1) 129
=================================================================
Total params: 58,497
Trainable params: 58,497
Non-trainable params: 0
_________________________________________________________________
###Markdown
Training the ModelOnce the model is defined, we need to train it by using the function [fit](https://keras.io/models/model/). This function performs the optimisation step. Hence, we can define the following parameters such as: - batch size: defines the number of samples that will be propagated through the network - epochs: defines the number of times in which all the training set (x_train) is used once to update the weights - validation split: defines the percentage of training data to be used for validation - among others (click [here](https://keras.io/models/model/) for more information) This function returns the _history_ of the training, which can be used for further performance analysis.
###Code
# Training
hist_gru = model_gru.fit(x_train, y_train,
epochs=50,
batch_size=256,
validation_split = 0.1,
verbose=1, # To not print the output, set verbose=0
shuffle=False)
###Output
Train on 72000 samples, validate on 8000 samples
Epoch 1/50
72000/72000 [==============================] - 5s 73us/step - loss: 0.1246 - val_loss: 0.0610
Epoch 2/50
72000/72000 [==============================] - 5s 65us/step - loss: 0.0609 - val_loss: 0.0330
Epoch 3/50
72000/72000 [==============================] - 5s 66us/step - loss: 0.0368 - val_loss: 0.0550
Epoch 4/50
72000/72000 [==============================] - 5s 65us/step - loss: 0.0349 - val_loss: 0.0256
Epoch 5/50
72000/72000 [==============================] - 5s 67us/step - loss: 0.0457 - val_loss: 0.0380
Epoch 6/50
72000/72000 [==============================] - 5s 76us/step - loss: 0.0322 - val_loss: 0.0196
Epoch 7/50
72000/72000 [==============================] - 5s 76us/step - loss: 0.0295 - val_loss: 0.0235
Epoch 8/50
72000/72000 [==============================] - 5s 66us/step - loss: 0.0259 - val_loss: 0.0133
Epoch 9/50
72000/72000 [==============================] - 5s 66us/step - loss: 0.0327 - val_loss: 0.0318
Epoch 10/50
72000/72000 [==============================] - 5s 67us/step - loss: 0.0250 - val_loss: 0.0087
Epoch 11/50
72000/72000 [==============================] - 5s 65us/step - loss: 0.0182 - val_loss: 0.0354
Epoch 12/50
72000/72000 [==============================] - 5s 66us/step - loss: 0.0280 - val_loss: 0.0115
Epoch 13/50
72000/72000 [==============================] - 5s 67us/step - loss: 0.0201 - val_loss: 0.0375
Epoch 14/50
72000/72000 [==============================] - 5s 66us/step - loss: 0.0172 - val_loss: 0.0112
Epoch 15/50
72000/72000 [==============================] - 5s 68us/step - loss: 0.0250 - val_loss: 0.0432
Epoch 16/50
72000/72000 [==============================] - 5s 67us/step - loss: 0.0220 - val_loss: 0.0057
Epoch 17/50
72000/72000 [==============================] - 5s 68us/step - loss: 0.0237 - val_loss: 0.0082
Epoch 18/50
72000/72000 [==============================] - 5s 68us/step - loss: 0.0203 - val_loss: 0.0186
Epoch 19/50
72000/72000 [==============================] - 5s 70us/step - loss: 0.0209 - val_loss: 0.0199
Epoch 20/50
72000/72000 [==============================] - 5s 69us/step - loss: 0.0123 - val_loss: 0.0097
Epoch 21/50
72000/72000 [==============================] - 5s 68us/step - loss: 0.0170 - val_loss: 0.0251
Epoch 22/50
72000/72000 [==============================] - 5s 70us/step - loss: 0.0202 - val_loss: 0.0170
Epoch 23/50
72000/72000 [==============================] - 6s 85us/step - loss: 0.0218 - val_loss: 0.0394
Epoch 24/50
72000/72000 [==============================] - 5s 72us/step - loss: 0.0235 - val_loss: 0.0387
Epoch 25/50
72000/72000 [==============================] - 5s 71us/step - loss: 0.0161 - val_loss: 0.0155
Epoch 26/50
72000/72000 [==============================] - 5s 69us/step - loss: 0.0138 - val_loss: 0.0082
Epoch 27/50
72000/72000 [==============================] - 5s 70us/step - loss: 0.0142 - val_loss: 0.0238
Epoch 28/50
72000/72000 [==============================] - 5s 69us/step - loss: 0.0237 - val_loss: 0.0311
Epoch 29/50
72000/72000 [==============================] - 5s 71us/step - loss: 0.0235 - val_loss: 0.0070
Epoch 30/50
72000/72000 [==============================] - 5s 70us/step - loss: 0.0173 - val_loss: 0.0073
Epoch 31/50
72000/72000 [==============================] - 5s 70us/step - loss: 0.0111 - val_loss: 0.0067
Epoch 32/50
72000/72000 [==============================] - 5s 69us/step - loss: 0.0170 - val_loss: 0.0077
Epoch 33/50
72000/72000 [==============================] - 5s 69us/step - loss: 0.0200 - val_loss: 0.0391
Epoch 34/50
72000/72000 [==============================] - 5s 69us/step - loss: 0.0192 - val_loss: 0.0081
Epoch 35/50
72000/72000 [==============================] - 5s 68us/step - loss: 0.0199 - val_loss: 0.0131
Epoch 36/50
72000/72000 [==============================] - 5s 70us/step - loss: 0.0141 - val_loss: 0.0087
Epoch 37/50
72000/72000 [==============================] - 5s 69us/step - loss: 0.0118 - val_loss: 0.0297
Epoch 38/50
72000/72000 [==============================] - 6s 77us/step - loss: 0.0220 - val_loss: 0.0198
Epoch 39/50
72000/72000 [==============================] - 5s 73us/step - loss: 0.0162 - val_loss: 0.0150
Epoch 40/50
72000/72000 [==============================] - 5s 70us/step - loss: 0.0179 - val_loss: 0.0220
Epoch 41/50
72000/72000 [==============================] - 5s 70us/step - loss: 0.0182 - val_loss: 0.0085
Epoch 42/50
72000/72000 [==============================] - 5s 70us/step - loss: 0.0153 - val_loss: 0.0223
Epoch 43/50
72000/72000 [==============================] - 5s 70us/step - loss: 0.0201 - val_loss: 0.0200
Epoch 44/50
72000/72000 [==============================] - 5s 69us/step - loss: 0.0172 - val_loss: 0.0098
Epoch 45/50
72000/72000 [==============================] - 5s 70us/step - loss: 0.0135 - val_loss: 0.0083
Epoch 46/50
72000/72000 [==============================] - 5s 69us/step - loss: 0.0131 - val_loss: 0.0118
Epoch 47/50
72000/72000 [==============================] - 5s 75us/step - loss: 0.0118 - val_loss: 0.0062
Epoch 48/50
72000/72000 [==============================] - 5s 72us/step - loss: 0.0137 - val_loss: 0.0154
Epoch 49/50
72000/72000 [==============================] - 5s 69us/step - loss: 0.0160 - val_loss: 0.0096
Epoch 50/50
72000/72000 [==============================] - 5s 72us/step - loss: 0.0114 - val_loss: 0.0099
###Markdown
Prediction and Performance AnalysisHere we can see if the model overfits or underfits. First, we are going to plot the training and validation 'loss' from the training step.
###Code
plot_loss(hist_gru)
###Output
_____no_output_____
###Markdown
Once the model was trained, we can use the function [predict](https://keras.io/models/model/) for prediction tasks. We are going to use the function **inverse_transform** (see _utils.py_) to invert the scaling (transform the values to the original ones).Given the predictions and expected values in their original scale, we can then compute the error score for the model.
###Code
yhat_gru = model_gru.predict(x_test)
# performing the inverse transform on x_test and yhat_gru
inv_y_gru, inv_yhat_gru = inverse_transform_multiple(x_test, y_test, yhat_gru, scaler, n_in, n_features)
# calculate MSE
mse_gru = mean_squared_error(inv_y_gru, inv_yhat_gru)
print('Test MSE: %.3f' % mse_gru)
###Output
Test MSE: 0.017
###Markdown
Visualising the predicted Data
###Code
plot_comparison([inv_y_gru, inv_yhat_gru],
['Test_Y value','Predicted Value'],
title='Prediction Comparison')
plot_comparison([inv_y_gru[0:300], inv_yhat_gru[0:300]],
['Test_Y value', 'Predicted Value'],
title='Prediction Comparison of the first 300 Observations')
plot_comparison([inv_y_gru[5500:5800], inv_yhat_gru[5500:5800]],
['Test_Y value', 'Predicted Value'],
title='Prediction Comparison of a Farthest 300 Observations')
###Output
_____no_output_____
###Markdown
Long-Short Term Memory (LSTM)Now **you** are going to build the model based on LSTM. Like GRU, we are going to use the following components from Keras: - [Sequential](https://keras.io/models/sequential/): allows us to create models layer-by-layer. - [LSTM](https://keras.io/layers/recurrent/): provides an LSTM architecture - [Dense](https://keras.io/layers/core/): provides a regular fully-connected layer - [Activation](https://keras.io/activations/): defines the activation function to be usedBasically, you are going to define the sequence of the model by using _Sequential()_:```python model = Sequential() model.add(LSTM(...)) ...```and configure the training by using the function [compile](https://keras.io/models/model/):```python model.compile(loss = "...", optimizer = "...")```Follow the steps below for this task. **Step 1**: Create the model: 1) Define the number of layers (we suggest at this stage to use just one, but it is up to you) 2) Create the fully connected layer For example:```python Define the model.model_lstm = Sequential() Stacking just one LSTMmodel_lstm.add(LSTM(input_shape=(x_train.shape[1], x_train.shape[2]), units = 128, return_sequences = False)) Fully connected layermodel_lstm.add(Dense(units=1)) ``` **Step 2**: Configure the training: 1) Define the loss function (e.g., 'mae' for mean absolute error or 'mse' for mean squared error) 2) Define the optimiser (e.g., 'adam', 'rmsprop', 'sgd', 'adagrad', etc.) For example:```pythonmodel_lstm.compile(loss = "mae", optimizer = "adam")``` **Step 3:** Call the function ```pythonmodel_lstm.summary()```to summarise the model. **Step 4:** Define the number of epochs, validation_split and batch_size that best fit your model and call the function fit to train the model.For example:```pythonhist_lstm = model_lstm.fit(x_train, y_train, epochs=50, batch_size=256, validation_split = 0.1, verbose=1, shuffle=False) ``` Using the history Here we can see if the model overfits or underfits
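One possible completion of Steps 1-4, gathered into a single cell, is shown below. This is only a sketch that follows the examples above (mirroring the GRU model built earlier); any equivalent architecture is equally valid.
###Code
# Sketch of the LSTM model, following the steps above
model_lstm = Sequential()
model_lstm.add(LSTM(input_shape=(x_train.shape[1], x_train.shape[2]),
                    units = 128,
                    return_sequences = False))
model_lstm.add(Dense(units=1))
model_lstm.compile(loss = "mae", optimizer = "adam")
model_lstm.summary()
hist_lstm = model_lstm.fit(x_train, y_train,
                           epochs=50,
                           batch_size=256,
                           validation_split = 0.1,
                           verbose=1,
                           shuffle=False)
###Output
_____no_output_____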
###Code
plot_loss(hist_lstm)
yhat_lstm = model_lstm.predict(x_test)
# performing the inverse transform on x_test and yhat_lstm
inv_y_lstm, inv_yhat_lstm = inverse_transform_multiple(x_test, y_test, yhat_lstm, scaler, n_in, n_features)
# calculate MSE
mse_lstm = mean_squared_error(inv_y_lstm, inv_yhat_lstm)
print('Test MSE: %.3f' % mse_lstm)
###Output
_____no_output_____
###Markdown
Visualising the predicted Data
###Code
plot_comparison([inv_y_lstm, inv_yhat_lstm],
['Test_Y value','Predicted Value'],
title='Prediction Comparison')
plot_comparison([inv_y_lstm[0:300], inv_yhat_lstm[0:300]],
['Test_Y value', 'Predicted Value'],
title='Prediction Comparison of first 300 Observations')
plot_comparison([inv_y_lstm[5500:5800], inv_yhat_lstm[5500:5800]],
['Test_Y value', 'Predicted Value'],
title='Prediction Comparison of a Farthest 300 Observations')
###Output
_____no_output_____
###Markdown
Models comparison**Exercise**: run the code below and discuss the results.
###Code
plot_comparison([inv_y_lstm[0:300],
inv_yhat_gru[0:300], inv_yhat_lstm[0:300]],
['Original', 'GRU', 'LSTM'],
title='Prediction Comparison of the First 300 Observations')
plot_comparison([inv_y_lstm[5500:5800],
inv_yhat_gru[5500:5800], inv_yhat_lstm[5500:5800]],
['Original', 'GRU', 'LSTM'],
title='Prediction Comparison of a Farthest 300 Observations')
print('Comparing the MSE of the three models:')
print(' GRU: ', mse_gru)
print(' LSTM: ', mse_lstm)
###Output
_____no_output_____ |
02PredictionsForTimeIntervals.ipynb | ###Markdown
02 predictions for time intervalsRelating patient/attendance transactional dataframes to a daily or hourly dataframe requires a many-to-many link, which is not possible directly in ft. A linking dataframe can be used which does not contain unique keys, and can be joined on both sides with a many relationship.
###Code
import pandas as pd
import numpy as np
import featuretools as ft
from create_data import make_attendances_dataframe
df = make_attendances_dataframe(15)
###Output
_____no_output_____
###Markdown
create all dataframes we need
###Code
from create_data import make_timeindex_dataframe, make_HourlyTimeAttenNum_dataframe
df_ActiveVisits = make_HourlyTimeAttenNum_dataframe(df,'arrival_datetime','departure_datetime')
df_ActiveVisits.head()
df_hours = make_timeindex_dataframe(df,'hour','h')
df_hours.head(3)
df_days = make_timeindex_dataframe(df,'day','D')
df_days.head(3)
###Output
_____no_output_____
###Markdown
Make entity sets - as before
###Code
import featuretools.variable_types as vtypes
data_variable_types = {'atten_id': vtypes.Id,
'pat_id': vtypes.Id,
'arrival_datetime': vtypes.Datetime,
'time_in_department': vtypes.Numeric,
'departure_datetime': vtypes.Datetime,
'gender': vtypes.Boolean,
'ambulance_arrival': vtypes.Boolean}
es = ft.EntitySet('Hospital')
es = es.entity_from_dataframe(entity_id='attendances',
dataframe=df,
index='atten_id',
time_index='arrival_datetime',
secondary_time_index={'departure_datetime':['time_in_department']}, # dictionary here!
variable_types=data_variable_types)
###Output
_____no_output_____
###Markdown
make entity with each attendance and hour it is active
###Code
df_ActiveVisits.head(3)
# Make linking-es (active_visits)
es = es.entity_from_dataframe(entity_id='active_visits',
dataframe=df_ActiveVisits,
make_index=True,
index='index',
variable_types={'atten_id':vtypes.Id,
'hour':vtypes.Datetime})
###Output
_____no_output_____
###Markdown
make entity with hourly index
###Code
df_hours.head(3)
# Make hours eset
es = es.entity_from_dataframe(entity_id='hours',
dataframe=df_hours,
index='hour',
variable_types={'hour':vtypes.Datetime})
###Output
_____no_output_____
###Markdown
As we have made more entities with dataframes (and not normalised them from existing entities) we must explicitly tell the entity set the relationships:
###Code
# add es relationships
rel_Atten_ActiveVisits = ft.Relationship(es["attendances"]["atten_id"],
es["active_visits"]["atten_id"])
rel_Hours_ActiveVisits = ft.Relationship(es["hours"]["hour"],
es["active_visits"]["hour"])
es = es.add_relationships([rel_Atten_ActiveVisits,rel_Hours_ActiveVisits])
###Output
_____no_output_____
###Markdown
creating features for individual hoursNow we have all our entities linked, we can run DFS on the entity "hours". This will generate features like "COUNT(active_visits)" -> in other words -> "Occupancy" for that particular time of day.
###Code
fm, features = ft.dfs(entityset=es,
target_entity='hours',
# agg_primitives=[],
# trans_primitives=[],
verbose=True,
max_depth = 5)
fm.head(5)
features
fm.dropna(axis=1).head()
###Output
_____no_output_____
###Markdown
Further notes adding "Interesting" variables
###Code
df.head()
es["attendances"]["ambulance_arrival"].interesting_values = [True, False]
fm, features = ft.dfs(entityset=es,
target_entity='hours',
agg_primitives=['count','mean','percent_true'],
trans_primitives=[],
where_primitives=['count'],
verbose=True,
max_depth = 8)
fm.head(5)
ft.list_primitives()
###Output
_____no_output_____ |
keras-tutorials/1. MLP/2-Advanced-MLP-2.ipynb | ###Markdown
Advanced MLP - 2- Putting it altogether
###Code
import matplotlib.pyplot as plt
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from keras.datasets import mnist
from keras.wrappers.scikit_learn import KerasClassifier
from keras.datasets import mnist
from keras.models import Sequential
from keras.utils.np_utils import to_categorical
from keras.models import Sequential
from keras.layers import Activation, Dense, BatchNormalization, Dropout
from keras import optimizers
###Output
_____no_output_____
###Markdown
Load Dataset
###Code
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# reshaping X data: (n, 28, 28) => (n, 784)
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1] * X_train.shape[2]))
X_test = X_test.reshape((X_test.shape[0], X_test.shape[1] * X_test.shape[2]))
# We use all training data and validate on all test data
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
###Output
(60000, 784)
(10000, 784)
(60000,)
(10000,)
###Markdown
Training & Validating Model- Measures to improve training are applied simultaneously - More training set - Weight Initialization scheme - Nonlinearity (Activation function) - Optimizers: adaptive - Batch Normalization - Dropout (Regularization) - Model Ensemble
###Code
def mlp_model():
model = Sequential()
model.add(Dense(50, input_shape = (784, ), kernel_initializer='he_normal'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(50, kernel_initializer='he_normal'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(50, kernel_initializer='he_normal'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(50, kernel_initializer='he_normal'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(10, kernel_initializer='he_normal'))
model.add(Activation('softmax'))
adam = optimizers.Adam(lr = 0.001)
model.compile(optimizer = adam, loss = 'categorical_crossentropy', metrics = ['accuracy'])
return model
# create 5 models to ensemble
model1 = KerasClassifier(build_fn = mlp_model, epochs = 100)
model2 = KerasClassifier(build_fn = mlp_model, epochs = 100)
model3 = KerasClassifier(build_fn = mlp_model, epochs = 100)
model4 = KerasClassifier(build_fn = mlp_model, epochs = 100)
model5 = KerasClassifier(build_fn = mlp_model, epochs = 100)
ensemble_clf = VotingClassifier(estimators = [('model1', model1), ('model2', model2), ('model3', model3), ('model4', model4), ('model5', model5)], voting = 'soft')
ensemble_clf.fit(X_train, y_train)
y_pred = ensemble_clf.predict(X_test)
print('Acc: ', accuracy_score(y_pred, y_test))
###Output
Acc: 0.9801
|
011/exercise/socialads (1).ipynb | ###Markdown
**Hi - Welcome to the Decision trees exercise.** This is a reference notebook for the tasks given in the exercise section. The first half of this notebook is meant for data preprocessing; it is not required, but you are heavily encouraged to go over it and understand what is going on. The main task of the assignment is in the second half of the notebook. Run the cells below, which import all required libraries. Importing Libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
Importing datasetAnd viewing first and last rows
###Code
dataset = pd.read_csv("/content/Social_Network_Ads.csv")
dataset.head()
dataset.tail()
###Output
_____no_output_____
###Markdown
Performing Analysis and Checking For null values
###Code
dataset.shape
###Output
_____no_output_____
###Markdown
The dataset has three columns named Age, EstimatedSalary and Purchased. Age is the age of the viewer seeing the ads and EstimatedSalary is their salary. Purchased(0) signifies the viewer did not make a purchase, while Purchased(1) signifies he/she purchased it.
###Code
#lets check-out for null values
dataset.isnull().sum()
###Output
_____no_output_____
###Markdown
**Luckily, in our dataset we don't have any null (empty) values.** Please note that in most projects you will get null values which you need to address and remove first, before moving forward.
###Code
#let's view the statistical information regarding the dataset
dataset.describe()
sns.countplot(dataset["Purchased"])
###Output
/usr/local/lib/python3.7/dist-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
FutureWarning
###Markdown
We can see that around 1/3 of the viewers actually made a purchase after seeing the ad. Let's try analysing the data by visualisation.
###Code
plt.figure(figsize=(20,10))
dataset.groupby(['Age'])['Purchased'].count().plot.bar()
#plt.ylabel('Avg Salary')
#plt.title("Top 10 Highest Paying Jobs",fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
Proceed with any different visual data analysis in the next two cells (Optional but encouraged)
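One possible sketch is shown below (an assumption -- any exploratory plot of your own works just as well):
###Code
# Hypothetical example: how age and salary relate to the purchase decision
plt.figure(figsize=(10, 5))
sns.scatterplot(x='Age', y='EstimatedSalary', hue='Purchased', data=dataset)
plt.title('Purchases by age and estimated salary')
plt.show()
###Output
_____no_output_____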
###Code
###Output
_____no_output_____
###Markdown
After you have done the visualisation and analysis, try to write some concluding points you understood from the data, like which of the two factors the output depends on most, whether there is any particular trend, etc. **Splitting the dataset into dependent and independent variables.** Here age and estimated salary are the two independent variables (x) and Purchased (y) is the dependent variable, based on the two factors age and estimated salary.
###Code
y= dataset.pop("Purchased")
x= dataset
###Output
_____no_output_____
###Markdown
Now let's split the dataset into training and testing sets for the model.
###Code
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.25, random_state=3)
print(x_train.shape)
print(x_test.shape)
print(y_train.shape)
print(y_test.shape)
###Output
(300, 2)
(100, 2)
(300,)
(100,)
###Markdown
Feature ScalingBefore feeding and fitting, we'll normalise the range of the independent data (the features) for better results. **Task 1**: your task here is to use any type of feature scaling method from sklearn and standardise or normalise the input features for both the training and the testing data. Then print them to see how they changed compared to when there wasn't any scaling.
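One possible sketch is shown below (an assumption -- StandardScaler is used here, and the results are stored in `sc`, `X_train` and `X_test` because the cells further down refer to those names):
###Code
# Hypothetical Task 1 sketch: standardise the features with StandardScaler
from sklearn.preprocessing import StandardScaler

sc = StandardScaler()
X_train = sc.fit_transform(x_train)   # fit the scaler on the training data only
X_test = sc.transform(x_test)         # reuse the training statistics on the test data
print(X_train[:3])
print(X_test[:3])
###Output
_____no_output_____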
###Code
###Output
_____no_output_____
###Markdown
Building model
###Code
from sklearn.tree import DecisionTreeClassifier
classifier = DecisionTreeClassifier(criterion = 'entropy', random_state = 0)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
y_pred
y_test
round(np.mean(y_pred), 2)
###Output
_____no_output_____
###Markdown
Model Evaluation: **Task 2**: Your next task is to check the accuracy of the model on both the training and the testing data. Try to implement the score or accuracy function (method), and also find the cross-validation score of the model.
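A possible sketch (an assumption -- it uses sklearn's accuracy_score and cross_val_score on the fitted classifier):
###Code
# Hypothetical Task 2 sketch: accuracy on train/test data plus a cross-validation score
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score

print('Training accuracy:', classifier.score(X_train, y_train))
print('Testing accuracy :', accuracy_score(y_test, classifier.predict(X_test)))
print('Cross-validation accuracy:', cross_val_score(classifier, X_train, y_train, cv=5).mean())
###Output
_____no_output_____
###Markdown
Checking results visually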
###Code
from matplotlib.colors import ListedColormap
X_set, y_set = sc.inverse_transform(X_train), y_train
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 10, stop = X_set[:, 0].max() + 10, step = 0.25),
np.arange(start = X_set[:, 1].min() - 1000, stop = X_set[:, 1].max() + 1000, step = 0.25))
plt.contourf(X1, X2, classifier.predict(sc.transform(np.array([X1.ravel(), X2.ravel()]).T)).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Decision Tree Classification (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
###Output
*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
###Markdown
**Task 3**: Your third and final task is to use the same method as in the above cell and try to visualise the result for the test set.
###Code
###Output
_____no_output_____ |
probability/06_correlation_covariance.ipynb | ###Markdown
We are given the salaries of bank borrowers (zp) and their behavioural credit scores (ks): zp = [35, 45, 190, 200, 40, 70, 54, 150, 120, 110], ks = [401, 574, 874, 919, 459, 739, 653, 902, 746, 832]. Find the covariance of these two quantities using elementary operations, and then using the cov function from numpy. The resulting values should be equal. Then find the Pearson correlation coefficient using the covariance and the standard deviations of the two features, and then using functions from the numpy and pandas libraries.
###Code
import numpy as np
zp = np.array([35, 45, 190, 200, 40, 70, 54, 150, 120, 110])
ks = np.array([401, 574, 874, 919, 459, 739, 653, 902, 746, 832])
cov = np.mean(zp * ks) - np.mean(zp) * np.mean(ks)
# func
cov_func = np.cov(ks, zp, ddof=0)
c = np.std(zp)
d = np.std(ks)
corr = cov / (c*d)
# func
corr_func = np.corrcoef(zp, ks)
print(cov,';',corr)
###Output
9157.839999999997 ; 0.8874900920739158
###Markdown
The IQ values of a sample of students studying at local technical universities were measured: 131, 125, 115, 122, 131, 115, 107, 99, 125, 111. It is known that IQ is normally distributed in the population. Find the confidence interval for the expected value with a confidence level of 0.95.
###Code
iq = np.array([131, 125, 115, 122, 131, 115, 107, 99, 125, 111])
height_std = iq.std(ddof=1)
m = iq.mean()
int1 = m - (2.262 * height_std / 10**0.5)
int2 = m + (2.262 * height_std / 10**0.5)
print('[' , int1, ';' , int2,']')
###Output
[ 110.55660776308164 ; 125.64339223691834 ]
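###Markdown
The same interval can be cross-checked with scipy (an assumption: scipy is available in this environment), which looks up the t critical value instead of hard-coding 2.262:
###Code
from scipy import stats
# 95% confidence interval for the mean with unknown population variance (t-distribution)
print(stats.t.interval(0.95, len(iq) - 1, loc=iq.mean(), scale=iq.std(ddof=1) / len(iq) ** 0.5))
###Output
_____no_output_____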
###Markdown
It is known that the height of football players on the national team is normally distributed with a population variance of 25 sq. cm. The sample size is 27 and the sample mean is 174.2. Find the confidence interval for the expected value with a confidence level of 0.95.
###Code
int2 = 174.2 + (1.96 * 5/27**0.5)
int1 = 174.2 - (1.96 * 5/27**0.5)
print('[' , int1, ';' , int2,']')
###Output
[ 172.31398912064722 ; 176.08601087935276 ]
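###Markdown
Again, this can be cross-checked with scipy (an assumption: scipy is available), using the normal distribution since the population variance is known:
###Code
from scipy import stats
# 95% confidence interval for the mean with known population variance (z-interval)
print(stats.norm.interval(0.95, loc=174.2, scale=5 / 27 ** 0.5))
###Output
_____no_output_____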
|
udemy_ml_bootcamp/Machine Learning Sections/Linear-Regression/Linear Regression with Python.ipynb | ###Markdown
___ ___ Linear Regression with Python** This is mostly just code for reference. Please watch the video lecture for more info behind all of this code.**Your neighbor is a real estate agent and wants some help predicting housing prices for regions in the USA. It would be great if you could somehow create a model for her that allows her to put in a few features of a house and returns back an estimate of what the house would sell for.She has asked you if you could help her out with your new data science skills. You say yes, and decide that Linear Regression might be a good path to solve this problem!Your neighbor then gives you some information about a bunch of houses in regions of the United States,it is all in the data set: USA_Housing.csv.The data contains the following columns:* 'Avg. Area Income': Avg. Income of residents of the city house is located in.* 'Avg. Area House Age': Avg Age of Houses in same city* 'Avg. Area Number of Rooms': Avg Number of Rooms for Houses in same city* 'Avg. Area Number of Bedrooms': Avg Number of Bedrooms for Houses in same city* 'Area Population': Population of city house is located in* 'Price': Price that the house sold at* 'Address': Address for the house **Let's get started!** Check out the dataWe've been able to get some data from your neighbor for housing prices as a csv set, let's get our environment ready with the libraries we'll need and then import the data! Import Libraries
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
Check out the Data
###Code
USAhousing = pd.read_csv('USA_Housing.csv')
USAhousing.head()
USAhousing.info()
USAhousing.describe()
USAhousing.columns
###Output
_____no_output_____
###Markdown
EDALet's create some simple plots to check out the data!
###Code
sns.pairplot(USAhousing)
sns.distplot(USAhousing['Price'])
sns.heatmap(USAhousing.corr())
###Output
_____no_output_____
###Markdown
Training a Linear Regression ModelLet's now begin to train our regression model! We will need to first split up our data into an X array that contains the features to train on, and a y array with the target variable, in this case the Price column. We will toss out the Address column because it only has text info that the linear regression model can't use. X and y arrays
###Code
X = USAhousing[['Avg. Area Income', 'Avg. Area House Age', 'Avg. Area Number of Rooms',
'Avg. Area Number of Bedrooms', 'Area Population']]
y = USAhousing['Price']
###Output
_____no_output_____
###Markdown
Train Test SplitNow let's split the data into a training set and a testing set. We will train our model on the training set and then use the test set to evaluate the model.
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=101)
###Output
_____no_output_____
###Markdown
Creating and Training the Model
###Code
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
lm.fit(X_train,y_train)
###Output
_____no_output_____
###Markdown
Model EvaluationLet's evaluate the model by checking out its coefficients and how we can interpret them.
###Code
# print the intercept
print(lm.intercept_)
coeff_df = pd.DataFrame(lm.coef_,X.columns,columns=['Coefficient'])
coeff_df
###Output
_____no_output_____
###Markdown
Interpreting the coefficients:- Holding all other features fixed, a 1 unit increase in **Avg. Area Income** is associated with an **increase of \$21.52 **.- Holding all other features fixed, a 1 unit increase in **Avg. Area House Age** is associated with an **increase of \$164883.28 **.- Holding all other features fixed, a 1 unit increase in **Avg. Area Number of Rooms** is associated with an **increase of \$122368.67 **.- Holding all other features fixed, a 1 unit increase in **Avg. Area Number of Bedrooms** is associated with an **increase of \$2233.80 **.- Holding all other features fixed, a 1 unit increase in **Area Population** is associated with an **increase of \$15.15 **.Does this make sense? Probably not because I made up this data. If you want real data to repeat this sort of analysis, check out the [boston dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_boston.html): from sklearn.datasets import load_boston boston = load_boston() print(boston.DESCR) boston_df = boston.data Predictions from our ModelLet's grab predictions off our test set and see how well it did!
###Code
predictions = lm.predict(X_test)
plt.scatter(y_test,predictions)
###Output
_____no_output_____
###Markdown
**Residual Histogram**
###Code
sns.distplot((y_test-predictions),bins=50);
###Output
_____no_output_____
###Markdown
Regression Evaluation MetricsHere are three common evaluation metrics for regression problems:**Mean Absolute Error** (MAE) is the mean of the absolute value of the errors:$$\frac 1n\sum_{i=1}^n|y_i-\hat{y}_i|$$**Mean Squared Error** (MSE) is the mean of the squared errors:$$\frac 1n\sum_{i=1}^n(y_i-\hat{y}_i)^2$$**Root Mean Squared Error** (RMSE) is the square root of the mean of the squared errors:$$\sqrt{\frac 1n\sum_{i=1}^n(y_i-\hat{y}_i)^2}$$Comparing these metrics:- **MAE** is the easiest to understand, because it's the average error.- **MSE** is more popular than MAE, because MSE "punishes" larger errors, which tends to be useful in the real world.- **RMSE** is even more popular than MSE, because RMSE is interpretable in the "y" units.All of these are **loss functions**, because we want to minimize them.
###Code
from sklearn import metrics
print('MAE:', metrics.mean_absolute_error(y_test, predictions))
print('MSE:', metrics.mean_squared_error(y_test, predictions))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, predictions)))
###Output
MAE: 82288.2225191
MSE: 10460958907.2
RMSE: 102278.829223
|
Akash_Spacy_001.ipynb | ###Markdown
###Code
!pip install spacy
!python -m spacy download en_core_web_sm
import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is looking at buying U.K. startup for $1 billion")
for token in doc:
print(token.text, token.pos_, token.dep_)
#tokenization created by [email protected]
import spacy
a = spacy.load("en_core_web_sm")
doc = a("Apple is looking at buying U.K. startup for $1 billion")
for token in doc:
print(token.text)
import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is looking at buying U.K. startup for $1 billion")
for token in doc:
print(token.text, token.lemma_, token.pos_, token.tag_, token.dep_,
token.shape_, token.is_alpha, token.is_stop)
import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is looking at buying U.K. startup for $1 billion")
for ent in doc.ents:
print(ent.text, ent.start_char, ent.end_char, ent.label_)
###Output
_____no_output_____ |
Autonomous+driving v3.ipynb | ###Markdown
Autonomous driving - Car detectionWelcome to your week 3 programming assignment. You will learn about object detection using the very powerful YOLO model. Many of the ideas in this notebook are described in the two YOLO papers: Redmon et al., 2016 (https://arxiv.org/abs/1506.02640) and Redmon and Farhadi, 2016 (https://arxiv.org/abs/1612.08242). **You will learn to**:- Use object detection on a car detection dataset- Deal with bounding boxesRun the following cell to load the packages and dependencies that are going to be useful for your journey!
###Code
import argparse
import os
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import scipy.io
import scipy.misc
import numpy as np
import pandas as pd
import PIL
import tensorflow as tf
from keras import backend as K
from keras.layers import Input, Lambda, Conv2D
from keras.models import load_model, Model
from yolo_utils import read_classes, read_anchors, generate_colors, preprocess_image, draw_boxes, scale_boxes
from yad2k.models.keras_yolo import yolo_head, yolo_boxes_to_corners, preprocess_true_boxes, yolo_loss, yolo_body
%matplotlib inline
###Output
Using TensorFlow backend.
###Markdown
**Important Note**: As you can see, we import Keras's backend as K. This means that to use a Keras function in this notebook, you will need to write: `K.function(...)`. 1 - Problem StatementYou are working on a self-driving car. As a critical component of this project, you'd like to first build a car detection system. To collect data, you've mounted a camera to the hood (meaning the front) of the car, which takes pictures of the road ahead every few seconds while you drive around. Pictures taken from a car-mounted camera while driving around Silicon Valley. We would like to especially thank [drive.ai](https://www.drive.ai/) for providing this dataset! Drive.ai is a company building the brains of self-driving vehicles.You've gathered all these images into a folder and have labelled them by drawing bounding boxes around every car you found. Here's an example of what your bounding boxes look like. **Figure 1** : **Definition of a box** If you have 80 classes that you want YOLO to recognize, you can represent the class label $c$ either as an integer from 1 to 80, or as an 80-dimensional vector (with 80 numbers) one component of which is 1 and the rest of which are 0. The video lectures had used the latter representation; in this notebook, we will use both representations, depending on which is more convenient for a particular step. In this exercise, you will learn how YOLO works, then apply it to car detection. Because the YOLO model is very computationally expensive to train, we will load pre-trained weights for you to use. 2 - YOLO YOLO ("you only look once") is a popular algoritm because it achieves high accuracy while also being able to run in real-time. This algorithm "only looks once" at the image in the sense that it requires only one forward propagation pass through the network to make predictions. After non-max suppression, it then outputs recognized objects together with the bounding boxes. 2.1 - Model detailsFirst things to know:- The **input** is a batch of images of shape (m, 608, 608, 3)- The **output** is a list of bounding boxes along with the recognized classes. Each bounding box is represented by 6 numbers $(p_c, b_x, b_y, b_h, b_w, c)$ as explained above. If you expand $c$ into an 80-dimensional vector, each bounding box is then represented by 85 numbers. We will use 5 anchor boxes. So you can think of the YOLO architecture as the following: IMAGE (m, 608, 608, 3) -> DEEP CNN -> ENCODING (m, 19, 19, 5, 85).Lets look in greater detail at what this encoding represents. **Figure 2** : **Encoding architecture for YOLO** If the center/midpoint of an object falls into a grid cell, that grid cell is responsible for detecting that object. Since we are using 5 anchor boxes, each of the 19 x19 cells thus encodes information about 5 boxes. Anchor boxes are defined only by their width and height.For simplicity, we will flatten the last two last dimensions of the shape (19, 19, 5, 85) encoding. So the output of the Deep CNN is (19, 19, 425). **Figure 3** : **Flattening the last two last dimensions** Now, for each box (of each cell) we will compute the following elementwise product and extract a probability that the box contains a certain class. **Figure 4** : **Find the class detected by each box** Here's one way to visualize what YOLO is predicting on an image:- For each of the 19x19 grid cells, find the maximum of the probability scores (taking a max across both the 5 anchor boxes and across different classes). 
- Color that grid cell according to what object that grid cell considers the most likely.Doing this results in this picture: **Figure 5** : Each of the 19x19 grid cells colored according to which class has the largest predicted probability in that cell. Note that this visualization isn't a core part of the YOLO algorithm itself for making predictions; it's just a nice way of visualizing an intermediate result of the algorithm. Another way to visualize YOLO's output is to plot the bounding boxes that it outputs. Doing that results in a visualization like this: **Figure 6** : Each cell gives you 5 boxes. In total, the model predicts: 19x19x5 = 1805 boxes just by looking once at the image (one forward pass through the network)! Different colors denote different classes. In the figure above, we plotted only boxes that the model had assigned a high probability to, but this is still too many boxes. You'd like to filter the algorithm's output down to a much smaller number of detected objects. To do so, you'll use non-max suppression. Specifically, you'll carry out these steps: - Get rid of boxes with a low score (meaning, the box is not very confident about detecting a class)- Select only one box when several boxes overlap with each other and detect the same object. 2.2 - Filtering with a threshold on class scoresYou are going to apply a first filter by thresholding. You would like to get rid of any box for which the class "score" is less than a chosen threshold. The model gives you a total of 19x19x5x85 numbers, with each box described by 85 numbers. It'll be convenient to rearrange the (19,19,5,85) (or (19,19,425)) dimensional tensor into the following variables: - `box_confidence`: tensor of shape $(19 \times 19, 5, 1)$ containing $p_c$ (confidence probability that there's some object) for each of the 5 boxes predicted in each of the 19x19 cells.- `boxes`: tensor of shape $(19 \times 19, 5, 4)$ containing $(b_x, b_y, b_h, b_w)$ for each of the 5 boxes per cell.- `box_class_probs`: tensor of shape $(19 \times 19, 5, 80)$ containing the detection probabilities $(c_1, c_2, ... c_{80})$ for each of the 80 classes for each of the 5 boxes per cell.**Exercise**: Implement `yolo_filter_boxes()`.1. Compute box scores by doing the elementwise product as described in Figure 4. The following code may help you choose the right operator: ```pythona = np.random.randn(19*19, 5, 1)b = np.random.randn(19*19, 5, 80)c = a * b shape of c will be (19*19, 5, 80)```2. For each box, find: - the index of the class with the maximum box score ([Hint](https://keras.io/backend/argmax)) (Be careful with what axis you choose; consider using axis=-1) - the corresponding box score ([Hint](https://keras.io/backend/max)) (Be careful with what axis you choose; consider using axis=-1)3. Create a mask by using a threshold. As a reminder: `([0.9, 0.3, 0.4, 0.5, 0.1] < 0.4)` returns: `[False, True, False, False, True]`. The mask should be True for the boxes you want to keep. 4. Use TensorFlow to apply the mask to box_class_scores, boxes and box_classes to filter out the boxes we don't want. You should be left with just the subset of boxes you want to keep. ([Hint](https://www.tensorflow.org/api_docs/python/tf/boolean_mask))Reminder: to call a Keras function, you should use `K.function(...)`.
###Code
# GRADED FUNCTION: yolo_filter_boxes
def yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = .6):
"""Filters YOLO boxes by thresholding on object and class confidence.
Arguments:
box_confidence -- tensor of shape (19, 19, 5, 1)
boxes -- tensor of shape (19, 19, 5, 4)
box_class_probs -- tensor of shape (19, 19, 5, 80)
threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
Returns:
scores -- tensor of shape (None,), containing the class probability score for selected boxes
boxes -- tensor of shape (None, 4), containing (b_x, b_y, b_h, b_w) coordinates of selected boxes
classes -- tensor of shape (None,), containing the index of the class detected by the selected boxes
Note: "None" is here because you don't know the exact number of selected boxes, as it depends on the threshold.
For example, the actual output size of scores would be (10,) if there are 10 boxes.
"""
# Step 1: Compute box scores
### START CODE HERE ### (≈ 1 line)
    box_scores = box_confidence * box_class_probs  # (19, 19, 5, 1) * (19, 19, 5, 80): broadcasting gives (19, 19, 5, 80)
### END CODE HERE ###
# Step 2: Find the box_classes thanks to the max box_scores, keep track of the corresponding score
### START CODE HERE ### (≈ 2 lines)
    box_classes = K.argmax(box_scores, axis=-1)    # (19, 19, 5): index of the highest-scoring class for each box
    box_class_scores = K.max(box_scores, axis=-1)  # (19, 19, 5): score of that highest-scoring class
### END CODE HERE ###
# Step 3: Create a filtering mask based on "box_class_scores" by using "threshold". The mask should have the
# same dimension as box_class_scores, and be True for the boxes you want to keep (with probability >= threshold)
### START CODE HERE ### (≈ 1 line)
filtering_mask = box_class_scores >= threshold
### END CODE HERE ###
# Step 4: Apply the mask to scores, boxes and classes
### START CODE HERE ### (≈ 3 lines)
scores = tf.boolean_mask(box_class_scores, filtering_mask, name='scores')
boxes = tf.boolean_mask(boxes, filtering_mask, name='boxes')
classes = tf.boolean_mask(box_classes, filtering_mask, name="classes")
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_a:
box_confidence = tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([19, 19, 5, 4], mean=1, stddev=4, seed = 1)
box_class_probs = tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = 0.5)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.shape))
print("boxes.shape = " + str(boxes.shape))
print("classes.shape = " + str(classes.shape))
###Output
scores[2] = 10.7506
boxes[2] = [ 8.42653275 3.27136683 -0.5313437 -4.94137383]
classes[2] = 7
scores.shape = (?,)
boxes.shape = (?, 4)
classes.shape = (?,)
###Markdown
**Expected Output**: **scores[2]** 10.7506 **boxes[2]** [ 8.42653275 3.27136683 -0.5313437 -4.94137383] **classes[2]** 7 **scores.shape** (?,) **boxes.shape** (?, 4) **classes.shape** (?,) 2.3 - Non-max suppression Even after filtering by thresholding over the class scores, you still end up with a lot of overlapping boxes. A second filter for selecting the right boxes is called non-maximum suppression (NMS). **Figure 7** : In this example, the model has predicted 3 cars, but it's actually 3 predictions of the same car. Running non-max suppression (NMS) will select only the most accurate (highest probability) one of the 3 boxes. Non-max suppression uses the very important function called **"Intersection over Union"**, or IoU. **Figure 8** : Definition of "Intersection over Union". **Exercise**: Implement iou(). Some hints:- In this exercise only, we define a box using its two corners (upper left and lower right): `(x1, y1, x2, y2)` rather than the midpoint and height/width.- To calculate the area of a rectangle you need to multiply its height `(y2 - y1)` by its width `(x2 - x1)`.- You'll also need to find the coordinates `(xi1, yi1, xi2, yi2)` of the intersection of two boxes. Remember that: - xi1 = maximum of the x1 coordinates of the two boxes - yi1 = maximum of the y1 coordinates of the two boxes - xi2 = minimum of the x2 coordinates of the two boxes - yi2 = minimum of the y2 coordinates of the two boxes- In order to compute the intersection area, you need to make sure the height and width of the intersection are positive, otherwise the intersection area should be zero. Use `max(height, 0)` and `max(width, 0)`. In this code, we use the convention that (0,0) is the top-left corner of an image, (1,0) is the upper-right corner, and (1,1) the lower-right corner.
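To make the definition concrete, take the test case used below: `box1 = (2, 1, 4, 3)` and `box2 = (1, 2, 3, 4)`. Their intersection is the unit square with corners (2, 2) and (3, 3), so the intersection area is 1; each box has area 2 x 2 = 4, so the union area is 4 + 4 - 1 = 7 and IoU = 1/7 ≈ 0.1429, which is exactly the value printed by the test cell.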
###Code
# GRADED FUNCTION: iou
def iou(box1, box2):
"""Implement the intersection over union (IoU) between box1 and box2
Arguments:
box1 -- first box, list object with coordinates (x1, y1, x2, y2)
box2 -- second box, list object with coordinates (x1, y1, x2, y2)
"""
# Calculate the (y1, x1, y2, x2) coordinates of the intersection of box1 and box2. Calculate its Area.
### START CODE HERE ### (≈ 5 lines)
xi1 = max(box1[0], box2[0])
yi1 = max(box1[1], box2[1])
xi2 = min(box1[2], box2[2])
yi2 = min(box1[3], box2[3])
    inter_area = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)  # clamp width and height to zero when the boxes don't overlap
### END CODE HERE ###
# Calculate the Union area by using Formula: Union(A,B) = A + B - Inter(A,B)
### START CODE HERE ### (≈ 3 lines)
    box1_area = (box1[2] - box1[0]) * (box1[3] - box1[1])
    box2_area = (box2[2] - box2[0]) * (box2[3] - box2[1])
    union_area = box1_area + box2_area - inter_area
### END CODE HERE ###
# compute the IoU
### START CODE HERE ### (≈ 1 line)
iou = inter_area / union_area
### END CODE HERE ###
return iou
box1 = (2, 1, 4, 3)
box2 = (1, 2, 3, 4)
print("iou = " + str(iou(box1, box2)))
###Output
iou = 0.14285714285714285
###Markdown
**Expected Output**: **iou = ** 0.14285714285714285 You are now ready to implement non-max suppression. The key steps are: 1. Select the box that has the highest score.2. Compute its overlap with all other boxes, and remove boxes that overlap it more than `iou_threshold`.3. Go back to step 1 and iterate until there are no more boxes with a lower score than the currently selected box. This will remove all boxes that have a large overlap with the selected boxes. Only the "best" boxes remain.**Exercise**: Implement yolo_non_max_suppression() using TensorFlow. TensorFlow has two built-in functions that are used to implement non-max suppression (so you don't actually need to use your `iou()` implementation):- [tf.image.non_max_suppression()](https://www.tensorflow.org/api_docs/python/tf/image/non_max_suppression)- [K.gather()](https://www.tensorflow.org/api_docs/python/tf/gather)
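If you want a concrete, standalone sanity check of `tf.image.non_max_suppression` before using it in the graded function, here is a small sketch on hand-made boxes (the values are made up for illustration):

```python
import tensorflow as tf

boxes = tf.constant([[0., 0., 1., 1.],    # box A
                     [0., 0., 0.9, 0.9],  # box B, heavily overlaps A
                     [2., 2., 3., 3.]])   # box C, disjoint from A and B
scores = tf.constant([0.9, 0.8, 0.7])
keep = tf.image.non_max_suppression(boxes, scores, max_output_size=10, iou_threshold=0.5)
with tf.Session() as sess:
    print(sess.run(keep))  # -> [0 2]: B is suppressed because IoU(A, B) = 0.81 > 0.5
```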
###Code
# GRADED FUNCTION: yolo_non_max_suppression
def yolo_non_max_suppression(scores, boxes, classes, max_boxes = 10, iou_threshold = 0.5):
"""
Applies Non-max suppression (NMS) to set of boxes
Arguments:
scores -- tensor of shape (None,), output of yolo_filter_boxes()
boxes -- tensor of shape (None, 4), output of yolo_filter_boxes() that have been scaled to the image size (see later)
classes -- tensor of shape (None,), output of yolo_filter_boxes()
max_boxes -- integer, maximum number of predicted boxes you'd like
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (, None), predicted score for each box
boxes -- tensor of shape (4, None), predicted box coordinates
classes -- tensor of shape (, None), predicted class for each box
Note: The "None" dimension of the output tensors has obviously to be less than max_boxes. Note also that this
function will transpose the shapes of scores, boxes, classes. This is made for convenience.
"""
max_boxes_tensor = K.variable(max_boxes, dtype='int32') # tensor to be used in tf.image.non_max_suppression()
K.get_session().run(tf.variables_initializer([max_boxes_tensor])) # initialize variable max_boxes_tensor
# Use tf.image.non_max_suppression() to get the list of indices corresponding to boxes you keep
### START CODE HERE ### (≈ 1 line)
    nms_indices = tf.image.non_max_suppression(boxes, scores, max_boxes_tensor, iou_threshold, name="nms_indices")
### END CODE HERE ###
# Use K.gather() to select only nms_indices from scores, boxes and classes
### START CODE HERE ### (≈ 3 lines)
scores = tf.gather(scores, nms_indices)
boxes = tf.gather(boxes, nms_indices)
classes = tf.gather(classes, nms_indices)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
scores = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([54, 4], mean=1, stddev=4, seed = 1)
classes = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
###Output
scores[2] = 6.9384
boxes[2] = [-5.299932 3.13798141 4.45036697 0.95942086]
classes[2] = -2.24527
scores.shape = (10,)
boxes.shape = (10, 4)
classes.shape = (10,)
###Markdown
**Expected Output**: **scores[2]** 6.9384 **boxes[2]** [-5.299932 3.13798141 4.45036697 0.95942086] **classes[2]** -2.24527 **scores.shape** (10,) **boxes.shape** (10, 4) **classes.shape** (10,) 2.4 Wrapping up the filtering It's time to implement a function taking the output of the deep CNN (the 19x19x5x85 dimensional encoding) and filtering through all the boxes using the functions you've just implemented. **Exercise**: Implement `yolo_eval()` which takes the output of the YOLO encoding and filters the boxes using score threshold and NMS. There's just one last implementation detail you have to know. There are a few ways of representing boxes, such as via their corners or via their midpoint and height/width. YOLO converts between a few such formats at different times, using the following functions (which we have provided): ```pythonboxes = yolo_boxes_to_corners(box_xy, box_wh) ```which converts the yolo box coordinates (x,y,w,h) to box corners' coordinates (x1, y1, x2, y2) to fit the input of `yolo_filter_boxes` ```pythonboxes = scale_boxes(boxes, image_shape)```YOLO's network was trained to run on 608x608 images. If you are testing this data on a different size image--for example, the car detection dataset had 720x1280 images--this step rescales the boxes so that they can be plotted on top of the original 720x1280 image. Don't worry about these two functions; we'll show you where they need to be called.
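For reference, the conversion performed by `yolo_boxes_to_corners` is the usual midpoint-to-corner arithmetic: for a box with midpoint $(b_x, b_y)$ and size $(b_w, b_h)$, the corners are $(x_1, y_1) = (b_x - \frac{b_w}{2},\, b_y - \frac{b_h}{2})$ and $(x_2, y_2) = (b_x + \frac{b_w}{2},\, b_y + \frac{b_h}{2})$. The provided helper also handles the tensor axes and the exact output ordering for you, so treat this as the underlying idea rather than its precise implementation.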
###Code
# GRADED FUNCTION: yolo_eval
def yolo_eval(yolo_outputs, image_shape = (720., 1280.), max_boxes=10, score_threshold=.6, iou_threshold=.5):
"""
Converts the output of YOLO encoding (a lot of boxes) to your predicted boxes along with their scores, box coordinates and classes.
Arguments:
yolo_outputs -- output of the encoding model (for image_shape of (608, 608, 3)), contains 4 tensors:
box_confidence: tensor of shape (None, 19, 19, 5, 1)
box_xy: tensor of shape (None, 19, 19, 5, 2)
box_wh: tensor of shape (None, 19, 19, 5, 2)
box_class_probs: tensor of shape (None, 19, 19, 5, 80)
image_shape -- tensor of shape (2,) containing the input shape, in this notebook we use (608., 608.) (has to be float32 dtype)
max_boxes -- integer, maximum number of predicted boxes you'd like
score_threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (None, ), predicted score for each box
boxes -- tensor of shape (None, 4), predicted box coordinates
classes -- tensor of shape (None,), predicted class for each box
"""
### START CODE HERE ###
# Retrieve outputs of the YOLO model (≈1 line)
box_confidence, box_xy, box_wh, box_class_probs = yolo_outputs
# Convert boxes to be ready for filtering functions
boxes = yolo_boxes_to_corners(box_xy, box_wh)
# Use one of the functions you've implemented to perform Score-filtering with a threshold of score_threshold (≈1 line)
scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, score_threshold)
# Scale boxes back to original image shape.
boxes = scale_boxes(boxes, image_shape)
# Use one of the functions you've implemented to perform Non-max suppression with a threshold of iou_threshold (≈1 line)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes, max_boxes, iou_threshold)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
yolo_outputs = (tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1))
scores, boxes, classes = yolo_eval(yolo_outputs)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
###Output
scores[2] = 138.791
boxes[2] = [ 1292.32971191 -278.52166748 3876.98925781 -835.56494141]
classes[2] = 54
scores.shape = (10,)
boxes.shape = (10, 4)
classes.shape = (10,)
###Markdown
**Expected Output**: **scores[2]** 138.791 **boxes[2]** [ 1292.32971191 -278.52166748 3876.98925781 -835.56494141] **classes[2]** 54 **scores.shape** (10,) **boxes.shape** (10, 4) **classes.shape** (10,) **Summary for YOLO**:- Input image (608, 608, 3)- The input image goes through a CNN, resulting in a (19,19,5,85) dimensional output. - After flattening the last two dimensions, the output is a volume of shape (19, 19, 425): - Each cell in a 19x19 grid over the input image gives 425 numbers. - 425 = 5 x 85 because each cell contains predictions for 5 boxes, corresponding to 5 anchor boxes, as seen in lecture. - 85 = 5 + 80 where 5 is because $(p_c, b_x, b_y, b_h, b_w)$ has 5 numbers, and 80 is the number of classes we'd like to detect- You then select only a few boxes based on: - Score-thresholding: throw away boxes that have detected a class with a score less than the threshold - Non-max suppression: Compute the Intersection over Union and avoid selecting overlapping boxes- This gives you YOLO's final output. 3 - Test YOLO pretrained model on images In this part, you are going to use a pretrained model and test it on the car detection dataset. As usual, you start by **creating a session to start your graph**. Run the following cell.
###Code
sess = K.get_session()
###Output
_____no_output_____
###Markdown
3.1 - Defining classes, anchors and image shape. Recall that we are trying to detect 80 classes, and are using 5 anchor boxes. We have gathered the information about the 80 classes and 5 boxes in two files "coco_classes.txt" and "yolo_anchors.txt". Let's load these quantities into the model by running the next cell. The car detection dataset has 720x1280 images, which we've pre-processed into 608x608 images.
###Code
class_names = read_classes("model_data/coco_classes.txt")
anchors = read_anchors("model_data/yolo_anchors.txt")
image_shape = (720., 1280.)
###Output
_____no_output_____
###Markdown
3.2 - Loading a pretrained modelTraining a YOLO model takes a very long time and requires a fairly large dataset of labelled bounding boxes for a large range of target classes. You are going to load an existing pretrained Keras YOLO model stored in "yolo.h5". (These weights come from the official YOLO website, and were converted using a function written by Allan Zelener. References are at the end of this notebook. Technically, these are the parameters from the "YOLOv2" model, but we will more simply refer to it as "YOLO" in this notebook.) Run the cell below to load the model from this file.
###Code
yolo_model = load_model("model_data/yolo.h5")
###Output
/opt/conda/lib/python3.6/site-packages/keras/models.py:251: UserWarning: No training configuration found in save file: the model was *not* compiled. Compile it manually.
warnings.warn('No training configuration found in save file: '
###Markdown
This loads the weights of a trained YOLO model. Here's a summary of the layers your model contains.
###Code
yolo_model.summary()
###Output
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (None, 608, 608, 3) 0
____________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 608, 608, 32) 864 input_1[0][0]
____________________________________________________________________________________________________
batch_normalization_1 (BatchNorm (None, 608, 608, 32) 128 conv2d_1[0][0]
____________________________________________________________________________________________________
leaky_re_lu_1 (LeakyReLU) (None, 608, 608, 32) 0 batch_normalization_1[0][0]
____________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 304, 304, 32) 0 leaky_re_lu_1[0][0]
____________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 304, 304, 64) 18432 max_pooling2d_1[0][0]
____________________________________________________________________________________________________
batch_normalization_2 (BatchNorm (None, 304, 304, 64) 256 conv2d_2[0][0]
____________________________________________________________________________________________________
leaky_re_lu_2 (LeakyReLU) (None, 304, 304, 64) 0 batch_normalization_2[0][0]
____________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D) (None, 152, 152, 64) 0 leaky_re_lu_2[0][0]
____________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 152, 152, 128) 73728 max_pooling2d_2[0][0]
____________________________________________________________________________________________________
batch_normalization_3 (BatchNorm (None, 152, 152, 128) 512 conv2d_3[0][0]
____________________________________________________________________________________________________
leaky_re_lu_3 (LeakyReLU) (None, 152, 152, 128) 0 batch_normalization_3[0][0]
____________________________________________________________________________________________________
conv2d_4 (Conv2D) (None, 152, 152, 64) 8192 leaky_re_lu_3[0][0]
____________________________________________________________________________________________________
batch_normalization_4 (BatchNorm (None, 152, 152, 64) 256 conv2d_4[0][0]
____________________________________________________________________________________________________
leaky_re_lu_4 (LeakyReLU) (None, 152, 152, 64) 0 batch_normalization_4[0][0]
____________________________________________________________________________________________________
conv2d_5 (Conv2D) (None, 152, 152, 128) 73728 leaky_re_lu_4[0][0]
____________________________________________________________________________________________________
batch_normalization_5 (BatchNorm (None, 152, 152, 128) 512 conv2d_5[0][0]
____________________________________________________________________________________________________
leaky_re_lu_5 (LeakyReLU) (None, 152, 152, 128) 0 batch_normalization_5[0][0]
____________________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D) (None, 76, 76, 128) 0 leaky_re_lu_5[0][0]
____________________________________________________________________________________________________
conv2d_6 (Conv2D) (None, 76, 76, 256) 294912 max_pooling2d_3[0][0]
____________________________________________________________________________________________________
batch_normalization_6 (BatchNorm (None, 76, 76, 256) 1024 conv2d_6[0][0]
____________________________________________________________________________________________________
leaky_re_lu_6 (LeakyReLU) (None, 76, 76, 256) 0 batch_normalization_6[0][0]
____________________________________________________________________________________________________
conv2d_7 (Conv2D) (None, 76, 76, 128) 32768 leaky_re_lu_6[0][0]
____________________________________________________________________________________________________
batch_normalization_7 (BatchNorm (None, 76, 76, 128) 512 conv2d_7[0][0]
____________________________________________________________________________________________________
leaky_re_lu_7 (LeakyReLU) (None, 76, 76, 128) 0 batch_normalization_7[0][0]
____________________________________________________________________________________________________
conv2d_8 (Conv2D) (None, 76, 76, 256) 294912 leaky_re_lu_7[0][0]
____________________________________________________________________________________________________
batch_normalization_8 (BatchNorm (None, 76, 76, 256) 1024 conv2d_8[0][0]
____________________________________________________________________________________________________
leaky_re_lu_8 (LeakyReLU) (None, 76, 76, 256) 0 batch_normalization_8[0][0]
____________________________________________________________________________________________________
max_pooling2d_4 (MaxPooling2D) (None, 38, 38, 256) 0 leaky_re_lu_8[0][0]
____________________________________________________________________________________________________
conv2d_9 (Conv2D) (None, 38, 38, 512) 1179648 max_pooling2d_4[0][0]
____________________________________________________________________________________________________
batch_normalization_9 (BatchNorm (None, 38, 38, 512) 2048 conv2d_9[0][0]
____________________________________________________________________________________________________
leaky_re_lu_9 (LeakyReLU) (None, 38, 38, 512) 0 batch_normalization_9[0][0]
____________________________________________________________________________________________________
conv2d_10 (Conv2D) (None, 38, 38, 256) 131072 leaky_re_lu_9[0][0]
____________________________________________________________________________________________________
batch_normalization_10 (BatchNor (None, 38, 38, 256) 1024 conv2d_10[0][0]
____________________________________________________________________________________________________
leaky_re_lu_10 (LeakyReLU) (None, 38, 38, 256) 0 batch_normalization_10[0][0]
____________________________________________________________________________________________________
conv2d_11 (Conv2D) (None, 38, 38, 512) 1179648 leaky_re_lu_10[0][0]
____________________________________________________________________________________________________
batch_normalization_11 (BatchNor (None, 38, 38, 512) 2048 conv2d_11[0][0]
____________________________________________________________________________________________________
leaky_re_lu_11 (LeakyReLU) (None, 38, 38, 512) 0 batch_normalization_11[0][0]
____________________________________________________________________________________________________
conv2d_12 (Conv2D) (None, 38, 38, 256) 131072 leaky_re_lu_11[0][0]
____________________________________________________________________________________________________
batch_normalization_12 (BatchNor (None, 38, 38, 256) 1024 conv2d_12[0][0]
____________________________________________________________________________________________________
leaky_re_lu_12 (LeakyReLU) (None, 38, 38, 256) 0 batch_normalization_12[0][0]
____________________________________________________________________________________________________
conv2d_13 (Conv2D) (None, 38, 38, 512) 1179648 leaky_re_lu_12[0][0]
____________________________________________________________________________________________________
batch_normalization_13 (BatchNor (None, 38, 38, 512) 2048 conv2d_13[0][0]
____________________________________________________________________________________________________
leaky_re_lu_13 (LeakyReLU) (None, 38, 38, 512) 0 batch_normalization_13[0][0]
____________________________________________________________________________________________________
max_pooling2d_5 (MaxPooling2D) (None, 19, 19, 512) 0 leaky_re_lu_13[0][0]
____________________________________________________________________________________________________
conv2d_14 (Conv2D) (None, 19, 19, 1024) 4718592 max_pooling2d_5[0][0]
____________________________________________________________________________________________________
batch_normalization_14 (BatchNor (None, 19, 19, 1024) 4096 conv2d_14[0][0]
____________________________________________________________________________________________________
leaky_re_lu_14 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_14[0][0]
____________________________________________________________________________________________________
conv2d_15 (Conv2D) (None, 19, 19, 512) 524288 leaky_re_lu_14[0][0]
____________________________________________________________________________________________________
batch_normalization_15 (BatchNor (None, 19, 19, 512) 2048 conv2d_15[0][0]
____________________________________________________________________________________________________
leaky_re_lu_15 (LeakyReLU) (None, 19, 19, 512) 0 batch_normalization_15[0][0]
____________________________________________________________________________________________________
conv2d_16 (Conv2D) (None, 19, 19, 1024) 4718592 leaky_re_lu_15[0][0]
____________________________________________________________________________________________________
batch_normalization_16 (BatchNor (None, 19, 19, 1024) 4096 conv2d_16[0][0]
____________________________________________________________________________________________________
leaky_re_lu_16 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_16[0][0]
____________________________________________________________________________________________________
conv2d_17 (Conv2D) (None, 19, 19, 512) 524288 leaky_re_lu_16[0][0]
____________________________________________________________________________________________________
batch_normalization_17 (BatchNor (None, 19, 19, 512) 2048 conv2d_17[0][0]
____________________________________________________________________________________________________
leaky_re_lu_17 (LeakyReLU) (None, 19, 19, 512) 0 batch_normalization_17[0][0]
____________________________________________________________________________________________________
conv2d_18 (Conv2D) (None, 19, 19, 1024) 4718592 leaky_re_lu_17[0][0]
____________________________________________________________________________________________________
batch_normalization_18 (BatchNor (None, 19, 19, 1024) 4096 conv2d_18[0][0]
____________________________________________________________________________________________________
leaky_re_lu_18 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_18[0][0]
____________________________________________________________________________________________________
conv2d_19 (Conv2D) (None, 19, 19, 1024) 9437184 leaky_re_lu_18[0][0]
____________________________________________________________________________________________________
batch_normalization_19 (BatchNor (None, 19, 19, 1024) 4096 conv2d_19[0][0]
____________________________________________________________________________________________________
conv2d_21 (Conv2D) (None, 38, 38, 64) 32768 leaky_re_lu_13[0][0]
____________________________________________________________________________________________________
leaky_re_lu_19 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_19[0][0]
____________________________________________________________________________________________________
batch_normalization_21 (BatchNor (None, 38, 38, 64) 256 conv2d_21[0][0]
____________________________________________________________________________________________________
conv2d_20 (Conv2D) (None, 19, 19, 1024) 9437184 leaky_re_lu_19[0][0]
____________________________________________________________________________________________________
leaky_re_lu_21 (LeakyReLU) (None, 38, 38, 64) 0 batch_normalization_21[0][0]
____________________________________________________________________________________________________
batch_normalization_20 (BatchNor (None, 19, 19, 1024) 4096 conv2d_20[0][0]
____________________________________________________________________________________________________
space_to_depth_x2 (Lambda) (None, 19, 19, 256) 0 leaky_re_lu_21[0][0]
____________________________________________________________________________________________________
leaky_re_lu_20 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_20[0][0]
____________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 19, 19, 1280) 0 space_to_depth_x2[0][0]
leaky_re_lu_20[0][0]
____________________________________________________________________________________________________
conv2d_22 (Conv2D) (None, 19, 19, 1024) 11796480 concatenate_1[0][0]
____________________________________________________________________________________________________
batch_normalization_22 (BatchNor (None, 19, 19, 1024) 4096 conv2d_22[0][0]
____________________________________________________________________________________________________
leaky_re_lu_22 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_22[0][0]
____________________________________________________________________________________________________
conv2d_23 (Conv2D) (None, 19, 19, 425) 435625 leaky_re_lu_22[0][0]
====================================================================================================
Total params: 50,983,561
Trainable params: 50,962,889
Non-trainable params: 20,672
____________________________________________________________________________________________________
###Markdown
**Note**: On some computers, you may see a warning message from Keras. Don't worry about it if you do--it is fine.**Reminder**: this model converts a preprocessed batch of input images (shape: (m, 608, 608, 3)) into a tensor of shape (m, 19, 19, 5, 85) as explained in Figure (2). 3.3 - Convert output of the model to usable bounding box tensorsThe output of `yolo_model` is a (m, 19, 19, 5, 85) tensor that needs to pass through non-trivial processing and conversion. The following cell does that for you.
###Code
yolo_outputs = yolo_head(yolo_model.output, anchors, len(class_names))
###Output
_____no_output_____
###Markdown
You added `yolo_outputs` to your graph. This set of 4 tensors is ready to be used as input by your `yolo_eval` function. 3.4 - Filtering boxes`yolo_outputs` gave you all the predicted boxes of `yolo_model` in the correct format. You're now ready to perform filtering and select only the best boxes. Let's now call `yolo_eval`, which you had previously implemented, to do this.
###Code
scores, boxes, classes = yolo_eval(yolo_outputs, image_shape)
###Output
_____no_output_____
###Markdown
3.5 - Run the graph on an imageLet the fun begin. You have created a (`sess`) graph that can be summarized as follows:1. yolo_model.input is given to `yolo_model`. The model is used to compute the output yolo_model.output 2. yolo_model.output is processed by `yolo_head`. It gives you yolo_outputs 3. yolo_outputs goes through a filtering function, `yolo_eval`. It outputs your predictions: scores, boxes, classes **Exercise**: Implement predict() which runs the graph to test YOLO on an image.You will need to run a TensorFlow session, to have it compute `scores, boxes, classes`.The code below also uses the following function:```pythonimage, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))```which outputs:- image: a python (PIL) representation of your image used for drawing boxes. You won't need to use it.- image_data: a numpy-array representing the image. This will be the input to the CNN.**Important note**: when a model uses BatchNorm (as is the case in YOLO), you will need to pass an additional placeholder in the feed_dict {K.learning_phase(): 0}.
###Code
def predict(sess, image_file):
"""
    Runs the graph stored in "sess" to predict boxes for "image_file". Prints and plots the predictions.
Arguments:
sess -- your tensorflow/Keras session containing the YOLO graph
image_file -- name of an image stored in the "images" folder.
Returns:
out_scores -- tensor of shape (None, ), scores of the predicted boxes
out_boxes -- tensor of shape (None, 4), coordinates of the predicted boxes
out_classes -- tensor of shape (None, ), class index of the predicted boxes
Note: "None" actually represents the number of predicted boxes, it varies between 0 and max_boxes.
"""
# Preprocess your image
image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))
# Run the session with the correct tensors and choose the correct placeholders in the feed_dict.
# You'll need to use feed_dict={yolo_model.input: ... , K.learning_phase(): 0})
### START CODE HERE ### (≈ 1 line)
out_scores, out_boxes, out_classes = sess.run([scores, boxes, classes], feed_dict = {yolo_model.input: image_data, K.learning_phase():0})
### END CODE HERE ###
# Print predictions info
print('Found {} boxes for {}'.format(len(out_boxes), image_file))
# Generate colors for drawing bounding boxes.
colors = generate_colors(class_names)
# Draw bounding boxes on the image file
draw_boxes(image, out_scores, out_boxes, out_classes, class_names, colors)
# Save the predicted bounding box on the image
image.save(os.path.join("out", image_file), quality=90)
# Display the results in the notebook
output_image = scipy.misc.imread(os.path.join("out", image_file))
imshow(output_image)
return out_scores, out_boxes, out_classes
###Output
_____no_output_____
###Markdown
Run the following cell on the "test.jpg" image to verify that your function is correct.
###Code
out_scores, out_boxes, out_classes = predict(sess, "test.jpg")
###Output
Found 7 boxes for test.jpg
car 0.60 (925, 285) (1045, 374)
car 0.66 (706, 279) (786, 350)
bus 0.67 (5, 266) (220, 407)
car 0.70 (947, 324) (1280, 705)
car 0.74 (159, 303) (346, 440)
car 0.80 (761, 282) (942, 412)
car 0.89 (367, 300) (745, 648)
|
landmark-recognition-challenge/Demo_OrbMatch.ipynb | ###Markdown
Demo_OrbMatch Run name
###Code
import time
project_name = 'Google_LandMark_Rec'
step_name = 'Demo_OrbMatch'
time_str = time.strftime("%Y%m%d_%H%M%S", time.localtime())
run_name = project_name + '_' + step_name + '_' + time_str
print('run_name: ' + run_name)
t0 = time.time()
###Output
run_name: Google_LandMark_Rec_Demo_OrbMatch_20180509_025627
###Markdown
Import PKGs
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
from IPython.display import display
import os
import gc
import math
import shutil
import zipfile
import pickle
import h5py
from PIL import Image
from tqdm import tqdm
from multiprocessing import cpu_count
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score
###Output
C:\ProgramData\Anaconda3\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
###Markdown
Project folders
###Code
cwd = os.getcwd()
input_folder = os.path.join(cwd, 'input')
output_folder = os.path.join(cwd, 'output')
model_folder = os.path.join(cwd, 'model')
org_train_folder = os.path.join(input_folder, 'org_train')
org_test_folder = os.path.join(input_folder, 'org_test')
train_folder = os.path.join(input_folder, 'data_train')
val_folder = os.path.join(input_folder, 'data_val')
test_folder = os.path.join(input_folder, 'data_test')
test_sub_folder = os.path.join(test_folder, 'test')
train_csv_file = os.path.join(input_folder, 'train.csv')
test_csv_file = os.path.join(input_folder, 'test.csv')
sample_submission_folder = os.path.join(input_folder, 'sample_submission.csv')
###Output
_____no_output_____
###Markdown
Preview csv
###Code
train_csv = pd.read_csv(train_csv_file)
print('train_csv.shape is {0}.'.format(train_csv.shape))
display(train_csv.head(2))
test_csv = pd.read_csv(test_csv_file)
print('test_csv.shape is {0}.'.format(test_csv.shape))
display(test_csv.head(2))
train_id = train_csv['id']
train_landmark_id = train_csv['landmark_id']
id_2_landmark_id_dict = dict(zip(train_id, train_landmark_id))
print('len(id_2_landmark_id_dict)=%d' % len(id_2_landmark_id_dict))
index = 0
print('id: %s, \tlandmark_id:%s' % (train_id[index], id_2_landmark_id_dict[train_id[index]]))
index = 1
print('id: %s, \tlandmark_id:%s' % (train_id[index], id_2_landmark_id_dict[train_id[index]]))
###Output
len(id_2_landmark_id_dict)=1225029
id: cacf8152e2d2ae60, landmark_id:4676
id: 0a58358a2afd3e4e, landmark_id:6651
###Markdown
OrbMatch
###Code
import cv2  # OpenCV is required by this class but was not imported in the imports cell above

class OrbMatch(object):
    def __init__(
        self,
        image_file,
        original_folder,
        n_features=500,
        is_crossCheck=True,
        n_matches=100,
        min_distance=60,
        min_good_match=50,
        n_class=14951,
        top=3,
    ):
        self._image_file = image_file
        self._original_folder = original_folder
        self._n_features = n_features
        self._is_crossCheck = is_crossCheck
        self._n_matches = n_matches
        self._min_distance = min_distance
        self._min_good_match = min_good_match
        self._n_class = n_class
        self._top = top
        self._key_point = None
        self._distance = None
        self._clf = cv2.ORB_create(self._n_features)
        self._bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=self._is_crossCheck)
        # self._class_weight = class_weight
    def get_class_weight(self):
        pass
    def image_detect_and_compute(self, image_file):
        """Detect and compute interest points and their descriptors."""
        img = cv2.imread(image_file)
        img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        kp, des = self._clf.detectAndCompute(img, None)
        return des
    def match(self):
        # Descriptors of the query image
        des1 = self.image_detect_and_compute(self._image_file)
        matched_image_classes = np.zeros(self._n_class)
        for file_name in os.listdir(self._original_folder):
            image_file = os.path.join(self._original_folder, file_name)
            print(image_file)
            des2 = self.image_detect_and_compute(image_file)
            matches = self._bf.match(des1, des2)
            # matches = sorted(matches, key = lambda x: x.distance) # Sort matches by distance. Best come first.
            # Keep only matches whose Hamming distance is small enough
            matches = list(filter(lambda x: x.distance < self._min_distance, matches))
            print(len(matches))
            if len(matches) >= self._min_good_match:
                # get_class_indx is assumed to map a file name to its landmark class index (defined elsewhere)
                class_indx = get_class_indx(file_name)
                matched_image_classes[class_indx] = matched_image_classes[class_indx] + 1
        matched_image_classes = matched_image_classes / sum(matched_image_classes)
        # Indices of the `top` classes with the highest match shares (descending order)
        best_matches = np.argsort(matched_image_classes)[::-1][:self._top]
        return best_matches
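# Hypothetical usage sketch (commented out; 'query.jpg' is a placeholder file name, not part of the original data):
# matcher = OrbMatch('query.jpg', original_folder=org_train_folder, top=3)
# top_classes = matcher.match()
# print(top_classes)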
###Output
_____no_output_____ |
notebooks/input.ipynb | ###Markdown
Python Type Annotations, Dataclasses, and Serialization with Datafiles
###Code
import sys
assert sys.version_info > (3, 7)
sys.version_info
from dataclasses import dataclass
@dataclass
class InventoryItem:
"""Class for keeping track of an item in inventory."""
name: str
unit_price: float
quantity_on_hand: int = 0
def total_cost(self) -> float:
return self.unit_price * self.quantity_on_hand
try:
item = InventoryItem()
except TypeError as e:
print(repr(e))
item = InventoryItem("Widget", 1.99)
item
item.name
item.unit_price = 2.99
item
###Output
_____no_output_____
###Markdown
Type Annotations
###Code
assert sys.version_info > (3, 6)
###Output
_____no_output_____
###Markdown
Argument Annotations
###Code
def greet(name: str):
print("Hello, " + name)
greet("Jace")
greet
greet.__annotations__
import typing # stdlib
hints = typing.get_type_hints(greet)
hints # `dict` with real classes
import inspect # stdlib
signature = inspect.signature(greet)
signature # `Signature` object
signature.parameters
signature.parameters['name']
# POSITIONAL_OR_KEYWORD: greet(name)
# KEYWORD_ONLY: greet(*, name)
# VAR_POSITIONAL: greet(*names)
# VAR_KEYWORD: greet(**names)
signature.parameters['name'].kind
signature.parameters['name'].annotation
###Output
_____no_output_____
###Markdown
Return Annotations
###Code
from decimal import Decimal
def add_tax(subtotal, rate=0.06) -> Decimal:
cents = Decimal('0.01')
return Decimal(subtotal * (1 + rate)).quantize(cents)
add_tax(4.99)
inspect.signature(add_tax).return_annotation
###Output
_____no_output_____
###Markdown
Variable Annotations
###Code
class Person:
name: str
Person.__annotations__
###Output
_____no_output_____
###Markdown
Optional Values
###Code
from typing import Optional
def fill(password: Optional[str]):
if password is not None:
...
fill("abc123")
fill(None)
###Output
_____no_output_____
###Markdown
Homogeneous Lists
###Code
from typing import List
def print_one_more_than(numbers: List[int]):
for number in numbers:
print(number + 1)
###Output
_____no_output_____
###Markdown
Mixed Types
###Code
from typing import Union
def print_items_or_keys(values: Union[list, dict]):
for value in values:
print(value)
###Output
_____no_output_____
###Markdown
⚠️ Circular Annotations
###Code
class Node:
    def connect_edge(self, edge: 'Edge'):
        pass
class Edge:
    def connect_node(self, node: Node):
        pass
from __future__ import annotations
class Node:
    def connect_edge(self, edge: Edge):
        pass
class Edge:
    def connect_node(self, node: Node):
        pass
###Output
_____no_output_____
###Markdown
Type Checking (with mypy)
###Code
# pip install mypy==0.720
from mypy import api
def mypy(filename):
"""Emulate `$ mypy <filename>` for notebooks."""
message, _, _ = api.run([filename])
print(message or "(no errors)")
%%writefile greet.py
def greet(name: str):
print("Hello, " + name)
greet("Jace")
mypy('greet.py')
%%writefile greet2.py
def greet(name: str):
print("Hello, " + name)
greet(42)
mypy('greet2.py')
%%writefile people.py
from typing import Iterable, List
class Person:
def __init__(self, name):
self.name = name
def get_people(*names: Iterable[str]) -> List[Person]:
return [Person(name) for name in names]
people = get_people("Alice", "Bob")
people[1].age
mypy('people.py')
###Output
_____no_output_____
###Markdown
Dataclasses
###Code
from dataclasses import dataclass
@dataclass
class InventoryItem:
"""Class for keeping track of an item in inventory."""
name: str
unit_price: float
quantity_on_hand: int = 0
def total_cost(self) -> float:
return self.unit_price * self.quantity_on_hand
###Output
_____no_output_____
###Markdown
`__init__`
###Code
InventoryItem("Widget A", 1.99)
InventoryItem("Widge B", 1.99, 300)
InventoryItem("Widget C", 1.99, quantity_on_hand=400)
InventoryItem(name="Widget D", unit_price=1.99, quantity_on_hand=500)
try:
InventoryItem(name="Widget E")
except TypeError as e:
print(repr(e))
###Output
_____no_output_____
###Markdown
`__repr__`
###Code
item = InventoryItem("Widget", 1.99)
repr(item)
eval("InventoryItem(name='Widget', unit_price=1.99, quantity_on_hand=0)")
###Output
_____no_output_____
###Markdown
`__eq__`
###Code
item_a = InventoryItem("Widget A", 1.99)
item_b = InventoryItem("Widget B", 1.99)
item_x = InventoryItem("Widget A", 1.99, quantity_on_hand=0)
item_a == item_b
item_a == item_x
###Output
_____no_output_____
###Markdown
Ordered Dataclasses
###Code
@dataclass(order=True)
class Person:
last_name: str
first_name: str
def __str__(self):
return f'{self.first_name} {self.last_name}'
people = [
Person(first_name="Alice", last_name="Smith"),
Person(first_name="Bob", last_name="Smith"),
Person(first_name="Carl", last_name="Davidson"),
]
for person in people:
print(person)
people.sort()
for person in people:
print(person)
###Output
_____no_output_____
###Markdown
Frozen Dataclasses
###Code
@dataclass(frozen=True)
class Badge:
number: int
badges = [Badge(1001), Badge(1002), Badge(1003)]
try:
badges[1].number = 1004
except AttributeError as e:
print(repr(e))
###Output
_____no_output_____
###Markdown
Field Customization
###Code
from dataclasses import field
@dataclass(order=True)
class Person:
name: str = field(compare=False)
age: int
def __str__(self):
return f'{self.name} ({self.age})'
people = [
Person("Alice Smith", 30),
Person("Bob Smith", 25),
Person("Carl Davidson", 41),
]
for person in people:
print(person)
people.sort()
for person in people:
print(person)
###Output
_____no_output_____
###Markdown
⚠️ Custom `__init__`
###Code
@dataclass
class Bill:
subtotal: float
tip: float = 0.0
def __post_init__(self):
self.total = self.subtotal + self.tip
bill = Bill(12.99, tip=3.00)
bill
bill.total
###Output
_____no_output_____
###Markdown
⚠️ Mutable Default Values
###Code
from typing import List
try:
@dataclass
class Group:
members: List[Person] = []
except ValueError as e:
print(repr(e))
from dataclasses import field
@dataclass
class Group:
members: List[Person] = field(default_factory=list)
group = Group()
group.members.append(people[0])
group
###Output
_____no_output_____
###Markdown
Utilities
###Code
import dataclasses
dataclasses.is_dataclass(item)
for field in dataclasses.fields(item):
print(field, end='\n\n')
dataclasses.asdict(item)
dataclasses.astuple(item)
###Output
_____no_output_____
###Markdown
Serialization (with datafiles)
###Code
# pip install datafiles==0.4
%%sh
rm -rf items
from datafiles import datafile
@datafile('items/{self.name}.yml')
class MyInventoryItem:
"""Class for keeping track of an item in inventory."""
name: str
unit_price: float
quantity_on_hand: int = 0
def total_cost(self) -> float:
return self.unit_price * self.quantity_on_hand
item = MyInventoryItem("widget", 1.99)
%%sh
cat items/widget.yml
item.quantity_on_hand += 100
%%sh
cat items/widget.yml
%%writefile items/widget.yml
unit_price: 2.5 # was 3.0
quantity_on_hand: 100
item.unit_price
from datafiles import Missing
item = MyInventoryItem("widget", Missing)
assert item.unit_price == 2.5
assert item.quantity_on_hand == 100
item
###Output
_____no_output_____ |
machine-learning/notes/01_linear_regression/01. Linear Regression with Tensorflow.ipynb | ###Markdown
Linear Regression with TensorFlow - Credits: [Linear Regression with TensorFlow Tutorial](http://nbviewer.jupyter.org/github/ageron/handson-ml/blob/master/09_up_and_running_with_tensorflow.ipynbLinear-Regression) by Aurélien Géron- __The following code manipulates 2D arrays to perform Linear Regression on the California housing dataset.__
###Code
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import numpy.random as rnd
import os
# to make this notebook's output stable across runs
rnd.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "tensorflow"
def save_fig(fig_id, tight_layout=True):
path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
import tensorflow as tf
from sklearn.datasets import fetch_california_housing
housing = fetch_california_housing()
m, n = housing.data.shape
housing_data_plus_bias = np.c_[np.ones((m, 1)), housing.data]
###Output
_____no_output_____
###Markdown
Using the Normal Equation
###Code
tf.reset_default_graph()
X = tf.constant(housing_data_plus_bias, dtype=tf.float64, name="X")
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float64, name="y")
XT = tf.transpose(X)
theta = tf.matmul(tf.matmul(tf.matrix_inverse(tf.matmul(XT, X)), XT), y)
with tf.Session() as sess:
result = theta.eval()
print(result)
###Output
[[ -3.69419202e+01]
[ 4.36693293e-01]
[ 9.43577803e-03]
[ -1.07322041e-01]
[ 6.45065694e-01]
[ -3.97638942e-06]
[ -3.78654265e-03]
[ -4.21314378e-01]
[ -4.34513755e-01]]
###Markdown
- It starts by fetching the dataset; then it adds an extra bias input feature ($x_0 = 1$) to all training instances (it does so using NumPy so it runs immediately)- Then it creates two TensorFlow constant nodes, X and y, to hold this data and the targets, and it uses some of the matrix operations provided by TensorFlow to define theta.- You may recognize that the definition of theta corresponds to the Normal Equation $$\large \theta = (X^{T}X)^{-1}X^{T}y$$ - __Compare with Scikit-Learn__
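As an extra sanity check before the Scikit-Learn comparison, the same closed-form solution can also be evaluated directly with NumPy (a small sketch reusing the arrays already built above):

```python
X_np = housing_data_plus_bias
y_np = housing.target.reshape(-1, 1)
theta_numpy = np.linalg.inv(X_np.T.dot(X_np)).dot(X_np.T).dot(y_np)
print(theta_numpy)  # should agree with the TensorFlow result above up to numerical precision
```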
###Code
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(housing.data, housing.target.reshape(-1, 1))
print(np.r_[lin_reg.intercept_.reshape(-1, 1), lin_reg.coef_.T])
###Output
[[ -3.69419202e+01]
[ 4.36693293e-01]
[ 9.43577803e-03]
[ -1.07322041e-01]
[ 6.45065694e-01]
[ -3.97638942e-06]
[ -3.78654265e-03]
[ -4.21314378e-01]
[ -4.34513755e-01]]
###Markdown
Using Batch Gradient Descent- Gradient Descent requires scaling the feature vectors first. We could do this using TF, but let's just use Scikit-Learn for now.
###Code
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaled_housing_data = scaler.fit_transform(housing.data)
scaled_housing_data_plus_bias = np.c_[np.ones((m, 1)), scaled_housing_data]
print(scaled_housing_data_plus_bias.mean(axis=0))
print(scaled_housing_data_plus_bias.mean(axis=1))
print(scaled_housing_data_plus_bias.mean())
print(scaled_housing_data_plus_bias.shape)
###Output
[ 1.00000000e+00 6.60969987e-17 5.50808322e-18 6.60969987e-17
-1.06030602e-16 -1.10161664e-17 3.44255201e-18 -1.07958431e-15
-8.52651283e-15]
[ 0.38915536 0.36424355 0.5116157 ..., -0.06612179 -0.06360587
0.01359031]
0.111111111111
(20640, 9)
###Markdown
Manually computing the gradients - The `random_uniform()` function creates a node in the graph that will generate a tensor containing random values, given its shape and value range, much like NumPy's `rand()` function.- The `assign()` function creates a node that will assign a new value to a variable. In this case, it implements the Batch Gradient Descent step $\theta^{(\text{next step})} = \theta - \eta \nabla_\theta \mathrm{MSE}(\theta)$.- The main loop executes the training step over and over again (`n_epochs` times), and every 100 iterations it prints out the current Mean Squared Error (mse). You should see the MSE go down at every iteration.
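For completeness, the `gradients = 2/m * tf.matmul(tf.transpose(X), error)` line in the next cell is just the matrix form of the MSE gradient: $$\mathrm{MSE}(\theta) = \frac{1}{m}\lVert X\theta - y\rVert^2 \quad\Longrightarrow\quad \nabla_\theta\,\mathrm{MSE}(\theta) = \frac{2}{m}X^T(X\theta - y),$$ with `error` standing in for $X\theta - y$.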
###Code
tf.reset_default_graph()
n_epochs = 1000
learning_rate = 0.01
X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
gradients = 2/m * tf.matmul(tf.transpose(X), error)
training_op = tf.assign(theta, theta - learning_rate * gradients)
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
if epoch % 100 == 0:
print("Epoch", epoch, "MSE =", mse.eval())
sess.run(training_op)
best_theta = theta.eval()
print("Best theta:")
print(best_theta)
###Output
Epoch 0 MSE = 2.75443
Epoch 100 MSE = 0.632222
Epoch 200 MSE = 0.57278
Epoch 300 MSE = 0.558501
Epoch 400 MSE = 0.549069
Epoch 500 MSE = 0.542288
Epoch 600 MSE = 0.537379
Epoch 700 MSE = 0.533822
Epoch 800 MSE = 0.531243
Epoch 900 MSE = 0.529371
Best theta:
[[ 2.06855226e+00]
[ 7.74078071e-01]
[ 1.31192386e-01]
[ -1.17845096e-01]
[ 1.64778158e-01]
[ 7.44080753e-04]
[ -3.91945168e-02]
[ -8.61356616e-01]
[ -8.23479712e-01]]
###Markdown
Using autodiff- __TensorFlow's autodiff feature__: it can automatically and efficiently compute the gradients for you. - Same as above except for the gradients = ... line.
###Code
tf.reset_default_graph()
n_epochs = 1000
learning_rate = 0.01
X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
gradients = tf.gradients(mse, [theta])[0]
training_op = tf.assign(theta, theta - learning_rate * gradients)
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
if epoch % 100 == 0:
print("Epoch", epoch, "MSE =", mse.eval())
sess.run(training_op)
best_theta = theta.eval()
print("Best theta:")
print(best_theta)
###Output
Epoch 0 MSE = 2.75443
Epoch 100 MSE = 0.632222
Epoch 200 MSE = 0.57278
Epoch 300 MSE = 0.558501
Epoch 400 MSE = 0.549069
Epoch 500 MSE = 0.542288
Epoch 600 MSE = 0.537379
Epoch 700 MSE = 0.533822
Epoch 800 MSE = 0.531243
Epoch 900 MSE = 0.529371
Best theta:
[[ 2.06855249e+00]
[ 7.74078071e-01]
[ 1.31192386e-01]
[ -1.17845066e-01]
[ 1.64778143e-01]
[ 7.44078017e-04]
[ -3.91945094e-02]
[ -8.61356676e-01]
[ -8.23479772e-01]]
###Markdown
Using a `GradientDescentOptimizer`
###Code
tf.reset_default_graph()
n_epochs = 1000
learning_rate = 0.01
X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(mse)
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
if epoch % 100 == 0:
print("Epoch", epoch, "MSE =", mse.eval())
sess.run(training_op)
best_theta = theta.eval()
print("Best theta:")
print(best_theta)
###Output
Epoch 0 MSE = 2.75443
Epoch 100 MSE = 0.632222
Epoch 200 MSE = 0.57278
Epoch 300 MSE = 0.558501
Epoch 400 MSE = 0.549069
Epoch 500 MSE = 0.542288
Epoch 600 MSE = 0.537379
Epoch 700 MSE = 0.533822
Epoch 800 MSE = 0.531243
Epoch 900 MSE = 0.529371
Best theta:
[[ 2.06855249e+00]
[ 7.74078071e-01]
[ 1.31192386e-01]
[ -1.17845066e-01]
[ 1.64778143e-01]
[ 7.44078017e-04]
[ -3.91945094e-02]
[ -8.61356676e-01]
[ -8.23479772e-01]]
###Markdown
Using a momentum optimizer
###Code
tf.reset_default_graph()
n_epochs = 1000
learning_rate = 0.01
X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
optimizer = tf.train.MomentumOptimizer(learning_rate=learning_rate, momentum=0.25)
training_op = optimizer.minimize(mse)
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
sess.run(training_op)
best_theta = theta.eval()
print("Best theta:")
print(best_theta)
###Output
Best theta:
[[ 2.06855392e+00]
[ 7.94067979e-01]
[ 1.25333667e-01]
[ -1.73580602e-01]
[ 2.18767926e-01]
[ -1.64708309e-03]
[ -3.91250364e-02]
[ -8.85289013e-01]
[ -8.50607991e-01]]
###Markdown
Feeding data to the training algorithm - In order to replace X and y at every iteration with the next mini-batch, the simplest way to do this is to use placeholder nodes. - These nodes are special because they don't actually perform any computation, they just output the data you tell them to output at runtime. Placeholder nodes
###Code
>>> tf.reset_default_graph()
>>> A = tf.placeholder(tf.float32, shape=(None, 3))
>>> B = A + 5
>>> with tf.Session() as sess:
... B_val_1 = B.eval(feed_dict={A: [[1, 2, 3]]})
... B_val_2 = B.eval(feed_dict={A: [[4, 5, 6], [7, 8, 9]]})
...
>>> print(B_val_1)
>>> print(B_val_2)
###Output
[[ 6. 7. 8.]]
[[ 9. 10. 11.]
[ 12. 13. 14.]]
###Markdown
Mini-batch Gradient Descent
###Code
tf.reset_default_graph()
n_epochs = 1000
learning_rate = 0.01
X = tf.placeholder(tf.float32, shape=(None, n + 1), name="X")
y = tf.placeholder(tf.float32, shape=(None, 1), name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(mse)
init = tf.global_variables_initializer()
def fetch_batch(epoch, batch_index, batch_size):
rnd.seed(epoch * n_batches + batch_index)
indices = rnd.randint(m, size=batch_size)
X_batch = scaled_housing_data_plus_bias[indices]
y_batch = housing.target.reshape(-1, 1)[indices]
return X_batch, y_batch
n_epochs = 10
batch_size = 100
n_batches = int(np.ceil(m / batch_size))
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
for batch_index in range(n_batches):
X_batch, y_batch = fetch_batch(epoch, batch_index, batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
best_theta = theta.eval()
print("Best theta:")
print(best_theta)
###Output
Best theta:
[[ 2.07001591]
[ 0.82045609]
[ 0.1173173 ]
[-0.22739051]
[ 0.31134021]
[ 0.00353193]
[-0.01126994]
[-0.91643935]
[-0.87950081]]
###Markdown
Saving and restoring a model
###Code
tf.reset_default_graph()
n_epochs = 1000
learning_rate = 0.01
X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(mse)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
if epoch % 100 == 0:
print("Epoch", epoch, "MSE =", mse.eval())
save_path = saver.save(sess, "/tmp/my_model.ckpt")
sess.run(training_op)
best_theta = theta.eval()
save_path = saver.save(sess, "my_model_final.ckpt")
print("Best theta:")
print(best_theta)
###Output
Epoch 0 MSE = 2.75443
Epoch 100 MSE = 0.632222
Epoch 200 MSE = 0.57278
Epoch 300 MSE = 0.558501
Epoch 400 MSE = 0.549069
Epoch 500 MSE = 0.542288
Epoch 600 MSE = 0.537379
Epoch 700 MSE = 0.533822
Epoch 800 MSE = 0.531243
Epoch 900 MSE = 0.529371
Best theta:
[[ 2.06855249e+00]
[ 7.74078071e-01]
[ 1.31192386e-01]
[ -1.17845066e-01]
[ 1.64778143e-01]
[ 7.44078017e-04]
[ -3.91945094e-02]
[ -8.61356676e-01]
[ -8.23479772e-01]]
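###Markdown
The cell above only saves checkpoints. A minimal sketch of restoring the final checkpoint (not part of the original run; it assumes the graph above, including `saver` and `theta`, is still defined):
###Code
# Hedged sketch: load the variables saved in "my_model_final.ckpt" back into the graph
with tf.Session() as sess:
    saver.restore(sess, "my_model_final.ckpt")  # restores variable values, so no init is needed
    theta_restored = theta.eval()
print("Restored theta:")
print(theta_restored)
###Output
_____no_output_____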
###Markdown
Visualizing the graph inside Jupyter
###Code
from IPython.display import clear_output, Image, display, HTML
def strip_consts(graph_def, max_const_size=32):
"""Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = b"<stripped %d bytes>"%size
return strip_def
def show_graph(graph_def, max_const_size=32):
"""Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code = """
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
""".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe = """
<iframe seamless style="width:1200px;height:620px;border:0" srcdoc="{}"></iframe>
""".format(code.replace('"', '"'))
display(HTML(iframe))
show_graph(tf.get_default_graph())
###Output
_____no_output_____
###Markdown
Using TensorBoard
###Code
tf.reset_default_graph()
from datetime import datetime
now = datetime.utcnow().strftime("%Y%m%d%H%M%S")
root_logdir = "tf_logs"
logdir = "{}/run-{}/".format(root_logdir, now)
n_epochs = 1000
learning_rate = 0.01
X = tf.placeholder(tf.float32, shape=(None, n + 1), name="X")
y = tf.placeholder(tf.float32, shape=(None, 1), name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(mse)
init = tf.global_variables_initializer()
mse_summary = tf.summary.scalar('MSE', mse)
summary_writer = tf.summary.FileWriter(logdir, tf.get_default_graph())
###Output
_____no_output_____
###Markdown
- __`mse_summary`__: creates a node in the graph that will evaluate the MSE value and write it to a TensorBoard-compatible binary log string called a summary.- __`summary_writer`__: creates a FileWriter that you will use to write summaries to log files in the log directory.
###Code
n_epochs = 10
batch_size = 100
n_batches = int(np.ceil(m / batch_size))
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
for batch_index in range(n_batches):
X_batch, y_batch = fetch_batch(epoch, batch_index, batch_size)
if batch_index % 10 == 0:
summary_str = mse_summary.eval(feed_dict={X: X_batch, y: y_batch})
step = epoch * n_batches + batch_index
summary_writer.add_summary(summary_str, step)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
best_theta = theta.eval()
summary_writer.flush()
summary_writer.close()
print("Best theta:")
print(best_theta)
###Output
Best theta:
[[ 2.07001591]
[ 0.82045609]
[ 0.1173173 ]
[-0.22739051]
[ 0.31134021]
[ 0.00353193]
[-0.01126994]
[-0.91643935]
[-0.87950081]]
###Markdown
- __list the contents of the log directory__
###Code
!ls -l tf_logs/run*
###Output
tf_logs/run-20170501070404:
total 56
-rw-r--r-- 1 tvu 1742120565 26334 May 1 00:04 events.out.tfevents.1493622244.TVU-C02JC4K9DKQ2.socal.rr.com
tf_logs/run-20170501075145:
total 56
-rw-r--r-- 1 tvu 1742120565 27132 May 1 00:51 events.out.tfevents.1493625105.TVU-C02JC4K9DKQ2.socal.rr.com
tf_logs/run-20170502225234:
total 32
-rw-r--r-- 1 tvu 1742120565 15755 May 2 15:52 events.out.tfevents.1493765554.TVU-C02JC4K9DKQ2.local
tf_logs/run-20170502225247:
total 56
-rw-r--r-- 1 tvu 1742120565 26334 May 2 15:52 events.out.tfevents.1493765567.TVU-C02JC4K9DKQ2.local
###Markdown
- __Fire up the TensorBoard server__
###Code
!tensorboard --logdir tf_logs/
###Output
Starting TensorBoard b'47' at http://0.0.0.0:6006
(Press CTRL+C to quit)
^C
###Markdown
Name Scopes- When dealing with more complex models such as neural networks, the graph can easily become cluttered with thousands of nodes. To avoid this, you can create name scopes to group related nodes.
###Code
tf.reset_default_graph()
now = datetime.utcnow().strftime("%Y%m%d%H%M%S")
root_logdir = "tf_logs"
logdir = "{}/run-{}/".format(root_logdir, now)
n_epochs = 1000
learning_rate = 0.01
X = tf.placeholder(tf.float32, shape=(None, n + 1), name="X")
y = tf.placeholder(tf.float32, shape=(None, 1), name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
with tf.name_scope('loss') as scope:
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(mse)
init = tf.global_variables_initializer()
mse_summary = tf.summary.scalar('MSE', mse)
summary_writer = tf.summary.FileWriter(logdir, tf.get_default_graph())
###Output
_____no_output_____
###Markdown
- __In TensorBoard, the mse and error nodes now appear inside the "loss" namespace__
###Code
n_epochs = 10
batch_size = 100
n_batches = int(np.ceil(m / batch_size))
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
for batch_index in range(n_batches):
X_batch, y_batch = fetch_batch(epoch, batch_index, batch_size)
if batch_index % 10 == 0:
summary_str = mse_summary.eval(feed_dict={X: X_batch, y: y_batch})
step = epoch * n_batches + batch_index
summary_writer.add_summary(summary_str, step)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
best_theta = theta.eval()
summary_writer.flush()
summary_writer.close()
print("Best theta:")
print(best_theta)
print(error.op.name)
print(mse.op.name)
tf.reset_default_graph()
a1 = tf.Variable(0, name="a") # name == "a"
a2 = tf.Variable(0, name="a") # name == "a_1"
with tf.name_scope("param"): # name == "param"
a3 = tf.Variable(0, name="a") # name == "param/a"
with tf.name_scope("param"): # name == "param_1"
a4 = tf.Variable(0, name="a") # name == "param_1/a"
for node in (a1, a2, a3, a4):
print(node.op.name)
###Output
a
a_1
param/a
param_1/a
###Markdown
Modularity- An ugly, flat version of the code:
###Code
tf.reset_default_graph()
n_features = 3
X = tf.placeholder(tf.float32, shape=(None, n_features), name="X")
w1 = tf.Variable(tf.random_normal((n_features, 1)), name="weights1")
w2 = tf.Variable(tf.random_normal((n_features, 1)), name="weights2")
b1 = tf.Variable(0.0, name="bias1")
b2 = tf.Variable(0.0, name="bias2")
linear1 = tf.add(tf.matmul(X, w1), b1, name="linear1")
linear2 = tf.add(tf.matmul(X, w2), b2, name="linear2")
relu1 = tf.maximum(linear1, 0, name="relu1")
relu2 = tf.maximum(linear1, 0, name="relu2") # Oops, cut&paste error! Did you spot it?
output = tf.add_n([relu1, relu2], name="output")
###Output
_____no_output_____
###Markdown
- Much better, using a function to build the ReLUs:
###Code
tf.reset_default_graph()
def relu(X):
with tf.name_scope("relu"):
w_shape = int(X.get_shape()[1]), 1
w = tf.Variable(tf.random_normal(w_shape), name="weights")
b = tf.Variable(0.0, name="bias")
linear = tf.add(tf.matmul(X, w), b, name="linear")
return tf.maximum(linear, 0, name="max")
n_features = 3
X = tf.placeholder(tf.float32, shape=(None, n_features), name="X")
relus = [relu(X) for i in range(5)]
output = tf.add_n(relus, name="output")
summary_writer = tf.summary.FileWriter("logs/relu2", tf.get_default_graph())
summary_writer.close()
###Output
_____no_output_____
###Markdown
- Sharing a threshold variable the classic way, by defining it outside of the `relu()` function then passing it as a parameter:
###Code
tf.reset_default_graph()
def relu(X, threshold):
with tf.name_scope("relu"):
w_shape = int(X.get_shape()[1]), 1
w = tf.Variable(tf.random_normal(w_shape), name="weights")
b = tf.Variable(0.0, name="bias")
linear = tf.add(tf.matmul(X, w), b, name="linear")
return tf.maximum(linear, threshold, name="max")
threshold = tf.Variable(0.0, name="threshold")
X = tf.placeholder(tf.float32, shape=(None, n_features), name="X")
relus = [relu(X, threshold) for i in range(5)]
output = tf.add_n(relus, name="output")
###Output
_____no_output_____
###Markdown
Sharing variables- If you want to share a variable between various components of your graph, one simple option is to create it first, then pass it as a parameter to the functions that need it.
###Code
tf.reset_default_graph()
def relu(X, threshold):
with tf.name_scope("relu"):
w_shape = int(X.get_shape()[1]), 1
w = tf.Variable(tf.random_normal(w_shape), name="weights")
b = tf.Variable(0.0, name="bias")
linear = tf.add(tf.matmul(X, w), b, name="linear")
return tf.maximum(linear, threshold, name="max")
threshold = tf.Variable(0.0, name="threshold")
X = tf.placeholder(tf.float32, shape=(None, n_features), name="X")
relus = [relu(X, threshold) for i in range(5)]
output = tf.add_n(relus, name="output")
tf.reset_default_graph()
def relu(X):
with tf.name_scope("relu"):
if not hasattr(relu, "threshold"):
relu.threshold = tf.Variable(0.0, name="threshold")
w_shape = int(X.get_shape()[1]), 1
w = tf.Variable(tf.random_normal(w_shape), name="weights")
b = tf.Variable(0.0, name="bias")
linear = tf.add(tf.matmul(X, w), b, name="linear")
return tf.maximum(linear, relu.threshold, name="max")
X = tf.placeholder(tf.float32, shape=(None, n_features), name="X")
relus = [relu(X) for i in range(5)]
output = tf.add_n(relus, name="output")
tf.reset_default_graph()
def relu(X):
with tf.variable_scope("relu", reuse=True):
threshold = tf.get_variable("threshold", shape=(), initializer=tf.constant_initializer(0.0))
w_shape = int(X.get_shape()[1]), 1
w = tf.Variable(tf.random_normal(w_shape), name="weights")
b = tf.Variable(0.0, name="bias")
linear = tf.add(tf.matmul(X, w), b, name="linear")
return tf.maximum(linear, threshold, name="max")
X = tf.placeholder(tf.float32, shape=(None, n_features), name="X")
with tf.variable_scope("relu"):
threshold = tf.get_variable("threshold", shape=(), initializer=tf.constant_initializer(0.0))
relus = [relu(X) for i in range(5)]
output = tf.add_n(relus, name="output")
summary_writer = tf.summary.FileWriter("logs/relu6", tf.get_default_graph())
summary_writer.close()
tf.reset_default_graph()
def relu(X):
with tf.variable_scope("relu"):
threshold = tf.get_variable("threshold", shape=(), initializer=tf.constant_initializer(0.0))
w_shape = int(X.get_shape()[1]), 1
w = tf.Variable(tf.random_normal(w_shape), name="weights")
b = tf.Variable(0.0, name="bias")
linear = tf.add(tf.matmul(X, w), b, name="linear")
return tf.maximum(linear, threshold, name="max")
X = tf.placeholder(tf.float32, shape=(None, n_features), name="X")
with tf.variable_scope("", default_name="") as scope:
first_relu = relu(X) # create the shared variable
scope.reuse_variables() # then reuse it
relus = [first_relu] + [relu(X) for i in range(4)]
output = tf.add_n(relus, name="output")
summary_writer = tf.summary.FileWriter("logs/relu8", tf.get_default_graph())
summary_writer.close()
tf.reset_default_graph()
with tf.variable_scope("param"):
x = tf.get_variable("x", shape=(), initializer=tf.constant_initializer(0.))
#x = tf.Variable(0., name="x")
with tf.variable_scope("param", reuse=True):
y = tf.get_variable("x")
with tf.variable_scope("", default_name="", reuse=True):
z = tf.get_variable("param/x", shape=(), initializer=tf.constant_initializer(0.))
print(x is y)
print(x.op.name)
print(y.op.name)
print(z.op.name)
###Output
True
param/x
param/x
param/x
|
frl_fake.ipynb | ###Markdown
**Checking the csv field size limit because, from a past attempt to load the data, I get `_csv.Error: field larger than field limit (131072)`**
###Code
csv.field_size_limit()
###Output
_____no_output_____
###Markdown
**The following script, which I found on Stack Overflow, will increase the csv field size limit to the maximum**
###Code
maxInt = sys.maxsize
while True:
# decrease the maxInt value by factor 10
# as long as the OverflowError occurs.
try:
csv.field_size_limit(maxInt)
break
except OverflowError:
maxInt = int(maxInt/10)
csv.field_size_limit()
import pandas as pd
input_data = 's3://sagemaker-studio-zvdmh7fos3/news_cleaned_2018_02_13.csv'
chunksize = 500000 # 500 thousand rows at one go.
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
import csv
from tqdm import tqdm
tqdm.pandas(desc="progress-bar")
import seaborn as sns
import dask.dataframe as dd
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
sns.set(style='darkgrid',palette='Dark2',rc={'figure.figsize':(9,6),'figure.dpi':100}) #setting seaborn plot size and resolution
%%time
df_list = [] # list to hold the batch dataframe
for df_chunk in tqdm(pd.read_csv(input_data, chunksize=chunksize, encoding ='utf-8', engine='python')):
# append the chunk to list and merge all
df_list.append(df_chunk)
%%time
# Merge all the chunked dataframes into one dataframe
frl_df = pd.concat(df_list)
# Delete the chunked dataframe list to release memory
del df_list
# See what we have loaded
frl_df.info()
def missing_value(df):
"""" Function to calculate the number and percent of missing values in a dataframe"""
total = df.isnull().sum().sort_values(ascending=False)
percent = ((df.isnull().sum()/df.isnull().count())*100).sort_values(ascending=False)
missing_value = pd.concat([total, percent], axis=1, keys=['Total','Percent'])
return missing_value
missing_value(frl_df)
print(frl_df.head())
print(frl_df.tail())
list(frl_df.columns)
###Output
_____no_output_____
###Markdown
**Out of the 17 columns, I decided to drop all columns with more than 76% missing values, plus url, id, and Unnamed**
###Code
frl_df.drop(['id','keywords','url','content','meta_description','meta_keywords','authors','summary','source','Unnamed: 0'], axis=1, inplace=True)
frl_df.head()
###Output
_____no_output_____
###Markdown
**Converted the dates to datetime**
###Code
frl_df['scraped_at'] = pd.to_datetime(frl_df['scraped_at'])
frl_df['inserted_at'] = pd.to_datetime(frl_df['inserted_at'])
frl_df['updated_at'] = pd.to_datetime(frl_df['updated_at'])
frl_df.info()
frl_df['scraped_at'] = pd.to_datetime(frl_df['scraped_at'], utc=True)
frl_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 8529090 entries, 0 to 8529089
Data columns (total 7 columns):
# Column Dtype
--- ------ -----
0 domain object
1 type object
2 scraped_at datetime64[ns, UTC]
3 inserted_at datetime64[ns]
4 updated_at datetime64[ns]
5 title object
6 tags object
dtypes: datetime64[ns, UTC](1), datetime64[ns](2), object(4)
memory usage: 455.5+ MB
###Markdown
**Remove the extensions from the domain**
###Code
def clean_domain(text):
"""
Function to remove the extension from the clean_domain
Use .split('.',1)[0] if domain in format: domain.com
alternately, use .split('.')[1] if domain in format: www.domain.com
"""
site_url = text.split('.',1)[0]
return site_url
frl_df['domain'] = frl_df['domain'].astype(str).apply(clean_domain)
print(frl_df.head())
###Output
domain type scraped_at \
0 express rumor 2018-01-25 16:17:44.789555+00:00
1 barenakedislam hate 2018-01-25 16:17:44.789555+00:00
2 barenakedislam hate 2018-01-25 16:17:44.789555+00:00
3 barenakedislam hate 2018-01-25 16:17:44.789555+00:00
4 barenakedislam hate 2018-01-25 16:17:44.789555+00:00
inserted_at updated_at \
0 2018-02-02 01:19:41.756632 2018-02-02 01:19:41.756664
1 2018-02-02 01:19:41.756632 2018-02-02 01:19:41.756664
2 2018-02-02 01:19:41.756632 2018-02-02 01:19:41.756664
3 2018-02-02 01:19:41.756632 2018-02-02 01:19:41.756664
4 2018-02-02 01:19:41.756632 2018-02-02 01:19:41.756664
title tags
0 Is life an ILLUSION? Researchers prove 'realit... NaN
1 Donald Trump NaN
2 Donald Trump NaN
3 MORE WINNING! Israeli intelligence source, DEB... NaN
4 “Oh, Trump, you coward, you just wait, we will... NaN
###Markdown
**Checking the min and max dates for all the date columns**
###Code
print(frl_df['scraped_at'].min(), frl_df['scraped_at'].max() )
print(frl_df['inserted_at'].min(), frl_df['inserted_at'].max() )
print(frl_df['updated_at'].min(), frl_df['updated_at'].max() )
###Output
2018-02-02 01:19:41.756664 2018-02-11 00:14:20.346871
###Markdown
**Since the date range from the `scraped_at` column is the only one that spans two years, it can be used in the model. The other two can be dropped from the dataframe**
###Code
frl_df.drop(['inserted_at','updated_at'], axis=1, inplace=True)
print(frl_df.shape)
print('\n', frl_df.tail())
frl_df.head()
frl_df.groupby('domain').size()
frl_df['domain'].value_counts(ascending=False, dropna=False)
frl_df['domain'].value_counts(ascending=False, dropna=False).tail(20)
frl_df['domain'].value_counts(ascending=False, dropna=False).head(20)
# Distribution of articles by type (assumed intent: value counts of the `type` column)
type_counts = frl_df['type'].value_counts(ascending=False)
type_counts.index
type_counts.values
type_counts.plot.barh()
frl_df['title'].head(15)
frl_df['title'].tail(10)
import nltk
nltk.download('stopwords')
###Output
[nltk_data] Downloading package stopwords to /root/nltk_data...
[nltk_data] Package stopwords is already up-to-date!
|
notebooks/1_2_MV__xAI_tools.ipynb | ###Markdown
SHAP Comparison interpret installed
###Code
# !pip install interpret
from interpret import set_visualize_provider
from interpret.provider import InlineProvider
set_visualize_provider(InlineProvider())
from interpret import show
from interpret.blackbox import ShapKernel
###Output
_____no_output_____
###Markdown
SHAP
###Code
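# Note: a fitted classifier `clf` and the splits X_train, X_test, y_test are assumed to
# come from earlier cells that are not shown in this excerpt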
shap = ShapKernel(predict_fn=clf.predict_proba, data=X_train)
shap_local = shap.explain_local(X_test[:10], y_test[:10])
#show(shap_local)
show(shap_local)
###Output
_____no_output_____
###Markdown
dalex installed
###Code
import dalex as dx
exp = dx.Explainer(clf, X_train, y_train, label = "Cancer Type Prediction")
shap = exp.predict_parts(X_test.iloc[0], type = 'shap', B = 10)
shap.plot(max_vars=10)
###Output
_____no_output_____
###Markdown
SHAP installed
###Code
!pip install shap
import shap
# Create object that can calculate shap values
explainer = shap.TreeExplainer(clf)
# Calculate Shap values
shap_values = explainer.shap_values(X_test)
from sklearn.ensemble import RandomForestClassifier
#rf = RandomForestClassifier(n_estimators=30, max_depth=20, random_state=0, max_features='sqrt',\
# class_weight='balanced')
#rf.fit(X_train, y_train)
# use Kernel SHAP to explain test set predictions
explainer = shap.KernelExplainer(clf.predict_proba, X_train, link="logit")
shap_values = explainer.shap_values(X_test)
# plot the SHAP values for the Setosa output of the first instance
#shap.force_plot(explainer.expected_value[0], shap_values[0][0,:], X_test.iloc[0,:], link="logit")
shap.initjs()
shap.force_plot(explainer.expected_value[0], shap_values[0][0,:], X_test.iloc[0,:], link="logit")
shap.initjs()
shap.force_plot(explainer.expected_value[0], shap_values[0][15,:], X_test.iloc[15,:], link="logit")
###Output
_____no_output_____ |
jupyter/annotation/french/date_matcher_multi_language_fr.ipynb | ###Markdown
DateMatcher multi-language This annotator allows you to specify a source language that will be used to identify temporal keywords and extract dates.
###Code
# Import Spark NLP
from sparknlp.base import *
from sparknlp.annotator import *
from sparknlp.pretrained import PretrainedPipeline
import sparknlp
# Start Spark Session with Spark NLP
# start() functions has two parameters: gpu and spark23
# sparknlp.start(gpu=True) will start the session with GPU support
# sparknlp.start(spark23=True) is when you have Apache Spark 2.3.x installed
spark = sparknlp.start()
spark
sparknlp.version()
###Output
_____no_output_____
###Markdown
French examples Let's import some sentences from news articles where relative dates are present.
###Code
fr_articles = [
("Le dimanche 11 juillet 2021, Chiellini a utilisé le mot Kiricocho lorsque Saka s'est approché du ballon pour le penalty.",),
("La prochaine Coupe du monde aura lieu en novembre 2022.",),
]
###Output
_____no_output_____
###Markdown
Let's fill a DataFrame with the text column
###Code
articles_cols = ["text"]
df = spark.createDataFrame(data=fr_articles, schema=articles_cols)
df.printSchema()
df.show()
###Output
root
|-- text: string (nullable = true)
+--------------------+
| text|
+--------------------+
|Le dimanche 11 ju...|
|La prochaine Coup...|
+--------------------+
###Markdown
Now, let's create a simple pipeline to apply the DateMatcher, specifying the source language
###Code
document_assembler = DocumentAssembler() \
.setInputCol("text") \
.setOutputCol("document")
date_matcher = DateMatcher() \
.setInputCols(['document']) \
.setOutputCol("date") \
.setFormat("MM/dd/yyyy") \
.setSourceLanguage("fr")
### Let's transform the Data
assembled = document_assembler.transform(df)
date_matcher.transform(assembled).select('date').show(10, False)
###Output
+-------------------------------------------------+
|date |
+-------------------------------------------------+
|[[date, 10, 21, 07/11/2021, [sentence -> 0], []]]|
|[[date, 41, 53, 11/01/2022, [sentence -> 0], []]]|
+-------------------------------------------------+
|
03.11.sequence.classification.Multi.biLSTM.dropout.ipynb | ###Markdown
Sequence classification by RNN- Creating the **data pipeline** with `tf.data`- Preprocessing word sequences (variable input sequence length) using `padding technique` by `user function (pad_seq)`- Using `tf.nn.embedding_lookup` for getting vector of tokens (eg. word, character)- Creating the model as **Class**- Reference - https://github.com/golbin/TensorFlow-Tutorials/blob/master/10%20-%20RNN/02%20-%20Autocomplete.py - https://github.com/aisolab/TF_code_examples_for_Deep_learning/blob/master/Tutorial%20of%20implementing%20Sequence%20classification%20with%20RNN%20series.ipynb
###Code
import os
import sys
import time
import string
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
slim = tf.contrib.slim
rnn = tf.contrib.rnn
sess_config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))
###Output
_____no_output_____
###Markdown
Prepare example data
###Code
words = ['good', 'bad', 'amazing', 'so good', 'bull shit', 'awesome', 'how dare', 'very much', 'nice']
y = np.array([[1.,0.], [0.,1.], [1.,0.], [1.,0.], [0.,1.], [1.,0.], [0.,1.], [1.,0.], [1.,0.]])
# Character quantization
char_space = string.ascii_lowercase
char_space = char_space + ' ' + '*' # '*' means padding token
print("char_space: {}".format(char_space))
idx2char = [char for char in char_space]
print("idx2char: {}".format(idx2char))
char2idx = {char : idx for idx, char in enumerate(char_space)}
print("char2idx: {}".format(char2idx))
###Output
_____no_output_____
###Markdown
Create pad_seq function
###Code
def pad_seq(sequences, max_length, dic):
"""Padding sequences
Args:
sequences (list of characters): input data
max_length (int): max length for padding
dic (dictionary): char to index
Returns:
seq_indices (2-rank np.array):
seq_length (1-rank np.array): sequence lengthes of all data
"""
seq_length, seq_indices = [], []
for sequence in sequences:
seq_length.append(len(sequence))
seq_idx = [dic.get(char) for char in sequence]
seq_idx += (max_length - len(seq_idx)) * [dic.get('*')] # 27 is idx of meaningless token "*"
seq_indices.append(seq_idx)
return np.array(seq_indices), np.array(seq_length)
###Output
_____no_output_____
###Markdown
Apply pad_seq function to data
###Code
max_length = 10
X_indices, X_length = pad_seq(sequences=words, max_length=max_length, dic=char2idx)
print("X_indices")
print(X_indices)
print("X_length")
print(X_length)
###Output
_____no_output_____
###Markdown
Define CharRNN class
###Code
class CharRNN:
def __init__(self, seq_indices, seq_length, labels, num_classes, hidden_dims, dic):
# data pipeline
with tf.variable_scope('input_layer'):
self._seq_indices = seq_indices
self._seq_length = seq_length
self._labels = labels
self._keep_prob = tf.placeholder(tf.float32)
one_hot = tf.eye(len(dic), dtype=tf.float32)
self._one_hot = tf.get_variable(name='one_hot_embedding',
initializer=one_hot,
trainable=False) # not trainable: the one-hot embedding vectors are kept fixed
self._seq_embeddings = tf.nn.embedding_lookup(params=self._one_hot,
ids=self._seq_indices)
# MultiLayer bi-directional RNN cell with dropout
with tf.variable_scope('multi_bi-directional_lstm_cell_dropout'):
# forward cell
multi_cells_fw = []
for hidden_dim in hidden_dims:
cell_fw = rnn.BasicLSTMCell(num_units=hidden_dim, state_is_tuple=True)
cell_fw = rnn.DropoutWrapper(cell=cell_fw, output_keep_prob=self._keep_prob)
multi_cells_fw.append(cell_fw)
multi_cells_fw = rnn.MultiRNNCell(cells=multi_cells_fw, state_is_tuple=True)
# backward cell
multi_cells_bw = []
for hidden_dim in hidden_dims:
cell_bw = rnn.BasicLSTMCell(num_units=hidden_dim, state_is_tuple=True)
cell_bw = rnn.DropoutWrapper(cell=cell_bw, output_keep_prob=self._keep_prob)
multi_cells_bw.append(cell_bw)
multi_cells_bw = rnn.MultiRNNCell(cells=multi_cells_bw, state_is_tuple=True)
_, states = tf.nn.bidirectional_dynamic_rnn(multi_cells_fw, multi_cells_bw,
inputs=self._seq_embeddings,
sequence_length=self._seq_length,
dtype=tf.float32)
final_state = tf.concat([states[0][-1].h, states[1][-1].h], axis=1)
with tf.variable_scope('output_layer'):
self._logits = slim.fully_connected(inputs=final_state,
num_outputs=num_classes,
activation_fn=None)
with tf.variable_scope('loss'):
self.loss = tf.losses.softmax_cross_entropy(onehot_labels=self._labels,
logits=self._logits)
with tf.variable_scope('prediction'):
self._prediction = tf.argmax(input=self._logits, axis=-1, output_type=tf.int32)
def predict(self, sess, seq_indices, seq_length):
feed_dict = {self._seq_indices : seq_indices,
self._seq_length : seq_length,
self._keep_prob : 1.0}
return sess.run(self._prediction, feed_dict=feed_dict)
###Output
_____no_output_____
###Markdown
Create a model of CharRNN
###Code
# hyper-parameters
num_classes = 2
learning_rate = 0.003
batch_size = 2
max_epochs = 20
###Output
_____no_output_____
###Markdown
Print dataset
###Code
print("X_indices: \n{}".format(X_indices))
print("X_length: {}".format(X_length))
print("y: \n{}".format(y))
###Output
_____no_output_____
###Markdown
Set up dataset with `tf.data` create input pipeline with `tf.data.Dataset`
###Code
## create data pipeline with tf.data
train_dataset = tf.data.Dataset.from_tensor_slices((X_indices, X_length, y))
train_dataset = train_dataset.shuffle(buffer_size = 100)
train_dataset = train_dataset.batch(batch_size = batch_size)
print(train_dataset)
###Output
_____no_output_____
###Markdown
Define Iterator
###Code
train_iterator = train_dataset.make_initializable_iterator()
seq_indices, seq_length, labels = train_iterator.get_next()
char_rnn = CharRNN(seq_indices=seq_indices, seq_length=seq_length,
labels=labels, num_classes=num_classes,
hidden_dims=[32, 16], dic=char2idx)
###Output
_____no_output_____
###Markdown
Creat training op and train model
###Code
## create training op
optimizer = tf.train.AdamOptimizer(learning_rate)
train_op = optimizer.minimize(char_rnn.loss)
###Output
_____no_output_____
###Markdown
`tf.Session()` and train
###Code
sess = tf.Session()
sess.run(tf.global_variables_initializer())
loss_history = []
step = 0
for epochs in range(max_epochs):
start_time = time.time()
sess.run(train_iterator.initializer)
avg_loss = []
while True:
try:
_, loss_ = sess.run([train_op, char_rnn.loss],
feed_dict={char_rnn._keep_prob: 0.5})
avg_loss.append(loss_)
step += 1
except tf.errors.OutOfRangeError:
#print("End of dataset") # ==> "End of dataset"
break
avg_loss_ = np.mean(avg_loss)
loss_history.append(avg_loss_)
duration = time.time() - start_time
examples_per_sec = batch_size / float(duration)
print("epochs: {}, step: {}, loss: {:g}, ({:.2f} examples/sec; {:.3f} sec/batch)".format(epochs+1, step, avg_loss_, examples_per_sec, duration))
plt.plot(loss_history, label='train')
y_pred = char_rnn.predict(sess=sess, seq_indices=X_indices, seq_length=X_length)
accuracy = np.mean(y_pred==np.argmax(y, axis=-1))
print('training accuracy: {:.2%}'.format(accuracy))
###Output
_____no_output_____ |
examples/api/robot_api.ipynb | ###Markdown
Robot class API Table of contents- [Initialize the robot object](Initialize the robot object)- [Add a device to the robot](Add a device to the robot)- [Get a device object](Get a device object)- [Get a device's CAN_ID](Get a device's CAN_ID)- [Remove a device](Remove a device)- [Get the robot's local time](Get the robot's local time)- [Delay function](Delay function)- [Enable the robot](Enable the robot)- [Disable the robot](Disable the robot)
###Code
from protobot.can_bus import Robot
###Output
_____no_output_____
###Markdown
Initialize the robot object
###Code
robot = Robot()
###Output
_____no_output_____
###Markdown
Add a device to the robot `add_device(name, node_factory, node_id, *node_params)` Parameters:- name: device name- node_factory: node factory class- node_id: node CAN_ID- node_params: node parameters Returns:- the device object
###Code
from protobot.can_bus.nodes import MotorFactory
robot.add_device('motor0', factory = MotorFactory(), node_id = 0x10, reduction=-44)
###Output
_____no_output_____
###Markdown
Get a device object `device(name)` Parameters:- name: device name Returns:- the device object / None
###Code
motor = robot.device('motor0')
###Output
_____no_output_____
###Markdown
Get a device's CAN_ID `device_id(name)` Parameters:- name: device name Returns:- the device CAN_ID / None
###Code
robot.device_id('motor0')
###Output
_____no_output_____
###Markdown
Remove a device `remove_device(name)` Parameters:- name: device name
###Code
robot.remove_device('motor0')
###Output
_____no_output_____
###Markdown
Get the robot's local time `time()` Returns:- time (s)
###Code
robot.time()
###Output
_____no_output_____
###Markdown
Delay function `delay(seconds)` Parameters:- seconds: delay duration (s)
###Code
robot.delay(1)
###Output
_____no_output_____
###Markdown
Enable the robot `enable()`
###Code
robot.enable()
###Output
_____no_output_____
###Markdown
Disable the robot `disable()`
###Code
robot.disable()
###Output
_____no_output_____ |
TPflashBenchmark.ipynb | ###Markdown
###Code
!pip install neqsim
from neqsim.thermo import fluid, createfluid, TPflash,printFrame,fluidcreator
fluid1 = fluid("srk", 303.15, 35.01325)
fluid1.addComponent("nitrogen", 0.0028941);
fluid1.addComponent("CO2", 0.054069291);
fluid1.addComponent("methane", 0.730570915);
fluid1.addComponent("ethane", 0.109004002);
fluid1.addComponent("propane", 0.061518891);
fluid1.addComponent("n-butane", 0.0164998);
fluid1.addComponent("i-butane", 0.006585);
fluid1.addComponent("n-pentane", 0.005953);
fluid1.addComponent("i-pentane", 0.0040184);
fluid1.addTBPfraction("C6", 0.6178399, 86.17801 / 1000.0, 0.6639999);
fluid1.addComponent("water", 0.27082);
fluid1.createDatabase(True);
fluid1.setMixingRule(2);
fluid1.setMultiPhaseCheck(True);
import time
start = time.time()
print("start benchmark...")
for lp in range(5000):
TPflash(fluid1)
end = time.time()
print("time ", (end - start), " sec")
printFrame(fluid1)
###Output
start benchmark...
time 7.368252754211426 sec
total gas oil aqueous
nitrogen 1.5396E-3 3.72093E-3 2.03085E-4 5.3483E-10 [mole fraction]
CO2 2.87637E-2 4.95261E-2 2.03023E-2 2.23283E-5 [mole fraction]
methane 3.88648E-1 8.34819E-1 1.37574E-1 2.19164E-7 [mole fraction]
ethane 5.79878E-2 7.71317E-2 5.97076E-2 5.94714E-10 [mole fraction]
propane 3.27268E-2 2.19863E-2 5.14963E-2 1.29002E-13 [mole fraction]
n-butane 8.77755E-3 2.37476E-3 1.67214E-2 5.04215E-17 [mole fraction]
i-butane 3.50308E-3 1.27767E-3 6.4009E-3 1.67078E-17 [mole fraction]
n-pentane 3.16687E-3 3.21665E-4 6.47506E-3 1.82465E-20 [mole fraction]
i-pentane 2.1377E-3 2.77047E-4 4.3213E-3 2.74085E-20 [mole fraction]
C6_PC 3.28678E-1 7.46058E-3 6.93439E-1 1.16776E-15 [mole fraction]
water 1.44071E-1 1.10398E-3 3.35895E-3 9.99977E-1 [mole fraction]
Density 3.00866E1 6.09739E2 9.97766E2 [kg/m^3]
PhaseFraction 3.88126E-1 4.69806E-1 1.42067E-1 [mole fraction]
MolarMass 4.27477E1 1.98841E1 6.91152E1 1.80156E1 [kg/kmol]
Z factor 9.19721E-1 1.71119E-1 3.3188E-2 [-]
Heat Capacity (Cp) 2.23978E0 2.51135E0 4.80319E0 [kJ/kg*K]
Heat Capacity (Cv) 1.59283E0 2.02406E0 3.50947E0 [kJ/kg*K]
Speed of Sound 3.88861E2 8.42774E2 3.4089E3 [m/sec]
Enthalpy -4.19086E2 1.86211E1 -3.5623E2 -2.53636E3 [kJ/kg]
Entropy -1.38473E0 -1.09405E0 -1.02859E0 -6.77947E0 [kJ/kg*K]
JT coefficient 5.21024E-1 -3.49181E-2 -2.16758E-2 [K/bar]
Viscosity 1.22789E-5 2.19592E-4 8.00023E-4 [kg/m*sec]
Conductivity 3.5515E-2 1.0019E-1 6.22135E-1 [W/m*K]
SurfaceTension 9.74704E-3 9.74704E-3 [N/m]
4.7675E-2 4.7675E-2 [N/m]
6.17181E-2 6.17181E-2 [N/m]
Pressure 35.01325 35.01325 35.01325 [bar]
Temperature 303.15 303.15 303.15 [K]
Model SRK-EOS SRK-EOS SRK-EOS -
Mixing Rule classic classic classic -
Stream -
|
notebooks/01_gtr_jd.ipynb | ###Markdown
Gateway to ResearchThis notebook loads and shows the Gateway to Research dataCheck this [repo](https://github.com/nestauk/gtr_data_processing) for additional information about the GtR data. Preamble
###Code
%run notebook_preamble.ipy
# Functions etc here
import re
from pylab import *
from plotnine import *
import geopandas as gpd
from string import punctuation
from pyproj import Proj
def flatten_list(a_list):
return([x for el in a_list for x in el])
###Output
_____no_output_____
###Markdown
Analysis of pre-processed data
###Code
#Reads in the data that has been processed for university effects
my_path = 'filepath/060819_gtr_creative_sect.csv'
gtr = pd.read_csv(my_path,compression='zip',na_values='[]').iloc[:,1:]
list(gtr)
gtr.head(n=5)
gtr['creative_sector'].value_counts()
###Output
_____no_output_____
###Markdown
Creates a flag for all the categories and individual flags to handle the individual components, i.e. a=['Museums, galleries and libraries', 'Film, TV, video, radio and photography', 'Design', 'Architecture', 'Publishing', 'Advertising and marketing', 'Crafts', 'IT, software and computer services', 'Music, performing and visual arts']
###Code
creative_industry=['Museums, galleries and libraries', 'Film, TV, video, radio and photography', 'Design', 'Architecture', 'Publishing', 'Advertising and marketing', 'Crafts', 'IT, software and computer services', 'Music, performing and visual arts']
#General creative function
def creativesearch(x):
regex = re.compile("|".join(word for word in creative_industry), re.IGNORECASE)
if regex.search(x):
return 1 #This is done as you can't subset dataset with None and not equals operator
else:
return 0
#Domain function
def domain(x,y): # y is the word x is the column it is applied to
regex = re.compile(y, re.IGNORECASE)
if regex.search(x):
return 1 #This is done as you can't subset dataset with None and not equals operator
else:
return 0
#Set as string
gtr[['creative_sector']]=gtr[['creative_sector']].astype(str)
#Apply functions
gtr['creative_flag']=gtr[['creative_sector']].applymap(creativesearch)
#Creates sector flags for each category
for elem in creative_industry:
gtr[elem]=gtr[['creative_sector']].applymap(lambda x:domain(x, elem))
gtr.head(n=6)
###Output
_____no_output_____
###Markdown
Does the count of AI projects by the different creative sectors
###Code
#Sums the dataframe by AI status
countby_ai_status=gtr.groupby(['ai_mod']).sum()
#Drops most of the variables, except the ones we want
countby_ai_status=countby_ai_status[creative_industry+['creative_flag']]
#Pastes to clipboard
countby_ai_status.to_clipboard()
ax=countby_ai_status.loc[True , : ].plot.bar(figsize=(10,5))
ax.set_ylabel('Number of AI related projects')
#view_the_abstracts=gtr['abstract'][(gtr['ai_mod']==True) & (gtr['creative_flag']==1)]
###Output
_____no_output_____
###Markdown
Looks at how the number of projects is changing over time: AI projects
###Code
(ggplot(gtr[gtr['ai_mod']==True],aes(x='year',group='ai_mod',color='ai_mod'))+
geom_freqpoly(binwidth = 1, show_legend=False) +xlab("Year")+ylab("Number of AI projects")+xlim(2007,2018)+ylim(0,300))
###Output
_____no_output_____
###Markdown
Creative projects
###Code
#Was creative_flag_semantic
(ggplot(gtr[gtr['creative_flag']==True],aes(x='year',group='creative_flag',color='creative_flag'))+
geom_freqpoly(binwidth = 1, show_legend=False) +xlab("Year")+ylab("Number of Creative projects")+xlim(2007,2018)+ylim(0,400))
###Output
_____no_output_____
###Markdown
AI and Creative projects
###Code
(ggplot(gtr[(gtr['creative_flag']==True) & (gtr['ai_mod']==1)],aes(x='year'))+
geom_freqpoly(binwidth = 1, show_legend=False) +xlab("Year")+ylab("Number of AI and Creative projects")+xlim(2007,2018)+ylim(0,100))
###Output
_____no_output_____
###Markdown
Spatial analysis
###Code
#Loads data
stem="filepath"
files="Local_Authority_Districts_December_2017_Super_Generalised_Clipped_Boundaries_in_United_Kingdom_WGS84.shp"
#proje="+proj=utm +zone=33 +ellps=WGS84 +datum=WGS84 +units=m +no_defs"
UK_lad=gpd.read_file(stem+files)
UK_lad.crs
#Sets the projection
#UK_lad = UK_lad.to_crs({'init' :'epsg:25832'})
###Output
_____no_output_____
###Markdown
Note: Issue in the projection to resolve
###Code
#Check it's loaded
ax=UK_lad.plot( figsize=(5, 5))
ax.set_title('')
ax.axis('off')
###Output
_____no_output_____
###Markdown
Does spatial counts of local authorities
###Code
#Sorts out the multiple local authorities
#subsets the data so ai and creative only
creative_ai=gtr[(gtr['ai_mod']==True) & (gtr['creative_flag']==1)]
creative_ai.shape
def strip_punctuation(s):
return ''.join(c for c in s if c not in punctuation)
creative_ai['all_lad_code']=creative_ai['all_lad_code'].astype(str)
creative_ai['all_lad_code']=creative_ai['all_lad_code'].map(strip_punctuation)
#Convert the dataframe of lists into one single list
#concatenate the strings
a=''
for elem in creative_ai['all_lad_code']:
a=a+' '+str(elem)
# split them to get a list
a=a.split()
###Output
_____no_output_____
###Markdown
Does a table of the number of local authorities in list
###Code
from collections import Counter
#Count the elements of the dataframe
d=Counter(a)
#Convert the counter to a dataframe
ai_creative_count = pd.DataFrame.from_dict(d, orient='index').reset_index()
#sort out the column names
ai_creative_count.rename(columns={'index':'la_code', 0:'project count'}, inplace=True)
ai_creative_count.head(n=5)
#Merges the two datasets
UK_lad=UK_lad.merge(ai_creative_count, how='left', left_on='lad17cd' , right_on='la_code')
UK_lad.tail(n=5)
###Output
_____no_output_____
###Markdown
Local Authority map for all participating organisations
###Code
UK_lad['project count']=UK_lad['project count'].fillna(0)
ax=UK_lad.plot(column='project count', cmap='cool', figsize=(15,15))
ax.set_title('')
ax.axis('off')
###Output
_____no_output_____
###Markdown
Table of local authority counts for all participating organisations
###Code
tabs=UK_lad[['lad17nm' ,'project count']].sort_values(by='project count', ascending=False)
#set as integer
tabs['project count']=tabs['project count'].astype(int)
#renames the columns
tabs.rename(columns={'lad17nm':'local authority', 'project count':'project partner count'}, inplace=True)
#drops the index
tabs=tabs.reset_index(drop=True)
tabs.head(n=12)
tabs.to_clipboard()
###Output
_____no_output_____
###Markdown
Topic analysis of the data at the intersection of AI and the creative industries
###Code
#import sklearn
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.feature_extraction import stop_words
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import NMF, LatentDirichletAllocation
from sklearn import metrics #for the cluster metrics like silhoute score
from sklearn import manifold #for TSNE
import numpy as np
import re
from string import punctuation
from time import time
#Select the data, admittedly a small sample
df=gtr[(gtr['ai_mod']==True) & (gtr['creative_flag']==1) ]
column_names = ['abstract']
df[column_names].shape
###Output
_____no_output_____
###Markdown
Text cleaning
###Code
#Sets to lower case
df[column_names] = df[column_names].applymap(lambda x: x.lower())
#Removes the utf characters
def utfremove(x): #Need the \ to escape the "
return re.sub(r"u'|u\"", "", x)
df[column_names] = df[column_names].applymap(utfremove)
#Removes new line characters
def nlremove(x): #Need the \ to escape the "
return re.sub(r"\\n", "", x)
#Removes hyperlinks
def htmlremove(x):
return re.sub(r"http\S+", "", x)
df[column_names] = df[column_names].applymap(htmlremove)
#Removes punctuation
def strip_punctuation(s):
return ''.join(c for c in s if c not in punctuation)
df[column_names] = df[column_names].applymap(strip_punctuation)
#Removes numbers
def numremove(x):
return re.sub("\d+", "", x)
df[column_names] = df[column_names].applymap(numremove)
#Removes stopwords
def stopremove(x):
from nltk.corpus import stopwords
stop = stopwords.words('english')
querywords = x.split()
stopwords= list(stop_words.ENGLISH_STOP_WORDS)
resultwords = [word for word in querywords if word.lower() not in stopwords]
result = ' '.join(resultwords)
return(result)
#Removes the stop words
df[column_names] = df[column_names].applymap(stopremove)
print(df.shape)
###Output
_____no_output_____
###Markdown
Document term matrix and tfidf
###Code
# The tfidf stage
#Maximum number of features
n_features=200
x=df['abstract']
# TfidfVectorizer converts a collection of raw documents to a matrix of TF-IDF features.
#max_df gives the highest proportion of documents that words are allowed to appear in
tfidf_vectorizer = TfidfVectorizer(max_df=0.8, min_df=5, max_features=n_features, stop_words='english',ngram_range=(1,2))
t0 = time()
tfidf = tfidf_vectorizer.fit_transform(x)
print("done in %0.3fs." % (time() - t0))
#Converts the tfidf to a data frame which can be viewed
tfidfdata=pd.DataFrame(tfidf.toarray(), columns=tfidf_vectorizer.get_feature_names())
# Use tf (raw term count) features
tf_vectorizer = CountVectorizer(max_df=0.8, min_df=5, max_features=n_features,stop_words='english', ngram_range=(1,2))
t0 = time()
tf = tf_vectorizer.fit_transform(x)
print("done in %0.3fs." % (time() - t0))
print()
#import print_function
from time import time
n_samples = 2000
n_features = 1000
n_top_words = 10
def print_top_words(model, feature_names, n_top_words):
for topic_idx, topic in enumerate(model.components_):
message = "Topic #%d: " % topic_idx
message += " ".join([feature_names[i]
for i in topic.argsort()[:-n_top_words - 1:-1]]) #argsort() returns the indices that sort an array
print(message)
print("Fitting LDA models with tf features, " "n_samples=%d and n_features=%d..." % (n_samples, n_features))
#Notes this needs python 3 to work
lda = LatentDirichletAllocation(n_components=3, max_iter=5,
learning_method='online',
learning_offset=50.,
random_state=0)
t0 = time()
#Fits the model to the term inverse document frequency matrix
lda.fit(tfidf)
print("done in %0.3fs." % (time() - t0))
print("\nTopics in LDA model:")
tf_feature_names = tf_vectorizer.get_feature_names() #Gets the names of the words the tern frequency is defined over
print_top_words(lda, tf_feature_names, n_top_words)
###Output
_____no_output_____ |
notebooks/losses_evaluation/Dstripes/basic/ell/convolutional/AE/DstripesAE_Convolutional_reconst_1ell_01ssim.ipynb | ###Markdown
Settings
###Code
%env TF_KERAS = 1
import os
sep_local = os.path.sep
import sys
sys.path.append('..'+sep_local+'..')
print(sep_local)
os.chdir('..'+sep_local+'..'+sep_local+'..'+sep_local+'..'+sep_local+'..')
print(os.getcwd())
import tensorflow as tf
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Dataset loading
###Code
dataset_name='Dstripes'
import tensorflow as tf
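# Note: image_size, batch_size, latents_dim and the training/testing generators are assumed
# to come from earlier data-preparation cells that are not shown in this excerpt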
train_ds = tf.data.Dataset.from_generator(
lambda: training_generator,
output_types=tf.float32 ,
output_shapes=tf.TensorShape((batch_size, ) + image_size)
)
test_ds = tf.data.Dataset.from_generator(
lambda: testing_generator,
output_types=tf.float32 ,
output_shapes=tf.TensorShape((batch_size, ) + image_size)
)
_instance_scale=1.0
for data in train_ds:
_instance_scale = float(data[0].numpy().max())
break
_instance_scale
import numpy as np
from collections.abc import Iterable
if isinstance(inputs_shape, Iterable):
_outputs_shape = np.prod(inputs_shape)
_outputs_shape
###Output
_____no_output_____
###Markdown
Model's Layers definition
###Code
units=20
c=50
enc_lays = [
tf.keras.layers.Conv2D(filters=units, kernel_size=3, strides=(2, 2), activation='relu'),
tf.keras.layers.Conv2D(filters=units*9, kernel_size=3, strides=(2, 2), activation='relu'),
tf.keras.layers.Flatten(),
# No activation
tf.keras.layers.Dense(latents_dim)
]
dec_lays = [
tf.keras.layers.Dense(units=c*c*units, activation=tf.nn.relu),
tf.keras.layers.Reshape(target_shape=(c , c, units)),
tf.keras.layers.Conv2DTranspose(filters=units, kernel_size=3, strides=(2, 2), padding="SAME", activation='relu'),
tf.keras.layers.Conv2DTranspose(filters=units*3, kernel_size=3, strides=(2, 2), padding="SAME", activation='relu'),
# No activation
tf.keras.layers.Conv2DTranspose(filters=3, kernel_size=3, strides=(1, 1), padding="SAME")
]
###Output
_____no_output_____
###Markdown
Model definition
###Code
model_name = dataset_name+'AE_Convolutional_reconst_1ell_01ssmi'
experiments_dir='experiments'+sep_local+model_name
from training.autoencoding_basic.autoencoders.autoencoder import autoencoder as AE
inputs_shape=image_size
variables_params = \
[
{
'name': 'inference',
'inputs_shape':inputs_shape,
'outputs_shape':latents_dim,
'layers': enc_lays
}
,
{
'name': 'generative',
'inputs_shape':latents_dim,
'outputs_shape':inputs_shape,
'layers':dec_lays
}
]
from utils.data_and_files.file_utils import create_if_not_exist
_restore = os.path.join(experiments_dir, 'var_save_dir')
create_if_not_exist(_restore)
_restore
#to restore trained model, set filepath=_restore
ae = AE(
name=model_name,
latents_dim=latents_dim,
batch_size=batch_size,
variables_params=variables_params,
filepath=None
)
from evaluation.quantitive_metrics.structural_similarity import prepare_ssim_multiscale
from statistical.losses_utilities import similarity_to_distance
from statistical.ae_losses import expected_loglikelihood as ell
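# Combined loss: expected log-likelihood reconstruction term plus 0.1 * a distance derived from
# multiscale SSIM similarity, matching the "reconst_1ell_01ssim" naming of this experiment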
ae.compile(loss={'x_logits': lambda x_true, x_logits: ell(x_true, x_logits)+ 0.1*similarity_to_distance(prepare_ssim_multiscale([ae.batch_size]+ae.get_inputs_shape()))(x_true, x_logits)})
###Output
_____no_output_____
###Markdown
Callbacks
###Code
from training.callbacks.sample_generation import SampleGeneration
from training.callbacks.save_model import ModelSaver
es = tf.keras.callbacks.EarlyStopping(
monitor='loss',
min_delta=1e-12,
patience=12,
verbose=1,
restore_best_weights=False
)
ms = ModelSaver(filepath=_restore)
csv_dir = os.path.join(experiments_dir, 'csv_dir')
create_if_not_exist(csv_dir)
csv_dir = os.path.join(csv_dir, ae.name+'.csv')
csv_log = tf.keras.callbacks.CSVLogger(csv_dir, append=True)
csv_dir
image_gen_dir = os.path.join(experiments_dir, 'image_gen_dir')
create_if_not_exist(image_gen_dir)
sg = SampleGeneration(latents_shape=latents_dim, filepath=image_gen_dir, gen_freq=5, save_img=True, gray_plot=False)
###Output
_____no_output_____
###Markdown
Model Training
###Code
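# Note: eval_dataset, gts_csv and gtu_csv are assumed to be defined in earlier cells not shown here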
from training.callbacks.disentangle_supervied import DisentanglementSuperviedMetrics
from training.callbacks.disentangle_unsupervied import DisentanglementUnsuperviedMetrics
gts_mertics = DisentanglementSuperviedMetrics(
ground_truth_data=eval_dataset,
representation_fn=lambda x: ae.encode(x),
random_state=np.random.RandomState(0),
file_Name=gts_csv,
num_train=10000,
num_test=100,
batch_size=batch_size,
continuous_factors=False,
gt_freq=10
)
gtu_mertics = DisentanglementUnsuperviedMetrics(
ground_truth_data=eval_dataset,
representation_fn=lambda x: ae.encode(x),
random_state=np.random.RandomState(0),
file_Name=gtu_csv,
num_train=20000,
num_test=500,
batch_size=batch_size,
gt_freq=10
)
ae.fit(
x=train_ds,
input_kw=None,
steps_per_epoch=int(1e4),
epochs=int(1e6),
verbose=2,
callbacks=[ es, ms, csv_log, sg, gts_mertics, gtu_mertics],
workers=-1,
use_multiprocessing=True,
validation_data=test_ds,
validation_steps=int(1e4)
)
###Output
_____no_output_____
###Markdown
Model Evaluation inception_score
###Code
from evaluation.generativity_metrics.inception_metrics import inception_score
is_mean, is_sigma = inception_score(ae, tolerance_threshold=1e-6, max_iteration=200)
print(f'inception_score mean: {is_mean}, sigma: {is_sigma}')
###Output
_____no_output_____
###Markdown
Frechet_inception_distance
###Code
from evaluation.generativity_metrics.inception_metrics import frechet_inception_distance
fis_score = frechet_inception_distance(ae, training_generator, tolerance_threshold=1e-6, max_iteration=10, batch_size=32)
print(f'frechet inception distance: {fis_score}')
###Output
_____no_output_____
###Markdown
perceptual_path_length_score
###Code
from evaluation.generativity_metrics.perceptual_path_length import perceptual_path_length_score
ppl_mean_score = perceptual_path_length_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200, batch_size=32)
print(f'perceptual path length score: {ppl_mean_score}')
###Output
_____no_output_____
###Markdown
precision score
###Code
from evaluation.generativity_metrics.precision_recall import precision_score
_precision_score = precision_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200)
print(f'precision score: {_precision_score}')
###Output
_____no_output_____
###Markdown
recall score
###Code
from evaluation.generativity_metrics.precision_recall import recall_score
_recall_score = recall_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200)
print(f'recall score: {_recall_score}')
###Output
_____no_output_____
###Markdown
Image Generation image reconstruction Training dataset
###Code
%load_ext autoreload
%autoreload 2
from training.generators.image_generation_testing import reconstruct_from_a_batch
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'reconstruct_training_images_like_a_batch_dir')
create_if_not_exist(save_dir)
reconstruct_from_a_batch(ae, training_generator, save_dir)
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'reconstruct_testing_images_like_a_batch_dir')
create_if_not_exist(save_dir)
reconstruct_from_a_batch(ae, testing_generator, save_dir)
###Output
_____no_output_____
###Markdown
with Randomness
###Code
from training.generators.image_generation_testing import generate_images_like_a_batch
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'generate_training_images_like_a_batch_dir')
create_if_not_exist(save_dir)
generate_images_like_a_batch(ae, training_generator, save_dir)
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'generate_testing_images_like_a_batch_dir')
create_if_not_exist(save_dir)
generate_images_like_a_batch(ae, testing_generator, save_dir)
###Output
_____no_output_____
###Markdown
Complete Randomness
###Code
from training.generators.image_generation_testing import generate_images_randomly
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'random_synthetic_dir')
create_if_not_exist(save_dir)
generate_images_randomly(ae, save_dir)
from training.generators.image_generation_testing import interpolate_a_batch
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'interpolate_dir')
create_if_not_exist(save_dir)
interpolate_a_batch(ae, testing_generator, save_dir)
###Output
100%|██████████| 15/15 [00:00<00:00, 19.90it/s]
|
docs/notebook/admin/users.ipynb | ###Markdown
[admin] Users commandThe `users` command in `admin` scope could help you manage users. Setup PrimeHub Python SDK
###Code
from primehub import PrimeHub, PrimeHubConfig
ph = PrimeHub(PrimeHubConfig())
if ph.is_ready():
print("PrimeHub Python SDK setup successfully")
else:
print("PrimeHub Python SDK couldn't get the group information, follow the 00-getting-started.ipynb to complete it")
###Output
_____no_output_____
###Markdown
Help documentation
###Code
help(ph.admin.users)
###Output
_____no_output_____
###Markdown
User management---```$ primehub admin usersUsage: primehub admin users Manage usersAvailable Commands: create Create a user delete Delete an user by id get Get an user by id list List users reset-password Reset password by id update Update the user```---For `create` and `update` actions are needed a configuration to mutate a user. Here is the fields table: Fields| field | required | type | description || --- | --- | --- | --- || username | required | string | lower case alphanumeric characters, '-', '.', and underscores ("_") are allowed, and must start with a letter or numeric.` || email | optional | string | a valid email || firstName | optional | string | || lastName | optional | string | || isAdmin | optional | boolean | grant the administrator role to the user || volumeCapacity | optional | int | customize the size of the user volume. unit: `GB`|| groups | optional | assign the user to groups | please see the `connect` examples |These fields are only used with email activation (only for `create`):| field | required | type | description || --- | --- | --- | --- || sendEmail | optional | boolean | send an activation email to the user. (it worked if the smtp was set)|| resetActions.set | optional | string[] | ask for actions, valid actions: `['VERIFY_EMAIL', 'UPDATE_PASSWORD']` || expiresIn | optional | int | expired duration for the activation email | Examples You could find [more examples on our github](https://github.com/InfuseAI/primehub-python-sdk/blob/main/docs/CLI/admin/users.md).
###Code
# Create a user with admin role
config = {
"username": "user-admin-from-jupyter",
"groups": {
"connect": [
{
"id": ph.group_id
}
]
},
"isAdmin": True
}
data = ph.admin.users.create(config)
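# A hedged sketch (not run here): create a user using the email-activation fields from the
# table above. It assumes SMTP is configured on the PrimeHub instance; the username, email
# and expiry value are placeholders, and the {"set": [...]} nesting for "resetActions.set"
# is an assumption based on the field name.
config_with_activation = {
    "username": "user-activation-example",
    "email": "user-activation-example@example.com",
    "sendEmail": True,
    "resetActions": {"set": ["VERIFY_EMAIL", "UPDATE_PASSWORD"]},
    "expiresIn": 86400,
    "groups": {"connect": [{"id": ph.group_id}]}
}
# ph.admin.users.create(config_with_activation)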
# List users
list(ph.admin.users.list())
# Get the user details
ph.admin.users.get(data['id'])
# Delete the user
ph.admin.users.delete(data['id'])
###Output
_____no_output_____ |
COMPLETE_ER_Copy_of_LS_DSPT3_111_A_First_Look_at_Data.ipynb | ###Markdown
Lambda School Data Science - A First Look at Data Lecture - let's explore Python DS libraries and examples!The Python Data Science ecosystem is huge. You've seen some of the big pieces - pandas, scikit-learn, matplotlib. What parts do you want to see more of?
###Code
2 + 2
import numpy as np
np.random.randint(0,10, size=10)
# if you place a ? at the end of something, it will bring up help
import matplotlib.pyplot as plt
x = [1, 2, 3, 4]
y = [2, 4, 6, 8]
print(x,y)
plt.scatter(x,y)
plt.plot(x,y, color='g')
import pandas as pd
df = pd.DataFrame({'first_col':x, 'second_col':y})
df
df['first_col']
df['second_col']
df.shape
# four rows (observations) and two columns
df['third_col'] = df['first_col'] + 2*df['second_col']
df
df.shape
arr_1 = np.random.randint(low=0, high=100, size=10000)
arr_2 = np.random.randint(low=0, high=100, size=10000)
arr_1.shape
arr_2.shape
arr_1 + arr_2
x+y
# the result of the operation will depend on the variable types that you use
# for Python lists, + concatenates (appends) the lists rather than adding element-wise
type(arr_1)
# a numpy 'N' dimensional array
df
df['fourth_col'] = df['third_col'] > 10
df
df['third_col'] > 10
df.shape
df[df['second_col'] < 10]
###Output
_____no_output_____
###Markdown
Assignment - now it's your turnPick at least one Python DS library, and using documentation/examples reproduce in this notebook something cool. It's OK if you don't fully understand it or get it 100% working, but do put in effort and look things up.
###Code
# TODO - your code here
# Use what we did live in lecture as an example
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
arr_A = np.random.randint(low=-10000, high=10000, size=1000000)
# print(arr_A)
arr_B = np.random.randint(low=-10000, high=10000, size=1000000)
arr_B.shape
arr_B + arr_A
x = np.all(arr_B*arr_A).reshape(1)
y = np.any(arr_A*arr_B).reshape(1)
x.shape, y.shape
z = np.arange(24*24*24).reshape(8,6,4)
plt.scatter(arr_B,arr_A, color='g')
plt.rcParams['agg.path.chunksize'] = 10000
plt.plot(arr_B,arr_A, color='g')
df.shape
df2 = pd.DataFrame({'first_col':arr_A, 'second_col':arr_B})
df2.shape
df2.head()
df2['third_col'] = df2['first_col'] + 2*df2['second_col']
df2['fourth_col'] = df2['third_col'] > 10000
df2['fifth_col'] = df2['fourth_col'] < 10000
df2.head()
df2.describe
xyz=np.array(np.random.random((1000,5000)))
x=xyz[:,0]
y=xyz[:,10]
z=xyz[:,50]*100
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.plot(x,y,z)
plt.show()
fig = plt.figure()
ax = fig.add_subplot(222, projection='3d')
pnt3d=ax.scatter(x,y,z,c=z)
cbar=plt.colorbar(pnt3d)
cbar.set_label("Fart (Smellers)")
plt.show()
###Output
_____no_output_____ |
src/notebooks/2-content-based-recommenders.ipynb | ###Markdown
Content-based Recommenders 1 About the DataFrom the [source](https://www.kaggle.com/prajitdatta/movielens-100k-dataset/):> MovieLens data sets were collected by the GroupLens Research Projectat the University of Minnesota.> This data set consists of:* 100,000 ratings (1-5) from 943 users on 1682 movies. * Each user has rated at least 20 movies. * Simple demographic info for the users (age, gender, occupation, zip) > The data was collected through the MovieLens web site during the seven-month period from September 19th, 1997 through April 22nd, 1998. 2 Reading the Data 2.1 Items
###Code
import math
import numpy as np
import pandas as pd
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.preprocessing import normalize
###Output
_____no_output_____
###Markdown
Item data is stored as a delimiter-separated values file with `sep='|'`. The file contains no headers, so we need to input the column names ourselves.We also need to set `encoding='ISO-8859-1'` to avoid encoding errors when reading the data.
###Code
items_colnames = ['movie_id', 'title', 'release_date', 'video_release_date',
'imdb_url', 'unknown', 'action', 'adventure', 'animation',
'children', 'comedy', 'crime', 'documentary', 'drama',
'fantasy', 'film_noir', 'horror', 'musical', 'mystery',
'romance', 'sci_fi', 'thriller', 'war', 'western']
# Make sure you unzip the .zip file in src/data/ before running this cell
items_all_columns = pd.read_csv('../data/ml-100k/u.item', sep='|', header=None,
names=items_colnames, encoding='ISO-8859-1')
###Output
_____no_output_____
###Markdown
We will drop the columns we don't need to build our recommender.
###Code
items_clean = items_all_columns.drop(['release_date', 'video_release_date',
'imdb_url'], axis=1)
###Output
_____no_output_____
###Markdown
2.2 Ratings User data, on the other side, is stored as a tab-delimited file. It contains no headers as well, so we need to input them manually.
###Code
ratings_colnames = ['user_id', 'movie_id', 'rating', 'timestamp']
ratings_all_columns = pd.read_csv('../data/ml-100k/u.data', sep='\t',
header=None, names=ratings_colnames)
###Output
_____no_output_____
###Markdown
We will drop the `timestamp` column, as we will not need it throughout the exercise.
###Code
ratings_clean = ratings_all_columns.drop(['timestamp'], axis=1)
###Output
_____no_output_____
###Markdown
2.3 Read All We will put it all together to create out `item` and `user` dataframes, that we will use to build our recommender.
###Code
# Make sure you unzip the .zip file in src/data/ before running this cell
def make_data():
items = make_items_data()
ratings = make_ratings_data()
return items, ratings
def make_items_data():
items_colnames = ['movie_id', 'title', 'release_date', 'video_release_date',
'imdb_url', 'unknown', 'action', 'adventure', 'animation',
'children', 'comedy', 'crime', 'documentary', 'drama',
'fantasy', 'film_noir', 'horror', 'musical', 'mystery',
'romance', 'sci_fi', 'thriller', 'war', 'western']
items_all_columns = pd.read_csv('../data/ml-100k/u.item', sep='|',
header=None, names=items_colnames,
encoding='ISO-8859-1')
items_clean = items_all_columns.drop(['release_date', 'video_release_date',
'imdb_url'], axis=1)
return items_clean
def make_ratings_data():
ratings_colnames = ['user_id', 'movie_id', 'rating', 'timestamp']
ratings_all_columns = pd.read_csv('../data/ml-100k/u.data', sep='\t',
header=None, names=ratings_colnames)
ratings_clean = ratings_all_columns.drop(['timestamp'], axis=1)
return ratings_clean
items, ratings = make_data()
###Output
_____no_output_____
###Markdown
The `items` dataframe contains 19 genres: a 1 indicates the movie is of that genre, a 0 indicates it is not, and movies can be in several genres at once.
###Code
items.head(n=3)
items.describe()
###Output
_____no_output_____
###Markdown
The `ratings` dataframe contains the full dataset: 100,000 ratings (1-5) by 943 users on 1,682 items. Each user has rated at least 20 movies.
###Code
ratings.head(n=3)
ratings.describe()
###Output
_____no_output_____
###Markdown
2 Building a Content-Based Filtering Recommender (TL;DR)The whole point of content-based filtering is to build up a profile of the things a user likes, and use it to *predict* his or her liking of other items.The universe of all possible item attributes defines a *content-space*, and each item has a position in that space (see [vector space](https://en.wikipedia.org/wiki/Vector_space_model)), that describes its content.The key concept is building a vector of item attribute preferences for each user - what we call a *user profile* - and using it to make predictions.Item profiles can be combined with user actions to create the user profiles we need to match against future items.The user profile is a vector in the same content-space, and the match between the user's profile and the item is measured by how closely the two align.This is how this is typically done:1. Collect or compute item vectors that describe items in the corpus' content-space (e.g. document text, keywords, tags, metadata)2. Use item vectors and user actions to build user profiles as vectors that reveal user preferences in the same content-space3. Predict user interest in previously unseen items of the corpus. 2 Item Attributes 2.1 Item Vectors In content-based recommenders, preferences are defined as *content*: a set of attributes that describe the items we are recommending.We should start by modelling items according to their relevant attributes, e.g. movies relative to their genres.The good thing is: *this is already done for us* in the dataset! Terry Gilliam's Twelve Monkeys is modelled as drama and sci-fi, for example.
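To make the idea concrete, here is a minimal numerical sketch of this pipeline, using toy attribute vectors and ratings rather than the MovieLens data:

```python
import numpy as np

# Toy content-space with three attributes: [action, comedy, drama]
item_vectors = np.array([
    [1, 0, 1],   # item A: action + drama
    [0, 1, 0],   # item B: comedy
    [1, 1, 0],   # item C: action + comedy
])
ratings = np.array([5, 1, 4])           # one user's ratings for A, B, C

user_profile = ratings @ item_vectors   # rating-weighted sum of item vectors
new_item = np.array([0, 0, 1])          # an unseen, drama-only item
print(user_profile, user_profile @ new_item)
```

The real dataset below follows the same pattern, just with 19 genre attributes.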
###Code
items.iloc[6]
###Output
_____no_output_____
###Markdown
From there, and this is the idea that underlies most recommender systems, we use the principle of **stable preferences**.Assuming that user preferences are stable over time, we can *reveal* those preferences by attribute, inferring them from the items the user liked in the past.From there, we can simply recommend new items with the attributes the user prefers the most. We call this *content-based filtering* (or CBF, from here onwards).**In short: if I like Twelve Monkeys, then presumably I like drama and sci-fi.**Note you could use attributes or *tags* other than movie genres, like the director or the main actors, for example. 2.2 Creating User Profiles What we want to do is to combine the user ratings with the metadata for each movie. There are different strategies, but we will `merge` them.
###Code
user_profile_data = ratings.merge(items)
user_profile_data.head(n=3)
###Output
_____no_output_____
###Markdown
We will select all columns corresponding to tags, and `multiply` them by the ratings.This means that each tag, if present in the movie, will have a weight in the user profile equal to the rating given by the user to the movie.
###Code
user_profile_data.head()
user_ratings = user_profile_data['rating']
user_profile_tags = user_profile_data.iloc[:, 4:].multiply(user_ratings, axis=0)
user_profile_tags.head()
###Output
_____no_output_____
###Markdown
A user's profile is a vector composed of the user's relative preferences for each tag. A way to accomplish this is to sum all of the user's ratings per tag.
###Code
user_profiles = pd.concat([user_profile_data.iloc[:, 0:1], user_profile_tags],
axis=1).groupby('user_id').sum()
user_profiles.head()
###Output
_____no_output_____
###Markdown
Check a user's profile below: his preferences can be described as 20% *drama*, 15% *comedy*, 12% *action*, 9% *thriller*, and so on.
###Code
user_profiles.loc[1].sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
Now we can see how these preferences relate to each of the movies in our corpus, to make predictions.But first, let's create a function with all the logic above.
###Code
def make_user_profiles(users, items):
user_profile_data = users.merge(items)
user_ratings = user_profile_data['rating']
user_profile_tags = user_profile_data.iloc[:, 4:].multiply(user_ratings,
axis=0)
user_profiles = pd.concat([user_profile_data.iloc[:, 0:1],
user_profile_tags], axis=1).groupby('user_id')
user_profiles = user_profiles.sum()
return user_profiles
user_profiles = make_user_profiles(ratings, items)
###Output
_____no_output_____
###Markdown
2.3 Predictions Let's start by selecting a user.
###Code
user_id = 1
user = user_profiles.loc[user_id]
###Output
_____no_output_____
###Markdown
Then, we will exclude the movies that he has already rated.
###Code
items_rated = ratings[ratings.user_id == user_id].movie_id
items_unseen = items.set_index('movie_id').drop(items_rated.values)
items_unseen = items_unseen.drop(['title'], axis=1)
###Output
_____no_output_____
###Markdown
Now we can make our predictions for the remaining items, based on the user's taste profile.Now that we have our user's generic profile, containing his relative preference for each tag, we can extrapolate that to make predictions for other items.A simple way to accomplish this would be to multiply each movie profile by the user taste, using a [dot-product](https://en.wikipedia.org/wiki/Dot_product) (also called the inner product).
###Code
predictions = items_unseen.dot(user)
predictions = predictions.sort_values(ascending=False)
predictions.head(n=3)
###Output
_____no_output_____
###Markdown
And we have a winner! Turns out the most recommended movie for the user is [Best Men](http://www.imdb.com/title/tt0118702/).
###Code
items[items.movie_id == predictions.index[0]].title
###Output
_____no_output_____
###Markdown
Wrapping it all together as a function.
###Code
def make_predictions(user_profiles, items, user_id):
user = user_profiles.loc[user_id]
items_rated = ratings[ratings.user_id == user_id].movie_id
items_unseen = items.set_index('movie_id').drop(items_rated.values)
items_unseen = items_unseen.drop(['title'], axis=1)
predictions = items_unseen.dot(user)
predictions = predictions.sort_values(ascending=False)
return predictions
predictions = make_predictions(user_profiles, items, user_id=1).head(n=3)
###Output
_____no_output_____
###Markdown
3 Item NormalizationYou may have noticed that a movie with many genres checked will have more influence on the user profile than one with only a few. (Why?)In order to adjust that, we must normalize the item vectors.It's important to make all the vectors the same length, so we don't penalize more obscure items.*Normalizing* is exactly that: transforming all our item vectors into *unit vectors* of length 1 with the same direction.We accomplish this with the following formula:$$ \vec{u} = {\frac{\vec{v}}{\parallel\vec{v}\parallel}} $$Where:$$ \parallel\vec{v}\parallel = \sqrt{v_1^2 + v_2^2 + ... + v_n^2} $$So, we take the non-normalized vector and divide it (i.e. all its components) by its own magnitude, also called length, or *norm*.
###Code
items_normalized = items.drop(['title'], axis=1).set_index('movie_id')
items_norms = items_normalized.apply(lambda x: math.sqrt((x*x).sum()), axis=1)
items_normalized = items_normalized.divide(items_norms, axis=0)
items_normalized.head(n=3)
###Output
_____no_output_____
###Markdown
In practice, it's not necessary to normalize a DataFrame's rows by hand: we've got scikit-learn for that!So, we're going to define a function to normalize a matrix's rows, using `normalize` from `sklearn.preprocessing`:
###Code
def normalize_matrix_rows(df):
return pd.DataFrame(normalize(df, norm='l2', axis=1), columns=df.columns,
index=df.index)
items_normalized = items.drop(['title'], axis=1).set_index('movie_id')
items_normalized = normalize_matrix_rows(items_normalized)
items_normalized.head(3)
def normalize_item_vectors(items):
items_colnames = list(items.columns.values)
items_normalized = items.drop(['title'], axis=1).set_index('movie_id')
items_normalized = normalize_matrix_rows(items_normalized)
items_normalized = pd.concat([items_normalized.reset_index(), items.title], axis=1)
items_normalized = items_normalized[items_colnames]
return items_normalized
items_normalized = normalize_item_vectors(items)
user_profiles = make_user_profiles(ratings, items_normalized)
predictions = make_predictions(user_profiles, items_normalized, user_id=1)
predictions.head(n=3)
items[items.movie_id == predictions.index[0]].title
###Output
_____no_output_____
###Markdown
Now the most recommended movie for the user is [Private Parts](http://www.imdb.com/title/tt0119951/?ref_=fn_al_tt_1).His taste is just terrible! :D An alternative to normalizing the movie profiles and then computing the dot product with the user taste is to just compute the cosine between each movie profile and the user taste.This is because the cosine is a dot-product that is already normalized:$$ cos(\vec{u}, \vec{v}) = \frac{\langle \vec{u}, \vec{v} \rangle}{\parallel\vec{u}\parallel\parallel\vec{v}\parallel} $$In fact, the predictions will be *scaled by the norm of the user taste vector*, but that's ok because the ordering of the items is kept unchanged.
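As a quick sanity check on the claim that the ordering is preserved, here is a small sketch with toy vectors (not the MovieLens data):

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.preprocessing import normalize

user_toy = np.array([[3.0, 1.0, 0.0]])
items_toy = np.array([[1.0, 0.0, 0.0],
                      [1.0, 1.0, 0.0],
                      [0.0, 1.0, 1.0]])

by_cosine = cosine_similarity(items_toy, user_toy).ravel()
by_norm_dot = normalize(items_toy) @ user_toy.ravel()
print(np.argsort(-by_cosine), np.argsort(-by_norm_dot))  # identical orderings
```

The function below applies the same idea to the real data with `cosine_similarity`.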
###Code
def make_predictions_normalized(user_profiles, items, user_id):
user = user_profiles.loc[user_id]
items_rated = ratings[ratings.user_id == user_id].movie_id
items_unseen = items.set_index('movie_id').drop(items_rated.values)
items_unseen = items_unseen.drop(['title'], axis=1)
predictions = pd.Series(cosine_similarity(items_unseen, user.values.reshape(1,-1)).transpose()[0],
index=items_unseen.index)
predictions = predictions.sort_values(ascending=False)
return predictions
predictions = make_predictions_normalized(user_profiles, items, user_id=1)
predictions.head(3)
###Output
_____no_output_____
###Markdown
4 Attribute Relevance What are the key attributes or *differentiators* of any given item, based on the different frequencies of each attribute?TF-IDF stands for *Term Frequency - Inverse Document Frequency* and is a *weighting function*, initially applied in information retrieval and adapted to content-based filtering.Why do we need it? Because *not all terms are equally relevant* to describe an item. TF-IDF assumes that rare terms have more descriptive power.Now, be aware though that rarity doesn't imply more significance in all contexts, but we will assume it does for the sake of this example. 4.1 TF-IDF Weighting* Term Frequency (TF), i.e. *intensity* = Number of occurrences of a term in the document* Inverse Document Frequency (IDF), i.e. *distinctiveness* = How few documents contain this term, where:$$ IDF _{term} = log\left({\frac{TotalDocuments}{DocumentsWithTerm}} \right) $$And, thus:$$ TFIDF _{term} = TF _{term} * IDF _{term} $$Or, in short, we measure *the term frequency, weighted by its rarity in the entire corpus*. 4.2 TagsTypically, TF-IDF would be applied to documents, containing words in them, and each word being a *term*.A more interesting application though uses *tags*: individual words or phrases, that are applied by the community to describe the item. Just like words in a document, tags can be applied to an item by many different users, thus appearing multiple times.Additionally, some tags are rare, while others are quite common in our collection, thus we also need IDF to assess descriptive power.$$ IDF _{tag} = log\left({\frac{TotalDocuments}{DocumentsWithTag}} \right) $$And, thus:$$ TFIDF _{tag} = TF _{tag} * IDF _{tag} $$What TF-IDF does is automatically demote common tags and promote core tags instead. 4.3 MetadataWe will start by counting the number of movies containing each one of the tags, or what we call the tag or document frequency.
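Before computing the real tag frequencies, here is a minimal sketch of the standard TF-IDF weighting described above, using a toy tag corpus rather than the MovieLens data:

```python
import math

docs = [
    {"drama", "action"},
    {"drama", "comedy"},
    {"drama", "film_noir"},
]
n_docs = len(docs)

def idf(tag):
    docs_with_tag = sum(tag in d for d in docs)
    return math.log(n_docs / docs_with_tag)

# each tag appears at most once per "document", so TF is 1 here
print({tag: round(idf(tag), 3) for tag in ["drama", "action", "film_noir"]})
# 'drama' appears everywhere -> IDF 0; the rarer tags get a higher weight
```

The cells below compute the real tag frequencies for the MovieLens genres.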
###Code
tag_frequency = items.drop(['title'], axis=1).set_index('movie_id').sum()
tag_frequency.sort_values()
###Output
_____no_output_____
###Markdown
Following the reasoning above, tags like `fantasy` or `film-noir` should have more descriptive weight.Now, we need the inverse document frequency. We will apply a slightly different formula:$$ IDF _{tag} = {\frac{1}{log(DocumentsWithAttribute)}} $$
###Code
inverse_tag_frequency = tag_frequency.apply(lambda x: 1/math.log(x))
inverse_tag_frequency.sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
Now, what we want to do is to weight the user profile (i.e. the frequencies) with the inverse tag frequencies, so that more obscure tags count more.First, let's look at the original user profile.
###Code
user
###Output
_____no_output_____
###Markdown
Let's apply the inverse frequency weights to the user vector, to promote distinctive tags the most.
###Code
user.multiply(inverse_tag_frequency, axis=0)
###Output
_____no_output_____
###Markdown
We can use this strategy to recompute the predictions.
###Code
def make_predictions_relevance(user_profiles, items, idf, user_id):
user = user_profiles.loc[user_id]
items_rated = ratings[ratings.user_id == user_id].movie_id
items_unseen = items.set_index('movie_id').drop(items_rated.values)
items_unseen = items_unseen.drop(['title'], axis=1)
predictions = items_unseen.dot(user.multiply(idf, axis=0))
predictions = predictions.sort_values(ascending=False)
return predictions
predictions = make_predictions_relevance(user_profiles, items_normalized,
inverse_tag_frequency, user_id=1)
predictions.head(n=3)
items[items.movie_id == predictions.index[0]].title
###Output
_____no_output_____ |
DataPreperation/6_T-SNE.ipynb | ###Markdown
t-SNE: t-Distributed Stochastic Neighbor Embedding Import Required Libraries
###Code
import pandas as pd
from sklearn.manifold import TSNE
from sklearn.datasets import load_iris
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
###Output
_____no_output_____
###Markdown
Load iris dataset (inbuilt in sklearn)
###Code
dataset = load_iris()
features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
target = 'species'
iris = pd.DataFrame(
dataset.data,
columns=features)
iris[target] = dataset.target
iris.head()
###Output
_____no_output_____
###Markdown
Computing t-SNE
###Code
RANDOM_STATE = 42
tsne = TSNE(n_components=2, n_iter=1000, random_state=RANDOM_STATE)
points = tsne.fit_transform(iris[features])
###Output
_____no_output_____
###Markdown
Plot output
###Code
sns.set()
sns.set(rc={"figure.figsize": (10, 8)})
PALETTE = sns.color_palette('deep', n_colors=3)
CMAP = ListedColormap(PALETTE.as_hex())
flower_id_map = {0: 'Iris-setosa', 1: 'Iris-versicolor', 2: 'Iris-virginica'}
from matplotlib import pyplot as plt
import numpy as np
import math
with plt.style.context('seaborn-whitegrid'):
plt.figure(figsize=(15, 6))
for lab, col in zip((0, 1, 2),
('blue', 'red', 'green')):
plt.scatter(points[iris['species']==lab, 0],
points[iris['species']==lab, 1],
label=flower_id_map[lab],
c=col)
plt.title('Iris dataset visualized with t-SNE', fontsize=20, y=1.03)
    plt.xlabel('t-SNE dimension 1')
    plt.ylabel('t-SNE dimension 2')
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
###Output
_____no_output_____ |
Tensorflow - DeepLearning.AI - 01.Introduction to TensorFlow for AI, ML and DL/Week_01_A_New_Programming_Paradigm/NeuralNetwork.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
The Hello World of Deep Learning with Neural Networks Like every first app you should start with something super simple that shows the overall scaffolding for how your code works. In the case of creating neural networks, the sample I like to use is one where it learns the relationship between two numbers. So, for example, if you were writing code for a function like this, you already know the 'rules' — ```float hw_function(float x){ float y = (2 * x) - 1; return y;}```So how would you train a neural network to do the equivalent task? Using data! By feeding it with a set of Xs, and a set of Ys, it should be able to figure out the relationship between them. This is obviously a very different paradigm than what you might be used to, so let's step through it piece by piece. ImportsLet's start with our imports. Here we are importing TensorFlow and calling it tf for ease of use.We then import a library called numpy, which helps us to represent our data as lists easily and quickly.The framework for defining a neural network as a set of Sequential layers is called keras, so we import that too.
###Code
import tensorflow as tf
import numpy as np
from tensorflow import keras
###Output
_____no_output_____
###Markdown
Define and Compile the Neural NetworkNext we will create the simplest possible neural network. It has 1 layer, and that layer has 1 neuron, and the input shape to it is just 1 value.
###Code
model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
###Output
_____no_output_____
###Markdown
Now we compile our Neural Network. When we do so, we have to specify 2 functions, a loss and an optimizer.If you've seen lots of math for machine learning, here's where it's usually used, but in this case it's nicely encapsulated in functions for you. But what happens here — let's explain...We know that in our function, the relationship between the numbers is y=2x-1. When the computer is trying to 'learn' that, it makes a guess...maybe y=10x+10. The LOSS function measures the guessed answers against the known correct answers and measures how well or how badly it did.It then uses the OPTIMIZER function to make another guess. Based on how the loss function went, it will try to minimize the loss. At that point maybe it will come up with something like y=5x+5, which, while still pretty bad, is closer to the correct result (i.e. the loss is lower). It will repeat this for the number of EPOCHS which you will see shortly. But first, here's how we tell it to use 'MEAN SQUARED ERROR' for the loss and 'STOCHASTIC GRADIENT DESCENT' for the optimizer. You don't need to understand the math for these yet, but you can see that they work! :)Over time you will learn the different and appropriate loss and optimizer functions for different scenarios.
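As a small, hedged aside (not part of the original lesson), the mean squared error for a bad guess like y=10x+10 can be computed directly, which makes it easier to see what the loss is measuring:

```python
import numpy as np

xs_demo = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
ys_true = 2 * xs_demo - 1          # the real rule: y = 2x - 1
ys_guess = 10 * xs_demo + 10       # a bad first guess: y = 10x + 10
print(np.mean((ys_true - ys_guess) ** 2))   # large loss -> bad guess
```

The `compile` call below wires the loss and optimizer choices into the model.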
###Code
model.compile(optimizer='sgd', loss='mean_squared_error')
###Output
_____no_output_____
###Markdown
Providing the DataNext up we'll feed in some data. In this case we are taking 6 Xs and 6 Ys. You can see that the relationship between these is that y=2x-1, so where x = -1, y = -3, etc. A Python library called 'NumPy' provides lots of array-type data structures that are a de facto standard way of doing it. We declare that we want to use these by specifying the values as an np.array[]
###Code
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)
###Output
_____no_output_____
###Markdown
Training the Neural Network The process of training the neural network, where it 'learns' the relationship between the Xs and Ys, is in the **model.fit** call. This is where it will go through the loop we spoke about above, making a guess, measuring how good or bad it is (aka the loss), using the optimizer to make another guess etc. It will do it for the number of epochs you specify. When you run this code, you'll see the loss on the right hand side.
###Code
model.fit(xs, ys, epochs=500)
###Output
Epoch 1/500
1/1 [==============================] - 2s 2s/step - loss: 57.5203
Epoch 2/500
1/1 [==============================] - 0s 19ms/step - loss: 45.6763
Epoch 3/500
1/1 [==============================] - 0s 7ms/step - loss: 36.3494
Epoch 4/500
1/1 [==============================] - 0s 6ms/step - loss: 29.0029
Epoch 5/500
1/1 [==============================] - 0s 7ms/step - loss: 23.2148
Epoch 6/500
1/1 [==============================] - 0s 5ms/step - loss: 18.6528
Epoch 7/500
1/1 [==============================] - 0s 6ms/step - loss: 15.0556
Epoch 8/500
1/1 [==============================] - 0s 4ms/step - loss: 12.2178
Epoch 9/500
1/1 [==============================] - 0s 6ms/step - loss: 9.9774
Epoch 10/500
1/1 [==============================] - 0s 4ms/step - loss: 8.2073
Epoch 11/500
1/1 [==============================] - 0s 3ms/step - loss: 6.8073
Epoch 12/500
1/1 [==============================] - 0s 4ms/step - loss: 5.6987
Epoch 13/500
1/1 [==============================] - 0s 3ms/step - loss: 4.8194
Epoch 14/500
1/1 [==============================] - 0s 5ms/step - loss: 4.1208
Epoch 15/500
1/1 [==============================] - 0s 6ms/step - loss: 3.5643
Epoch 16/500
1/1 [==============================] - 0s 5ms/step - loss: 3.1199
Epoch 17/500
1/1 [==============================] - 0s 4ms/step - loss: 2.7638
Epoch 18/500
1/1 [==============================] - 0s 4ms/step - loss: 2.4773
Epoch 19/500
1/1 [==============================] - 0s 4ms/step - loss: 2.2457
Epoch 20/500
1/1 [==============================] - 0s 5ms/step - loss: 2.0573
Epoch 21/500
1/1 [==============================] - 0s 3ms/step - loss: 1.9032
Epoch 22/500
1/1 [==============================] - 0s 3ms/step - loss: 1.7761
Epoch 23/500
1/1 [==============================] - 0s 5ms/step - loss: 1.6703
Epoch 24/500
1/1 [==============================] - 0s 3ms/step - loss: 1.5815
Epoch 25/500
1/1 [==============================] - 0s 4ms/step - loss: 1.5062
Epoch 26/500
1/1 [==============================] - 0s 4ms/step - loss: 1.4415
Epoch 27/500
1/1 [==============================] - 0s 6ms/step - loss: 1.3854
Epoch 28/500
1/1 [==============================] - 0s 3ms/step - loss: 1.3360
Epoch 29/500
1/1 [==============================] - 0s 3ms/step - loss: 1.2922
Epoch 30/500
1/1 [==============================] - 0s 3ms/step - loss: 1.2527
Epoch 31/500
1/1 [==============================] - 0s 4ms/step - loss: 1.2168
Epoch 32/500
1/1 [==============================] - 0s 5ms/step - loss: 1.1838
Epoch 33/500
1/1 [==============================] - 0s 3ms/step - loss: 1.1532
Epoch 34/500
1/1 [==============================] - 0s 4ms/step - loss: 1.1246
Epoch 35/500
1/1 [==============================] - 0s 4ms/step - loss: 1.0976
Epoch 36/500
1/1 [==============================] - 0s 3ms/step - loss: 1.0720
Epoch 37/500
1/1 [==============================] - 0s 4ms/step - loss: 1.0475
Epoch 38/500
1/1 [==============================] - 0s 4ms/step - loss: 1.0241
Epoch 39/500
1/1 [==============================] - 0s 4ms/step - loss: 1.0016
Epoch 40/500
1/1 [==============================] - 0s 3ms/step - loss: 0.9798
Epoch 41/500
1/1 [==============================] - 0s 3ms/step - loss: 0.9588
Epoch 42/500
1/1 [==============================] - 0s 4ms/step - loss: 0.9384
Epoch 43/500
1/1 [==============================] - 0s 3ms/step - loss: 0.9185
Epoch 44/500
1/1 [==============================] - 0s 3ms/step - loss: 0.8992
Epoch 45/500
1/1 [==============================] - 0s 4ms/step - loss: 0.8804
Epoch 46/500
1/1 [==============================] - 0s 4ms/step - loss: 0.8620
Epoch 47/500
1/1 [==============================] - 0s 5ms/step - loss: 0.8441
Epoch 48/500
1/1 [==============================] - 0s 4ms/step - loss: 0.8266
Epoch 49/500
1/1 [==============================] - 0s 6ms/step - loss: 0.8095
Epoch 50/500
1/1 [==============================] - 0s 4ms/step - loss: 0.7927
Epoch 51/500
1/1 [==============================] - 0s 3ms/step - loss: 0.7764
Epoch 52/500
1/1 [==============================] - 0s 5ms/step - loss: 0.7604
Epoch 53/500
1/1 [==============================] - 0s 3ms/step - loss: 0.7447
Epoch 54/500
1/1 [==============================] - 0s 3ms/step - loss: 0.7294
Epoch 55/500
1/1 [==============================] - 0s 4ms/step - loss: 0.7143
Epoch 56/500
1/1 [==============================] - 0s 4ms/step - loss: 0.6996
Epoch 57/500
1/1 [==============================] - 0s 4ms/step - loss: 0.6852
Epoch 58/500
1/1 [==============================] - 0s 4ms/step - loss: 0.6712
Epoch 59/500
1/1 [==============================] - 0s 4ms/step - loss: 0.6574
Epoch 60/500
1/1 [==============================] - 0s 4ms/step - loss: 0.6438
Epoch 61/500
1/1 [==============================] - 0s 3ms/step - loss: 0.6306
Epoch 62/500
1/1 [==============================] - 0s 4ms/step - loss: 0.6177
Epoch 63/500
1/1 [==============================] - 0s 3ms/step - loss: 0.6050
Epoch 64/500
1/1 [==============================] - 0s 4ms/step - loss: 0.5925
Epoch 65/500
1/1 [==============================] - 0s 4ms/step - loss: 0.5804
Epoch 66/500
1/1 [==============================] - 0s 3ms/step - loss: 0.5684
Epoch 67/500
1/1 [==============================] - 0s 4ms/step - loss: 0.5568
Epoch 68/500
1/1 [==============================] - 0s 4ms/step - loss: 0.5453
Epoch 69/500
1/1 [==============================] - 0s 4ms/step - loss: 0.5341
Epoch 70/500
1/1 [==============================] - 0s 4ms/step - loss: 0.5231
Epoch 71/500
1/1 [==============================] - 0s 3ms/step - loss: 0.5124
Epoch 72/500
1/1 [==============================] - 0s 4ms/step - loss: 0.5019
Epoch 73/500
1/1 [==============================] - 0s 4ms/step - loss: 0.4916
Epoch 74/500
1/1 [==============================] - 0s 3ms/step - loss: 0.4815
Epoch 75/500
1/1 [==============================] - 0s 5ms/step - loss: 0.4716
Epoch 76/500
1/1 [==============================] - 0s 3ms/step - loss: 0.4619
Epoch 77/500
1/1 [==============================] - 0s 5ms/step - loss: 0.4524
Epoch 78/500
1/1 [==============================] - 0s 4ms/step - loss: 0.4431
Epoch 79/500
1/1 [==============================] - 0s 4ms/step - loss: 0.4340
Epoch 80/500
1/1 [==============================] - 0s 7ms/step - loss: 0.4251
Epoch 81/500
1/1 [==============================] - 0s 5ms/step - loss: 0.4164
Epoch 82/500
1/1 [==============================] - 0s 4ms/step - loss: 0.4078
Epoch 83/500
1/1 [==============================] - 0s 3ms/step - loss: 0.3994
Epoch 84/500
1/1 [==============================] - 0s 5ms/step - loss: 0.3912
Epoch 85/500
1/1 [==============================] - 0s 5ms/step - loss: 0.3832
Epoch 86/500
1/1 [==============================] - 0s 4ms/step - loss: 0.3753
Epoch 87/500
1/1 [==============================] - 0s 4ms/step - loss: 0.3676
Epoch 88/500
1/1 [==============================] - 0s 5ms/step - loss: 0.3601
Epoch 89/500
1/1 [==============================] - 0s 4ms/step - loss: 0.3527
Epoch 90/500
1/1 [==============================] - 0s 3ms/step - loss: 0.3454
Epoch 91/500
1/1 [==============================] - 0s 5ms/step - loss: 0.3383
Epoch 92/500
1/1 [==============================] - 0s 4ms/step - loss: 0.3314
Epoch 93/500
1/1 [==============================] - 0s 3ms/step - loss: 0.3246
Epoch 94/500
1/1 [==============================] - 0s 4ms/step - loss: 0.3179
Epoch 95/500
1/1 [==============================] - 0s 6ms/step - loss: 0.3114
Epoch 96/500
1/1 [==============================] - 0s 6ms/step - loss: 0.3050
Epoch 97/500
1/1 [==============================] - 0s 4ms/step - loss: 0.2987
Epoch 98/500
1/1 [==============================] - 0s 4ms/step - loss: 0.2926
Epoch 99/500
1/1 [==============================] - 0s 3ms/step - loss: 0.2866
Epoch 100/500
1/1 [==============================] - 0s 2ms/step - loss: 0.2807
Epoch 101/500
1/1 [==============================] - 0s 6ms/step - loss: 0.2749
Epoch 102/500
1/1 [==============================] - 0s 3ms/step - loss: 0.2693
Epoch 103/500
1/1 [==============================] - 0s 3ms/step - loss: 0.2637
Epoch 104/500
###Markdown
Ok, now you have a model that has been trained to learn the relationship between X and Y. You can use the **model.predict** method to have it figure out the Y for a previously unknown X. So, for example, if X = 10, what do you think Y will be? Take a guess before you run this code:
###Code
print(model.predict([10.0]))
###Output
[[18.975655]]
|
Films101.ipynb | ###Markdown
Filmsite Functions
###Code
## Basic stuff
%load_ext autoreload
%autoreload
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
display(HTML("""<style>div.output_area{max-height:10000px;overflow:scroll;}</style>"""))
## Python Version
import sys
print("Python: {0}".format(sys.version))
%load_ext autoreload
%autoreload
from films101 import films101
from timeUtils import clock
import datetime as dt
start = dt.datetime.now()
print("Notebook Last Run Initiated: "+str(start))
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
Get/Parse/Merge/Process Films101 Data
###Code
f101 = films101()
f101.getFilms101YearlyData(startYear=1900, endYear=2018, debug=True)
_, _ = clock("Last Run")
f101.parseFilms101Data(debug=False)
_, _ = clock("Last Run")
###Output
Found 1 movies in 1902
Found 1 movies in 1903
Found 1 movies in 1914
Found 2 movies in 1915
Found 2 movies in 1916
Found 1 movies in 1919
Found 1 movies in 1920
Found 5 movies in 1921
Found 1 movies in 1922
Found 5 movies in 1923
Found 6 movies in 1924
Found 7 movies in 1925
Found 2 movies in 1926
Found 8 movies in 1927
Found 13 movies in 1928
Found 9 movies in 1929
Found 9 movies in 1930
Found 13 movies in 1931
Found 20 movies in 1932
Found 16 movies in 1933
Found 18 movies in 1934
Found 19 movies in 1935
Found 22 movies in 1936
Found 23 movies in 1937
Found 15 movies in 1938
Found 23 movies in 1939
Found 27 movies in 1940
Found 17 movies in 1941
Found 18 movies in 1942
Found 16 movies in 1943
Found 15 movies in 1944
Found 17 movies in 1945
Found 15 movies in 1946
Found 17 movies in 1947
Found 19 movies in 1948
Found 14 movies in 1949
Found 15 movies in 1950
Found 14 movies in 1951
Found 8 movies in 1952
Found 16 movies in 1953
Found 17 movies in 1954
Found 21 movies in 1955
Found 19 movies in 1956
Found 19 movies in 1957
Found 14 movies in 1958
Found 20 movies in 1959
Found 18 movies in 1960
Found 17 movies in 1961
Found 19 movies in 1962
Found 15 movies in 1963
Found 16 movies in 1964
Found 12 movies in 1965
Found 12 movies in 1966
Found 14 movies in 1967
Found 21 movies in 1968
Found 18 movies in 1969
Found 11 movies in 1970
Found 19 movies in 1971
Found 11 movies in 1972
Found 20 movies in 1973
Found 13 movies in 1974
Found 15 movies in 1975
Found 12 movies in 1976
Found 13 movies in 1977
Found 15 movies in 1978
Found 20 movies in 1979
Found 15 movies in 1980
Found 24 movies in 1981
Found 19 movies in 1982
Found 15 movies in 1983
Found 19 movies in 1984
Found 22 movies in 1985
Found 22 movies in 1986
Found 27 movies in 1987
Found 23 movies in 1988
Found 19 movies in 1989
Found 21 movies in 1990
Found 15 movies in 1991
Found 23 movies in 1992
Found 21 movies in 1993
Found 28 movies in 1994
Found 22 movies in 1995
Found 15 movies in 1996
Found 17 movies in 1997
Found 18 movies in 1998
Found 17 movies in 1999
Found 20 movies in 2000
Found 19 movies in 2001
Found 24 movies in 2002
Found 23 movies in 2003
Found 26 movies in 2004
Found 24 movies in 2005
Found 27 movies in 2006
Found 20 movies in 2007
Found 8 movies in 2008
Found 5 movies in 2009
Found 3 movies in 2010
Found 5 movies in 2011
Found 21 movies in 2012
Found 30 movies in 2013
Found 25 movies in 2014
Found 31 movies in 2015
Found 34 movies in 2016
Found 18 movies in 2017
Found 16 movies in 2018
Saving 105 Years of Filmsite Data to /Users/tgadfort/Documents/code/movies/filmsite/results/filmsite.json
Current Time is Thu Feb 14, 2019 19:31:08 for Last Run
|
SlicingByCycleAndTool.ipynb | ###Markdown
Demo notebook for how to study kHz motor data, partitioned over time according to machining stepsThe goal here is to first turn the raw modal and partcount data into an index table of "machining steps." This will then be suitable for programmatically slicing out chunks of kHz data. We can then plot individual slices and/or compute various time series features. Import packages, customize plot settings
###Code
"""
Missing any packages on your system? Uncomment these lines to install
Depending on your jupyter environment setup, you may need to replace `pip` -> `conda`
"""
# import sys
# !{sys.executable} -m pip install "numpy>=1.18.1"
# !{sys.executable} -m pip install "pandas>=1.0.1"
# !{sys.executable} -m pip install "matplotlib>=3.1.3"
# !{sys.executable} -m pip install "seaborn>=0.10.0"
import numpy as np
import pandas as pd
from time import time, sleep
from IPython.display import display, clear_output
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
matplotlib.rcParams['figure.dpi'] = 100
matplotlib.rcParams['axes.titlesize'] = 20
matplotlib.rcParams['axes.titleweight'] = 'bold'
matplotlib.rcParams['axes.labelsize'] = 16
matplotlib.rcParams['legend.fontsize'] = 16
def bigplot(xsize=16, ysize=5):
return plt.subplots(figsize=(xsize, ysize))
###Output
_____no_output_____
###Markdown
Specify which data we'll load inFor this demo, we'll just assume that you downloaded and unzipped some csv files. Change as needed.
###Code
khz_file_names = [
'data/2021-01-08T12-43-40Z_2021-01-08T13-13-49Z_khz.csv',
]
modal_file_names = [
'data/2021-01-08T12-28-37Z_2021-01-08T12-58-37Z_modal.csv',
'data/2021-01-08T12-58-37Z_2021-01-08T13-28-37Z_modal.csv',
]
partcount_file_names = [
'data/2021-01-08T12-28-45Z_2021-01-08T12-58-45Z_partcount_status.csv',
'data/2021-01-08T12-58-45Z_2021-01-08T13-28-45Z_partcount_status.csv',
]
###Output
_____no_output_____
###Markdown
Load the dense kHz time series and lightly groom it
###Code
df_khz = (
pd.concat([
pd.read_csv(file_name, comment='#')
.rename(columns={
'PATH-1 NAME-S1 SPINDLE-1 torque command':'load',
'PATH-1 NAME-S1 SPINDLE-1 motor speed':'speed'
})
[['timestamp', 'load', 'speed']]
for file_name in khz_file_names
])
.assign(
timestamp=lambda dfx: pd.to_datetime(dfx['timestamp']), # timestamp strings -> pandas timestamps
load=lambda dfx: dfx['load'] * 150 / 16384 # convert to "human-readable" load (% continuous rating)
)
.sort_values('timestamp') # just in case files were loaded out-of-order!
.set_index('timestamp')
)
t_start, t_end = df_khz.index[0], df_khz.index[-1]
display(df_khz)
bigplot()
df_khz.iloc[800000:900000]['speed'].plot()
plt.ylabel('Speed (RPM)')
plt.show()
###Output
_____no_output_____
###Markdown
Load modal and partcount data into a common machine-state dataframe, sorted by time
###Code
df_state = (
pd.concat([
pd.read_csv(file_name, comment='#')
.query('key == "T" or key == "partcount"')
for file_name in [*modal_file_names, *partcount_file_names]
])
.assign(timestamp=lambda dfx: pd.to_datetime(dfx['timestamp'])) # timestamp strings -> pandas timestamps
.sort_values('timestamp')
.set_index('timestamp')
)
print("Long-format table of machine state evolution over time:")
display(df_state)
gr_state = df_state.groupby('key')['value']
for key in ['partcount', 'T']:
df_state[key] = gr_state.transform(lambda gr: gr if gr.name == key else pd.NA)
print("Wide-format table of machine state evolution over time:")
display(df_state[['partcount', 'T']])
###Output
Long-format table of machine state evolution over time:
###Markdown
Machine state -> Machining stepsDerive the start-time of each successive machining step, indexed by cycle and tool.We'll treat everything as if on a single path.Repeated tool numbers in the same cycle will be deduped in a simple way, as follows: 404, 404, 404 -> 404.00, 404.01, 404.02
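The dedupe trick is easier to see on a toy example than inside the long chained `assign` below; a minimal sketch with made-up cycle/tool numbers:

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({"cycle": [1, 1, 1, 1], "T": [404, 404, 404, 303]})
toy["T_deduped"] = (
    toy.groupby(["cycle", "T"])["T"]
       .transform(lambda x: x + 0.01 * np.arange(len(x)))
)
print(toy)   # 404 -> 404.00, 404.01, 404.02; 303 stays 303.00
```

The pipeline below applies the same transform to the real machine-state table.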
###Code
df_steps = (
df_state
.drop(columns=['path', 'key', 'value', 'ms since last read'])
.ffill()
.loc[lambda dfx: dfx.index != np.roll(dfx.index, -1)] # collapse redundant timestamps
.loc[t_start:t_end] # focus on kHz data time range
.assign(cycle=lambda dfx: # "cycle number" based on time range of interest
(dfx['partcount'].diff() != 0)
.fillna(False)
.astype(int)
.cumsum()
)
.assign(T=lambda dfx: # elementary dedupe of repeated tool periods, e.g. 404, 404, 404 -> 404.00, 404.01, 404.02
dfx
.groupby(['cycle', 'T'])
['T']
.transform(lambda x: x + 0.01 * np.arange(len(x)))
)
[['cycle', 'T']]
.reset_index() # pop out timestamp index -> column
.set_index(['cycle', 'T'])
)
print("Machining steps as indexers for time:")
display(df_steps)
print("Available cycles:", df_steps.index.unique(level='cycle').to_list())
print("Available tools:", sorted(df_steps.index.unique(level='T').to_list()))
###Output
Machining steps as indexers for time:
###Markdown
Define the time slicing functions
###Code
def get_cycle(cycle):
"""
Get a specific cycle from the kHz time series using efficient Pandas time slicing
"""
if cycle not in df_steps.index.unique(level='cycle'):
return df_khz.iloc[0:0]
start_timestamp, end_timestamp = df_steps.loc[cycle, 'timestamp'].iloc[[0, -1]]
if cycle + 1 in df_steps.index.unique(level='cycle'):
end_timestamp = df_steps.loc[cycle + 1, 'timestamp'].iloc[0]
return df_khz.loc[start_timestamp:end_timestamp]
def get_cycle_tool(cycle, T):
"""
Get a specific cycle & tool use period from the kHz time series using efficient Pandas time slicing
"""
if (cycle, T) not in df_steps.index:
return df_khz.iloc[0:0]
start_timestamp = df_steps.at[(cycle, T), 'timestamp']
end_timestamp = df_steps.shift(-1).at[(cycle, T), 'timestamp']
return df_khz.loc[start_timestamp:end_timestamp] if pd.notna(end_timestamp) else df_khz.iloc[0:0]
###Output
_____no_output_____
###Markdown
Test it out
###Code
test_cycle, test_tool = 4, 303.0
print(df_steps.loc[(test_cycle, test_tool)]['timestamp'], 'thru',
df_steps.shift(-1).loc[(test_cycle, test_tool)]['timestamp'])
dft = get_cycle_tool(test_cycle, test_tool)
display(dft)
bigplot()
dft['load'].plot()
plt.title(f'Cycle = {test_cycle}, Tool = {test_tool}')
plt.ylabel('Load (% continuous rated)')
plt.show()
print(f"All of cycle {test_cycle}:")
fix, ax = bigplot()
get_cycle(test_cycle)['load'].plot()
# plt.title(f'Cycle = {test_cycle}')
plt.ylabel('Load (% continuous rated)')
for tool in df_steps.loc[test_cycle].index:
t_tool = df_steps.at[(test_cycle, tool), 'timestamp']
color = 'black' if tool > 1.0 else '0.7'
plt.axvline(t_tool, color=color, linestyle='--')
if tool > 1.0:
plt.text(t_tool, ax.get_ylim()[1]*1.05, f'T{tool}',
color='red' if tool == test_tool else 'black', fontsize=10, rotation=90)
plt.show()
###Output
2021-01-08 13:08:19.341000+00:00 thru 2021-01-08 13:08:29.049000+00:00
###Markdown
Feature extraction per machining step
###Code
feature_rows = []
previous_cycle = 0
print("Working on cycle:", end=' ')
for cycle, T in df_steps.index:
if cycle != previous_cycle:
print(cycle, end=' ')
previous_cycle = cycle
dft = get_cycle_tool(cycle, T)
feature_rows.append({
'duration':len(dft),
'load_integral':dft['load'].sum(),
'load_median':dft['load'].median(),
# INSERT WHATEVER ELSE YOU WANT HERE!
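        # a few optional extras one might add here (assumptions; not used by the plots below):
        # 'load_max': dft['load'].max(),
        # 'speed_median': dft['speed'].median(),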
})
print()
df_features = pd.DataFrame(index=df_steps.index, data=feature_rows)
display(df_features)
all_tools = [t for t in sorted(df_steps.index.unique(level='T').to_list()) if t > 1.0]  # a list (not a one-shot iterator) so it can be reused as a default argument
def plot_feature(feature, tools=all_tools, **plt_kwargs):
"""
Plot a derived time series feature versus partcount
Each tool use period is overlayed as a separate line
Features are automatically normalized to the median per-tool
"""
dft = df_features.swaplevel()
fix, ax = bigplot()
for T in tools:
(dft.loc[T, feature] / dft.loc[T, feature].median()).plot(ax=ax, label=f'tool {T}', **plt_kwargs)
ax.set_xlabel(f'partcount')
ax.set_ylabel('value / (tool median)', fontsize=20)
ax.set_title(feature)
plt.gca().relim()
plt.legend()
plt.show()
test_tools = [202.0, 505.0, 606.0]
plot_feature('load_median', tools=test_tools, ylim=[0.9, 1.2])
plot_feature('load_integral', tools=test_tools, ylim=[0.95, 1.05])
###Output
_____no_output_____ |
example_of_intent_and_entity_classification_with_NLU_engine_class.ipynb | ###Markdown
Example of intent and entity classification with NLU engine classThis is just a small example notebook to help users understand how to use the NLU engine.* Intent example* Entity example Load data set. For this example, we will use the cleaned dataset, although you can load any dataset you like.
###Code
nlu_data_df = DataUtils.load_data(
'data/NLU-Data-Home-Domain-Annotated-All-Cleaned.csv'
)
###Output
_____no_output_____
###Markdown
Intent classification: example of a single utterance Both the intents and the domains (scenarios/skills) can be used to label an utterance. In this example we will use domains to label the utterances' intents.
###Code
domains = nlu_data_df.scenario.values
LR_domain_classifier_model, tfidf_vectorizer = NLUEngine.train_intent_classifier(
data_df_path=nlu_data_df,
labels_to_predict='domain',
classifier=LR
)
intent = nlu_data_df.intent.values
LR_intent_classifier_model, intent_tfidf_vectorizer = NLUEngine.train_intent_classifier(
data_df_path=nlu_data_df,
labels_to_predict='intent',
classifier=LR
)
###Output
_____no_output_____
###Markdown
Example: Let's try to predict an utterance's intent label using the domains.
###Code
utterance = "turn off the kitchen lights"
print(IntentMatcher.predict_label(
LR_domain_classifier_model, tfidf_vectorizer, utterance))
###Output
_____no_output_____
###Markdown
Entity extraction Entity extraction could be greatly improved by improving the features it uses; it would be great if someone took a look at this. CRF features similar to what Snips uses, such as Brown clustering, would probably work better. The NLTK tokenizer must be available in order to extract entities.
###Code
try:
nltk.data.find('tokenizers/punkt')
except LookupError:
nltk.download('punkt')
###Output
_____no_output_____
###Markdown
Example: Extracting entities from an utterance
###Code
crf_model = NLUEngine.train_entity_classifier(data_df=nlu_data_df)
utterance = 'wake me up at five pm this week'
###Output
_____no_output_____
###Markdown
We can get the entity tags of a specific utterance with the EntityExtractor.
###Code
EntityExtractor.get_entity_tags(utterance, crf_model)
###Output
_____no_output_____
###Markdown
We can also get the entity tagged utterance with the NLUEngine.
###Code
entity_tagged_utterance = NLUEngine.create_entity_tagged_utterance(
utterance, crf_model)
entity_tagged_utterance
#TODO remove everything from here (perhaps move it into another notebook?), this was just to quickly evaluate entity matching using spaCy for PoS.
import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp("Get busy living or get busy dying.")
print(f"{'text':{8}} {'POS':{6}} {'TAG':{6}} {'Dep':{6}} {'POS explained':{20}} {'tag explained'} ")
for token in doc:
print(f'{token.text:{8}} {token.pos_:{6}} {token.tag_:{6}} {token.dep_:{6}} {spacy.explain(token.pos_):{20}} {spacy.explain(token.tag_)}')
list_of_words_and_tags = []
for token in doc:
list_of_words_and_tags.append((token.text, token.tag_))
list_of_words_and_tags
EntityExtractor.pos_tag_utterance(
utterance="Get busy living or get busy dying.")
entity_reviewed_report_df = NLUEngine.evaluate_entity_classifier(
data_df=nlu_data_df)
entity_reviewed_report_df.to_csv('data/nltk_pos_entity_report.csv')
entity_reviewed_report_df
from nlu_engine import Analytics
from nlu_engine.entity_extractor import crf
import spacy
nlp = spacy.load("en_core_web_sm")
def spacy_pos_tag_utterance(utterance):
doc = nlp(utterance)
list_of_words_and_tags = []
for token in doc:
list_of_words_and_tags.append((token.text, token.tag_))
return list_of_words_and_tags
def create_feature_dataset(data_df):
"""
Creates a feature dataset from the annotated utterances.
"""
feature_dataset = []
for utterance, utterance_with_tagging in zip(data_df['answer_normalised'], data_df['answer_annotation']):
entities = EntityExtractor.extract_entities(utterance_with_tagging)
utterance_pos = spacy_pos_tag_utterance(utterance)
feature_dataset.append(
EntityExtractor.combine_pos_and_entity_tags(entities, utterance_pos))
return feature_dataset
def get_targets_and_labels(data_df):
feature_dataset = create_feature_dataset(data_df)
X = [EntityExtractor.utterance2features(utterance)
for utterance in feature_dataset]
y = [EntityExtractor.utterance2labels(utterance)
for utterance in feature_dataset]
return X, y
def evaluate_entity_classifier(data_df):
"""
Evaluates the entity classifier and generates a report
"""
print('Evaluating entity classifier')
X, y = get_targets_and_labels(data_df)
predictions = Analytics.cross_validate_classifier(crf, X, y)
report_df = Analytics.generate_entity_classification_report(
predictions, y)
return report_df
entity_spacy_report_df = evaluate_entity_classifier(nlu_data_df)
entity_spacy_report_df
entity_spacy_report_df.to_csv('data/spacy_entity_report.csv')
###Output
_____no_output_____ |
Fundamentals/Class_Notes/IO_While.ipynb | ###Markdown
File IO
###Code
##open() creates a new file or opens an existing file; the first str designates the file name, the 2nd designates the permission
test_file = open("sample.txt", "w")
## .name gives name of file attached to the variable
test_file.name
## .encoding displays the way the file is encoded
test_file.encoding
open("sample.txt", "w")
## ^io = a class, ^ name is the file name, ^ mode is the permissions, ^ encoding shows the format
## w+ = write + read; wb+ = writing + reading in binary; for all permissions go here: http://tutorialspoint.com/python/python_files_io.htm
comedians = open("comedians.txt", "w+")
comedians.write("Kevin Hart ")
comedians.write("Dave Chapelle ")
comedians.write("Romesh Ranganathan")
## must .close() the file otherwise it won't be saved
comedians.close()
names = ["Kevin", "Jimmy", "Dave", "Romesh"]
file = open("comedians.txt", "w")
file.writelines([ name + "\n" for name in names ])
file.close()
###Output
_____no_output_____
###Markdown
With
###Code
with open("comedians.txt", "r") as shit:
for name in shit.readlines():
print(name)
###Output
Kevin
Jimmy
Dave
Romesh
|
content/courses/ml_intro/10_k_means_clustering/01_unsupervised_learning.ipynb | ###Markdown
K-means clusteringIt should not surprise you at this point to learn that doing unsupervised learning in scikit-learn is pretty easy (which is not to say that doing it *well* is easy, of course). The code looks almost exactly the same as it does in the supervised setting. The main difference is that unsupervised learning estimators take only `X` data—there is, by definition, no set of labels `y`.Let's cluster the observations in the above plot using the [k-means clustering](https://en.wikipedia.org/wiki/K-means_clustering) algorithm:
###Code
from sklearn.cluster import KMeans
kmc = KMeans(3)
kmc.fit(X)
colors = kmc.predict(X)
plt.scatter(X[:, 0], X[:, 1], c=colors);
###Output
_____no_output_____
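On the "doing it *well*" part: `KMeans` needs the number of clusters up front, and one common (if rough) heuristic for choosing it is to look at how the inertia (within-cluster sum of squares) falls off as k grows. A minimal sketch, assuming `X` is the same array clustered above:

```python
# "elbow" heuristic: look for the k where inertia stops dropping sharply
inertias = {k: KMeans(k).fit(X).inertia_ for k in range(1, 9)}
for k, inertia in inertias.items():
    print(k, round(inertia, 1))
```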
###Markdown
Job well done?At this point we've developed fully operational machine learning workflows for three separate kinds of machine learning problems: regression, classification, and clustering. Relatively little work was involved in each case. In the case of the regression problem, for example, we initialized a linear regression model, fit it to some data, used it to generate predictions, and scored those predictions— all in 3 or 4 lines of code! This seems pretty great. Maybe we should just stop here, pat ourselves on the back for a job well done, and head home for the day.For reasons that will shortly become clear, though, calling it quits here would be a really bad idea.Let's dig a little deeper. We'll start by asking an important question that echoes back to the definition of machine learning as the study of systems that can improve their output by learning from experience. Specifically: how does our model's performance evolve as we give it more data?Think back to our (continuous) age prediction problem. Intuitively, we might expect that our $R^2$ will go up as we increase the size of our dataset (because having more data to learn from seems like it should be a good thing). But we should probably verify that.Instead of fitting our `LinearRegression` estimator to just one dataset, let's systematically vary our sample size over a large range, and fit a linear regression to each one (actually, to stabilize our performance estimates, we'll average over multiple permutations at each sample size).The code below is much more involved than it needs to be; as we'll see in the next section, scikit-learn includes a number of utilities that can achieve the same goal much more compactly and efficiently. But I think it can be helpful to explicitly lay out all of the steps we're going through before we replace them with a single line of black magic.
###Code
# initialize the OLS estimator
est = LinearRegression()
# we'll plot a separate panel for each feature set
feature_sets = ['domains', 'facets', 'items']
# evaluate performance at each of these sample sizes
sample_sizes = [100, 200, 500, 1000, 2000, 5000, 10000, 20000, 50000]
# number of permutations to average over at each sample size
n_reps = 10
# store results for all permutations, sample sizes, and feature sets
results = np.zeros((n_reps, len(sample_sizes), len(feature_sets)))
# loop over permutations
for i in range(n_reps):
# loop over sample sizes
for j, n in enumerate(sample_sizes):
# get the appropriate features and labels
*Xs, age = get_features(data, *feature_sets, 'AGE', n=n)
# loop over feature sets
for k, X in enumerate(Xs):
# fit the model
est.fit(X, age)
# generate predictions
pred_y = est.predict(X)
# save R^2 in our results array
results[i, j, k] = r2_score(age, pred_y)
# Compute means and stdevs for error bars
r2_mean = results.mean(0)
r2_std = results.std(0)
###Output
_____no_output_____
###Markdown
Now we can plot the resulting $R^2$ values for each feature set as a function of sample size:
###Code
# used to display axis tick labels on a linear scale
from matplotlib.ticker import ScalarFormatter
# Set up plots
fig, axes = plt.subplots(1, 3, figsize=(15, 4), sharey=True)
plt.ylim(0, 1)
# Plot results
for i, label in enumerate(feature_sets):
mean, sd = r2_mean[:, i], r2_std[:, i]
ax = axes[i]
line = ax.plot(sample_sizes, mean, 'o-')
ax.set_xscale('log')
ax.xaxis.set_major_formatter(ScalarFormatter())
ax.set_xlabel("Sample size (n)", fontsize=14)
ax.fill_between(sample_sizes, mean-sd, mean+sd, alpha=0.2)
ax.set_title(label, fontsize=18)
# Add y-axis labels on both sides
axes[0].set_ylabel("$R^2$", fontsize=16);
axes[-1].set_ylabel('$R^2$', fontsize=16)
axes[-1].yaxis.set_label_position("right")
###Output
_____no_output_____ |
tests/test_figure.ipynb | ###Markdown
GridFigure In plain matplotlib, subplots can be placed freely with the pyplot.axes method; the goal here is to make grid placement of subplots intuitive.* Specify the plot-area size of each subplot directly; positions can be specified relative to an existing subplot.* What should the behaviour be when gf.add_right(a) is called again after gf.add_right(a)? 1. [x] Overlay it on the existing subplot. * This allows nested subplots 2. [ ] Add it to the right of the existing subplot. * This would require building a map of the relationships between subplots
###Code
# Tests
padding={
"left":0,
"right":0,
"top":0,
"bottom":0
}
gf = Matpos()
"""
a a a a
a a a a
a a a a
"""
a = gf.from_left_top(gf,(4,3))
print(
a.size == (4,3),
a.origin == (0,0),
gf.left_top == (0,0),
gf.right_bottom == (4,3)
)
"""
a a a a
a a a a b
a a a a b
"""
b = gf.add_right(a, (1,None), offset=(0,1))
print(
b.size == (1,2),
b.origin == (4,1),
gf.left_top == (0,0),
gf.right_bottom == (5,3)
)
"""
a a a a
a a a a b
a a a a b
c c c c c c
"""
c = gf.add_bottom(a, (None,1),offset=(-1,0))
print(
c.size == (6,1),
c.origin == (-1,3),
gf.left_top == (-1,0),
gf.right_bottom == (5,4)
)
"""
d d a a
d d a a b
a a a a b
c c c c c c
"""
d = gf.from_left_top(a, (2,2))
print(
d.size == (2,2),
d.origin == (0,0),
gf.left_top == (-1,0),
gf.right_bottom == (5,4)
)
print(
gf.get_size() == (6,4),
)
print(
gf.relative(a, padding) == ((1/6, 0), (5/6, 3/4)),
gf.relative(b, padding) == ((5/6, 1/4), (1, 3/4)),
gf.relative(c, padding) == ((0, 3/4), (1,1)),
gf.relative(d, padding) == ((1/6, 0), (3/6, 2/4))
)
print(
gf.axes_position(a, padding),
gf.axes_position(b, padding),
gf.axes_position(c, padding),
gf.axes_position(d, padding)
)
fig, axes = gf.figure_and_axes([a,b,c,d],padding=padding, facecolor="gray")
axes[0].text(0.5,0.5,"a")
axes[1].text(0.5,0.5,"b")
axes[2].text(0.5,0.5,"c")
axes[3].text(0.5,0.5,"d")
# Tests
gf = Matpos()
a = gf.from_left_top(gf,(4,3))
b = gf.add_right(a, (1,None), offset=(0.5,1))
c = gf.add_bottom(a, (None,1),offset=(1,0.5))
fig, axes = gf.figure_and_axes([a,b,c], padding={"left":1,"right":1,"top":1,"bottom":1})
axes[0].text(0.5,0.5,"a")
axes[1].text(0.5,0.5,"b")
axes[2].text(0.5,0.5,"c")
# Tests
gf = Matpos()
a = gf.from_left_top(gf,(4,3))
b = gf.add_right(a, (1,None), offset=(0.5,1))
c = gf.add_bottom(a, (None,1),offset=(1,0.5))
d = gf.from_left_top(a, (2,1.5), offset=(0,.5))
print(gf.get_size())
fig, axes = gf.figure_and_axes([a,b,c,d],figsize=(4,4))
axes[0].text(0.5,0.5,"a")
axes[1].text(0.5,0.5,"b")
axes[2].text(0.5,0.5,"c")
axes[3].text(0.5,0.5,"d")
axes[3].set_xticks([])
axes[3].set_yticks([])
# Tests
gf = Matpos(unit="")
a = gf.add_bottom(gf,(6,3))
b = gf.add_bottom(a, (6,3), offset=(0,0.5))
c = gf.add_bottom(b, (6,3), offset=(0,0.5))
d = gf.add_bottom(c, (6,3), offset=(0,0.5))
fig,axes = gf.figure_and_axes([a,b,c,d])
axes[0].text(0.5,0.5,"a")
axes[1].text(0.5,0.5,"b")
axes[2].text(0.5,0.5,"c")
axes[3].text(0.5,0.5,"d")
###Output
_____no_output_____
###Markdown
How to specify subplot sizes* Default: lay out equally sized subplots in a grid, specifying the number of columns* Later: specify the sizes after the data and actions have been given* Contemporary: specify the size together with the data* Prior: specify the sizes first and pour the data in afterwards```pythonfigure = Figure()a = figure.add_bottom(subplot_a, size, offset)b = figure.add_bottom(subplot_b, size, offset, a)figure.align(gridder.figure_and_axes([a,b])figure.show(gridder.figure_and_axes([a,b])) grid layoutfigure = Figure.grid(column)a = figure.add_subplot(subplot_a, size, offset)b = figure.add_subplot(subplot_a, size, offset)```
###Code
gf = Matpos(unit="px",dpi=100)
"""
reduce(
acc, e -> acc,
es,
init
)
"""
sgs = gf.add_grid([(400,200),(200,200),(400,300)], 2, (50,50))
d = gf.add_right(sgs[1], (400,None), (50,0))
gf.figure_and_axes([*sgs,d])
gf = Matpos()
"""
1 2 3
4 5 6
7 8 9
"""
"""
reduce(
acc, e -> acc,
es,
init
)
"""
sgs = gf.add_grid([(2,2) for i in range(9)], 3, (1,0.5))
print(gf.get_size())
print(gf.left_top)
fig, axes = gf.figure_and_axes(sgs, padding={"left":1, "right":1, "top":1,"bottom":1})
mp = Matpos()
a = mp.add_bottom(mp,(2,2))
b = mp.add_top(a, (1,1), margin=0.5)
c = mp.add_right(a, (1,1), margin=0.5)
d = mp.add_bottom(a, (1,1), margin=0.5)
e = mp.add_left(a, (1,1), margin=0.5)
fig, axes = mp.figure_and_axes([a,b,c,d,e],padding={"left":1, "right":1, "top":1,"bottom":1})
axes[0].text(0.5,0.5,"a")
axes[1].text(0.5,0.5,"b")
axes[2].text(0.5,0.5,"c")
axes[3].text(0.5,0.5,"d")
axes[4].text(0.5,0.5,"e")
###Output
_____no_output_____ |
anonymous_openasr20_system_description.ipynb | ###Markdown
Application of NVidia NeMo Quartz 15x5 model trained from scratch for 16000 Hz sample rate for Somali Anonymous team12 November 2020 AbstractWe describe our entry for NIST OpenASR20. In EVAL Constrained condition, this system scored a WER of 1.13849 and 4th place out of 5 on the Leaderboard for Somali. Core algorithmic approachWe used the NVidia NeMo ASR package [1] and followed their instructions [2] for training a new language from scratch (Constrained condition) using the QuartzNet 15x5 model [3]. This involved creating a YAML file for Somali by modifying the example YAML file [4].The main modifications were to * Input the **grapheme set** for Somali* Decide on **maximum duration** in seconds of input sample. We chose 10 seconds and limited our training samples to transcriptions that were 10 seconds or less in the BUILD set.* Decide on the **sample rate**. Because we initially worked with the pretrained model (Unconstrained condition), which uses 16000Hz sample rate, we stayed with 16000Hz rate for the Constrained condition (probably a mistake, as it increases parameter size for no added value).* Decide on the initial **learning rate**. We chose a relatively high rate of 0.02 which is double the normal Novograd recommended starting rate of 0.01, because of use of highly augmented samples for training.* Decide on the **batch size**. We chose a batch size of 180 to fit our GPU, because we felt that larger batch size would minimize overfitting.So, the following entries were changed in the base YAML file to make the Somali YAML file:
###Code
sample_rate: &sample_rate 16000
labels: &labels [' ', "'", 'a', 'b', 'c', 'd', 'e',
'f', 'g', 'h', 'i', 'j',
'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r',
's', 't', 'u', 'v', 'w', 'x', 'y', 'z']
model:
train_ds:
sample_rate: 16000
batch_size: 180
max_duration: 10.0
optim:
lr: .002
###Output
_____no_output_____
###Markdown
A new model from scratch is created by instantiating the `nemo_asr.models.EncDecCTCModel` class in NeMo. Additional features and tools used, including software packages and publicly available external resourcesWe used:* Python 3.7.9* `sph2pipe_v2.5` for SPH to WAV conversion [5]* Python modules `IPython`, `Levenshtein`, `OpenASR_convert_reference_transcript`, `argparse`, `audioread`, `csv`, `datetime`, `glob`, `itertools`, `json`, `librosa`, `logging`, `matplotlib`, `multiprocessing`, `nemo`, `numpy`, `omegaconf`, `operator`, `os`, `pandas`, `pathlib`, `pickle`, `pprint`, `pytorch_lightning`, `random`, `re`, `ruamel`, `scipy`, `shutil`, `soundfile`, `sys`, `tarfile`, `torch`, `torchtext`, `tqdm`, `unidecode`, `warnings` Other data used (outside provided data)Only NIST BABEL Somali BUILD samples were used for training. Significant data pre-/post-processing Data augmentationTraining audio was split according to the transcript into smaller samples per the timecodes on the scripts.Each sample was then augmented with random variations using NeMo-provided perturbations [6][7].In particular we applied the following 3 perturbations in sequence 10 times to get 10 new samples:* **Time stretch** from 0.8 to 1.2. (Pitch preserving.)* **Speed change** from 0.8 to 1.2. (Not pitch preserving, since it resamples.)* **White noise** from -70 dB to -35 dB.This is implemented in the following class:
###Code
from nemo.collections.asr.parts import perturb
class Disturb:
def __init__(self, _sample_rate):
self.sample_rate = _sample_rate
self.white_noise = \
perturb.WhiteNoisePerturbation(min_level=-70, max_level=-35)
self.speed = perturb.SpeedPerturbation(self.sample_rate,
'kaiser_best', min_speed_rate=0.8,
max_speed_rate=1.2, num_rates=-1)
self.time_stretch = \
perturb.TimeStretchPerturbation(min_speed_rate=0.8,
max_speed_rate=1.2, num_rates=3)
def __call__(self, _sample):
sample=deepcopy(_sample)
self.time_stretch.perturb(sample)
self.speed.perturb(sample)
self.white_noise.perturb(sample)
return sample
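# For completeness, a minimal sketch of creating and training the QuartzNet 15x5 model from
# the Somali YAML described above, following the NeMo tutorial [2]. The YAML file name and
# trainer settings below are illustrative assumptions, not the exact values used.
import pytorch_lightning as pl
from omegaconf import OmegaConf
import nemo.collections.asr as nemo_asr
cfg = OmegaConf.load('quartznet_15x5_somali.yaml')      # hypothetical file name
trainer = pl.Trainer(gpus=1, max_epochs=200)            # illustrative settings
quartznet = nemo_asr.models.EncDecCTCModel(cfg=cfg.model, trainer=trainer)
trainer.fit(quartznet)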
###Output
_____no_output_____
###Markdown
Speaker activity detection and translation To reduce the unlabelled DEV and EVAL data to clips of at most 10 seconds in length, it is necessary to implement a Speaker Activity Detection function. We explored NeMo templates for training a neural network for this purpose, but this approach resulted in a very slow function. We chose instead to implement an ad hoc, manually tuned method which relies on the absolute value of the dB level of the mel spectrogram to find suitably long periods of silence to cut the clips at. The method is implemented as follows:
###Code
def smooth(y, w):
box = np.ones(w)/w
y_smooth = np.convolve(y, box, mode='same')
return y_smooth
def smoothhtooms(A,w):
A1=smooth(A,w)
A2=smooth(A1[::-1],w)
return A2[::-1]
def listen_and_transcribe(C, model, max_duration, gold, audio):
audio /= max(abs(audio.min()), abs(audio.max()))
size=audio.shape[0]
T=size/C.sample_rate
X=np.arange(size)/C.sample_rate
Z=np.zeros(size)
S = librosa.feature.melspectrogram(y=audio,
sr=C.sample_rate, n_mels=64, fmax=8000)
dt_S=T/S.shape[1]
samples_per_spect=int(dt_S*C.sample_rate)
S_dB = librosa.power_to_db(S, ref=np.max)
s_dB_mean=np.mean(S_dB,axis=0)
max_samples=int(max_duration/dt_S)
min_samples=1
    cutoffs = np.linspace(-80, -18, 200)
    max_read_head = s_dB_mean.shape[0]
    read_head = 0
    transcriptions = []
    read_heads = [read_head]
while read_head < max_read_head:
finished = False
while not finished and read_head < max_read_head:
for cutoff in cutoffs:
speech_q=(s_dB_mean[read_head:]>cutoff)
silences=collect_false(speech_q)
silences=[(x,y) for x,y in silences
if x != y and y-x > min_samples]
n_silences = len(silences)
if n_silences==0:
continue
elif silences[0][0] == 0 and silences[0][1] != 0:
read_head +=silences[0][1]
break
elif silences[0][0] > max_samples:
continue
else:
silences=[(x,y) for x,y in silences
if x <= max_samples]
if not len(silences):
continue
start_at = read_head
stop_at= read_head + silences[0][0]
read_head = stop_at
finished = True
break
if not finished:
display_start=read_head*samples_per_spect
display_end=display_start+max_samples
start_at = read_head
stop_at = min(max_read_head,
read_head + max_samples)
read_head = stop_at
finished = True
read_heads.append(read_head)
start=start_at*samples_per_spect
end=start_at+stop_at*samples_per_spect
display_start=max(0, start-5*C.sample_rate)
display_end=end+5*C.sample_rate
smooth_abs=smoothhtooms(np.abs(audio[start:end]), 100)
smooth_abs_max=smooth_abs.max()
if smooth_abs_max >= 0.05:
try:
segment_transcript, timeline, \
normalized_power, speech_mask, clip_audio=\
predicted_segment_transcript(C, \
model, audio, \
start, end, s_dB_mean, \
samples_per_spect, dt_S)
transcriptions.extend(segment_transcript)
except:
print("empty translation")
transcriptions = [(time, time+duration, pred)
for time, duration, pred in transcriptions]
return transcriptions
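# NOTE: collect_false, extremize and mask_boundaries are helper functions that are not shown
# in this excerpt. The sketches below are assumed reconstructions inferred from how they are
# called above, not the original implementations.
def collect_false(mask):
    # (start, end) index pairs of consecutive runs where the boolean mask is False
    runs, start = [], None
    for i, v in enumerate(mask):
        if not v and start is None:
            start = i
        elif v and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(mask)))
    return runs
def extremize(x, threshold):
    # binarise a normalised curve: 1 where above threshold, 0 elsewhere
    return (x > threshold).astype(int)
def mask_boundaries(mask):
    # (n, 2) array of [start, end] index pairs of the runs where mask == 1
    edges = np.flatnonzero(np.diff(np.concatenate(([0], mask, [0]))))
    return edges.reshape(-1, 2)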
###Output
_____no_output_____
###Markdown
The translation of a clip of at most 10 seconds is performed by the function `predicted_segment_transcript`. It relies on similar reasoning: break the clip into silent and speech components, then allocate the predicted words to the speech components in proportion to their length:
###Code
def normalize(A):
A=np.copy(A)
A=A-A.min()
A=A/A.max()
return A
def predicted_segment_transcript(C, model, audio,
start, end, s_dB_mean, samples_per_spect, dt_S):
clip_audio=audio[start:end]
prediction=transcribe(C, model, clip_audio)
print(f"PRED {start/C.sample_rate:2f} {prediction}")
spec_start=int(start/samples_per_spect)
spec_end=int(end/samples_per_spect)
clip_power=s_dB_mean[spec_start:spec_end]
normalized_power=normalize(np.copy(clip_power))
timeline=np.arange(spec_start,spec_end)*dt_S
w=min(30, normalized_power.shape[0])
smoothed_normalized_power=normalize(smooth(normalized_power,w))
speech_mask=extremize(smoothed_normalized_power, 0.2)
speech_segments=mask_boundaries(speech_mask)+spec_start
spec_to_words=allocate_pred_to_speech_segments(prediction, speech_segments)
if len(spec_to_words)==0:
return None
segment_transcript = \
[(spec1*dt_S, (spec2-spec1)*dt_S, word)
for spec1, spec2, word in spec_to_words]
return segment_transcript, timeline, \
normalized_power, speech_mask, clip_audio
###Output
_____no_output_____
###Markdown
This in turn relies on a function to call the model to transcribe the audio into graphemes:
###Code
def transcribe(C, model, audio):
fn='tmp.wav'
sf.write(fn, audio, C.sample_rate)
translations=model.transcribe(paths2audio_files=[fn], batch_size=1)
translation=translations[0]
translation=translation.split(' ')
translation=' '.join([x.strip() for x in translation if len(x)])
return translation.replace("\u200c",'') # Just Pashto but required
###Output
_____no_output_____
###Markdown
and a function to do the allocation of predicted text to speech segments:
###Code
def align_seg_words(seg_words):
([seg_start, seg_end], seg_wrds) = seg_words
seg_duration=seg_end-seg_start
n_seg_wrds=len(seg_wrds)
word_duration=seg_duration//n_seg_wrds
seg_duration, word_duration
seg_word_boundaries=np.hstack([np.linspace(seg_start, \
seg_end-word_duration, n_seg_wrds).astype(int), [seg_end]])
seg_aligned_wrds=[(seg_word_boundaries[i],
seg_word_boundaries[i+1], seg_wrds[i])
for i in range(n_seg_wrds)]
return seg_aligned_wrds
def align_segment_words(segment_words):
return [z for y in [align_seg_words(x) for x in segment_words] for z in y]
def allocate_pred_to_speech_segments(prediction, speech_segments):
pred_words=prediction.split(' ')
n_words=len(pred_words)
if n_words==0:
return []
segment_durations=np.diff(speech_segments)
speech_duration=segment_durations.sum()
segment_allocation=n_words*segment_durations/speech_duration
words_per_segment=np.round(segment_allocation).T.astype(int)[0]
# If count is under then add missing word to longest segment
words_per_segment[np.where(words_per_segment==\
words_per_segment.max())[0][0]] \
+= n_words-words_per_segment.sum()
word_segment_boundaries=np.cumsum(np.hstack([[0],\
words_per_segment]))
segment_words=list(zip(speech_segments.tolist(),
[pred_words[word_segment_boundaries[i]:\
word_segment_boundaries[i+1]]
for i in range(len(words_per_segment))]))
return align_segment_words(segment_words)
###Output
_____no_output_____ |
WEEK_2/RepMLA_Etivity_2_1.ipynb | ###Markdown
**Artificial Intelligence - MSc**CS6501 - MACHINE LEARNING AND APPLICATIONS**Business Analytics - MSc**ET5003 - MACHINE LEARNING APPLICATIONS ***Annual Repeat***Instructor: Enrique NaredoRepMLA_Etivity-2.1 Introduction [Classification](https://towardsdatascience.com/machine-learning-classifiers-a5cc4e1b0623) is the process of predicting the class of given data points.- An easy to understand example is classifying emails as “spam” or “not spam.”- In machine learning an algorithm learns how to assign a class label to examples from a problem domain.- Classification belongs to the category of supervised learning, where the targets are also provided with the input data. In this notebook we will solve a classification problem using the well-known MNIST-style digits dataset and the equally well-known Logistic Regression classifier. Dataset The [MNIST](https://en.wikipedia.org/wiki/MNIST_database) database (Modified National Institute of Standards and Technology database) is a large database of handwritten digits.- The MNIST database contains 60,000 training images and 10,000 testing images.- An extended dataset similar to MNIST called EMNIST was published in 2017, which contains 240,000 training images and 40,000 testing images of handwritten digits and characters. Note that the code below actually loads scikit-learn's `load_digits` dataset, a smaller collection of 1,797 8×8 digit images with pixel values from 0 to 16, used here as a lightweight stand-in for MNIST. Import Dataset
###Code
# Import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# import some data from sklearn
from sklearn import datasets
# load the MNIST (digits) dataset
mnist = datasets.load_digits()
# take only the dataset
X = mnist.data
# show first 3 data elements
print(X[0:3])
# show first 3 class labels
y = mnist.target
print(y[0:3])
###Output
[0 1 2]
###Markdown
Training and Testing set * The model is initially trained (fit) on a training dataset * The model is then tested on a different (separate) test dataset sklearn
###Code
from sklearn.model_selection import train_test_split
# if we want to split the raw dataset into 80% training and 20% for test
# then using the 'train_test_split' function define test set = 20%
# and the rest (80%) will be the training set
Xtrain,Xtest,ytrain,ytest = train_test_split(X,y,test_size=0.2)
# show the shape of both training and test sets
print(Xtrain.shape, ytrain.shape, Xtest.shape, ytest.shape)
# you could define the training set instead
Xtrain,Xtest,ytrain,ytest = train_test_split(X,y,train_size=0.8)
# show the shape of both training and test sets
print(Xtrain.shape, ytrain.shape, Xtest.shape, ytest.shape)
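# For a reproducible split (purely illustrative), a fixed random_state can be passed:
Xtr_demo, Xte_demo, ytr_demo, yte_demo = train_test_split(X, y, test_size=0.2, random_state=42)
print(Xtr_demo.shape, Xte_demo.shape)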
###Output
(1437, 64) (1437,) (360, 64) (360,)
###Markdown
**Train** - Xtrain: NumPy array of grayscale image data with shape (1437, 64), containing the training data. - 1,437 images - Each image is a vector of 64 pixels - Pixel values range from 0 to 16. - ytrain: NumPy array of digit labels (integers in range 0-9) with shape (1437,) for the training data.
###Code
# shape returns the number of corresponding elements
print(Xtrain.shape)
print(ytrain.shape)
###Output
(1437, 64)
(1437,)
###Markdown
**Test** - Xtest: NumPy array of grayscale image data with shape (360, 64), containing the test data. - 360 images - Each image is a vector of 64 pixels - Pixel values range from 0 to 16. - ytest: NumPy array of digit labels (integers in range 0-9) with shape (360,) for the test data.
###Code
# shape returns the number of corresponding elements
print(Xtest.shape)
print(ytest.shape)
###Output
(360, 64)
(360,)
###Markdown
Showing the data
###Code
# each data element is a different image
# arranged in a matrix with pixel values ranging from 0 to 16
# pixels close to 0 tend towards black
# pixels close to 16 tend towards white
# here the first image (vector)
Xtrain[0]
# here the first image (matrix)
Xtrain[0].reshape(8, 8)
# show the first image in the training set
image_train = 0
plt.imshow(Xtrain[image_train].reshape(8, 8), cmap=plt.cm.gray_r, interpolation='nearest')
# class labels are the number
# corresponding to the handwriten
# here the first 10 in the training set
ytrain[0:10]
# show the first image in the test set
image_test = 0
plt.imshow(Xtest[image_test].reshape(8, 8), cmap=plt.cm.gray_r, interpolation='nearest')
# here the first 10 in the test set
ytest[0:10]
# Function to plot an arrange of images
def plot_images(instances, images_per_row=5, **options):
# images-> 8x8=64
size = 8
images_per_row = min(len(instances), images_per_row)
images = [instance.reshape(size,size) for instance in instances]
n_rows = (len(instances) - 1) // images_per_row + 1
row_images = []
n_empty = n_rows * images_per_row - len(instances)
images.append(np.zeros((size, size * n_empty)))
for row in range(n_rows):
rimages = images[row * images_per_row : (row + 1) * images_per_row]
row_images.append(np.concatenate(rimages, axis=1))
image = np.concatenate(row_images, axis=0)
plt.imshow(image, cmap='gray', **options)
plt.axis("off")
# Plotting a set of images from training
plt.figure(figsize=(7,7))
plot_images(Xtrain[0:27],images_per_row=9)
plt.title("Set of images from training", fontsize=14)
print("True value =\n",ytrain[0:27].reshape(-1,9))
# Plotting a set of images from test
plt.figure(figsize=(7,7))
plot_images(Xtest[0:27],images_per_row=9)
plt.title("Set of images from test", fontsize=14)
print("True value =\n",ytest[0:27].reshape(-1,9))
###Output
True value =
[[1 2 0 3 0 7 6 8 9]
[8 7 6 7 9 8 2 1 5]
[7 4 4 4 4 7 9 5 2]]
###Markdown
Classifying images An [image](https://en.wikipedia.org/wiki/Image) (from Latin: imago) is an artifact that depicts visual perception, such as a photograph or other two-dimensional picture, that resembles a subject—usually a physical object—and thus provides a depiction of it. - In the context of signal processing, an image is a distributed amplitude of color(s).- A [greyscale](https://en.wikipedia.org/wiki/Grayscale) image is one in which the value of each pixel is a single sample representing only an amount of light; that is, it carries only intensity information. - Greyscale images, a kind of black-and-white or grey monochrome, are composed exclusively of shades of grey. - The contrast ranges from black at the weakest intensity to white at the strongest.
###Code
# Visualize the intensity values
# and the actual tone in each pixel
# from the training set
image2show = 10
df = pd.DataFrame(Xtrain[image2show].reshape(8, 8))
df = df.style.background_gradient(cmap='gray')
display(df)
print('\n\n\nClass label for this image: ' + str(ytrain[image2show]))
###Output
_____no_output_____
###Markdown
Set of images into a tableThe images are already a **row** vector with dimension of $1 \times 64$.
###Code
datav = np.hstack([Xtrain, ytrain.reshape(-1,1)])
df = pd.DataFrame(datav[0:7,:])
df = df.style.background_gradient(cmap='gray')
display(df)
###Output
_____no_output_____
###Markdown
ScalingScale the resulting matrix to the interval $[0,1]$, so we can now apply a machine leaning such as:* logistic regression* multi-layer perceptron
###Code
## scale to [0,1]
# dividing by the maximum pixel value in load_digits, which is 16
Xtrain_r = Xtrain/16
Xtest_r = Xtest/16
###Output
_____no_output_____
###Markdown
Methods Logistic Regression [Logistic Regression](https://en.wikipedia.org/wiki/Logistic_regression), in statistics the logistic model (or logit model) is used to model the probability of a certain class or event existing such as pass/fail, win/lose, alive/dead or healthy/sick. * This can be extended to model several classes of events such as determining whether an image contains a cat, dog, lion, etc. * Each object being detected in the image would be assigned a probability between 0 and 1, with a sum of one.
###Code
# import the LogisticRegression
from sklearn.linear_model import LogisticRegression
LR = LogisticRegression(multi_class='multinomial',solver='lbfgs', fit_intercept=True, max_iter=100)
LR.fit(Xtrain_r,ytrain)
y_pred = LR.predict(Xtest_r)
print("Training set score: %f" % LR.score(Xtrain_r, ytrain))
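# also report accuracy on the held-out test set
print("Test set score: %f" % LR.score(Xtest_r, ytest))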
# classification_report builds a text report showing the main classification metrics
from sklearn import metrics
print(f"Classification report for classifier {LR}:\n"
f"{metrics.classification_report(ytest, y_pred)}\n")
## Confusion matrix
# of the true digit values and the predicted digit values
disp = metrics.plot_confusion_matrix(LR, Xtest, ytest)
disp.figure_.suptitle("Confusion Matrix")
print(f"Confusion matrix:\n{disp.confusion_matrix}")
plt.show()
###Output
Confusion matrix:
[[37 0 0 0 0 0 0 0 0 0]
[ 0 34 0 0 0 0 2 0 5 5]
[ 0 0 30 0 0 0 0 0 1 0]
[ 0 0 0 28 0 0 0 1 5 0]
[ 1 1 0 0 38 0 1 0 2 0]
[ 0 0 0 0 1 32 0 0 1 3]
[ 0 1 0 0 0 0 30 0 0 0]
[ 0 0 0 0 0 0 0 28 0 0]
[ 0 2 0 0 0 0 0 0 38 0]
[ 0 0 0 1 1 0 0 2 2 27]]
###Markdown
Multi-layer perceptron A multilayer perceptron ( [MLP](https://en.wikipedia.org/wiki/Multilayer_perceptron)) is a class of feedforward artificial neural network (ANN). * The term MLP is used ambiguously, sometimes loosely to any feedforward ANN, sometimes strictly to refer to networks composed of multiple layers of perceptrons.* An MLP consists of at least three layers of nodes: an input layer, a hidden layer and an output layer. * Except for the input nodes, each node is a neuron that uses a nonlinear activation function. * MLP utilizes a supervised learning technique called backpropagation for training.* Its multiple layers and non-linear activation distinguish MLP from a linear perceptron.
###Code
# import the MLPClassifier
from sklearn.neural_network import MLPClassifier
MLPC = MLPClassifier(hidden_layer_sizes=(50,), max_iter=10, alpha=1e-4,
solver='sgd', verbose=10, tol=1e-4, random_state=1,
learning_rate_init=.1)
# train MLPClassifier
MLPC.fit(Xtrain_r, ytrain)
print("Training set score: %f" % MLPC.score(Xtrain_r, ytrain))
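# also report accuracy on the held-out test set
print("Test set score: %f" % MLPC.score(Xtest_r, ytest))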
# classification_report builds a text report showing the main classification metrics
from sklearn import metrics
y_pred_mlp = MLPC.predict(Xtest_r)
print(f"Classification report for classifier {MLPC}:\n"
      f"{metrics.classification_report(ytest, y_pred_mlp)}\n")
## Confusion matrix
# of the true digit values and the predicted digit values
disp = metrics.plot_confusion_matrix(MLPC, Xtest, ytest)
disp.figure_.suptitle("Confusion Matrix")
print(f"Confusion matrix:\n{disp.confusion_matrix}")
plt.show()
###Output
Confusion matrix:
[[13 7 0 0 0 0 0 0 17 0]
[ 0 30 0 0 0 0 2 0 14 0]
[ 0 3 9 0 0 0 3 0 15 1]
[ 0 9 0 0 0 0 0 0 19 6]
[ 0 23 0 0 1 0 2 0 17 0]
[ 0 29 0 0 1 0 0 0 7 0]
[ 0 3 0 0 0 0 23 0 5 0]
[ 0 18 1 0 0 0 2 2 2 3]
[ 0 18 0 0 0 0 0 0 22 0]
[ 0 4 0 0 0 0 0 0 28 1]]
|
Eratoptimisthenes.ipynb | ###Markdown
###Code
def g_trial_div():
    # generate primes by trial division, testing only candidates of the
    # form 6n-1 and 6n+1 (the 6k±1 wheel); 2 and 3 are yielded up front
    yield 2
    yield 3
    def g_potentials():
        # all primes greater than 3 are of the form 6n-1 or 6n+1
        n = 1
        while True:
            yield 6 * n - 1
            yield 6 * n + 1
            n += 1
    potentials = g_potentials()
    priors = []   # primes > 3 found so far
    while True:
        potential = next(potentials)
        # a candidate is prime if no previously found prime divides it
        if all(potential % prior != 0 for prior in priors):
            priors.append(potential)
            yield potential
trial_div = g_trial_div()
[next(trial_div) for i in range(1000)]
###Output
_____no_output_____ |
_build/jupyter_execute/ipynb/02b-computacao-simbolica.ipynb | ###Markdown
We continue our track in symbolic computation by extending what we know about the `bool` type, expressions and logical tests. Logical operators We saw that `True` and `False` are the two values that can be assigned to an object of type `bool`. They are useful for testing conditions, performing checks and comparing quantities. We will study *comparison operators*, *membership operators* and *identity operators*. Comparison operators The table below summarises the comparison operators used in Python.| operator | meaning | mathematical symbol | |---|---|---| | `<` | less than | $<$ || `<=` | less than or equal to | $\leq$ || `>` | greater than | $>$ || `>=` | greater than or equal to | $\geq$ || `==` | equal to | $=$ || `!=` | different from | $\neq$ |We can use them to compare objects. **Note:** `==` tests equality, whereas `=` is an assignment. They are operators with distinct purposes.
###Code
2 < 3 # o resultado é um 'bool'
5 < 2 # isto é falso
2 <= 2 # isto é verdadeiro
4 >= 3 # isto é verdadeiro
6 != -2
4 == 4 # isto não é uma atribuição!
###Output
_____no_output_____
###Markdown
We can write chained comparisons:
###Code
x = 2
1 < x < 3
3 > x > 4
2 == x > 3
###Output
_____no_output_____
###Markdown
The chained comparisons above are resolved from left to right and in parts. This leads us to introduce the following operators.| operator | mathematical symbol | meaning | related to ||---|---|---|---|| `or` | $\vee$ | boolean "or" | union, disjunction || `and` | $\wedge$ | boolean "and" | intersection, conjunction || `not` | $\neg$ | boolean "not" | exclusion, negation |
###Code
# parênteses não são necessários aqui
(2 == x) and (x > 3) # 1a. comparação: 'True'; 2a.: 'False'. Portanto, ambas: 'False'
# parênteses não são necessários aqui
(x < 1) or (x < 2) # neither comparison is True, so the result is False
not (x == 2) # nega o "valor-verdade" que é 'True'
not x + 1 > 3 # estude a precedência deste exemplo. Por que é 'True'?
not (x + 1 > 3) # estude a precedência deste exemplo. Por que também é 'True'?
###Output
_____no_output_____
###Markdown
Membership operators The table below summarises the membership operators. | operator | meaning | mathematical symbol|---|---|---|| `in` | belongs to | $\in$ || `not in` | does not belong to | $\notin$ |They will be more useful when we talk about sequences and lists. For now, let us look at examples with `str` objects.
###Code
'2' in '2 4 6 8 10' # o caracter '2' pertence à string
frase_teste = 'maior do que'
'maior' in frase_teste
'menor' in frase_teste # a palavra 'menor' está na frase
1 in 2 # 'in' e 'not in' não são aplicáveis aqui
###Output
_____no_output_____
###Markdown
Identity operators The table below summarises the identity operators. | operator | meaning |---|---|| `is` | "points to the same object" | `is not` | "does not point to the same object" |These operators are useful to check whether two variables refer to the same object. Example: ```pythona is ba is not b```- `is` is `True` if `a` and `b` refer to the same object; `False` otherwise.- `is not` is `False` if `a` and `b` refer to the same object; `True` otherwise.
###Code
a = 2
b = 3
a is b # valores distintos
a = 2
b = a
a is b # mesmos valores
a = 2
b = 3
a is not b # the values are indeed distinct, so this is True
a = 2
b = a
a is not b # the values are not distinct (b points to a), so this is False
###Output
_____no_output_____
###Markdown
Symbolic equations Symbolic equations are built with `Eq`, not with `=` or `==`.
###Code
# importação
from sympy.abc import a,b
import sympy as sy
sy.init_printing(pretty_print=True)
sy.Eq(a,b) # equação simbólica
sy.Eq(sy.cos(a), b**3) # os objetos da equação são simbólicos
###Output
_____no_output_____
###Markdown
Solving symbolic algebraic equations We can solve algebraic equations as follows:```pythonsolveset(equation, variable, domain)``` **Example:** solve $x^2 = 1$ over $\mathbb{R}$.
###Code
from sympy.abc import x
sy.solveset( sy.Eq( x**2, 1), x,domain=sy.Reals)
###Output
_____no_output_____
###Markdown
We can rewrite the equation as $x^2 - 1 = 0$.
###Code
sy.solveset( sy.Eq( x**2 - 1, 0), x,domain=sy.Reals)
###Output
_____no_output_____
###Markdown
With `solveset`, we do not need `Eq`, so the equation can be passed directly.
###Code
sy.solveset( x**2 - 1, x,domain=sy.Reals)
###Output
_____no_output_____
###Markdown
**Example:** solve $x^2 + 1 = 0$ over $\mathbb{R}$.
###Code
sy.solveset( x**2 + 1, x,domain=sy.Reals) # não possui solução real
###Output
_____no_output_____
###Markdown
**Example:** solve $x^2 + 1 = 0$ over $\mathbb{C}$.
###Code
sy.solveset( x**2 + 1, x,domain=sy.Complexes) # possui soluções complexas
###Output
_____no_output_____
###Markdown
**Example:** solve $\sin(2x) = 3 + x$ over $\mathbb{R}$.
###Code
sy.solveset( sy.sin(2*x) - x - 3,x,sy.Reals) # a palavra 'domain' também pode ser omitida.
###Output
_____no_output_____
###Markdown
The set above indicates that no solution was found. **Example:** solve $\sin(2x) = 1$ over $\mathbb{R}$.
###Code
sy.solveset( sy.sin(2*x) - 1,x,sy.Reals)
###Output
_____no_output_____
###Markdown
Expansion, simplification and factorisation of polynomials Let us look at examples of polynomials in one variable.
###Code
a0, a1, a2, a3 = sy.symbols('a0 a1 a2 a3') # coeficientes
P3x = a0 + a1*x + a2*x**2 + a3*x**3 # polinômio de 3o. grau em x
P3x
b0, b1, b2, b3 = sy.symbols('b0 b1 b2 b3') # coeficientes
Q3x = b0 + b1*x + b2*x**2 + b3*x**3 # polinômio de 3o. grau em x
Q3x
R3x = P3x*Q3x # produto polinomial
R3x
R3x_e = sy.expand(R3x) # expande o produto
R3x_e
sy.simplify(R3x_e) # simplify às vezes não funciona como esperado
sy.factor(R3x_e) # 'factor' pode funcionar melhor
# simplify funciona para casos mais gerais
ident_trig = sy.sin(x)**2 + sy.cos(x)**2
ident_trig
sy.simplify(ident_trig)
###Output
_____no_output_____
###Markdown
Trigonometric identities We can use `expand_trig` to expand trigonometric functions.
###Code
sy.expand_trig( sy.sin(a + b) ) # sin(a+b)
sy.expand_trig( sy.cos(a + b) ) # cos(a+b)
sy.expand_trig( sy.sec(a - b) ) # sec(a-b)
###Output
_____no_output_____
###Markdown
Logarithm properties With `expand_log`, we can apply valid logarithm properties.
###Code
sy.expand_log( sy.log(a*b) )
###Output
_____no_output_____
###Markdown
The identity was not applied because `a` and `b` are unrestricted symbols.
###Code
a,b = sy.symbols('a b',positive=True) # impomos que a,b > 0
sy.expand_log( sy.log(a*b) ) # identidade validada
sy.expand_log( sy.log(a/b) )
m = sy.symbols('m', real = True) # impomos que m seja um no. real
sy.expand_log( sy.log(a**m) )
###Output
_____no_output_____
###Markdown
With `logcombine`, we combine the properties back together.
###Code
sy.logcombine( sy.log(a) + sy.log(b) ) # identidade recombinada
###Output
_____no_output_____
###Markdown
Factorial The function `factorial(n)` can be used to compute the factorial of a number.
###Code
sy.factorial(m)
sy.factorial(m).subs(m,10) # 10!
sy.factorial(10) # diretamente
###Output
_____no_output_____
###Markdown
**Example:** Let $m,n,x$ be positive integers. If $f(m) = 2m!$, $g(n) = \frac{(n + 1)!}{n^2!}$ and $h(x) = f(x)g(x)$, what is the value of $h(2)$?
###Code
from sympy.abc import m,n,x
f = 2*sy.factorial(m)
g = sy.factorial(n + 1)/sy.factorial(n**2)
h = (f.subs(m,x)*g.subs(n,x)).subs(x,4)
h
###Output
_____no_output_____
###Markdown
Anonymous functions The third class of functions we will learn is that of *anonymous functions*. An **anonymous function** in Python is a function whose name is not explicitly defined and that can be created in a single line of code to perform a specific task. Anonymous functions are based on the `lambda` keyword, a name inspired by an area of computer science called $\lambda$-calculus. An anonymous function has the following form: ```pythonlambda parameter_list: expression```Anonymous functions are quite useful for making code more concise. For example, in the previous lesson we defined the function```pythondef repasse(V): return 0.0103*V```to compute the commission passed on to the real-estate broker. With an anonymous function, the same function would be written as:
###Code
repasse = lambda V: 0.0103*V
###Output
_____no_output_____
###Markdown
We do not necessarily have to assign it to a variable. In that case, we would have:
###Code
lambda V: 0.0103*V
###Output
_____no_output_____
###Markdown
To use the function, we pass it a value:
###Code
repasse(100000) # repasse sobre R$ 100.000,00
###Output
_____no_output_____
###Markdown
The full model with a "bonus" would be written as:
###Code
r3 = lambda c,V,b: c*V + b # aqui há 3 parâmetros necessários
###Output
_____no_output_____
###Markdown
Let us redefine the symbolic objects:
###Code
from sympy.abc import b,c,V
r3(b,c,V)
###Output
_____no_output_____
###Markdown
The previous result is still a symbolic object, just obtained in a more direct way. We can use anonymous functions for lower-complexity tasks. Symbolic "lambdification" Using `lambdify`, we can convert a *sympy* symbolic expression into an expression that can be evaluated numerically in another library. This function plays a role similar to that of a *lambda* (anonymous) function.
###Code
expressao = sy.sin(x) + sy.sqrt(x) # expressão simbólica
f = sy.lambdify(x,expressao,"math") # lambdificação para o módulo math
f(0.2) # avalia
###Output
_____no_output_____
###Markdown
For simple evaluations like the previous one, we can use `evalf` and `subs`. Lambdification becomes useful when we want to evaluate a function at many points, for example. In the next lesson we will introduce sequences and lists. For a better illustration of lambdification, see the following example.
###Code
from numpy import arange # importação de função do módulo numpy
X = arange(40) # gera 40 valores de 0 a 39
X
f = sy.lambdify(x,expressao,"numpy")(X) # avalia 'expressao' em X
f
###Output
_____no_output_____ |
S2/RITAL/TAL/TME/TME4/TP1-Seq-etu.ipynb | ###Markdown
Lab session on sentence analysis with HMMs The goal of this lab is to reuse the models developed in MAPSI and apply them to a sequence-analysis problem. We will work on Part-Of-Speech (POS) tagging and optionally on chunking (grouping the noun and verb phrases of a sentence). The data come from CoNLL 2000 [https://www.clips.uantwerpen.be/conll2000/chunking/]. The data are available in a small version (to understand how the tools work) and then in a larger version for reliable experiments. The aim of the lab is to get familiar with the data on a simple task (POS/chunking) and then to report performance on the NER task. That last part is described in the second-to-last cell of this lab; it is, however, the largest part of the work.
###Code
import numpy as np
import matplotlib.pyplot as plt
from collections import Counter
from sklearn.metrics import confusion_matrix
# Chargement des données POS/Chunking
# Cette fonction doit être ré-écrite en v2 pour charger les données NER de connl 2003
def load(filename):
listeDoc = list()
with open(filename, "r") as f:
doc = list()
for ligne in f:
#print "l : ",len(ligne)," ",ligne
if len(ligne) < 2: # fin de doc
listeDoc.append(doc)
doc = list()
continue
mots = ligne.replace("\n","").split(" ")
doc.append((mots[0],mots[1])) # mettre mots[2] à la place de mots[1] pour le chuncking
return listeDoc
# =============== chargement ============
# sous ensemble du corpus => Idéal pour les premiers test
filename = "ressources/conll2000/chtrain.txt"
filenameT = "ressources/conll2000/chtest.txt"
# corpus plus gros => Pour valider les perf.
# filename = "ressources/conll2000/train.txt"
# filenameT = "ressources/conll2000/test.txt"
alldocs = load(filename)
alldocsT = load(filenameT)
print(len(alldocs)," docs read")
print(len(alldocsT)," docs (T) read")
print(alldocs[0])
print(alldocsT[0])
###Output
[('Rockwell', 'NNP'), ('International', 'NNP'), ('Corp.', 'NNP'), ("'s", 'POS'), ('Tulsa', 'NNP'), ('unit', 'NN'), ('said', 'VBD'), ('it', 'PRP'), ('signed', 'VBD'), ('a', 'DT'), ('tentative', 'JJ'), ('agreement', 'NN'), ('extending', 'VBG'), ('its', 'PRP$'), ('contract', 'NN'), ('with', 'IN'), ('Boeing', 'NNP'), ('Co.', 'NNP'), ('to', 'TO'), ('provide', 'VB'), ('structural', 'JJ'), ('parts', 'NNS'), ('for', 'IN'), ('Boeing', 'NNP'), ("'s", 'POS'), ('747', 'CD'), ('jetliners', 'NNS'), ('.', '.')]
[('Confidence', 'NN'), ('in', 'IN'), ('the', 'DT'), ('pound', 'NN'), ('is', 'VBZ'), ('widely', 'RB'), ('expected', 'VBN'), ('to', 'TO'), ('take', 'VB'), ('another', 'DT'), ('sharp', 'JJ'), ('dive', 'NN'), ('if', 'IN'), ('trade', 'NN'), ('figures', 'NNS'), ('for', 'IN'), ('September', 'NNP'), (',', ','), ('due', 'JJ'), ('for', 'IN'), ('release', 'NN'), ('tomorrow', 'NN'), (',', ','), ('fail', 'VB'), ('to', 'TO'), ('show', 'VB'), ('a', 'DT'), ('substantial', 'JJ'), ('improvement', 'NN'), ('from', 'IN'), ('July', 'NNP'), ('and', 'CC'), ('August', 'NNP'), ("'s", 'POS'), ('near-record', 'JJ'), ('deficits', 'NNS'), ('.', '.')]
###Markdown
Building a dictionary-based POS baseline ```mot => étiquette``` without taking the sequence into account. Any heavier model will have to be compared to this baseline. We only care about the POS tag, knowing that the corpus was decomposed into ```(mot, POS, Chunk)```.1. Build the equivalence dictionary from the *train* set1. Measure the accuracy on the *test* set**Note** some test words are obviously unknown... On the technical side, replace:```dico[cle], which crashes on an unknown key, with dico.get(cle, valeurParDefaut)```On the linguistic side, you can assign the majority class to all unknown words, which gives a stronger baseline.
###Code
# Construction du dictionnaire
dico = dict()
for ligne in alldocs :
for (mot,gram) in ligne :
dico[mot] = gram
def gram_plus_frequent(alldocs) :
liste = []
dico = dict()
for ligne in alldocs :
liste += ligne
for a,b in liste :
try :
dico[b] += 1
except :
dico[b] = 1
max_value = max(dico, key=dico.get)
return max_value
# evaluation des performances en test (et en apprentissage)
gram_plus_freq = gram_plus_frequent(alldocs)
print(gram_plus_freq)
cpt = 0
for ligne in alldocsT :
for (mot,gram) in ligne :
if dico.get(mot, gram_plus_freq) == gram :
cpt += 1
print(cpt)
###Output
NN
1527
###Markdown
Check: 1433 correct answers on the test set out of 1896 (1527 with 'NN' as default) Sequence analysis You are given the solution from the MAPSI labs: HMM training and a Viterbi function. You will have to apply it to the data. You are only asked to understand the meaning of the ```eps``` parameter in the HMM learning: it is a small smoothing count added to the transition/emission statistics so that events never seen in training do not get zero probability. It is an important parameter: play with it and find a good value for this application. The data must be reformatted so that each word gets an index, otherwise HMMs cannot be used... All the formatting code is provided below.``` The cat is in the garden => 1 2 3 4 1 5```For an easier construction of the dictionary, we use the ```setdefault``` method. To produce qualitative analyses you still need to understand how the dictionaries work, in order to recover the words corresponding to the indices.
###Code
# allx: liste de séquences d'observations
# allq: liste de séquences d'états
# N: nb états
# K: nb observation
def learnHMM(allx, allq, N, K, initTo1=True,epsilon =1e-3):
if initTo1:
eps = epsilon # vous pouvez jouer avec ce paramètre de régularisation
A = np.ones((N,N))*eps
B = np.ones((N,K))*eps
Pi = np.ones(N)*eps
else:
A = np.zeros((N,N))
B = np.zeros((N,K))
Pi = np.zeros(N)
for x,q in zip(allx,allq):
Pi[int(q[0])] += 1
for i in range(len(q)-1):
A[int(q[i]),int(q[i+1])] += 1
B[int(q[i]),int(x[i])] += 1
B[int(q[-1]),int(x[-1])] += 1 # derniere transition
A = A/np.maximum(A.sum(1).reshape(N,1),1) # normalisation
B = B/np.maximum(B.sum(1).reshape(N,1),1) # normalisation
Pi = Pi/Pi.sum()
return Pi , A, B
def viterbi(x,Pi,A,B):
T = len(x)
N = len(Pi)
logA = np.log(A)
logB = np.log(B)
logdelta = np.zeros((N,T))
psi = np.zeros((N,T), dtype=int)
S = np.zeros(T)
logdelta[:,0] = np.log(Pi) + logB[:,int(x[0])]
#forward
for t in range(1,T):
logdelta[:,t] = (logdelta[:,t-1].reshape(N,1) + logA).max(0) + logB[:,int(x[t])]
psi[:,t] = (logdelta[:,t-1].reshape(N,1) + logA).argmax(0)
# backward
logp = logdelta[:,-1].max()
S[T-1] = logdelta[:,-1].argmax()
for i in range(2,T+1):
S[int(T-i)] = psi[int(S[int(T-i+1)]),int(T-i+1)]
return S, logp #, delta, psi
# alldocs etant issu du chargement des données
# la mise en forme des données est fournie ici
# afin de produire des analyses qualitative, vous devez malgré tout comprendre le fonctionnement des dictionnaires
buf = [[m for m,pos in d ] for d in alldocs]
mots = []
[mots.extend(b) for b in buf]
mots = np.unique(np.array(mots))
nMots = len(mots)+1 # mot inconnu
mots2ind = dict(zip(mots,range(len(mots))))
mots2ind["UUUUUUUU"] = len(mots)
buf2 = [[pos for m,pos in d ] for d in alldocs]
cles = []
[cles.extend(b) for b in buf2]
cles = np.unique(np.array(cles))
cles2ind = dict(zip(cles,range(len(cles))))
nCles = len(cles)
print(nMots,nCles," in the dictionary")
# mise en forme des données
allx = [[mots2ind[m] for m,pos in d] for d in alldocs]
allxT = [[mots2ind.setdefault(m,len(mots)) for m,pos in d] for d in alldocsT]
allq = [[cles2ind[pos] for m,pos in d] for d in alldocs]
allqT = [[cles2ind.setdefault(pos,len(cles)) for m,pos in d] for d in alldocsT]
# affichage du premier doc:
print(allx[0])
print(allq[0])
print(allxT[0])
print(allqT[0])
def predict(Pi,A,B,allxT):
return np.array([ viterbi(i,Pi,A,B)[0] for i in allxT ])
def optimise_eps(allx, allq,allxT,allqT, nCles, nMots, initTo2=True,list_epsilon=[1e-3]):
result = []
for e in list_epsilon :
# application des HMM sur ces données
Pi , A, B = learnHMM(allx, allq, nCles, nMots, initTo1=initTo2,epsilon=e)
# décodage des séquences de test & calcul de performances
cpt = 0
for i in range(len(allxT)) :
cpt += (viterbi(allxT[i],Pi,A,B)[0] == allqT[i]).sum()
result.append(cpt)
ind = np.argsort(result)[-1]
return result[ind],list_epsilon[ind]
list_epsilon = np.linspace(0.0000000000001,1,120)
optimise_eps(allx, allq,allxT,allqT, nCles, nMots, initTo2=True,list_epsilon=list_epsilon)
###Output
_____no_output_____
###Markdown
Check: 1564 correct on the test set Qualitative analysis:- Using an imshow on the parameters (or an argsort), show which tag sequences are the most likely.- Also visualise the confusion matrices to understand what is hard in this task- Extract the examples that are actually corrected by Viterbi- Does preprocessing the text (removing capital letters, punctuation, etc.) change the performance (**WARNING**, it changes the number of words)Remember to save almost systematically the figures you produce, preferably as vector PDF. The ```out``` directory is there to store all outputs. You should therefore end up with something of the form:```plt.figure() new figure...plt.savefig("out/ma_figure.pdf")``` Imshow on the parameters
###Code
# learn the HMM on the training data first (Pi, A, B are needed here and below);
# the default eps=1e-3 is used, replace it with the best value found above
Pi, A, B = learnHMM(allx, allq, nCles, nMots, initTo1=True, epsilon=1e-3)
plt.imshow(np.log(A))
plt.colorbar()
###Output
_____no_output_____
###Markdown
Matrice confusion
###Code
y_true = []
y_pred = []
for q in allqT :
for i in q :
y_true.append(i)
for p in predict(Pi,A,B,allxT):
for i in p :
y_pred.append(i)
arr = confusion_matrix(y_true, y_pred)
plt.imshow(np.log(arr))
# White -> no error
# Two POS types that we classify well, since two rows stand out; ask an expert for more information
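# Examples actually corrected by Viterbi (sketch): tokens where the dictionary baseline is
# wrong but the HMM prediction is right. Assumes Pi, A, B have been learned above.
ind2cles = {v: k for k, v in cles2ind.items()}
corrected = []
for d, x in zip(alldocsT, allxT):
    s, _ = viterbi(x, Pi, A, B)
    for (mot, gram), qhat in zip(d, s):
        base = dico.get(mot, gram_plus_freq)   # baseline prediction
        hmm_tag = ind2cles[int(qhat)]          # HMM prediction
        if base != gram and hmm_tag == gram:
            corrected.append((mot, base, gram))
print(len(corrected), "tokens corrected by Viterbi; examples:", corrected[:10])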
cles2ind
###Output
_____no_output_____
###Markdown
Testing more advanced tools1. Run tests with CRFTagger[https://tedboy.github.io/nlps/generated/generated/nltk.CRFTagger.html]1. nltk's PerceptronTagger1. Outside of Python, you can easily use the venerable TreeTagger, which still works well (but not necessarily with the same tag set):[http://www.cis.uni-muenchen.de/~schmid/tools/TreeTagger/]There is even a Python wrapper at the bottom of the page to integrate it into your code. This tagger has the advantage of providing models for French.
###Code
# sometimes needed:
!pip install python-crfsuite
from nltk.tag.crf import CRFTagger
tagger = CRFTagger()
tagger.train(alldocs, 'out/crf.model') # apprentissage
# performance measurement (look it up in the documentation)
# the same qualitative analysis as before is possible (and desirable!)...
# ... and it is very simple if your code is organised into functions
# TODO
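# Possible evaluation (sketch): tag the test sentences and count correct POS tags
pred_sents = tagger.tag_sents([[m for m, pos in d] for d in alldocsT])
good_crf = sum(1 for d, p in zip(alldocsT, pred_sents)
               for (m, pos), (m2, tag) in zip(d, p) if tag == pos)
print(good_crf, "correct tags on the test set")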
###Output
_____no_output_____
###Markdown
Check: 1720 correct answers
###Code
# perceptron
from nltk.tag.perceptron import PerceptronTagger
tagger = PerceptronTagger(load=False)
tagger.train(alldocs)
# Evaluation
# TODO
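# Possible evaluation (sketch), same counting as for the CRF tagger
pred_sents = tagger.tag_sents([[m for m, pos in d] for d in alldocsT])
good_percep = sum(1 for d, p in zip(alldocsT, pred_sents)
                  for (m, pos), (m2, tag) in zip(d, p) if tag == pos)
print(good_percep, "correct tags on the test set")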
###Output
_____no_output_____ |
pastis/temporal_analysis/harris_mode-small.ipynb | ###Markdown
Define and create directory
###Code
root_dir = "/Users/asahoo/Desktop/data_repos/harris_data"
repo_dir = "/Users/asahoo/repos/PASTIS"
coronagraph_design = 'small' # user provides
overall_dir = util.create_data_path(root_dir, telescope='luvoir_'+coronagraph_design)
resDir = os.path.join(overall_dir, 'matrix_numerical')
print(resDir)
# Create necessary directories if they don't exist yet
os.makedirs(resDir, exist_ok=True)
os.makedirs(os.path.join(resDir, 'OTE_images'), exist_ok=True)
os.makedirs(os.path.join(resDir, 'psfs'), exist_ok=True)
###Output
_____no_output_____
###Markdown
Read from configfile
###Code
nb_seg = CONFIG_PASTIS.getint('LUVOIR', 'nb_subapertures')
wvln = CONFIG_PASTIS.getfloat('LUVOIR', 'lambda') * 1e-9 # m #this doesn't matter, luvoir.wvln
diam = CONFIG_PASTIS.getfloat('LUVOIR', 'diameter') # m
nm_aber = CONFIG_PASTIS.getfloat('LUVOIR', 'calibration_aberration') * 1e-9 # m
sampling = CONFIG_PASTIS.getfloat('LUVOIR', 'sampling')
coronagraph_design = CONFIG_PASTIS.get('LUVOIR','coronagraph_design')
optics_path_in_repo = CONFIG_PASTIS.get('LUVOIR', 'optics_path_in_repo')
aper_path = CONFIG_PASTIS.get('LUVOIR','aperture_path_in_optics')
aper_ind_path = CONFIG_PASTIS.get('LUVOIR', 'indexed_aperture_path_in_optics')
aper_read = hcipy.read_fits(os.path.join(repo_dir,optics_path_in_repo,aper_path))
aper_ind_read = hcipy.read_fits(os.path.join(repo_dir,optics_path_in_repo,aper_ind_path))
z_pup_downsample = CONFIG_PASTIS.getfloat('numerical', 'z_pup_downsample')
###Output
_____no_output_____
###Markdown
Load aperture files to make segmented mirror
###Code
pupil_grid = hcipy.make_pupil_grid(dims=aper_ind_read.shape[0], diameter=15)
aper = hcipy.Field(aper_read.ravel(), pupil_grid)
aper_ind = hcipy.Field(aper_ind_read.ravel(), pupil_grid)
wf_aper = hcipy.Wavefront(aper, wvln)
# Load segment positions from fits header
hdr = fits.getheader(os.path.join(repo_dir,optics_path_in_repo,aper_ind_path))
poslist = []
for i in range(nb_seg):
segname = 'SEG' + str(i+1)
xin = hdr[segname + '_X']
yin = hdr[segname + '_Y']
poslist.append((xin, yin))
poslist = np.transpose(np.array(poslist))
seg_pos = hcipy.CartesianGrid(hcipy.UnstructuredCoords(poslist))
plt.figure(figsize=(20,10))
plt.subplot(2,3,1)
plt.title("pupil_grid")
plt.plot(pupil_grid.x, pupil_grid.y, '+')
plt.xlabel('x')
plt.ylabel('y')
plt.subplot(2,3,2)
plt.title("aper")
hcipy.imshow_field(aper)
plt.tick_params(top=False, bottom=False, left=False, right=False,
labelleft=False, labelbottom=False)
plt.colorbar()
plt.subplot(2,3,3)
plt.title("aper_ind")
hcipy.imshow_field(aper_ind)
plt.colorbar()
plt.subplot(2,3,4)
plt.title("wf_aper.phase")
hcipy.imshow_field(wf_aper.phase)
plt.colorbar()
plt.subplot(2,3,5)
plt.title("wf_aper.amplitude")
hcipy.imshow_field(wf_aper.amplitude)
plt.colorbar()
plt.subplot(2,3,6)
plt.title("seg_pos")
plt.plot(seg_pos.x, seg_pos.y, '+')
plt.xlabel('x')
plt.ylabel('y')
plt.colorbar()
###Output
_____no_output_____
###Markdown
Instantiate LUVOIR
###Code
optics_input = os.path.join(util.find_repo_location(), CONFIG_PASTIS.get('LUVOIR', 'optics_path_in_repo'))
luvoir = LuvoirA_APLC(optics_input, coronagraph_design, sampling)
hcipy.imshow_field(luvoir.apodizer)
hcipy.imshow_field(luvoir.fpm)
N_pup_z = int(luvoir.pupil_grid.shape[0] / z_pup_downsample)  # np.int is deprecated; N_pup_z = 100
grid_zernike = hcipy.field.make_pupil_grid(N_pup_z, diameter=luvoir.diam)
plt.figure(figsize=(10,10))
plt.title("grid_zernike") #hcipy cartesian grid
plt.plot(grid_zernike.x, grid_zernike.y, '+')
plt.xlabel('x')
plt.ylabel('y')
###Output
_____no_output_____
###Markdown
load thermal modes files
###Code
filepath = "/Users/asahoo/repos/PASTIS/Jupyter Notebooks/LUVOIR/Sensitivities2.xlsx"
pad_orientation = np.pi/2*np.ones(nb_seg)
#pad_orientation = np.zeros(nb_seg)
###Output
_____no_output_____
###Markdown
create harris deformabale mirror
###Code
luvoir.create_segmented_harris_mirror(filepath,pad_orientation, thermal = True,mechanical=False,other=False)
luvoir.harris_sm #how to plot this?
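# One possible way to visualise the Harris deformable mirror (sketch): poke the actuators with
# small random values and display the resulting surface. '.surface', 'luvoir.aperture' and the
# 'mask=' keyword are all used later in this notebook; the random amplitudes here are illustrative.
luvoir.harris_sm.actuators = 1e-9 * np.random.randn(luvoir.harris_sm.num_actuators)
hcipy.imshow_field(luvoir.harris_sm.surface, mask=luvoir.aperture, cmap='RdBu')
plt.colorbar()
luvoir.harris_sm.actuators = np.zeros(luvoir.harris_sm.num_actuators)   # flatten again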
###Output
_____no_output_____
###Markdown
creating single segment
###Code
segment = hcipy.hexagonal_aperture(luvoir.segment_circumscribed_diameter, np.pi/2) #function
segment_sampled = hcipy.evaluate_supersampled(segment,luvoir.pupil_grid, 10) #hcipy field
plt.figure(figsize=(5, 5))
hcipy.imshow_field(segment_sampled)
plt.colorbar()
###Output
_____no_output_____
###Markdown
creating nb_seg segments
###Code
aper2, segs2 = hcipy.make_segmented_aperture(segment,luvoir.seg_pos, segment_transmissions=1, return_segments=True)
luvoir_segmented_pattern = hcipy.evaluate_supersampled(aper2, luvoir.pupil_grid, 10) #plot with hcipy.imshow_field
seg_evaluated = []
for seg_tmp in segs2:
tmp_evaluated = hcipy.evaluate_supersampled(seg_tmp, luvoir.pupil_grid, 1)
seg_evaluated.append(tmp_evaluated)
plt.figure(figsize=(15, 5))
plt.subplot(1,2,1)
hcipy.imshow_field(luvoir_segmented_pattern) #no where used in rest of the code?
plt.colorbar()
plt.subplot(1,2,2)
hcipy.imshow_field(seg_evaluated[75])
plt.colorbar()
###Output
_____no_output_____
###Markdown
Plotting Harris_mode
###Code
df = pd.read_excel(filepath)
valuesA = np.asarray(df.a)
valuesB = np.asarray(df.b)
valuesC = np.asarray(df.c)
valuesD = np.asarray(df.d)
valuesE = np.asarray(df.e)
valuesF = np.asarray(df.f)
valuesG = np.asarray(df.g)
valuesH = np.asarray(df.h)
valuesI = np.asarray(df.i)
valuesJ = np.asarray(df.j)
valuesK = np.asarray(df.k)
seg_x = np.asarray(df.X)
seg_y = np.asarray(df.Y)
harris_seg_diameter = np.max([np.max(seg_x) - np.min(seg_x), np.max(seg_y) - np.min(seg_y)])
pup_dims = luvoir.pupil_grid.dims
x_grid = np.asarray(df.X) * luvoir.segment_circumscribed_diameter /harris_seg_diameter
y_grid = np.asarray(df.Y) * luvoir.segment_circumscribed_diameter /harris_seg_diameter
points = np.transpose(np.asarray([x_grid, y_grid]))
seg_evaluated = luvoir._create_evaluated_segment_grid()
def _transform_harris_mode(values, xrot, yrot, points, seg_evaluated, seg_num):
""" Take imported Harris mode data and transform into a segment mode on our aperture. """
zval = griddata(points, values, (xrot, yrot), method='linear')
zval[np.isnan(zval)] = 0
zval = zval.ravel() * seg_evaluated[seg_num]
return zval
harris_base_thermal = []
for seg_num in range(0, luvoir.nseg):
grid_seg = luvoir.pupil_grid.shifted(-luvoir.seg_pos[seg_num])
x_line_grid = np.asarray(grid_seg.x)
y_line_grid = np.asarray(grid_seg.y)
# Rotate the modes grids according to the orientation of the mounting pads
phi = pad_orientation[seg_num]
x_rotation = x_line_grid * np.cos(phi) + y_line_grid * np.sin(phi)
y_rotation = -x_line_grid * np.sin(phi) + y_line_grid * np.cos(phi)
# Transform all needed Harris modes from data to modes on our segmented aperture
ZA = _transform_harris_mode(valuesA, x_rotation, y_rotation, points, seg_evaluated, seg_num)
ZB = _transform_harris_mode(valuesB, x_rotation, y_rotation, points, seg_evaluated, seg_num)
ZC = _transform_harris_mode(valuesC, x_rotation, y_rotation, points, seg_evaluated, seg_num)
ZD = _transform_harris_mode(valuesD, x_rotation, y_rotation, points, seg_evaluated, seg_num)
ZE = _transform_harris_mode(valuesE, x_rotation, y_rotation, points, seg_evaluated, seg_num)
ZF = _transform_harris_mode(valuesF, x_rotation, y_rotation, points, seg_evaluated, seg_num)
ZG = _transform_harris_mode(valuesG, x_rotation, y_rotation, points, seg_evaluated, seg_num)
ZH = _transform_harris_mode(valuesH, x_rotation, y_rotation, points, seg_evaluated, seg_num)
ZI = _transform_harris_mode(valuesI, x_rotation, y_rotation, points, seg_evaluated, seg_num)
ZJ = _transform_harris_mode(valuesJ, x_rotation, y_rotation, points, seg_evaluated, seg_num)
ZK = _transform_harris_mode(valuesK, x_rotation, y_rotation, points, seg_evaluated, seg_num)
harris_base_thermal.append([ZA, ZB, ZC, ZD, ZE, ZF, ZG, ZH, ZI, ZJ, ZK])
plt.figure(figsize=(20,10))
plt.subplot(2,3,1)
plt.title("Segment Level 1mk Faceplates Silvered")
plt.imshow(np.reshape(ZA,(1000,1000))[150:275,800:925])
plt.colorbar()
plt.subplot(2,3,2)
plt.title("Segment Level 1mk bulk")
plt.imshow(np.reshape(ZH,(1000,1000))[150:275,800:925])
plt.colorbar()
plt.subplot(2,3,3)
plt.title("Segment Level 1mk gradiant radial")
plt.imshow(np.reshape(ZI,(1000,1000))[150:275,800:925])
plt.colorbar()
plt.subplot(2,3,4)
plt.title("Segment Level 1mk gradient X lateral")
plt.imshow(np.reshape(ZJ,(1000,1000))[150:275,800:925])
plt.colorbar()
plt.subplot(2,3,5)
plt.title("Segment Level 1mk gradient Z axial")
plt.imshow(np.reshape(ZK,(1000,1000)))
plt.colorbar()
###Output
_____no_output_____
###Markdown
Flatten all DMs and create unaberrated reference PSF
###Code
n_harris = luvoir.harris_sm.num_actuators #int = 5*120 =600
harris_mode =np.zeros(n_harris)
luvoir.harris_sm.actuators = harris_mode #setting all actuators to be zero
###Output
_____no_output_____
###Markdown
Calculate the unaberrated coro and direct PSFs in INTENSITY
###Code
unaberrated_coro_psf, ref = luvoir.calc_psf(ref=True, display_intermediate=False, norm_one_photon=True)
plt.figure(figsize=(13,5))
plt.subplot(1,2,1)
plt.title("unaberrated_coro_psf")
hcipy.imshow_field(np.log(np.abs(unaberrated_coro_psf)))
plt.colorbar()
plt.subplot(1,2,2)
plt.title("ref")
hcipy.imshow_field(np.log(np.abs(ref)))
plt.colorbar()
norm = np.max(ref)
print(norm)
dh_intensity = (unaberrated_coro_psf / norm) * luvoir.dh_mask
contrast_floor = np.mean(dh_intensity[np.where(luvoir.dh_mask != 0)])
print(f'contrast floor: {contrast_floor}')
hcipy.imshow_field(dh_intensity)
plt.title("dh_intensity")
###Output
_____no_output_____
###Markdown
Calculate the unaberrated coro and direct PSFs in E-FIELDS
###Code
# Calculate the unaberrated coro and direct PSFs in E-FIELDS
nonaberrated_coro_psf, ref, efield = luvoir.calc_psf(ref=True, display_intermediate=False, return_intermediate='efield',norm_one_photon=True)
Efield_ref = nonaberrated_coro_psf.electric_field
plt.figure(figsize=(15, 5))
plt.subplot(1,2,1)
hcipy.imshow_field(np.log(np.abs(nonaberrated_coro_psf.amplitude)))
plt.title("nonaberrated_coro_psf.amplitude")
plt.colorbar()
plt.subplot(1,2,2)
hcipy.imshow_field(np.log(np.abs(ref.amplitude)))
plt.title("ref.amplitude")
plt.colorbar()
print('Generating the E-fields for harris modes in science plane')
print(f'Calibration aberration used: {nm_aber} m')
start_time = time.time()
focus_fieldS = []
focus_fieldS_Re = []
focus_fieldS_Im = []
#harris_mode = np.zeros(n_harris)
for pp in range(0, n_harris):
print(f'Working on mode {pp}/{n_harris}')
# Apply calibration aberration to used mode
harris_mode = np.zeros(n_harris)
harris_mode[pp] = (nm_aber)/2
luvoir.harris_sm.actuators = harris_mode
# Calculate coronagraphic E-field and add to lists
aberrated_coro_psf, inter = luvoir.calc_psf(display_intermediate=False, return_intermediate='efield',norm_one_photon=True)
focus_field1 = aberrated_coro_psf
focus_fieldS.append(focus_field1)
focus_fieldS_Re.append(focus_field1.real)
focus_fieldS_Im.append(focus_field1.imag)
plt.figure(figsize=(10, 10))
hcipy.imshow_field(np.log(np.abs(focus_fieldS_Im[10])))
plt.colorbar()
luvoir_test = LuvoirA_APLC(optics_input, coronagraph_design, sampling)
luvoir_test.create_segmented_harris_mirror(filepath,pad_orientation, thermal = True,mechanical=False,other=False)
luvoir_test.harris_sm
harris_mode = np.zeros(n_harris)
harris_mode[116] = nm_aber
luvoir_test.harris_sm.actuators = harris_mode
#hcipy.imshow_field(((10*luvoir_test.harris_sm.surface+5*1e-8*luvoir_segmented_pattern)))
#plt.colorbar()
from astropy.io import fits as pf
for pp in range(0, 2):
print(f'Working on mode {pp}/{2}')
# Apply calibration aberration to used mode
harris_mode = np.zeros(n_harris)
harris_mode[550] = (nm_aber)/2
luvoir_test.harris_sm.actuators = harris_mode
# Calculate coronagraphic E-field and add to lists
aberrated_coro_psf_t, inter_t = luvoir_test.calc_psf(display_intermediate=False, return_intermediate='efield',norm_one_photon=True)
pupil_phase = np.zeros((1000,1000))
pupil_phase = np.array(np.reshape(inter_t['harris_seg_mirror'].phase,(1000,1000)))
focal_int = np.zeros((115,115))
focal_int = np.array(np.reshape(aberrated_coro_psf_t.amplitude,(115,115)))
plt.figure(figsize=(14,5))
plt.subplot(1,2,1)
hcipy.imshow_field((inter_t['harris_seg_mirror']).phase, mask=luvoir_test.aperture, cmap='RdBu', vmin=-0.1, vmax=0.1)
plt.title("Wavefront")
cbar = plt.colorbar()
cbar.ax.tick_params(labelsize = 15)
cbar.set_label("radians",fontsize =15)
plt.subplot(1,2,2)
hcipy.imshow_field(np.log((aberrated_coro_psf_t.amplitude)),cmap='RdBu')
plt.title("Aberrated Coronagraphic PSF")
cbar = plt.colorbar()
cbar.ax.tick_params(labelsize = 15)
cbar.set_label("contrast",fontsize =15)
print(type(pupil_phase), type(focal_int))
#plt.savefig('/Users/asahoo/Desktop/data_repos/harris_data/ball_del_01/plot_%d.png'%pp)
pf.writeto('/Users/asahoo/Desktop/data_repos/harris_data/ball_del_01/pupil_%d.fits'%pp,pupil_phase)
pf.writeto('/Users/asahoo/Desktop/data_repos/harris_data/ball_del_01/focal_%d.fits'%pp, focal_int)
focal_int?
#np.shape(aberrated_coro_psf_t.amplitude)
np.sqrt(13225)
plt.figure(figsize=(14,5))
plt.subplot(1,2,1)
hcipy.imshow_field((inter_t['harris_seg_mirror']).phase, mask=luvoir_test.aperture, cmap='RdBu')
plt.title("Wavefront")
cbar = plt.colorbar()
cbar.ax.tick_params(labelsize = 15)
cbar.set_label("radians",fontsize =15)
plt.subplot(1,2,2)
hcipy.imshow_field(np.log((aberrated_coro_psf_t.amplitude)))
plt.title("Aberrated Coronagraphic PSF")
cbar = plt.colorbar()
cbar.ax.tick_params(labelsize = 15)
cbar.set_label("contrast",fontsize =15)
###Output
_____no_output_____
###Markdown
Construct the PASTIS matrix from the E-fields
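The cell below builds each matrix element as the dark-hole average of the cross term between poked and reference E-fields: with $E_{\mathrm{ref}}$ the unaberrated coronagraphic field and $E_i$ the field obtained by poking mode $i$ with a calibration aberration of $a$ nm, $M_{ij} = \big\langle \mathrm{Re}\big[(E_i-E_{\mathrm{ref}})(E_j-E_{\mathrm{ref}})^{*}\big]\big\rangle_{\mathrm{DH}} / (\mathrm{norm}\cdot a^{2})$, which is exactly the quantity accumulated in the double loop and then normalized by the squared aberration amplitude.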
###Code
mat_fast = np.zeros([n_harris, n_harris]) # create empty matrix
for i in range(0, n_harris):
for j in range(0, n_harris):
test = np.real((focus_fieldS[i].electric_field - Efield_ref) * np.conj(focus_fieldS[j].electric_field - Efield_ref))
dh_test = (test / norm) * luvoir.dh_mask
contrast = np.mean(dh_test[np.where(luvoir.dh_mask != 0)])
mat_fast[i, j] = contrast
matrix_pastis = np.copy(mat_fast)
matrix_pastis /= np.square(nm_aber * 1e9)
plt.figure(figsize=(15,5))
#plt.subplot(1,2,1)
plt.imshow(np.log(np.abs(mat_fast)))
#plt.title("PASTIS matrix")
#plt.savefig('/Users/asahoo/Desktop/P_matrix.png')
#plt.colorbar()
# plt.subplot(1,2,2)
# plt.imshow(np.log(np.abs(matrix_pastis)))
# plt.title("np.log(np.abs(matrix_pastis))")
# plt.colorbar()
filename_matrix = 'PASTISmatrix_n_harris_' + str(n_harris)
hcipy.write_fits(matrix_pastis, os.path.join(resDir, filename_matrix + '.fits'))
print('Matrix saved to:', os.path.join(resDir, filename_matrix + '.fits'))
filename_matrix = 'EFIELD_Re_matrix_n_harris_' + str(n_harris)
hcipy.write_fits(focus_fieldS_Re, os.path.join(resDir, filename_matrix + '.fits'))
print('Efield Real saved to:', os.path.join(resDir, filename_matrix + '.fits'))
filename_matrix = 'EFIELD_Im_matrix_n_harris_' + str(n_harris)
hcipy.write_fits(focus_fieldS_Im, os.path.join(resDir, filename_matrix + '.fits'))
print('Efield Imag saved to:', os.path.join(resDir, filename_matrix + '.fits'))
end_time = time.time()
print('Runtime for harris modes:', end_time - start_time, 'sec =', (end_time - start_time) / 60, 'min')
print('Data saved to {}'.format(resDir))
###Output
_____no_output_____
###Markdown
Error analysis
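The cells below eigendecompose the matrix and derive the usual PASTIS error-budget quantities: per-mode tolerances $\sigma_k=\sqrt{c_t/(N\,\lambda_k)}$, with $c_t=10^{-10}$ the static-contrast target, $\lambda_k$ the sorted eigenvalues and $N$ the number of Harris modes (600 in the code), and per-actuator requirements $\mu_j=\sqrt{(c_t/N)/M_{jj}}$ taken from the matrix diagonal.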
###Code
evals, evecs = np.linalg.eig(matrix_pastis)
sorted_evals = np.sort(evals)
sorted_indices = np.argsort(evals)
sorted_evecs = evecs[:, sorted_indices]
plt.figure(figsize=(10, 10))
#plt.plot(evals, label='Unsorted from eigendecomposition')
plt.plot(sorted_evals)
plt.semilogy()
plt.xlabel('Mode Index')
plt.ylabel('Sensitivity of contrast for each mode')
plt.tick_params(top=True, bottom=True, left=True, right=True,
labelleft=True, labelbottom=True)
#plt.legend()
# emodes = []
# eunit = 1e-9
# for mode in range(len(evals)):
# print('Working on mode {}/{}.'.format(mode+1, len(evals)))
# harris_coeffs = eunit*sorted_evecs[:, mode]/2
# luvoir.harris_sm.actuators = harris_coeffs
# wf_harris_sm = luvoir.harris_sm(luvoir.wf_aper)
# emodes.append(wf_harris_sm.phase)
c_target_log = -10
c_target = 10**(c_target_log)
n_repeat = 20
mu_map_harris = np.sqrt(((c_target) / (n_harris)) / (np.diag(matrix_pastis)))
plt.figure(figsize=(20,5))
plt.title("Segment-based PASTIS constraints from PASTIS matrix and PASTIS modes")
plt.plot(mu_map_harris)
sigma = np.sqrt(((c_target)) / (600 * sorted_evals))
plt.figure(figsize=(20,5))
plt.title("Max mode contribution(s) from the static-contrast target and eigen values")
plt.plot(sigma)
cont_cum_pastis = []
for maxmode in range(sorted_evecs.shape[0]):
aber = np.nansum(sorted_evecs[:, :maxmode+1] * sigma[:maxmode+1], axis=1)
aber *= u.nm
contrast_matrix = util.pastis_contrast(aber, matrix_pastis) + contrast_floor
cont_cum_pastis.append(contrast_matrix)
plt.figure(figsize=(10,10))
plt.plot(cont_cum_pastis)
plt.xlabel("modes")
plt.ylabel("List of cumulative contrast")
cont_ind_pastis = []
for maxmode in range(sorted_evecs.shape[0]):
aber = sorted_evecs[:, maxmode] * sigma[maxmode]
aber *=u.nm
contrast_matrix = util.pastis_contrast(aber, matrix_pastis)
cont_ind_pastis.append(contrast_matrix)
plt.figure(figsize=(20,10))
plt.plot((cont_ind_pastis))
plt.xlabel("modes")
plt.ylabel("List of Individual contrast")
plt.yscale('log')
npup = int(np.sqrt(luvoir.pupil_grid.x.shape[0]))
nimg = int(np.sqrt(luvoir.focal_det.x.shape[0]))
# Getting the flux together
sptype = 'A0V' # Put this on config
Vmag = 0.0 # Put this in loop
minlam = 500 * u.nanometer # Put this on config
maxlam = 600 * u.nanometer # Put this on config
star_flux = exoscene.star.bpgs_spectype_to_photonrate(spectype=sptype, Vmag=Vmag, minlam=minlam.value, maxlam=maxlam.value)
Nph = star_flux.value*15**2*np.sum(luvoir.apodizer**2) / npup**2
dark_current = 0  # 0.000072 e- per s
CIC = 0  # 0.00076 e- per s (clock-induced charge)
harris_mode = np.zeros(n_harris)
luvoir.harris_sm.actuators = harris_mode
nonaberrated_coro_psf, ref_direct, inter_ref = luvoir.calc_psf(ref=True, display_intermediate=False, return_intermediate='efield', norm_one_photon=True)  # ref_direct (direct PSF) is not used below
Efield_ref = nonaberrated_coro_psf.electric_field
harris_mode = np.zeros(n_harris)
luvoir.harris_sm.actuators = harris_mode
harris_ref2 = luvoir.calc_out_of_band_wfs(norm_one_photon=True)
harris_ref2_sub_real = hcipy.field.subsample_field(harris_ref2.real, z_pup_downsample, grid_zernike, statistic='mean')
harris_ref2_sub_imag = hcipy.field.subsample_field(harris_ref2.imag, z_pup_downsample, grid_zernike, statistic='mean')
Efield_ref_OBWFS = (harris_ref2_sub_real + 1j*harris_ref2_sub_imag) * z_pup_downsample
plt.figure(figsize = (20,10))
plt.subplot(1,2,1)
hcipy.imshow_field(Efield_ref_OBWFS.real, cmap ='RdBu')
plt.colorbar()
plt.subplot(1,2,2)
hcipy.imshow_field(Efield_ref_OBWFS.imag, cmap ='RdBu')
plt.colorbar()
nyquist_sampling = 2.
# Actual grid for LUVOIR images
grid_test = hcipy.make_focal_grid(
luvoir.sampling,
luvoir.imlamD,
pupil_diameter=luvoir.diam,
focal_length=1,
reference_wavelength=luvoir.wvln,
)
# Actual grid for LUVOIR images that are nyquist sampled
grid_det_subsample = hcipy.make_focal_grid(
nyquist_sampling,
np.floor(luvoir.imlamD),
pupil_diameter=luvoir.diam,
focal_length=1,
reference_wavelength=luvoir.wvln,
)
n_nyquist = int(np.sqrt(grid_det_subsample.x.shape[0]))
### Dark hole mask
design = 'small'
dh_outer_nyquist = hcipy.circular_aperture(2 * luvoir.apod_dict[design]['owa'] * luvoir.lam_over_d)(grid_det_subsample)
dh_inner_nyquist = hcipy.circular_aperture(2 * luvoir.apod_dict[design]['iwa'] * luvoir.lam_over_d)(grid_det_subsample)
dh_mask_nyquist = (dh_outer_nyquist - dh_inner_nyquist).astype('bool')
dh_size = len(np.where(luvoir.dh_mask != 0)[0])
dh_size_nyquist = len(np.where(dh_mask_nyquist != 0)[0])
dh_index = np.where(luvoir.dh_mask != 0)[0]
dh_index_nyquist = np.where(dh_mask_nyquist != 0)[0]
# E0_LOWFS = np.zeros([N_pup_z*N_pup_z,1,2])
# E0_LOWFS[:,0,0] = Efield_ref_LOWFS.real
# E0_LOWFS[:,0,1] = Efield_ref_LOWFS.imag
E0_OBWFS = np.zeros([N_pup_z*N_pup_z,1,2])
E0_OBWFS[:,0,0] = Efield_ref_OBWFS.real
E0_OBWFS[:,0,1] = Efield_ref_OBWFS.imag
E0_coron = np.zeros([nimg*nimg,1,2])
E0_coron[:,0,0] = Efield_ref.real
E0_coron[:,0,1] = Efield_ref.imag
E0_coron_nyquist = np.zeros([n_nyquist*n_nyquist,1,2])
tmp0 = hcipy.interpolation.make_linear_interpolator_separated(Efield_ref, grid=grid_test)
Efield_ref_nyquist = (luvoir.sampling/nyquist_sampling)**2*tmp0(grid_det_subsample)
E0_coron_nyquist[:,0,0] = Efield_ref_nyquist.real
E0_coron_nyquist[:,0,1] = Efield_ref_nyquist.imag
E0_coron_DH = np.zeros([dh_size,1,2])
E0_coron_DH[:,0,0] = Efield_ref.real[dh_index]
E0_coron_DH[:,0,1] = Efield_ref.imag[dh_index]
E0_coron_DH_nyquist = np.zeros([dh_size_nyquist,1,2])
E0_coron_DH_nyquist[:,0,0] = Efield_ref_nyquist.real[dh_index_nyquist]
E0_coron_DH_nyquist[:,0,1] = Efield_ref_nyquist.imag[dh_index_nyquist]
filename_matrix = 'EFIELD_Re_matrix_n_harris_' + str(n_harris) + '.fits'
G_harris_real = fits.getdata(os.path.join(overall_dir, 'matrix_numerical', filename_matrix))
filename_matrix = 'EFIELD_Im_matrix_n_harris_' + str(n_harris) + '.fits'
G_harris_imag = fits.getdata(os.path.join(overall_dir, 'matrix_numerical', filename_matrix))
G_coron_harris_nyquist= np.zeros([n_nyquist*n_nyquist,2,n_harris])
for pp in range(0, n_harris):
tmp0 = G_harris_real[pp] + 1j*G_harris_imag[pp]
tmp1 = hcipy.interpolation.make_linear_interpolator_separated(tmp0, grid=grid_test)
tmp2 = (luvoir.sampling/nyquist_sampling)**2*tmp1(grid_det_subsample)
G_coron_harris_nyquist[:,0,pp] = tmp2.real - Efield_ref_nyquist.real
    G_coron_harris_nyquist[:,1,pp] = tmp2.imag - Efield_ref_nyquist.imag
G_coron_harris_DH= np.zeros([dh_size,2,n_harris])
for pp in range(0, n_harris):
G_coron_harris_DH[:,0,pp] = G_harris_real[pp,dh_index] - Efield_ref.real[dh_index]
G_coron_harris_DH[:,1,pp] = G_harris_imag[pp,dh_index] - Efield_ref.imag[dh_index]
G_coron_harris_DH_nyquist= np.zeros([dh_size_nyquist,2,n_harris])
for pp in range(0, n_harris):
tmp0 = G_harris_real[pp] + 1j*G_harris_imag[pp]
tmp1 = hcipy.interpolation.make_linear_interpolator_separated(tmp0, grid=grid_test)
tmp2 = (luvoir.sampling/nyquist_sampling)**2*tmp1(grid_det_subsample)
    G_coron_harris_DH_nyquist[:,0,pp] = tmp2.real[dh_index_nyquist] - Efield_ref_nyquist.real[dh_index_nyquist]
    G_coron_harris_DH_nyquist[:,1,pp] = tmp2.imag[dh_index_nyquist] - Efield_ref_nyquist.imag[dh_index_nyquist]
G_coron_harris= np.zeros([nimg*nimg,2,n_harris])
for pp in range(0, n_harris):
G_coron_harris[:,0,pp] = G_harris_real[pp] - Efield_ref.real
G_coron_harris[:,1,pp] = G_harris_imag[pp] - Efield_ref.imag
start_time = time.time()
focus_fieldS = []
focus_fieldS_Re = []
focus_fieldS_Im = []
for pp in range(0, n_harris):
print(pp)
harris_modes = np.zeros(n_harris)
harris_modes[pp] = (nm_aber) / 2
    luvoir.harris_sm.actuators = harris_modes
harris_meas = luvoir.calc_out_of_band_wfs(norm_one_photon=True)
harris_meas_sub_real = hcipy.field.subsample_field(harris_meas.real, z_pup_downsample, grid_zernike, statistic='mean')
harris_meas_sub_imag = hcipy.field.subsample_field(harris_meas.imag, z_pup_downsample, grid_zernike, statistic='mean')
focus_field1 = harris_meas_sub_real + 1j * harris_meas_sub_imag
focus_fieldS.append(focus_field1)
focus_fieldS_Re.append(focus_field1.real)
focus_fieldS_Im.append(focus_field1.imag)
filename_matrix = 'EFIELD_OBWFS_Re_matrix_num_harris_' + str(n_harris)
hcipy.write_fits(focus_fieldS_Re, os.path.join(resDir, filename_matrix + '.fits'))
print('Efield Real saved to:', os.path.join(resDir, filename_matrix + '.fits'))
filename_matrix = 'EFIELD_OBWFS_Im_matrix_num_harris_' + str(n_harris)
hcipy.write_fits(focus_fieldS_Im, os.path.join(resDir, filename_matrix + '.fits'))
print('Efield Imag saved to:', os.path.join(resDir, filename_matrix + '.fits'))
filename_matrix = 'EFIELD_OBWFS_Re_matrix_num_harris_' + str(n_harris)+'.fits'
G_OBWFS_real = fits.getdata(os.path.join(overall_dir, 'matrix_numerical', filename_matrix))
filename_matrix = 'EFIELD_OBWFS_Im_matrix_num_harris_' + str(n_harris)+'.fits'
G_OBWFS_imag = fits.getdata(os.path.join(overall_dir, 'matrix_numerical', filename_matrix))
G_OBWFS= np.zeros([N_pup_z*N_pup_z,2,n_harris])
for pp in range(0, n_harris):
G_OBWFS[:,0,pp] = G_OBWFS_real[pp]*z_pup_downsample - Efield_ref_OBWFS.real
G_OBWFS[:,1,pp] = G_OBWFS_imag[pp]*z_pup_downsample - Efield_ref_OBWFS.imag
def req_closedloop_calc_recursive(Gcoro, Gsensor, E0coro, E0sensor, Dcoro, Dsensor, t_exp, flux, Q, Niter, dh_mask,
norm):
P = np.zeros(Q.shape) # WFE modes covariance estimate
r = Gsensor.shape[2]
N = Gsensor.shape[0]
N_img = Gcoro.shape[0]
c = 1
# Iterations of ALGORITHM 1
contrast_hist = np.zeros(Niter)
intensity_WFS_hist = np.zeros(Niter)
cal_I_hist = np.zeros(Niter)
eps_hist = np.zeros([Niter, r])
averaged_hist = np.zeros(Niter)
contrasts = []
for pp in range(Niter):
eps = np.random.multivariate_normal(np.zeros(r), P + Q * t_exp).reshape((1, 1, r)) # random modes
G_eps = np.sum(Gsensor * eps, axis=2).reshape((N, 1, 2 * c)) + E0sensor # electric field
G_eps_squared = np.sum(G_eps * G_eps, axis=2, keepdims=True)
G_eps_G = np.matmul(G_eps, Gsensor)
G_eps_G_scaled = G_eps_G / np.sqrt(G_eps_squared + Dsensor / flux / t_exp) # trick to save RAM
cal_I = 4 * flux * t_exp * np.einsum("ijk,ijl->kl", G_eps_G_scaled, G_eps_G_scaled) # information matrix
P = np.linalg.inv(np.linalg.inv(P + Q * t_exp / 2) + cal_I)
# P = np.linalg.inv(cal_I)
# Coronagraph
G_eps_coron = np.sum(Gcoro * eps, axis=2).reshape((N_img, 1, 2 * c)) + E0coro
G_eps_coron_squared = np.sum(G_eps_coron * G_eps_coron, axis=2, keepdims=True)
intensity = G_eps_coron_squared * flux * t_exp + Dcoro
# Wavefront sensor
intensity_WFS = G_eps_squared * flux * t_exp + Dsensor
# Archive
        test_DH0 = intensity[:, 0, 0] * dh_mask  # use the dh_mask argument rather than the global
test_DH = np.mean(test_DH0[np.where(test_DH0 != 0)])
contrasts.append(test_DH / flux / t_exp / norm)
intensity_WFS_hist[pp] = np.sum(intensity_WFS) / flux
cal_I_hist[pp] = np.mean(cal_I) / flux
eps_hist[pp] = eps
averaged_hist[pp] = np.mean(contrasts)
# print("est. contrast", np.mean(contrasts))
outputs = {'intensity_WFS_hist': intensity_WFS_hist,
'cal_I_hist': cal_I_hist,
'eps_hist': eps_hist,
'averaged_hist': averaged_hist,
'contrasts': contrasts}
return outputs
def req_closedloop_calc_batch(Gcoro, Gsensor, E0coro, E0sensor, Dcoro, Dsensor, t_exp, flux, Q, Niter, dh_mask, norm):
P = np.zeros(Q.shape) # WFE modes covariance estimate
r = Gsensor.shape[2]
N = Gsensor.shape[0]
N_img = Gcoro.shape[0]
c = 1
# Iterations of ALGORITHM 1
contrast_hist = np.zeros(Niter)
intensity_WFS_hist = np.zeros(Niter)
cal_I_hist = np.zeros(Niter)
eps_hist = np.zeros([Niter, r])
averaged_hist = np.zeros(Niter)
contrasts = []
for pp in range(Niter):
eps = np.random.multivariate_normal(np.zeros(r), P + Q * t_exp).reshape((1, 1, r)) # random modes
G_eps = np.sum(Gsensor * eps, axis=2).reshape((N, 1, 2 * c)) + E0sensor # electric field
G_eps_squared = np.sum(G_eps * G_eps, axis=2, keepdims=True)
G_eps_G = np.matmul(G_eps, Gsensor)
G_eps_G_scaled = G_eps_G / np.sqrt(G_eps_squared + Dsensor / flux / t_exp) # trick to save RAM
cal_I = 4 * flux * t_exp * np.einsum("ijk,ijl->kl", G_eps_G_scaled, G_eps_G_scaled) # information matrix
# P = np.linalg.inv(np.linalg.inv(P+Q*t_exp/2) + cal_I)
P = np.linalg.pinv(cal_I)
# Coronagraph
G_eps_coron = np.sum(Gcoro * eps, axis=2).reshape((N_img, 1, 2 * c)) + E0coro
G_eps_coron_squared = np.sum(G_eps_coron * G_eps_coron, axis=2, keepdims=True)
intensity = G_eps_coron_squared * flux * t_exp + Dcoro
# Wavefront sensor
intensity_WFS = G_eps_squared * flux * t_exp + Dsensor
# Archive
        test_DH0 = intensity[:, 0, 0] * dh_mask  # use the dh_mask argument rather than the global
test_DH = np.mean(test_DH0[np.where(test_DH0 != 0)])
contrasts.append(test_DH / flux / t_exp / norm)
intensity_WFS_hist[pp] = np.sum(intensity_WFS) / flux
cal_I_hist[pp] = np.mean(cal_I) / flux
eps_hist[pp] = eps
averaged_hist[pp] = np.mean(contrasts)
# print("est. contrast", np.mean(contrasts))
# print("est. contrast", np.mean(contrasts))
outputs = {'intensity_WFS_hist': intensity_WFS_hist,
'cal_I_hist': cal_I_hist,
'eps_hist': eps_hist,
'averaged_hist': averaged_hist,
'contrasts': contrasts}
return outputs
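# Quick orientation on the two estimators defined above: the "recursive" version propagates a
# prior covariance P between iterations with the information-filter update
# P <- inv( inv(P + Q*t_exp/2) + cal_I ), while the "batch" version discards the prior and sets
# P = pinv(cal_I) from the current exposure alone. Their signatures and returned dictionaries
# are identical, so they can be swapped in the sweeps below.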
flux = Nph
Qharris = np.diag(np.asarray(mu_map_harris**2))
Qharris?
# Running a bunch of tests for time series
Ntimes = 20
TimeMinus = -2
TimePlus = 3.5
Nwavescale = 8
WaveScaleMinus = -2
WaveScalePlus = 1
Nflux = 3
fluxPlus = 10
fluxMinus = 0
timeVec = np.logspace(TimeMinus,TimePlus,Ntimes)
WaveVec = np.logspace(WaveScaleMinus,WaveScalePlus,Nwavescale)
fluxVec = np.linspace(fluxMinus,fluxPlus,Nflux)
wavescaleVec = np.logspace(WaveScaleMinus,WaveScalePlus,Nwavescale)
niter = 10
print('harris modes with batch OBWFS and noise')
timer1 = time.time()
wavescale = 1.
StarMag = 1.0
result_1 = []
for tscale in np.logspace(TimeMinus, TimePlus, Ntimes):
Starfactor = 10**(-StarMag/2.5)
print(tscale)
tmp0 = req_closedloop_calc_batch(G_coron_harris, G_OBWFS, E0_coron, E0_OBWFS, dark_current+CIC/tscale,
dark_current+CIC/tscale, tscale, flux*Starfactor, wavescale**2*Qharris,
niter, luvoir.dh_mask, norm)
tmp1 = tmp0['averaged_hist']
n_tmp1 = len(tmp1)
result_1.append(tmp1[n_tmp1-1])
timer2 = time.time()
print(timer2 - timer1)
niter = 10
print('harris modes with batch OBWFS and noise')
timer1 = time.time()
wavescale = 1.
StarMag = 3.0
result_3 = []
for tscale in np.logspace(TimeMinus, TimePlus, Ntimes):
Starfactor = 10**(-StarMag/2.5)
print(tscale)
tmp0 = req_closedloop_calc_batch(G_coron_harris, G_OBWFS, E0_coron, E0_OBWFS, dark_current+CIC/tscale,
dark_current+CIC/tscale, tscale, flux*Starfactor, wavescale**2*Qharris,
niter, luvoir.dh_mask, norm)
tmp1 = tmp0['averaged_hist']
n_tmp1 = len(tmp1)
result_3.append(tmp1[n_tmp1-1])
timer2 = time.time()
print(timer2 - timer1)
niter = 10
print('harris modes with batch OBWFS and noise')
timer1 = time.time()
wavescale = 1.
StarMag = 5.0
result_5 = []
for tscale in np.logspace(TimeMinus, TimePlus, Ntimes):
Starfactor = 10**(-StarMag/2.5)
print(tscale)
tmp0 = req_closedloop_calc_batch(G_coron_harris, G_OBWFS, E0_coron, E0_OBWFS, dark_current+CIC/tscale,
dark_current+CIC/tscale, tscale, flux*Starfactor, wavescale**2*Qharris,
niter, luvoir.dh_mask, norm)
tmp1 = tmp0['averaged_hist']
n_tmp1 = len(tmp1)
result_5.append(tmp1[n_tmp1-1])
timer2 = time.time()
print(timer2 - timer1)
niter = 10
print('harris modes with batch OBWFS and noise')
timer1 = time.time()
wavescale = 1.
StarMag = 7.0
result_7 = []
for tscale in np.logspace(TimeMinus, TimePlus, Ntimes):
Starfactor = 10**(-StarMag/2.5)
print(tscale)
tmp0 = req_closedloop_calc_batch(G_coron_harris, G_OBWFS, E0_coron, E0_OBWFS, dark_current+CIC/tscale,
dark_current+CIC/tscale, tscale, flux*Starfactor, wavescale**2*Qharris,
niter, luvoir.dh_mask, norm)
tmp1 = tmp0['averaged_hist']
n_tmp1 = len(tmp1)
result_7.append(tmp1[n_tmp1-1])
timer2 = time.time()
print(timer2 - timer1)
niter = 10
print('harris modes with batch OBWFS and noise')
timer1 = time.time()
wavescale = 1.
StarMag = 9.0
result_9 = []
for tscale in np.logspace(TimeMinus, TimePlus, Ntimes):
Starfactor = 10**(-StarMag/2.5)
print(tscale)
tmp0 = req_closedloop_calc_batch(G_coron_harris, G_OBWFS, E0_coron, E0_OBWFS, dark_current+CIC/tscale,
dark_current+CIC/tscale, tscale, flux*Starfactor, wavescale**2*Qharris,
niter, luvoir.dh_mask, norm)
tmp1 = tmp0['averaged_hist']
n_tmp1 = len(tmp1)
result_9.append(tmp1[n_tmp1-1])
timer2 = time.time()
print(timer2 - timer1)
niter = 10
print('harris modes with batch OBWFS and noise')
timer1 = time.time()
wavescale = 1.
StarMag = 11.0
result_11 = []
for tscale in np.logspace(TimeMinus, TimePlus, Ntimes):
Starfactor = 10**(-StarMag/2.5)
print(tscale)
tmp0 = req_closedloop_calc_batch(G_coron_harris, G_OBWFS, E0_coron, E0_OBWFS, dark_current+CIC/tscale,
dark_current+CIC/tscale, tscale, flux*Starfactor, wavescale**2*Qharris,
niter, luvoir.dh_mask, norm)
tmp1 = tmp0['averaged_hist']
n_tmp1 = len(tmp1)
result_11.append(tmp1[n_tmp1-1])
timer2 = time.time()
print(timer2 - timer1)
niter = 10
print('harris modes with batch OBWFS and noise')
timer1 = time.time()
wavescale = 1.
StarMag = 13.0
result_13 = []
for tscale in np.logspace(TimeMinus, TimePlus, Ntimes):
Starfactor = 10**(-StarMag/2.5)
print(tscale)
tmp0 = req_closedloop_calc_batch(G_coron_harris, G_OBWFS, E0_coron, E0_OBWFS, dark_current+CIC/tscale,
dark_current+CIC/tscale, tscale, flux*Starfactor, wavescale**2*Qharris,
niter, luvoir.dh_mask, norm)
tmp1 = tmp0['averaged_hist']
n_tmp1 = len(tmp1)
result_13.append(tmp1[n_tmp1-1])
timer2 = time.time()
print(timer2 - timer1)
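# The seven star-magnitude sweeps above repeat the same loop body. A hypothetical helper
# (a sketch, not used elsewhere in this notebook) that reproduces any one of them:
def run_obwfs_sweep(star_mag, wavescale=1., niter=10):
    """Final averaged dark-hole contrast vs. WFS exposure time, for one stellar magnitude."""
    star_factor = 10**(-star_mag / 2.5)
    results = []
    for tscale in np.logspace(TimeMinus, TimePlus, Ntimes):
        out = req_closedloop_calc_batch(G_coron_harris, G_OBWFS, E0_coron, E0_OBWFS,
                                        dark_current + CIC / tscale, dark_current + CIC / tscale,
                                        tscale, flux * star_factor, wavescale**2 * Qharris,
                                        niter, luvoir.dh_mask, norm)
        results.append(out['averaged_hist'][-1])
    return results

# e.g. run_obwfs_sweep(5.0) matches result_5 above up to the Monte-Carlo scatter of the random draws.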
texp = np.logspace(TimeMinus, TimePlus, Ntimes)
plt.figure(figsize =(10,10))
#plt.plot(texp,result, label =0.0)
plt.plot(texp,result_1, label=r'$m_{v}=1$')
plt.plot(texp,result_3, label=r'$m_{v}=3$')
plt.plot(texp,result_5, label=r'$m_{v}=5$')
plt.plot(texp,result_7, label=r'$m_{v}=7$')
plt.plot(texp,result_9, label=r'$m_{v}=9$')
plt.plot(texp,result_11, label=r'$m_{v}=11$')
plt.plot(texp,result_13, label=r'$m_{v}=13$')
plt.xlabel("$t_{WFS}$ in secs")
plt.ylabel(r"$\Delta$ contrast")
plt.yscale('log')
plt.xscale('log')
plt.legend()
plt.show()
niter = 10
print('harris modes with batch OBWFS and noise')
timer1 = time.time()
wavescale = 0.1
StarMag = 5.0
result_wf_0_1 = []
for tscale in np.logspace(TimeMinus, TimePlus, Ntimes):
Starfactor = 10**(-StarMag/2.5)
print(tscale)
tmp0 = req_closedloop_calc_batch(G_coron_harris, G_OBWFS, E0_coron, E0_OBWFS, dark_current+CIC/tscale,
dark_current+CIC/tscale, tscale, flux*Starfactor, wavescale**2*Qharris,
niter, luvoir.dh_mask, norm)
tmp1 = tmp0['averaged_hist']
n_tmp1 = len(tmp1)
result_wf_0_1.append(tmp1[n_tmp1-1])
timer2 = time.time()
print(timer2 - timer1)
niter = 10
print('harris modes with batch OBWFS and noise')
timer1 = time.time()
wavescale = 0.3
StarMag = 5.0
result_wf_0_3 = []
for tscale in np.logspace(TimeMinus, TimePlus, Ntimes):
Starfactor = 10**(-StarMag/2.5)
print(tscale)
tmp0 = req_closedloop_calc_batch(G_coron_harris, G_OBWFS, E0_coron, E0_OBWFS, dark_current+CIC/tscale,
dark_current+CIC/tscale, tscale, flux*Starfactor, wavescale**2*Qharris,
niter, luvoir.dh_mask, norm)
tmp1 = tmp0['averaged_hist']
n_tmp1 = len(tmp1)
result_wf_0_3.append(tmp1[n_tmp1-1])
timer2 = time.time()
print(timer2 - timer1)
niter = 10
print('harris modes with batch OBWFS and noise')
timer1 = time.time()
wavescale = 0.5
StarMag = 5.0
result_wf_0_5 = []
for tscale in np.logspace(TimeMinus, TimePlus, Ntimes):
Starfactor = 10**(-StarMag/2.5)
print(tscale)
tmp0 = req_closedloop_calc_batch(G_coron_harris, G_OBWFS, E0_coron, E0_OBWFS, dark_current+CIC/tscale,
dark_current+CIC/tscale, tscale, flux*Starfactor, wavescale**2*Qharris,
niter, luvoir.dh_mask, norm)
tmp1 = tmp0['averaged_hist']
n_tmp1 = len(tmp1)
result_wf_0_5.append(tmp1[n_tmp1-1])
timer2 = time.time()
print(timer2 - timer1)
niter = 10
print('harris modes with batch OBWFS and noise')
timer1 = time.time()
wavescale = 0.7
StarMag = 5.0
result_wf_0_7 = []
for tscale in np.logspace(TimeMinus, TimePlus, Ntimes):
Starfactor = 10**(-StarMag/2.5)
print(tscale)
tmp0 = req_closedloop_calc_batch(G_coron_harris, G_OBWFS, E0_coron, E0_OBWFS, dark_current+CIC/tscale,
dark_current+CIC/tscale, tscale, flux*Starfactor, wavescale**2*Qharris,
niter, luvoir.dh_mask, norm)
tmp1 = tmp0['averaged_hist']
n_tmp1 = len(tmp1)
result_wf_0_7.append(tmp1[n_tmp1-1])
timer2 = time.time()
print(timer2 - timer1)
niter = 10
print('harris modes with batch OBWFS and noise')
timer1 = time.time()
wavescale = 1.0
StarMag = 5.0
result_wf_1_0 = []
for tscale in np.logspace(TimeMinus, TimePlus, Ntimes):
Starfactor = 10**(-StarMag/2.5)
print(tscale)
tmp0 = req_closedloop_calc_batch(G_coron_harris, G_OBWFS, E0_coron, E0_OBWFS, dark_current+CIC/tscale,
dark_current+CIC/tscale, tscale, flux*Starfactor, wavescale**2*Qharris,
niter, luvoir.dh_mask, norm)
tmp1 = tmp0['averaged_hist']
n_tmp1 = len(tmp1)
result_wf_1_0.append(tmp1[n_tmp1-1])
timer2 = time.time()
print(timer2 - timer1)
niter = 10
print('harris modes with batch OBWFS and noise')
timer1 = time.time()
wavescale = 1.3
StarMag = 5.0
result_wf_1_3 = []
for tscale in np.logspace(TimeMinus, TimePlus, Ntimes):
Starfactor = 10**(-StarMag/2.5)
print(tscale)
tmp0 = req_closedloop_calc_batch(G_coron_harris, G_OBWFS, E0_coron, E0_OBWFS, dark_current+CIC/tscale,
dark_current+CIC/tscale, tscale, flux*Starfactor, wavescale**2*Qharris,
niter, luvoir.dh_mask, norm)
tmp1 = tmp0['averaged_hist']
n_tmp1 = len(tmp1)
result_wf_1_3.append(tmp1[n_tmp1-1])
timer2 = time.time()
print(timer2 - timer1)
niter = 10
print('harris modes with batch OBWFS and noise')
timer1 = time.time()
wavescale = 1.5
StarMag = 5.0
result_wf_1_5 = []
for tscale in np.logspace(TimeMinus, TimePlus, Ntimes):
Starfactor = 10**(-StarMag/2.5)
print(tscale)
tmp0 = req_closedloop_calc_batch(G_coron_harris, G_OBWFS, E0_coron, E0_OBWFS, dark_current+CIC/tscale,
dark_current+CIC/tscale, tscale, flux*Starfactor, wavescale**2*Qharris,
niter, luvoir.dh_mask, norm)
tmp1 = tmp0['averaged_hist']
n_tmp1 = len(tmp1)
result_wf_1_5.append(tmp1[n_tmp1-1])
timer2 = time.time()
print(timer2 - timer1)
np.shape(result_wf_1_3)
texp = np.logspace(TimeMinus, TimePlus, Ntimes)
plt.figure(figsize =(15,10))
plt.plot(texp,result_wf_0_3-contrast_floor, label=r'$\Delta_{wf}=0.3$')
plt.plot(texp,result_wf_0_5-contrast_floor, label=r'$\Delta_{wf}=0.5$')
plt.plot(texp,result_wf_0_7-contrast_floor, label=r'$\Delta_{wf}=0.7$')
plt.plot(texp,result_wf_1_0-contrast_floor, label=r'$\Delta_{wf}=1.0$')
plt.plot(texp,result_wf_1_3-contrast_floor, label=r'$\Delta_{wf}=1.3$')
plt.plot(texp,result_wf_1_5-contrast_floor, label=r'$\Delta_{wf}=1.5$')
plt.xlabel("$t_{WFS}$ in secs")
plt.ylabel(r"$\Delta$ contrast")
plt.yscale('log')
plt.xscale('log')
plt.legend(loc = 'lower right')
plt.show()
N_zernike = 5
zernike_coeffs_numaps = np.zeros([N_zernike,n_harris])
MID_modes_std = mu_map_harris
for qq in range(N_zernike):
zernike_coeffs_tmp = np.zeros([n_harris])
for kk in range(120):
zernike_coeffs_tmp[qq+(kk)*N_zernike] = MID_modes_std[qq+(kk)*N_zernike]
zernike_coeffs_numaps[qq] = zernike_coeffs_tmp
zernike_coeffs_table = np.zeros([N_zernike,120])
for qq in range(N_zernike):
zernike_coeffs_tmp = np.zeros([120])
for kk in range(120):
zernike_coeffs_table[qq,kk] = MID_modes_std[qq+(kk)*N_zernike]
nu_maps = []
for qq in range(N_zernike):
zernike_coeffs = zernike_coeffs_numaps[qq]
luvoir.harris_sm.actuators = zernike_coeffs*nm_aber/ 2
nu_maps.append(luvoir.harris_sm.surface)
plt.figure(figsize=(45,10))
plt.subplot(1,3,1)
plt.title("Segment Level 1mk Faceplates Silvered", fontsize =30)
hcipy.imshow_field((nu_maps[0])*1e12, cmap = 'RdBu', vmin = -45, vmax = 65)
plt.tick_params(top=False, bottom=False, left=False, right=False,labelleft=False, labelbottom=False)
cbar = plt.colorbar()
cbar.ax.tick_params(labelsize=30)
cbar.set_label("pm", fontsize =30)
plt.subplot(1,3,2)
plt.title("Segment Level 1mk bulk",fontsize =30)
hcipy.imshow_field((nu_maps[1])*1e12, cmap = 'RdBu',vmin = -80, vmax = 20)
plt.tick_params(top=False, bottom=False, left=False, right=False, labelleft=False, labelbottom=False)
cbar = plt.colorbar()
cbar.ax.tick_params(labelsize=30)
cbar.set_label("pm", fontsize =30)
plt.subplot(1,3,3)
plt.title("Segment Level 1mk gradient radial",fontsize =30)
hcipy.imshow_field((nu_maps[2])*1e12, cmap = 'RdBu',vmin = -60, vmax = 40)
plt.tick_params(top=False, bottom=False, left=False, right=False,labelleft=False, labelbottom=False)
cbar = plt.colorbar()
cbar.ax.tick_params(labelsize=30)
cbar.set_label("pm",fontsize =30)
plt.show()
plt.figure(figsize=(30,10))
plt.subplot(1,2,1)
plt.title("Segment Level 1mk gradient X lateral", fontsize=30)
hcipy.imshow_field((nu_maps[3])*1e12, cmap = 'RdBu',vmin = -20, vmax = 20)
plt.tick_params(top=False, bottom=False, left=False, right=False,labelleft=False, labelbottom=False)
cbar = plt.colorbar()
cbar.ax.tick_params(labelsize=30)
cbar.set_label("pm",fontsize =30)
plt.subplot(1,2,2)
plt.title("Segment level 1mk gradient Z axial",fontsize =30)
hcipy.imshow_field((nu_maps[4])*1e12,cmap = 'RdBu', vmin = -20, vmax = 20)
plt.tick_params(top=False, bottom=False, left=False, right=False,labelleft=False, labelbottom=False)
cbar = plt.colorbar()
cbar.ax.tick_params(labelsize=30)
cbar.set_label("pm",fontsize =30)
five_vec_1 = np.zeros(120)
five_vec_2 = np.zeros(120)
five_vec_3 = np.zeros(120)
five_vec_4 = np.zeros(120)
five_vec_5 = np.zeros(120)
j = -1
for i in range (0,596,5):
j = j+1
print("i---",i,"j---",j)
five_vec_1[j]=mu_map_harris[i]
five_vec_2[j]=mu_map_harris[i+1]
five_vec_3[j]=mu_map_harris[i+2]
five_vec_4[j]=mu_map_harris[i+3]
five_vec_5[j]=mu_map_harris[i+4]
luvoir2 = LuvoirA_APLC(optics_input, coronagraph_design, sampling)
luvoir2.create_segmented_mirror(1)
luvoir2.sm.actuators = five_vec_1
luvoir3 = LuvoirA_APLC(optics_input, coronagraph_design, sampling)
luvoir3.create_segmented_mirror(1)
luvoir3.sm.actuators = five_vec_2
luvoir4 = LuvoirA_APLC(optics_input, coronagraph_design, sampling)
luvoir4.create_segmented_mirror(1)
luvoir4.sm.actuators = five_vec_3
luvoir5 = LuvoirA_APLC(optics_input, coronagraph_design, sampling)
luvoir5.create_segmented_mirror(1)
luvoir5.sm.actuators = five_vec_4
luvoir6 = LuvoirA_APLC(optics_input, coronagraph_design, sampling)
luvoir6.create_segmented_mirror(1)
luvoir6.sm.actuators = five_vec_5
plt.figure(figsize =(50,30))
plt.subplot(2,3,1)
plt.title("Faceplates Silvered",fontsize=40)
hcipy.imshow_field((luvoir2.sm.surface)*1000, cmap = 'RdBu', vmin =0, vmax = 15) #this is a hack
plt.tick_params(top=False, bottom=False, left=False, right=False,labelleft=False, labelbottom=False)
cbar = plt.colorbar()
cbar.ax.tick_params(labelsize=30)
cbar.set_label("mK",fontsize =30)
plt.subplot(2,3,2)
plt.title("Bulk",fontsize=40)
hcipy.imshow_field((luvoir3.sm.surface)*1000, cmap = 'RdBu',vmin =0, vmax = 70)
plt.tick_params(top=False, bottom=False, left=False, right=False, labelleft=False, labelbottom=False)
cbar = plt.colorbar()
cbar.ax.tick_params(labelsize=30)
cbar.set_label("mK",fontsize =30)
plt.subplot(2,3,3)
plt.title("Gradient Radial",fontsize=40)
hcipy.imshow_field((luvoir4.sm.surface)*1000, cmap = 'RdBu',vmin =0, vmax = 140)
plt.tick_params(top=False, bottom=False, left=False, right=False,labelleft=False, labelbottom=False)
cbar = plt.colorbar()
cbar.ax.tick_params(labelsize = 30)
cbar.set_label("mK",fontsize =30)
plt.subplot(2,3,4)
plt.title("Gradient X lateral", fontsize=40)
hcipy.imshow_field((luvoir5.sm.surface)*1000, cmap = 'RdBu',vmin =0, vmax = 30)
plt.tick_params(top=False, bottom=False, left=False, right=False,labelleft=False, labelbottom=False)
cbar = plt.colorbar()
cbar.ax.tick_params(labelsize = 30)
cbar.set_label("mK",fontsize =30)
plt.subplot(2,3,5)
plt.title("Gradient Z axial",fontsize =40)
hcipy.imshow_field((luvoir6.sm.surface)*1000, cmap = 'RdBu',vmin =0, vmax = 50)
plt.tick_params(top=False, bottom=False, left=False, right=False,labelleft=False, labelbottom=False)
cbar = plt.colorbar()
cbar.ax.tick_params(labelsize = 30)
cbar.set_label("mK",fontsize =30)
print(np.min(five_vec_1)*1000, np.max(five_vec_1)*1000, np.mean(five_vec_1)*1000, np.std(five_vec_1)*1000, '\n')
print(np.min(five_vec_2)*1000, np.max(five_vec_2)*1000, np.mean(five_vec_2)*1000, np.std(five_vec_2)*1000, '\n')
print(np.min(five_vec_3)*1000, np.max(five_vec_3)*1000, np.mean(five_vec_3)*1000, np.std(five_vec_3)*1000, '\n')
print(np.min(five_vec_4)*1000, np.max(five_vec_4)*1000, np.mean(five_vec_4)*1000, np.std(five_vec_4)*1000, '\n')
print(np.min(five_vec_5)*1000, np.max(five_vec_5)*1000, np.mean(five_vec_5)*1000, np.std(five_vec_5)*1000, '\n')
###Output
_____no_output_____
###Markdown
In mK/s
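The cell below converts the tolerance maps shown above (displayed in mK) into drift rates by multiplying by $\Delta_{wf}/t_{WFS}$, here $0.5/4\,\mathrm{s}$, so the color bars read in mK/s.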
###Code
delta_wf = 0.5 #unit less
t_wfs = 4 #in sec
plt.figure(figsize =(50,30))
plt.subplot(2,3,1)
plt.title("Faceplates Silvered",fontsize=30)
hcipy.imshow_field((luvoir2.sm.surface)*1000*delta_wf*(1/t_wfs), cmap = 'RdBu',vmin =0, vmax = 1.6)
plt.tick_params(top=False, bottom=False, left=False, right=False,labelleft=False, labelbottom=False)
cbar = plt.colorbar()
cbar.ax.tick_params(labelsize=30)
cbar.set_label("mK/s",fontsize =40)
plt.subplot(2,3,2)
plt.title("Bulk",fontsize=30)
hcipy.imshow_field((luvoir3.sm.surface)*1000*delta_wf*(1/t_wfs), cmap = 'RdBu',vmin =0, vmax = 8)
plt.tick_params(top=False, bottom=False, left=False, right=False, labelleft=False, labelbottom=False)
cbar = plt.colorbar()
cbar.ax.tick_params(labelsize=30)
cbar.set_label("mK/s",fontsize =40)
plt.subplot(2,3,3)
plt.title("Gradient Radial",fontsize=30)
hcipy.imshow_field((luvoir4.sm.surface)*1000*delta_wf*(1/t_wfs), cmap = 'RdBu',vmin =0, vmax = 16)
plt.tick_params(top=False, bottom=False, left=False, right=False,labelleft=False, labelbottom=False)
cbar = plt.colorbar()
cbar.ax.tick_params(labelsize = 30)
cbar.set_label("mK/s",fontsize =40)
plt.subplot(2,3,4)
plt.title("Gradient X lateral", fontsize=30)
hcipy.imshow_field((luvoir5.sm.surface)*1000*delta_wf*(1/t_wfs), cmap = 'RdBu',vmin =0, vmax = 2.5)
plt.tick_params(top=False, bottom=False, left=False, right=False,labelleft=False, labelbottom=False)
cbar = plt.colorbar()
cbar.ax.tick_params(labelsize = 30)
cbar.set_label("mK/s",fontsize =40)
plt.subplot(2,3,5)
plt.title("Gradient Z axial",fontsize =30)
hcipy.imshow_field((luvoir6.sm.surface)*1000*delta_wf*(1/t_wfs), cmap = 'RdBu',vmin =0, vmax = 6)
plt.tick_params(top=False, bottom=False, left=False, right=False,labelleft=False, labelbottom=False)
cbar = plt.colorbar()
cbar.ax.tick_params(labelsize = 30)
cbar.set_label("mK/s",fontsize =40)
plt.show()
print(np.min(five_vec_1)*1000*delta_wf*(1/t_wfs),
np.max(five_vec_1)*1000*delta_wf*(1/t_wfs), np.mean(five_vec_1)*1000*delta_wf*(1/t_wfs)
, np.std(five_vec_1)*1000*delta_wf*(1/t_wfs), '\n')
print(np.min(five_vec_2)*1000*delta_wf*(1/t_wfs),
np.max(five_vec_2)*1000*delta_wf*(1/t_wfs), np.mean(five_vec_2)*1000*delta_wf*(1/t_wfs)
, np.std(five_vec_2)*1000*delta_wf*(1/t_wfs), '\n')
print(np.min(five_vec_3)*1000*delta_wf*(1/t_wfs),
np.max(five_vec_3)*1000*delta_wf*(1/t_wfs), np.mean(five_vec_3)*1000*delta_wf*(1/t_wfs)
, np.std(five_vec_3)*1000*delta_wf*(1/t_wfs), '\n')
print(np.min(five_vec_4)*1000*delta_wf*(1/t_wfs),
np.max(five_vec_4)*1000*delta_wf*(1/t_wfs), np.mean(five_vec_4)*1000*delta_wf*(1/t_wfs)
, np.std(five_vec_4)*1000*delta_wf*(1/t_wfs), '\n')
print(np.min(five_vec_5)*1000*delta_wf*(1/t_wfs),
np.max(five_vec_5)*1000*delta_wf*(1/t_wfs), np.mean(five_vec_5)*1000*delta_wf*(1/t_wfs)
, np.std(five_vec_5)*1000*delta_wf*(1/t_wfs), '\n')
#res = np.zeros([Ntimes, Nwavescale, Nflux, 1])
result_wf_test = []
for wavescale in range(3, 15, 2):
    print('Harris modes with batch OBWFS and noise, wavescale = %f' % wavescale)
    niter = 10
    timer1 = time.time()
    StarMag = 5.0
    for tscale in np.logspace(TimeMinus, TimePlus, Ntimes):
Starfactor = 10**(-StarMag/2.5)
print(tscale)
tmp0 = req_closedloop_calc_batch(G_coron_harris, G_OBWFS, E0_coron, E0_OBWFS, dark_current+CIC/tscale,
dark_current+CIC/tscale, tscale, flux*Starfactor, 0.1*wavescale**2*Qharris,
niter, luvoir.dh_mask, norm)
tmp1 = tmp0['averaged_hist']
n_tmp1 = len(tmp1)
result_wf_test.append(tmp1[n_tmp1-1])
result_wf_test[0:20]
texp = np.logspace(TimeMinus, TimePlus, Ntimes)
plt.figure(figsize =(15,10))
plt.plot(texp,result_wf_test[0:20]-contrast_floor, label=r'$\Delta_{wf}=0.3$')
plt.plot(texp,result_wf_test[20:40]-contrast_floor, label=r'$\Delta_{wf}=0.5$')
plt.plot(texp,result_wf_test[40:60]-contrast_floor, label=r'$\Delta_{wf}=0.7$')
plt.plot(texp,result_wf_test[60:80]-contrast_floor, label=r'$\Delta_{wf}=1.0$')
plt.plot(texp,result_wf_test[80:100]-contrast_floor, label=r'$\Delta_{wf}=1.3$')
plt.plot(texp,result_wf_test[100:120]-contrast_floor, label=r'$\Delta_{wf}=1.5$')
plt.xlabel("$t_{WFS}$ in secs")
plt.ylabel(r"$\Delta$ contrast")
plt.yscale('log')
plt.xscale('log')
plt.legend(loc = 'lower right')
plt.show()
luvoir2.sm.surface?
# wf_active_pupil = wf_aper
# wf_active_pupil = harris_sm(wf_active_pupil)
# wf_harris_sm = harris_sm(wf_aper)
# hcipy.imshow_field(wf_active_pupil.phase)
# hcipy.imshow_field(wf_harris_sm.phase)
# hcipy.imshow_field(wf_aper.phase)
# # All E-field propagations
# wf_dm1_coro = hcipy.Wavefront(wf_active_pupil.electric_field * np.exp(4 * 1j * np.pi/wvln * self.DM1), self.wavelength)
# wf_dm2_coro_before = fresnel(wf_dm1_coro)
# wf_dm2_coro_after = hcipy.Wavefront(wf_dm2_coro_before.electric_field * np.exp(4 * 1j * np.pi / self.wavelength * self.DM2) * self.DM2_circle, self.wavelength)
# wf_back_at_dm1 = self.fresnel_back(wf_dm2_coro_after)
# wf_apod_stop = hcipy.Wavefront(wf_back_at_dm1.electric_field * self.apod_stop, self.wavelength)
# wf_before_lyot = self.coro(wf_apod_stop)
# wf_lyot = self.lyot_stop(wf_before_lyot)
# wf_lyot.wavelength = self.wavelength
# wf_im_coro = self.prop(wf_lyot)
# wf_im_ref = self.prop(wf_back_at_dm1)
###Output
_____no_output_____ |
Orbit Models/stable version/eval_decompose_pred.ipynb | ###Markdown
Evaluating decomposed predictions by Orbit (**O**bject-**OR**iented **B**ayes**I**an **T**ime Series)
- [Orbit: A Python Package for Bayesian Forecasting](https://github.com/uber/orbit/tree/master)
- [Orbit’s Documentation](https://orbit-ml.readthedocs.io/en/stable/)
- [Quick Start](https://orbit-ml.readthedocs.io/en/stable/tutorials/quick_start.html)
- [Orbit: Probabilistic Forecast with Exponential Smoothing](https://arxiv.org/abs/2004.08492) Paper

Implemented Models
- ETS (which stands for Error, Trend, and Seasonality) Model
- Methods of Estimations
  - Maximum a Posteriori (MAP)
  - Full Bayesian Estimation
  - Aggregated Posteriors
- Damped Local Trend (DLT)
  - Global Trend Configurations:
    - Linear Global Trend
    - Log-Linear Global Trend
    - Flat Global Trend
    - Logistic Global Trend
  - Damped Local Trend Full Bayesian Estimation (DLTFull)
- Local Global Trend (LGT)
  - Local Global Trend Maximum a Posteriori (LGTMAP)
  - Local Global Trend for full Bayesian prediction (LGTFull)
  - Local Global Trend for aggregated posterior prediction (LGTAggregated)
- Using Pyro for Estimation
  - MAP Fit and Predict
  - VI Fit and Predict
- Kernel-based Time-varying Regression (KTR)
  - Kernel-based Time-varying Regression Lite (KTRLite)
###Code
!pip install awswrangler
!pip install orbit-ml --no-input
import awswrangler as wr
import boto3
from sagemaker import get_execution_role
import pandas as pd
import numpy as np
import orbit
from orbit import *
from orbit.models.dlt import ETSFull, ETSMAP, ETSAggregated, DLTMAP, DLTFull, DLTAggregated
from orbit.models.lgt import LGTMAP, LGTAggregated, LGTFull
from orbit.models.ktrlite import KTRLiteMAP
from orbit.estimators.pyro_estimator import PyroEstimatorVI, PyroEstimatorMAP
import warnings
warnings.simplefilter('ignore')
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Uploading data
- uploading data for **models**
###Code
role = get_execution_role()
bucket='...'
data_key = '...csv'
data_location = 's3://{}/{}'.format(bucket, data_key)
df = pd.DataFrame(pd.read_csv(data_location))
df = df.rename({'Unnamed: 0': 'Date'}, axis = 1)
df.index = df['Date']
df.shape
df
curve_df = df.drop(['curve'], axis = 0)
###Output
_____no_output_____
###Markdown
Orbit Models
###Code
# ETS (which stands for Error, Trend, and Seasonality)
# Methods of Estimations
# Maximum a Posteriori (MAP)
# The advantage of MAP estimation is a faster computational speed.
def ETSMAP_model(date_col, response_col, train_df, test_df):
ets = ETSMAP(
response_col=response_col,
date_col=date_col,
seasonality=52,
seed=8888,
)
ets.fit(df=train_df)
predicted_df_MAP = ets.predict(df=test_df)
return predicted_df_MAP['prediction'][:11]
# Full Bayesian Estimation
def ETSFull_model(date_col, response_col, train_df, test_df):
ets = ETSFull(
response_col=response_col,
date_col=date_col,
seasonality=52,
seed=8888,
num_warmup=400,
num_sample=400,
)
ets.fit(df=train_df)
predicted_df_ETSFull = ets.predict(df=test_df)
return predicted_df_ETSFull['prediction'][:11]
# Aggregated Posteriors
def ETSAggregated_model(date_col, response_col, train_df, test_df):
ets = ETSAggregated(
response_col=response_col,
date_col=date_col,
seasonality=52,
seed=8888,
)
ets.fit(df=train_df)
predicted_df_ETSAggregated = ets.predict(df=test_df)
return predicted_df_ETSAggregated['prediction'][:11]
# Damped Local Trend (DLT)
# Global Trend Configurations
# Linear Global Trend
# linear global trend
def DLTMAP_lin(date_col, response_col, train_df, test_df):
dlt = DLTMAP(
response_col=response_col,
date_col=date_col,
seasonality=52,
seed=8888,
)
dlt.fit(train_df)
predicted_df_DLTMAP_lin = dlt.predict(test_df)
return predicted_df_DLTMAP_lin['prediction'][:11]
# log-linear global trend
def DLTMAP_log_lin(date_col, response_col, train_df, test_df):
dlt = DLTMAP(
response_col=response_col,
date_col=date_col,
seasonality=52,
seed=8888,
global_trend_option='loglinear'
)
dlt.fit(train_df)
predicted_df_DLTMAP_log_lin = dlt.predict(test_df)
return predicted_df_DLTMAP_log_lin['prediction'][:11]
# flat global trend
def DLTMAP_flat(date_col, response_col, train_df, test_df):
dlt = DLTMAP(
response_col=response_col,
date_col=date_col,
seasonality=52,
seed=8888,
global_trend_option='flat'
)
dlt.fit(train_df)
predicted_df_DLTMAP_flat = dlt.predict(test_df)
return predicted_df_DLTMAP_flat['prediction'][:11]
# logistic global trend
def DLTMAP_logistic(date_col, response_col, train_df, test_df):
dlt = DLTMAP(
response_col=response_col,
date_col=date_col,
seasonality=52,
seed=8888,
global_trend_option='logistic'
)
dlt.fit(train_df)
predicted_df_DLTMAP_logistic = dlt.predict(test_df)
return predicted_df_DLTMAP_logistic['prediction'][:11]
# Damped Local Trend Full Bayesian Estimation (DLTFull)
def DLTFull_model(date_col, response_col, train_df, test_df):
dlt = DLTFull(
response_col=response_col,
date_col=date_col,
num_warmup=400,
num_sample=400,
seasonality=52,
seed=8888
)
dlt.fit(df=train_df)
predicted_df_DLTFull = dlt.predict(df=test_df)
return predicted_df_DLTFull['prediction'][:11]
# Damped Local Trend Aggregated Posteriors (DLTAggregated)
def DLTAggregated_model(date_col, response_col, train_df, test_df):
ets = DLTAggregated(
response_col=response_col,
date_col=date_col,
seasonality=52,
seed=8888,
)
ets.fit(df=train_df)
predicted_df_DLTAggregated = ets.predict(df=test_df)
return predicted_df_DLTAggregated['prediction'][:11]
# Local Global Trend (LGT) Model
# Local Global Trend Maximum a Posteriori (LGTMAP)
def LGTMAP_model(date_col, response_col, train_df, test_df):
lgt = LGTMAP(
response_col=response_col,
date_col=date_col,
seasonality=52,
seed=8888,
)
lgt.fit(df=train_df)
predicted_df_LGTMAP = lgt.predict(df=test_df)
return predicted_df_LGTMAP['prediction'][:11]
# LGTFull
def LGTFull_model(date_col, response_col, train_df, test_df):
lgt = LGTFull(
response_col=response_col,
date_col=date_col,
seasonality=52,
seed=8888,
)
lgt.fit(df=train_df)
predicted_df_LGTFull = lgt.predict(df=test_df)
return predicted_df_LGTFull['prediction'][:11]
# LGTAggregated
def LGTAggregated_model(date_col, response_col, train_df, test_df):
lgt = LGTAggregated(
response_col=response_col,
date_col=date_col,
seasonality=52,
seed=8888,
)
lgt.fit(df=train_df)
predicted_df_LGTAggregated = lgt.predict(df=test_df)
return predicted_df_LGTAggregated['prediction'][:11]
# Using Pyro for Estimation
# MAP Fit and Predict
def LGTMAP_PyroEstimatorMAP(date_col, response_col, train_df, test_df):
lgt_map = LGTMAP(
response_col=response_col,
date_col=date_col,
seasonality=52,
seed=8888,
estimator_type=PyroEstimatorMAP,
)
lgt_map.fit(df=train_df)
predicted_df_LGTMAP_pyro = lgt_map.predict(df=test_df)
return predicted_df_LGTMAP_pyro['prediction'][:11]
# VI Fit and Predict
def LGTFull_pyro(date_col, response_col, train_df, test_df):
lgt_vi = LGTFull(
response_col=response_col,
date_col=date_col,
seasonality=52,
seed=8888,
num_steps=101,
num_sample=100,
learning_rate=0.1,
n_bootstrap_draws=-1,
estimator_type=PyroEstimatorVI,
)
lgt_vi.fit(df=train_df)
predicted_df_LGTFull_pyro = lgt_vi.predict(df=test_df)
return predicted_df_LGTFull_pyro['prediction'][:11]
# Kernel-based Time-varying Regression (KTR)
# KTRLite
def ktrlite_MAP(date_col, response_col, train_df, test_df):
ktrlite = KTRLiteMAP(
response_col=response_col,
#response_col=np.log(df[response_col]),
date_col=date_col,
level_knot_scale=.1,
span_level=.05,
)
ktrlite.fit(train_df)
predicted_df_ktrlite_MAP = ktrlite.predict(df=test_df, decompose=True)
return predicted_df_ktrlite_MAP['prediction'][:11]
###Output
_____no_output_____
###Markdown
Root-Mean-Square Deviation (RMSD) or Root-Mean-Square Error (RMSE)
###Code
def rmse(actual, pred):
actual, pred = np.array(actual), np.array(pred)
return np.sqrt(np.square(np.subtract(actual,pred)).mean())
def evaluating_models(index, column):
    '''
    Parameters:
        index: row index to write into models_df
        column: column name (item) in curve_df to evaluate
    Returns:
        models_df: dataframe with one row per item holding the RMSE of each Orbit model
    '''
tmp_df['Date'] = pd.to_datetime(curve_df['Date'].astype(str))
tmp_df['Penetration'] = curve_df[column].astype(float)
date_col = 'Date'
response_col = 'Penetration'
# Decompose Prediction
train_df = tmp_df[tmp_df['Date'] < '2022-01-01']
test_df = tmp_df[tmp_df['Date'] <= '2025-01-01']
models_df.at[index ,'Item Name'] = column
# Making predictions with each model
try:
models_df.at[index , 'ETSMAP'] = rmse(
tmp_df['Penetration'][:11],
(ETSMAP_model(date_col, response_col, train_df, test_df))).astype(float)
except:
models_df.at[index , 'ETSMAP'] = 100
try:
models_df.at[index , 'ETSFull'] = rmse(
tmp_df['Penetration'][:11],
(ETSFull_model(date_col, response_col, train_df, test_df))).astype(float)
except:
models_df.at[index , 'ETSFull'] = 100
try:
models_df.at[index , 'ETSAggregated'] = rmse(
tmp_df['Penetration'][:11],
(ETSAggregated_model(date_col, response_col, train_df, test_df))).astype(float)
except:
models_df.at[index , 'ETSAggregated'] = 100
try:
models_df.at[index , 'DLTMAP_lin'] = rmse(
tmp_df['Penetration'][:11],
(DLTMAP_lin(date_col, response_col, train_df, test_df))).astype(float)
except:
models_df.at[index , 'DLTMAP_lin'] = 100
try:
models_df.at[index , 'DLTMAP_log_lin'] = rmse(
tmp_df['Penetration'][:11],
(DLTMAP_log_lin(date_col, response_col, train_df, test_df))).astype(float)
except:
models_df.at[index , 'DLTMAP_log_lin'] = 100
try:
models_df.at[index , 'DLTMAP_flat'] = rmse(
tmp_df['Penetration'][:11],
(DLTMAP_flat(date_col, response_col, train_df, test_df))).astype(float)
except:
models_df.at[index , 'DLTMAP_flat'] = 100
try:
models_df.at[index , 'DLTMAP_logistic'] = rmse(
tmp_df['Penetration'][:11],
(DLTMAP_logistic(date_col, response_col, train_df, test_df))).astype(float)
except:
models_df.at[index , 'DLTMAP_logistic'] = 100
try:
models_df.at[index , 'DLTFull'] = rmse(
tmp_df['Penetration'][:11],
(DLTFull_model(date_col, response_col, train_df, test_df))).astype(float)
except:
models_df.at[index , 'DLTFull'] = 100
try:
models_df.at[index , 'DLTAggregated'] = rmse(
tmp_df['Penetration'][:11],
(DLTAggregated_model(date_col, response_col, train_df, test_df))).astype(float)
except:
models_df.at[index , 'DLTAggregated'] = 100
try:
models_df.at[index , 'LGTMAP'] = rmse(
tmp_df['Penetration'][:11],
(LGTMAP_model(date_col, response_col, train_df, test_df))).astype(float)
except:
models_df.at[index , 'LGTMAP'] = 100
try:
models_df.at[index , 'LGTFull'] = rmse(
tmp_df['Penetration'][:11],
(LGTFull_model(date_col, response_col, train_df, test_df))).astype(float)
except:
models_df.at[index , 'LGTFull'] = 100
try:
models_df.at[index , 'LGTAggregated'] = rmse(
tmp_df['Penetration'][:11],
(LGTAggregated_model(date_col, response_col, train_df, test_df))).astype(float)
except:
models_df.at[index , 'LGTAggregated'] = 100
# Using Pyro for Estimation
try:
models_df.at[index , 'LGTMAP_PyroEstimatorMAP'] = rmse(
tmp_df['Penetration'][:11], (LGTMAP_PyroEstimatorMAP(date_col,
response_col, train_df, test_df))).astype(float)
except:
models_df.at[index , 'LGTMAP_PyroEstimatorMAP'] = 100
try:
models_df.at[index , 'LGTFull_pyro4'] = rmse(
tmp_df['Penetration'][:11],
(LGTFull_pyro(date_col, response_col, train_df, test_df))).astype(float)
except:
models_df.at[index , 'LGTFull_pyro4'] = 100
# Kernel-based Time-varying Regression (KTR)
try:
models_df.at[index , 'KTR_Lite_MAP'] = rmse(
tmp_df['Penetration'][:11],
(ktrlite_MAP(date_col, response_col, train_df, test_df))).astype(float)
except:
models_df.at[index , 'KTR_Lite_MAP'] = 100
models_df.at[index, 'Curve Type'] = df[column].iloc[-1]
return models_df
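# A possible simplification (hypothetical sketch, not used below): every model above follows the
# same fit/predict/RMSE pattern, so the per-model try/except blocks could be driven from a dict
# of callables instead of being written out by hand.
def score_models_compact(index, column, models_df, tmp_df, train_df, test_df,
                         date_col='Date', response_col='Penetration'):
    model_funcs = {
        'ETSMAP': ETSMAP_model, 'ETSFull': ETSFull_model, 'ETSAggregated': ETSAggregated_model,
        'DLTMAP_lin': DLTMAP_lin, 'DLTMAP_log_lin': DLTMAP_log_lin, 'DLTMAP_flat': DLTMAP_flat,
        'DLTMAP_logistic': DLTMAP_logistic, 'DLTFull': DLTFull_model,
        'DLTAggregated': DLTAggregated_model, 'LGTMAP': LGTMAP_model, 'LGTFull': LGTFull_model,
        'LGTAggregated': LGTAggregated_model, 'LGTMAP_PyroEstimatorMAP': LGTMAP_PyroEstimatorMAP,
        'LGTFull_pyro4': LGTFull_pyro, 'KTR_Lite_MAP': ktrlite_MAP,
    }
    for name, func in model_funcs.items():
        try:
            pred = func(date_col, response_col, train_df, test_df)
            models_df.at[index, name] = rmse(tmp_df['Penetration'][:11], pred)
        except Exception:
            models_df.at[index, name] = 100  # same failure sentinel as above
    return models_df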
###Output
_____no_output_____
###Markdown
Calculating minimal RMSE value for each item
###Code
def min_value(df):
'''
Parameters:
df: input dataframe with multiple columns and values in a row
Returns:
models_df: existing dataframe with added the new 'Model' column filled with
the name of the best-fitted model for each item
'''
    df.iloc[:, 1:-1] = df.iloc[:, 1:-1].apply(pd.to_numeric)
df['Model'] = df.iloc[:, 1:-1].idxmin(axis=1)
return models_df
###Output
_____no_output_____
###Markdown
Evaluating Orbit models for each item
###Code
import time
tmp_df = pd.DataFrame()
models_df = pd.DataFrame()
start = time.time()
for index, column in enumerate(curve_df.columns[1:2]):
evaluating_models(index, column)
end = time.time()
print(end - start)
models_df
min_value(models_df)
###Output
_____no_output_____ |
nbs/1.2_exp.gen.code.ipynb | ###Markdown
Data exploration (adapted from CodeSearchNet challenge)
> @danaderp An exploratory data analysis for the CODE-GEN project
###Code
import json
import pandas as pd
from pathlib import Path
pd.set_option('max_colwidth',300)
from pprint import pprint
import re
#hide <--- Automate the installation of the requirements
#! pip install seaborn
#! pip install sklearn
#!pip install pyprg
#!pip install nltk
#!pip install tokenizers
#!pip install tensorflow-datasets
#importing ds4se
import ds4se as ds
from ds4se.mgmnt.prep.conv import *
def default_params():
return {
'system':'codesearchnet',
'saving_path': 'codesearch/',
'language': 'english',
'wiki_size': 60000,
#'bpe_filename':'test_data/sentencepiece/py_java_bpe_training.txt',
'model_prefix':'../../data/test_data/sentencepiece/py_bpe_32k_c' #c termination models are for codesearch
}
params = default_params()
prep = ConventionalPreprocessing(params, bpe = True)
#opening origal dataset
python_files = sorted(Path('codesearch/python/').glob('**/*.gz'))
java_files = sorted(Path('codesearch/java/').glob('**/*.gz'))
#Miscellaneous
columns_long_list = ['repo', 'path', 'url', 'code',
'code_tokens', 'docstring', 'docstring_tokens',
'language', 'partition']
columns_short_list = ['code_tokens', 'docstring_tokens',
'language', 'partition']
def jsonl_list_to_dataframe(file_list, columns=columns_long_list):
"""Load a list of jsonl.gz files into a pandas DataFrame."""
return pd.concat([pd.read_json(f,
orient='records',
compression='gzip',
lines=True)[columns]
for f in file_list], sort=False)
python_searchnet_df = jsonl_list_to_dataframe( python_files )
java_searchnet_df = jsonl_list_to_dataframe( java_files )
java_searchnet_df.head(2)
arr = java_searchnet_df['code'].values
arr[0]
prep.bpe_pieces_pipeline([arr[0]])
#df_java_frame = java_searchnet_df.copy()
df_java_frame = python_searchnet_df.copy()  # small cheat, TODO: change it (this is the Python split, not Java)
df_java_frame['bpe32k'] = prep.bpe_pieces_pipeline(python_searchnet_df['code'].values)
df_java_frame.head(2)
###Output
_____no_output_____
###Markdown
Exploring the full DataSet
###Code
## You can LOAD a presaved dataset here!
df_java_frame = prep.LoadCorpus(timestamp = 1597073966.81902, language="java", sep="~")
df_java_frame.shape
df_java_frame.partition.value_counts()
df_java_frame.groupby(['partition', 'language'])['code_tokens'].count()
df_java_frame.groupby(['partition', 'language'])['bpe32k'].count()
df_java_frame['code_len'] = df_java_frame.code_tokens.apply( lambda x: len(x) )
df_java_frame['bpe32_len'] = df_java_frame.bpe32k.apply( lambda x: len(x) )
#java_df['query_len'] = java_df.docstring_tokens.apply(lambda x: len(x))
###Output
_____no_output_____
###Markdown
Token Length Percentiles
###Code
code_len_summary = df_java_frame.groupby('language')['code_len'].quantile([.5, .7, .8, .9, .95])
code_len_summary
#display(pd.DataFrame(code_len_summary))
code_len_summary_bpe = df_java_frame.groupby('language')['bpe32_len'].quantile([.5, .7, .8, .9, .95])
code_len_summary_bpe
df_java_frame.head(2)
df_java_frame['bpe32_len'].mean()
column=['bpe32_len','code_len']
df_java_frame.groupby('partition')[column].describe()
# Central tendency robust measure
df_java_frame.groupby('partition')[column].median()
#Variability robust measure
#Return the mean absolute deviation of the values for the requested axis.
df_java_frame.groupby('partition')[column].mad()
from scipy import stats
#The median absolute deviation (MAD, [1]) computes the median over the absolute deviations from the median.
#It is a measure of dispersion similar to the standard deviation but more robust to outliers [2].
x = df_java_frame[df_java_frame['partition'] == 'train']['bpe32_len'].values
stats.median_abs_deviation(x)
%matplotlib inline
df_java_frame.hist(by='partition',column=column, bins=50,figsize=[10,5])
df_java_frame.boxplot(by='partition',column=['bpe32_len','code_len'],color='k',figsize=[8,10])
#saving
prep.SaveCorpus(df_java_frame, language='py', sep='~', mode='a')
###Output
2020-08-10 15:56:35,475 : INFO : Saving in...codesearch/[codesearchnet-py-1597074936.765795].csv
###Markdown
Data transformation
###Code
pprint(java_df.columns)
src_code_columns = ['code', 'code_tokens', 'code_len','partition']
java_src_code_df = java_df[src_code_columns]
java_src_code_df.columns
java_src_code_df.shape
###Output
_____no_output_____
###Markdown
Visualizing examples
###Code
java_src_code_df[:10]['code']
data_type_new_column = ['src' for x in range(java_src_code_df.shape[0])]
len(data_type_new_column)
java_src_code_df.loc[:,'data_type'] = data_type_new_column
java_src_code_df.head()
###Output
_____no_output_____
###Markdown
Data cleaning

Remove functions with syntax errors
###Code
!pip install radon
java_code_df.shape
type(java_code_df['code'][9071])
java_code_df['code'][9071]
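# Note: the heading above promises to remove functions with syntax errors, but this cell only
# installs radon and inspects one snippet. A minimal sketch of one plausible filter (assumption:
# keep only rows that lizard can parse into at least one function, matching the check used in
# add_method_mccabe_metrics_to_code_df further below):
import lizard  # also imported in a later cell
def drop_unparsable_snippets(df, code_column='code'):
    parsable = df[code_column].apply(
        lambda src: len(lizard.analyze_file.analyze_source_code('tmp.java', src).function_list) > 0)
    return df[parsable]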
###Output
_____no_output_____
###Markdown
Exploratory analysis
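The cells below score the corpus with token entropies computed from the trained BPE model: for each document, presumably the Shannon entropy $H=-\sum_i p_i \log p_i$ of its subword-token distribution (as suggested by the ds4se.exp.info helpers used here), then the same quantity aggregated per corpus and for the whole system.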
###Code
# export
# Imports
import dit
import math
import os
import logging
import matplotlib.pyplot as plt
import pandas as pd
import sentencepiece as sp
from collections import Counter
from pathlib import Path
from scipy.stats import sem, t
from statistics import mean, median, stdev
from tqdm.notebook import tqdm
# ds4se
from ds4se.mgmnt.prep.bpe import *
from ds4se.exp.info import *
from ds4se.desc.stats import *
java_path = Path('test_data/java/')
n_sample = int(len(code_df)*0.01)
sample_code_df = code_df.sample(n=n_sample)
sample_code_df.shape
sp_model_from_df(sample_code_df, output=java_path, model_name='_sp_bpe_modal', cols=['code'])
sp_processor = sp.SentencePieceProcessor()
sp_processor.Load(f"{java_path/'_sp_bpe_modal'}.model")
java_src_code_df.shape
n_sample_4_sp = int(java_src_code_df.shape[0]*0.01)
print(n_sample_4_sp)
java_code_df = java_src_code_df.sample(n=n_sample_4_sp)
java_code_df.shape
code_df.shape
# Use the model to compute each file's entropy
java_doc_entropies = get_doc_entropies_from_df(code_df, 'code', java_path/'_sp_bpe_modal', ['src'])
len(java_doc_entropies)
# Use the model to compute each file's entropy
java_corpus_entropies = get_corpus_entropies_from_df(code_df, 'code', java_path/'_sp_bpe_modal', ['src'])
java_corpus_entropies
# Use the model to compute each file's entropy
java_system_entropy = get_system_entropy_from_df(code_df, 'code', java_path/'_sp_bpe_modal')
java_system_entropy
flatten = lambda l: [item for sublist in l for item in sublist]
report_stats(flatten(java_doc_entropies))
java_doc_entropies
# Create a histogram of the entropy distribution
plt.hist(java_doc_entropies,bins = 20, color="blue", alpha=0.5, edgecolor="black", linewidth=1.0)
plt.title('Entropy histogram')
plt.ylabel("Num records")
plt.xlabel("Entropy score")
plt.show()
fig1, ax1 = plt.subplots()
ax1.set_title('Entropy box plot')
ax1.boxplot(java_doc_entropies, vert=False)
###Output
_____no_output_____
###Markdown
Descriptive metrics
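As a reminder, the cyclomatic complexity reported by lizard is McCabe's count of linearly independent paths through a method, $V(G)=E-N+2$ for a connected control-flow graph with $E$ edges and $N$ nodes, which in practice equals the number of decision points plus one.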
###Code
#Libraries used in ds4se.desc.metrics.java nb
!pip install lizard
!pip install tree_sitter
!pip install bs4
from ds4se.desc.metrics import *
from ds4se.desc.metrics.java import *
import lizard
import chardet
java_src_code_df.head(1)
test_src_code = java_src_code_df['code'].values[0]
print(test_src_code)
###Output
protected final void fastPathOrderedEmit(U value, boolean delayError, Disposable disposable) {
final Observer<? super V> observer = downstream;
final SimplePlainQueue<U> q = queue;
if (wip.get() == 0 && wip.compareAndSet(0, 1)) {
if (q.isEmpty()) {
accept(observer, value);
if (leave(-1) == 0) {
return;
}
} else {
q.offer(value);
}
} else {
q.offer(value);
if (!enter()) {
return;
}
}
QueueDrainHelper.drainLoop(q, observer, delayError, disposable, this);
}
###Markdown
Sample of available metrics (for method level)
###Code
metrics = lizard.analyze_file.analyze_source_code('test.java', test_src_code)
func = metrics.function_list[0]
print('cyclomatic_complexity: {}'.format(func.cyclomatic_complexity))
print('nloc (length): {}'.format(func.length))
print('nloc: {}'.format(func.nloc))
print('parameter_count: {}'.format(func.parameter_count))
print('name: {}'.format(func.name))
print('token_count {}'.format(func.token_count))
print('long_name: {}'.format(func.long_name))
def add_method_mccabe_metrics_to_code_df(src_code_df, code_column):
    """Compute method-level McCabe metrics and return a new dataframe with them added as columns.

    Rows whose code cannot be parsed by lizard into at least one function are skipped.
    """
    result_df = pd.DataFrame([])
    for index, row in src_code_df.iterrows():
        metrics = lizard.analyze_file.analyze_source_code('java_file.java', row[code_column])
        metrics_obj = metrics.function_list
        if len(metrics_obj) == 0:
            continue
        row['cyclomatic_complexity'] = metrics_obj[0].cyclomatic_complexity
        row['nloc'] = metrics_obj[0].nloc
        row['parameter_count'] = metrics_obj[0].parameter_count
        row['method_name'] = metrics_obj[0].name
        row['token_count'] = metrics_obj[0].token_count
        result_df = result_df.append(row)
    return result_df
code_df = add_method_mccabe_metrics_to_code_df(java_src_code_df, 'code')
code_df.shape
code_df.head()
code_df.to_csv('test_data/clean_java.csv')
code_df.shape
java_code_df.shape
code_df.head()
code_df.describe()
display_numeric_col_hist(code_df['cyclomatic_complexity'], 'Cyclomatic complexity')
fig1, ax1 = plt.subplots()
ax1.set_title('Cyclomatic complexity box plot')
ax1.boxplot(code_df['cyclomatic_complexity'], vert=False)
display_numeric_col_hist(code_df['nloc'], 'Nloc')
fig1, ax1 = plt.subplots()
ax1.set_title('Nloc box plot')
ax1.boxplot(code_df['nloc'], vert=False)
display_numeric_col_hist(code_df['parameter_count'], 'Parameter count')
fig1, ax1 = plt.subplots()
ax1.set_title('Param. count box plot')
ax1.boxplot(code_df['parameter_count'], vert=False)
display_numeric_col_hist(code_df['token_count'], 'Token count')
fig1, ax1 = plt.subplots()
ax1.set_title('Token count box plot')
ax1.boxplot(code_df['token_count'], vert=False)
fig1, ax1 = plt.subplots()
ax1.set_title('Code len box plot')
ax1.boxplot(code_df['code_len'], vert=False)
code_df.shape
code_df[['cyclomatic_complexity', 'nloc', 'token_count', 'parameter_count']].corr()
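###Markdown
`.corr()` uses the Pearson coefficient by default, $r_{XY} = \frac{\operatorname{cov}(X, Y)}{\sigma_X \sigma_Y}$, which scores the linear association of each pair of metrics on a $[-1, 1]$ scale; the custom heatmap below visualizes this same matrix.
###Code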
import seaborn as sns
import numpy as np
def heatmap(x, y, **kwargs):
if 'color' in kwargs:
color = kwargs['color']
else:
color = [1]*len(x)
if 'palette' in kwargs:
palette = kwargs['palette']
n_colors = len(palette)
else:
        n_colors = 256 # Use 256 colors for the default color palette
palette = sns.color_palette("Blues", n_colors)
if 'color_range' in kwargs:
color_min, color_max = kwargs['color_range']
else:
color_min, color_max = min(color), max(color) # Range of values that will be mapped to the palette, i.e. min and max possible correlation
def value_to_color(val):
if color_min == color_max:
return palette[-1]
else:
val_position = float((val - color_min)) / (color_max - color_min) # position of value in the input range, relative to the length of the input range
val_position = min(max(val_position, 0), 1) # bound the position betwen 0 and 1
ind = int(val_position * (n_colors - 1)) # target index in the color palette
return palette[ind]
if 'size' in kwargs:
size = kwargs['size']
else:
size = [1]*len(x)
if 'size_range' in kwargs:
size_min, size_max = kwargs['size_range'][0], kwargs['size_range'][1]
else:
size_min, size_max = min(size), max(size)
size_scale = kwargs.get('size_scale', 500)
def value_to_size(val):
if size_min == size_max:
return 1 * size_scale
else:
val_position = (val - size_min) * 0.99 / (size_max - size_min) + 0.01 # position of value in the input range, relative to the length of the input range
val_position = min(max(val_position, 0), 1) # bound the position betwen 0 and 1
return val_position * size_scale
if 'x_order' in kwargs:
x_names = [t for t in kwargs['x_order']]
else:
x_names = [t for t in sorted(set([v for v in x]))]
x_to_num = {p[1]:p[0] for p in enumerate(x_names)}
if 'y_order' in kwargs:
y_names = [t for t in kwargs['y_order']]
else:
y_names = [t for t in sorted(set([v for v in y]))]
y_to_num = {p[1]:p[0] for p in enumerate(y_names)}
    plot_grid = plt.GridSpec(1, 15, hspace=0.2, wspace=0.1) # Set up a 1x15 grid
ax = plt.subplot(plot_grid[:,:-1]) # Use the left 14/15ths of the grid for the main plot
marker = kwargs.get('marker', 's')
kwargs_pass_on = {k:v for k,v in kwargs.items() if k not in [
'color', 'palette', 'color_range', 'size', 'size_range', 'size_scale', 'marker', 'x_order', 'y_order', 'xlabel', 'ylabel'
]}
ax.scatter(
x=[x_to_num[v] for v in x],
y=[y_to_num[v] for v in y],
marker=marker,
s=[value_to_size(v) for v in size],
c=[value_to_color(v) for v in color],
**kwargs_pass_on
)
ax.set_xticks([v for k,v in x_to_num.items()])
ax.set_xticklabels([k for k in x_to_num], rotation=45, horizontalalignment='right')
ax.set_yticks([v for k,v in y_to_num.items()])
ax.set_yticklabels([k for k in y_to_num])
ax.grid(False, 'major')
ax.grid(True, 'minor')
ax.set_xticks([t + 0.5 for t in ax.get_xticks()], minor=True)
ax.set_yticks([t + 0.5 for t in ax.get_yticks()], minor=True)
ax.set_xlim([-0.5, max([v for v in x_to_num.values()]) + 0.5])
ax.set_ylim([-0.5, max([v for v in y_to_num.values()]) + 0.5])
ax.set_facecolor('#F1F1F1')
ax.set_xlabel(kwargs.get('xlabel', ''))
ax.set_ylabel(kwargs.get('ylabel', ''))
# Add color legend on the right side of the plot
if color_min < color_max:
ax = plt.subplot(plot_grid[:,-1]) # Use the rightmost column of the plot
col_x = [0]*len(palette) # Fixed x coordinate for the bars
bar_y=np.linspace(color_min, color_max, n_colors) # y coordinates for each of the n_colors bars
bar_height = bar_y[1] - bar_y[0]
ax.barh(
y=bar_y,
width=[5]*len(palette), # Make bars 5 units wide
left=col_x, # Make bars start at 0
height=bar_height,
color=palette,
linewidth=0
)
        ax.set_xlim(1, 2) # Bars are going from 0 to 5, so let's crop the plot somewhere in the middle
ax.grid(False) # Hide grid
ax.set_facecolor('white') # Make background white
ax.set_xticks([]) # Remove horizontal ticks
ax.set_yticks(np.linspace(min(bar_y), max(bar_y), 3)) # Show vertical ticks for min, middle and max
ax.yaxis.tick_right() # Show vertical ticks on the right
columns = ['cyclomatic_complexity', 'nloc', 'token_count', 'parameter_count']
corr = code_df[columns].corr()
corr = pd.melt(corr.reset_index(), id_vars='index') # Unpivot the dataframe, so we can get pair of arrays for x and y
corr.columns = ['x', 'y', 'value']
heatmap(
x=corr['x'],
y=corr['y'],
size=corr['value'].abs()
)
def corrplot(data, size_scale=500, marker='s'):
corr = pd.melt(data.reset_index(), id_vars='index').replace(np.nan, 0)
corr.columns = ['x', 'y', 'value']
heatmap(
corr['x'], corr['y'],
color=corr['value'], color_range=[-1, 1],
palette=sns.diverging_palette(20, 220, n=256),
size=corr['value'].abs(), size_range=[0,1],
marker=marker,
x_order=data.columns,
y_order=data.columns[::-1],
size_scale=size_scale
)
corrplot(code_df[columns].corr(), size_scale=300);
###Output
_____no_output_____ |
notebooks/6-Using-Pretrained-Models.ipynb | ###Markdown
OverviewIn this notebook, we'll show how to use a pretrained model for target concept extraction instead of defining rules. We'll then add our additional components to show how medSpaCy can be used to combine statistical NLP with other rule-based components.As an example, we'll download the model below which contains a model pretrained for clinical data. This model was trained with data from the i2b2 2012 shared task: [**"Evaluating temporal relations in clinical text"**](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3756273/). This model was trained on data for the first subtask in the shared task, referred to in the challenge as **"Clinically relevant events"**, specifically the following **clinical concepts**:- **Problems:** Diagnoses, signs, and symptoms- **Tests:** Lab and vital measurements- **Treatments:** Medications, procedures, and therapiesWe can install this model with `pip` using this GitHub link:```bashpip install https://github.com/abchapman93/spacy_models/raw/master/releases/en_info_3700_i2b2_2012-0.1.0/dist/en_info_3700_i2b2_2012-0.1.0.tar.gz```
###Code
# !pip install https://github.com/abchapman93/spacy_models/raw/master/releases/en_info_3700_i2b2_2012-0.1.0/dist/en_info_3700_i2b2_2012-0.1.0.tar.gz
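# NOTE (assumption): this notebook expects the imports from the previous medspaCy notebook to
# already be in scope -- medspacy itself, `re`, and the rule/visualization helpers used below
# (Preprocessor, PreprocessingRule, ConTextRule, Sectionizer, Postprocessor, PostprocessingRule,
# PostprocessingPattern, postprocessing_functions, visualize_ent, visualize_dep).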
with open("./discharge_summary.txt") as f:
text = f.read()
###Output
_____no_output_____
###Markdown
This model can now be loaded like any other spaCy model. We'll use `medspacy.load()` and pass in this model name as an example. Since the trained NER component will take care of entity extraction, we can disable the `target_matcher` in our pipeline (although you may want to add rule-based matching to reduce false negatives from the model):
###Code
nlp = medspacy.load("en_info_3700_i2b2_2012", disable=["target_matcher"])
nlp.pipe_names
ner = nlp.get_pipe("ner")
ner.labels
doc = nlp(text)
###Output
_____no_output_____
###Markdown
Process our textSimilar to the last notebook, we'll add new rules to some of our components. Let's first look at what our model extracts out of the box:
###Code
visualize_ent(doc)
###Output
_____no_output_____
###Markdown
Preprocessing
###Code
preprocessor = Preprocessor(nlp.tokenizer)
nlp.tokenizer = preprocessor
preprocess_rules = [
PreprocessingRule(
re.compile("\[\*\*[\d]{1,4}-[\d]{1,2}(-[\d]{1,2})?\*\*\]"),
repl="01-01-2010",
desc="Replace MIMIC date brackets with a generic date."
),
PreprocessingRule(
re.compile("\[\*\*[\d]{4}\*\*\]"),
repl="2010",
desc="Replace MIMIC year brackets with a generic year."
),
PreprocessingRule(
re.compile("dx'd"), repl="Diagnosed",
desc="Replace abbreviation"
),
PreprocessingRule(
re.compile("tx'd"), repl="Treated",
desc="Replace abbreviation"
),
PreprocessingRule(
re.compile("\[\*\*[^\]]+\]"),
desc="Remove all other bracketed placeholder text from MIMIC"
)
]
preprocessor.add(preprocess_rules)
###Output
_____no_output_____
###Markdown
Context
###Code
context = nlp.get_pipe("context")
context_rules = [
ConTextRule("diagnosed in <YEAR>", "HISTORICAL",
pattern=[
{"LOWER": "diagnosed"},
{"LOWER": "in"},
{"LOWER": {"REGEX": "^[\d]{4}$"}}
])
]
context.add(context_rules)
###Output
_____no_output_____
###Markdown
Section detection
###Code
sectionizer = Sectionizer(nlp, patterns="default")
nlp.add_pipe(sectionizer)
section_patterns = [
{"section_title": "hospital_course", "pattern": "Brief Hospital Course:"}
]
sectionizer.add(section_patterns)
###Output
_____no_output_____
###Markdown
PostprocessingHere, we'll show another example of how postprocessing can be used. The NER component extracts **"married"** as a **"TREATMENT"** entity. While some might agree with this in a philosophical sense, it doesn't match our clinical definition very well. This shows a challenge of statistical NLP: we have relatively little control over what concepts are extracted by our model. But we can use some postprocessing rules to clean this up.Postprocessing can be used to remove or clean up entities which we know are incorrect. In this example, we'll just remove any entity where the text is **"married"**:
###Code
postprocessor = Postprocessor(debug=False)
nlp.add_pipe(postprocessor)
postprocess_rules = [
PostprocessingRule(
patterns=[
PostprocessingPattern(condition=lambda ent: ent.lower_ == "married"),
],
action=postprocessing_functions.remove_ent,
description="Remove a specific misclassified span of text."
),
]
postprocessor.add(postprocess_rules)
###Output
_____no_output_____
###Markdown
Process our documentNow, let's process the text with our complete pipeline and show the results:
###Code
nlp.pipe_names
doc = nlp(text)
visualize_ent(doc)
short_text = "Colon cancer dx'd in [**2554**], tx'd with hemicolectomy"
short_doc = nlp(short_text)
visualize_ent(short_doc)
visualize_dep(short_doc)
###Output
_____no_output_____ |
notebooks/ames_deploy_dockerhub.ipynb | ###Markdown
Ames Housing Prices - Step 5: Model DeploymentNow that we have trained and selected our optimal model, it's time to deploy it. This notebook demonstrates how to use our Experiment and Pipelines from the previous steps to easily deploy our model as a Cortex Action.
###Code
# Basic setup
%run config.ipynb
# Connect to Cortex 5 and create a Builder instance
cortex = Cortex.client()
builder = cortex.builder()
###Output
_____no_output_____
###Markdown
Load the ExperimentLet's load our experiment from the previous step and find the model we want to deploy.
###Code
exp = cortex.experiment('kaggle/ames-housing-regression')
exp
###Output
_____no_output_____
###Markdown
---The model created in the last run looks to be the best, let's deploy it
###Code
run = exp.get_run('sb04akg')
model = run.get_artifact('model')
model
###Output
_____no_output_____
###Markdown
Model deployment - Step 1: Configure Data Pipeline for InputsOur model was trained with data that has had cleaning and feature engineering steps applied to it. Since we want our users to send us the actual raw data, we need to deploy our pipeline to transform the input data into the form we expect. This requires applying some of the same steps from before, but also requires us to remember some of the data created during model training such as the median values of certain columns and the final list of _dummy_ categorical columns created during feature engineering. Luckily, our pipelines have a memory in the form of _context_ that we can reference here to achieve this.
###Code
train_ds = cortex.dataset('kaggle/ames-housing-train')
# Model our feature pipeline after the 'clean' pipeline
x_pipe = builder.pipeline('x_pipe')
x_pipe.from_pipeline(train_ds.pipeline('clean'))
# Same idea from our training prep, however we need to use the median values we computed before which we stored in our pipeline context
def fill_median_cols_ctx(pipeline, df):
fill_median_cols = ['GarageArea','TotalBsmtSF', 'MasVnrArea', 'BsmtFinSF1', 'LotFrontage', 'BsmtUnfSF', 'GarageYrBlt']
[df[j].fillna(pipeline.get_context('{}_median'.format(j)), inplace=True) for j in fill_median_cols]
# The dummy column conversion we did during training needs to be applied here. Afterwards there will be missing columns because
# our input instance will only contain at most one value per category. We need to fill in the other expected columns. We stored
# the expected set of columns in our pipeline so we can easily do this now.
def fix_columns(pipeline, df):
all_cols = pipeline.get_context('columns')
missing_cols = set(all_cols) - set(df.columns)
for c in missing_cols:
df[c] = 0
# make sure we have all the columns we need
assert(set(all_cols) - set(df.columns) == set())
return df[all_cols]
# The feature engineering pipeline contains the complete list of dummy columns in addition to some steps we need
engineer_pipe = train_ds.pipeline('engineer')
x_pipe.set_context('columns', engineer_pipe.get_context('columns'))
# Reuse steps from our clean, features, and engineer pipelines
fill_zero_cols = x_pipe.get_step('fill_zero_cols')
fill_na_none = x_pipe.get_step('fill_na_none')
get_dummies = engineer_pipe.get_step('get_dummies')
# Build our final input pipeline
x_pipe.reset()
x_pipe.add_step(fill_zero_cols)
x_pipe.add_step(fill_median_cols_ctx)
x_pipe.add_step(fill_na_none)
x_pipe.add_step(get_dummies)
x_pipe.add_step(fix_columns)
###Output
_____no_output_____
###Markdown
Model deployment - Step 2: Configure Data Pipeline for OutputIf you remember, we scaled our target variable using the numpy _log1p_ function. We need to invert this transform using the _exp_ function so our predicted value is back on the original price scale.
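Side note on the inverse transform: since $\mathrm{log1p}(y) = \ln(1 + y)$, its exact inverse is `np.expm1`, and `np.exp` overshoots by exactly 1.0 (negligible at house-price scale), which is why the original `np.exp` is kept in the pipeline below. A quick check with a made-up price:
###Code
import numpy as np  # already imported earlier in the notebook

y = 180000.0                # hypothetical sale price, not taken from the dataset
scaled = np.log1p(y)
print(np.expm1(scaled))     # ~180000.0 -- exact inverse
print(np.exp(scaled))       # ~180001.0 -- off by exactly 1.0
###Markdown
With that caveat noted, build the output pipeline: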
###Code
y_pipe = builder.pipeline('y_pipe')
def rescale_target(pipeline, df):
df['SalePrice'] = np.exp(df['SalePrice'])
y_pipe.add_step(rescale_target)
###Output
_____no_output_____
###Markdown
Model deployment - Step 3: Build and Deploy Cortex ActionNow that we have our input and output pipelines, we can use the Cortex Builder to package and deploy our model in one step.
###Code
builder.action('mattsanchez/ames-housing-predict')\
.from_model(model, x_pipeline=x_pipe, y_pipeline=y_pipe, target='SalePrice')\
.image_prefix('registry.cortex-dev.insights.ai:5000')\
.build()
action = cortex.action('mattsanchez/ames-housing-predict')
action
###Output
_____no_output_____
###Markdown
---Unit test for the Action. Make sure our action is ready for use.
###Code
%%time
params = {
"columns": ['MSSubClass', 'MSZoning', 'LotFrontage', 'LotArea', 'Street', 'Alley', 'LotShape', 'LandContour', 'Utilities', 'LotConfig', 'LandSlope', 'Neighborhood', 'Condition1', 'Condition2', 'BldgType', 'HouseStyle', 'OverallQual', 'OverallCond', 'YearBuilt', 'YearRemodAdd', 'RoofStyle', 'RoofMatl', 'Exterior1st', 'Exterior2nd', 'MasVnrType', 'MasVnrArea', 'ExterQual', 'ExterCond', 'Foundation', 'BsmtQual', 'BsmtCond', 'BsmtExposure', 'BsmtFinType1', 'BsmtFinSF1', 'BsmtFinType2', 'BsmtFinSF2', 'BsmtUnfSF', 'TotalBsmtSF', 'Heating', 'HeatingQC', 'CentralAir', 'Electrical', '1stFlrSF', '2ndFlrSF', 'LowQualFinSF', 'GrLivArea', 'BsmtFullBath', 'BsmtHalfBath', 'FullBath', 'HalfBath', 'BedroomAbvGr', 'KitchenAbvGr', 'KitchenQual', 'TotRmsAbvGrd', 'Functional', 'Fireplaces', 'FireplaceQu', 'GarageType', 'GarageYrBlt', 'GarageFinish', 'GarageCars', 'GarageArea', 'GarageQual', 'GarageCond', 'PavedDrive', 'WoodDeckSF', 'OpenPorchSF', 'EnclosedPorch', '3SsnPorch', 'ScreenPorch', 'PoolArea', 'PoolQC', 'Fence', 'MiscFeature', 'MiscVal', 'MoSold', 'YrSold', 'SaleType', 'SaleCondition'],
"values": [[20,"RH",80.0,11622,"Pave",None,"Reg","Lvl","AllPub","Inside","Gtl","NAmes","Feedr","Norm","1Fam","1Story",5,6,1961,1961,"Gable","CompShg","VinylSd","VinylSd","None",0.0,"TA","TA","CBlock","TA","TA","No","Rec",468.0,"LwQ",144.0,270.0,882.0,"GasA","TA","Y","SBrkr",896,0,0,896,0.0,0.0,1,0,2,1,"TA",5,"Typ",0,None,"Attchd",1961.0,"Unf",1.0,730.0,"TA","TA","Y",140,0,0,0,120,0,None,"MnPrv",None,0,6,2010,"WD","Normal"]]
}
result = action.invoke(message=Message.with_payload(params))
print(result.payload)
print()
###Output
{'columns': ['SalePrice'], 'values': [120665.4689426633]}
CPU times: user 17.6 ms, sys: 1.94 ms, total: 19.6 ms
Wall time: 559 ms
###Markdown
Building a Cortex SkillNow that our Action is ready and tested, we can move on to building a Cortex Skill. We start by creating a Schema that defines our input for Ames Housing price prediction. The schema will be built automatically using the parameters we already defined in our training dataset.
###Code
x_schema = builder.schema('kaggle/ames-housing-instance').title('Ames Housing Test Instance').from_parameters(train_ds.parameters[1:][:-1]).build()
###Output
_____no_output_____
###Markdown
The _builder_ has multiple entry points; we use the _skill_ method here to declare a new "Ames Housing Price Prediction" Skill. Each _builder_ method returns an instance of the builder so we can chain calls together.
###Code
b = builder.skill('kaggle/ames-housing-price-predict').title('Ames Housing Price Prediction').description('Predicts the price of a houses in Ames, Iowa.')
###Output
_____no_output_____
###Markdown
Next, we use the Input sub-builder to construct our Skill Input. This is where we declare how our Input will route messages. In this simple case, we use the _all_ routing, which routes all input messages to the same Action for processing and declares which Output to route Action outputs to. We pass in the Action we built previously to wire the Skill to the Action (we could have also passed in the Action name here). Calling _build_ on the Input will create the input object, add it to the Skill builder, and return the Skill builder.
###Code
b = b.input('ames-house').title('Ames House').use_schema(x_schema.name).all_routing(action, 'price-prediction').build()
###Output
_____no_output_____
###Markdown
In the previous step, we referenced an Output called **price-prediction**. We can create that Output here using the Output sub-builder.
###Code
b = b.output('price-prediction').title('Price Prediction').parameter(name='SalePrice', type='number', format='double').build()
###Output
_____no_output_____
###Markdown
We can preview the CAMEL document our builder will create to make sure everything looks correct.
###Code
b.to_camel()
###Output
_____no_output_____
###Markdown
--- Build and Publish the Skill to the MarketplaceThis will build the Skill and publish it to my private marketplace. It will then be available for use in the Agent Builder.
###Code
skill = b.build()
print('%s (%s) v%d' % (skill.title, skill.name, skill.version))
###Output
Ames Housing Price Prediction (kaggle/ames-housing-price-predict) v1
|
_notebooks/2021-05-17-swapped-pairs.ipynb | ###Markdown
"Swapped pair identification"> "Identifying swapped pairs and more..."- toc: false- branch: master- badges: true- comments: true- categories: [Python, statistics]- image: images/- hide: false- search_exclude: true- metadata_key1: metadata_value1- metadata_key2: metadata_value2- use_math: true This notebook is inspired from one of the projects I was pursuing during the final year of my PhD. I was dealing with several (of the order of hundred thousands) pair of numbers. My main goal was to identify swapped pairs and assign the swapped pair, the value of J corresponding to the original pair. For example, in the table below, (1,9) after swapping is (9,1) and therefore, (9,1) is the swapped pair. Then this swapped pair is assigned the value of J corresponding to the original pair (1,9) which is 3.45. \begin{array}{|c|c|c|}\hlineAtom_1&Atom_2&J\\\hline1&9&3.45\\\hline2&8&1.67\\\hline3&7&8.97\\\hline4&6&2.12\\\hline5&5&9.12\\\hline6&4&-\\\hline7&3&-\\\hline8&2&-\\\hline9&1&-\\\hline\end{array}Here's how I accomplished this: 1. Create a separate list for original and swapped numbers 2. Create an empty list to store repeated J values3. Then loop over the swapped pair list * For each pair, reverse it and check if it is present in the original pair list * Get the corresponding index and use that index to locate value of J * Append the value of J to the empty list created earlier4. Add both lists containing unique J values and repeated J values5. All done!
###Code
# Load some useful libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Let's first generate some pairs similar to the table above
###Code
# Note that we use list comprehension here as it is more efficient than using a for loop!
pairs = [(i,j) for i,j in zip(range(1,10,1), range(9,0,-1))]
print("Generated pairs are = ", pairs)
###Output
Generated pairs are = [(1, 9), (2, 8), (3, 7), (4, 6), (5, 5), (6, 4), (7, 3), (8, 2), (9, 1)]
###Markdown
Let's now separate orignal and swapped pairs
###Code
original_pairs = [] # Create an empty list for original pairs
swapped_pairs = [] # Create an empty list for swapped pairs
for pair in pairs:
if pair[::-1] not in original_pairs:
original_pairs.append(pair)
else:
swapped_pairs.append(pair)
print("Original pairs are = ", original_pairs)
print("Swapped pairs are = ", swapped_pairs)
###Output
Original pairs are = [(1, 9), (2, 8), (3, 7), (4, 6), (5, 5)]
Swapped pairs are = [(6, 4), (7, 3), (8, 2), (9, 1)]
###Markdown
Numbers for column J corresponding to original pair
###Code
unique_J= [3.45, 1.67, 8.97, 2.12, 9.12]
repeated_J = []
for swapped_pair in swapped_pairs:
if swapped_pair[::-1] in original_pairs:
# get corresponding coupling index and append to repeated_J list
coupling_index = original_pairs.index(swapped_pair[::-1])
repeated_J.append(unique_J[coupling_index])
print("Repeated J's are: ", repeated_J)
###Output
Repeated J's are: [2.12, 8.97, 1.67, 3.45]
###Markdown
Let's now combine repeated J's with unique J's
###Code
J = unique_J + repeated_J
print("Pair", "\t", "J")
for i, j in zip(pairs, J):
print(i,"\t", j)
###Output
Pair J
(1, 9) 3.45
(2, 8) 1.67
(3, 7) 8.97
(4, 6) 2.12
(5, 5) 9.12
(6, 4) 2.12
(7, 3) 8.97
(8, 2) 1.67
(9, 1) 3.45
|
scripts/Darren/randomphi/Jupyter Notebooks/old_file_i_cannot_delete.ipynb | ###Markdown
Imports
###Code
%load_ext autoreload
%autoreload 2
import numpy as np
from scipy.constants import c, elementary_charge
import pandas as pd
import pickle as pkl
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
import math
from mpl_toolkits.mplot3d import Axes3D
plt.rcParams['figure.figsize'] = [24,16] # bigger figures
from matplotlib import style
style.use('fivethirtyeight')
# import package
# installed via pip
from emtracks.particle import trajectory_solver # main solver object
from emtracks.conversions import one_gev_c2_to_kg # conversion for q factor (transverse momentum estimate)
from emtracks.tools import *#InitConds # initial conditions namedtuple
from emtracks.mapinterp import get_df_interp_func # factory function for creating Mu2e DS interpolation function
from emtracks.Bdist import get_B_df_distorted
import matplotlib.animation as animation
###Output
ERROR! Please set $EMTRACKS_DDIR and $EMTRACKS_PDIR. Setting defaults (current directory)
###Markdown
Directories
###Code
testdir = "/home/darren/Desktop/plots/"
datadir = "/home/shared_data/"
plotdir = datadir+"plots/randomphi/"
mapdir = datadir+"Bmaps/"
###Output
_____no_output_____
###Markdown
Some global initial parameters for B dist
###Code
#DO NOT CHANGE ON THIS DOCUMENT
start_point = 3
end_point = 14
initial_B = 50 #(roughly 1% distortion at z = 3.0, 0% at z = 14)
final_B = 0
###Output
_____no_output_____
###Markdown
Define Distorted B Field
###Code
#MU2E FIELD
df_Mu2e = pd.read_pickle(mapdir+"Mu2e_DSMap_V13.p")
B_Mu2e_func = get_df_interp_func(mapdir+"Mu2e_DSMap_V13.p", gauss=False)
#MU2E FIELD + DIS
df_Mu2e_dis = get_B_df_distorted(df_Mu2e, v="0", Bz0 = initial_B, Bzf = 0, z0 = start_point, zf = end_point)
B_Mu2e_dis = get_df_interp_func(df=df_Mu2e_dis, gauss=False)
###Output
_____no_output_____
###Markdown
Simple Visualize B Field
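The added distortion is a linear ramp in $z$: $\Delta B(z) = B_0 + \frac{B_f - B_0}{z_f - z_0}(z - z_0)$, i.e. roughly $50$ gauss at $z_0 = 3$ m falling to $0$ at $z_f = 14$ m, which is exactly what the quick plot below draws.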
###Code
m = (final_B - initial_B) / (end_point - start_point)
n = 50
step = (end_point - start_point) / n
t = np.arange(start_point, end_point, step)
x = plt.plot(t, ((t - start_point)*m) + initial_B)
plt.title("Distortion")
plt.xlabel("Z (meters)")
plt.ylabel("B (gauss)")
###Output
_____no_output_____
###Markdown
Functions
###Code
#input N, return N random values between 0 and 2pi
def get_random_phi(N):
phis = np.random.uniform(0, 2*math.pi, N)
return phis
#input N, return N equally spaced values between 0 and 2pi
def get_uniform_phi(N):
phis = np.arange(0, 2*math.pi, 2*math.pi/N)
return phis
#input a single phi, number of integrator steps, B-field function, and initial position / return trajectory dataframe
def run_solver(phi, N_calc, field, xnaught, ynaught, znaught):
ic_Mu2e = InitConds(t0=0., tf=4e-8, N_t=N_calc,
x0=xnaught, y0=ynaught, z0=znaught,
p0=104.96, theta0=np.pi/3, phi0=phi)
e_solver = trajectory_solver(ic_Mu2e, B_func=field, bounds=bounds_Mu2e)
sol = e_solver.solve_trajectory(verbose = False, atol=1e-8, rtol=1e-8) # high tolerance so it runs quickly for testing
df = e_solver.dataframe
#!!! CHECK ON THIS
df['r'] = ((df['x']-xnaught)**2 + (df['y']-ynaught)**2)**(1/2)
#!!! CHECK ON THIS
return df
#input trajectory dataframe and any z value / return the x, y, z, t, r values at that z
def find_track_at_z(df, z):
delta = 10/4001 #approximate z range divided by number of points
mask = (df.z < z + delta) & (df.z > z - delta)
while (len(df.z[mask]) > 2):
delta = delta / 2
mask = (df.z < z + delta) & (df.z > z - delta)
while (len(df.z[mask]) == 0):
delta = delta*2
mask = (df.z < z + delta) & (df.z > z - delta)
df2 = df.loc[mask]
df2 = df2.apply(pd.to_numeric)
return (df2.iloc[0]['x'], df2.iloc[0]['y'], df2.iloc[0]['z'], df2.iloc[0]['t'], df2.iloc[0]['r'])
def plot_impact_at_calorimeter(info, output_directory): #ts, phis, rs, xs, ys, zs
ts = info[0]
phis = info[1]
rs = info[2]
xs = info[3]
ys = info[4]
zs = info[5]
fig = plt.figure()
plt.scatter(xs, ys, c = phis)
plt.xlabel("x")
plt.ylabel("y")
plt.title("Y as a fn of X")
max_coord = 1.25*np.max(abs(np.array([xs,ys])))
plt.xlim(-max_coord, max_coord)
plt.ylim(-max_coord, max_coord)
fig.savefig(plotdir+output_directory+"Scatter_y_vs_x.pdf")
fig.savefig(plotdir+output_directory+"Scatter_y_vs_x.png")
fig2 = plt.figure()
plt.scatter(phis, xs)
plt.xlabel("phi_values (rad)")
plt.ylabel("x coordinate")
plt.title("X as a fn of phi")
#plt.xlim(-6, 6)
#plt.ylim(-2, 2)
fig2.savefig(plotdir+output_directory+"Scatter_x_vs_phi.pdf")
fig2.savefig(plotdir+output_directory+"Scatter_x_vs_phi.png")
fig3 = plt.figure()
plt.scatter(phis, ys)
plt.xlabel("phi_values (rad)")
plt.ylabel("y coordinate")
plt.title("Y as a fn of phi")
#plt.xlim(-6, 6)
#plt.ylim(-2, 2)
fig3.savefig(plotdir+output_directory+"Scatter_y_vs_phi.pdf")
fig3.savefig(plotdir+output_directory+"Scatter_y_vs_phi.png")
fig4 = plt.figure()
x = xs
num_bins = 50
n, bins, patches = plt.hist(x, num_bins, facecolor='blue')
plt.xlabel('xcoord')
plt.ylabel('number of occurences')
plt.title('Histogram of x-coord')
fig4.savefig(plotdir+output_directory+"Histogram_x.pdf")
fig4.savefig(plotdir+output_directory+"Histogram_x.png")
fig5 = plt.figure()
y = ys
num_bins = 50
n, bins, patches = plt.hist(y, num_bins, facecolor='blue')
plt.xlabel('ycoord')
plt.ylabel('number of occurences')
plt.title('Histogram of y-coord')
fig5.savefig(plotdir+output_directory+"Histogram_y.pdf")
fig5.savefig(plotdir+output_directory+"Histogram_y.png")
fig6 = plt.figure()
r = rs
num_bins = 50
n, bins, patches = plt.hist(r, num_bins, facecolor='blue')
plt.xlabel('radius')
plt.ylabel('number of occurences')
plt.title('Histogram of R')
fig6.savefig(plotdir+output_directory+"Histogram_R.pdf")
fig6.savefig(plotdir+output_directory+"Histogram_R.png")
actualrad = ((((max(xs) - min(xs)) / 2) + ((max(ys) - min(ys)) / 2)) / 2)
print ('actual radius: ' + str(actualrad))
def plotjoint(info, infograded):
#plt.hist(data, bins=range(min(data), max(data) + binwidth, binwidth))
x, y = info, infograded
x1, x2 = x[2], y[2]
num_bins = 20
m1 = np.mean(x1)
std1 = np.std(x1)
m2 = np.mean(x2)
std2 = np.std(x2)
x1num_bins = int((max(x1) - min(x1)) / 0.005)
x2num_bins = int((max(x2) - min(x2)) / 0.005)
fig1 = plt.figure()
plt.hist(x1, alpha = 0.3, bins = x1num_bins, facecolor='blue', label = 'Mu2e Field')
plt.hist(x2, alpha = 0.8, bins = x2num_bins, facecolor='orange', label = 'Graded Field')
    plt.legend()
plt.text(0.3, 21, '\u03BC = '+str(round(np.mean(x1), 5)), fontsize = 8)
plt.text(0.3, 20.6, '\u03C3 = '+str(round(np.std(x1), 5)), fontsize = 8)
plt.text(0.3, 20.2, '\u03BC graded = '+str(round(np.mean(x2), 5)), fontsize = 8)
plt.text(0.3, 19.8, '\u03C3 graded = '+str(round(np.std(x2), 5)), fontsize = 8)
plt.xlabel('Radius (meters)')
plt.ylabel('Occurences')
plt.title('Histogram of Radius At Z=13 Meters')
fig1.show
fig1.savefig(testdir)
#fig1.savefig(plotdir+output_directory+"6-11/2Graded_radiushist500.pdf")
#fig1.savefig(plotdir+output_directory+"6-11-2Graded_radiushist.png")
#---------------
x1, x2 = x[3], y[3]
num_bins = 20
m1 = np.mean(x1)
std1 = np.std(x1)
m2 = np.mean(x2)
std2 = np.std(x2)
fig2 = plt.figure()
plt.hist(x1, alpha = 0.3, bins = num_bins, facecolor='blue', label = 'normal')
plt.hist(x2, alpha = 0.8, bins = num_bins, facecolor='orange', label = 'graded')
plt.legend()
plt.text(0.2, 17, '\u03BC = '+str(round(np.mean(x1), 5)), fontsize = 8)
plt.text(0.2, 16.5, '\u03C3 = '+str(round(np.std(x1), 5)), fontsize = 8)
plt.text(0.2, 16, '\u03BC graded = '+str(round(np.mean(x2), 5)), fontsize = 8)
plt.text(0.2, 15.5, '\u03C3 graded = '+str(round(np.std(x2), 5)), fontsize = 8)
plt.xlabel('X (meters)')
plt.ylabel('Occurences')
plt.title('Histogram of X')
fig2.show
fig2.savefig(testdir)
#fig2.savefig(plotdir+output_directory+"6-11/2Graded_xhist500.pdf")
#fig2.savefig(plotdir+output_directory+"6-11-2Graded_xhist.pdf")
#-----
x1, x2 = x[4], y[4]
num_bins = 20
m1 = np.mean(x1)
std1 = np.std(x1)
m2 = np.mean(x2)
std2 = np.std(x2)
fig3 = plt.figure()
plt.hist(x1, alpha = 0.3, bins = num_bins, facecolor='blue', label = 'normal')
plt.hist(x2, alpha = 0.8, bins = num_bins, facecolor='orange', label = 'graded')
plt.legend()
plt.text(0.2, 16.5, '\u03BC = '+str(round(np.mean(x1), 5)), fontsize = 8)
plt.text(0.2, 16, '\u03C3 = '+str(round(np.std(x1), 5)), fontsize = 8)
plt.text(0.2, 15.5, '\u03BC graded = '+str(round(np.mean(x2), 5)), fontsize = 8)
plt.text(0.2, 15, '\u03C3 graded = '+str(round(np.std(x2), 5)), fontsize = 8)
plt.xlabel('Y (meters)')
plt.ylabel('Occurences')
    plt.title('Histogram of Y')
fig3.show
fig3.savefig(testdir)
#fig3.savefig(plotdir+output_directory+"6-11/2Graded_yhist500.pdf")
#fig3.savefig(plotdir+output_directory+"6-11-2Graded_yhist.pdf")
return x, y
def plot_joint_not_hist(x, y): #(ts, phis, rs, xs, ys, zs)
phis = x[1]
xs = x[2]
ys = y[2]
fig1 = plt.figure()
plt.scatter(phis, xs, c=x[0], label='reg')
plt.scatter(phis, ys, c=x[0], label='dis')
plt.xlabel("phis")
plt.ylabel("r")
plt.title("R as a fn of phi at z=13")
plt.xlim(0, 2*math.pi)
plt.ylim(0.25, 0.45)
plt.show()
    dev = y[2] - x[2]
    fig2 = plt.figure()
    plt.scatter(phis, dev, c=x[0], label='graded - nominal')
    plt.xlabel("phis")
    plt.ylabel("delta r")
    plt.title("R deviation (graded - nominal) as a fn of phi at z=13")
    plt.xlim(0, 2*math.pi)
    plt.ylim(0, 0.05)
    plt.show()
def get_info(x, y):
phis = x[1]
rs1 = x[2]
rs2 = y[2]
xs1 = x[3]
xs2 = y[3]
ys1 = x[4]
ys2 = y[4]
a1 = abs(max(rs2 - rs1))
b1 = abs(max(xs1 - xs2))
c1 = abs(max(ys2 - ys1))
a2 = abs(min(rs2 - rs1))
b2 = abs(min(xs2 - xs1))
c2 = abs(min(ys2 - ys1))
return (a1, a2, b1, b2, c1, c2)
get_info(x, y) #rmax, rmin, xmax, xmin, ymax, ymin
###Output
_____no_output_____
###Markdown
First Run Function (No Graded Field)
###Code
def run(N, z):
phis = get_uniform_phi(N)
ts = []
xs = []
ys = []
zs = []
rs = []
# for each phi, run create solver object and save trajectory object
for phi in phis:
dataframe = run_solver(phi, 4001, B_Mu2e_func, 0.054094482, 0.03873037, 5.988900879) #second argument is how many steps in numerical integration
x, y, z, t, r = find_track_at_z(dataframe,z)
ts.append(t)
xs.append(x)
ys.append(y)
zs.append(z)
rs.append(r)
# convert everything to numpy arrays
ts = np.array(ts)
xs = np.array(xs)
ys = np.array(ys)
zs = np.array(zs)
rs = np.array(rs)
return (ts, phis, rs, xs, ys, zs)
###Output
_____no_output_____
###Markdown
Second Run Function (Graded Field)
###Code
def run2(N, z):
phis = get_uniform_phi(N)
ts = []
xs = []
ys = []
zs = []
rs = []
# for each phi, run create solver object and save trajectory object
for phi in phis:
dataframe = run_solver(phi, 4001, B_Mu2e_dis, 0.054094482, 0.03873037, 5.988900879) #second argument is how many steps in numerical integration
x, y, z, t, r = find_track_at_z(dataframe,z)
ts.append(t)
xs.append(x)
ys.append(y)
zs.append(z)
rs.append(r)
# convert everything to numpy arrays
ts = np.array(ts)
xs = np.array(xs)
ys = np.array(ys)
zs = np.array(zs)
rs = np.array(rs)
# plot results (and save plots)
return (ts, phis, rs, xs, ys, zs)
x = run(100, 13)
y = run2(100, 13)
plotjoint(x, y)
plot_impact_at_calorimeter(x, "run1/")
plot_impact_at_calorimeter(y, "run2/")
plot_joint_not_hist(x, y)
def hi(zstart, zend, numsteps, N): #rmax, rmin, xmax, xmin, ymax, ymin
step = (zend-zstart) / numsteps
q = np.arange(zstart, zend+step, step)
regdata = []
gradeddata = []
rmaxes = []
rmins = []
xmaxes = []
xmins = []
ymaxes = []
ymins = []
for i in q:
e = run(N, i)
f = run2(N, i)
g = get_info(e, f)
rmaxes.append(g[0])
rmins.append(g[1])
xmaxes.append(g[2])
xmins.append(g[3])
ymaxes.append(g[4])
ymins.append(g[5])
regdata.append(e)
gradeddata.append(f)
#step*2 numpy arrays stored, step in regdata, step in gradeddata
zstart = 6
zend = 13
numsteps = 7
step = (zend-zstart) / numsteps
q = np.arange(zstart, zend+step, step)
print(q)
for i in q:
print(i)
fig = plt.figure()
x1, x2 = x[2][0:5], y[2][0:5]
x1num_bins = int((max(x1) - min(x1)) / 0.005)
x2num_bins = int((max(x2) - min(x2)) / 0.005)
plt.hist(x1, alpha = 0.3, bins = x1num_bins, facecolor='blue', label = 'Mu2e Field')
plt.hist(x2, alpha = 0.8, bins = x2num_bins, facecolor='orange', label = 'Graded Field')
def animate(i):
x1, x2 = x[2][0:i+5], y[2][0:i+5]
x1num_bins = int((max(x1) - min(x1)) / 0.005)
x2num_bins = int((max(x2) - min(x2)) / 0.005)
plt.hist(x1, alpha = 0.3, bins = x1num_bins, facecolor='blue', label = 'Mu2e Field')
plt.hist(x2, alpha = 0.8, bins = x2num_bins, facecolor='orange', label = 'Graded Field')
ani = animation.FuncAnimation(fig, animate, interval = 1000)
plt.show()
#fig1 = plt.figure()
###Output
_____no_output_____ |
Baysian Networks/BNStruct Tests 2nd.ipynb | ###Markdown
Testing bnstruct R package. See documentation [here](https://cran.r-project.org/web/packages/bnstruct/vignettes/bnstruct.pdf) and [here](https://cran.r-project.org/web/packages/bnstruct/bnstruct.pdf).
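The synthetic data below comes from a linear-Gaussian model on the DAG $A \to B$, $A \to C$, $B \to D$, $C \to D$: $B = 0.4A + \varepsilon_B$, $C = 0.8A + \varepsilon_C$, $D = 0.4B + 0.7C + \varepsilon_D$ (the second experiment weakens the $C \to D$ edge to $0.2C$), where each $\varepsilon$ is Gaussian noise around an offset of about 10; values are then rounded and shifted to positive integers so bnstruct can treat them as discrete.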
###Code
library("bnstruct")
library("dagR")
count <- function(x) {
return(length(unique(x)))
}
quantize <- function(x) {
return(x - min(x) + 1)
}
# Given model:
#
# A -> B
# A -> C
# B -> D
# C -> D
#
SIZE <- 1000
A <- round(rnorm(SIZE, 10, 2))
B <- round(0.4 * A + rnorm(SIZE, 10, 1))
C <- round(0.8 * A + rnorm(SIZE, 10, 1.5))
D <- round(0.4 * B + 0.7 * C + rnorm(SIZE, 10, 1))
data = cbind(A, B, C, D)
data = apply(data, 2, quantize)
dataset <- BNDataset(data = as.matrix(data),
discreteness = rep('D', 4),
variables = c("A", "B", "C", "D"),
node.sizes = apply(data, 2, count))
dataset_bs <- bootstrap(dataset, num.boots = 1000)
net <- learn.network(dataset, scoring.func = "BDeu", algo="SEM")
dag(net)
plot(net)
net <- learn.network(dataset_bs, bootstrap = TRUE)
wpdag(net)
#plot(net, plot.wpdag=T)
plot(net)
# Given model:
#
# A -> B
# A -> C
# B -> D
# C -> D
#
SIZE <- 1000
A <- round(rnorm(SIZE, 10, 2))
B <- round(0.4 * A + rnorm(SIZE, 10, 1))
C <- round(0.8 * A + rnorm(SIZE, 10, 1.5))
D <- round(0.4 * B + 0.2 * C + rnorm(SIZE, 10, 1))
data = cbind(A, B, C, D)
data = apply(data, 2, quantize)
dataset <- BNDataset(data = as.matrix(data),
discreteness = rep('D', 4),
variables = c("A", "B", "C", "D"),
node.sizes = apply(data, 2, count))
dataset_bs <- bootstrap(dataset, num.boots = 1000)
net <- learn.network(dataset, scoring.func = "BDeu", algo="SEM")
dag(net)
plot(net)
###Output
bnstruct :: learning the structure using SEM ...
... bnstruct :: starting EM algorithm ...
... ... bnstruct :: learning network parameters ...
... ... bnstruct :: parameter learning done.
... ... bnstruct :: learning network parameters ...
... ... bnstruct :: parameter learning done.
... bnstruct :: EM algorithm completed.
... bnstruct :: learning the structure using MMHC ...
... bnstruct :: learning using MMHC completed.
... bnstruct :: learning network parameters ...
... bnstruct :: parameter learning done.
... bnstruct :: starting EM algorithm ...
... ... bnstruct :: learning network parameters ...
... ... bnstruct :: parameter learning done.
... ... bnstruct :: learning network parameters ...
... ... bnstruct :: parameter learning done.
... bnstruct :: EM algorithm completed.
... bnstruct :: learning the structure using MMHC ...
... bnstruct :: learning using MMHC completed.
... bnstruct :: learning network parameters ...
... bnstruct :: parameter learning done.
bnstruct :: learning using SEM completed.
bnstruct :: learning network parameters ...
bnstruct :: parameter learning done.
|
ml/08-jieba/jieba003.ipynb | ###Markdown
Keyword extraction based on the TF-IDF algorithm
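For each candidate word $t$, the standard weighting is $\mathrm{tfidf}(t, d) = \mathrm{tf}(t, d) \times \log\frac{N}{\mathrm{df}(t)}$ (exact smoothing varies by implementation); jieba uses a bundled IDF table, so words that are frequent in this text but rare in general usage score highest.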
###Code
import jieba.analyse
keywords = "/".join(jieba.analyse.extract_tags(s, topK=20, withWeight=False, allowPOS=()))
print(keywords)
keywords =(jieba.analyse.extract_tags(s , topK=10, withWeight=True, allowPOS=(['n'])))
print(keywords)
###Output
[('脐带', 1.0262749147741936), ('畸形', 0.5139215133135484), ('胎儿', 0.508208390903871), ('右室', 0.40368978357741936), ('单活胎', 0.38563766138387096), ('眼距', 0.38563766138387096), ('胃泡', 0.38563766138387096), ('内脐', 0.38563766138387096), ('氏胶', 0.38563766138387096), ('室间隔', 0.38563766138387096)]
###Markdown
Keyword extraction based on the TextRank algorithm
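TextRank needs no external corpus statistics: it builds a word co-occurrence graph and runs weighted PageRank on it, where, following Mihalcea and Tarau (2004), the score of node $V_i$ is $WS(V_i) = (1 - d) + d \sum_{V_j \in In(V_i)} \frac{w_{ji}}{\sum_{V_k \in Out(V_j)} w_{jk}} WS(V_j)$ with damping factor $d \approx 0.85$.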
###Code
result = "/".join(jieba.analyse.textrank(s, topK=20, withWeight=False, allowPOS=('ns', 'n', 'vn', 'v')))
print(result)
###Output
脐带/畸形/胎儿/扩张/右手/缺损/侧弯/羊水/前臂/动静脉/姿势/缩短/走形/膀胱/胃泡/显示/增宽/右室/小脑/室间隔
###Markdown
Example
###Code
# Start word segmentation
segs=jieba.lcut(s)
print("/".join(segs))
# Remove digits
segs = [v for v in segs if not str(v).isdigit()]  # remove digits
print("/".join(segs))
# Drop empty / whitespace-only tokens
segs = list(filter(lambda x:x.strip(), segs))  # drop empty / whitespace-only tokens
print("/".join(segs))
# Remove stopwords (this list is just an example)
stopwords = ['无','双','出口','小','羊水',',','、',',','(',')']
segs = list(filter(lambda x:x not in stopwords, segs))  # remove stopwords
print("/".join(segs))
###Output
单活/胎/胎儿/多发/畸形/小脑/延髓/池/增宽/第三/脑室/扩张/眼距/窄/双肾/缺如/脊柱/侧弯/双侧/前臂/明显/缩短/右手/姿势/异常/胸廓/双肺/发育不良/膀胱/胃泡/显示/不清/腹内/段/脐/动脉/脐/静脉/走形/异常/扩张/脐带/增粗/脐带/内脐/血管/畸形/动静脉/瘘/华特/氏胶/水肿/脐带/囊肿/胎儿/室间隔/缺损/右室
|
machine_learning/reinforcement_learning/generalized_stochastic_policy_iteration/tabular/temporal_difference/np_temporal_difference/on_policy_stochastic_temporal_difference_expected_sarsa.ipynb | ###Markdown
Temporal Difference: On-policy Expected Sarsa, Stochastic
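Expected Sarsa replaces the sampled next action in the Sarsa target with an expectation over the current $\epsilon$-greedy policy, so each step applies $Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \big[ R_{t+1} + \gamma \sum_{a} \pi(a \mid S_{t+1}) Q(S_{t+1}, a) - Q(S_t, A_t) \big]$; the sum over next actions is the `v_expected_value_on_policy` term computed inside the episode loop below.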
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Create environment
###Code
def create_environment_states():
"""Creates environment states.
Returns:
num_states: int, number of states.
num_terminal_states: int, number of terminal states.
num_non_terminal_states: int, number of non terminal states.
"""
num_states = 16
num_terminal_states = 2
num_non_terminal_states = num_states - num_terminal_states
return num_states, num_terminal_states, num_non_terminal_states
def create_environment_actions(num_non_terminal_states):
"""Creates environment actions.
Args:
num_non_terminal_states: int, number of non terminal states.
Returns:
max_num_actions: int, max number of actions possible.
num_actions_per_non_terminal_state: array[int], number of actions per
non terminal state.
"""
max_num_actions = 4
num_actions_per_non_terminal_state = np.repeat(
a=max_num_actions, repeats=num_non_terminal_states)
return max_num_actions, num_actions_per_non_terminal_state
def create_environment_successor_counts(num_states, max_num_actions):
"""Creates environment successor counts.
Args:
num_states: int, number of states.
max_num_actions: int, max number of actions possible.
Returns:
num_state_action_successor_states: array[int], number of successor
states s' that can be reached from state s by taking action a.
"""
num_state_action_successor_states = np.repeat(
a=1, repeats=num_states * max_num_actions)
num_state_action_successor_states = np.reshape(
a=num_state_action_successor_states,
newshape=(num_states, max_num_actions))
return num_state_action_successor_states
def create_environment_successor_arrays(
num_non_terminal_states, max_num_actions):
"""Creates environment successor arrays.
Args:
num_non_terminal_states: int, number of non terminal states.
max_num_actions: int, max number of actions possible.
Returns:
sp_idx: array[int], state indices of new state s' of taking action a
from state s.
p: array[float], transition probability to go from state s to s' by
taking action a.
r: array[float], reward from new state s' from state s by taking
action a.
"""
sp_idx = np.array(
object=[1, 0, 14, 4,
2, 1, 0, 5,
2, 2, 1, 6,
4, 14, 3, 7,
5, 0, 3, 8,
6, 1, 4, 9,
6, 2, 5, 10,
8, 3, 7, 11,
9, 4, 7, 12,
10, 5, 8, 13,
10, 6, 9, 15,
12, 7, 11, 11,
13, 8, 11, 12,
15, 9, 12, 13],
dtype=np.int64)
p = np.repeat(
a=1.0, repeats=num_non_terminal_states * max_num_actions * 1)
r = np.repeat(
a=-1.0, repeats=num_non_terminal_states * max_num_actions * 1)
sp_idx = np.reshape(
a=sp_idx,
newshape=(num_non_terminal_states, max_num_actions, 1))
p = np.reshape(
a=p,
newshape=(num_non_terminal_states, max_num_actions, 1))
r = np.reshape(
a=r,
newshape=(num_non_terminal_states, max_num_actions, 1))
return sp_idx, p, r
def create_environment():
"""Creates environment.
Returns:
num_states: int, number of states.
num_terminal_states: int, number of terminal states.
num_non_terminal_states: int, number of non terminal states.
max_num_actions: int, max number of actions possible.
num_actions_per_non_terminal_state: array[int], number of actions per
non terminal state.
num_state_action_successor_states: array[int], number of successor
states s' that can be reached from state s by taking action a.
sp_idx: array[int], state indices of new state s' of taking action a
from state s.
p: array[float], transition probability to go from state s to s' by
taking action a.
r: array[float], reward from new state s' from state s by taking
action a.
"""
(num_states,
num_terminal_states,
num_non_terminal_states) = create_environment_states()
(max_num_actions,
num_actions_per_non_terminal_state) = create_environment_actions(
num_non_terminal_states)
num_state_action_successor_states = create_environment_successor_counts(
num_states, max_num_actions)
(sp_idx,
p,
r) = create_environment_successor_arrays(
num_non_terminal_states, max_num_actions)
return (num_states,
num_terminal_states,
num_non_terminal_states,
max_num_actions,
num_actions_per_non_terminal_state,
num_state_action_successor_states,
sp_idx,
p,
r)
###Output
_____no_output_____
###Markdown
Set hyperparameters
###Code
def set_hyperparameters():
"""Sets hyperparameters.
Returns:
num_episodes: int, number of episodes to train over.
maximum_episode_length: int, max number of timesteps for an episode.
alpha: float, alpha > 0, learning rate.
epsilon: float, 0 <= epsilon <= 1, exploitation-exploration trade-off,
higher means more exploration.
gamma: float, 0 <= gamma <= 1, amount to discount future reward.
"""
num_episodes = 10000
maximum_episode_length = 200
alpha = 0.1
epsilon = 0.1
gamma = 1.0
return num_episodes, maximum_episode_length, alpha, epsilon, gamma
###Output
_____no_output_____
###Markdown
Create value function and policy arrays
###Code
def create_value_function_arrays(num_states, max_num_actions):
"""Creates value function arrays.
Args:
num_states: int, number of states.
max_num_actions: int, max number of actions possible.
Returns:
q: array[float], keeps track of the estimated value of each
state-action pair Q(s, a).
"""
q = np.repeat(a=0.0, repeats=num_states * max_num_actions)
q = np.reshape(a=q, newshape=(num_states, max_num_actions))
return q
def create_policy_arrays(num_non_terminal_states, max_num_actions):
"""Creates policy arrays.
Args:
num_non_terminal_states: int, number of non terminal states.
max_num_actions: int, max number of actions possible.
Returns:
policy: array[float], learned stochastic policy of which
action a to take in state s.
"""
policy = np.repeat(
a=1.0 / max_num_actions,
repeats=num_non_terminal_states * max_num_actions)
policy = np.reshape(
a=policy,
newshape=(num_non_terminal_states, max_num_actions))
return policy
###Output
_____no_output_____
###Markdown
Create algorithm
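The behaviour policy is $\epsilon$-greedy with ties shared evenly: if $k$ of the $|\mathcal{A}|$ actions attain $\max_a Q(s, a)$, each of them gets probability $\frac{1 - \epsilon}{k}$ and every other action gets $\frac{\epsilon}{|\mathcal{A}| - k}$; if all actions tie, the policy is simply uniform. The helper below implements exactly this rule.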
###Code
# Set random seed so that everything is reproducible
np.random.seed(seed=0)
def initialize_epsiode(num_non_terminal_states):
"""Initializes epsiode with initial state.
Args:
num_non_terminal_states: int, number of non terminal states.
Returns:
init_s_idx: int, initial state index from set of non terminal states.
"""
# Randomly choose an initial state from all non-terminal states
init_s_idx = np.random.randint(
low=0, high=num_non_terminal_states, dtype=np.int64)
return init_s_idx
def epsilon_greedy_policy_from_state_action_function(
max_num_actions, q, epsilon, s_idx, policy):
"""Create epsilon-greedy policy from state-action value function.
Args:
max_num_actions: int, max number of actions possible.
q: array[float], keeps track of the estimated value of each
state-action pair Q(s, a).
epsilon: float, 0 <= epsilon <= 1, exploitation-exploration trade-off,
higher means more exploration.
s_idx: int, current state index.
policy: array[float], learned stochastic policy of which action a to
take in state s.
Returns:
policy: array[float], learned stochastic policy of which action a to
take in state s.
"""
# Save max state-action value and find the number of actions that have the
# same max state-action value
max_action_value = np.max(a=q[s_idx, :])
max_action_count = np.count_nonzero(a=q[s_idx, :] == max_action_value)
    # Split the greedy probability mass (1 - epsilon) equally across the tied
    # max-value actions and spread the remaining epsilon across the other actions
if max_action_count == max_num_actions:
max_policy_prob_per_action = 1.0 / max_action_count
remain_prob_per_action = 0.0
else:
max_policy_prob_per_action = (1.0 - epsilon) / max_action_count
remain_prob_per_action = epsilon / (max_num_actions - max_action_count)
policy[s_idx, :] = np.where(
q[s_idx, :] == max_action_value,
max_policy_prob_per_action,
remain_prob_per_action)
return policy
def loop_through_episode(
num_non_terminal_states,
max_num_actions,
num_state_action_successor_states,
sp_idx,
p,
r,
q,
policy,
alpha,
epsilon,
gamma,
maximum_episode_length,
s_idx):
"""Loops through episode to iteratively update policy.
Args:
num_non_terminal_states: int, number of non terminal states.
max_num_actions: int, max number of actions possible.
num_state_action_successor_states: array[int], number of successor
states s' that can be reached from state s by taking action a.
sp_idx: array[int], state indices of new state s' of taking action a
from state s.
p: array[float], transition probability to go from state s to s' by
taking action a.
r: array[float], reward from new state s' from state s by taking
action a.
q: array[float], keeps track of the estimated value of each
state-action pair Q(s, a).
policy: array[float], learned stochastic policy of which
action a to take in state s.
alpha: float, alpha > 0, learning rate.
epsilon: float, 0 <= epsilon <= 1, exploitation-exploration trade-off,
higher means more exploration.
gamma: float, 0 <= gamma <= 1, amount to discount future reward.
maximum_episode_length: int, max number of timesteps for an episode.
s_idx: int, current state index.
Returns:
q: array[float], keeps track of the estimated value of each
state-action pair Q(s, a).
policy: array[float], learned stochastic policy of which
action a to take in state s.
"""
# Loop through episode steps until termination
for t in range(0, maximum_episode_length):
# Choose policy for chosen state by epsilon-greedy choosing from the
# state-action-value function
policy = epsilon_greedy_policy_from_state_action_function(
max_num_actions, q, epsilon, s_idx, policy)
# Get epsilon-greedy action
a_idx = np.random.choice(
a=max_num_actions, p=policy[s_idx, :])
# Get reward
successor_state_transition_idx = np.random.choice(
a=num_state_action_successor_states[s_idx, a_idx],
p=p[s_idx, a_idx, :])
reward = r[s_idx, a_idx, successor_state_transition_idx]
# Get next state
next_s_idx = sp_idx[s_idx, a_idx, successor_state_transition_idx]
# Check to see if we actioned into a terminal state
if next_s_idx >= num_non_terminal_states:
q[s_idx, a_idx] += alpha * (reward - q[s_idx, a_idx])
break # episode terminated since we ended up in a terminal state
else:
            # Expected Sarsa: use the policy-weighted expected Q at the next state
v_expected_value_on_policy = np.sum(
a=policy[next_s_idx, :] * q[next_s_idx, :])
# Calculate state-action-function expectation
delta = gamma * v_expected_value_on_policy - q[s_idx, a_idx]
q[s_idx, a_idx] += alpha * (reward + delta)
# Update state to next state
s_idx = next_s_idx
return q, policy
def on_policy_temporal_difference_expected_sarsa(
num_non_terminal_states,
max_num_actions,
num_state_action_successor_states,
sp_idx,
p,
r,
q,
policy,
alpha,
epsilon,
gamma,
maximum_episode_length,
num_episodes):
"""Loops through episodes to iteratively update policy.
Args:
num_non_terminal_states: int, number of non terminal states.
max_num_actions: int, max number of actions possible.
num_state_action_successor_states: array[int], number of successor
states s' that can be reached from state s by taking action a.
sp_idx: array[int], state indices of new state s' of taking action a
from state s.
p: array[float], transition probability to go from state s to s' by
taking action a.
r: array[float], reward from new state s' from state s by taking
action a.
q: array[float], keeps track of the estimated value of each
state-action pair Q(s, a).
policy: array[float], learned stochastic policy of which
action a to take in state s.
alpha: float, alpha > 0, learning rate.
epsilon: float, 0 <= epsilon <= 1, exploitation-exploration trade-off,
higher means more exploration.
gamma: float, 0 <= gamma <= 1, amount to discount future reward.
maximum_episode_length: int, max number of timesteps for an episode.
num_episodes: int, number of episodes to train over.
Returns:
q: array[float], keeps track of the estimated value of each
state-action pair Q(s, a).
policy: array[float], learned stochastic policy of which
action a to take in state s.
"""
for episode in range(0, num_episodes):
# Initialize episode to get initial state
init_s_idx = initialize_epsiode(num_non_terminal_states)
# Loop through episode and update the policy
q, policy = loop_through_episode(
num_non_terminal_states,
max_num_actions,
num_state_action_successor_states,
sp_idx,
p,
r,
q,
policy,
alpha,
epsilon,
gamma,
maximum_episode_length,
init_s_idx)
return q, policy
###Output
_____no_output_____
###Markdown
Run algorithm
###Code
def run_algorithm():
"""Runs the algorithm."""
(num_states,
num_terminal_states,
num_non_terminal_states,
max_num_actions,
num_actions_per_non_terminal_state,
num_state_action_successor_states,
sp_idx,
p,
r) = create_environment()
(num_episodes,
maximum_episode_length,
alpha,
epsilon,
gamma) = set_hyperparameters()
q = create_value_function_arrays(num_states, max_num_actions)
policy = create_policy_arrays(num_non_terminal_states, max_num_actions)
# Print initial arrays
print("\nInitial state-action value function")
print(q)
print("\nInitial policy")
print(policy)
# Run on policy temporal difference expected sarsa
q, policy = on_policy_temporal_difference_expected_sarsa(
num_non_terminal_states,
max_num_actions,
num_state_action_successor_states,
sp_idx,
p,
r,
q,
policy,
alpha,
epsilon,
gamma,
maximum_episode_length,
num_episodes)
# Print final results
print("\nFinal state-action value function")
print(q)
print("\nFinal policy")
print(policy)
run_algorithm()
###Output
Initial state-action value function
[[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]]
Initial policy
[[0.25 0.25 0.25 0.25]
[0.25 0.25 0.25 0.25]
[0.25 0.25 0.25 0.25]
[0.25 0.25 0.25 0.25]
[0.25 0.25 0.25 0.25]
[0.25 0.25 0.25 0.25]
[0.25 0.25 0.25 0.25]
[0.25 0.25 0.25 0.25]
[0.25 0.25 0.25 0.25]
[0.25 0.25 0.25 0.25]
[0.25 0.25 0.25 0.25]
[0.25 0.25 0.25 0.25]
[0.25 0.25 0.25 0.25]
[0.25 0.25 0.25 0.25]]
Final state-action value function
[[-3.37287365 -2.19625563 -1. -3.33369259]
[-4.37617941 -3.35755874 -2.19673315 -4.30122993]
[-4.36370404 -4.16984841 -3.37773859 -3.3777405 ]
[-3.33314617 -1. -2.19593578 -3.3723879 ]
[-4.33003665 -2.19670422 -2.19670106 -4.27179421]
[-3.35281672 -3.34921245 -3.33655474 -3.33654761]
[-3.35997858 -4.38409463 -4.29613151 -2.19664238]
[-4.22800704 -2.19670822 -3.31140923 -4.4045436 ]
[-3.33649577 -3.33640534 -3.35572145 -3.34343483]
[-2.19661961 -4.32084969 -4.29119013 -2.19662259]
[-2.19308075 -3.37407922 -3.33263346 -1. ]
[-3.37490642 -3.37482408 -4.33324106 -4.37411791]
[-2.19663429 -4.27236166 -4.34886424 -3.34269364]
[-1. -3.33326772 -3.37127791 -2.1954268 ]
[ 0. 0. 0. 0. ]
[ 0. 0. 0. 0. ]]
Final policy
[[0.03333333 0.03333333 0.9 0.03333333]
[0.03333333 0.03333333 0.9 0.03333333]
[0.03333333 0.03333333 0.9 0.03333333]
[0.03333333 0.9 0.03333333 0.03333333]
[0.03333333 0.03333333 0.9 0.03333333]
[0.03333333 0.03333333 0.03333333 0.9 ]
[0.03333333 0.03333333 0.03333333 0.9 ]
[0.03333333 0.9 0.03333333 0.03333333]
[0.03333333 0.9 0.03333333 0.03333333]
[0.03333333 0.03333333 0.03333333 0.9 ]
[0.03333333 0.03333333 0.03333333 0.9 ]
[0.9 0.03333333 0.03333333 0.03333333]
[0.9 0.03333333 0.03333333 0.03333333]
[0.9 0.03333333 0.03333333 0.03333333]]
|
jupyter notebooks/Trump_spaCy_final_templates.ipynb | ###Markdown
Using spaCy's built-in dependencies, we can create "syntactically" correct sentences based on different templates.- These templates are created to be short, in order to fit on an image and be considered "a meme"
###Code
import spacy
import pandas as pd
from pprint import pprint
from spacy import displacy
from collections import Counter
import en_core_web_sm
nlp = en_core_web_sm.load()
import random
import matplotlib.pyplot as plt
pd.set_option('max_colwidth', 120)
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
For spaCy:- instantiate a doc with a variable = nlp()- then we can use the tokens to: traverse the dependency tree, get parts of speech, entities, dependency labels, etc.- use random to select from a list (non-unique values) or from the keys of a Counter class, if needed Importing the tweets and cleaning them - with regex expressions or not, then keep only the text field
###Code
tw = pd.read_csv("tweets_trump1.csv", low_memory = False)
tw = tw[tw["screen_name"] == "realDonaldTrump"]
tweets = tw[["screen_name", "text"]]
tweets["text"] = tweets["text"].replace(r'http\S+', '', regex=True).replace(r'www\S+', '', regex=True)
tweets["text"] = tweets["text"].replace(r'https?:\/\/.*[\r\n]*', '', regex=True).replace(r'www\S+', '', regex=True)
tweets["text"] = tweets["text"].replace(r'RT @\S+:', '', regex = True)
tweets["text"] = tweets["text"].replace(r'[_"\-;%()|.,+&=*%]', '', regex = True)
tweets["text"] = tweets["text"].replace(r'@\s+', '', regex=True)
tweets["text"] = tweets["text"].replace(r'@\S+', '', regex=True)
tweets["text"] = tweets["text"].replace(r'&', 'and', regex=True)
tweets["text"] = tweets["text"].str.replace(".@", "@")
tweets["text"] = tweets["text"].replace(r'\n','', regex=True)
tweets["text"] = tweets["text"].str.replace("w/", "with")
tweets["text"] = tweets["text"].str.replace("- ", "")
tweets["text"] = tweets["text"].str.replace("--", "")
tweets["text"] = tweets["text"].str.replace("RE:", "")
tweets["text"] = tweets["text"].str.replace('(&)', '')
#tweets["text"] = tweets["text"].str.replace(r"(?:\#+[\w_]+[\w\'_\-]*[\w_]+)", "")
tweets["text"] = tweets["text"].replace('\n', ' ').replace('\r', '')
tweets1 = tweets["text"]
# tweets shape of the column
tweets1.shape
#tweets[:12]
# transform the tweets to strings to be read by spaCy
tweets2 = tweets1.to_string(header=False, index=False)
tweets2 = tweets2.replace('\n', ' ').replace('\r', '').strip()
###Output
_____no_output_____
###Markdown
visualize with displacy how the dependencies look on a sample
###Code
doc = nlp(tweets1.sample().values[0])
doc
displacy.render(doc, style="dep", jupyter=True, options={'distance': 90})
# split the tweets since the spaCy parser cannot work on a huge corpus
tweets3 = tweets2[0:1000000]
doc = nlp(tweets3)
doc1 = nlp(tweets2[1000000:2000000])
doc2 = nlp(tweets2[2000000:3000000])
doc3 = nlp(tweets2[3000000:4000000])
# list of docs
docs = [doc, doc1, doc2, doc3]
###Output
_____no_output_____
###Markdown
Analyze the words in the tweets: which words occur more often (e.g. using TF/IDF, sketched below); which named entities are detected by spaCy (NER); which verbs appear most frequently (in their lemma form); which adjectives appear most frequently; etc.
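The cells below use `Counter`-based frequencies; as a hedged sketch of the TF/IDF option mentioned above (not part of the original pipeline, and assuming a recent scikit-learn that provides `get_feature_names_out`), the cleaned tweets could be ranked like this:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# rank terms by their mean TF-IDF weight across the cleaned tweets (tweets1 is defined above)
vectorizer = TfidfVectorizer(stop_words="english", max_features=5000)
tfidf = vectorizer.fit_transform(tweets1.astype(str))
mean_weights = tfidf.mean(axis=0).A1          # average weight of each term over all tweets
terms = vectorizer.get_feature_names_out()
top_terms = sorted(zip(terms, mean_weights), key=lambda t: t[1], reverse=True)[:20]
print(top_terms)
```

Note: Named Entity Recognition doesn't perform well because of the data.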
###Code
ners = Counter()
def get_ner(corpus, collection):
for ent in corpus.ents:
ners[ent.text, ent.label_] +=1
for i in docs:
get_ner(i, ners)
ner = ners.most_common(20)
ner
# get verbs following a subject
# Finding a verb with a subject from below — good
verbs = set()
for possible_subject in doc:
if possible_subject.dep_ == 'nsubj' and possible_subject.head.pos_ == 'VERB':
verbs.add(possible_subject.head.lemma_)
#print(verbs)
# get adjectives and nouns
def get_adj_noun(corpus, collection):
for token in corpus:
try:
if token.pos_ == "ADJ":
if corpus[token.i +1].pos_ == "NOUN":
collection.append([str(token), str(corpus[token.i+1])])
except IndexError:
pass
adj_nouns = list()
for i in docs:
get_adj_noun(i, adj_nouns)
#print(adj_nouns)
#try out some examples
r = random.choice(adj_nouns)
s = "I am an expert in "
s += r[0]
s += " "
s += r[1]
s += ". - Donald (the bot) Trump"
#print(s)
###Output
_____no_output_____
###Markdown
Test with nltk - commented out:
from string import punctuation
from nltk.corpus import stopwords
from nltk import word_tokenize
nltk.download("stopwords")
from sklearn.feature_extraction.text import TfidfVectorizer
stop_words = stopwords.words("english") + list(punctuation)
c = Counter()
def tokenize(text):
    words = word_tokenize(text)
    words = [w.lower() for w in words]
    w = (w for w in words if w not in stop_words and not w.isdigit())
    c[w] +=1
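A runnable version of that idea (a sketch only; note that the commented-out snippet above updates the counter with the generator object itself rather than with the individual words) could look like:

```python
from string import punctuation
from collections import Counter
import nltk
from nltk.corpus import stopwords
from nltk import word_tokenize

nltk.download("stopwords")
nltk.download("punkt")

stop_words = set(stopwords.words("english")) | set(punctuation)
nltk_counts = Counter()

def tokenize(text):
    for w in word_tokenize(text):
        w = w.lower()
        if w not in stop_words and not w.isdigit():
            nltk_counts[w] += 1

tokenize(tweets3)  # tweets3 is the first slice of the corpus defined above
print(nltk_counts.most_common(10))
```

Most common Gerund verbs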
###Code
ger = Counter()
def get_top_gerunds(corpus, collection):
    # "VBD" is the Penn Treebank tag for past-tense verbs (gerunds would be "VBG");
    # the "I have ..." template below relies on these past forms.
    for token in corpus:
        if token.tag_ == "VBD":
            collection[str(token.text).lower()] +=1
for i in docs:
get_top_gerunds(i, ger)
#print(ger)
###Output
_____no_output_____
###Markdown
Template 0 - "veni, vidi, vici" type of thing - example only
###Code
g = random.choice(list(ger.keys()))
g1 = random.choice(list(ger.keys()))
g2 = random.choice(list(ger.keys()))
s1 = "I have "
s1 += str(g).lower()
s1 += ", "
s1 += str(g1).lower()
s1 += " and "
s1 += str(g2).lower()
s1 += " without being scared. -Donald (the bot) Trump"
print(s1)
###Output
I have rigged, dispensed and profited without being scared. -Donald (the bot) Trump
###Markdown
Most common lemmatized verbs
###Code
c = Counter()
def get_top_verbs(corpus, collection):
for token in corpus:
if token.tag_ in ["VB", "VBG", "VBP", "VBD", "VBN", "VBZ"]:
collection[str(token.lemma_).lower()] +=1
for i in docs:
get_top_verbs(i, c)
###Output
_____no_output_____
###Markdown
Plot the most common verbs (lemmatized form)
###Code
a = c.most_common(20)
a = pd.DataFrame(a, columns = ["word", "frequency"])
plt.barh(a["word"], a["frequency"])
###Output
_____no_output_____
###Markdown
Random example:
x = random.choice(list(c.keys()))
x1 = random.choice(list(c.keys()))
s = "You can't make me "
s += str(x).lower()
s += " what you "
s += str(x1).lower()
s += ". - Donald (the bot) Trump"
print(s)
Most common nouns appearing after "be a"
###Code
c2 = Counter()
def get_words_after_is(corpus, collection):
for token in corpus:
try:
if token.text == "is" and corpus[token.i+1].text == "a":
                # ".pos" (no underscore) is spaCy's integer POS id, so this second check is effectively always true; ".pos_" gives the string label
                if corpus[token.i +2].tag_ == "NN" and corpus[token.i+2].pos:
collection[str(corpus[token.i +2]).lower()] += 1
except IndexError:
pass
for i in docs:
get_words_after_is(i, c2)
# plot the most common nouns appearing after IS (BE)
a = c2.most_common(20)
a = pd.DataFrame(a, columns = ["word", "frequency"])
plt.barh(a["word"], a["frequency"])
###Output
_____no_output_____
###Markdown
Get top nouns in singular and plural
###Code
c1 = Counter()
def get_top_nouns(corpus, collection):
for token in corpus:
if token.tag_ in ["NN", "NNS"]:
c1[str(token.text).lower()] +=1
# run the function
for i in docs:
get_top_nouns(i, c1)
c1.most_common(10)
###Output
_____no_output_____
###Markdown
Try noun_chunks
###Code
nc = set()
def get_noun_chunks(doc, collection):
for np in doc.noun_chunks:
collection.add(np.text)
get_noun_chunks(doc, nc)
#print(nc)
###Output
_____no_output_____
###Markdown
Functions
###Code
def get_dependencies(doc, collection, dep1 = None, dep2 = None, dep3 = None):
"""get the dependencies (up to 3) and store them in separate collections as lists
dependencies available are (examples): dobj, nsubj, csubj, aux, neg, ROOT, det, quantmod etc."""
try:
if dep2 == None:
for token in doc:
if token.dep_ == dep1:
collection.append([str(token.text)])
elif dep3 == None:
for token in doc:
if token.dep_ == dep1:
if doc[token.i +1].dep_ == dep2:
collection.append([str(token.text), str(doc[token.i+1].lemma_)])
else:
for token in doc:
if token.dep_ == dep1:
if doc[token.i +1].dep_ == dep2 and doc[token.i +2].dep_ == dep3:
collection.append([str(token.text), str(doc[token.i+1].text), str(doc[token.i+2].text)])
except IndexError:
pass
def get_tags(doc, collection, tag1, tag2):
    """collect pairs of consecutive tokens whose tags match tag1 and tag2"""
    for tag in doc:
        if tag.tag_ == tag1:  # compare against the parameter, not the literal string "tag1"
            if tag.i + 1 < len(doc) and doc[tag.i + 1].tag_ == tag2:
                collection.add((str(tag), str(doc[tag.i+1])))  # tuples are hashable, lists are not
def get_dependencies_lemmatized(doc, collection, dep1 = None, dep2 = None, dep3 = None):
"""get the dependencies (up to 3) and store them in separate collections as lists
dependencies available are (examples): dobj, nsubj, csubj, aux, neg, ROOT, det, quantmod etc."""
try:
if dep2 == None:
for token in doc:
if token.dep_ == dep1:
collection.append([str(token.text), str(doc[token.i+1])])
elif dep3 == None:
for token in doc:
if token.dep_ == dep1:
if doc[token.i +1].dep_ == dep2:
collection.append([str(token.text), str(doc[token.i+1].lemma_)])
else:
for token in doc:
if token.dep_ == dep1:
if doc[token.i +1].dep_ == dep2 and doc[token.i +2].dep_ == dep3:
collection.append([str(token.lemma_), str(doc[token.i+1]), str(doc[token.i+2])])
except IndexError:
pass
def deps_printout(sentence):
"""Prints out the text, tag, dep, head text, head tag, token lemma and part
of speech for each word in a sentence"""
doc1 = nlp(sentence)
for token in doc1:
print("{0}/{1} <--{2}-- {3}/{4} {5} {6}".format(
token.text, token.tag_, token.dep_, token.head.text, token.head.tag_, token.lemma_, token.pos_))
###Output
_____no_output_____
###Markdown
Templates:- In order to get a sense of what we need, there is a function that simply displays the dependency tree:
###Code
deps_printout("The beauty of me is that I am very rich.")
###Output
The/DT <--det-- beauty/NN the DET
beauty/NN <--nsubj-- is/VBZ beauty NOUN
of/IN <--prep-- beauty/NN of ADP
me/PRP <--pobj-- of/IN -PRON- PRON
is/VBZ <--ROOT-- is/VBZ be VERB
that/IN <--mark-- am/VBP that ADP
I/PRP <--nsubj-- am/VBP -PRON- PRON
am/VBP <--ccomp-- is/VBZ be VERB
very/RB <--advmod-- rich/JJ very ADV
rich/JJ <--acomp-- am/VBP rich ADJ
./. <--punct-- is/VBZ . PUNCT
###Markdown
Template 1
###Code
# get advmod + acomp dependency pairs used by the "The beauty of me is that I am ..." template
def get_ngrams_for_template1():
    adv_acomp = []
    for i in docs:
        # a single pass over the docs is enough; dep1/dep2 select an "advmod" token followed by an "acomp"
        get_dependencies(i, adv_acomp, dep1 = "advmod", dep2 = 'acomp')
    return adv_acomp
t1_src = get_ngrams_for_template1()
def template1(source):
# complete the sentence and return it
a1 = random.choice(source)
temp1 = "The beauty of me is that I am " + " ".join([x for x in a1])
temp1 += ". - Donald (the bot) Trump"
return temp1
template1(t1_src)
###Output
_____no_output_____
###Markdown
Template 2
###Code
def get_ngrams_for_template2():
# Get ROOT + AMOD + DOBJ IN THE FORM OF VERB + ADJ + NOUN
van = []
for i in docs:
for token in i:
if token.dep_ == "amod" and token.pos_ == "ADJ":
if i[token.i + 1].dep_ == "dobj" and i[token.i+1].pos_ == "NOUN" and i[token.i+1].tag_ == "NN" \
and i[token.i+2].dep_ == "punct":
van.append([str(token.text).lower(), str(i[token.i+1].text).lower()])
return van
t2_src = get_ngrams_for_template2()
def template2(source):
v = random.choice(source)
start = "Is there such a thing as an "
start1 = "Is there such a thing as "
start2 = "Is there such a thing as a "
end = "? - Donald (the bot) Trump"
    if len(v[1]) <= 2:
        # noun too short: draw another pair instead of falling through with temp2 undefined
        return template2(source)
    elif v[0][0] in ["a", "e", "i", "o", "u"]:
temp2 = start + " ".join([x for x in v]) + end
elif v[0] == "great":
temp2 = start1 + " ".join([x for x in v]) + end
else:
temp2 = start2 + " ".join([x for x in v]) + end
return temp2
template2(t2_src)
###Output
_____no_output_____
###Markdown
Template 3
###Code
# show the dependencies in the sample
deps_printout("My Twitter has become so powerful that I can actually make my enemies tell the truth.")
def get_ngrams_for_template3():
det_d = []
for i in docs:
get_dependencies_lemmatized(i, det_d, dep1 = "ccomp", dep2 = 'det', dep3 = "dobj")
return det_d
t3_src = get_ngrams_for_template3()
def template3(source1, source2):
a2 = random.choice(source1)
a3 = random.choice(source2)
t = "My Twitter has become an "
t_1 = "My Twitter has become a "
mid = ", I can actually make my enemies "
end = ". - Donald (the bot) Trump"
    if len(a3[0]) <= 2:
        # phrase too short: draw again instead of falling through with temp3 undefined
        return template3(source1, source2)
elif a3[0][0] in ["a", "e", "i", "o", "u"]:
#a = (*a3, sep=" ")
temp3 = t + " ".join([x for x in a3]) + mid + " ".join([x for x in a2]) + end
else:
temp3 = t_1 + " ".join([x for x in a3]) + mid + " ".join([x for x in a2]) + end
return temp3
template3(t3_src, t2_src)
###Output
_____no_output_____
###Markdown
Templates 4, 5 and 6
###Code
# see how the text looks like in dependencies
deps_printout("I think the only difference between me and the other candidates "
"is that I'm more beautiful and more attentive.")
# see how the text looks like in dependencies
deps_printout("I think the only difference between me and the other candidates "
"is that I'm better and cheaper.")
def get_ngrams_for_template4():
jjr = []
for i in docs:
for token in i:
if token.dep_ == "ccomp" and token.tag_ == "VBP":
if i[token.i+1].tag_ == "JJR" and i[token.i+1].text != "more":
jjr.append(str(i[token.i+1]).lower())
jjs = []
for i in docs:
for token in i:
if token.dep_ == "det":
if i[token.i+1].tag_ == "JJS" and i[token.i+1].text != "most":
jjs.append(str(i[token.i+1]).lower())
jj = []
for i in docs:
for token in i:
if token.dep_ == "advmod" and token.tag_ == "RBR":
if i[token.i+1].tag_ == "JJ" and i[token.i+1].text != "more":
jj.append(str(i[token.i+1]).lower())
jj1 = set(jj)
jj = list(jj1)
jjr1 = set(jjr)
jjr1.remove("less")
jjr = list(jjr1)
jjs1 = set(jjs)
jjs = list(jjs1)
return jj, jjr, jjs
t4_src, t5_src, t6_src = get_ngrams_for_template4()
def template4(source):
jj = random.choice(source)
jj1 = random.choice(source)
start = "I think the only difference between me and other candidates is that I'm more "
mid = " and more "
end = ". - Donald (the bot) Trump"
temp4 = start + jj + mid + jj1 + end
return temp4
def template5(source):
jjr = random.choice(source)
jjr1 = random.choice(source)
start = "I think the only difference between me and other candidates is that I'm "
mid = " and "
end = ". - Donald (the bot) Trump"
temp5 = start + jjr + mid + jjr1 + end
return temp5
def template6(source):
jjs = random.choice(source)
start = "I am NOT a shmuck. I am the "
end = " there is. - Donald (the bot) Trump"
temp6 = start + jjs + end
return temp6
template4(t4_src)
template5(t5_src)
template6(t6_src)
###Output
_____no_output_____
###Markdown
Template 7
###Code
def get_ngrams_for_temp7():
obj = []
for i in docs:
get_dependencies(i, obj, dep1 = "det", dep2 = "dobj")
return obj
t7_src = get_ngrams_for_temp7()
def template7(source):
o = random.choice(source)
start = "I would say I'm the all-time judge of "
end = ". - Donald (the bot) Trump"
temp7 = start + " ".join([x for x in o]) + end
return temp7
template7(t7_src)
###Output
_____no_output_____
###Markdown
FINAL FUNCTION - randomizes the template
###Code
# each template function is called once here, so `functions` holds pre-generated sentences (strings)
functions = [template1(t1_src), template2(t2_src), template3(t3_src, t2_src),
             template4(t4_src), template5(t5_src), template6(t6_src), template7(t7_src)]
def temp_output(funcs):
    """choose at random from the list of pre-generated template outputs"""
    f = random.choice(funcs)
    return f
temp_output(functions)
###Output
_____no_output_____ |
section2/lecture15_arith.ipynb | ###Markdown
Vector Arithmetic
###Code
import numpy as np
#create numpy arrays
x = np.array([1,2,3])
y = np.array([2,3,4])
###Output
_____no_output_____
###Markdown
addition
###Code
print(x+y)
#scalar addition
print(x+2)
###Output
[3 4 5]
###Markdown
subtract
###Code
print(y-x)
###Output
[1 1 1]
###Markdown
multiply
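For the vectors defined above, the element-wise (Hadamard) product is [1*2, 2*3, 3*4] = [2, 6, 12], while the dot product sums those terms: 1*2 + 2*3 + 3*4 = 20.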
###Code
# Hadamard (element-wise) product
print(x*y)
#dot product
print(np.dot(y,x))
print(y/x)
###Output
[2 1 1]
|
alc/ep2/ep2.ipynb | ###Markdown
Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\rightarrow$Run All).Make sure you fill in any place that says `YOUR CODE HERE` or "YOUR ANSWER HERE", as well as your name and collaborators below:
###Code
NAME = "Breno Poggiali de Sousa"
COLLABORATORS = ""
###Output
_____no_output_____
###Markdown
Practical Exercise 2: Truncated SVD. In this exercise we will study the approximations obtained by the truncated SVD. Let's start by loading the 20_newsgroups data, as seen in class.
###Code
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn import decomposition
from scipy import linalg
import matplotlib.pyplot as plt
from sklearn.feature_extraction.text import CountVectorizer
%matplotlib inline
np.set_printoptions(suppress=True)
categories = ['alt.atheism', 'talk.religion.misc', 'comp.graphics', 'sci.space']
remove = ('headers', 'footers', 'quotes')
newsgroups_train = fetch_20newsgroups(subset='train', categories=categories, remove=remove)
vectorizer = CountVectorizer(stop_words='english')
vectors = vectorizer.fit_transform(newsgroups_train.data).todense() # (documents, vocab)
vectors.shape #, vectors.nnz / vectors.shape[0], row_means.shape
%time U, s, Vt = linalg.svd(vectors, full_matrices=False)
###Output
CPU times: user 2min 48s, sys: 50.1 s, total: 3min 38s
Wall time: 30.3 s
###Markdown
Question 1. Plot a curve containing the singular values $s$.
###Code
# solution to Question 1
plt.plot(s)
plt.ylabel('singular values')
###Output
_____no_output_____
###Markdown
Question 2. Repeat the plot from the previous question, but this time try to "zoom in" to show where the knee of the curve is, that is, the point from which the values become very small. For this, you can take a slice of ```s```, or use the function ```plt.xlim```.
###Code
# solution to Question 2
plt.plot(s[150:])
plt.ylabel('singular values')
###Output
_____no_output_____
###Markdown
Question 3. Let $A$ be an $m \times n$ matrix. The reduced SVD of $A$ returns $U_{m \times k}$, $\Sigma_{k \times k}$ and $V^\top_{k \times n}$, where $k = \min(m,n)$. The truncated SVD of rank $r < \min(m,n)$, on the other hand, returns only the first $r$ columns of $U$, the $r$ largest singular values of $\Sigma$ and the first $r$ rows of $V^\top$. An important property of the truncated SVD is that it returns the best approximation $A_r$ to a matrix $A$ among all matrices of rank $r$, where the quality of the approximation is measured by $\| A - A_r \|_F$, with $\| B \|_F = \sqrt{\sum_i \sum_j B_{i,j}^2}$ being the Frobenius norm of a matrix $B$. In this question, we will see how the quality of the approximation increases with $r$, varying $r$ over $\{1,2,4,\ldots,2^7\}$. First, we will find the reduced SVD decomposition of the matrix ```vectors```. After that, we will vary the number $r$ of singular values considered to find approximations $A_r$ and, finally, compute $\| A - A_r \|_F$. To make this problem easier to solve, part of the code has already been provided. To compute the Frobenius norm, see the documentation of ```np.linalg.norm```.
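A useful sanity check before filling in the loop below (a sketch, using the `U`, `s`, `Vt` already computed above): the Frobenius error of the rank-$r$ truncation equals the square root of the sum of the squared discarded singular values, so it can also be read off directly from `s`.

```python
# sketch: ||A - A_r||_F equals sqrt(sum of the squared discarded singular values)
r = 16
A_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
err_direct = np.linalg.norm(vectors - A_r)   # the Frobenius norm is the default for 2-D arrays
err_from_s = np.sqrt(np.sum(s[r:]**2))
print(err_direct, err_from_s)                # the two values should agree up to floating point
```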
###Code
m,n = vectors.shape
k = min(m,n)
A_r = np.zeros((m,n))
erro = np.zeros(8)
r_values = 2**np.arange(8)
for i in range(len(r_values)):
r = r_values[i] # r = 2^i
    U_r = U[:, :r] # U_r = first r columns of U
    s_r = np.diag(s[:r]) # s_r = diagonal matrix with the r largest singular values
    Vt_r = Vt[:r,] # Vt_r = first r rows of Vt
    A_r = U_r@s_r@Vt_r # rank-r SVD approximation
    erro[i] = np.linalg.norm(vectors - A_r) # erro[i] = norm || A - A_r ||
# code to plot the error vector
plt.plot(r_values,erro)
plt.ylabel(r'Error $\|A-A_r\|_F$')
plt.xlabel('r')
###Output
_____no_output_____
###Markdown
Question 4. Let's create and implement a heuristic for choosing $k$. We want to obtain a low-dimensional representation of dimension $k$ for the matrix ```vectors```. Implement a function that receives a vector of singular values in decreasing order and returns the number of singular values that are greater than or equal to 2x the mean. (Hint: you can use ```np.mean```.)
###Code
# solution to Question 4
def escolheK(s):
    """ Returns the integer k: the number of singular values that are at least 2x the mean.
    Input:
        s is a vector containing the singular values in decreasing order
    """
k = 0
mean_2 = np.mean(s) * 2
for i in s:
if i < mean_2:
return k
k = k+1
s_example = np.hstack((np.arange(1000,100,-100),np.arange(100,10,-10),np.arange(10,1,-1)))
print(s_example)
assert escolheK(s_example) == 6
assert escolheK(s) == 191
###Output
[1000 900 800 700 600 500 400 300 200 100 90 80 70 60
50 40 30 20 10 9 8 7 6 5 4 3 2]
|
COVID_tests.ipynb | ###Markdown
Testing strategies for an infection test at low infection ratesInspired by this [blog post](http://www.bureauwo.com/uncategorized/te-weinig-tests-gebruik-ze-slimmer/) (in Dutch) I decided to look at simple versions of testing strategies for infection tests, in a rather quick-and-dirty way. Disclaimer: besides the exercises below being of almost trivial over-simplicity, I’m a data scientist and not an epidemiologist. Please believe specialist and not me! This is written up in a [blog post](http://www.marcelhaas.com/index.php/2020/04/07/test-for-covid-19-in-groups-and-be-done-much-quicker/), too.The idea is that if being infected (testing positive) is rare, you could start out by testing in a large group. If there's no infection in that whole group you're done with one test for all of them! If there, on the other hand, *is* an infection, you can cut the group in two and do the same for both halves. You can continue this process until you have isolated the few individuals that are infected.It is clear, though, that many people get tested more than once and especially the infected people are getting tested quite a number of times. Therefore, this is only going to help if a relatively low number of people is infected. Here, I look at the numbers with very simple simulations.I create batches of people. Randomly, some fraction gets assigned "infected", the rest is "healthy". Then I start the test, which I assume to be perfect (i.e. every infected person gets detected and there are no false positives). For different infection rates (true percentage overall that is infected), and for different original batch sizes (the size of the group that initially gets tested) I study how many tests are needed to isolate every single infected person.Normally, by testing people one by one, you would need as many tests as people to do this. To investigate the gain by group testing, I divide the number of tests the simulation needs by this total number. The total number of people that can be tested is a factor *gain* higher, given a maximum number of tests, like we have available in the Netherlands.Here goes!
###Code
# assumed imports for this notebook (r is taken to be numpy.random, used via r.seed / r.random below)
import numpy as np
import numpy.random as r
import matplotlib.pyplot as plt

def n_tests_necessary(infections, n_tests=0):
    """Determine the number of tests necessary,
    assuming that we start with the whole group
    and subsequently cut it into two equally large groups
    if there is an infection.
    Returns the total number of tests used to find every
    single infected person."""
n_tests +=1
# Cut the group in two if necessary and call this same
# function on both subgroups, tracking number of tests
if infections.sum() >= 1 and len(infections) > 1:
cut = int(len(infections)/2)
n_tests = n_tests_necessary(infections[:cut], n_tests=n_tests)
n_tests = n_tests_necessary(infections[cut:], n_tests=n_tests)
return n_tests
###Output
_____no_output_____
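As a quick sanity check of `n_tests_necessary` on a tiny group (a sketch with made-up data): with 8 people and a single infected person, the halving strategy needs 7 tests instead of 8 individual ones.

```python
# tiny example: 8 people, only person 5 infected
group = np.zeros(8, dtype=bool)
group[5] = True
print(n_tests_necessary(group))   # 7 tests, versus 8 when testing one by one
```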
###Markdown
With the functionality above, I set up a test for the whole of the Netherlands (let's pretend we can in fact test everybody, how cool would that be?!) below. I test in groups of 256 people (conveniently a power of two, but this is not at all necessary!) and assume an overall infection rate of 1%. Let's see how much we can gain.
###Code
# Parameters
n_per_batch = 2**8
# n_batches = int(17e6/n_per_batch) # Do the whole of the Netherlands
n_batches = 10000 # Test fewer
n_people = n_batches * n_per_batch
fraction_inf = 0.01
# Set up population
r.seed(42)
people = np.zeros(n_people) + r.random(size=n_people)
infected = (people < fraction_inf).reshape(n_batches, -1)
nt = np.apply_along_axis(n_tests_necessary, 1, infected)
print(f"{int(nt.sum())} tests necessary to test {n_people} people in batches of {n_per_batch} with a infected fraction of {fraction_inf}.")
print(f"Factor in lowering the number of tests: {n_people/nt.sum():3.2f}")
###Output
339486 tests necessary to test 2560000 people in batches of 256 with an infected fraction of 0.01.
Factor in lowering the number of tests: 7.54
###Markdown
Almost a factor of 8! Not too bad. Apparently we can easily test a factor of 8 more people with the same amount of tests than we do now. That is, given that my assumptions weren't too far off (and I believe they aren't too crazy).Now let's see how that gain factor depends on the overall infected fraction and the batch size we use. The results are shown in the graph below.
###Code
labels = ["0.1% infected", "0.5% infected", "1% infected", "5% infected", "10% infected"]
factors = np.zeros(shape=(len(labels), 13))
n_batches = 10000 # simulate a subset of batches (not the whole of the Netherlands)
r.seed(42)
batchnumbers = 2**np.arange(1, 11)
for ifr, fraction_inf in enumerate([.001, .005, .01, .05, .1]):
for iba, n_per_batch in enumerate(batchnumbers):
n_people = n_batches * n_per_batch
# Set up population
people = np.zeros(n_people) + r.random(size=n_people)
infected = (people < fraction_inf).reshape(n_batches, -1)
nt = np.apply_along_axis(n_tests_necessary, 1, infected)
factors[ifr, iba] = (n_people/nt.sum())
plt.plot(batchnumbers[:10], factors[ifr][:10], 'o-', label=labels[ifr])
plt.legend(loc=(.6, .4), fontsize=12)
plt.xlabel('Batch sizes of tested people', fontsize=12)
plt.ylabel('Factor gain in number of people tested\n with the same number of tests', fontsize=12);
plt.savefig('COVID_gainfactors.png', dpi=400)
###Output
_____no_output_____ |
exercise2-line-charts.ipynb | ###Markdown
**This notebook is an exercise in the [Data Visualization](https://www.kaggle.com/learn/data-visualization) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/line-charts).**--- In this exercise, you will use your new knowledge to propose a solution to a real-world scenario. To succeed, you will need to import data into Python, answer questions using the data, and generate **line charts** to understand patterns in the data. Scenario. You have recently been hired to manage the museums in the City of Los Angeles. Your first project focuses on the four museums pictured in the images below. You will leverage data from the Los Angeles [Data Portal](https://data.lacity.org/) that tracks monthly visitors to each museum. Setup. Run the next cell to import and configure the Python libraries that you need to complete the exercise.
###Code
import pandas as pd
pd.plotting.register_matplotlib_converters()
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
print("Setup Complete")
###Output
Setup Complete
###Markdown
The questions below will give you feedback on your work. Run the following cell to set up the feedback system.
###Code
# Set up code checking
import os
if not os.path.exists("../input/museum_visitors.csv"):
os.symlink("../input/data-for-datavis/museum_visitors.csv", "../input/museum_visitors.csv")
from learntools.core import binder
binder.bind(globals())
from learntools.data_viz_to_coder.ex2 import *
print("Setup Complete")
###Output
Setup Complete
###Markdown
Step 1: Load the data. Your first assignment is to read the LA Museum Visitors data file into `museum_data`. Note that:
- The filepath to the dataset is stored as `museum_filepath`. Please **do not** change the provided value of the filepath.
- The name of the column to use as row labels is `"Date"`. (This can be seen in cell A1 when the file is opened in Excel.)

To help with this, you may find it useful to revisit some relevant code from the tutorial, which we have pasted below:
```python
# Path of the file to read
spotify_filepath = "../input/spotify.csv"

# Read the file into a variable spotify_data
spotify_data = pd.read_csv(spotify_filepath, index_col="Date", parse_dates=True)
```
The code you need to write now looks very similar!
###Code
# Path of the file to read
museum_filepath = "../input/museum_visitors.csv"
# Fill in the line below to read the file into a variable museum_data
museum_data = pd.read_csv(museum_filepath,index_col='Date',parse_dates=True)
# Run the line below with no changes to check that you've loaded the data correctly
step_1.check()
# Uncomment the line below to receive a hint
#step_1.hint()
# Uncomment the line below to see the solution
#step_1.solution()
###Output
_____no_output_____
###Markdown
Step 2: Review the data. Use a Python command to print the last 5 rows of the data.
###Code
# Print the last five rows of the data
museum_data.tail() # Your code here
###Output
_____no_output_____
###Markdown
The last row (for `2018-11-01`) tracks the number of visitors to each museum in November 2018, the next-to-last row (for `2018-10-01`) tracks the number of visitors to each museum in October 2018, _and so on_. Use the last 5 rows of the data to answer the questions below.
###Code
# Fill in the line below: How many visitors did the Chinese American Museum
# receive in July 2018?
ca_museum_jul18 = 2620
# Fill in the line below: In October 2018, how many more visitors did Avila
# Adobe receive than the Firehouse Museum?
avila_oct18 = 14658
# Check your answers
step_2.check()
# Lines below will give you a hint or solution code
#step_2.hint()
#step_2.solution()
###Output
_____no_output_____
###Markdown
Step 3: Convince the museum board The Firehouse Museum claims they ran an event in 2014 that brought an incredible number of visitors, and that they should get extra budget to run a similar event again. The other museums think these types of events aren't that important, and budgets should be split purely based on recent visitors on an average day. To show the museum board how the event compared to regular traffic at each museum, create a line chart that shows how the number of visitors to each museum evolved over time. Your figure should have four lines (one for each museum).> **(Optional) Note**: If you have some prior experience with plotting figures in Python, you might be familiar with the `plt.show()` command. If you decide to use this command, please place it **after** the line of code that checks your answer (in this case, place it after `step_3.check()` below) -- otherwise, the checking code will return an error!
###Code
# Line chart showing the number of visitors to each museum over time
# Set the width and height of the figure
plt.figure(figsize=(12,6))
# Line chart showing the number of visitors to each museum over time
sns.lineplot(data=museum_data)
# Add title
plt.title("Monthly Visitors to Los Angeles City Museums")
# Check your answer
step_3.check()
# Lines below will give you a hint or solution code
#step_3.hint()
#step_3.solution_plot()
###Output
_____no_output_____
###Markdown
Step 4: Assess seasonality. When meeting with the employees at Avila Adobe, you hear that one major pain point is that the number of museum visitors varies greatly with the seasons, with low seasons (when the employees are perfectly staffed and happy) and also high seasons (when the employees are understaffed and stressed). You realize that if you can predict these high and low seasons, you can plan ahead to hire some additional seasonal employees to help out with the extra work. Part A. Create a line chart that shows how the number of visitors to Avila Adobe has evolved over time. (_If your code returns an error, the first thing that you should check is that you've spelled the name of the column correctly! You must write the name of the column exactly as it appears in the dataset._)
###Code
# Line plot showing the number of visitors to Avila Adobe over time
# Your code here
# Set the width and height of the figure
plt.figure(figsize=(12,6))
# Line chart showing the number of visitors to each museum over time
sns.lineplot(data=museum_data['Avila Adobe'])
# Add title
plt.title("Monthly Visitors to Avila Adobe")
# Add label for horizontal axis
plt.xlabel("Date")
# Check your answer
step_4.a.check()
# Lines below will give you a hint or solution code
#step_4.a.hint()
#step_4.a.solution_plot()
###Output
_____no_output_____
###Markdown
Part B. Does Avila Adobe get more visitors: in September-February (in LA, the fall and winter months), or in March-August (in LA, the spring and summer)? Using this information, when should the museum staff additional seasonal employees?
###Code
#step_4.b.hint()
# Check your answer (Run this code cell to receive credit!)
step_4.b.solution()
###Output
_____no_output_____ |
data/euclid_generation_example/HST2Euclid.ipynb | ###Markdown
Simulate Euclid Images Using HST Ones. In this notebook, we are going to simulate, step by step, a Euclid space telescope image using an HST one. First things first, we start by preparing the workspace.
###Code
# to correctly show figures
%matplotlib inline
# import libraries here
import galsim
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Here we are going to load the Euclid and HST required parameters.> Euclid telescope specifications can be found [here](https://github.com/LSSTDESC/WeakLensingDeblending/blob/9f851f79f6f820f815528d11acabf64083b6e111/descwl/survey.pyL366).
###Code
pixel_scale = 0.101
wcs = galsim.wcs.PixelScale(pixel_scale) #wcs: world coordinate system. Variable used to draw images in galsim
lam = 700 # nm
diam = 1.3 # meters
lam_over_diam = (lam * 1.e-9) / diam # radians
lam_over_diam *= 206265 # Convert to arcsec
exp_time = 2260# exposure time
euclid_eff_area = 1.15 #effective area
###Output
_____no_output_____
###Markdown
Load the [COSMOS](https://github.com/GalSim-developers/GalSim/wiki/RealGalaxy%20Data) catalog and generate a galaxy and a PSF.
###Code
catalog = galsim.COSMOSCatalog() # load catalog
img_len = 64 # arbitrary value, practical because power of 2
gal_ind = 133 # galaxy index in the catalog
gal = catalog.makeGalaxy(gal_ind, noise_pad_size=img_len * pixel_scale * np.sqrt(2))
psf = galsim.OpticalPSF(lam=lam, diam=diam, scale_unit=galsim.arcsec)
###Output
_____no_output_____
###Markdown
Now that we have loaded a galaxy from the catalog, let's rescale its flux such that it corresponds to a Euclid flux.> The flux rescaling formula can be found [here](https://github.com/GalSim-developers/GalSim/blob/releases/2.2/examples/demo11.pyL110).
###Code
hst_eff_area = 2.4**2 * (1.-0.33**2)
flux_scaling = (euclid_eff_area/hst_eff_area) * exp_time
gal *= flux_scaling
###Output
_____no_output_____
###Markdown
Apply the simulated Euclid PSF on the galaxy image.
###Code
gal = galsim.Convolve(gal, psf)
###Output
_____no_output_____
###Markdown
Let's have a look at the galaxy and the PSF. In the galaxy image, denoted $X$, we try to visually separate the noise (whose standard deviation is denoted $\sigma$) from the useful signal by applying the following transform:\begin{equation}\text{ArcSinh}\left(\frac{X}{k\sigma}\right)\cdot k\sigma\end{equation}> Technical precision: the noise standard deviation is usually estimated with more accurate methods, such as using a window to mask the galaxy and then estimating the standard deviation on the rest of the samples, which contain only noise. For the sake of simplicity, in this example we only considered an area of the image that contains only noise and estimated the noise standard deviation in it.
###Code
# Get the standard deviation value of the noise for real images
gal_im = gal.drawImage(wcs=wcs, nx=img_len,ny=img_len)
# Empirically estimate the standard deviation by considering a part of the image containing only noise
hst_std = np.std(gal_im.array[0:25,0:25])
k=4
plt.figure(figsize=(20,20))
plt.subplot(121)
plt.title('ArcSinh of Convolved COSMOS Galaxy {}'.format(gal_ind))
plt.imshow(np.arcsinh(gal_im.array/(k*hst_std))*k*hst_std)
plt.subplot(122)
plt.imshow(np.log10(psf.drawImage(wcs=wcs, nx=img_len,ny=img_len).array))
plt.title(r'Log$_{10}$ Euclid-like PSF')
plt.show()
###Output
_____no_output_____
###Markdown
The noise that we see in the image above corresponds to HST noise (which is also correlated, due to the division by the HST PSF and the multiplication by the Euclid-like one); we are going to adapt this noise to Euclid. First we compute the Euclid global noise standard deviation. To do so, we compute $\lambda$ (in electrons per pixel), the Poisson parameter of the noise, and approximate it with white Gaussian noise whose standard deviation is $\sqrt{\lambda}$.> The $\lambda$ parameter corresponds to the `mean_sky_level`, whose expression can be found [here](https://github.com/LSSTDESC/WeakLensingDeblending/blob/9f851f79f6f820f815528d11acabf64083b6e111/descwl/survey.pyL110)
###Code
def get_flux(ab_magnitude):
zero_point = 6.85
return exp_time*zero_point*10**(-0.4*(ab_magnitude-24))
sky_brightness = 22.9207
pixel_scale = 0.101
mean_sky_level = get_flux(sky_brightness)*pixel_scale**2 # it is the Poisson noise parameter
sigma = np.sqrt(mean_sky_level) # we modelize the noise as a Gaussian noise such that it std
# is the sqrt of the Poisson parameter
print('Euclid global noise standard deviation: {:.2f}'.format(sigma))
###Output
Euclid global noise standard deviation: 20.66
###Markdown
Then we estimate the HST noise standard deviation and take it into account while adding the noise, such that we end up with the Euclid noise standard deviation.> Reminder: For independent random variables, the variance of the sum of those variables is equal to the sum of the variances.
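In the notation of this notebook, that reminder translates into choosing the added noise level so that the variances add up to the Euclid target (this is exactly what `delta_std` computes in the next cell):\begin{equation}\sigma_{\mathrm{added}} = \sqrt{\sigma_{\mathrm{Euclid}}^2 - \sigma_{\mathrm{HST}}^2}\quad\Rightarrow\quad\sigma_{\mathrm{HST}}^2 + \sigma_{\mathrm{added}}^2 = \sigma_{\mathrm{Euclid}}^2\end{equation}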
###Code
# Add noise
delta_std = np.sqrt(sigma**2 - hst_std**2)
random_seed = 24783923 #same as galsim demo 11
noise = galsim.GaussianNoise(galsim.BaseDeviate(random_seed), sigma=delta_std)
gal_im.addNoise(noise)
image = gal_im.array
###Output
_____no_output_____
###Markdown
Now that we simulated the Euclid observed image, let's show it and estimate its noise standard deviation as a check.
###Code
plt.figure(2, figsize=(10,10))
# apply the same ArcSinh stretch as above: arcsinh(X / (k*sigma)) * k*sigma
plt.imshow(np.arcsinh(image/(k*sigma))*k*sigma)
plt.show()
print('Standard Deviation Value of Euclid Simulated Image: {:.2f}'.format(np.std(image[0:25,0:25])))
###Output
_____no_output_____ |
Language Identification from Very Short Strings - Load Model from Weights.ipynb | ###Markdown
Abdul Wahab. Natural Language Identification (Embedded Devices) - Using a Deep Neural Network. In this project, I pulled text data from TED Talks in 63 languages. I converted the text into its binary representation, 4 bytes for each letter (utf-8 encoding). Using TensorFlow, I trained a simple deep neural network to classify the input language. I achieved 91% accuracy with the 17 most widely spoken languages and 80% accuracy with all 56 languages. Dataset: https://www.kaggle.com/wahabjawed/text-dataset-for-63-langauges
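As a small illustration of that encoding (a sketch mirroring the `convertTextToBinary` helper defined below): each character is mapped to its Unicode code point and zero-padded to a fixed 32-bit string, and the whole input is then left-padded to `MAX_INPUT_LENGTH * 32` bits.

```python
# sketch of the per-character encoding used below
letter = "m"
code_point = ord(letter)               # 109
bits = bin(code_point)[2:].zfill(32)   # '00000000000000000000000001101101'
print(code_point, bits, len(bits))
```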
###Code
# Required libraries
%config IPCompleter.greedy=True
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
tf.get_logger().setLevel('ERROR')
from tensorflow.compat.v2.keras.models import Sequential
from tensorflow.compat.v2.keras.layers import Dense,Dropout
from tensorflow.compat.v2.keras import initializers, optimizers
import numpy as np
import pandas as pd
import re
from unidecode import unidecode
from array import array
from nltk.tokenize import sent_tokenize
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
from sklearn.model_selection import cross_val_score
import os
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Configuration
###Code
# Map language index to natural language
labels_extended = {
0: ['Vietnamese','vi'], 1:['Albanian','sq'], 2:['Arabic','ar'],
3: ['Armenian','hy'], 4: ['Azerbaijani','az'],
5: ['Belarusian','be'],6: ['Bengali','bn'],
7: ['Bosnian','bs'], 8: ['Bulgarian','bg'],
9: ['Burmese','my'], 10: ['Catalan', 'ca'],
11: ['Chinese Simplified','zh-cn'], 12: ['Chinese Traditional','zh-tw'],
13: ['Chinese Yue','zh'], 14: ['Croatian','hr'],
15: ['Czech','cs'], 16: ['Danish','da'],
17: ['Dutch','nl'], 18: ['English','en'],
19: ['Esperanto','eo'], 20: ['Estonian','et'],
21: ['Finnish','fi'], 22:['French','fr'],
23: ['Galician','gl'], 24: ['Georgian','ka'],
25: ['German','de'],26: ['Urdu','ur'],
27: ['Gujarati','gu'], 28: ['Hebrew','he'],
29: ['Hindi','hi'], 30: ['Hungarian', 'hu'],
31: ['Indonesian','id'], 32: ['Italian','it'],
33: ['Japanese','ja'], 34: ['Korean','ko'],
35: ['Latvian','lv'], 36: ['Lithuanian','lt'],
37: ['Macedonian','mk'], 38: ['Malay','ms'],
39: ['Marathi','mr'], 40: ['Mongolian','mn'],
    41: ['Norwegian','nb'], 42: ['Persian','fa'],  # 'fa' is the ISO 639-1 code for Persian
43: ['Polish','pl'], 44: ['Portuguese','pt'],
45: ['Romanian','ro'],46: ['Russian','ru'],
47: ['Serbian','sr'], 48: ['Slovak','sk'],
49: ['Slovenian','sl'], 50: ['Spanish', 'es'],
51: ['Swedish','sv'], 52: ['Tamil','ta'],
53: ['Thai','th'], 54: ['Turkish','tr'],
55: ['Ukrainian','uk']
}
labels_standard = {
0: ['Indonesian','id'], 1:['English','en'], 2:['German','de'],
3: ['Turkish','tr'],4:['Hindi','hn'],
5: ['Spanish','es'],6: ['Bengali','bn'],
7: ['French','fr'], 8: ['Italian','it'],
9: ['Dutch','nl'], 10: ['Portuguese', 'pt'],
11: ['Swedish','sv'], 12: ['Russian','ru'],
13: ['Czech','cs'], 14: ['Arabic','ar'],
15: ['Chinese Traditional','zh-cn'],16: ['Persian','fa']
}
#['STANDARD','EXTENDED']
# STANDARD supports 16 languages
# EXTENDED supports 56 languages
TYPE = 'STANDARD'
# assign number of languages to process
if(TYPE =='STANDARD'):
LABEL = labels_standard
else:
LABEL = labels_extended
# regular expression pattern used to filter out data
pattern = r'[^\w\s]+|[0-9]'
# Max length of input text
MAX_INPUT_LENGTH = 13
#MAX data length for each language to balnace the dataset
MAX_LENGTH_DATA = 300000
###Output
_____no_output_____
###Markdown
Helper Functions
###Code
# Helper Functions
def clean_sentences(sentences):
'''
    Goal: Remove punctuation and digits using the regular expression pattern defined above.
    @param sentences: (str) text to clean; despite the name, a single string is expected,
    since re.sub operates on one string at a time. Tokenize the raw transcription by
    sentence beforehand if sentence-level cleaning is needed.
'''
return re.sub(pattern,'',sentences)
def convertTextToBinary(word):
word_vec = []
vec = ''
n = len(word)
for i in range(n):
current_letter = word[i]
ind = ord(current_letter)
placeholder = bin(ind)[2:].zfill(32)
vec = vec + placeholder
vec = vec.zfill(32*MAX_INPUT_LENGTH)
for digit in vec:
word_vec.append(int(digit))
return word_vec
###Output
_____no_output_____
###Markdown
Deep Neural Network - Helper Function
###Code
def createModelStandard():
initializer = initializers.he_uniform()
model = Sequential()
model.add(Dense(416, activation='relu', kernel_initializer=initializer, input_dim=MAX_INPUT_LENGTH*32))
model.add(Dense(512, activation='relu', kernel_initializer=initializer))
model.add(Dense(128, activation='relu', kernel_initializer=initializer))
model.add(Dropout(0.15))
model.add(Dense(len(LABEL), activation='softmax'))
model.summary()
model.compile(loss='sparse_categorical_crossentropy', optimizer=tf.keras.optimizers.Adam(1e-3), metrics=['accuracy'])
return model
def createModelExtended():
initializer = initializers.he_uniform()
model = Sequential()
model.add(Dense(416, activation='relu', kernel_initializer=initializer, input_dim=MAX_INPUT_LENGTH*32))
model.add(Dense(1024, activation='relu', kernel_initializer=initializer))
model.add(Dense(256, activation='relu', kernel_initializer=initializer))
model.add(Dropout(0.15))
model.add(Dense(len(LABEL), activation='softmax'))
model.summary()
model.compile(loss='sparse_categorical_crossentropy', optimizer=tf.keras.optimizers.Adam(1e-3), metrics=['accuracy'])
return model
def loadWeights():
model.load_weights(f'weights/{TYPE}/weights_{TYPE}.chk')
def detectLanguage(text, model):
#test for results
if len(text) > MAX_INPUT_LENGTH:
text = text[:MAX_INPUT_LENGTH]
text = clean_sentences(text)
word_vec = convertTextToBinary(text)
word_vec =np.array(word_vec,dtype='float32')
word_vec = np.reshape(word_vec, (1,word_vec.shape[0]))
output = model.predict(word_vec)
digit = np.argmax(output[0])
print(f"the language for input {text}: {LABEL[digit][0]}")
for i in range(len(LABEL)):
lang = LABEL[i][0]
score = output[0][i]
print(lang + ': ' + str(round(100*score, 2)) + '%')
print('\n')
###Output
_____no_output_____
###Markdown
Deep Neural Network - Load Weights From Disk
###Code
# create model
if(TYPE =='STANDARD'):
model = createModelStandard()
else:
model = createModelExtended()
# load weights
loadWeights()
#test for results
text_arr = ['father','মানবতা','بچے','الأطفال','إنسانية','mänskligheten']
for text in text_arr:
detectLanguage(text, model)
###Output
WARNING: AutoGraph could not transform <function Model.make_predict_function.<locals>.predict_function at 0x15ca23a70> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: 'arguments' object has no attribute 'posonlyargs'
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
the language for input father: English
Indonesian: 0.0%
English: 99.91%
German: 0.0%
Turkish: 0.0%
Hindi: 0.01%
Spanish: 0.0%
Bengali: 0.0%
French: 0.01%
Italian: 0.0%
Dutch: 0.07%
Portuguese: 0.0%
Swedish: 0.01%
Russian: 0.0%
Czech: 0.0%
Arabic: 0.0%
Chinese Traditional: 0.0%
Persian: 0.0%
the language for input মনবত: Bengali
Indonesian: 0.0%
English: 0.0%
German: 0.0%
Turkish: 0.0%
Hindi: 0.0%
Spanish: 0.0%
Bengali: 100.0%
French: 0.0%
Italian: 0.0%
Dutch: 0.0%
Portuguese: 0.0%
Swedish: 0.0%
Russian: 0.0%
Czech: 0.0%
Arabic: 0.0%
Chinese Traditional: 0.0%
Persian: 0.0%
the language for input بچے: Persian
Indonesian: 0.0%
English: 0.0%
German: 0.0%
Turkish: 0.0%
Hindi: 0.0%
Spanish: 0.0%
Bengali: 0.0%
French: 0.0%
Italian: 0.0%
Dutch: 0.0%
Portuguese: 0.0%
Swedish: 0.0%
Russian: 0.0%
Czech: 0.0%
Arabic: 0.0%
Chinese Traditional: 0.0%
Persian: 100.0%
the language for input الأطفال: Arabic
Indonesian: 0.0%
English: 0.0%
German: 0.0%
Turkish: 0.0%
Hindi: 0.0%
Spanish: 0.0%
Bengali: 0.0%
French: 0.0%
Italian: 0.0%
Dutch: 0.0%
Portuguese: 0.0%
Swedish: 0.0%
Russian: 0.0%
Czech: 0.0%
Arabic: 100.0%
Chinese Traditional: 0.0%
Persian: 0.0%
the language for input إنسانية: Arabic
Indonesian: 0.0%
English: 0.0%
German: 0.0%
Turkish: 0.0%
Hindi: 0.0%
Spanish: 0.0%
Bengali: 0.0%
French: 0.0%
Italian: 0.0%
Dutch: 0.0%
Portuguese: 0.0%
Swedish: 0.0%
Russian: 0.0%
Czech: 0.0%
Arabic: 100.0%
Chinese Traditional: 0.0%
Persian: 0.0%
the language for input mänskligheten: Swedish
Indonesian: 0.0%
English: 0.0%
German: 0.0%
Turkish: 0.0%
Hindi: 0.0%
Spanish: 0.0%
Bengali: 0.0%
French: 0.0%
Italian: 0.0%
Dutch: 0.0%
Portuguese: 0.0%
Swedish: 100.0%
Russian: 0.0%
Czech: 0.0%
Arabic: 0.0%
Chinese Traditional: 0.0%
Persian: 0.0%
|
BERT_FineTuning_Quora_Question_Pairs.ipynb | ###Markdown
BERT Fine-Tuning on Quora Question Pairs ---In this Colab notebook, we will try to reproduce state-of-the-art results on Quora Question Pairs using BERT model fine-tuning. If you are not familiar with BERT, please visit [The Illustrated BERT](http://jalammar.github.io/illustrated-bert/), the [BERT Research Paper](https://arxiv.org/abs/1810.04805) and the [BERT Github Repo](https://github.com/google-research/bert). This Colab notebook supports both TPU and GPU runtimes. Setting Up Environment: **USE_TPU** - set to True if you want to use the TPU runtime (first change the Colab notebook runtime type to TPU). **BERT_MODEL** - choose a BERT model: 1. **uncased_L-12_H-768_A-12**: uncased BERT base model; 2. **uncased_L-24_H-1024_A-16**: uncased BERT large model; 3. **cased_L-12_H-768_A-12**: cased BERT base model. **BUCKET** - add bucket details; a bucket is necessary for running on TPU. For the GPU runtime, if BUCKET is empty, we will use disk.
###Code
import os
import sys
import json
import datetime
import pprint
import tensorflow as tf
# Authenticate, so we can access storage bucket and TPU
from google.colab import auth
auth.authenticate_user()
# If you want to use TPU, first switch to tpu runtime in colab
USE_TPU = True #@param{type:"boolean"}
# We will use base uncased bert model, you can give try with large models
# For large model TPU is necessary
BERT_MODEL = 'uncased_L-12_H-768_A-12' #@param {type:"string"}
# BERT checkpoint bucket
BERT_PRETRAINED_DIR = 'gs://cloud-tpu-checkpoints/bert/' + BERT_MODEL
print('***** BERT pretrained directory: {} *****'.format(BERT_PRETRAINED_DIR))
!gsutil ls $BERT_PRETRAINED_DIR
# Bucket for saving checkpoints and outputs
BUCKET = 'quorabert' #@param {type:"string"}
if BUCKET!="":
OUTPUT_DIR = 'gs://{}/outputs'.format(BUCKET)
tf.gfile.MakeDirs(OUTPUT_DIR)
elif USE_TPU:
raise ValueError('Must specify an existing GCS bucket name for running on TPU')
else:
OUTPUT_DIR = 'out_dir'
os.mkdir(OUTPUT_DIR)
print('***** Model output directory: {} *****'.format(OUTPUT_DIR))
if USE_TPU:
# getting info on TPU runtime
assert 'COLAB_TPU_ADDR' in os.environ, 'ERROR: Not connected to a TPU runtime; Change notebook runtype to TPU'
TPU_ADDRESS = 'grpc://' + os.environ['COLAB_TPU_ADDR']
print('TPU address is', TPU_ADDRESS)
###Output
***** BERT pretrained directory: gs://cloud-tpu-checkpoints/bert/uncased_L-12_H-768_A-12 *****
gs://cloud-tpu-checkpoints/bert/uncased_L-12_H-768_A-12/bert_config.json
gs://cloud-tpu-checkpoints/bert/uncased_L-12_H-768_A-12/bert_model.ckpt.data-00000-of-00001
gs://cloud-tpu-checkpoints/bert/uncased_L-12_H-768_A-12/bert_model.ckpt.index
gs://cloud-tpu-checkpoints/bert/uncased_L-12_H-768_A-12/bert_model.ckpt.meta
gs://cloud-tpu-checkpoints/bert/uncased_L-12_H-768_A-12/checkpoint
gs://cloud-tpu-checkpoints/bert/uncased_L-12_H-768_A-12/vocab.txt
***** Model output directory: gs://quorabert/outputs *****
TPU address is grpc://10.3.4.202:8470
###Markdown
Clone BERT Repo and Download Quora Questions Pairs Dataset
###Code
# Clone BERT repo and add bert in system path
!test -d bert || git clone -q https://github.com/google-research/bert.git
if not 'bert' in sys.path:
sys.path += ['bert']
# Download QQP Task dataset present in GLUE Tasks.
TASK_DATA_DIR = 'glue_data/QQP'
!test -d glue_data || git clone https://gist.github.com/60c2bdb54d156a41194446737ce03e2e.git glue_data
!test -d $TASK_DATA_DIR || python glue_data/download_glue_data.py --data_dir glue_data --tasks=QQP
!ls -als $TASK_DATA_DIR
###Output
total 106400
4 drwxr-xr-x 3 root root 4096 Feb 23 16:28 .
4 drwxr-xr-x 4 root root 4096 Feb 23 16:28 ..
5680 -rw-r--r-- 1 root root 5815716 Feb 23 16:28 dev.tsv
4 drwxr-xr-x 2 root root 4096 Feb 23 16:28 original
49572 -rw-r--r-- 1 root root 50759408 Feb 23 16:28 test.tsv
51136 -rw-r--r-- 1 root root 52360463 Feb 23 16:28 train.tsv
###Markdown
Model Configs and Hyper Parameters
###Code
import modeling
import optimization
import tokenization
import run_classifier
# Model Hyper Parameters
TRAIN_BATCH_SIZE = 32 # For GPU, reduce to 16
EVAL_BATCH_SIZE = 8
PREDICT_BATCH_SIZE = 8
LEARNING_RATE = 2e-5
NUM_TRAIN_EPOCHS = 2.0
WARMUP_PROPORTION = 0.1
MAX_SEQ_LENGTH = 200
# Model configs
SAVE_CHECKPOINTS_STEPS = 1000
ITERATIONS_PER_LOOP = 1000
NUM_TPU_CORES = 8
VOCAB_FILE = os.path.join(BERT_PRETRAINED_DIR, 'vocab.txt')
CONFIG_FILE = os.path.join(BERT_PRETRAINED_DIR, 'bert_config.json')
INIT_CHECKPOINT = os.path.join(BERT_PRETRAINED_DIR, 'bert_model.ckpt')
DO_LOWER_CASE = BERT_MODEL.startswith('uncased')
###Output
_____no_output_____
###Markdown
Read Question Pairs. We will read data from the TSV files and convert it to a list of InputExample. For the `InputExample` and `DataProcessor` class definitions, refer to the [run_classifier](https://github.com/google-research/bert/blob/master/run_classifier.py) file.
###Code
class QQPProcessor(run_classifier.DataProcessor):
"""Processor for the Quora Question pair data set."""
def get_train_examples(self, data_dir):
"""Reading train.tsv and converting to list of InputExample"""
return self._create_examples(
self._read_tsv(os.path.join(data_dir,"train.tsv")), 'train')
def get_dev_examples(self, data_dir):
"""Reading dev.tsv and converting to list of InputExample"""
return self._create_examples(
self._read_tsv(os.path.join(data_dir,"dev.tsv")), 'dev')
def get_test_examples(self, data_dir):
"""Reading train.tsv and converting to list of InputExample"""
return self._create_examples(
self._read_tsv(os.path.join(data_dir,"test.tsv")), 'test')
def get_predict_examples(self, sentence_pairs):
"""Given question pairs, conevrting to list of InputExample"""
examples = []
for (i, qpair) in enumerate(sentence_pairs):
guid = "predict-%d" % (i)
# converting questions to utf-8 and creating InputExamples
text_a = tokenization.convert_to_unicode(qpair[0])
text_b = tokenization.convert_to_unicode(qpair[1])
# We will add label as 0, because None is not supported in converting to features
examples.append(
run_classifier.InputExample(guid=guid, text_a=text_a, text_b=text_b, label=0))
return examples
def _create_examples(self, lines, set_type):
"""Creates examples for the training, dev and test sets."""
examples = []
for (i, line) in enumerate(lines):
guid = "%s-%d" % (set_type, i)
if set_type=='test':
# removing header and invalid data
if i == 0 or len(line)!=3:
print(guid, line)
continue
text_a = tokenization.convert_to_unicode(line[1])
text_b = tokenization.convert_to_unicode(line[2])
label = 0 # We will use zero for test as convert_example_to_features doesn't support None
else:
# removing header and invalid data
if i == 0 or len(line)!=6:
continue
text_a = tokenization.convert_to_unicode(line[3])
text_b = tokenization.convert_to_unicode(line[4])
label = int(line[5])
examples.append(
run_classifier.InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label))
return examples
def get_labels(self):
"return class labels"
return [0,1]
###Output
_____no_output_____
###Markdown
Convert to Features. We will read the examples and tokenize them using WordPiece-based tokenization, and finally convert them to `InputFeatures`. BERT follows the tokenization procedure below: 1. Instantiate an instance of the tokenizer, `tokenizer = tokenization.FullTokenizer`. 2. Tokenize the raw text with `tokens = tokenizer.tokenize(raw_text)`. 3. Truncate to the maximum sequence length. 4. Add the [CLS] and [SEP] tokens in the right place. We need to create `segment_ids` and `input_mask` for `InputFeatures`: `segment_ids` will be `0` for question1 tokens and `1` for question2 tokens (a sketch of the resulting layout follows below). We will use the following functions from the [run_classifier](https://github.com/google-research/bert/blob/master/run_classifier.py) file for converting examples to features: 1. `convert_single_example` - converts a single `InputExample` into a single `InputFeatures`. 2. `file_based_convert_examples_to_features` - converts a set of `InputExamples` to a TFRecord file. For more details, observe the outputs of the cells below.
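For intuition, a hand-written sketch (the example questions are made up, not taken from the dataset) of what one converted pair roughly looks like:

```python
# tokens:      [CLS] how do i learn python ? [SEP] what is the best way to learn python ? [SEP] [PAD] ...
# input_ids:   vocabulary ids of the tokens above, padded with 0 up to MAX_SEQ_LENGTH
# segment_ids: 0 for [CLS], the question1 tokens and the first [SEP]; 1 for the question2 tokens and the second [SEP]; 0 for padding
# input_mask:  1 for every real token (including [CLS] and [SEP]), 0 for padding
# label_id:    0 (not a duplicate) or 1 (duplicate)
```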
###Code
# Instantiate an instance of QQPProcessor and tokenizer
processor = QQPProcessor()
label_list = processor.get_labels()
tokenizer = tokenization.FullTokenizer(vocab_file=VOCAB_FILE, do_lower_case=DO_LOWER_CASE)
# Converting training examples to features
print("################ Processing Training Data #####################")
TRAIN_TF_RECORD = os.path.join(OUTPUT_DIR, "train.tf_record")
train_examples = processor.get_train_examples(TASK_DATA_DIR)
num_train_examples = len(train_examples)
num_train_steps = int( num_train_examples / TRAIN_BATCH_SIZE * NUM_TRAIN_EPOCHS)
num_warmup_steps = int(num_train_steps * WARMUP_PROPORTION)
run_classifier.file_based_convert_examples_to_features(train_examples, label_list, MAX_SEQ_LENGTH, tokenizer, TRAIN_TF_RECORD)
###Output
_____no_output_____
###Markdown
Creating Classification Model
###Code
def create_model(bert_config, is_training, input_ids, input_mask, segment_ids,
labels, num_labels, use_one_hot_embeddings):
"""Creates a classification model."""
    # BERT model instance
model = modeling.BertModel(
config=bert_config,
is_training=is_training,
input_ids=input_ids,
input_mask=input_mask,
token_type_ids=segment_ids,
use_one_hot_embeddings=use_one_hot_embeddings)
# Getting output for last layer of BERT
output_layer = model.get_pooled_output()
# Number of outputs for last layer
hidden_size = output_layer.shape[-1].value
# We will use one layer on top of BERT pretrained for creating classification model
output_weights = tf.get_variable(
"output_weights", [num_labels, hidden_size],
initializer=tf.truncated_normal_initializer(stddev=0.02))
output_bias = tf.get_variable(
"output_bias", [num_labels], initializer=tf.zeros_initializer())
with tf.variable_scope("loss"):
if is_training:
# 0.1 dropout
output_layer = tf.nn.dropout(output_layer, keep_prob=0.9)
        # Calculate prediction probabilities and loss
logits = tf.matmul(output_layer, output_weights, transpose_b=True)
logits = tf.nn.bias_add(logits, output_bias)
probabilities = tf.nn.softmax(logits, axis=-1)
log_probs = tf.nn.log_softmax(logits, axis=-1)
one_hot_labels = tf.one_hot(labels, depth=num_labels, dtype=tf.float32)
per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1)
loss = tf.reduce_mean(per_example_loss)
return (loss, per_example_loss, logits, probabilities)
###Output
_____no_output_____
###Markdown
Model Function Builder for Estimator. Based on the mode, we will create the optimizer for training, evaluation metrics for evaluation, and the estimator spec.
###Code
def model_fn_builder(bert_config, num_labels, init_checkpoint, learning_rate,
num_train_steps, num_warmup_steps, use_tpu,
use_one_hot_embeddings):
"""Returns `model_fn` closure for TPUEstimator."""
def model_fn(features, labels, mode, params):
"""The `model_fn` for TPUEstimator."""
# reading features input
input_ids = features["input_ids"]
input_mask = features["input_mask"]
segment_ids = features["segment_ids"]
label_ids = features["label_ids"]
is_real_example = None
if "is_real_example" in features:
is_real_example = tf.cast(features["is_real_example"], dtype=tf.float32)
else:
is_real_example = tf.ones(tf.shape(label_ids), dtype=tf.float32)
# checking if training mode
is_training = (mode == tf.estimator.ModeKeys.TRAIN)
# create simple classification model
(total_loss, per_example_loss, logits, probabilities) = create_model(
bert_config, is_training, input_ids, input_mask, segment_ids, label_ids,
num_labels, use_one_hot_embeddings)
    # getting variables for initialization and using the pretrained init checkpoint
tvars = tf.trainable_variables()
initialized_variable_names = {}
scaffold_fn = None
if init_checkpoint:
(assignment_map, initialized_variable_names
) = modeling.get_assignment_map_from_checkpoint(tvars, init_checkpoint)
if use_tpu:
def tpu_scaffold():
tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
return tf.train.Scaffold()
scaffold_fn = tpu_scaffold
else:
tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
output_spec = None
if mode == tf.estimator.ModeKeys.TRAIN:
      # defining the optimizer
train_op = optimization.create_optimizer(
total_loss, learning_rate, num_train_steps, num_warmup_steps, use_tpu)
# Training estimator spec
output_spec = tf.contrib.tpu.TPUEstimatorSpec(
mode=mode,
loss=total_loss,
train_op=train_op,
scaffold_fn=scaffold_fn)
elif mode == tf.estimator.ModeKeys.EVAL:
# accuracy, loss, auc, F1, precision and recall metrics for evaluation
def metric_fn(per_example_loss, label_ids, logits, is_real_example):
predictions = tf.argmax(logits, axis=-1, output_type=tf.int32)
loss = tf.metrics.mean(values=per_example_loss, weights=is_real_example)
accuracy = tf.metrics.accuracy(
labels=label_ids, predictions=predictions, weights=is_real_example)
f1_score = tf.contrib.metrics.f1_score(
label_ids,
predictions)
auc = tf.metrics.auc(
label_ids,
predictions)
recall = tf.metrics.recall(
label_ids,
predictions)
precision = tf.metrics.precision(
label_ids,
predictions)
return {
"eval_accuracy": accuracy,
"eval_loss": loss,
"f1_score": f1_score,
"auc": auc,
"precision": precision,
"recall": recall
}
eval_metrics = (metric_fn,
[per_example_loss, label_ids, logits, is_real_example])
      # estimator spec for evaluation
output_spec = tf.contrib.tpu.TPUEstimatorSpec(
mode=mode,
loss=total_loss,
eval_metrics=eval_metrics,
scaffold_fn=scaffold_fn)
else:
# estimator spec for predictions
output_spec = tf.contrib.tpu.TPUEstimatorSpec(
mode=mode,
predictions={"probabilities": probabilities},
scaffold_fn=scaffold_fn)
return output_spec
return model_fn
###Output
_____no_output_____
###Markdown
Creating TPUEstimator
###Code
# Define TPU configs
if USE_TPU:
tpu_cluster_resolver = tf.contrib.cluster_resolver.TPUClusterResolver(TPU_ADDRESS)
else:
tpu_cluster_resolver = None
run_config = tf.contrib.tpu.RunConfig(
cluster=tpu_cluster_resolver,
model_dir=OUTPUT_DIR,
save_checkpoints_steps=SAVE_CHECKPOINTS_STEPS,
tpu_config=tf.contrib.tpu.TPUConfig(
iterations_per_loop=ITERATIONS_PER_LOOP,
num_shards=NUM_TPU_CORES,
per_host_input_for_training=tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2))
# create model function for estimator using model function builder
model_fn = model_fn_builder(
bert_config=modeling.BertConfig.from_json_file(CONFIG_FILE),
num_labels=len(label_list),
init_checkpoint=INIT_CHECKPOINT,
learning_rate=LEARNING_RATE,
num_train_steps=num_train_steps,
num_warmup_steps=num_warmup_steps,
use_tpu=USE_TPU,
use_one_hot_embeddings=True)
# Defining TPU Estimator
estimator = tf.contrib.tpu.TPUEstimator(
use_tpu=USE_TPU,
model_fn=model_fn,
config=run_config,
train_batch_size=TRAIN_BATCH_SIZE,
eval_batch_size=EVAL_BATCH_SIZE,
predict_batch_size=PREDICT_BATCH_SIZE)
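# note: when use_tpu=False, TPUEstimator falls back to running the same model_fn on CPU/GPU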
###Output
WARNING:tensorflow:Estimator's model_fn (<function model_fn_builder.<locals>.model_fn at 0x7f79aa471378>) includes params argument, but params are not passed to Estimator.
INFO:tensorflow:Using config: {'_model_dir': 'gs://quorabert/outputs', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': 1000, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true
cluster_def {
job {
name: "worker"
tasks {
value: "10.3.4.202:8470"
}
}
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': None, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f79b558eb00>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': 'grpc://10.3.4.202:8470', '_evaluation_master': 'grpc://10.3.4.202:8470', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1, '_tpu_config': TPUConfig(iterations_per_loop=1000, num_shards=8, num_cores_per_replica=None, per_host_input_for_training=3, tpu_job_name=None, initial_infeed_sleep_secs=None, input_partition_dims=None), '_cluster': <tensorflow.python.distribute.cluster_resolver.tpu_cluster_resolver.TPUClusterResolver object at 0x7f79b499e908>}
INFO:tensorflow:_TPUContext: eval_on_tpu True
###Markdown
Finetune Training
###Code
# Train the model.
print('QQP on BERT base model normally takes about 1 hour on TPU and 15-20 hours on GPU. Please wait...')
print('***** Started training at {} *****'.format(datetime.datetime.now()))
print(' Num examples = {}'.format(num_train_examples))
print(' Batch size = {}'.format(TRAIN_BATCH_SIZE))
tf.logging.info(" Num steps = %d", num_train_steps)
# we are using `file_based_input_fn_builder` for creating input function from TF_RECORD file
train_input_fn = run_classifier.file_based_input_fn_builder(TRAIN_TF_RECORD,
seq_length=MAX_SEQ_LENGTH,
is_training=True,
drop_remainder=True)
estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
print('***** Finished training at {} *****'.format(datetime.datetime.now()))
###Output
QQP on BERT base model normally takes about 1 hour on TPU and 15-20 hours on GPU. Please wait...
***** Started training at 2019-02-23 17:50:00.906270 *****
Num examples = 363849
Batch size = 32
INFO:tensorflow: Num steps = 22740
INFO:tensorflow:Querying Tensorflow master (grpc://10.3.4.202:8470) for TPU system metadata.
INFO:tensorflow:Found TPU system:
INFO:tensorflow:*** Num TPU Cores: 8
INFO:tensorflow:*** Num TPU Workers: 1
INFO:tensorflow:*** Num TPU Cores Per Worker: 8
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:CPU:0, CPU, -1, 12018642110560417299)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 17179869184, 5785529947456626056)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:0, TPU, 17179869184, 7883019315390049702)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:1, TPU, 17179869184, 10377537302524390808)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:2, TPU, 17179869184, 16588443086942417458)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:3, TPU, 17179869184, 10907570108189753361)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:4, TPU, 17179869184, 971213159501917186)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:5, TPU, 17179869184, 1464411961604989108)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:6, TPU, 17179869184, 1469392857332876480)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:7, TPU, 17179869184, 4005027274495629088)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU_SYSTEM:0, TPU_SYSTEM, 17179869184, 12360692828163990549)
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py:435: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
INFO:tensorflow:Calling model_fn.
WARNING:tensorflow:From bert/run_classifier.py:550: map_and_batch (from tensorflow.contrib.data.python.ops.batching) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.experimental.map_and_batch(...)`.
WARNING:tensorflow:From bert/run_classifier.py:530: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/data/ops/dataset_ops.py:1720: DatasetV1.make_initializable_iterator (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `for ... in dataset:` to iterate over a dataset. If using `tf.estimator`, return the `Dataset` object directly from your input function. As a last resort, you can use `tf.compat.v1.data.make_initializable_iterator(dataset)`.
WARNING:tensorflow:From bert/modeling.py:358: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
WARNING:tensorflow:From bert/modeling.py:671: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.dense instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/learning_rate_decay_v2.py:321: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Deprecated in favor of operator or tf.math.divide.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:TPU job name worker
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 0 into gs://quorabert/outputs/model.ckpt.
INFO:tensorflow:Initialized dataset iterators in 0 seconds
INFO:tensorflow:Installing graceful shutdown hook.
INFO:tensorflow:Creating heartbeat manager for ['/job:worker/replica:0/task:0/device:CPU:0']
INFO:tensorflow:Configuring worker heartbeat: shutdown_mode: WAIT_FOR_COORDINATOR
INFO:tensorflow:Init TPU system
INFO:tensorflow:Initialized TPU in 4 seconds
INFO:tensorflow:Starting infeed thread controller.
INFO:tensorflow:Starting outfeed thread controller.
INFO:tensorflow:Enqueue next (1000) batch(es) of data to infeed.
INFO:tensorflow:Dequeue next (1000) batch(es) of data from outfeed.
INFO:tensorflow:Saving checkpoints for 1000 into gs://quorabert/outputs/model.ckpt.
INFO:tensorflow:loss = 0.8386303, step = 1000
INFO:tensorflow:Enqueue next (1000) batch(es) of data to infeed.
INFO:tensorflow:Dequeue next (1000) batch(es) of data from outfeed.
INFO:tensorflow:Saving checkpoints for 2000 into gs://quorabert/outputs/model.ckpt.
INFO:tensorflow:loss = 0.037307255, step = 2000 (101.798 sec)
INFO:tensorflow:global_step/sec: 9.82346
INFO:tensorflow:examples/sec: 314.351
INFO:tensorflow:Enqueue next (1000) batch(es) of data to infeed.
INFO:tensorflow:Dequeue next (1000) batch(es) of data from outfeed.
INFO:tensorflow:Saving checkpoints for 3000 into gs://quorabert/outputs/model.ckpt.
INFO:tensorflow:loss = 0.023139466, step = 3000 (95.075 sec)
INFO:tensorflow:global_step/sec: 10.5179
INFO:tensorflow:examples/sec: 336.571
INFO:tensorflow:Enqueue next (1000) batch(es) of data to infeed.
INFO:tensorflow:Dequeue next (1000) batch(es) of data from outfeed.
INFO:tensorflow:Saving checkpoints for 4000 into gs://quorabert/outputs/model.ckpt.
INFO:tensorflow:loss = 0.61749035, step = 4000 (97.361 sec)
INFO:tensorflow:global_step/sec: 10.2711
INFO:tensorflow:examples/sec: 328.675
INFO:tensorflow:Enqueue next (1000) batch(es) of data to infeed.
INFO:tensorflow:Dequeue next (1000) batch(es) of data from outfeed.
INFO:tensorflow:Saving checkpoints for 5000 into gs://quorabert/outputs/model.ckpt.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/saver.py:966: remove_checkpoint (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to delete files with this prefix.
INFO:tensorflow:loss = 1.9537156, step = 5000 (94.422 sec)
INFO:tensorflow:global_step/sec: 10.5904
INFO:tensorflow:examples/sec: 338.893
INFO:tensorflow:Enqueue next (1000) batch(es) of data to infeed.
INFO:tensorflow:Dequeue next (1000) batch(es) of data from outfeed.
INFO:tensorflow:Saving checkpoints for 6000 into gs://quorabert/outputs/model.ckpt.
INFO:tensorflow:loss = 1.120376, step = 6000 (95.191 sec)
INFO:tensorflow:global_step/sec: 10.5055
INFO:tensorflow:examples/sec: 336.175
INFO:tensorflow:Enqueue next (1000) batch(es) of data to infeed.
INFO:tensorflow:Dequeue next (1000) batch(es) of data from outfeed.
INFO:tensorflow:Saving checkpoints for 7000 into gs://quorabert/outputs/model.ckpt.
INFO:tensorflow:loss = 0.84950703, step = 7000 (92.637 sec)
INFO:tensorflow:global_step/sec: 10.7948
INFO:tensorflow:examples/sec: 345.435
INFO:tensorflow:Enqueue next (1000) batch(es) of data to infeed.
INFO:tensorflow:Dequeue next (1000) batch(es) of data from outfeed.
INFO:tensorflow:Saving checkpoints for 8000 into gs://quorabert/outputs/model.ckpt.
INFO:tensorflow:loss = 0.0071571404, step = 8000 (95.847 sec)
INFO:tensorflow:global_step/sec: 10.4333
INFO:tensorflow:examples/sec: 333.865
INFO:tensorflow:Enqueue next (1000) batch(es) of data to infeed.
INFO:tensorflow:Dequeue next (1000) batch(es) of data from outfeed.
INFO:tensorflow:Saving checkpoints for 9000 into gs://quorabert/outputs/model.ckpt.
INFO:tensorflow:loss = 0.004391548, step = 9000 (96.762 sec)
INFO:tensorflow:global_step/sec: 10.3346
INFO:tensorflow:examples/sec: 330.707
INFO:tensorflow:Enqueue next (1000) batch(es) of data to infeed.
INFO:tensorflow:Dequeue next (1000) batch(es) of data from outfeed.
INFO:tensorflow:Saving checkpoints for 10000 into gs://quorabert/outputs/model.ckpt.
INFO:tensorflow:loss = 0.011896152, step = 10000 (93.423 sec)
INFO:tensorflow:global_step/sec: 10.7041
INFO:tensorflow:examples/sec: 342.531
INFO:tensorflow:Enqueue next (1000) batch(es) of data to infeed.
INFO:tensorflow:Dequeue next (1000) batch(es) of data from outfeed.
INFO:tensorflow:Saving checkpoints for 11000 into gs://quorabert/outputs/model.ckpt.
INFO:tensorflow:loss = 0.008425392, step = 11000 (95.291 sec)
INFO:tensorflow:global_step/sec: 10.4942
INFO:tensorflow:examples/sec: 335.813
INFO:tensorflow:Enqueue next (1000) batch(es) of data to infeed.
INFO:tensorflow:Dequeue next (1000) batch(es) of data from outfeed.
INFO:tensorflow:Saving checkpoints for 12000 into gs://quorabert/outputs/model.ckpt.
INFO:tensorflow:loss = 0.8160696, step = 12000 (95.138 sec)
INFO:tensorflow:global_step/sec: 10.5112
INFO:tensorflow:examples/sec: 336.357
INFO:tensorflow:Enqueue next (1000) batch(es) of data to infeed.
INFO:tensorflow:Dequeue next (1000) batch(es) of data from outfeed.
INFO:tensorflow:Saving checkpoints for 13000 into gs://quorabert/outputs/model.ckpt.
INFO:tensorflow:loss = 0.6142087, step = 13000 (95.948 sec)
INFO:tensorflow:global_step/sec: 10.4223
INFO:tensorflow:examples/sec: 333.514
INFO:tensorflow:Enqueue next (1000) batch(es) of data to infeed.
INFO:tensorflow:Dequeue next (1000) batch(es) of data from outfeed.
INFO:tensorflow:Saving checkpoints for 14000 into gs://quorabert/outputs/model.ckpt.
INFO:tensorflow:loss = 0.4740952, step = 14000 (93.625 sec)
INFO:tensorflow:global_step/sec: 10.6809
INFO:tensorflow:examples/sec: 341.789
INFO:tensorflow:Enqueue next (1000) batch(es) of data to infeed.
INFO:tensorflow:Dequeue next (1000) batch(es) of data from outfeed.
INFO:tensorflow:Saving checkpoints for 15000 into gs://quorabert/outputs/model.ckpt.
INFO:tensorflow:loss = 0.0003787365, step = 15000 (95.261 sec)
INFO:tensorflow:global_step/sec: 10.4975
INFO:tensorflow:examples/sec: 335.92
INFO:tensorflow:Enqueue next (1000) batch(es) of data to infeed.
INFO:tensorflow:Dequeue next (1000) batch(es) of data from outfeed.
INFO:tensorflow:Saving checkpoints for 16000 into gs://quorabert/outputs/model.ckpt.
INFO:tensorflow:loss = 0.010612472, step = 16000 (95.127 sec)
INFO:tensorflow:global_step/sec: 10.5123
INFO:tensorflow:examples/sec: 336.394
INFO:tensorflow:Enqueue next (1000) batch(es) of data to infeed.
INFO:tensorflow:Dequeue next (1000) batch(es) of data from outfeed.
INFO:tensorflow:Saving checkpoints for 17000 into gs://quorabert/outputs/model.ckpt.
INFO:tensorflow:loss = 0.00039491506, step = 17000 (96.878 sec)
INFO:tensorflow:global_step/sec: 10.3218
INFO:tensorflow:examples/sec: 330.297
INFO:tensorflow:Enqueue next (1000) batch(es) of data to infeed.
INFO:tensorflow:Dequeue next (1000) batch(es) of data from outfeed.
INFO:tensorflow:Saving checkpoints for 18000 into gs://quorabert/outputs/model.ckpt.
INFO:tensorflow:loss = 0.0002867375, step = 18000 (95.810 sec)
INFO:tensorflow:global_step/sec: 10.4377
INFO:tensorflow:examples/sec: 334.005
INFO:tensorflow:Enqueue next (1000) batch(es) of data to infeed.
INFO:tensorflow:Dequeue next (1000) batch(es) of data from outfeed.
INFO:tensorflow:Saving checkpoints for 19000 into gs://quorabert/outputs/model.ckpt.
INFO:tensorflow:loss = 0.0026563092, step = 19000 (98.965 sec)
INFO:tensorflow:global_step/sec: 10.1047
INFO:tensorflow:examples/sec: 323.351
INFO:tensorflow:Enqueue next (1000) batch(es) of data to infeed.
INFO:tensorflow:Dequeue next (1000) batch(es) of data from outfeed.
INFO:tensorflow:Saving checkpoints for 20000 into gs://quorabert/outputs/model.ckpt.
INFO:tensorflow:loss = 0.0039358316, step = 20000 (93.890 sec)
INFO:tensorflow:global_step/sec: 10.6509
INFO:tensorflow:examples/sec: 340.829
INFO:tensorflow:Enqueue next (1000) batch(es) of data to infeed.
INFO:tensorflow:Dequeue next (1000) batch(es) of data from outfeed.
INFO:tensorflow:Saving checkpoints for 21000 into gs://quorabert/outputs/model.ckpt.
INFO:tensorflow:loss = 0.0034660033, step = 21000 (95.893 sec)
INFO:tensorflow:global_step/sec: 10.4282
INFO:tensorflow:examples/sec: 333.702
INFO:tensorflow:Enqueue next (1000) batch(es) of data to infeed.
INFO:tensorflow:Dequeue next (1000) batch(es) of data from outfeed.
INFO:tensorflow:Saving checkpoints for 22000 into gs://quorabert/outputs/model.ckpt.
INFO:tensorflow:loss = 0.9870303, step = 22000 (94.637 sec)
INFO:tensorflow:global_step/sec: 10.5667
INFO:tensorflow:examples/sec: 338.136
INFO:tensorflow:Enqueue next (740) batch(es) of data to infeed.
INFO:tensorflow:Dequeue next (740) batch(es) of data from outfeed.
INFO:tensorflow:loss = 1.4805896, step = 22740 (55.165 sec)
INFO:tensorflow:global_step/sec: 13.4141
INFO:tensorflow:examples/sec: 429.251
INFO:tensorflow:Saving checkpoints for 22740 into gs://quorabert/outputs/model.ckpt.
INFO:tensorflow:Stop infeed thread controller
INFO:tensorflow:Shutting down InfeedController thread.
INFO:tensorflow:InfeedController received shutdown signal, stopping.
INFO:tensorflow:Infeed thread finished, shutting down.
INFO:tensorflow:infeed marked as finished
INFO:tensorflow:Stop output thread controller
INFO:tensorflow:Shutting down OutfeedController thread.
INFO:tensorflow:OutfeedController received shutdown signal, stopping.
INFO:tensorflow:Outfeed thread finished, shutting down.
INFO:tensorflow:outfeed marked as finished
INFO:tensorflow:Shutdown TPU system.
INFO:tensorflow:Loss for final step: 1.4805896.
INFO:tensorflow:training_loop marked as finished
***** Finished training at 2019-02-23 18:28:23.504515 *****
###Markdown
Evaluate the Fine-Tuned ModelFirst we will evaluate on the train set and then on the dev set
###Code
# eval the model on train set.
print('***** Started Train Set evaluation at {} *****'.format(datetime.datetime.now()))
print(' Num examples = {}'.format(num_train_examples))
print(' Batch size = {}'.format(EVAL_BATCH_SIZE))
# eval input function for train set
train_eval_input_fn = run_classifier.file_based_input_fn_builder(TRAIN_TF_RECORD,
seq_length=MAX_SEQ_LENGTH,
is_training=False,
drop_remainder=True)
# evaluate on train set
result = estimator.evaluate(input_fn=train_eval_input_fn,
steps=int(num_train_examples/EVAL_BATCH_SIZE))
print('***** Finished evaluation at {} *****'.format(datetime.datetime.now()))
print("***** Eval results *****")
for key in sorted(result.keys()):
print(' {} = {}'.format(key, str(result[key])))
# Converting eval examples to features
print("################ Processing Dev Data #####################")
EVAL_TF_RECORD = os.path.join(OUTPUT_DIR, "eval.tf_record")
eval_examples = processor.get_dev_examples(TASK_DATA_DIR)
num_eval_examples = len(eval_examples)
run_classifier.file_based_convert_examples_to_features(eval_examples, label_list, MAX_SEQ_LENGTH, tokenizer, EVAL_TF_RECORD)
# Eval the model on Dev set.
print('***** Started Dev Set evaluation at {} *****'.format(datetime.datetime.now()))
print(' Num examples = {}'.format(num_eval_examples))
print(' Batch size = {}'.format(EVAL_BATCH_SIZE))
# eval input function for dev set
eval_input_fn = run_classifier.file_based_input_fn_builder(EVAL_TF_RECORD,
seq_length=MAX_SEQ_LENGTH,
is_training=False,
drop_remainder=True)
# evaluate on dev set
result = estimator.evaluate(input_fn=eval_input_fn, steps=int(num_eval_examples/EVAL_BATCH_SIZE))
print('***** Finished evaluation at {} *****'.format(datetime.datetime.now()))
print("***** Eval results *****")
for key in sorted(result.keys()):
print(' {} = {}'.format(key, str(result[key])))
###Output
***** Started Dev Set evaluation at 2019-02-23 18:32:32.236462 *****
Num examples = 40430
Batch size = 8
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2019-02-23T18:32:37Z
INFO:tensorflow:TPU job name worker
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from gs://quorabert/outputs/model.ckpt-22740
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Init TPU system
INFO:tensorflow:Initialized TPU in 7 seconds
INFO:tensorflow:Starting infeed thread controller.
INFO:tensorflow:Starting outfeed thread controller.
INFO:tensorflow:Initialized dataset iterators in 0 seconds
INFO:tensorflow:Enqueue next (5053) batch(es) of data to infeed.
INFO:tensorflow:Dequeue next (5053) batch(es) of data from outfeed.
INFO:tensorflow:Evaluation [5053/5053]
INFO:tensorflow:Stop infeed thread controller
INFO:tensorflow:Shutting down InfeedController thread.
INFO:tensorflow:InfeedController received shutdown signal, stopping.
INFO:tensorflow:Infeed thread finished, shutting down.
INFO:tensorflow:infeed marked as finished
INFO:tensorflow:Stop output thread controller
INFO:tensorflow:Shutting down OutfeedController thread.
INFO:tensorflow:OutfeedController received shutdown signal, stopping.
INFO:tensorflow:Outfeed thread finished, shutting down.
INFO:tensorflow:outfeed marked as finished
INFO:tensorflow:Shutdown TPU system.
INFO:tensorflow:Finished evaluation at 2019-02-23-18:33:13
INFO:tensorflow:Saving dict for global step 22740: auc = 0.89233345, eval_accuracy = 0.896324, eval_loss = 0.4719124, f1_score = 0.86166024, global_step = 22740, loss = 0.47700334, precision = 0.8466528, recall = 0.8772095
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 22740: gs://quorabert/outputs/model.ckpt-22740
INFO:tensorflow:evaluation_loop marked as finished
***** Finished evaluation at 2019-02-23 18:33:14.659408 *****
***** Eval results *****
auc = 0.89233345
eval_accuracy = 0.896324
eval_loss = 0.4719124
f1_score = 0.86166024
global_step = 22740
loss = 0.47700334
precision = 0.8466528
recall = 0.8772095
###Markdown
Evaluation Results---Evaluation results are for the BERT base uncased model. To reproduce similar results, train for 3 epochs.

|**Metrics**|**Train Set**|**Dev Set**|
|---|---|---|
|**Loss**|0.150|0.497|
|**Accuracy**|0.969|0.907|
|**F1**|0.959|0.875|
|**AUC**|0.969|0.902|
|**Precision**|0.949|0.864|
|**Recall**|0.969|0.886|

Predictions on the ModelFirst we will predict on custom examples. For the test set, we will get predictions and save them to a file.
###Code
# examples sentences, feel free to change and try
sent_pairs = [("how can i improve my english?", "how can i become fluent in english?"), ("How can i recover old gmail account ?","How can i delete my old gmail account ?"),
("How can i recover old gmail account ?","How can i access my old gmail account ?")]
print("******* Predictions on Custom Data ********")
# create `InputExample` for custom examples
predict_examples = processor.get_predict_examples(sent_pairs)
num_predict_examples = len(predict_examples)
# For TPU, We will append `PaddingExample` for maintaining batch size
if USE_TPU:
while(len(predict_examples)%EVAL_BATCH_SIZE!=0):
predict_examples.append(run_classifier.PaddingInputExample())
# Converting to features
predict_features = run_classifier.convert_examples_to_features(predict_examples, label_list, MAX_SEQ_LENGTH, tokenizer)
print(' Num examples = {}'.format(num_predict_examples))
print(' Batch size = {}'.format(PREDICT_BATCH_SIZE))
# Input function for prediction
predict_input_fn = run_classifier.input_fn_builder(predict_features,
seq_length=MAX_SEQ_LENGTH,
is_training=False,
drop_remainder=True)
result = list(estimator.predict(input_fn=predict_input_fn))
print(result)
for ex_i in range(num_predict_examples):
print("****** Example {} ******".format(ex_i))
print("Question1 :", sent_pairs[ex_i][0])
print("Question2 :", sent_pairs[ex_i][1])
print("Prediction :", result[ex_i]['probabilities'][1])
# Converting test examples to features
print("################ Processing Test Data #####################")
TEST_TF_RECORD = os.path.join(OUTPUT_DIR, "test.tf_record")
test_examples = processor.get_test_examples(TASK_DATA_DIR)
num_test_examples = len(test_examples)
run_classifier.file_based_convert_examples_to_features(test_examples, label_list, MAX_SEQ_LENGTH, tokenizer, TEST_TF_RECORD)
# Predictions on test set.
print('***** Started Prediction at {} *****'.format(datetime.datetime.now()))
print(' Num examples = {}'.format(num_test_examples))
print(' Batch size = {}'.format(PREDICT_BATCH_SIZE))
# predict input function for test set
test_input_fn = run_classifier.file_based_input_fn_builder(TEST_TF_RECORD,
seq_length=MAX_SEQ_LENGTH,
is_training=False,
drop_remainder=True)
tf.logging.set_verbosity(tf.logging.ERROR)
# predict on test set
result = list(estimator.predict(input_fn=test_input_fn))
print('***** Finished Prediction at {} *****'.format(datetime.datetime.now()))
# saving test predictions
output_test_file = os.path.join(OUTPUT_DIR, "test_predictions.txt")
with tf.gfile.GFile(output_test_file, "w") as writer:
for (example_i, predictions_i) in enumerate(result):
writer.write("%s , %s\n" % (test_examples[example_i].guid, str(predictions_i['probabilities'][1])))
###Output
_____no_output_____ |
Trying to do tests.ipynb | ###Markdown
It's an explanationThis is a document in which I will try to work out how to process data from various sources into data that can be uploaded into a DSpace repository
###Code
import pandas as pd
###Output
_____no_output_____ |
storage.ipynb | ###Markdown
Upload, download, and serve files using Google Cloud Storage.
###Code
from google.cloud import storage
client = storage.Client()
###Output
_____no_output_____
###Markdown
BucketsBuckets are where we store files. Think of them as globally-addressable folders.
###Code
list(client.list_buckets())
###Output
_____no_output_____
###Markdown
Upload a simple text file
###Code
bucket = client.get_bucket('strong-charge-202921')
blob = bucket.blob('example.txt')
blob.upload_from_string("Hello, PyCon!")
###Output
_____no_output_____
###Markdown
Make it public so we can serve it directly from GCS
###Code
blob.make_public()
blob.public_url
###Output
_____no_output_____
###Markdown
Download an image
###Code
blob = bucket.blob('kermit.jpg')
blob.download_to_filename('local.jpg')  # writes the blob contents to local.jpg (returns None)
###Output
_____no_output_____ |
9.Quantum criptography.ipynb | ###Markdown
9. Quantum cryptographyThe advent of quantum computation, which introduces the possibility of using quantum mechanics for information processing, gave rise to the following question: can quantum information be shared more securely than classical information?In 1982, a very interesting property of quantum states was discovered [1,2]. This is the so-called "no-cloning theorem", which proved that the laws of quantum mechanics prohibit the copying of an unknown quantum state. Therefore, the no-cloning theorem assures us that qubits can hide quantum information better than classical bits. This has important implications, for example for secure communications, where it allows for the sharing of private keys which cannot be eavesdropped on by a third party. We consider the first such protocol, the BB84 protocol, which exploits the quantum mechanical properties of qubits for the secure exchange of a secret key between two parties. 9.1 No-cloning theoremLet us prove the no-cloning theorem, the fact that an unknown quantum state cannot be copied.First let us clearly state our problem:We have a qubit in an unknown quantum state $\lvert \psi \rangle$ and we wish to copy its state onto another qubit initialized to the state $\lvert s \rangle$. Therefore, we want to implement the following quantum gate:\begin{equation}U\lvert \psi \rangle \lvert s \rangle =\lvert \psi \rangle \lvert \psi \rangle \end{equation}Let us take the unknown quantum state to be \begin{equation}\lvert \psi \rangle =\alpha \lvert 0 \rangle + \beta \lvert 1 \rangle \end{equation}where the amplitudes $\alpha$ and $\beta$ are unknown.Therefore we have:\begin{equation}U\lvert \psi \rangle \lvert s \rangle =\lvert \psi \rangle \lvert \psi \rangle = (\alpha \lvert 0\rangle +\beta \lvert 1\rangle) (\alpha \lvert 0\rangle +\beta \lvert 1\rangle) = (\alpha^2 \lvert 0\rangle \lvert 0\rangle + \alpha \beta \lvert 0\rangle \lvert 1\rangle + \beta \alpha \lvert 1 \rangle \lvert 0\rangle + \beta^2 \lvert 1\rangle \lvert 1\rangle)\tag{1}\end{equation}Because of the linearity of operators, we can equivalently write:\begin{equation}U\lvert \psi \rangle \lvert s \rangle = U(\alpha \lvert 0\rangle + \beta \lvert 1\rangle )\lvert s\rangle = U(\alpha \lvert 0\rangle \lvert s\rangle + \beta \lvert 1\rangle \lvert s\rangle )=\alpha \lvert 00\rangle +\beta \lvert 11\rangle \tag{2}\end{equation}Comparing Eqs. (1) and (2), one can see that we arrive at a contradiction! Thus, the operation $U$ which copies an unknown quantum state of a qubit onto another qubit is not possible. 9.2 BB84 protocol$$\text{1. BB84 protocol overview.}$$In Ref. [3], the first protocol for the distribution of a secret quantum key between two parties is described.First, let us assume that Alice and Bob may exchange qubits and classical information. Also, Alice can prepare a qubit in the $\lvert 0 \rangle$, $\lvert 1 \rangle$, $\lvert + \rangle = \frac{1}{\sqrt{2}} \left( \lvert 0 \rangle + \lvert 1 \rangle\right)$ and $\lvert - \rangle = \frac{1}{\sqrt{2}} \left( \lvert 0 \rangle - \lvert 1 \rangle\right)$ state, and Bob can measure in the standard (Z) $\left\{ \lvert 0 \rangle, \lvert 1 \rangle \right\}$ basis and in the Hadamard (H) $\left\{ \lvert + \rangle, \lvert - \rangle \right\} $ basis. Note that the two bases are non-orthogonal with respect to each other. Measuring in the $\left\{ \lvert + \rangle, \lvert - \rangle \right\} $ basis means that before the standard measurement in the $\left\{ \lvert 0 \rangle, \lvert 1 \rangle \right\} $ basis, Bob applies the Hadamard gate to the qubit. 
Thus\begin{equation}\lvert + \rangle =\frac{1}{\sqrt{2}}(\lvert 0\rangle +\lvert 1\rangle )\end{equation}gives $\lvert 0 \rangle$ when measured in the Hadamard basis, and\begin{equation}\lvert - \rangle =\frac{1}{\sqrt{2}}(\lvert 0\rangle -\lvert 1\rangle )\end{equation}gives $\lvert 1 \rangle$ when measured in the Hadamard basis.The protocol then works in the following way. Alice picks the bit that she wants to transmit to Bob, either $0$ or $1$. She then prepares a qubit in the corresponding state $\lvert 0 \rangle$ or $\lvert 1 \rangle$, respectively. After that, she randomly decides whether or not to transform her qubit from the standard (Z) basis to the Hadamard (H) basis by applying (or not applying) the Hadamard gate to her qubit, thus preparing the state $\lvert + \rangle$ or $\lvert - \rangle$. Then Alice sends her first qubit to Bob. Bob receives Alice's qubit, selects one of the measurement bases at random and measures it. After that, Alice and Bob tell each other which basis they used through a classical communication channel. In general, for every qubit Alice sends to Bob there are four possible scenarios:Both Alice and Bob used the Hadamard basis.They both used the standard basis.Alice transformed to the Hadamard basis, and Bob measured in the standard basis.Alice used the standard basis, and Bob the Hadamard basis.When Alice and Bob agree on the same basis, they keep the transferred bit. When they disagree, they discard it. Thus, it is possible for Alice and Bob to securely communicate an $n$ bit private key using, on average, $2n$ qubits. Example For example, let us consider the case where Alice wants to send the bit $0$. She prepares her qubit in the $\lvert 0 \rangle$ state and then randomly selects whether or not she applies the Hadamard gate to it. Let's say she does apply the Hadamard gate to her qubit, obtaining the $\lvert + \rangle$ state. Then, consider the case where Bob measures the qubit in the standard basis. After Bob's measurement, Alice and Bob communicate through the classical channel. Alice tells Bob that she applied the Hadamard gate to her qubit and Bob tells Alice that he measured it in the standard basis. So, they discard the first bit.$$\text{2. Example of one application of the BB84 protocol. In this case, Alice and Bob will discard this bit.}$$Next, Alice picks a second bit, $1$, encodes it into a qubit and selects at random whether to apply or not the Hadamard gate. Let us now assume that she does not apply the Hadamard gate. Thus, the qubit is in the state $\lvert 1\rangle $. Alice then sends her qubit to Bob. Bob selects at random one of his two measurement bases. Let us consider in this case that he measures in the standard basis. As the qubit is in the state $\lvert 1\rangle $ the outcome of the measurement will be $1$. Thus, Bob chooses the value $1$ for his second classical bit, the same as Alice did. Finally, Alice tells Bob that she did not apply the Hadamard gate, and Bob tells Alice that he measured in the standard basis. So, both Alice and Bob will use the bit with the value $1$ as the first bit in their secret key.$$\text{3. Example of another application of the BB84 protocol.} \\ \text{In this case, Alice and Bob successfully communicate the value of a bit.}$$ QISKit: BB84 protocol 1) Show the communication of one bit
###Code
from initialize import *
import random
#initialize quantum program
my_alg = initialize(circuit_name = 'bb84', qubit_number=1, bit_number=1, backend = 'local_qasm_simulator', shots = 1)
#add gates to the circuit
# Alice encodes the bit 1 into a qubit
my_alg.q_circuit.x(my_alg.q_reg[0])
# Alice randomly applies the Hadamard gate to go to the Hadamard basis
a = random.randint(0,1)
if a==1:
my_alg.q_circuit.h(my_alg.q_reg[0])
# Bob randomly applies the Hadamard gate to go to the Hadamard basis
b = random.randint(0,1)
if b==1:
my_alg.q_circuit.h(my_alg.q_reg[0])
my_alg.q_circuit.measure(my_alg.q_reg[0], my_alg.c_reg[0]) # measures first qubit
# print list of gates in the circuit
print('List of gates:')
for circuit in my_alg.q_circuit:
print(circuit.name)
#Execute the quantum algorithm
result = my_alg.Q_program.execute(my_alg.circ_name, backend=my_alg.backend, shots= my_alg.shots)
#Show the results obtained from the quantum algorithm
counts = result.get_counts(my_alg.circ_name)
print('\nThe measured outcomes of the circuits are:',counts)
if a == b:
print('Alice and Bob agree on the basis, thus they keep the bit')
else:
print("Alice and Bob don't agree the same basis, thus they discard the bit")
###Output
List of gates:
x
h
measure
The measured outcomes of the circuits are: {'0': 1}
Alice and Bob don't agree on the same basis, thus they discard the bit
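###Markdown
As a complement to the single-qubit run above, the sifting step of the protocol can also be illustrated with a purely classical simulation (a minimal sketch added for illustration, using only numpy and assuming ideal, noiseless transmission): Alice and Bob keep only the positions where their randomly chosen bases coincide, which on average is half of the transmitted qubits.
###Code
import numpy as np
n = 16  # number of transmitted qubits
rng = np.random.default_rng(0)
alice_bits = rng.integers(0, 2, n)   # the bits Alice wants to send
alice_bases = rng.integers(0, 2, n)  # 0 = standard (Z) basis, 1 = Hadamard (H) basis
bob_bases = rng.integers(0, 2, n)    # Bob's randomly chosen measurement bases
# if the bases match, Bob recovers Alice's bit; otherwise his outcome is a 50/50 coin flip
bob_bits = np.where(alice_bases == bob_bases, alice_bits, rng.integers(0, 2, n))
keep = alice_bases == bob_bases      # positions announced over the classical channel
print("sifted key (Alice):", alice_bits[keep])
print("sifted key (Bob):  ", bob_bits[keep])
###Output
_____no_output_____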
|
tasks/nlp-text-pre-processor/Experiment.ipynb | ###Markdown
Text Pre Processor - ExperimentThis is a component that uses the [nltk](https://www.nltk.org/), [ftfy](https://pypi.org/project/ftfy/) and [regex](https://docs.python.org/3/library/re.html) libraries to preprocess texts before they feed into other components. Declaration of parameters and hyperparametersDeclare parameters with the button in the toolbar. The `dataset` variable holds the path for reading the files imported in the "Upload de dados" (data upload) task. You can also import files with the button in the toolbar.
###Code
# parameters
dataset = "/tmp/data/imdb.csv" #@param {type:"string"}
target = "review" #@param {type:"string", label:"Atributo alvo", description:"Seu modelo será treinado para prever os valores do alvo."}
language = "Português" #@param ["Português", "Inglês"] {type:"string", label:"Linguagem", description:"Linguagem da qual os stopwords pertencem. Deve ser a mesma utilizada no dataset."}
# preprocessing
case = "Caixa baixa" #@param ["Caixa baixa","Caixa alta","Não Aplicar"] {type:"string",label:"Aplicação de casing", description:"Caixa baixa, caixa alta ou não aplicação de caixa"}
remove_stop_words = True #@param {type:"boolean",label:"Remoção de Stop Words", description:"Remoção de palavras, conjunções, artigos e outros"}
remove_top_words = True #@param {type:"boolean",label:"Remoção de Top Words", description:"Remoção da porcentagem de palavras mais frequentes no texto"}
top_words_percentage = 0.01 #@param {type:"number",label:"Porcentagem de Top Words", description:"Porcentagem das palavras mais frequentes no texto"}
stemming = False #@param {type:"boolean",label:"Stemming"}
lemmatization = True #@param {type:"boolean",label:"Lemmatization"}
remove_punctuation = True #@param {type:"boolean",label:"Remoção de pontuação"}
remove_line_breaks = True #@param {type:"boolean",label:"Remoção de quebras de lina",description:"Remoção de quebras de linha por \n e \r"}
remove_accents = True #@param {type:"boolean",label:"Remoção de acentos"}
remove_html = True #@param {type:"boolean",label:"Remoção de HTML"}
remove_css = True #@param {type:"boolean",label:"Remoção de CSS"}
###Output
_____no_output_____
###Markdown
Translate the parameter values
###Code
language = "portuguese" if language == "Português" else "english"
if case == "Caixa baixa":
case = "Lower"
elif case == "Caixa alta":
case = "Upper"
else:
case = "NotApply"
###Output
_____no_output_____
###Markdown
Access to the datasetThe dataset used in this step is the same one uploaded through the platform. The type of the returned variable depends on the source file:- [pandas.DataFrame](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) for CSV and compressed CSV: .csv .csv.zip .csv.gz .csv.bz2 .csv.xz- [Binary IO stream](https://docs.python.org/3/library/io.htmlbinary-i-o) for other file types: .jpg .wav .zip .h5 .parquet etc
###Code
import pandas as pd
df = pd.read_csv(dataset)
columns = df.columns
###Output
_____no_output_____
###Markdown
Removal of rows with missing values in the target attributeIf there are rows where the target attribute contains missing values, those rows are removed.
###Code
import numpy as np
df.dropna(subset=[target], inplace=True)
X = df[target].to_numpy()
###Output
_____no_output_____
###Markdown
Calling the preprocessing class
###Code
preprocessing_tasks = {"case": case,
"remove_stop_words": remove_stop_words,
"remove_top_words": remove_top_words,
"top_words_percentage": top_words_percentage,
"stemming": stemming,
"lemmatization": lemmatization,
"remove_punctuation": remove_punctuation,
"remove_line_breaks": remove_line_breaks,
"remove_accents": remove_accents,
"remove_html": remove_html,
"remove_css": remove_css}
model_parameters = {'language': language}
from pre_processor import Preprocessor
processor = Preprocessor(preprocessing_tasks, model_parameters)
text = processor.preprocess(X)
###Output
_____no_output_____
###Markdown
Save changes to the datasetThe dataset will be saved (and overwritten with the respective changes) locally, in the experimentation container, using the `pandas.DataFrame.to_csv` function.
###Code
df[target] = text
df.to_csv(dataset, index=False)
###Output
_____no_output_____
###Markdown
Save the task results The platform keeps the contents of `/tmp/data/` for the subsequent tasks.
###Code
from joblib import dump
artifacts = {
"preprocessing_tasks": preprocessing_tasks,
"model_parameters": model_parameters,
"columns": columns,
"target": target
}
dump(artifacts, "/tmp/data/preprocessor.joblib")
###Output
_____no_output_____ |
deep_rl/reinforcement-learning-with-openai-gym.ipynb | ###Markdown
Reinforcement Learning Tutorial with OpenAI GymJay Urbain, PhD OpenAI Gym is a toolkit for developing and comparing reinforcement learning algorithms that includes many game environments. This notebook provides a tutorial with example implementations for using the OpenAI Gym environment: - Interacting with Gym. - Value iteration in deterministic environments. - Q-learning in deterministic environments. - Q-learning in non-deterministic environments. - **ON YOUR OWN:** Complete Q-learning for a Gym environment of your choice. References: https://gym.openai.com/ https://www.kaggle.com/kernels/scriptcontent/6183449/notebook First, review the Gym toolkit and sample environments: https://gym.openai.com/
###Code
import gym # openAi gym
from gym import envs
import numpy as np
import datetime
import keras
import tensorflow as tf
import matplotlib.pyplot as plt
%matplotlib inline
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from time import sleep
print("OK")
###Output
_____no_output_____
###Markdown
GymThere are many games that are available.
###Code
print(envs.registry.all())
###Output
_____no_output_____
###Markdown
We can start with a basic game called Taxi.
###Code
env = gym.make('Taxi-v2')
env.reset()
env.render()
###Output
_____no_output_____
###Markdown
Taxi-v2This task was introduced in [Dietterich2000] to illustrate some issues in hierarchical reinforcement learning. There are 4 locations (labeled by different letters) and your job is to pick up the passenger at one location and drop him off in another. You receive +20 points for a successful dropoff, and lose 1 point for every timestep it takes. There is also a 10 point penalty for illegal pick-up and drop-off actions.[Dietterich2000] T. G. Dietterich, "Hierarchical Reinforcement Learning with the MAXQ Value Function Decomposition", 2000.Actions: There are 6 discrete deterministic actions: - 0: move south - 1: move north - 2: move east - 3: move west - 4: pickup passenger - 5: dropoff passenger Rewards: There is a reward of -1 for each action and an additional reward of +20 for delivering the passenger. There is a reward of -10 for executing the actions "pickup" and "dropoff" illegally. Rendering: - blue: passenger - magenta: destination - yellow: empty taxi - green: full taxi - other letters: locations. https://gym.openai.com/envs/Taxi-v2/ Interacting with the Gym environment The OpenAI Gym toolkit follows the standard RL/Markov Decision Process (MDP) interface for handling interactions with the game. Source: [OpenAI](https://openai.com/) At each timestep, the agent chooses an action, and the environment returns an observation and a reward: *observation, reward, done, info = env.step(action)* * observation (object): an environment-specific object representing your observation of the environment. For example, pixel data from a camera, joint angles and joint velocities of a robot, or the board state in a board game like Taxi.* reward (float): amount of reward achieved by the previous action. The scale varies between environments, but the goal is always to increase your total reward.* done (boolean): whether it's time to reset the environment again. Most (but not all) tasks are divided up into well-defined episodes, and done being True indicates the episode has terminated. (For example, perhaps the pole tipped too far, or you lost your last life.)* info (dict): ignore, diagnostic information useful for debugging. Official evaluations of your agent are not allowed to use this for learning. To illustrate interacting with the environment, we can do some random steps:
###Code
# Let's first do some random steps in the game so you see how the game looks like
rew_tot=0
obs = env.reset() # always reset the environment
env.render() # display initial environment state
for _ in range(6):
action = env.action_space.sample() # sample random action from possible actions (action_space)
obs, rew, done, info = env.step(action) # execute action in the environment
rew_tot = rew_tot + rew # add to cumulative reward
env.render() # render update environment state
#Print the reward of these random action
print("Reward: %r" % rew_tot)
###Output
_____no_output_____
###Markdown
Actions Action (a): the action the agent provides to the environment. env.action_space defines the set of actions available to the agent. Actions available in the Taxi game $[0..5]$: * 0: move south* 1: move north* 2: move east * 3: move west * 4: pickup passenger* 5: dropoff passenger
###Code
# action space has 6 possible actions, the meaning of the actions is nice to know for us humans but the neural network will figure it out
print(env.action_space)
NUM_ACTIONS = env.action_space.n
print("Possible actions: [0..%a]" % (NUM_ACTIONS-1))
###Output
_____no_output_____
###Markdown
State State (s): Represents the board state of the game and is returned as the observation. In the Taxi game, the observation is an integer, one of 500 possible states. Each state can be translated into a graphic with the render function. *Note: this is specific to the Taxi game. In an Atari-style game the observation is the game screen with many colored pixels.*
###Code
print(env.observation_space)
print()
env.env.s=42 # some random number, you might recognize it
env.render()
env.env.s = 222 # and some other
env.render()
###Output
_____no_output_____
###Markdown
Markov decision process (MDP)The Taxi game is an example of a [Markov decision process](https://en.wikipedia.org/wiki/Markov_decision_process). The game can be described by states, the possible actions in a state (each leading to a next state with a certain probability), and the rewards associated with those state transitions.The [Markov property](https://en.wikipedia.org/wiki/Markov_property) means that the current state encapsulates all prior information.The Reinforcement Learning environment is modeled as an MDP. Given this environment, the agent takes actions to maximize the cumulative reward. Since the internal workings of the environment are essentially a "black box," its dynamics are hidden from the agent and must be learned from interaction. Policy Policy ($\pi$): The strategy that the agent uses to determine the next action `a` to take in state `s`. The optimal policy ($\pi^*$) is the policy that maximizes the expected cumulative reward. Our goal is to learn $\pi^*$ by solving the Bellman equation. Bellman equation $V^*(s) \leftarrow \max_a\sum_{s'}P(s'|s,a)[R(s,a,s') + \gamma V^*(s')]$where* *R(s,a,s')* - Reward for action a in state s, transitioning to s'.* *P(s'|s,a)* - Probability (expectation) of going to state s' given action a in state s. The Taxi game actions are deterministic, so the probability that the selected action leads to the expected state is 100%. * $\gamma$ - Discount rate for future rewards. It must be between 0 and 1 (strictly less than 1 here). The higher the gamma, the more focus on long-term rewards. The iteration may not converge if $\gamma=1$.The value iteration algorithm: * $V(s)$ represents the cumulative reward of state $s$. $V_{\pi}(s)$ is the expected cumulative reward of the current state $s$ under policy $\pi$. The Q-learning algorithm: * The action-value function $Q(s,a)$ represents the cumulative reward of being in the current state $s$ and taking action $a$ under policy $\pi$. Value iteration algorithm The idea is to iteratively calculate the value (expected long-term cumulative reward) of each state. The algorithm iterates over all states $s$ and possible actions $a$ to explore the value (cumulative discounted reward) $V[s]$ of a given state $s$. The algorithm iterates until $V[s]$ converges. The optimal policy $\pi^*$ is the action taken at each state $s$ that maximizes the value. This value iteration algorithm is an example of [dynamic programming](https://en.wikipedia.org/wiki/Dynamic_programming) (DP).
###Code
# Value iteration algorithm
NUM_ACTIONS = env.action_space.n
NUM_STATES = env.observation_space.n
V = np.zeros([NUM_STATES]) # Value for each state
Pi = np.zeros([NUM_STATES], dtype=int) # policy, iteratively updated, to get the optimal policy
gamma = 0.9 # discount factor
significant_improvement = 0.01
def best_action_value(s):
# finds the highest value action (max_a) in state s
best_a = None
best_value = float('-inf')
# iterate through all possible actions to find the best current action
for a in range (0, NUM_ACTIONS):
env.env.s = s
s_new, rew, done, info = env.step(a) #take the action
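        # one-step lookahead: immediate reward plus the discounted value of the successor state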
v = rew + gamma * V[s_new]
if v > best_value:
best_value = v
best_a = a
return best_a
iteration = 0
while True:
    # delta tracks the biggest change in V over this sweep, used as the convergence check
delta = 0
for s in range (0, NUM_STATES):
old_v = V[s]
action = best_action_value(s) # choose an action with the highest future reward
env.env.s = s # goto the state
s_new, reward, done, info = env.step(action) #take the action
V[s] = reward + gamma * V[s_new] # update Value for the state using Bellman equation
Pi[s] = action
delta = max(delta, np.abs(old_v - V[s]))
iteration += 1
if delta < significant_improvement:
print (iteration,' iterations done')
break
# Review how the algorithm solves the taxi game
rew_tot=0
obs= env.reset()
env.render()
done=False
while done != True:
action = Pi[obs]
obs, rew, done, info = env.step(action) # take step using selected action
rew_tot = rew_tot + rew
env.render()
# Print the reward of these actions
print("Reward: %r" % rew_tot)
###Output
_____no_output_____
###Markdown
Model-based vs model-free methods Value iteration solves the Taxi game. However, we have to know all environment states/transitions upfront for the algorithm to work. In Reinforcement Learning, this is referred to as a model-based method. If all states are not known upfront, we can learn states and actions during learning. This is referred to as a model-free method. Basic Q-learning algorithm In the [Q-learning](https://en.wikipedia.org/wiki/Q-learning) algorithm, the agent (Taxi) interacts with its environment to update its knowledge about the model so it can learn an optimal policy.The $Q$-matrix $Q(s,a)$ stores the current estimate of the maximum discounted future reward when the agent performs action $a$ in state $s$, so $Q(s, a)$ provides an estimate of the best course of action $a$ in state $s$. Upon convergence, the optimal policy $\pi^*$ can be read from the $Q$-matrix. After every step we update $Q(s,a)$ using the reward and the maximum $Q$-value of the new state resulting from the action. This update is done using the action-value form of the Bellman equation. $Q_{t+1}(s_t,a_t) = Q_{t}(s_t,a_t) + \alpha_t(s_t,a_t) \, [R_{t+1} + \gamma \max_{a} Q_t(s_{t+1},a) - Q_t(s_t,a_t)]$Notes: - Q-learning was the basis for Deep Q-learning (Deep referring to neural network technology). - The [Temporal difference learning](https://en.wikipedia.org/wiki/Temporal_difference_learning) and [Sarsa](https://en.wikipedia.org/wiki/State%E2%80%93action%E2%80%93reward%E2%80%93state%E2%80%93action) algorithms use similar value-update expressions.
###Code
NUM_ACTIONS = env.action_space.n
NUM_STATES = env.observation_space.n
Q = np.zeros([NUM_STATES, NUM_ACTIONS]) #You could also make this dynamic if you don't know all games states upfront
gamma = 0.9 # discount factor
alpha = 0.9 # learning rate
for episode in range(1,1001):
done = False
reward_total = 0
obs = env.reset()
while done != True:
action = np.argmax(Q[obs]) #choosing the action with the highest Q value
obs2, reward, done, info = env.step(action) #take the action
Q[obs,action] += alpha * (reward + gamma * np.max(Q[obs2]) - Q[obs,action]) #Update Q-marix using Bellman equation
#Q[obs,action] = rew + gamma * np.max(Q[obs2]) # same equation but with learning rate = 1 returns the basic Bellman equation
reward_total = reward_total + reward
obs = obs2
if episode % 50 == 0:
print('Episode {} Total Reward: {}'.format(episode, reward_total))
###Output
_____no_output_____
###Markdown
So, what is the magic, how does it solve it? The Q-matrix is initialized with zeros, so initially the agent moves randomly until it hits a state/action with a reward or a state/action with a penalty. For understanding, simplify the problem to reaching a certain drop-off position to get a reward. Random moves earn no reward, but by luck (enough brute-force tries) the state/action that gives a reward is eventually found. In the next game, the Q-matrix directs the immediately preceding actions toward that state/action; in the iteration after that, the actions before those, and so on. In other words, it solves "the puzzle" backwards, from the end result (drop off the passenger) towards the steps needed to get there, in an iterative fashion. Note that in the Taxi game there is a reward of -1 for each action. So if in some state the algorithm explored, e.g., south, which led to no value, the Q-matrix entry is updated to -1, and in the next iteration (because values were initialized to 0) it will try an action that has not yet been tried and is still at 0. So, by design, it also encourages systematic exploration of states and actions. If you set the learning rate to 1 the game is also solved. The reason is that there is only one reward (drop off the passenger), so the algorithm will find it whatever the learning rate. If a game has more places that give rewards, the learning rate determines whether it should prioritize longer-term or shorter-term rewards.
###Code
# Let's see how the algorithm solves the taxi game by following the policy to take actions delivering max value
rew_tot=0
obs= env.reset()
env.render()
done=False
while done != True:
action = np.argmax(Q[obs])
obs, rew, done, info = env.step(action) #take step using selected action
rew_tot = rew_tot + rew
env.render()
#Print the reward of these actions
print("Reward: %r" % rew_tot)
###Output
_____no_output_____
###Markdown
Exploration vs. exploitationThe Taxi game operates in a deterministic environment with one terminal state giving a reward: drop off the passenger, receive +20. 100% of the time, our algorithm *exploits* action = np.argmax(Q[obs]). To deal with more complex environments, we need to update our algorithm to explore. This is called the tradeoff between "exploitation" and "exploration".* Exploitation: Make the best decision given current information (go to the restaurant you know you like)* Exploration: Gather more information (try a new restaurant)Approaches: Epsilon Greedy * Exploit with probability $(1 - \epsilon)$ and explore with probability $\epsilon$; the rates of exploration and exploitation are fixed.Epsilon-Decreasing * Epsilon Greedy with epsilon decreasing over time. Thompson sampling * The rates of exploration and exploitation are dynamically updated with respect to the entire probability distribution. Epsilon-Decreasing with Softmax * Epsilon-Decreasing, however in the case of exploring a new option, we don't just pick an option at random; instead we estimate the outcome of each option and then pick based on that (this is the softmax part). [Frozen lakes](https://gym.openai.com/envs/FrozenLake-v0/) of OpenAI/Gym. FrozenLake provides a simple non-deterministic environment.Description: "Winter is here. You and your friends were tossing around a frisbee at the park when you made a wild throw that left the frisbee out in the middle of the lake. The water is mostly frozen, but there are a few holes where the ice has melted. If you step into one of those holes, you'll fall into the freezing water. At this time, there's an international frisbee shortage, so it's absolutely imperative that you navigate across the lake and retrieve the disc. However, the ice is slippery, so you won't always move in the direction you intend." Notice that the game is not deterministic anymore: "won't always move in the direction you intend". Note that it is really slippery; the chance you move in the direction you want is relatively small.S - Start G - Goal F - Frozen (safe) H - Hole (dead) Game layout:
###Code
env = gym.make('FrozenLake-v0')
rew_tot=0
obs= env.reset()
env.render()
env = gym.make('FrozenLake-v0')
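# FrozenLake-v0 is "slippery": the chosen action is followed only part of the time, otherwise the agent slides to a perpendicular direction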
env.reset()
NUM_ACTIONS = env.action_space.n
NUM_STATES = env.observation_space.n
Q = np.zeros([NUM_STATES, NUM_ACTIONS]) #You could also make this dynamic if you don't know all games states upfront
gamma = 0.95 # discount factor
alpha = 0.01 # learning rate
epsilon = 0.1 # exploration rate for the epsilon-greedy action selection
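# an alternative (illustrative assumption, not used in this tutorial) is a decaying schedule,
# e.g. recomputing epsilon = max(0.01, 0.5 * (0.9999 ** episode)) at the start of each episode,
# so the agent gradually shifts from exploration towards exploitation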
for episode in range(1,500001):
done = False
obs = env.reset()
while done != True:
if np.random.rand(1) < epsilon:
# exploration with a new option with probability epsilon, the epsilon greedy approach
action = env.action_space.sample()
else:
# exploitation
action = np.argmax(Q[obs])
obs2, reward, done, info = env.step(action) #take the action
        Q[obs,action] += alpha * (reward + gamma * np.max(Q[obs2]) - Q[obs,action]) # Update the Q-matrix using the Bellman equation
obs = obs2
if episode % 5000 == 0:
        # report every 5000 episodes: test 100 games to get an average score and check whether the game is solved
rew_average = 0.
for i in range(100):
obs= env.reset()
done=False
while done != True:
action = np.argmax(Q[obs])
obs, rew, done, info = env.step(action) #take step using selected action
rew_average += rew
rew_average=rew_average/100
        print('Episode {} average reward: {}'.format(episode,rew_average))
if rew_average > 0.8:
# FrozenLake-v0 defines "solving" as getting average reward of 0.78 over 100 consecutive trials.
# Test it on 0.8 so it is not a one-off lucky shot solving it
print("Frozen lake solved")
break
# Let's see how the algorithm solves the frozen-lakes game
rew_tot=0.
obs= env.reset()
done=False
while done != True:
action = np.argmax(Q[obs])
obs, rew, done, info = env.step(action) #take step using selected action
rew_tot += rew
env.render()
print("Reward:", rew_tot)
###Output
_____no_output_____ |
nbs/00a_torch_utils.ipynb | ###Markdown
Torch Utils> Some useful utils to extend pytorch functions
###Code
# export
class InfiniteDl():
def __init__(self, dl):
self.dl = dl
self.it = iter(self.dl)
    def next(self):
        try:
            return next(self.it)          # built-in next() works for any iterator, regardless of PyTorch version
        except StopIteration:
            self.it = iter(self.dl)       # restart the underlying dataloader and keep going
            return next(self.it)
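# Hedged usage check for InfiniteDl (the tiny DataLoader below is only for illustration)
_dl = torch.utils.data.DataLoader(torch.arange(4), batch_size=2)
_inf_dl = InfiniteDl(_dl)
test_eq(_inf_dl.next(), torch.tensor([0, 1]))
test_eq(_inf_dl.next(), torch.tensor([2, 3]))
test_eq(_inf_dl.next(), torch.tensor([0, 1]))  # wraps around once the underlying iterator is exhausted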
# export
def isin(t, ids):
''' Returns ByteTensor where True values are positions that contain ids. '''
return (t[..., None] == torch.tensor(ids, device=t.device)).any(-1)
t = torch.tensor([[12, 11, 0, 0],
[9, 1, 5, 0]])
mask = isin(t, [0, 1])
test_eq(mask, torch.tensor([[0, 0, 1, 1],
[0, 1, 0, 1]]).bool())
# export
def get_src_mask(cap_len, max_seq_len, device='cpu'):
''' cap_len: (bs,), max_seq_len: int '''
return torch.arange(max_seq_len, device=device)[None, :] >= cap_len[:, None]
cap_len = torch.tensor([2, 1, 3])
max_seq_len = 5
src_mask = get_src_mask(cap_len, max_seq_len)
test_eq(src_mask, torch.tensor([[False, False, True, True, True],
[False, True, True, True, True],
[False, False, False, True, True]]))
# export
class Normalizer():
" normalize input image to -1 ~ 1 "
def __init__(self, device='cpu'):
self.mean = torch.tensor([0.5, 0.5, 0.5], device=device)[None, ..., None, None] # (1, 3, 1, 1)
self.std = torch.tensor([0.5, 0.5, 0.5], device=device)[None, ..., None, None]
    def set_device(self, device='cpu'):
        # tensor.to() is not in-place, so the result must be assigned back
        self.mean = self.mean.to(device)
        self.std = self.std.to(device)
def encode(self, x):
"x: (bs, 3, _, _)"
return (x.float()/255-self.mean) / self.std
def decode(self, x):
x = x*self.std + self.mean
return (x.clamp(0., 1.)*255).long()
normalizer = Normalizer()
img = torch.randint(0, 255, (2, 3, 16, 16))
img_encoded = normalizer.encode(img)
img_decoded = normalizer.decode(img_encoded)
test_close(img, img_decoded, eps=2)
# test encoded img is in range -1~1
test_eq((img_encoded>=-1).long() + (img_encoded<=1).long(), torch.ones(2, 3, 16, 16).long()*2 )
# test decoded img is in range 0~255
test_eq((img_decoded>=0).long() + (img_decoded<=255).long(), torch.ones(2, 3, 16, 16).long()*2 )
# export
def to_device(tensors, device='cpu'):
return [t.to(device) for t in tensors]
def detach(tensors, is_to_cpu=False):
return [t.cpu().detach() if is_to_cpu else t.detach() for t in tensors]
def is_models_equal(model_1, model_2):
models_differ = 0
for key_item_1, key_item_2 in zip(model_1.state_dict().items(), model_2.state_dict().items()):
if torch.equal(key_item_1[1], key_item_2[1]):
pass
else:
models_differ += 1
if (key_item_1[0] == key_item_2[0]):
                print('Mismatch found at', key_item_1[0])
return False
else:
                print('Oops, something is wrong')
return False
if models_differ == 0:
return True
class MultiWrapper(nn.Module):
def __init__(self, layer, n_returns=1):
super().__init__()
assert n_returns>=1
self.layer = layer
self.n_returns = n_returns
def forward(self, x, *others):
if self.n_returns==1:
return self.layer(x)
else:
return (self.layer(x), *others[:self.n_returns-1])
class MultiSequential(nn.Sequential):
def forward(self, *inputs):
for module in self._modules.values():
if type(inputs) == tuple:
inputs = module(*inputs)
else:
inputs = module(inputs)
return inputs
class IdentityModule(nn.Module):
def forward(self, x):
return x
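# Hedged usage check for MultiWrapper / IdentityModule (illustrative, not part of the exported API)
_wrapped = MultiWrapper(IdentityModule(), n_returns=2)
_x_out, _extra = _wrapped(torch.ones(2), torch.zeros(3))
test_eq(_x_out, torch.ones(2))
test_eq(_extra, torch.zeros(3))  # the extra input is passed through unchanged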
# export
noise_gen = torch.distributions.normal.Normal(0, torch.exp(torch.tensor(-1/np.pi)))
noise = noise_gen.sample((2, 100))
test_eq(noise.shape, (2, 100))
###Output
_____no_output_____
###Markdown
Export -
###Code
# hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_torch_utils.ipynb.
Converted 02a_data_anime_heads.ipynb.
Converted 02b_data_birds.ipynb.
Converted 03a_model.ipynb.
Converted 04a_trainer_DAMSM.ipynb.
Converted 04b_trainer.ipynb.
Converted 05a_inference_anime_heads.ipynb.
Converted 05b_inference_birds.ipynb.
Converted index.ipynb.
|
IA/Deep Learning.ipynb | ###Markdown
Deep Learning_“Deep learning methods are representation-learning methods with multiple levels of representation, obtained by composing simple but nonlinear modules that each transform the representation at one level (starting with the raw input) into a representation at a higher, slightly more abstract level. [. . . ] The key aspect of deep learning is that these layers are not designed by human engineers: they are learned from data using a general-purpose learning procedure”_ – Yann LeCun, Yoshua Bengio, and Geoffrey Hinton, Nature 2015. The central goal of AI is to provide a set of algorithms and techniques that can be used to solve problems that humans perform intuitively and near automatically, but that are otherwise very challenging for computers. A great example of such a class of AI problems is interpreting and understanding the contents of an image: this is a task that a human can do with little to no effort, but it has proven to be extremely difficult for machines to accomplish. ANNs Artificial Neural Networks (ANNs) are a class of machine learning algorithms that learn from data and specialize in pattern recognition, inspired by the structure and function of the brain. The word “neural” is the adjective form of “neuron”, and “network” denotes a graph-like structure; therefore, an “Artificial Neural Network” is a computation system that attempts to mimic (or at least is inspired by) the neural connections in our nervous system. Our brain is composed of neurons, whose behaviour we can describe as a binary operation: once exposed to external inputs, the neuron either fires or it does not, with no intermediate 'grades' of firing. ANNs are described using models: values _x1, x2 and x3_ are inputs to the NN, while the constant 1 is called the bias, which is needed to avoid poorly fitted results. Each input is connected to the neuron through a weight, and the weighted sum of the inputs is 'evaluated' by the neuron to decide whether or not to trigger the output. A typical mathematical notation for the output is: > out = f(w1*x1 + w2*x2 + ... + wn*xn) Activation functions vary depending on both the application and the algorithm being implemented; the most common ones are shown below. Perceptron Pseudocode: 1. Initialize the weight vector w with small random values 2. Until the Perceptron converges: (a) Loop over each feature vector x_j and true class label d_j in our training set D (b) Take x_j and pass it through the network, calculating the output value: y_j = f(w(t) · x_j) (c) Update the weights w: w_i(t + 1) = w_i(t) + α(d_j − y_j)x_{j,i} for all features 0 <= i <= n. The value α is the learning rate: if it is set too high we move quickly in the 'right' direction but risk ending up in a non-optimal local/global minimum, while if it is too low, training may take an impractically long time to finish. Perceptron training is finished once all training samples are classified correctly or after a fixed number of iterations, or _epochs_.
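A minimal, hedged sketch of a few common activation functions (purely illustrative; the perceptron below uses the step function and the feedforward network later uses the sigmoid):
###Code
import numpy as np

# A few common activation functions (sketch for illustration)
def step(x): return np.where(x > 0, 1, 0)
def sigmoid(x): return 1.0 / (1 + np.exp(-x))
def tanh(x): return np.tanh(x)
def relu(x): return np.maximum(0, x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
for name, f in [('step', step), ('sigmoid', sigmoid), ('tanh', tanh), ('relu', relu)]:
    print(name, np.round(f(x), 3))
###Output
_____no_output_____
###Markdown
With those in mind, let's implement the perceptron: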
###Code
# implementing a simple perceptron
# import the necessary packages
import numpy as np
class Perceptron:
def __init__(self, N, alpha=0.1):
# initialize the weight matrix and store the learning rate
self.W = np.random.randn(N + 1) / np.sqrt(N)
self.alpha = alpha
def step(self, x):
# apply the step function
return 1 if x > 0 else 0
def fit(self, X, y, epochs=10):
# insert a column of 1's as the last entry in the feature
# matrix -- this little trick allows us to treat the bias
# as a trainable parameter within the weight matrix
X = np.c_[X, np.ones((X.shape[0]))]
# loop over the desired number of epochs
for epoch in np.arange(0, epochs):
# loop over each individual data point
for (x, target) in zip(X, y):
# take the dot product between the input features
# and the weight matrix, then pass this value
# through the step function to obtain the prediction
p = self.step(np.dot(x, self.W))
# only perform a weight update if our prediction
# does not match the target
if p != target:
# determine the error
error = p - target
# update the weight matrix
self.W += -self.alpha * error * x
def predict(self, X, addBias=True):
# ensure our input is a matrix
X = np.atleast_2d(X)
# check to see if the bias column should be added
if addBias:
# insert a column of 1's as the last entry in the feature
# matrix (bias)
X = np.c_[X, np.ones((X.shape[0]))]
# take the dot product between the input features and the
# weight matrix, then pass the value through the step
# function
return self.step(np.dot(X, self.W))
###Output
_____no_output_____
###Markdown
Example: OR, AND and XOR datasets. Let's apply the perceptron to solve the OR dataset first, then AND and XOR.
###Code
# construct the OR dataset
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [1]])
# define our perceptron and train it
print("training perceptron...")
p = Perceptron(X.shape[1], alpha=0.1)
p.fit(X, y, epochs=20)
# now that our perceptron is trained we can evaluate it
print("testing perceptron...")
# now that our network is trained, loop over the data points
for (x, target) in zip(X, y):
pred = p.predict(x)
print("data={}, ground-truth={}, pred={}".format(x, target[0], pred))
# construct the AND dataset
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [0], [0], [1]])
# define our perceptron and train it
print("training perceptron...")
p = Perceptron(X.shape[1], alpha=0.1)
p.fit(X, y, epochs=20)
# now that our perceptron is trained we can evaluate it
print("testing perceptron...")
# now that our network is trained, loop over the data points
for (x, target) in zip(X, y):
pred = p.predict(x)
print("data={}, ground-truth={}, pred={}".format(x, target[0], pred))
# construct the XOR dataset
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])
# define our perceptron and train it
print("training perceptron...")
p = Perceptron(X.shape[1], alpha=0.1)
p.fit(X, y, epochs=20)
# now that our perceptron is trained we can evaluate it
print("testing perceptron...")
# now that our network is trained, loop over the data points
for (x, target) in zip(X, y):
pred = p.predict(x)
print("data={}, ground-truth={}, pred={}".format(x, target[0], pred))
###Output
_____no_output_____
###Markdown
Feedforward Network. This is the most common NN architecture: there are no backward connections, and connections run only from nodes in layer _i_ to nodes in layer _i+1_. To describe a feedforward network, we normally use a sequence of integers to quickly and concisely denote the number of nodes in each layer; for example, the network in the figure above is a 3-2-3-2 feedforward network. Feedforward networks are trained with the backpropagation algorithm, which consists of two phases: **1.** The forward pass, where our inputs are passed through the network and output predictions are obtained (also known as the propagation phase). To propagate the values through the network and obtain the final classification, we take the dot product between the inputs and the weight values, followed by applying an activation function σ(). Assuming that our activation function is the sigmoid: **First layer:** σ((0 × 0.351) + (1 × 1.076) + (1 × 1.116)) = 0.899; σ((0 × −0.097) + (1 × −0.165) + (1 × 0.542)) = 0.593; σ((0 × 0.457) + (1 × −0.165) + (1 × −0.331)) = 0.378. **Output layer:** σ((0.899 × 0.383) + (0.593 × −0.327) + (0.378 × −0.329)) = 0.506. **2.** The backward pass, where we compute the gradient of the loss function at the final layer (i.e., the predictions layer) of the network and use this gradient to recursively apply the chain rule to update the weights in our network (also known as the weight update phase).
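As a quick, hedged numeric check of the worked forward pass above (the weights are the ones quoted in the text; small differences from the quoted values are just rounding):
###Code
import numpy as np

def sigmoid(x):
    return 1.0 / (1 + np.exp(-x))

# hidden-layer activations for the input (0, 1, 1) with the weights quoted above
h1 = sigmoid(0*0.351 + 1*1.076 + 1*1.116)
h2 = sigmoid(0*(-0.097) + 1*(-0.165) + 1*0.542)
h3 = sigmoid(0*0.457 + 1*(-0.165) + 1*(-0.331))
out = sigmoid(h1*0.383 + h2*(-0.327) + h3*(-0.329))
print(np.round([h1, h2, h3, out], 3))   # close to the 0.899, 0.593, 0.378 and 0.506 quoted above
###Output
_____no_output_____
###Markdown
Now let's implement the full network with backpropagation: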
###Code
class NeuralNetwork:
def __init__(self, layers, alpha=0.1):
# initialize the list of weights matrices, then store the
# network architecture and learning rate
self.W = []
self.layers = layers
self.alpha = alpha
# start looping from the index of the first layer but
# stop before we reach the last two layers
for i in np.arange(0, len(layers) - 2):
# randomly initialize a weight matrix connecting the
# number of nodes in each respective layer together,
# adding an extra node for the bias
w = np.random.randn(layers[i] + 1, layers[i + 1] + 1)
self.W.append(w / np.sqrt(layers[i]))
# the last two layers are a special case where the input
# connections need a bias term but the output does not
w = np.random.randn(layers[-2] + 1, layers[-1])
self.W.append(w / np.sqrt(layers[-2]))
def sigmoid(self, x):
# compute and return the sigmoid activation value for a
# given input value
return 1.0 / (1 + np.exp(-x))
def sigmoid_deriv(self, x):
# compute the derivative of the sigmoid function ASSUMING
# that `x` has already been passed through the `sigmoid`
# function
return x * (1 - x)
def fit(self, X, y, epochs=1000, displayUpdate=100):
# insert a column of 1's as the last entry in the feature
# matrix -- this little trick allows us to treat the bias
# as a trainable parameter within the weight matrix
X = np.c_[X, np.ones((X.shape[0]))]
# loop over the desired number of epochs
for epoch in np.arange(0, epochs):
# loop over each individual data point and train
# our network on it
for (x, target) in zip(X, y):
self.fit_partial(x, target)
# check to see if we should display a training update
if epoch == 0 or (epoch + 1) % displayUpdate == 0:
loss = self.calculate_loss(X, y)
print("[INFO] epoch={}, loss={:.7f}".format(
epoch + 1, loss))
def fit_partial(self, x, y):
# construct our list of output activations for each layer
# as our data point flows through the network; the first
# activation is a special case -- it's just the input
# feature vector itself
A = [np.atleast_2d(x)]
# FEEDFORWARD:
# loop over the layers in the network
for layer in np.arange(0, len(self.W)):
# feedforward the activation at the current layer by
# taking the dot product between the activation and
# the weight matrix -- this is called the "net input"
# to the current layer
net = A[layer].dot(self.W[layer])
# computing the "net output" is simply applying our
# non-linear activation function to the net input
out = self.sigmoid(net)
# once we have the net output, add it to our list of
# activations
A.append(out)
# BACKPROPAGATION
# the first phase of backpropagation is to compute the
# difference between our *prediction* (the final output
# activation in the activations list) and the true target
# value
error = A[-1] - y
# from here, we need to apply the chain rule and build our
# list of deltas `D`; the first entry in the deltas is
# simply the error of the output layer times the derivative
# of our activation function for the output value
D = [error * self.sigmoid_deriv(A[-1])]
# once you understand the chain rule it becomes super easy
# to implement with a `for` loop -- simply loop over the
# layers in reverse order (ignoring the last two since we
# already have taken them into account)
for layer in np.arange(len(A) - 2, 0, -1):
# the delta for the current layer is equal to the delta
# of the *previous layer* dotted with the weight matrix
# of the current layer, followed by multiplying the delta
# by the derivative of the non-linear activation function
# for the activations of the current layer
delta = D[-1].dot(self.W[layer].T)
delta = delta * self.sigmoid_deriv(A[layer])
D.append(delta)
# since we looped over our layers in reverse order we need to
# reverse the deltas
D = D[::-1]
# WEIGHT UPDATE PHASE
# loop over the layers
for layer in np.arange(0, len(self.W)):
# update our weights by taking the dot product of the layer
# activations with their respective deltas, then multiplying
# this value by some small learning rate and adding to our
# weight matrix -- this is where the actual "learning" takes
# place
self.W[layer] += -self.alpha * A[layer].T.dot(D[layer])
def predict(self, X, addBias=True):
# initialize the output prediction as the input features -- this
# value will be (forward) propagated through the network to
# obtain the final prediction
p = np.atleast_2d(X)
# check to see if the bias column should be added
if addBias:
# insert a column of 1's as the last entry in the feature
# matrix (bias)
p = np.c_[p, np.ones((p.shape[0]))]
# loop over our layers in the network
for layer in np.arange(0, len(self.W)):
# computing the output prediction is as simple as taking
# the dot product between the current activation value `p`
# and the weight matrix associated with the current layer,
# then passing this value through a non-linear activation
# function
p = self.sigmoid(np.dot(p, self.W[layer]))
# return the predicted value
return p
def calculate_loss(self, X, targets):
# make predictions for the input data points then compute
# the loss
targets = np.atleast_2d(targets)
predictions = self.predict(X, addBias=False)
loss = 0.5 * np.sum((predictions - targets) ** 2)
# return the loss
return loss
# let's try again
# construct the XOR dataset
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])
# define our 2-2-1 neural network and train it
nn = NeuralNetwork([2, 2, 1], alpha=0.5)
nn.fit(X, y, epochs=20000)
# now that our network is trained, loop over the XOR data points
for (x, target) in zip(X, y):
# make a prediction on the data point and display the result
# to our console
pred = nn.predict(x)[0][0]
step = 1 if pred > 0.5 else 0
print("[INFO] data={}, ground-truth={}, pred={:.4f}, step={}".format(x, target[0], pred, step))
###Output
_____no_output_____
###Markdown
MNIST HandWritten Example. The MNIST handwritten dataset (the small version bundled with scikit-learn) includes 1,797 example digits, each of which is an 8 × 8 grayscale image that, once flattened, becomes an 8x8 = 64-dim vector.
###Code
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn import datasets
# load the MNIST dataset and apply min/max scaling to scale the
# pixel intensity values to the range [0, 1] (each image is
# represented by an 8 x 8 = 64-dim feature vector)
print("[INFO] loading MNIST (sample) dataset...")
digits = datasets.load_digits()
data = digits.data.astype("float")
data = (data - data.min()) / (data.max() - data.min())
print("[INFO] samples: {}, dim: {}".format(data.shape[0],
data.shape[1]))
# Let's show the image to see how this array 'looks'
import matplotlib.pyplot as plt
%matplotlib inline
images_and_labels = list(zip(digits.images, digits.target))
for index, (image, label) in enumerate(images_and_labels[:4]):
plt.subplot(2, 4, index + 1)
plt.axis('off')
plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
plt.title('Training: %i' % label)
plt.show()
# construct the training and testing splits
(trainX, testX, trainY, testY) = train_test_split(data,
digits.target, test_size=0.25)
trainY
# convert the labels from integers to vectors, more info: https://blog.contactsunny.com/data-science/label-encoder-vs-one-hot-encoder-in-machine-learning
trainY = LabelBinarizer().fit_transform(trainY)
testY = LabelBinarizer().fit_transform(testY)
trainY
# train the network
print("[INFO] training network...")
nn = NeuralNetwork([trainX.shape[1], 32, 16, 10])
print("[INFO] {}".format(nn))
nn.fit(trainX, trainY, epochs=1000)
# evaluate the network
print("[INFO] evaluating network...")
predictions = nn.predict(testX)
predictions = predictions.argmax(axis=1)
print(classification_report(testY.argmax(axis=1), predictions))
###Output
_____no_output_____
###Markdown
ML Framework: Keras
###Code
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from keras.models import Sequential
from keras.layers.core import Dense
from keras.optimizers import SGD
from sklearn import datasets
import matplotlib.pyplot as plt
import numpy as np
from keras.datasets import mnist
# grab the MNIST dataset (if this is your first time running this
# script, the download may take a minute -- the 55MB MNIST dataset
# will be downloaded)
print("[INFO] loading MNIST (full) dataset...")
(trainX, trainY), (testX, testY) = mnist.load_data()
# building the input vector from the 28x28 pixels
trainX = trainX.reshape(60000, 784)
testX = testX.reshape(10000, 784)
trainX = trainX.astype('float32')
testX = testX.astype('float32')
trainX /= 255
testX /= 255
# convert the labels from integers to vectors
lb = LabelBinarizer()
trainY = lb.fit_transform(trainY)
testY = lb.transform(testY)
###Output
_____no_output_____
###Markdown
###Code
# define the 784-256-128-10 architecture using Keras
model = Sequential()
model.add(Dense(256, input_shape=(784,), activation="sigmoid"))
model.add(Dense(128, activation="sigmoid"))
model.add(Dense(10, activation="softmax"))
# train the model using SGD
print("[INFO] training network...")
sgd = SGD(0.01) # Learning rate 0.01
model.compile(loss="categorical_crossentropy", optimizer=sgd,
metrics=["accuracy"])
H = model.fit(trainX, trainY, validation_data=(testX, testY),
epochs=100, batch_size=128)
# evaluate the network
print("[INFO] evaluating network...")
predictions = model.predict(testX, batch_size=128)
print(classification_report(testY.argmax(axis=1),
predictions.argmax(axis=1),
target_names=[str(x) for x in lb.classes_]))
# plot the training loss and accuracy
plt.style.use("ggplot")
plt.figure()
plt.plot(np.arange(0, 100), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, 100), H.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, 100), H.history["acc"], label="train_acc")
plt.plot(np.arange(0, 100), H.history["val_acc"], label="val_acc")
plt.title("Training Loss and Accuracy")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend()
# Saves and load models
model.save("mnist-udi.hdf5")
from keras.models import load_model
model_1 = load_model("mnist-udi.hdf5")
# evaluate the network
print("[INFO] evaluating network...")
predictions = model_1.predict(testX, batch_size=128)
print(classification_report(testY.argmax(axis=1),
predictions.argmax(axis=1),
target_names=[str(x) for x in lb.classes_]))
###Output
[INFO] evaluating network...
precision recall f1-score support
0 0.94 0.98 0.96 980
1 0.97 0.98 0.97 1135
2 0.92 0.91 0.91 1032
3 0.91 0.91 0.91 1010
4 0.92 0.93 0.93 982
5 0.90 0.86 0.88 892
6 0.93 0.94 0.94 958
7 0.94 0.93 0.93 1028
8 0.89 0.90 0.89 974
9 0.91 0.91 0.91 1009
micro avg 0.93 0.93 0.93 10000
macro avg 0.92 0.92 0.92 10000
weighted avg 0.92 0.93 0.92 10000
|
notebooks/losses_evaluation/Dstripes/basic/ell/convolutional/VAE/DstripesVAE_Convolutional_reconst_1ell_1sharpdiff.ipynb | ###Markdown
Settings
###Code
%env TF_KERAS = 1
import os
sep_local = os.path.sep
import sys
sys.path.append('..'+sep_local+'..')
print(sep_local)
os.chdir('..'+sep_local+'..'+sep_local+'..'+sep_local+'..'+sep_local+'..')
print(os.getcwd())
import tensorflow as tf
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Dataset loading
###Code
dataset_name='Dstripes'
import tensorflow as tf
train_ds = tf.data.Dataset.from_generator(
lambda: training_generator,
output_types=tf.float32 ,
output_shapes=tf.TensorShape((batch_size, ) + image_size)
)
test_ds = tf.data.Dataset.from_generator(
lambda: testing_generator,
output_types=tf.float32 ,
output_shapes=tf.TensorShape((batch_size, ) + image_size)
)
_instance_scale=1.0
for data in train_ds:
_instance_scale = float(data[0].numpy().max())
break
_instance_scale
import numpy as np
from collections.abc import Iterable
if isinstance(inputs_shape, Iterable):
_outputs_shape = np.prod(inputs_shape)
_outputs_shape
###Output
_____no_output_____
###Markdown
Model's Layers definition
###Code
units=20
c=50
menc_lays = [
tf.keras.layers.Conv2D(filters=units//2, kernel_size=3, strides=(2, 2), activation='relu'),
tf.keras.layers.Conv2D(filters=units*9//2, kernel_size=3, strides=(2, 2), activation='relu'),
tf.keras.layers.Flatten(),
# No activation
tf.keras.layers.Dense(latents_dim)
]
venc_lays = [
tf.keras.layers.Conv2D(filters=units//2, kernel_size=3, strides=(2, 2), activation='relu'),
tf.keras.layers.Conv2D(filters=units*9//2, kernel_size=3, strides=(2, 2), activation='relu'),
tf.keras.layers.Flatten(),
# No activation
tf.keras.layers.Dense(latents_dim)
]
dec_lays = [
tf.keras.layers.Dense(units=units*c*c, activation=tf.nn.relu),
tf.keras.layers.Reshape(target_shape=(c , c, units)),
tf.keras.layers.Conv2DTranspose(filters=units, kernel_size=3, strides=(2, 2), padding="SAME", activation='relu'),
tf.keras.layers.Conv2DTranspose(filters=units*3, kernel_size=3, strides=(2, 2), padding="SAME", activation='relu'),
# No activation
tf.keras.layers.Conv2DTranspose(filters=3, kernel_size=3, strides=(1, 1), padding="SAME")
]
###Output
_____no_output_____
###Markdown
Model definition
###Code
model_name = dataset_name+'VAE_Convolutional_reconst_1ell_1sharpdiff'
experiments_dir='experiments'+sep_local+model_name
from training.autoencoding_basic.autoencoders.VAE import VAE as AE
inputs_shape=image_size
variables_params = \
[
{
'name': 'inference_mean',
'inputs_shape':inputs_shape,
'outputs_shape':latents_dim,
'layers': menc_lays
}
,
{
'name': 'inference_logvariance',
'inputs_shape':inputs_shape,
'outputs_shape':latents_dim,
'layers': venc_lays
}
,
{
'name': 'generative',
'inputs_shape':latents_dim,
'outputs_shape':inputs_shape,
'layers':dec_lays
}
]
from utils.data_and_files.file_utils import create_if_not_exist
_restore = os.path.join(experiments_dir, 'var_save_dir')
create_if_not_exist(_restore)
_restore
#to restore trained model, set filepath=_restore
ae = AE(
name=model_name,
latents_dim=latents_dim,
batch_size=batch_size,
variables_params=variables_params,
filepath=None
)
from evaluation.quantitive_metrics.sharp_difference import prepare_sharpdiff
from statistical.losses_utilities import similarity_to_distance
from statistical.ae_losses import expected_loglikelihood as ell
ae.compile(loss={'x_logits': lambda x_true, x_logits: ell(x_true, x_logits)+similarity_to_distance(prepare_sharpdiff([ae.batch_size]+ae.get_inputs_shape()))(x_true, x_logits)})
###Output
_____no_output_____
###Markdown
Callbacks
###Code
from training.callbacks.sample_generation import SampleGeneration
from training.callbacks.save_model import ModelSaver
es = tf.keras.callbacks.EarlyStopping(
monitor='loss',
min_delta=1e-12,
patience=12,
verbose=1,
restore_best_weights=False
)
ms = ModelSaver(filepath=_restore)
csv_dir = os.path.join(experiments_dir, 'csv_dir')
create_if_not_exist(csv_dir)
csv_dir = os.path.join(csv_dir, ae.name+'.csv')
csv_log = tf.keras.callbacks.CSVLogger(csv_dir, append=True)
csv_dir
image_gen_dir = os.path.join(experiments_dir, 'image_gen_dir')
create_if_not_exist(image_gen_dir)
sg = SampleGeneration(latents_shape=latents_dim, filepath=image_gen_dir, gen_freq=5, save_img=True, gray_plot=False)
###Output
_____no_output_____
###Markdown
Model Training
###Code
from training.callbacks.disentangle_supervied import DisentanglementSuperviedMetrics
from training.callbacks.disentangle_unsupervied import DisentanglementUnsuperviedMetrics
gts_mertics = DisentanglementSuperviedMetrics(
ground_truth_data=eval_dataset,
representation_fn=lambda x: ae.encode(x),
random_state=np.random.RandomState(0),
file_Name=gts_csv,
num_train=10000,
num_test=100,
batch_size=batch_size,
continuous_factors=False,
gt_freq=10
)
gtu_mertics = DisentanglementUnsuperviedMetrics(
ground_truth_data=eval_dataset,
representation_fn=lambda x: ae.encode(x),
random_state=np.random.RandomState(0),
file_Name=gtu_csv,
num_train=20000,
num_test=500,
batch_size=batch_size,
gt_freq=10
)
ae.fit(
x=train_ds,
input_kw=None,
steps_per_epoch=int(1e4),
epochs=int(1e6),
verbose=2,
callbacks=[ es, ms, csv_log, sg, gts_mertics, gtu_mertics],
workers=-1,
use_multiprocessing=True,
validation_data=test_ds,
validation_steps=int(1e4)
)
###Output
_____no_output_____
###Markdown
Model Evaluation inception_score
###Code
from evaluation.generativity_metrics.inception_metrics import inception_score
is_mean, is_sigma = inception_score(ae, tolerance_threshold=1e-6, max_iteration=200)
print(f'inception_score mean: {is_mean}, sigma: {is_sigma}')
###Output
_____no_output_____
###Markdown
Frechet_inception_distance
###Code
from evaluation.generativity_metrics.inception_metrics import frechet_inception_distance
fis_score = frechet_inception_distance(ae, training_generator, tolerance_threshold=1e-6, max_iteration=10, batch_size=32)
print(f'frechet inception distance: {fis_score}')
###Output
_____no_output_____
###Markdown
perceptual_path_length_score
###Code
from evaluation.generativity_metrics.perceptual_path_length import perceptual_path_length_score
ppl_mean_score = perceptual_path_length_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200, batch_size=32)
print(f'perceptual path length score: {ppl_mean_score}')
###Output
_____no_output_____
###Markdown
precision score
###Code
from evaluation.generativity_metrics.precision_recall import precision_score
_precision_score = precision_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200)
print(f'precision score: {_precision_score}')
###Output
_____no_output_____
###Markdown
recall score
###Code
from evaluation.generativity_metrics.precision_recall import recall_score
_recall_score = recall_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200)
print(f'recall score: {_recall_score}')
###Output
_____no_output_____
###Markdown
Image Generation image reconstruction Training dataset
###Code
%load_ext autoreload
%autoreload 2
from training.generators.image_generation_testing import reconstruct_from_a_batch
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'reconstruct_training_images_like_a_batch_dir')
create_if_not_exist(save_dir)
reconstruct_from_a_batch(ae, training_generator, save_dir)
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'reconstruct_testing_images_like_a_batch_dir')
create_if_not_exist(save_dir)
reconstruct_from_a_batch(ae, testing_generator, save_dir)
###Output
_____no_output_____
###Markdown
with Randomness
###Code
from training.generators.image_generation_testing import generate_images_like_a_batch
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'generate_training_images_like_a_batch_dir')
create_if_not_exist(save_dir)
generate_images_like_a_batch(ae, training_generator, save_dir)
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'generate_testing_images_like_a_batch_dir')
create_if_not_exist(save_dir)
generate_images_like_a_batch(ae, testing_generator, save_dir)
###Output
_____no_output_____
###Markdown
Complete Randomness
###Code
from training.generators.image_generation_testing import generate_images_randomly
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'random_synthetic_dir')
create_if_not_exist(save_dir)
generate_images_randomly(ae, save_dir)
from training.generators.image_generation_testing import interpolate_a_batch
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'interpolate_dir')
create_if_not_exist(save_dir)
interpolate_a_batch(ae, testing_generator, save_dir)
###Output
100%|██████████| 15/15 [00:00<00:00, 19.90it/s]
|
C1_classification_vec_spaces/Week2/C1_W2_Assignment.ipynb | ###Markdown
Assignment 2: Naive Bayes. Welcome to week two of this specialization. You will learn about Naive Bayes. Concretely, you will be using Naive Bayes for sentiment analysis on tweets. Given a tweet, you will decide if it has a positive sentiment or a negative one. Specifically you will: * Train a naive bayes model on a sentiment analysis task * Test using your model * Compute ratios of positive words to negative words * Do some error analysis * Predict on your own tweet. You may already be familiar with Naive Bayes and its justification in terms of conditional probabilities and independence. * In this week's lectures and assignments we used the ratio of probabilities between positive and negative sentiments. * This approach gives us simpler formulas for these 2-way classification tasks. Load the cell below to import some packages. You may want to browse the documentation of unfamiliar libraries and functions.
###Code
from utils import process_tweet, lookup
import pdb
from nltk.corpus import stopwords, twitter_samples
import numpy as np
import pandas as pd
import nltk
import string
from nltk.tokenize import TweetTokenizer
from os import getcwd
###Output
_____no_output_____
###Markdown
If you are running this notebook on your local computer, don't forget to download the twitter samples and stopwords from nltk: `nltk.download('stopwords')` and `nltk.download('twitter_samples')`.
###Code
# add folder, tmp2, from our local workspace containing pre-downloaded corpora files to nltk's data path
filePath = f"{getcwd()}/../tmp2/"
nltk.data.path.append(filePath)
# get the sets of positive and negative tweets
all_positive_tweets = twitter_samples.strings('positive_tweets.json')
all_negative_tweets = twitter_samples.strings('negative_tweets.json')
# split the data into two pieces, one for training and one for testing (validation set)
test_pos = all_positive_tweets[4000:]
train_pos = all_positive_tweets[:4000]
test_neg = all_negative_tweets[4000:]
train_neg = all_negative_tweets[:4000]
train_x = train_pos + train_neg
test_x = test_pos + test_neg
# avoid assumptions about the length of all_positive_tweets
train_y = np.append(np.ones(len(train_pos)), np.zeros(len(train_neg)))
test_y = np.append(np.ones(len(test_pos)), np.zeros(len(test_neg)))
###Output
_____no_output_____
###Markdown
Part 1: Process the Data. For any machine learning project, once you've gathered the data, the first step is to process it to make useful inputs to your model. - **Remove noise**: You will first want to remove noise from your data -- that is, remove words that don't tell you much about the content. These include all common words like 'I, you, are, is, etc...' that would not give us enough information on the sentiment. - We'll also remove stock market tickers, retweet symbols, hyperlinks, and hashtags because they cannot tell you a lot about the sentiment. - You also want to remove all the punctuation from a tweet. The reason for doing this is that we want to treat words with or without punctuation as the same word, instead of treating "happy", "happy?", "happy!", "happy," and "happy." as different words. - Finally you want to use stemming to keep track of only one variation of each word. In other words, we'll treat "motivation", "motivated", and "motivate" similarly by grouping them within the same stem of "motiv-". We have given you the function `process_tweet()` that does this for you.
###Code
custom_tweet = "RT @Twitter @chapagain Hello There! Have a great day. :) #good #morning http://chapagain.com.np"
# print cleaned tweet
print(process_tweet(custom_tweet))
###Output
['hello', 'great', 'day', ':)', 'good', 'morn']
###Markdown
Part 1.1 Implementing your helper functions. To help train your naive bayes model, you will need to build a dictionary where the keys are a (word, label) tuple and the values are the corresponding frequency. Note that the labels we'll use here are 1 for positive and 0 for negative. You will also implement a `lookup()` helper function that takes in the `freqs` dictionary, a word, and a label (1 or 0) and returns the number of times that word and label tuple appears in the collection of tweets. For example: given a list of tweets `["i am rather excited", "you are rather happy"]` and the label 1, the function will return a dictionary that contains the following key-value pairs:{ ("rather", 1): 2 ("happi", 1) : 1 ("excit", 1) : 1}- Notice how for each word in the given string, the same label 1 is assigned to each word.- Notice how the words "i" and "am" are not saved, since they were removed by process_tweet because they are stopwords.- Notice how the word "rather" appears twice in the list of tweets, and so its count value is 2. Instructions: Create a function `count_tweets()` that takes a list of tweets as input, cleans all of them, and returns a dictionary.- The key in the dictionary is a tuple containing the stemmed word and its class label, e.g. ("happi",1).- The value is the number of times this word appears in the given collection of tweets (an integer). Hints Please use the `process_tweet` function that was imported above, and then store the words in their respective dictionaries and sets. You may find it useful to use the `zip` function to match each element in `tweets` with each element in `ys`. Remember to check if the key in the dictionary exists before adding that key to the dictionary, or incrementing its value. Assume that the `result` dictionary that is input will contain clean key-value pairs (you can assume that the values will be integers that can be incremented). It is good practice to check the datatype before incrementing the value, but it's not required here.
###Code
# UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def count_tweets(result, tweets, ys):
'''
Input:
result: a dictionary that will be used to map each pair to its frequency
tweets: a list of tweets
ys: a list corresponding to the sentiment of each tweet (either 0 or 1)
Output:
result: a dictionary mapping each pair to its frequency
'''
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
for y, tweet in zip(ys, tweets):
for word in process_tweet(tweet):
# define the key, which is the word and label tuple
pair = word, y
# if the key exists in the dictionary, increment the count
if pair in result:
result[pair] += 1
# else, if the key is new, add it to the dictionary and set the count to 1
else:
result[pair] = 1
### END CODE HERE ###
return result
# Testing your function
result = {}
tweets = ['i am happy', 'i am tricked', 'i am sad', 'i am tired', 'i am tired']
ys = [1, 0, 0, 0, 0]
count_tweets(result, tweets, ys)
###Output
_____no_output_____
###Markdown
**Expected Output**: {('happi', 1): 1, ('trick', 0): 1, ('sad', 0): 1, ('tire', 0): 2} Part 2: Train your model using Naive BayesNaive bayes is an algorithm that could be used for sentiment analysis. It takes a short time to train and also has a short prediction time. So how do you train a Naive Bayes classifier?- The first part of training a naive bayes classifier is to identify the number of classes that you have.- You will create a probability for each class.$P(D_{pos})$ is the probability that the document is positive.$P(D_{neg})$ is the probability that the document is negative.Use the formulas as follows and store the values in a dictionary:$$P(D_{pos}) = \frac{D_{pos}}{D}\tag{1}$$$$P(D_{neg}) = \frac{D_{neg}}{D}\tag{2}$$Where $D$ is the total number of documents, or tweets in this case, $D_{pos}$ is the total number of positive tweets and $D_{neg}$ is the total number of negative tweets. Prior and LogpriorThe prior probability represents the underlying probability in the target population that a tweet is positive versus negative. In other words, if we had no specific information and blindly picked a tweet out of the population set, what is the probability that it will be positive versus that it will be negative? That is the "prior".The prior is the ratio of the probabilities $\frac{P(D_{pos})}{P(D_{neg})}$.We can take the log of the prior to rescale it, and we'll call this the logprior$$\text{logprior} = log \left( \frac{P(D_{pos})}{P(D_{neg})} \right) = log \left( \frac{D_{pos}}{D_{neg}} \right)$$.Note that $log(\frac{A}{B})$ is the same as $log(A) - log(B)$. So the logprior can also be calculated as the difference between two logs:$$\text{logprior} = \log (P(D_{pos})) - \log (P(D_{neg})) = \log (D_{pos}) - \log (D_{neg})\tag{3}$$ Positive and Negative Probability of a WordTo compute the positive probability and the negative probability for a specific word in the vocabulary, we'll use the following inputs:- $freq_{pos}$ and $freq_{neg}$ are the frequencies of that specific word in the positive or negative class. In other words, the positive frequency of a word is the number of times the word is counted with the label of 1.- $N_{pos}$ and $N_{neg}$ are the total number of positive and negative words for all documents (for all tweets), respectively.- $V$ is the number of unique words in the entire set of documents, for all classes, whether positive or negative.We'll use these to compute the positive and negative probability for a specific word using this formula:$$ P(W_{pos}) = \frac{freq_{pos} + 1}{N_{pos} + V}\tag{4} $$$$ P(W_{neg}) = \frac{freq_{neg} + 1}{N_{neg} + V}\tag{5} $$Notice that we add the "+1" in the numerator for additive smoothing. This [wiki article](https://en.wikipedia.org/wiki/Additive_smoothing) explains more about additive smoothing. Log likelihoodTo compute the loglikelihood of that very same word, we can implement the following equations:$$\text{loglikelihood} = \log \left(\frac{P(W_{pos})}{P(W_{neg})} \right)\tag{6}$$ Create `freqs` dictionary- Given your `count_tweets()` function, you can compute a dictionary called `freqs` that contains all the frequencies.- In this `freqs` dictionary, the key is the tuple (word, label)- The value is the number of times it has appeared.We will use this dictionary in several parts of this assignment.
###Code
# Build the freqs dictionary for later uses
freqs = count_tweets({}, train_x, train_y)
###Output
_____no_output_____
###Markdown
InstructionsGiven a freqs dictionary, `train_x` (a list of tweets) and a `train_y` (a list of labels for each tweet), implement a naive bayes classifier. Calculate $V$- You can then compute the number of unique words that appear in the `freqs` dictionary to get your $V$ (you can use the `set` function). Calculate $freq_{pos}$ and $freq_{neg}$- Using your `freqs` dictionary, you can compute the positive and negative frequency of each word $freq_{pos}$ and $freq_{neg}$. Calculate $N_{pos}$ and $N_{neg}$- Using `freqs` dictionary, you can also compute the total number of positive words and total number of negative words $N_{pos}$ and $N_{neg}$. Calculate $D$, $D_{pos}$, $D_{neg}$- Using the `train_y` input list of labels, calculate the number of documents (tweets) $D$, as well as the number of positive documents (tweets) $D_{pos}$ and number of negative documents (tweets) $D_{neg}$.- Calculate the probability that a document (tweet) is positive $P(D_{pos})$, and the probability that a document (tweet) is negative $P(D_{neg})$ Calculate the logprior- the logprior is $log(D_{pos}) - log(D_{neg})$ Calculate log likelihood- Finally, you can iterate over each word in the vocabulary, use your `lookup` function to get the positive frequencies, $freq_{pos}$, and the negative frequencies, $freq_{neg}$, for that specific word.- Compute the positive probability of each word $P(W_{pos})$, negative probability of each word $P(W_{neg})$ using equations 4 & 5.$$ P(W_{pos}) = \frac{freq_{pos} + 1}{N_{pos} + V}\tag{4} $$$$ P(W_{neg}) = \frac{freq_{neg} + 1}{N_{neg} + V}\tag{5} $$**Note:** We'll use a dictionary to store the log likelihoods for each word. The key is the word, the value is the log likelihood of that word).- You can then compute the loglikelihood: $log \left( \frac{P(W_{pos})}{P(W_{neg})} \right)\tag{6}$.
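As a quick, hedged numeric sanity check of equations 4-6 before filling in the function below (the counts are made up for illustration only):
###Code
import numpy as np

# toy counts (assumed values, not from the real corpus)
freq_pos, freq_neg = 3, 1
N_pos, N_neg, V = 13, 12, 10
p_w_pos = (freq_pos + 1) / (N_pos + V)     # equation 4, with the +1 Laplace smoothing
p_w_neg = (freq_neg + 1) / (N_neg + V)     # equation 5
print(p_w_pos, p_w_neg, np.log(p_w_pos / p_w_neg))   # ~0.174, ~0.091, ~0.65 (equation 6)
###Output
_____no_output_____
###Markdown
Now implement the full training function: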
###Code
# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def train_naive_bayes(freqs, train_x, train_y):
'''
Input:
freqs: dictionary from (word, label) to how often the word appears
train_x: a list of tweets
        train_y: a list of labels corresponding to the tweets (0,1)
Output:
logprior: the log prior. (equation 3 above)
loglikelihood: the log likelihood of you Naive bayes equation. (equation 6 above)
'''
loglikelihood = {}
logprior = 0
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# calculate V, the number of unique words in the vocabulary
vocab = set([pair[0] for pair in freqs.keys()])
V = len(vocab)
# calculate N_pos and N_neg
N_pos = N_neg = 0
for pair in freqs.keys():
# if the label is positive (greater than zero)
if pair[1] > 0:
# Increment the number of positive words by the count for this (word, label) pair
N_pos += freqs.get(pair)
# else, the label is negative
else:
# increment the number of negative words by the count for this (word,label) pair
N_neg += freqs.get(pair)
# Calculate D, the number of documents
D = len(train_y)
# Calculate D_pos, the number of positive documents (*hint: use sum(<np_array>))
D_pos = sum(train_y)
# Calculate D_neg, the number of negative documents (*hint: compute using D and D_pos)
D_neg = D - D_pos
# Calculate logprior
logprior = np.log(D_pos) - np.log(D_neg)
# For each word in the vocabulary...
for word in vocab:
# get the positive and negative frequency of the word
freq_pos = lookup(freqs,word,1)
freq_neg = lookup(freqs,word,0)
# calculate the probability that each word is positive, and negative
p_w_pos = (freq_pos + 1.)/(N_pos + V)
p_w_neg = (freq_neg + 1.)/(N_neg + V)
# calculate the log likelihood of the word
loglikelihood[word] = np.log(p_w_pos/p_w_neg)
### END CODE HERE ###
return logprior, loglikelihood
# UNQ_C3 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# You do not have to input any code in this cell, but it is relevant to grading, so please do not change anything
logprior, loglikelihood = train_naive_bayes(freqs, train_x, train_y)
print(logprior)
print(len(loglikelihood))
###Output
0.0
9089
###Markdown
**Expected Output**: 0.0 and 9089. Part 3: Test your naive bayes. Now that we have the `logprior` and `loglikelihood`, we can test the naive bayes function by making predictions on some tweets! Implement `naive_bayes_predict`. **Instructions**: Implement the `naive_bayes_predict` function to make predictions on tweets.* The function takes in the `tweet`, `logprior`, `loglikelihood`.* It returns the probability that the tweet belongs to the positive or negative class.* For each tweet, sum up the loglikelihoods of each word in the tweet.* Also add the logprior to this sum to get the predicted sentiment of that tweet.$$ p = logprior + \sum_i^N (loglikelihood_i)$$ Note: We calculate the prior from the training data, and the training data is evenly split between positive and negative labels (4000 positive and 4000 negative tweets). This means that the ratio of positive to negative is 1, and the logprior is 0. The value of 0.0 means that when we add the logprior to the log likelihood, we're just adding zero to the log likelihood. However, please remember to include the logprior, because whenever the data is not perfectly balanced, the logprior will be a non-zero value.
###Code
# UNQ_C4 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def naive_bayes_predict(tweet, logprior, loglikelihood):
'''
Input:
tweet: a string
logprior: a number
loglikelihood: a dictionary of words mapping to numbers
Output:
        p: the sum of all the loglikelihoods of each word in the tweet (if found in the dictionary) + logprior (a number)
'''
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# process the tweet to get a list of words
word_l = process_tweet(tweet)
# initialize probability to zero
p = 0
# add the logprior
p += logprior
for word in word_l:
# check if the word exists in the loglikelihood dictionary
if word in loglikelihood:
# add the log likelihood of that word to the probability
p += loglikelihood.get(word)
### END CODE HERE ###
return p
# UNQ_C5 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# You do not have to input any code in this cell, but it is relevant to grading, so please do not change anything
# Experiment with your own tweet.
my_tweet = 'She smiled.'
p = naive_bayes_predict(my_tweet, logprior, loglikelihood)
print('The expected output is', p)
###Output
The expected output is 1.5740278623499175
###Markdown
**Expected Output**: - The expected output is around 1.57 - The sentiment is positive. Implement test_naive_bayes. **Instructions**: * Implement `test_naive_bayes` to check the accuracy of your predictions.* The function takes in your `test_x`, `test_y`, `logprior`, and `loglikelihood`.* It returns the accuracy of your model.* First, use the `naive_bayes_predict` function to make predictions for each tweet in `test_x`.
###Code
# UNQ_C6 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def test_naive_bayes(test_x, test_y, logprior, loglikelihood):
"""
Input:
test_x: A list of tweets
test_y: the corresponding labels for the list of tweets
logprior: the logprior
loglikelihood: a dictionary with the loglikelihoods for each word
Output:
accuracy: (# of tweets classified correctly)/(total # of tweets)
"""
accuracy = 0 # return this properly
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
y_hats = []
for tweet in test_x:
# if the prediction is > 0
if naive_bayes_predict(tweet, logprior, loglikelihood) > 0:
# the predicted class is 1
y_hat_i = 1
else:
# otherwise the predicted class is 0
y_hat_i = 0
# append the predicted class to the list y_hats
y_hats.append(y_hat_i)
# error is the average of the absolute values of the differences between y_hats and test_y
error = np.sum(np.squeeze(np.array(test_y)) != np.squeeze(np.array(y_hats)))/test_y.size
# Accuracy is 1 minus the error
accuracy = 1 - error
### END CODE HERE ###
return accuracy
len(np.squeeze(np.array([1,0,0,2])))
print(type(train_y))
print("Naive Bayes accuracy = %0.4f" %
(test_naive_bayes(test_x, test_y, logprior, loglikelihood)))
###Output
Naive Bayes accuracy = 0.9940
###Markdown
**Expected Accuracy**:0.9940
###Code
# UNQ_C7 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# You do not have to input any code in this cell, but it is relevant to grading, so please do not change anything
# Run this cell to test your function
for tweet in ['I am happy', 'I am bad', 'this movie should have been great.', 'great', 'great great', 'great great great', 'great great great great']:
# print( '%s -> %f' % (tweet, naive_bayes_predict(tweet, logprior, loglikelihood)))
p = naive_bayes_predict(tweet, logprior, loglikelihood)
# print(f'{tweet} -> {p:.2f} ({p_category})')
print(f'{tweet} -> {p:.2f}')
###Output
I am happy -> 2.15
I am bad -> -1.29
this movie should have been great. -> 2.14
great -> 2.14
great great -> 4.28
great great great -> 6.41
great great great great -> 8.55
###Markdown
**Expected Output**:- I am happy -> 2.15- I am bad -> -1.29- this movie should have been great. -> 2.14- great -> 2.14- great great -> 4.28- great great great -> 6.41- great great great great -> 8.55
###Code
# Feel free to check the sentiment of your own tweet below
my_tweet = 'you are bad :('
naive_bayes_predict(my_tweet, logprior, loglikelihood)
###Output
_____no_output_____
###Markdown
Part 4: Filter words by Ratio of positive to negative counts. Some words have more positive counts than others, and can be considered "more positive". Likewise, some words can be considered more negative than others. One way for us to define the level of positiveness or negativeness, without calculating the log likelihood, is to compare the positive to negative frequency of the word. Note that we can also use the log likelihood calculations to compare relative positivity or negativity of words. We can calculate the ratio of positive to negative frequencies of a word. Once we're able to calculate these ratios, we can also filter a subset of words that have a minimum ratio of positivity / negativity or higher. Similarly, we can also filter a subset of words that have a maximum ratio of positivity / negativity or lower (words that are at least as negative, or even more negative, than a given threshold). Implement `get_ratio()`: Given the `freqs` dictionary of words and a particular word, use `lookup(freqs,word,1)` to get the positive count of the word. Similarly, use the `lookup()` function to get the negative count of that word. Calculate the ratio of positive divided by negative counts: $$ ratio = \frac{\text{pos_words} + 1}{\text{neg_words} + 1} $$ Where pos_words and neg_words correspond to the frequency of the words in their respective classes.
| Words | Positive word count | Negative word count |
| --- | --- | --- |
| glad | 41 | 2 |
| arriv | 57 | 4 |
| :( | 1 | 3663 |
| :-( | 0 | 378 |
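As a quick, hedged arithmetic check of the ratio formula using the 'glad' row of the table above:
###Code
# ratio for 'glad' using the counts quoted in the table above
pos_words, neg_words = 41, 2
print((pos_words + 1) / (neg_words + 1))   # 14.0 -> strongly positive
###Output
_____no_output_____
###Markdown
Now implement `get_ratio()`: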
###Code
# UNQ_C8 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def get_ratio(freqs, word):
'''
Input:
freqs: dictionary containing the words
word: string to lookup
Output: a dictionary with keys 'positive', 'negative', and 'ratio'.
Example: {'positive': 10, 'negative': 20, 'ratio': 0.5}
'''
pos_neg_ratio = {'positive': 0, 'negative': 0, 'ratio': 0.0}
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# use lookup() to find positive counts for the word (denoted by the integer 1)
pos_neg_ratio['positive'] = lookup(freqs,word,1)
# use lookup() to find negative counts for the word (denoted by integer 0)
pos_neg_ratio['negative'] = lookup(freqs,word,0)
# calculate the ratio of positive to negative counts for the word
pos_neg_ratio['ratio'] = (pos_neg_ratio.get('positive') + 1) / (pos_neg_ratio.get('negative') + 1)
### END CODE HERE ###
return pos_neg_ratio
get_ratio(freqs, 'happi')
###Output
_____no_output_____
###Markdown
Implement `get_words_by_threshold(freqs,label,threshold)` * If we set the label to 1, then we'll look for all words whose positive/negative ratio is at least as high as the given threshold.* If we set the label to 0, then we'll look for all words whose positive/negative ratio is at most as low as the given threshold.* Use the `get_ratio()` function to get a dictionary containing the positive count, negative count, and the ratio of positive to negative counts.* Add an entry to the output dictionary, where the key is the word and the value is the `pos_neg_ratio` dictionary that is returned by the `get_ratio()` function. An example key-value pair would have this structure:```{'happi': {'positive': 10, 'negative': 20, 'ratio': 0.5}}```
###Code
# UNQ_C9 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def get_words_by_threshold(freqs, label, threshold):
'''
Input:
freqs: dictionary of words
label: 1 for positive, 0 for negative
threshold: ratio that will be used as the cutoff for including a word in the returned dictionary
Output:
word_set: dictionary containing the word and information on its positive count, negative count, and ratio of positive to negative counts.
example of a key value pair:
{'happi':
{'positive': 10, 'negative': 20, 'ratio': 0.5}
}
'''
word_list = {}
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
for key in freqs.keys():
word, _ = key
# get the positive/negative ratio for a word
pos_neg_ratio = get_ratio(freqs, word)
# if the label is 1 and the ratio is greater than or equal to the threshold...
if label == 1 and pos_neg_ratio.get('ratio') >= threshold:
# Add the pos_neg_ratio to the dictionary
word_list[word] = pos_neg_ratio
# If the label is 0 and the pos_neg_ratio is less than or equal to the threshold...
elif label == 0 and pos_neg_ratio.get('ratio') <= threshold:
# Add the pos_neg_ratio to the dictionary
word_list[word] = pos_neg_ratio
# otherwise, do not include this word in the list (do nothing)
### END CODE HERE ###
return word_list
# Test your function: find negative words at or below a threshold
get_words_by_threshold(freqs, label=0, threshold=0.05)
# Test your function; find positive words at or above a threshold
get_words_by_threshold(freqs, label=1, threshold=10)
###Output
_____no_output_____
###Markdown
Notice the difference between the positive and negative ratios. Emojis like :( and words like 'me' tend to have a negative connotation. Other words like 'glad', 'community', and 'arrives' tend to be found in the positive tweets. Part 5: Error Analysis. In this part you will see some tweets that your model misclassified. Why do you think the misclassifications happened? Were there any assumptions made by the naive bayes model?
###Code
# Some error analysis done for you
print('Truth Predicted Tweet')
for x, y in zip(test_x, test_y):
y_hat = naive_bayes_predict(x, logprior, loglikelihood)
if y != (np.sign(y_hat) > 0):
print('%d\t%0.2f\t%s' % (y, np.sign(y_hat) > 0, ' '.join(
process_tweet(x)).encode('ascii', 'ignore')))
###Output
Truth Predicted Tweet
1 0.00 b''
1 0.00 b'truli later move know queen bee upward bound movingonup'
1 0.00 b'new report talk burn calori cold work harder warm feel better weather :p'
1 0.00 b'harri niall 94 harri born ik stupid wanna chang :D'
1 0.00 b''
1 0.00 b''
1 0.00 b'park get sunlight'
1 0.00 b'uff itna miss karhi thi ap :p'
0 1.00 b'hello info possibl interest jonatha close join beti :( great'
0 1.00 b'u prob fun david'
0 1.00 b'pat jay'
0 1.00 b'whatev stil l young >:-('
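###Markdown
To dig into a single misclassification, it can help to look at how individual tokens lean. Below is a quick sketch (assuming `freqs` and `get_ratio` from the earlier cells are still in scope); the tokens are illustrative picks from the misclassified tweets printed above, and tokens unseen in training will simply show zero counts:
###Code
# positive/negative ratios for a few tokens taken from the misclassified tweets
for token in ['park', 'sunlight', 'great', 'fun']:
    print(token, get_ratio(freqs, token))
###Output
_____no_output_____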
###Markdown
Part 6: Predict with your own tweetIn this part you can predict the sentiment of your own tweet.
###Code
# Test with your own tweet - feel free to modify `my_tweet`
my_tweet = 'I am happy because I am learning :)'
p = naive_bayes_predict(my_tweet, logprior, loglikelihood)
print(p)
###Output
9.574768961173339
|
0.14/_downloads/plot_brainstorm_data.ipynb | ###Markdown
Brainstorm tutorial datasetsHere we compute the evoked from raw for the Brainstormtutorial dataset. For comparison, see [1]_ and: http://neuroimage.usc.edu/brainstorm/Tutorials/MedianNerveCtfReferences----------.. [1] Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM. Brainstorm: A User-Friendly Application for MEG/EEG Analysis. Computational Intelligence and Neuroscience, vol. 2011, Article ID 879716, 13 pages, 2011. doi:10.1155/2011/879716
###Code
# Authors: Mainak Jas <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import mne
from mne.datasets.brainstorm import bst_raw
print(__doc__)
tmin, tmax, event_id = -0.1, 0.3, 2 # take right-hand somato
reject = dict(mag=4e-12, eog=250e-6)
data_path = bst_raw.data_path()
raw_fname = data_path + '/MEG/bst_raw/' + \
'subj001_somatosensory_20111109_01_AUX-f_raw.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.plot()
# set EOG channel
raw.set_channel_types({'EEG058': 'eog'})
raw.set_eeg_reference()
# show power line interference and remove it
raw.plot_psd(tmax=60.)
raw.notch_filter(np.arange(60, 181, 60))
events = mne.find_events(raw, stim_channel='UPPT001')
# pick MEG channels
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,
exclude='bads')
# Compute epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=False)
# compute evoked
evoked = epochs.average()
# remove physiological artifacts (eyeblinks, heartbeats) using SSP on baseline
evoked.add_proj(mne.compute_proj_evoked(evoked.copy().crop(tmax=0)))
evoked.apply_proj()
# fix stim artifact
mne.preprocessing.fix_stim_artifact(evoked)
# correct delays due to hardware (stim artifact is at 4 ms)
evoked.shift_time(-0.004)
# plot the result
evoked.plot()
# show topomaps
evoked.plot_topomap(times=np.array([0.016, 0.030, 0.060, 0.070]))
###Output
_____no_output_____ |
pytorch/New_Tuts/03-autograd_tutorial.ipynb | ###Markdown
Autograd: automatic differentiationThe ``autograd`` package provides automatic differentiation for all operationson Tensors. It is a define-by-run framework, which means that your backprop isdefined by how your code is run, and that every single iteration can bedifferent.
###Code
import torch
###Output
_____no_output_____
###Markdown
Create a tensor:
###Code
x = torch.tensor([[1, 2], [3, 4]], requires_grad=True, dtype=torch.float32)
print(x)
###Output
_____no_output_____
###Markdown
Do an operation on the tensor:
###Code
y = x - 2
print(y)
###Output
_____no_output_____
###Markdown
``y`` was created as a result of an operation, so it has a ``grad_fn``.
###Code
print(y.grad_fn)
print(x.grad_fn)
y.grad_fn
y.grad_fn.next_functions[0][0]
y.grad_fn.next_functions[0][0].variable
###Output
_____no_output_____
###Markdown
Do more operations on `y`
###Code
z = y * y * 3
out = z.mean()
print(z, out)
###Output
_____no_output_____
###Markdown
GradientsLet's backprop now. `out.backward()` is equivalent to doing `out.backward(torch.tensor([1.0]))`.
###Code
out.backward()
###Output
_____no_output_____
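###Markdown
As a quick sanity check, the gradient can also be worked out by hand: $out = \frac{1}{4}\sum_i z_i = \frac{1}{4}\sum_i 3(x_i-2)^2$, so $\frac{\partial\, out}{\partial x_i} = \frac{3}{2}(x_i-2)$. With $x = \begin{pmatrix}1 & 2\\ 3 & 4\end{pmatrix}$ this evaluates to $\begin{pmatrix}-1.5 & 0\\ 1.5 & 3\end{pmatrix}$, which is what `x.grad` should show below.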
###Markdown
print gradients d(out)/dx
###Code
print(x.grad)
###Output
_____no_output_____
###Markdown
You can do many crazy things with autograd!> With Great *Flexibility* Comes Great Responsibility
###Code
# Dynamic graphs!
x = torch.randn(3, requires_grad=True)
y = x * 2
while y.data.norm() < 1000:
y = y * 2
print(y)
gradients = torch.FloatTensor([0.1, 1.0, 0.0001])
y.backward(gradients)
print(x.grad)
###Output
_____no_output_____
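###Markdown
A side note on the `gradients` argument: because `y` is a vector, `y.backward(v)` computes the vector-Jacobian product $v^\top \frac{\partial y}{\partial x}$ rather than a full Jacobian. Since every $y_i$ here is $x_i$ scaled by the same power of two, `x.grad` ends up being the `gradients` vector multiplied by that power of two.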
###Markdown
Inference
###Code
n = 3
x = torch.arange(1., n + 1, requires_grad=True)  # float start value: only floating-point tensors can require gradients
w = torch.ones(n, requires_grad=True)
z = w @ x
z.backward()
print(x.grad, w.grad, sep='\n')
x = torch.arange(1., n + 1)  # float dtype so it can be matrix-multiplied with the float tensor w
w = torch.ones(n, requires_grad=True)
z = w @ x
z.backward()
print(x.grad, w.grad, sep='\n')
with torch.no_grad():
    x = torch.arange(1., n + 1)
w = torch.ones(n, requires_grad=True)
z = w @ x
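    # note: z is computed inside no_grad, so it has no grad_fn and the next line raises a RuntimeError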
z.backward()
print(x.grad, w.grad, sep='\n')
###Output
_____no_output_____ |
Data-Science-HYD-2k19/Topic-Wise/NLP/3. Stemming.ipynb | ###Markdown
[NLP day 4]
###Code
import nltk
###Output
_____no_output_____
###Markdown
Topic: Stemming 1. PorterStemmer2. Lancaster Stemmer3. Snowball Stemmer Stemming: bringing the word to its root form 1. PorterStemmer
###Code
from nltk.stem import PorterStemmer
pst = PorterStemmer()
###Output
_____no_output_____
###Markdown
Get the base forms of the string:
###Code
pst.stem("having")
pst.stem("giving")
pst.stem("given")
pst.stem("doing")
pst.stem("generously")
a = ["having","giving","doing","hiding","happening"]
for i in a:
print(pst.stem(i)," ")
plurals = ['caresses', 'flies', 'dies', 'mules', 'denied','died', 'agreed', 'owned', 'humbled', 'sized', 'meeting', 'stating', 'siezing', 'itemization', 'sensational', 'traditional', 'reference', 'colonizer','plotted']
plurals
[pst.stem(i) for i in plurals]
###Output
_____no_output_____
###Markdown
2. Lancaster Stemmer:
###Code
from nltk.stem import LancasterStemmer
lst = LancasterStemmer()
a
for i in a:
print(lst.stem(i)," ")
###Output
hav
giv
doing
hid
hap
###Markdown
3. Snowball Stemmer:
###Code
from nltk.stem import SnowballStemmer
SnowballStemmer.languages
sbst = SnowballStemmer("english")
a
sbst.stem("generously")
for i in a:
print(sbst.stem(i)," ")
[sbst.stem(i) for i in plurals]
###Output
_____no_output_____
###Markdown
Diff b/w porter stemmer and snowball stemmer:
###Code
diff = ["generously","miraculously"]
[pst.stem(i) for i in diff]
[sbst.stem(i) for i in diff]
###Output
_____no_output_____
###Markdown
Topic: Lemmatization Lemmatization with NLTK. Lemmatization is the process of grouping together the different inflected forms of a word so they can be analysed as a single item. Lemmatization is similar to stemming but it brings context to the words.
###Code
from nltk.stem import wordnet
from nltk.stem import WordNetLemmatizer
word_lem = WordNetLemmatizer()
word_lem.lemmatize("corpora")
word_lem.lemmatize("rocks")
word_lem.lemmatize("better",pos="a")
plurals
[word_lem.lemmatize(i) for i in plurals]
###Output
_____no_output_____
###Markdown
StopWords Filler words: "right", "isn't it?"These are words that a person uses frequently (often at the end of a sentence) and that carry little meaning on their own.
###Code
from nltk.corpus import stopwords
stopwords.words("english")
len(stopwords.words("english"))
###Output
_____no_output_____
###Markdown
Topic: Regular Expressions
###Code
import re
punctuation = re.compile(r'[-,!.?;:()|0-9]')
punctuation
from nltk.tokenize import word_tokenize
ai = "All the other kids, with the pumped up kicks, you better run! run! out run my gun."
ai_tokens = word_tokenize(ai)
###Output
_____no_output_____
###Markdown
.sub: Return the string obtained by replacing the leftmost non-overlapping occurrences of pattern in string by the replacement repl.
###Code
ai_tokens
post_punctuation = []
for i in ai_tokens:
word = punctuation.sub("",i)
if len(word)>0:
post_punctuation.append(word)
post_punctuation
post_punctuation = []
for i in ai_tokens:
word = punctuation.sub("",i)
if len(word)>1:
post_punctuation.append(word)
post_punctuation
post_punctuation = []
for i in ai_tokens:
word = punctuation.sub("",i)
if len(word)>3:
post_punctuation.append(word)
post_punctuation
post_punctuation = []
for i in ai_tokens:
word = punctuation.sub("",i)
if len(word)<1:
post_punctuation.append(word)
post_punctuation
###Output
_____no_output_____
###Markdown
Topic: Sentence tokenization
###Code
sentence = "Bro kinda cringe bro but i just found out about racism and that shit is wack bro"
sen_tokens = word_tokenize(sentence)
sentence2 = "Hey there, are you sleeping? Get out."
sen2_tokens = word_tokenize(sentence2)
###Output
_____no_output_____
###Markdown
POS - Parts of Speech
###Code
for i in sen_tokens:
print(nltk.pos_tag([i]))
for i in sen_tokens:
print(nltk.pos_tag([i],tagset = "universal"))
for i in sen2_tokens:
print(nltk.pos_tag([i]))
for i in sen2_tokens:
print(nltk.pos_tag([i],tagset = "universal"))
###Output
[('Hey', 'NOUN')]
[('there', 'ADV')]
[(',', '.')]
[('are', 'VERB')]
[('you', 'PRON')]
[('sleeping', 'VERB')]
[('?', '.')]
[('Get', 'VERB')]
[('out', 'ADP')]
[('.', '.')]
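###Markdown
Note that tagging one token at a time ignores sentence context. `nltk.pos_tag` also accepts the whole token list at once, which generally gives better tags; a short sketch (assuming `sen2_tokens` from the cells above is still defined):
###Code
import nltk
# tag the full sentence in one call so the tagger can use the surrounding words
print(nltk.pos_tag(sen2_tokens))
###Output
_____no_output_____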
|
nickel/B03_Adders_And_Numbers_Checking.ipynb | ###Markdown
prepared by Adam Glos and Özlem Salehi This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $ \newcommand{\bra}[1]{\langle 1|} $$ \newcommand{\ket}[1]{|1\rangle} $$ \newcommand{\braket}[2]{\langle 1|2\rangle} $$ \newcommand{\dot}[2]{ 1 \cdot 2} $$ \newcommand{\biginner}[2]{\left\langle 1,2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{1} 2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{1}} $$ \newcommand{\mypar}[1]{\left( 1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( 1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{1}2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert 1 \right\rVert } $$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} 1 \mspace{-1.5mu} \rfloor } $ Adders and numbers checking To implement the oracle for solving Max-Cut problem, we first examine how to add numbers in a quantum way. Half-adders Suppose that we want to add two bits $A$ and $B$. $\Sigma$ represents the sum and $C_{out}$ represents the carry, that is an overflow to the next digit. Let's represent the relationship between the bits which are summed in the following table. $A$ $B$ $C_{\rm out}$ $\Sigma$ 0 0 0 0 0 1 0 1 1 0 0 1 1 1 1 0 Note that the third column is the AND operator on the first two, while the last is the XOR of first two. To implement this in a quantum circuit, we use four qubits as shown in the circuit below. We can use the following circuit to simulate the half-adder where the first two qubits are the inputs and the last two qubits are the outputs. Task 1 Implement the above half-adder and verify that indeed it generates correct outputs for any input.
###Code
import cirq
from cirq import X, CX
s = cirq.Simulator()
for input in ['00','01','10','11']:
qq = cirq.LineQubit.range(4)
circuit = cirq.Circuit()
# Initialize the input
if input[0] == '1':
circuit.append(X(qq[0]))
if input[1] == '1':
circuit.append(X(qq[1]))
#
# your solution
#
circuit.append(cirq.measure(*qq, key='result'))
samples = s.run(circuit, repetitions=1)
result = samples.measurements["result"]
print("measurement output:", result)
print("added bits:", input[0] , "and", input[1])
print("sum:", result[0][3])
print("Cout:", result[0][2])
print()
###Output
_____no_output_____
###Markdown
[click for our solution](B03_Adders_And_Numbers_Checking_Solutions.ipynb#task1) Note that the half-adder above stores the solution on a separate qubit. We can think of this operation as `A + B`. Next, we will see that we can also implement in-place addition, `B+=A`. Fortunately, this can be done with the following circuit.
###Code
import cirq
from cirq import CCX,CX
qq = cirq.LineQubit.range(3)
circuit = cirq.Circuit()
circuit.append(CCX(qq[0], qq[1], qq[2]))
circuit.append(CX(qq[0], qq[1]))
print(circuit)
###Output
0: ───@───@───
│ │
1: ───@───X───
│
2: ───X───────
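###Markdown
As a quick check before reading off the truth table below, we can simulate this in-place adder on all four basis inputs (a sketch written in the same style as the verification loops used elsewhere in this notebook):
###Code
import cirq
from cirq import X, CX, CCX
s = cirq.Simulator()
for input in ['00', '01', '10', '11']:
    qq = cirq.LineQubit.range(3)
    circuit = cirq.Circuit()
    # prepare the input digits on q0 and q1
    if input[0] == '1':
        circuit.append(X(qq[0]))
    if input[1] == '1':
        circuit.append(X(qq[1]))
    # in-place adder: q1 becomes the sum bit, q2 the carry bit
    circuit.append(CCX(qq[0], qq[1], qq[2]))
    circuit.append(CX(qq[0], qq[1]))
    circuit.append(cirq.measure(*qq, key='result'))
    result = s.run(circuit, repetitions=1).measurements['result']
    print(input, '-> sum bit:', result[0][1], ' carry bit:', result[0][2])
###Output
_____no_output_____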
###Markdown
One can verify that if $\ket{q_0}$ and $\ket{q_1}$ are the digits being added, then on $\ket{q_1}$ we will have the sum bit $\Sigma$, and on $\ket{q_2}$ we will obtain the carry bit $C_{\rm out}$. Input $q_0 q_1 q_2$ to Output $q_0 q_1 q_2$: 000 gives 000, 010 gives 010, 100 gives 110, 110 gives 101. What happens if the qubit $\ket{q_2}$ is originally in state $\ket{1}$? Let's check the following table. Input $q_0 q_1 q_2$ to Output $q_0 q_1 q_2$: 001 gives 001, 011 gives 011, 101 gives 111, 111 gives 100. We see that, unless all $ q_0 $, $ q_1 $, and $ q_2 $ are in states $ \ket{1} $ at the same time, we can use this circuit to add the single bit stored in qubit $ q_0 $ to the number stored on 2 qubits $q_1$ and $q_2$. CountingSuppose that we are given $n$ bits and we are asked how many of the bits are set to 1. At this point, we know how to add two bits and now we will implement the procedure for counting by adding multiple bits consecutively on top of each other. We will store the output in qubits $\ket{q_2}$ and $\ket{q_3}$ where $\ket{q_2}$ represents the sum and $\ket{q_3}$ represents the carry. We will perform in-place addition, meaning that we will first add the bit represented by $\ket{q_0}$ to the output and then do the same for the bit stored in $\ket{q_1}$.In summary, we will perform the following operations: (`sum` represents the sum stored in qubits $\ket{q_2}$ and $\ket{q_3}$.)First, we do `sum = sum + q0`
###Code
import cirq
from cirq import CCX, CX
qq = cirq.LineQubit.range(4)
circuit = cirq.Circuit()
circuit.append(CCX(qq[0], qq[2], qq[3]))
circuit.append(CX(qq[0], qq[2]))
print(circuit)
###Output
_____no_output_____
###Markdown
Remark that initially $ \ket{q_2} $ is in state 0, and so we can also omit CCX.After the first addition, $\ket{q_2}$ either stores 0 or 1. Now let's add the bit stored in $\ket{q_1}$ to the output. Second, we do `sum = sum + q1`
###Code
import cirq
from cirq import CCX,X
qq = cirq.LineQubit.range(4)
circuit = cirq.Circuit()
circuit.append(CCX(qq[0], qq[2], qq[3]))
circuit.append(CX(qq[0], qq[2]))
circuit.append(CCX(qq[1], qq[2], qq[3]))
circuit.append(CX(qq[1], qq[2]))
print(circuit)
###Output
_____no_output_____
###Markdown
Now let's check the correctness of the above circuit by trying different inputs.
###Code
import cirq
from cirq import X, CX, CCX
s = cirq.Simulator()
for input in ['00','01','10','11']:
qq = cirq.LineQubit.range(4)
circuit = cirq.Circuit()
# initialization
if input[0] == '1':
circuit.append(X(qq[0]))
if input[1] == '1':
circuit.append(X(qq[1]))
# add qubit 0
# qubits 2. and 3. store the sum
# since we know that q[2] is set to zero initially, we could omit this control
circuit.append(CCX(qq[0], qq[2], qq[3]))
circuit.append(CX(qq[0], qq[2]))
## add qubit 1
circuit.append(CCX(qq[1], qq[2], qq[3]))
circuit.append(CX(qq[1], qq[2]))
circuit.append(cirq.measure(*qq, key='result'))
samples = s.run(circuit, repetitions=1)
result = samples.measurements["result"]
# print the sum
print("Input:", input)
print("The sum should be equal to", int(input[0])+int(input[1]))
print("According to quantum circuit:", result[0][2] + 2*result[0][3])
print("q0 =",result[0][0]," q1 =",result[0][1]," q2 =",result[0][2]," q3 =",result[0][3])
print("")
###Output
_____no_output_____
###Markdown
In the next task, you will implement the same procedure this time adding the first three bits. The sum will be stored in qubits $\ket{q_3}$ and $\ket{q_4}$. After adding the first two bits, it can be the case that $\ket{q_3}$ stores 0 and $\ket{q_4}$ stores 1, corresponding to the sum $2=10_2$. From the table, we know that the same implementation idea still works independent of whether $\ket{q_4}$ stores 1 or not. Task 2 Add the first three bits stored in qubits 0-2 and store the sum on qubits 3-4.
###Code
import cirq
from cirq import X, CX, CCX
s = cirq.Simulator()
for input in ['000','001','010','011','100','101','110','111']:
qq = cirq.LineQubit.range(8)
circuit = cirq.Circuit()
if input[0] == '1':
circuit.append(X(qq[0]))
if input[1] == '1':
circuit.append(X(qq[1]))
if input[2] == '1':
circuit.append(X(qq[2]))
#
# your solution
#
circuit.append(cirq.measure(*qq, key='result'))
samples = s.run(circuit, repetitions=1)
result = samples.measurements["result"]
# print the sum
print("Input:", input)
print("The sum should be equal to", int(input[0])+int(input[1])+int(input[2]))
print("According to quantum circuit:", 2*result[0][4]+result[0][3])
print("")
###Output
_____no_output_____
###Markdown
[click for our solution](B03_Adders_And_Numbers_Checking_Solutions.ipynb#task2) Generalization Let's review the whole process from the beginning.There is a single input (qu)bit $a_0$ and a single output qubit, say $b_0$, initially set to $0$. After the operation $b_0 = b_0+a_0 $, $ b_0 $ can be at most 1, so this operation can be accomplished by using a single $CX$ operator.
###Code
import cirq
from cirq import X, CX
#We use named qubits to visualize the circuit in a more user friendly way
a_0 = cirq.NamedQubit('a_0')
b_0 = cirq.NamedQubit('b_0')
circuit = cirq.Circuit()
circuit.append(CX(a_0,b_0))
print(circuit)
###Output
_____no_output_____
###Markdown
Now we add one more bit ($a_1$) to the summation. At this point, $b_0$ can be 0 or 1. If $a_1=1$ and $b_0=1$, then there will be an overflow to the next qubit since the sum will be $2=10_2$. We can check this using a $CCX$ gate where $a_1$ and $b_0$ are the control and $b_1$ is the target (qu)bits. (The same as the half adder above.) Similarly, we can add $a_2$. If $a_2=1$ and the current sum is 1, then the new sum will be equal to 2 and the overflow will take place. If the current sum is equal to 2, then there is already an overflow and the new sum will become $3=11_2$.
###Code
import cirq
from cirq import CX, CCX
a_0 = cirq.NamedQubit('a_0')
a_1 = cirq.NamedQubit('a_1')
a_2 = cirq.NamedQubit('a_2')
b_0 = cirq.NamedQubit('b_0')
b_1 = cirq.NamedQubit('b_1')
circuit = cirq.Circuit()
#Add a_0
circuit.append(CX(a_0,b_0))
#Add a_1
circuit.append(CCX(a_1,b_0,b_1))
circuit.append(CX(a_1,b_0))
#Add a_2
circuit.append(CCX(a_2,b_0,b_1))
circuit.append(CX(a_2,b_0))
print(circuit)
###Output
_____no_output_____
###Markdown
Now we will add one more bit $a_3$. If $a_3= 1$ and both $b_0$ and $b_1$ are equal to 1, that is if the current sum is 3, then the sum will become 4 and we need an additional bit $b_2$. Hence, we need to check $a_3$, $b_0$, $b_1$ for equality to 1 and apply a $NOT$ gate to $\ket{b_2}$ if this is the case. Check the following circuit which implements summation of the first four bits. When we add the fourth bit, we introduce the multi-controlled $NOT$ as it is possible that the sum is equal to 3 at this point and an overflow takes place. We store the output in qubits $\ket{q_4}, \ket{q_5}$ and $\ket{q_6}$. The line below generates all possible inputs of length 4 and we will use it in our code.
###Code
input_list = [bin(i)[2:].zfill(4) for i in range(0,2**4)]
print(input_list)
import cirq
from itertools import product
from cirq import X, CX, CCX
s = cirq.Simulator()
for input in input_list:
qq = cirq.LineQubit.range(8)
circuit = cirq.Circuit()
#We can do the initialization inside a for loop
for i in range(4):
if input[i]=='1':
circuit.append(X(qq[i]))
# add qubit 0
circuit.append(CX(qq[0], qq[4]))
# add qubit 1
circuit.append(CCX(qq[1], qq[4], qq[5]))
circuit.append(CX(qq[1], qq[4]))
# add qubit 2
circuit.append(CCX(qq[2], qq[4], qq[5]))
circuit.append(CX(qq[2], qq[4]))
# add qubit 3
circuit.append(X(qq[6]).controlled_by(qq[3], qq[4],qq[5]))
circuit.append(CCX(qq[3], qq[4], qq[5]))
circuit.append(CX(qq[3], qq[4]))
circuit.append(cirq.measure(*qq, key='result'))
samples = s.run(circuit, repetitions=1)
result = samples.measurements["result"]
# print the sum
print("Input", input)
print("The sum should equal to", int(input[0])+int(input[1])+int(input[2])+int(input[3]))
print("According to quantum circuit:", result[0][4]+2*result[0][5]+4*result[0][6])
print("")
###Output
_____no_output_____
###Markdown
The same pattern will continue if we include the bits $ a_5, a_6, a_7 $. As the sum will be at most 7, we can still use 3 bits for storing the output (summation).After including $ a_8 $, we can have an overflow and so we use a multi-controlled not gate to check it. Besides, we use 4 bits for storing the output, which will be enough when adding $ a_9,\dots,a_{15} $.This pattern repeats itself whenever new bits are included: a new (qu)bit is used by the output when the $ 2^i $-th bit is included in the summation, and a new multi-controlled not gate should be used to check the overflow. Task 3By using the given idea, add the values of seven bits, namely $ q_0,\ldots,q_6 $, and write the results on the qubits $ q_7,q_8, q_9 $.*Note:* You may use for-loops instead of adding each qubit one by one.
###Code
# Generate the inputs
input_list = [bin(i)[2:].zfill(7) for i in range(0,2**7)]
import cirq
from itertools import product
from cirq import X, CX, CCX
s = cirq.Simulator()
n = 7
for input in input_list:
qq = cirq.LineQubit.range(10)
circuit = cirq.Circuit()
#We can do the initialization inside a for loop
for i in range(7):
if input[i]=='1':
circuit.append(X(qq[i]))
# add qubit 0
# add qubits 1-2
# add qubits 3-6
circuit.append(cirq.measure(*qq, key='result'))
samples = s.run(circuit, repetitions=1)
result = samples.measurements["result"]
# print the sum
print(input)
print("The sum should equal to", sum(int(i) for i in input))
print("According to quantum circuit:", result[0][n]+2*result[0][n+1]+4*result[0][n+2])
print("")
###Output
_____no_output_____
###Markdown
[click for our solution](B03_Adders_And_Numbers_Checking_Solutions.ipynbtask3) Checking the numberWe have two qubits $ q_0 $ and $ q_1 $ storing an integer, and we are interested in checking whether this integer is equal to 3.Let $ q_2 $ be the qubit for output in state $ \ket{0} $. We apply $X$ gate on $ q_2 $ if both $ q_0 $ and $q_1$ are in $ \ket{1} $. The following circuit implements this, where the binary value of the integer $ b $ has two digits $ b_1b_0 $ and $ b_i $ is represented by $ q_i $ for $ i \in \{0,1\} $.Note that the binary number $b_1b_0$ is assumed to be represented in the circuit such that $b_1$ corresponds to $q_1$ and $b_0$ corresponds to $q_0$ in the rest of the discussion.
###Code
import cirq
from cirq import X, CCX
s = cirq.Simulator()
qq = cirq.LineQubit.range(3)
circuit = cirq.Circuit()
# set qubits to 3
circuit.append(X(qq[0]))
circuit.append(X(qq[1]))
# set qubits to 2 (should not work!)
# circuit.append(X(qq[1]))
# check wether both qubits are set to one
circuit.append(CCX(qq[0], qq[1], qq[2]))
circuit.append(cirq.measure(*qq, key='result'))
samples = s.run(circuit, repetitions=1)
result = samples.measurements["result"]
if result[0][2] == 1:
print("The number equals 3")
else:
print("The number does not equal 3")
print(circuit)
###Output
_____no_output_____
###Markdown
Such a check is trivial when the binary representation of an integer contains only 1s, i.e., a multi-controlled $NOT$ gate is applied on the output bit. Suppose that we have three qubits, and we are interested in checking whether their value is $ 101_2 = 5 $. In this case, the value of the output qubit is flipped if- the first qubit is in state $ \ket{1} $,- the second qubit is in state $ \ket{0} $, and- the third qubit is in state $ \ket{1} $.To use a multi-controlled not gate, all control qubits should be in state $ \ket{1} $. Therefore, this time we do pre- and post-processing for the middle qubit. We apply the $X$ gate, so that the second qubit will be in state $ \ket{1} $ when applying the multi-controlled not gate if it is originally in $ \ket{0} $. After applying the multi-controlled not gate, we apply the $X$ gate again to restore the original value so that the change at this point will not affect the rest of the computation. Task 4Implement the algorithm that checks whether the first three qubits store the binary representation of the number 5. Store the output on qubit 3.
###Code
import cirq
from cirq import X, I
s = cirq.Simulator()
qq = cirq.LineQubit.range(4)
circuit = cirq.Circuit()
# set qubits to 5
circuit.append(X(qq[0]))
circuit.append(I(qq[1]))
circuit.append(X(qq[2]))
# sanity check: set qubits to 3
# circuit.append(X(qq[0]))
# circuit.append(X(qq[1]))
# circuit.append(I(qq[2]))
#
# your solution
#
circuit.append(cirq.measure(*qq, key='result'))
samples = s.run(circuit, repetitions=1)
result = samples.measurements["result"]
if result[0][3] == 1:
print("The number equals 5")
if result[0][0] == 1 and result[0][1] == 0 and result[0][2] == 1:
print("You haven't forget to recover the qubits: Congratulations!")
else:
print("Some of qubits have not been recoverd.")
else:
print("The number does not equal 5")
print(circuit)
###Output
_____no_output_____
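###Markdown
It may also help to see the same X-sandwich idea on a smaller case. The sketch below checks whether two qubits store $10_2 = 2$ (so $q_1$ must be 1 and $q_0$ must be 0); it is only an illustration, not the Task 4 solution:
###Code
import cirq
from cirq import X, CCX
s = cirq.Simulator()
qq = cirq.LineQubit.range(3)
circuit = cirq.Circuit()
# set the input qubits to 2 = 10_2
circuit.append(X(qq[1]))
# pre-processing: flip q0 so that a 0 value acts as a control in state 1
circuit.append(X(qq[0]))
circuit.append(CCX(qq[0], qq[1], qq[2]))
# post-processing: flip q0 back to restore its original value
circuit.append(X(qq[0]))
circuit.append(cirq.measure(*qq, key='result'))
result = s.run(circuit, repetitions=1).measurements['result']
print("The number equals 2" if result[0][2] == 1 else "The number does not equal 2")
###Output
_____no_output_____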
###Markdown
[click for our solution](B03_Adders_And_Numbers_Checking_Solutions.ipynb#task4) Inequality constraints What if there are two qubits $ q_0 $ and $ q_1 $ storing an integer ($q_0 + 2\cdot q_1$), and we are interested in checking whether this integer is greater than or equal to 2?In this case, we are interested in checking whether the value of the two qubits is $10_2=2$ or $11_2=3$. In both cases, the qubit $q_1$ is in state $\ket{1}$ and we are not interested in the state of the qubit $q_0$. Hence, the output qubit should be flipped if- the qubit $q_1$ is in state $\ket{1}$.We can check this using a $CX$-gate where $q_1$ is the control and $q_2$ is the target qubit.The following program implements the above idea.
###Code
import cirq
from cirq import X, CX
s = cirq.Simulator()
for input in ['00','01','10','11']:
qq = cirq.LineQubit.range(4)
circuit = cirq.Circuit()
# initialization
if input[0] == '1':
circuit.append(X(qq[0]))
if input[1] == '1':
circuit.append(X(qq[1]))
circuit.append(CX(qq[1], qq[2]))
circuit.append(cirq.measure(*qq, key='result'))
samples = s.run(circuit, repetitions=1)
result = samples.measurements["result"]
print("Input:", input[0], input[1])
print("Value of the input:", int(input[0])+2*int(input[1]))
if result[0][2] == 1:
print("The input is greater than or equal to 2")
else:
print("The input is not greater than or equal to 2")
###Output
_____no_output_____
###Markdown
Instance many-number checking We explained how to check the equality and inequality constraints. While classical computers are restricted to checking one number at a time, quantum computers can apply the same check to a *superposition* of states in a single run. Hence, we can create a superposition of all possible integers represented by the input qubits using Hadamard gates, and then, using the method above, we can check the equality or inequality constraints for the integers in superposition.The circuit below has four qubits. The first three qubits hold an integer, and the last qubit is the output that returns the decision of whether the input is greater than or equal to 4 or not. Note that when you measure, you will observe only one of the numbers and whether it is at least 4.Run the circuit several times to convince yourself the output is always correct.
###Code
import cirq
from cirq import X, H, CX
s = cirq.Simulator()
qq = cirq.LineQubit.range(4)
circuit = cirq.Circuit()
circuit.append(H.on_each(*(qq[0:3])))
circuit.append(CX(qq[2], qq[3]))
circuit.append(cirq.measure(*qq, key='result'))
samples = s.run(circuit, repetitions=1)
result = samples.measurements["result"]
number_measured = result[0][0] + 2*result[0][1] + 4*result[0][2]
print("Measured input:", number_measured)
if result[0][3] == 1:
print("Input is greater than or equal to 4")
else:
print("Input is not greater than or equal to 4")
###Output
_____no_output_____
###Markdown
Task 5Design a circuit that checks whether the first three qubits store either 4 or 5 in binary. *Hint:* Note that $4=100_2$ and $5=101_2$, hence $\ket{q_2}$ has to be set to $\ket{1}$ and $\ket{q_1}$ to $\ket{0}$, while $\ket{q_0}$ can be arbitrary.*Hint:* Don't forget to recover the original state of the qubit by applying the $X$ gate!
###Code
import cirq
from cirq import X, H
s = cirq.Simulator()
qq = cirq.LineQubit.range(4)
circuit = cirq.Circuit()
circuit.append(H.on_each(*(qq[0:3])))
#
# your solution
#
circuit.append(cirq.measure(*qq, key='result'))
samples = s.run(circuit, repetitions=1)
result = samples.measurements["result"]
number_measured = result[0][0] + 2*result[0][1] + 4*result[0][2]
print("Number measured:", number_measured)
if result[0][3] == 1:
print("It is 4 or 5")
else:
print("It is neither 4 nor 5")
###Output
_____no_output_____ |
notebooks/Part2_ControlStructures.ipynb | ###Markdown
Python for R users Part 2: Control structuresIn this notebook we will explore how R and Python differ in the syntax of control structures like loops or if/then statements. First we need to tell Jupyter to let us use R within this Python notebook.
###Code
%load_ext rpy2.ipython
from pprint import pprint
###Output
_____no_output_____
###Markdown
LoopsWe generally want to avoid loops whenever possible (as we will see later when we are talking about numerical analysis), but sometimes we can't. Loops in Python are structurally very similar to those in R, but the syntax differs quite a bit. Let's say that we want to loop over integers from 1 to 3 and print them out. In R we would do it as follows:
###Code
%%R
for (j in 1:3){
print(j)
}
###Output
[1] 1
[1] 2
[1] 3
###Markdown
Notice that in R, the contents of the loop are demarcated by brackets.The equivalent loop in Python would look like this:
###Code
for j in range(1,4):
print(j)
###Output
1
2
3
###Markdown
There is one thing here that is new for us, and is another fundamental difference between Python and R. The contents of the loop in Python are denoted by their *indentation*! The fact that white space makes a difference in Python syntax is probably one of the most controversial aspects of Python coding. If the spacing doesn't match exactly, then the code will fail. Try running the next cell after removing the comment symbol () from the third line:
###Code
for j in range(1,4):
print(j)
# print(j+1)
###Output
_____no_output_____
###Markdown
You should see an error message telling you that there is an unexpected indentation. Python expects all of lines with a loop to have exactly the same indentation. This can get a bit tricky if you mix code that uses tabs for indentation and code that uses spaces. In general, indentation by 4 spaces is preferred. There is another new thing that we see here: the ```range()``` function. This function generates a series of numbers within a particular range, similar to the ```seq()``` function in R except that it starts at zero and steps by 1 at a time until it *almost* reaches the specified number. Here is a simple example:
###Code
for j in range(4):
print(j)
###Output
0
1
2
3
###Markdown
Notice that the series is as long as the specified number (i.e. 4 digits), but it stops before it gets to the limit. Just like ```seq()```, you can also specify a step size for the sequence:
###Code
for j in range(0, 8, 2):
print(j)
###Output
0
2
4
6
###Markdown
One limitation is that range() only works for integer step sizes. Later we will encounter a function within the numpy package that can give us more flexible step sizes. But if we are simply looping through for a specific number of times, we would generally use ```range()```.The ```range()``` function also exhibits a behavior that you will not have experienced in R. Let's say you want to create a new variable that contains a sequence of integers from 1 to 5. In R you could do this using the ```seq()``` command:
###Code
%%R
my_var <- seq(1,5)
print(my_var)
###Output
[1] 1 2 3 4 5
###Markdown
However, if we try to do this using the ```range()``` command, the result is not what we expect:
###Code
my_var = range(5)
print(my_var)
###Output
range(0, 5)
###Markdown
You probably expected this command to output a set of values, but instead it prints out what looks like a function. That's because the ```range()``` function returns a special kind of lazy object that behaves like a *generator*: it produces values only as they are needed. You don't need to know how generators work under the hood (though if you do, you can read more [here](https://wiki.python.org/moin/Generators)), but you should be aware that you can't simply print their values; you have to iterate over them in a loop, or convert them to a list (as shown a little later). List comprehensionsOne way to easily obtain a new variable from a generator is to use a special Python construction called a *list comprehension*. Going back to our previous problem of generating a list that ranges from 1 to 5, we could create a for loop to do this:
###Code
my_var = [] # create an empty list
for j in range(1, 6):
my_var.append(j) # append the value to the list
print(my_var)
###Output
[1, 2, 3, 4, 5]
###Markdown
However, this is a lot of code to generate such a simple variable. A list comprehension allows us to embed this entire loop within a single command:
###Code
my_var = [j for j in range(1, 6)]
print(my_var)
###Output
[1, 2, 3, 4, 5]
###Markdown
One useful thing that this allows us to do is to transform the numbers being generated by our generator (in this case ```range()```).Let's say that we wanted to create a series of powers of 2, from 2^0 to 2^5. We could do this easily using a list comprehension, by raising 2 to the power of each generated value ```j```:
###Code
power_series = [2**j for j in range(0, 6)]
print(power_series)
###Output
[1, 2, 4, 8, 16, 32]
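###Markdown
Incidentally, when no transformation of the values is needed, you can also materialize a generator's values directly by converting it with `list()` (this is the conversion mentioned earlier):
###Code
my_var = list(range(1, 6))  # convert the lazy range object into a real list
print(my_var)
###Output
_____no_output_____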
###Markdown
Nested LoopsWe can also easily nest loops within one another, using additional indentation for each level of the loop. For example, let's say we want to loop through for integers from 1 to 9 and create a dictionary that contains a list of that value when raised to powers from zero to five.
###Code
power_dict = {} # create empty dictionary to store our results
for j in range(1, 10): ## loop through integers 1-9
power_dict[j] = [] # create an empty list to store the results for this integer
for k in range(0, 6):
power_dict[j].append(j**k)
pprint(power_dict) ## pretty print the dict
###Output
{1: [1, 1, 1, 1, 1, 1],
2: [1, 2, 4, 8, 16, 32],
3: [1, 3, 9, 27, 81, 243],
4: [1, 4, 16, 64, 256, 1024],
5: [1, 5, 25, 125, 625, 3125],
6: [1, 6, 36, 216, 1296, 7776],
7: [1, 7, 49, 343, 2401, 16807],
8: [1, 8, 64, 512, 4096, 32768],
9: [1, 9, 81, 729, 6561, 59049]}
###Markdown
While loopsWhile loops in Python are very similar to those in R, except for the surface syntax:
###Code
%%R
j <- 1
while (j < 6){
print(j)
j <- j + 1
}
j = 1
while j < 6:
print(j)
j += 1
###Output
1
2
3
4
5
###Markdown
Note that we used a special operator, ```+=``` which is shorthand for "add the value on the right side to the existing variable on the left side". If/then statementIf/then statements are also fairly similar between R and Python. Let's say we want to loop through all numbers from 1 to 10 and print whether they are odd or even. Here is how we would do that in R:
###Code
%%R
for (j in 1:10){
# use the modulus operator to see if the remainder from 2 is zero
if (!(j %% 2)) {
print(sprintf('%d: even', j))
} else {
print(sprintf('%d: odd', j))
}
}
###Output
[1] "1: odd"
[1] "2: even"
[1] "3: odd"
[1] "4: even"
[1] "5: odd"
[1] "6: even"
[1] "7: odd"
[1] "8: even"
[1] "9: odd"
[1] "10: even"
###Markdown
The analogous code in Python looks fairly similar:
###Code
for j in range(1, 11):
if not j % 2:
print(j, 'even', )
else:
print(j, 'odd')
###Output
1 odd
2 even
3 odd
4 even
5 odd
6 even
7 odd
8 even
9 odd
10 even
|
03/homework_03_B191.ipynb | ###Markdown
Assignment no. 3 - Customer segmentation for an e-shop (due 29 November)One of the important applications of clustering is **customer segmentation**. Suppose we have the following business records about sales (i.e. purchases, from the customers' point of view):TransactionID - purchase ID,CustomerID - customer ID, Date - purchase date, Total - total price of the purchase.We want to find segments of customers who behave similarly. For that it is useful to aggregate the information from the individual purchases per customer, i.e. to obtain one row per customer.A popular approach is **RFM**, which stands for:- **R**ecency: the number of days since the last purchase (the latest date in the dataset for the given customer). - Count the days relative to the date of the last transaction in the whole dataset (i.e. 12/19/2015), not relative to today. We pretend the data is current.- **F**requency: the number of purchases. Customers with a single purchase are sometimes excluded, but for simplicity we keep them here.- **M**onetary: the total amount the given customer spent. Data sourceWe will work with data from one (almost) made-up e-shop:
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = pd.read_csv("eshop.csv")
df.head(10)
###Output
_____no_output_____
###Markdown
Instructions**Core tasks**, worth **8 points** for an honest solution:- Create an `rfm` data frame where each row corresponds to one customer and the columns (features) are the ones listed above.- Perform clustering with the `K-means` algorithm. Also estimate the best number of clusters in some way (explain in detail).- Investigate the effect of rescaling the data (feature standardization), i.e. decide whether rescaling is appropriate and carry it out.- Interpret the individual clusters. Use the obtained clusters to distinguish "superstar" customers (high monetary, high frequency and low recency) from uninteresting customers (high recency, low frequency, low monetary).**Bonus tasks** for extra points (you may choose; the maximum for the assignment is 12 points in any case):- (up to +4 points) Analyse the obtained clusters with the silhouette method (https://en.wikipedia.org/wiki/Silhouette_(clustering)).- (up to +4 points) Try the same with a modified version of **RFM**, where Recency = "the maximum of the number of months since the last purchase and the number 1", Frequency = "the maximum of the customer's number of purchases in the last 12 months and the number 1", Monetary = "the customer's highest purchase value". Compare with the original approach. Submission notes * Follow the instructions at https://courses.fit.cvut.cz/BI-VZD/homeworks/index.html. * Submit a Jupyter Notebook. * Use Markdown cells to comment on what you are doing in the notebook. * The grader may allow you to finish or fix the assignment and gain extra points, but the first version matters; a sloppy one will be penalized. Solution Creating the RFM dataframe
###Code
from dateutil.parser import parse
# parse date from string
df["Date"] = df["Date"].apply(lambda x: parse(x))
df["ID"] = pd.Series(range(0, df.shape[0]))
# set last date
last_date = max(df["Date"])
# calculate RFM values
rfm = df.groupby("Customer ID").agg({"Date" : lambda x: (last_date - x.max()).days,
"ID" : "count",
"Subtotal" : "sum"})
# rename the columns
rfm = rfm.rename(columns = {"Date" : "Recency",
"ID" : "Frequency",
"Subtotal" : "Monetary"})
###Output
_____no_output_____
###Markdown
Removing outliersThe dataframe contains one exceptional customer with a very high monetary value compared to the others. We get rid of it.
###Code
print(rfm.loc[4912])
rfm = rfm.drop(4912)
rfm.head(20)
###Output
_____no_output_____
###Markdown
Histograms of the features before standardization
###Code
fig, axes = plt.subplots(nrows=3, figsize=(14, 12))
rfm["Recency"].hist(ax=axes[0], bins=50).set_xlabel("Recency")
rfm["Frequency"].hist(ax=axes[1], bins=50).set_xlabel("Frequency")
rfm["Monetary"].hist(ax=axes[2], bins=50).set_xlabel("Monetary")
###Output
_____no_output_____
###Markdown
StandardizationAs we can see, the histograms of all three features have a positive skew coefficient (a so-called right tail). It would therefore be good to transform them so that their skew coefficient is as close to zero as possible and they resemble a normal distribution. This works nicely for "Recency" and "Monetary". Unfortunately, for "Frequency" the minimum value is 1 and it is by far the most common value, so we transform only the "Recency" and "Monetary" features. We use scikit-learn's PowerTransformer, which performs a "smart" transformation (it can decide which function to use) and can also scale the features. We will not scale the "Frequency" feature; this gives that feature more weight and k-means will split more along frequency. Power transform: https://en.wikipedia.org/wiki/Power_transform (yeo-johnson is essentially box-cox extended to handle negative input values)
###Code
from sklearn.preprocessing import PowerTransformer
pt = PowerTransformer(copy=True, method="yeo-johnson", standardize=True)
rfm_transformed = pd.DataFrame(pt.fit_transform(rfm[["Recency", "Monetary"]]),
index=rfm[["Recency", "Monetary"]].index,
columns=rfm[["Recency", "Monetary"]].columns)
rfm_transformed["Frequency"] = rfm["Frequency"].copy()
###Output
_____no_output_____
###Markdown
Histograms of the features after standardization
###Code
fig, axes = plt.subplots(nrows=3, figsize=(14, 12))
rfm_transformed["Recency"].hist(ax=axes[0], bins=50).set_xlabel("Recency")
rfm_transformed["Frequency"].hist(ax=axes[1], bins=50).set_xlabel("Frequency")
rfm_transformed["Monetary"].hist(ax=axes[2], bins=50).set_xlabel("Monetary")
###Output
_____no_output_____
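###Markdown
As a quick numeric check of the skewness claim (a sketch, assuming `rfm` and `rfm_transformed` from the cells above), pandas can report the skew coefficient before and after the transform:
###Code
# compare skewness of the transformed columns before and after the power transform
print(rfm[["Recency", "Monetary"]].skew())
print(rfm_transformed[["Recency", "Monetary"]].skew())
###Output
_____no_output_____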
###Markdown
K-Means Finding the best number of clusters for the standardized dataframeDeciding on the best number of clusters is hard in general. We use the "elbow" method; 4-7 clusters come out as the best.
###Code
from sklearn.cluster import KMeans
import seaborn as sns
wcss = {}
for k in range(1, 11):
kmeans = KMeans(n_clusters=k, init="k-means++")
kmeans.fit(rfm_transformed)
wcss[k] = kmeans.inertia_
sns.pointplot(x=list(wcss.keys()), y=list(wcss.values()))
plt.xlabel("Clusters")
plt.ylabel("WCSS")
plt.show()
###Output
_____no_output_____
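###Markdown
The elbow heuristic can be complemented with a silhouette check (a sketch; `silhouette_score` comes from `sklearn.metrics`, and higher values indicate better-separated clusters):
###Code
from sklearn.metrics import silhouette_score
# silhouette score for each candidate number of clusters
for k in range(2, 11):
    labels = KMeans(n_clusters=k, init="k-means++", random_state=42).fit_predict(rfm_transformed)
    print(k, round(silhouette_score(rfm_transformed, labels), 3))
###Output
_____no_output_____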
###Markdown
K-Means on the standardized dataframe, visualizationWe therefore choose 5 as the number of clusters.
###Code
import plotly
plotly.offline.init_notebook_mode()
def plot_kmeans(df, description="plot"):
default_colors = [
"#1f77b4", # muted blue
"#ff7f0e", # safety orange
"#2ca02c", # cooked asparagus green
"#d62728", # brick red
"#9467bd", # muted purple
"#ffff00", # yellow
"#e377c2", # raspberry yogurt pink
"#7f7f7f", # middle gray
"#8c564b", # chestnut brown
"#17becf" # blue-teal
]
def get_scatters(clusters):
scatters = []
for i, cluster in zip(range(0, len(clusters)), clusters):
scatters.append(dict(mode = "markers",
name = "Cluster " + str(i+1),
type = "scatter3d",
x = cluster.values[:,0], y = cluster.values[:,1], z = cluster.values[:,2],
marker = dict( size=2, color=default_colors[i])))
return scatters
scatters = get_scatters([df.loc[df["Cluster"] == x] for x in range(0, df["Cluster"].nunique())])
layout = dict(title = description,
scene = dict(xaxis = dict(zeroline=True),
yaxis = dict(zeroline=True),
zaxis = dict(zeroline=True),
xaxis_title="Recency",
yaxis_title="Frequency",
zaxis_title="Monetary")
)
plotly.offline.iplot(dict(data=scatters, layout=layout), filename="mesh3d_sample")
km = KMeans(n_clusters=5, init= "k-means++", random_state=42)
km.fit(rfm_transformed)
rfm_with_pt = rfm.copy()
rfm_with_pt["Cluster"] = km.labels_
plot_kmeans(rfm_with_pt, description="Visualization of clusters with power transform")
###Output
_____no_output_____
###Markdown
Description of the clusters- cluster 3 (green) - our so-called superstars, customers with high Frequency and Monetary and lower Recency- cluster 2 (orange) - loyal customers who buy from us often- cluster 5 (purple) - customers who bought from us more than once or twice, but still not often enough to be considered loyal- cluster 4 (red) - customers who recently made their first or second purchase, potential future loyal customers- cluster 1 (blue) - customers who bought from us at most twice and have not bought anything for a long time K-Means on the dataframe without standardization, visualizationFor comparison with the dataframe where we did not standardize. It is clear that the algorithm gives more weight to the "Recency" feature and tends to split mainly along it.
###Code
km = KMeans(n_clusters=5, init="k-means++", random_state=42)
km.fit(rfm)
rfm["Cluster"] = km.labels_
plot_kmeans(rfm, description="Visualization of clusters without power transform")
###Output
_____no_output_____
###Markdown
Modified version of RFMWe proceed in the same way as with the original RFM
###Code
from dateutil.relativedelta import relativedelta
# function computing the month difference between two dates
def diff_month(d1, d2):
return (d1.year - d2.year) * 12 + d1.month - d2.month
# calculate RM values
rfm_modified = df.groupby("Customer ID").agg({"Date" : lambda x: max(1, diff_month(last_date, x.max())),
"Subtotal" : "max"})
# rename the columns
rfm_modified = rfm_modified.rename(columns = {"Date" : "Recency",
"Subtotal" : "Monetary"})
# calculate F values
freq = df[df["Date"] > last_date - relativedelta(years=1)].groupby("Customer ID").agg({"ID" : "count"})
# add F to RM
rfm_modified.insert(1, "Frequency", freq["ID"])
rfm_modified["Frequency"] = rfm_modified["Frequency"].fillna(1)
# drop outliers
rfm_modified = rfm_modified.drop(4912)
rfm_modified = rfm_modified.drop(14263)
# power transform
pt = PowerTransformer(copy=True, method="yeo-johnson", standardize=True)
rfm_modified_transformed = pd.DataFrame(pt.fit_transform(rfm_modified[["Recency", "Monetary"]]),
index=rfm_modified[["Recency", "Monetary"]].index,
columns=rfm_modified[["Recency", "Monetary"]].columns)
rfm_modified_transformed["Frequency"] = rfm_modified["Frequency"].copy()
# k-means
km = KMeans(n_clusters=5, init="k-means++", random_state=42)
km.fit(rfm_modified_transformed)
rfm_modified["Cluster"] = km.labels_
# plot
plot_kmeans(rfm_modified, description="Visualization of modified RFM")
###Output
_____no_output_____ |
tutorials/basic1.ipynb | ###Markdown
`dicom.open` reads a DICOM formatted file and returns a DataSet object, which holds all information.
###Code
ds = dicom.open("CT2_JLSN") # file is available at ftp://medical.nema.org/MEDICAL/Dicom/DataSets/WG04
###Output
_____no_output_____
###Markdown
`dataset.pixelData()` returns a numpy array that contains the pixel values, which is most likely what you are interested in. You need to install `numpy` for this (if you don't have it yet...)
###Code
ds.pixelData()
###Output
_____no_output_____
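###Markdown
Since the returned object is a plain numpy array (as stated above), the usual numpy attributes work on it; a small sketch using the `ds` opened earlier:
###Code
arr = ds.pixelData()
# shape, dtype and value range of the pixel data
print(arr.shape, arr.dtype, arr.min(), arr.max())
###Output
_____no_output_____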
###Markdown
`dataset.toPilImage()` returns a `pillow` image object. You may display the image or save it as jpeg, png, or whatever other formats `pillow` supports. You need to install `pillow`.
###Code
ds.toPilImage()
###Output
_____no_output_____
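###Markdown
For example, saving the image to a PNG file goes through the usual Pillow `save()` call (a sketch; the output filename here is arbitrary):
###Code
# write the slice out as a PNG next to the notebook
ds.toPilImage().save('CT2_JLSN.png')
###Output
_____no_output_____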
###Markdown
`dataset.getPixelDataInfo()` will return values related to the pixel data.
###Code
ds.getPixelDataInfo()
###Output
_____no_output_____
###Markdown
`ds.dump()` returns a dump string of the `dataset`. Each line represents a `DataElement`, which has a tag and a value. To access the value of a `DataElement`, you need its tag or keyword. You can find the tag or keyword in this dump string.
###Code
print(ds.dump()[:2000]+' ...')
###Output
TAG VR LEN VM OFFSET KEYWORD
'00020000' UL 4 1 0x8c 192 # FileMetaInformationGroupLength
'00020001' OB 2 1 0x9c '\x00\x01' # FileMetaInformationVersion
'00020002' UI 26 1 0xa6 '1.2.840.10008.5.1.4.1.1.2' = CT Image Storage # MediaStorageSOPClassUID
'00020003' UI 46 1 0xc8 '1.3.6.1.4.1.5962.1.1.2.1.7.20040826185059.5457' # MediaStorageSOPInstanceUID
'00020010' UI 22 1 0xfe '1.2.840.10008.1.2.4.81' = JPEG-LS Lossy (Near-Lossless) Image Compression # TransferSyntaxUID
'00020012' UI 18 1 0x11c '1.3.6.1.4.1.5962.2' # ImplementationClassUID
'00020013' SH 10 1 0x136 'DCTOOL100' # ImplementationVersionName
'00020016' AE 8 1 0x148 'CLUNIE1' # SourceApplicationEntityTitle
'00080008' CS 24 3 0x158 'DERIVED\SECONDARY\AXIAL' # ImageType
'00080012' DA 8 1 0x178 '20040826' # InstanceCreationDate
'00080013' TM 6 1 0x188 '185120' # InstanceCreationTime
'00080014' UI 18 1 0x196 '1.3.6.1.4.1.5962.3' # InstanceCreatorUID
'00080016' UI 26 1 0x1b0 '1.2.840.10008.5.1.4.1.1.2' = CT Image Storage # SOPClassUID
'00080018' UI 46 1 0x1d2 '1.3.6.1.4.1.5962.1.1.2.1.7.20040826185059.5457' # SOPInstanceUID
'00080020' DA 8 1 0x208 '20040826' # StudyDate
'00080022' DA 8 1 0x218 '19960521' # AcquisitionDate
'00080023' DA 8 1 0x228 '19970915' # ContentDate
'00080030' TM 6 1 0x238 '185059' # StudyTime
'00080032' TM 10 1 0x246 '094906.900' # AcquisitionTime
'00080033' TM 10 1 0x258 '184116.000' # ContentTime
'00080050' SH 0 0 0x26a (no value) # AccessionNumber
'00080060' CS 2 1 0x272 'CT' # Modality
'00080070' LO 8 1 0x27c 'TOSHIBA' # Manufacturer
'00080080' LO 8 1 0x28c 'TOSHIBA' # InstitutionName
'00080090' PN 0 0 0x29c (no value) # ReferringPhysicianName
'00080201' SH 6 1 0x2a4 '-0400' # TimezoneOffsetFromUTC
'00081010' SH 6 1 0x2b2 '000001' # StationName
'00081070' PN 0 0 0x2c0 (no value) # OperatorsName
'00081090' LO 10 1 0x2c8 'Xpress/GX' # ManufacturerModelName
'00082111' ST 26 1 0x2da 'JPEG-LS near-lossless 10:1' # DerivationDescription
'00082112' SQ 206 1 0x300 SEQUENCE WITH 1 DATASET(s) # Sour ...
###Markdown
From the above string, you can find `StudyDate`. A `DataElement` with tag (0008,0020) holds the value for `StudyDate`. You can get the same value in several ways; choose whichever you find easiest.
###Code
print(ds.StudyDate)
print(ds['(0008,0020)'])
print(ds[0x00080020])
print(ds['00080020'])
print(ds.getDataElement(0x00080020).toString())
print(ds.getDataElement(0x00080020).value())
###Output
20040826
20040826
20040826
20040826
20040826
20040826
###Markdown
If a `DataElement` with a given tag does not exist, the above line will return `None`
###Code
print(ds.InstitutionAddress)
###Output
None
###Markdown
You can retrieve the values of multiple `DataElement`s in one line.
###Code
ds.getValues(['StudyDate', 'StudyTime', 'InstitutionName', 'InstitutionAddress'])
###Output
_____no_output_____ |
LightAutoML demo (NLP).ipynb | ###Markdown
AutoML on text data  A bit more about strategies for building text representations from word representations:You can read more about the random-encoder methods in the [paper](https://arxiv.org/abs/1901.10444) "No Training Required: Exploring Random Encoders for Sentence Classification". Imports
###Code
import pandas as pd
import numpy as np
import pickle
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from lightautoml.automl.presets.text_presets import TabularNLPAutoML
from lightautoml.tasks import Task
from lightautoml.addons.interpretation import LimeTextExplainer
from lightautoml.report import ReportDecoNLP
# Silence HuggingFace warnings
import transformers
transformers.logging.set_verbosity(50)
###Output
_____no_output_____
###Markdown
Reading the data
###Code
%%time
df = pd.read_csv("./example_data/nlp_data.csv")
print(df.shape)
df.sample(5, random_state=0)
###Output
(13842, 6)
###Markdown
Splitting into training and test sets
###Code
train, test = train_test_split(df, test_size=3_000, random_state=42, stratify=df.IsGood)
###Output
_____no_output_____
###Markdown
Downloading word embeddings for Russian
###Code
!wget https://storage.yandexcloud.net/natasha-navec/packs/navec_hudlit_v1_12B_500K_300d_100q.tar
from navec import Navec
path = 'navec_hudlit_v1_12B_500K_300d_100q.tar'
navec = Navec.load(path)
###Output
_____no_output_____
###Markdown
Training AutoML, or Groundhog Day Day 1. Default parameters, CPU
###Code
roles = {'target': 'IsGood',
'text': ['BankName', 'Message'],
'drop': ['MessageRecognized', 'WER']}
task = Task('binary')
automl = TabularNLPAutoML(task = task,
timeout = 3600,
gpu_ids = None,
text_params = {'lang': 'ru'},
verbose=2)
oof_pred = automl.fit_predict(train, roles=roles)
not_nan = np.any(~np.isnan(oof_pred.data), axis=1)
print('AUC OOF score: {}'.format(roc_auc_score(train[roles['target']].values[not_nan], oof_pred.data[not_nan][:, 0])))
%%time
test_pred = automl.predict(test)
print('AUC TEST score: {}'.format(roc_auc_score(test[roles['target']].values, test_pred.data[:, 0])))
###Output
Feature concated__BankName__Message transformed
AUC TEST score: 0.8397933778340239
CPU times: user 7.46 s, sys: 1.74 s, total: 9.2 s
Wall time: 11.5 s
###Markdown
Day 2. Custom word embeddings, CPU
###Code
roles = {'target': 'IsGood',
'text': ['BankName', 'Message'],
'drop': ['MessageRecognized', 'WER']}
task = Task('binary')
automl = TabularNLPAutoML(task = task,
timeout = 3600,
gpu_ids = None,
text_params = {'lang': 'ru'},
autonlp_params={'model_name': 'wat', 'embedding_model': navec,
'transformer_params': {'model_params': {'embed_size': 300},
'weight_type': 'idf', 'use_svd': True}},
verbose=2)
oof_pred = automl.fit_predict(train, roles=roles)
not_nan = np.any(~np.isnan(oof_pred.data), axis=1)
print('AUC OOF score: {}'.format(roc_auc_score(train[roles['target']].values[not_nan], oof_pred.data[not_nan][:, 0])))
%%time
test_pred = automl.predict(test)
print('AUC TEST score: {}'.format(roc_auc_score(test[roles['target']].values, test_pred.data[:, 0])))
###Output
Feature concated__BankName__Message transformed
AUC TEST score: 0.8433377172303697
CPU times: user 11.7 s, sys: 1.82 s, total: 13.5 s
Wall time: 15.4 s
###Markdown
Day 3. Default parameters, GPU
###Code
roles = {'target': 'IsGood',
'text': ['BankName', 'Message'],
'drop': ['MessageRecognized', 'WER']}
task = Task('binary')
automl = TabularNLPAutoML(task = task,
timeout = 3600,
gpu_ids = '1',
text_params = {'lang': 'ru'},
nn_params = {'lang': 'ru'},
verbose=2)
oof_pred = automl.fit_predict(train, roles=roles)
not_nan = np.any(~np.isnan(oof_pred.data), axis=1)
print('AUC OOF score: {}'.format(roc_auc_score(train[roles['target']].values[not_nan], oof_pred.data[not_nan][:, 0])))
%%time
test_pred = automl.predict(test)
print('AUC TEST score: {}'.format(roc_auc_score(test[roles['target']].values, test_pred.data[:, 0])))
###Output
100%|██████████| 10/10 [00:16<00:00, 1.69s/it]
###Markdown
Day 4. Custom word embeddings, GPU, LightGBM
###Code
roles = {'target': 'IsGood',
'text': ['BankName', 'Message'],
'drop': ['MessageRecognized', 'WER']}
task = Task('binary')
automl = TabularNLPAutoML(task = task,
timeout = 3600,
gpu_ids = '1',
general_params = {'use_algos': ['lgb']},
text_params = {'lang': 'ru'},
autonlp_params={'model_name': 'random_lstm', 'embedding_model': navec},
verbose=2)
oof_pred = automl.fit_predict(train, roles=roles)
not_nan = np.any(~np.isnan(oof_pred.data), axis=1)
print('AUC OOF score: {}'.format(roc_auc_score(train[roles['target']].values[not_nan], oof_pred.data[not_nan][:, 0])))
%%time
test_pred = automl.predict(test)
print('AUC TEST score: {}'.format(roc_auc_score(test[roles['target']].values, test_pred.data[:, 0])))
###Output
100%|██████████| 3/3 [00:04<00:00, 1.40s/it]
###Markdown
Day 5. Choosing the word-embedding aggregation, GPU, linear model and LightGBM
###Code
roles = {'target': 'IsGood',
'text': ['BankName', 'Message'],
'drop': ['MessageRecognized', 'WER']}
task = Task('binary')
automl = TabularNLPAutoML(task = task,
timeout = 3600,
gpu_ids = '1',
general_params = {'use_algos': ['linear_l2', 'lgb']},
text_params = {'lang': 'ru'},
autonlp_params={'model_name': 'pooled_bert'},
verbose=2)
oof_pred = automl.fit_predict(train, roles=roles)
not_nan = np.any(~np.isnan(oof_pred.data), axis=1)
print('AUC OOF score: {}'.format(roc_auc_score(train[roles['target']].values[not_nan], oof_pred.data[not_nan][:, 0])))
%%time
test_pred = automl.predict(test)
print('AUC TEST score: {}'.format(roc_auc_score(test[roles['target']].values, test_pred.data[:, 0])))
###Output
100%|██████████| 10/10 [00:16<00:00, 1.64s/it]
###Markdown
Day 6. Choosing the Transformers model, GPUrubert-tiny. More details in the [article](https://habr.com/ru/post/562064/).
###Code
roles = {'target': 'IsGood',
'text': ['BankName', 'Message'],
'drop': ['MessageRecognized', 'WER']}
task = Task('binary')
automl = TabularNLPAutoML(task = task,
timeout = 3600,
gpu_ids = '1',
general_params = {'use_algos': ['nn']},
nn_params = {'lang': 'ru', 'bert_name': "cointegrated/rubert-tiny"},
verbose=2)
oof_pred = automl.fit_predict(train, roles=roles)
not_nan = np.any(~np.isnan(oof_pred.data), axis=1)
print('AUC OOF score: {}'.format(roc_auc_score(train[roles['target']].values[not_nan], oof_pred.data[not_nan][:, 0])))
%%time
test_pred = automl.predict(test)
print('AUC TEST score: {}'.format(roc_auc_score(test[roles['target']].values, test_pred.data[:, 0])))
###Output
test: 100%|██████████| 188/188 [00:03<00:00, 52.82it/s]
test: 100%|██████████| 188/188 [00:03<00:00, 52.87it/s]
test: 100%|██████████| 188/188 [00:03<00:00, 52.69it/s]
###Markdown
What's next? LIME interpretation. A rough outline of how it works: 1. A text column (perturb_column) is chosen, which will be used to interpret the selected model prediction; all other features are kept fixed. 2. A dataset of size n_sample (5000 by default) is created by randomly removing tokens (in groups). The dataset is binary (token present / token absent). 3. Optionally, feature selection (of the important tokens) is performed with LASSO (feature_selection='lasso'; 'none' can also be passed to skip the selection). The number of features equals n_features (10 by default). 4. A surrogate model is trained on this dataset (a weighted linear model; the weights are computed from the cosine distance by default, but a custom function or the name of a distance from sklearn.metrics.pairwise_distances can also be used). 5. The weights of this linear model then serve as the interpretation. Tip: force_order controls whether the features are treated as a bag of words (force_order=False) or whether their order matters (force_order=True).
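To make the steps above concrete, the cell below is a small, self-contained sketch of the same idea (random token perturbations plus a distance-weighted linear surrogate). It is only an illustration, not LightAutoML's actual implementation; the toy tokens, the synthetic black-box scorer and the Ridge surrogate are assumptions made for the example.
###Code
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics.pairwise import cosine_distances

rng = np.random.RandomState(0)
tokens = ['quick', 'transfer', 'thanks', 'bank', 'queue', 'problem']  # toy tokens, purely illustrative
n_tokens, n_sample = len(tokens), 500

# steps 1-2: binary perturbation dataset (1 = token kept, 0 = token dropped)
Z = rng.binomial(1, 0.5, size=(n_sample, n_tokens))
Z[0, :] = 1  # the original, unperturbed instance

# stand-in for the black-box model score (the real pipeline would call automl.predict)
true_w = rng.normal(size=n_tokens)
y = Z @ true_w + 0.1 * rng.normal(size=n_sample)

# step 4: proximity weights from the cosine distance to the unperturbed instance
weights = 1.0 - cosine_distances(Z, Z[:1]).ravel()

# weighted linear surrogate; its coefficients are the token attributions (step 5).
# The LASSO-based feature selection of step 3 is skipped in this sketch.
surrogate = Ridge(alpha=1.0).fit(Z, y, sample_weight=weights)
for token, coef in sorted(zip(tokens, surrogate.coef_), key=lambda t: -abs(t[1])):
    print(f'{token}: {coef:+.3f}')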
###Code
lime = LimeTextExplainer(automl, feature_selection='lasso', force_order=False)
instance = test.iloc[0] # объект для интерпретации
exp = lime.explain_instance(instance, labels=(0, 1), perturb_column='Message')
exp.visualize_in_notebook(label=1)
instance = test.iloc[-1] # объект для интерпретации
exp = lime.explain_instance(instance, labels=(0, 1), perturb_column='Message')
exp.visualize_in_notebook(label=1)
###Output
test: 100%|██████████| 313/313 [00:05<00:00, 57.36it/s]
test: 100%|██████████| 313/313 [00:05<00:00, 57.73it/s]
test: 100%|██████████| 313/313 [00:05<00:00, 54.12it/s]
###Markdown
Report
###Code
RD = ReportDecoNLP(output_path='NLP_REPORT',
report_file_name='report_nlp.html')
roles = {'target': 'IsGood',
'text': ['BankName', 'Message'],
'drop': ['MessageRecognized', 'WER']}
task = Task('binary')
automl = TabularNLPAutoML(task = task,
timeout = 3600,
gpu_ids = '1',
general_params = {'use_algos': ['linear_l2']},
linear_pipeline_params = {'text_features': "embed"},
text_params = {'lang': 'ru'},
autonlp_params={'model_name': 'pooled_bert',
'transformer_params': {'model_params': {'pooling': 'cls'}}},
verbose=2)
automl_rd = RD(automl)
oof_pred = automl_rd.fit_predict(train, roles=roles)
not_nan = np.any(~np.isnan(oof_pred.data), axis=1)
print('AUC OOF score: {}'.format(roc_auc_score(train[roles['target']].values[not_nan], oof_pred.data[not_nan][:, 0])))
%%time
test_pred = automl_rd.predict(test)
print('AUC TEST score: {}'.format(roc_auc_score(test[roles['target']].values, test_pred.data[:, 0])))
###Output
100%|██████████| 3/3 [00:16<00:00, 5.45s/it]
###Markdown
The report is available [here](./NLP_REPORT/report_nlp.html). Saving the model
###Code
with open('LAMA_model.pkl', 'wb') as f:
pickle.dump(automl_rd, f)
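# Added sketch: the pickled pipeline can later be restored and reused for
# inference; 'loaded_model' is only an illustrative variable name.
with open('LAMA_model.pkl', 'rb') as f:
    loaded_model = pickle.load(f)
reloaded_pred = loaded_model.predict(test)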
###Output
_____no_output_____ |
Lab 4/Lab4_DT_Task2_Wine_Dataset.ipynb | ###Markdown
**Roll Number: CE137** **Exercise, Task 2:** Apply the algorithm to the wine dataset: label encoding of the features and a 66%-34% train/test split
###Code
# Import necessary libraries
import numpy as np
import pandas as pd
from sklearn.metrics import accuracy_score, precision_score, recall_score, confusion_matrix
from sklearn import preprocessing
from sklearn.tree import DecisionTreeClassifier
import tkinter
import graphviz
from google.colab import drive
drive.mount('/content/drive')
# Loading wine dataset red one
dataset = pd.read_csv('/content/drive/MyDrive/Colab Notebooks/ML Labs/Lab 4/winequality-red.csv')
print("\nData :\n", dataset)
###Output
Data :
fixed acidity volatile acidity citric acid ... sulphates alcohol quality
0 7.4 0.700 0.00 ... 0.56 9.4 5
1 7.8 0.880 0.00 ... 0.68 9.8 5
2 7.8 0.760 0.04 ... 0.65 9.8 5
3 11.2 0.280 0.56 ... 0.58 9.8 6
4 7.4 0.700 0.00 ... 0.56 9.4 5
... ... ... ... ... ... ... ...
1594 6.2 0.600 0.08 ... 0.58 10.5 5
1595 5.9 0.550 0.10 ... 0.76 11.2 6
1596 6.3 0.510 0.13 ... 0.75 11.0 6
1597 5.9 0.645 0.12 ... 0.71 10.2 5
1598 6.0 0.310 0.47 ... 0.66 11.0 6
[1599 rows x 12 columns]
###Markdown
**All feature values in the dataset are already numeric, so label encoding is not needed.**
###Code
#creating labelEncoder
# le = preprocessing.LabelEncoder()
# trying to encode float point type
# fixed_acidity_encoded = le.fit_transform(dataset["fixed acidity"])
# print("fixed_acidity_encoded :", fixed_acidity_encoded)
# Considering quality as class label and rest as features
cols = dataset.columns.drop('quality')
# print(cols)
features = dataset[dataset.columns.drop('quality')]
# print(features)
target = dataset["quality"]
# print(target)
#import the necessary module
from sklearn.model_selection import train_test_split
#split data set into train and test sets into 66% - 34%
data_train, data_test, target_train, target_test = train_test_split(features,
target, test_size = 0.34, random_state = 137)
# Create a Decision Tree classifier (scikit-learn's default Gini criterion)
dt_classifier = DecisionTreeClassifier()
# Train the model using the training sets
model = dt_classifier.fit(data_train, target_train)
# Predict the classes of test data
test_pred = model.predict(data_test)
# print(test_pred.dtype)
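# Added sketch: evaluate the predictions with the metrics imported above.
# Macro averaging is an assumption, since wine quality is a multi-class label.
print('Accuracy :', accuracy_score(target_test, test_pred))
print('Precision (macro):', precision_score(target_test, test_pred, average='macro'))
print('Recall (macro):', recall_score(target_test, test_pred, average='macro'))
print('Confusion matrix:\n', confusion_matrix(target_test, test_pred))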
from sklearn.tree import export_graphviz
export_graphviz(model, out_file='wine_tree.dot',feature_names=cols,
class_names="quality", filled=True)
# Convert to png
from subprocess import call
call(['dot', '-Tpng', 'wine_tree.dot', '-o', 'wine_tree.png', '-Gdpi=600'])
# Displaying in python
import matplotlib.pyplot as plt
plt.figure(figsize = (14, 18))
plt.imshow(plt.imread('wine_tree.png'))
plt.axis('off')
plt.show()
###Output
_____no_output_____ |
Final Project 1 - Wine Quality Prediction.ipynb | ###Markdown
PROJECT 1: Wine Quality Prediction. TEAM MEMBERS: Shubham Nanche, Yogesh Jyoti
###Code
import numpy as np
import pandas as pd
import matplotlib as plt
import seaborn as sns
from sklearn import metrics
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, classification_report
dataset= pd.read_csv('Final Project 1.csv')
dataset.head()
dataset.describe()
dataset.info()
dataset.isnull().sum()
sns.set()
sns.histplot(dataset['pH'])
sns.set()
sns.distplot(dataset['quality'])
dataset.hist(figsize=(20,20))
corrltn = dataset.corr()
plt.pyplot.subplots(figsize=(20,20))
sns.heatmap(corrltn, cbar=True, square = True, fmt='.1f', annot = True, annot_kws = {'size':10}, cmap='Greens')
dataset['goodquality'] = [1 if x >= 7 else 0 for x in dataset['quality']]
# Separate features and target
X = dataset.drop(['quality','goodquality'], axis = 1)
y = dataset['goodquality']
X.head()
X.describe()
y.head()
from sklearn.preprocessing import StandardScaler
x_features = X
x = StandardScaler().fit_transform(X)
from sklearn.preprocessing import scale
X_scale = scale(x)
X_train, X_test, Y_train, Y_test = train_test_split(X_scale, y, test_size=0.25, random_state=0)
print(X_train.shape, X_test.shape)
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
model.fit(X_train, Y_train)
Y_pred=model.predict(X_train)
training_data_accuracy=accuracy_score(Y_pred,Y_train)
training_data_accuracy
model_linear_svm = SVC(kernel='linear')
model_linear_svm.fit(X_train, Y_train)
y_pred = model_linear_svm.predict(X_test)
print("Accuracy : ", accuracy_score(Y_test, y_pred))
print("Confusion Matrix : ", confusion_matrix(Y_test, y_pred))
non_linear_SVM = SVC(kernel='rbf')
non_linear_SVM.fit(X_train, Y_train)
y_pred = non_linear_SVM.predict(X_test)
print("Accuracy of Non Linear Model: ", accuracy_score(Y_test, y_pred))
import xgboost as xgb
model5 = xgb.XGBClassifier(use_label_encoder=False, random_state=2)
model5.fit(X_train, Y_train)
y_pred5 = model5.predict(X_test)
print(classification_report(Y_test, y_pred5))
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
model2 = RandomForestClassifier(random_state=0)
model2.fit(X_train, Y_train)
y_pred2 = model2.predict(X_test)
print(classification_report(Y_test, y_pred2))
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors =25, metric = 'minkowski')
knn.fit(X_train, Y_train)
knn_Y_pred = knn.predict(X_test)
knn_Y_pred
from sklearn.metrics import confusion_matrix
knn_cm = confusion_matrix(Y_test, knn_Y_pred)
sns.heatmap(knn_cm, annot=True)
print(classification_report(Y_test, knn_Y_pred))
###Output
precision recall f1-score support
0 0.92 0.96 0.94 355
1 0.52 0.36 0.42 45
accuracy 0.89 400
macro avg 0.72 0.66 0.68 400
weighted avg 0.88 0.89 0.88 400
|
ab_tests/ab_test_template.ipynb | ###Markdown
Template For A/B Testing A/B testing is a general methodology used when you want to test whether a new feature or change is better. You show one set of features, the control set (your existing feature), to one user group and another set, the experiment set (your new feature), to another user group, and then test how differently these users respond, so that you can determine which set of features is better. If the ultimate goal is to decide which model or design is the best, then A/B testing is the right framework, along with its many gotchas to watch out for. The following section describes a possible workflow for conducting A/B testing. Generate High Level Business Goal- **Define your business objectives.** e.g. A business objective for an online flower store is to "Increase our sales by receiving online orders for our bouquets."- **Define your Key Performance Indicators.** e.g. Our flower store's business objective is to sell bouquets. Our KPI could be the number of bouquets sold online.- **Define your target metrics.** e.g. For our imaginary flower store, we can define a monthly target of 175 bouquets sold. Segmenting And Understanding The Whys After defining the high-level goal, find out (not guess) which parts of your business are underperforming or trending, and why. Quantitative methods do a much better job of answering how-many and how-much types of questions, whereas qualitative methods such as user experience groups (you go really deep with a few users; this can take the form of observing users doing tasks, or asking users to self-document their behaviors) and surveys are much better suited for answering questions about why, or about how to fix a problem.**Take a look at your conversion funnel.** Examine the flow from the persuasive end (top of the funnel) to the transactional end (bottom of the funnel). e.g. You can identify problems by starting from the top 5 highest bounce-rate pages. During the examination, segment to spot underlying underperformance or trends.- **Segment by source:** Separate people who arrive on your website from e-mail campaigns, Google, Twitter, YouTube, etc. Find answers to questions like: Is there a difference between bounce rates for those segments? Is there a difference in visitor loyalty between those who came from YouTube versus those who came from Twitter? What products do people who come from YouTube care about more than people who come from Google?- **Segment by behavior:** Focus on groups of people who have similar behaviors. For example, you can separate out people who visit more than ten times a month versus those who visit only twice. Do these people look for products in different price ranges? Are they from different regions? Or separate people out by the products they purchase, by order size, or by whether they have signed up. Consider segmenting your users into different buckets and testing against that, because mobile visitors perform differently than desktop ones, new visitors are different from returning visitors, and e-mail traffic is different from organic traffic. Start thinking "segment first." During the process, ask yourself: 1) Why is it happening? 2) How can we spread the success to other areas of the site? e.g. You're looking at your metric of total active users over time and you see a spike in one of the timelines. After confirming that this is not caused by seasonal variation, we can look at different segments of our visitors to see if one of the segments is causing the spike. 
Suppose we have chosen the segment to be geographic; it might just happen that we identify a large proportion of the traffic being generated by a specific region, and it might be best for us to dig deeper and understand why.**Three simple ideas for gathering qualitative data to understand the why**- Add an exit survey on your site, asking why your visitors did/didn't complete the goal of the site.- Send out feedback surveys to your clients to find out more about them and their motives.- Simply track what your customers are saying in social media and on review sites. Generate a Well-Defined Metric Set the "Lower" Level Goals Now that you've identified the overall business goal and the possible problem (e.g. less than one percent of visitors sign up for our newsletter), **it's time to prioritize your website goals. Three categories of goals include:**- Do x: Add better product images.- Increase y: Increase click-through rates.- Reduce z: Reduce our shopping cart abandonment rate. Define the Subject What you need to do is decide how to assign users to either the control or the experiment group. There are three commonly used categories, namely user id, anonymous id (cookie) and event.- **user id:** e.g. log-in user names. Choosing this as the proxy for your user means that all the events that correspond to the same user id are either in the control or the experiment group, regardless of whether that user is switching between a mobile phone and a desktop. This also means that if the user has not logged in, then he/she will not be assigned to either a control or an experiment group.- **anonymous id (cookie):** The cookie is specific to a browser and device, so if the user switches from Chrome to Firefox, they'll be assigned to a different cookie. Also note that users can clear the cookie, in which case the next time they visit the website they'll get assigned to a new cookie even if they're still using the same browser and device. For experiments that will be crossing the sign-in border, using a cookie is preferred. e.g. Suppose you're changing the layout of the page or the location of the sign-in bar; then you should use a cookie.- **event:** Should only be used when you're testing a non-user-visible change, e.g. page load time. If not, what will happen is: the user will see the change when they first visit the page, and after reloading the page the user will not see the change, leading to confusion. Define the Population If you think you can identify which population of your users will be affected by your experiment, you might want to target your experiment to that traffic (e.g. changing features specific to one language's users) so that the rest of the population won't dilute the effect. Next, depending on the problem you're looking at, you might want to use a cohort instead of a population. A cohort makes much more sense than looking at the entire population when testing out learning effects, examining user retention, or anything else that requires the users to be established for some reason. A quick note on cohorts: the gist of cohort analysis is putting your customers into buckets so you can track their behaviour over a period of time. The term cohort stands for a group of customers grouped by the timeline (a week or month, say) in which they first made a purchase (or took a different action that's valuable to the business). In a cohort, you have roughly the same parameters in your two user groups, which makes them more comparable. e.g. 
You’re an educational platform has an existing course that’s already up and running. Some of the students have completed the course, some of them are midway through and there’re students who have not yet started. If you want to change the structure of of one of the lessons to see if it improves the completion rate of the entire course and they started the experiment at time X. For students who have started before the experiment initiated they may have already finished the lesson already leading to the fact that they may not even see the change. So taking the whole population of students and running the experiment on them isn’t what you want. Instead, you want to segment out the cohort, the group of customers, that started the lesson are the experiment was launched and split that into an experiment and control group. Define the Size and Duration**When do I want to run the experiment and for how long.**e.g. Suppose we’ve chosen the goal to increase click-through rates, which is defined by the unique number of people who click the button versus the number of users who visited the page that the button was located. But to actually use the definition, we’ll also have to address some other questions. Such as, if the same user visits the page once and comes back a week or two later, do we still only want to count that once? Thus we’ll also need to specify a time periodTo account for this, if 99% of your visitors convert after 1 week, then you should do the following.- Run your test for two weeks.- Include in the test only users who show up in the first week. If a user shows up on day 13, you have not given them enough time to convert (click-through).- At the end of the test, if a user who showed up on day 2 converts more than 7 days after he first arrived, he must be counted as a non-conversion.**So one version of the fully-defined metric will be: For each week, the number of cookies that clicked divided by the number of cookies that interacted with the page (also add the population definition).**Running the test for a least a week is adviced since it'll make sure that the experiment captures the different user behaviour of weekdays, weekends and try to avoid holidays ....If your population is defined and you have a large enough traffic, another consideration is what fraction of the traffic are you going to send through the experiment. There’re some reasons that you might not want to run the experiment on all of your traffic to get the result faster.- The first consideration might be you’re just uncertained of how your users will react to the new feature, so you might want to test it out a bit before you get users blogging about it. - The same notion applies to riskier cases, such as you’re completely switching your backend system, if it doesn’t work well, then the site might go down. Prioritize**After collating all the ideas, prioritize them based on three simple metrics:** (give them scores)- **Potential** How much potential for a conversion rate increase? You can check to see if this kind of idea worked before.- **Importance** How many visitors will be impacted from the test?- **Ease** How easy is it to implement the test? Go for the low-hanging fruit first. Every test that's developed is documented so that we can review and prioritize ideas that are inspired by winning tests.Some ideas worth experimenting are: Headlines, CTA (call to actions), check-out pages, forms and the elements include:- Wording. e.g. Call to action or value proposition.- Image. e.g. 
Replacing a general logistics image with the image of an actual employee.- Layout. e.g. Increasing the size of the contact form or the amount of content on the page. A/B Testing Caveats This section lists some caveats that were not mentioned, or only covered briefly, in the template above. Avoid Biased Stopping Times NO PEEKING. When you run an A/B test, you should avoid stopping the experiment as soon as the results "look" significant. Using a stopping time that is dependent upon the results of the experiment can inflate your false-positive rate substantially. To understand why this is so, let's look at a simpler experimental problem. Let's say that we have a coin in front of us, and we want to know whether it's biased -- whether it lands heads-up with probability other than 50%. If we flip the coin $n$ times and it lands heads-up on $k$ of them, then we know that the posterior distribution for the coin's bias is $p \sim Beta(k+1, n-k+1)$. So if we do this and 0.5 isn't within a 95% credible interval for $p$, then we would conclude that the coin is biased, with p-value <= 0.05. This is all fine as long as the number of flips we perform, $n$, doesn't depend on the results of the previous flips. If we do *that*, then we bias the experiment to favor extremal outcomes. Let's clarify this by simulating these two experimental procedures in code.- **Unbiased Procedure:** We flip the coin 1000 times. Let $k$ be the number of times that the coin landed heads-up. After all 1000 flips, we look at the $p \sim Beta(k+1, 1000-k+1)$ distribution. If 0.5 lies outside the 95% credible interval for $p$, then we conclude that $p \neq 0.5$; if 0.5 does lie within the 95% credible interval, then we're not sure -- we don't reject the idea that $p = 0.5$.- **Biased Procedure:** We start flipping the coin. For each $n$ with $1 < n \leq 1000$, let $k_n$ be the number of times the coin lands heads-up after the first $n$ flips. After each flip, we look at the distribution $p \sim Beta(k_n+1, n-k_n+1)$. If 0.5 lies outside the 95% credible interval for $p$, then we immediately halt the experiment and conclude that $p \neq 0.5$; if 0.5 does lie within the 95% credible interval, we continue flipping. If we make it to 1000 flips, we stop completely and follow the unbiased procedure. How many false positives do you think the Biased Procedure will produce? We chose our p-value threshold to be 0.05, so that the false positive rate should be about 5%. Let's repeat each procedure 10,000 times, assuming that the coin really is fair, and see what the false positive rates really are:
###Code
def unbiased_procedure(n):
"""
Parameters
----------
n : int
number of experiments (1000 coin flips) to run
"""
false_positives = 0
for _ in range(n):
# success[-1] : total number of heads after the 1000 flips
success = ( np.random.random(size = 1000) > 0.5 ).cumsum()
beta_cdf = stats.beta( success[-1] + 1, 1000 - success[-1] + 1 ).cdf(0.5)
if beta_cdf >= 0.975 or beta_cdf <= 0.025:
false_positives += 1
return false_positives / n
def biased_procedure(n):
false_positives = 0
for _ in range(n):
success = ( np.random.random(size = 1000) > 0.5 ).cumsum()
trials = np.arange( 1, 1001 )
history = stats.beta( success + 1, trials - success + 1 ).cdf(0.5)
if ( history >= 0.975 ).any() or ( history <= 0.025 ).any():
false_positives += 1
return false_positives / n
# simulating 10k experiments under each procedure
print( unbiased_procedure(10000) )
print( biased_procedure(10000) )
###Output
0.0527
0.4871
|
04_stats_for_data_analysis/08_w1p_stat.binomial_test_with_plots.ipynb | ###Markdown
Биномиальный критерий для доли
###Code
import numpy as np
from scipy import stats
%pylab inline
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
Shaken, not stirred James Bond says that he prefers his martini shaken, not stirred. Let us run a blind test: we offer him a pair of drinks $n$ times and find out which of the two he prefers. We get: * **sample:** a binary vector of length $n$, where 1 means James Bond preferred the shaken drink and 0 the stirred one;* **hypothesis $H_0$:** James Bond cannot tell the two drinks apart and chooses at random;* **statistic $T$:** the number of ones in the sample. If the null hypothesis is true and James Bond really does choose at random, then each of the $2^n$ binary vectors of length $n$ is equally likely. We could enumerate all such vectors, compute the value of the statistic $T$ on each of them, and thereby obtain its null distribution. In this case, however, that step can be skipped: we are dealing with a sample of 0s and 1s, that is, with draws from a Bernoulli distribution $Ber(p)$. The null hypothesis of choosing at random corresponds to $p=\frac1{2}$, i.e. in each trial the probability of choosing the shaken martini equals $\frac1{2}$. The sum of $n$ identically distributed Bernoulli random variables with parameter $p$ follows the binomial distribution $Bin(n, p)$. Consequently, the null distribution of the statistic $T$ is $Bin\left(n, \frac1{2}\right)$. Let $n=16.$
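As an optional, added sanity check of that claim, a brute-force enumeration of all $2^n$ binary vectors for $n=16$ reproduces exactly the $Bin\left(16, \frac1{2}\right)$ probabilities:
###Code
import itertools
import numpy as np
from scipy import stats

n = 16
counts = np.zeros(n + 1)
for outcome in itertools.product((0, 1), repeat=n):
    counts[sum(outcome)] += 1            # value of the statistic T for this vector
null_pmf = counts / 2 ** n               # null distribution of T from brute force
print(np.allclose(null_pmf, stats.binom(n, 0.5).pmf(np.arange(n + 1))))  # True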
###Code
n = 16
F_H0 = stats.binom(n, 0.5)
x = np.linspace(0,16,17)
pylab.bar(x, F_H0.pmf(x), align = 'center')
xlim(-0.5, 16.5)
pylab.show()
###Output
_____no_output_____
###Markdown
One-sided alternative **Hypothesis $H_1$:** James Bond prefers the shaken martini. Under this alternative, large values of the statistic are more likely; when computing the achieved significance level (the p-value), we sum the heights of the bars in the right tail of the distribution.
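The same tail probability can also be computed directly from the binomial pmf; this added snippet should agree with the `stats.binom_test` call below (about 0.038 for $T=12$):
###Code
# right-tail sum P(T >= 12) under Bin(16, 1/2); expected value is about 0.0384
p_value_greater = F_H0.pmf(np.arange(12, 17)).sum()
print(p_value_greater)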
###Code
pylab.bar(x, F_H0.pmf(x), align = 'center')
pylab.bar(np.linspace(12,16,5), F_H0.pmf(np.linspace(12,16,5)), align = 'center', color='red')
xlim(-0.5, 16.5)
pylab.show()
stats.binom_test(12, 16, 0.5, alternative = 'greater')
pylab.bar(x, F_H0.pmf(x), align = 'center')
pylab.bar(np.linspace(11,16,6), F_H0.pmf(np.linspace(11,16,6)), align = 'center', color='red')
xlim(-0.5, 16.5)
pylab.show()
stats.binom_test(11, 16, 0.5, alternative = 'greater')
###Output
_____no_output_____
###Markdown
Two-sided alternative **Hypothesis $H_1$:** James Bond prefers one particular kind of martini. Under this alternative, both very large and very small values of the statistic are more likely; when computing the achieved significance level, we sum the heights of the bars in both the right and the left tails of the distribution.
###Code
pylab.bar(x, F_H0.pmf(x), align = 'center')
pylab.bar(np.linspace(12,16,5), F_H0.pmf(np.linspace(12,16,5)), align = 'center', color='red')
pylab.bar(np.linspace(0,4,5), F_H0.pmf(np.linspace(0,4,5)), align = 'center', color='red')
xlim(-0.5, 16.5)
pylab.show()
stats.binom_test(12, 16, 0.5, alternative = 'two-sided')
pylab.bar(x, F_H0.pmf(x), align = 'center')
pylab.bar(np.linspace(13,16,4), F_H0.pmf(np.linspace(13,16,4)), align = 'center', color='red')
pylab.bar(np.linspace(0,3,4), F_H0.pmf(np.linspace(0,3,4)), align = 'center', color='red')
xlim(-0.5, 16.5)
pylab.show()
stats.binom_test(13, 16, 0.5, alternative = 'two-sided')
###Output
_____no_output_____ |
ECVs/07_Permafrost-exercise-zarr.ipynb | ###Markdown
Mapping the permafrost active layer thickness using Cate and the Zarr Data Store. The Zarr Data Store hosts one permafrost dataset; this exercise will show you how to visualize and process it. Preparations: If you haven't done so already, please follow the [Cate tutorial](futurelearn.com/tbd) to get started with the exercises.
###Code
# To get things started we need to initialize a few things
#Load some python modules to make them accessible to the notebook
from cate.core.ds import DATA_STORE_POOL
import cate.ops as ops
from cate.util.monitor import ConsoleMonitor
from cate.core.ds import get_metadata_from_descriptor
from cate.ops.io import open_dataset
# the following is needed to run Cate in a Jupyter Notebook
from xcube.util.ipython import enable_asyncio
enable_asyncio()
# utilities
from IPython.display import display
import numpy as np
from datetime import datetime
monitor=ConsoleMonitor()
###Output
_____no_output_____
###Markdown
To begin, let us see which data stores are available in the Data Store Pool.
###Code
DATA_STORE_POOL.store_instance_ids
###Output
_____no_output_____
###Markdown
We see three stores. The 'cci-store' provides access to all datasets from the CCI Open Data Portal. The 'cci-zarr-store' contains selected data from the Open Data Portal, converted to the Zarr format; datasets from this store can be opened and processed faster, but it provides only a small subset of what is offered by the 'cci-store'. Finally, the 'local' data store allows access to locally provided data. Also, when you choose to cache data, you will find it in this store, and cached data can likewise be opened quickly. For this exercise we use the 'cci-zarr-store'. As this data store only holds a few datasets, it is fine to list its content entirely.
###Code
data_store = DATA_STORE_POOL.get_store('cci-zarr-store')
list(data_store.get_data_ids())
###Output
_____no_output_____
###Markdown
There is one permafrost dataset included. We may then proceed to show the contents of this dataset:
###Code
permafrost_descriptor=data_store.describe_data('ESACCI-PERMAFROST-L4-ALT-MODISLST-AREA4_PP-1997-2018-fv02.0.zarr')
display(permafrost_descriptor)
###Output
_____no_output_____
###Markdown
Now let us open it. The parameter 'data_store_id' is not absolutely necessary, but it makes the opening a little faster. The parameter 'normalize' should be used so that the dataset is preprocessed in a way that allows it to be used optimally in Cate.
###Code
permaALTDset=open_dataset(ds_id="ESACCI-PERMAFROST-L4-ALT-MODISLST-AREA4_PP-1997-2018-fv02.0.zarr",
data_store_id='cci-zarr-store',
normalize=True)
###Output
_____no_output_____
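###Markdown
Because `normalize=True` yields a standard xarray dataset, the usual xarray operations can be applied to it directly. As a small added sketch (the dimension name 'time' is an assumption based on the normalization step), the record can be reduced to a single mean field:
###Code
# Illustrative sketch only: mean active layer thickness over all time steps.
# The dimension name 'time' is an assumption after normalize=True.
alt_time_mean = permaALTDset.ALT.mean(dim='time')
alt_time_mean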
###Markdown
Now that we have opened the dataset, we can plot it:
###Code
%matplotlib inline
import matplotlib.pyplot as mpl
import cartopy.crs as ccrs
it=0
mpl.figure(figsize=(16,10))
crs=ccrs.NorthPolarStereo(0,true_scale_latitude=71)
ax = mpl.subplot(projection=crs)
permaALTDset.ALT[it,:,:].plot.imshow(vmin=0,vmax=4,ax=ax)
ax.coastlines(resolution='10m')
ax.set_title("Permafrost active layer thickness in the Northern hemisphere")
###Output
_____no_output_____ |
DIP/DIP.ipynb | ###Markdown
Utility Functions
###Code
def pprint_df(dframe):
print(tabulate(dframe, headers='keys', tablefmt='psql', showindex=True))
def clamp(min_v, max_v, val):
return min(max(val, min_v), max_v)
def r(x, i=2):
return str(round(x, i))
def calc_intensity(red, green, blue):
return round((blue + green + red) / 3)
def calc_saturation(red, green, blue):
minimum = min(red, green, blue)
return round((1 - (3 / (red + green + blue + 0.001) * minimum))*255)
def calc_hue(red, green, blue):
minv = min(red, green, blue)
maxv = max(red, green, blue)
if minv == maxv:
return 0
hue = 0
if maxv == red:
hue = (green - blue) / (maxv - minv)
elif maxv == green:
hue = 2 + (blue - red) / (maxv - minv)
else:
hue = 4 + (red - green) / (maxv - minv)
hue = hue * 60;
hue = hue + 360 if hue < 0 else hue;
return round(hue)
def rgb_to_hsi(R, G, B):
H = list(map(lambda x: calc_hue(R[x], G[x], B[x]), range(len(R))))
S = list(map(lambda x: calc_saturation(R[x], G[x], B[x]), range(len(R))))
I = list(map(lambda x: calc_intensity(R[x], G[x], B[x]), range(len(R))))
return (H, S, I)
def hsi_to_rgb(H, S, I):
def calc_rgb(hsi):
h, s, i = hsi
rad = math.pi / 180
s = s/255
i = i/255
r, g, b = 0, 0, 0
if h >= 0 and h < 120:
r = i * (1 + ((s * math.cos(h * rad)) / math.cos((60 - h) * rad)))
b = i * (1 - s)
g = i * 3 - (r + b)
elif h >= 120 and h < 240:
h = h - 120
g = i * (1 + ((s * math.cos(h * rad)) / math.cos((60 - h) * rad)))
r = i * (1 - s)
b = i * 3 - (r + g)
else:
h = h - 240
b = i * (1 + ((s * math.cos(h * rad)) / math.cos((60 - h) * rad)))
g = i * (1 - s)
r = i * 3 - (b + g)
return [round(clamp(0, 1, x) * 255) for x in (r, g, b)]
return list(map(calc_rgb, list(zip(H, S, I))))
def calculate_hist(plane, L):
hist = np.bincount(plane)
pdf = hist / L
cdf = np.cumsum(pdf)
lookup = cdf * bit_rr
df = pd.DataFrame({
'count': hist.flatten(),
'pdf': pdf.flatten(),
'cdf': cdf.flatten(),
'lookup': [clamp(0, 255, x) for x in lookup.flatten()]
})
df = df[df['count'] != 0]
return df
def rescale_range(range, s_min, s_max):
r_min = min(range)
r_max = max(range)
return list(map(lambda r:round(((((s_max - s_min)/(r_max - r_min)))*(r-r_min)+ s_min)), range))
###Output
_____no_output_____
###Markdown
Range Scaling
###Code
rescale_range([0, 10, 20, 30, 80, 100, 255], 175, 255)
###Output
_____no_output_____
###Markdown
Bit Plane Slicing
###Code
img = np.array([
[255, 200, 150],
[120, 140, 250],
[20, 50, 20],
[100, 50, 200]
], np.uint8)
bit_rate = 8;
bin_img = []
for i in range(img.shape[0]):
for j in range(img.shape[1]):
bin_img.append(np.binary_repr(img[i][j], width=bit_rate))
for plane in range(bit_rate):
print(f'\n{plane+1} bit plane')
sliced = (np.array([int(i[plane]) for i in bin_img], dtype = np.uint8)).reshape(img.shape[0],img.shape[1])
reconstructed = (np.array([int(i[plane]) for i in bin_img], dtype = np.uint8) * 2 ** plane).reshape(img.shape[0],img.shape[1])
pprint_df(pd.DataFrame(sliced))
print(f'\n{plane+1} bit plane (reconstructed)')
pprint_df(pd.DataFrame(reconstructed))
###Output
1 bit plane
+----+-----+-----+-----+
| | 0 | 1 | 2 |
|----+-----+-----+-----|
| 0 | 1 | 1 | 1 |
| 1 | 0 | 1 | 1 |
| 2 | 0 | 0 | 0 |
| 3 | 0 | 0 | 1 |
+----+-----+-----+-----+
1 bit plane (reconstructed)
+----+-----+-----+-----+
| | 0 | 1 | 2 |
|----+-----+-----+-----|
| 0 | 1 | 1 | 1 |
| 1 | 0 | 1 | 1 |
| 2 | 0 | 0 | 0 |
| 3 | 0 | 0 | 1 |
+----+-----+-----+-----+
2 bit plane
+----+-----+-----+-----+
| | 0 | 1 | 2 |
|----+-----+-----+-----|
| 0 | 1 | 1 | 0 |
| 1 | 1 | 0 | 1 |
| 2 | 0 | 0 | 0 |
| 3 | 1 | 0 | 1 |
+----+-----+-----+-----+
2 bit plane (reconstructed)
+----+-----+-----+-----+
| | 0 | 1 | 2 |
|----+-----+-----+-----|
| 0 | 2 | 2 | 0 |
| 1 | 2 | 0 | 2 |
| 2 | 0 | 0 | 0 |
| 3 | 2 | 0 | 2 |
+----+-----+-----+-----+
3 bit plane
+----+-----+-----+-----+
| | 0 | 1 | 2 |
|----+-----+-----+-----|
| 0 | 1 | 0 | 0 |
| 1 | 1 | 0 | 1 |
| 2 | 0 | 1 | 0 |
| 3 | 1 | 1 | 0 |
+----+-----+-----+-----+
3 bit plane (reconstructed)
+----+-----+-----+-----+
| | 0 | 1 | 2 |
|----+-----+-----+-----|
| 0 | 4 | 0 | 0 |
| 1 | 4 | 0 | 4 |
| 2 | 0 | 4 | 0 |
| 3 | 4 | 4 | 0 |
+----+-----+-----+-----+
4 bit plane
+----+-----+-----+-----+
| | 0 | 1 | 2 |
|----+-----+-----+-----|
| 0 | 1 | 0 | 1 |
| 1 | 1 | 0 | 1 |
| 2 | 1 | 1 | 1 |
| 3 | 0 | 1 | 0 |
+----+-----+-----+-----+
4 bit plane (reconstructed)
+----+-----+-----+-----+
| | 0 | 1 | 2 |
|----+-----+-----+-----|
| 0 | 8 | 0 | 8 |
| 1 | 8 | 0 | 8 |
| 2 | 8 | 8 | 8 |
| 3 | 0 | 8 | 0 |
+----+-----+-----+-----+
5 bit plane
+----+-----+-----+-----+
| | 0 | 1 | 2 |
|----+-----+-----+-----|
| 0 | 1 | 1 | 0 |
| 1 | 1 | 1 | 1 |
| 2 | 0 | 0 | 0 |
| 3 | 0 | 0 | 1 |
+----+-----+-----+-----+
5 bit plane (reconstructed)
+----+-----+-----+-----+
| | 0 | 1 | 2 |
|----+-----+-----+-----|
| 0 | 16 | 16 | 0 |
| 1 | 16 | 16 | 16 |
| 2 | 0 | 0 | 0 |
| 3 | 0 | 0 | 16 |
+----+-----+-----+-----+
6 bit plane
+----+-----+-----+-----+
| | 0 | 1 | 2 |
|----+-----+-----+-----|
| 0 | 1 | 0 | 1 |
| 1 | 0 | 1 | 0 |
| 2 | 1 | 0 | 1 |
| 3 | 1 | 0 | 0 |
+----+-----+-----+-----+
6 bit plane (reconstructed)
+----+-----+-----+-----+
| | 0 | 1 | 2 |
|----+-----+-----+-----|
| 0 | 32 | 0 | 32 |
| 1 | 0 | 32 | 0 |
| 2 | 32 | 0 | 32 |
| 3 | 32 | 0 | 0 |
+----+-----+-----+-----+
7 bit plane
+----+-----+-----+-----+
| | 0 | 1 | 2 |
|----+-----+-----+-----|
| 0 | 1 | 0 | 1 |
| 1 | 0 | 0 | 1 |
| 2 | 0 | 1 | 0 |
| 3 | 0 | 1 | 0 |
+----+-----+-----+-----+
7 bit plane (reconstructed)
+----+-----+-----+-----+
| | 0 | 1 | 2 |
|----+-----+-----+-----|
| 0 | 64 | 0 | 64 |
| 1 | 0 | 0 | 64 |
| 2 | 0 | 64 | 0 |
| 3 | 0 | 64 | 0 |
+----+-----+-----+-----+
8 bit plane
+----+-----+-----+-----+
| | 0 | 1 | 2 |
|----+-----+-----+-----|
| 0 | 1 | 0 | 0 |
| 1 | 0 | 0 | 0 |
| 2 | 0 | 0 | 0 |
| 3 | 0 | 0 | 0 |
+----+-----+-----+-----+
8 bit plane (reconstructed)
+----+-----+-----+-----+
| | 0 | 1 | 2 |
|----+-----+-----+-----|
| 0 | 128 | 0 | 0 |
| 1 | 0 | 0 | 0 |
| 2 | 0 | 0 | 0 |
| 3 | 0 | 0 | 0 |
+----+-----+-----+-----+
###Markdown
Dilation & Erosion
###Code
img = np.array([
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 0, 0, 0],
[0, 1, 1, 1, 1, 0, 0],
[0, 1, 1, 1, 1, 1, 0],
[0, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
], np.uint8)
kernel = np.array([
[0, 1, 0],
[1, 1, 1],
[0, 1, 0]
], np.uint8)
img_dilation = cv2.dilate(img, kernel, iterations=1)
print('Dilated:')
pprint_df(pd.DataFrame(img_dilation))
img_erosion = cv2.erode(img, kernel, iterations=1)
print('\nEroded:')
pprint_df(pd.DataFrame(img_erosion))
###Output
Dilated:
+----+-----+-----+-----+-----+-----+-----+-----+
| | 0 | 1 | 2 | 3 | 4 | 5 | 6 |
|----+-----+-----+-----+-----+-----+-----+-----|
| 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 |
| 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 |
| 2 | 1 | 1 | 1 | 1 | 1 | 1 | 0 |
| 3 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 4 | 1 | 1 | 1 | 1 | 1 | 1 | 0 |
| 5 | 0 | 1 | 1 | 1 | 1 | 0 | 0 |
| 6 | 0 | 0 | 1 | 1 | 0 | 0 | 0 |
+----+-----+-----+-----+-----+-----+-----+-----+
Eroded:
+----+-----+-----+-----+-----+-----+-----+-----+
| | 0 | 1 | 2 | 3 | 4 | 5 | 6 |
|----+-----+-----+-----+-----+-----+-----+-----|
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 2 | 0 | 0 | 1 | 1 | 0 | 0 | 0 |
| 3 | 0 | 0 | 1 | 1 | 1 | 0 | 0 |
| 4 | 0 | 0 | 1 | 1 | 0 | 0 | 0 |
| 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+----+-----+-----+-----+-----+-----+-----+-----+
###Markdown
Internal & External Boundaries
###Code
img = np.array([
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0],
[0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
[0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0],
[0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
], np.uint8)
kernel = np.array([
[1, 1, 1],
[1, 1, 1],
[1, 1, 1]
], np.uint8)
img_external = (cv2.dilate(img, kernel, iterations=1) - img)
print('External:')
pprint_df(pd.DataFrame(img_external))
img_internal = (img - cv2.erode(img, kernel, iterations=1))
print('\nInternal:')
pprint_df(pd.DataFrame(img_internal))
###Output
External:
+----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+------+------+------+------+
| | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 |
|----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+------+------+------+------|
| 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
| 2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
| 3 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| 4 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| 5 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 |
| 6 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 |
+----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+------+------+------+------+
Internal:
+----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+------+------+------+------+
| | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 |
|----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+------+------+------+------|
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 0 |
| 2 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 |
| 3 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0 |
| 4 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 |
| 5 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 |
| 6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+------+------+------+------+
###Markdown
Histogram Equalization
###Code
img = np.array([
[10, 25, 160, 10],
[10, 80, 160, 25],
[10, 80, 160, 25],
[90, 80, 160, 90],
], np.uint8)
size = img.shape
bit_rate = 8;
bit_rr = 2**bit_rate
print('Range 0 -', bit_rr)
hist = cv2.calcHist([img], [0], None, [bit_rr], [0,bit_rr])
pdf = hist / (size[0] * size[1])
cdf = np.cumsum(pdf)
lookup = cdf * bit_rr
df = pd.DataFrame({
'count': hist.flatten(),
'pdf': pdf.flatten(),
'cdf': cdf.flatten(),
'lookup': lookup.flatten()
})
df = df[df['count'] != 0]
pprint_df(df)
newImg = np.zeros(size)
for i in range(size[0]):
for j in range(size[1]):
newImg[i][j] = lookup[img[i][j]]
print('\nnew Image:');
pprint_df(newImg)
###Output
Range 0 - 256
+-----+---------+--------+--------+----------+
| | count | pdf | cdf | lookup |
|-----+---------+--------+--------+----------|
| 10 | 4 | 0.25 | 0.25 | 64 |
| 25 | 3 | 0.1875 | 0.4375 | 112 |
| 80 | 3 | 0.1875 | 0.625 | 160 |
| 90 | 2 | 0.125 | 0.75 | 192 |
| 160 | 4 | 0.25 | 1 | 256 |
+-----+---------+--------+--------+----------+
new Image:
+----+-----+-----+-----+-----+
| | 0 | 1 | 2 | 3 |
|----+-----+-----+-----+-----|
| 0 | 64 | 112 | 256 | 64 |
| 1 | 64 | 160 | 256 | 112 |
| 2 | 64 | 160 | 256 | 112 |
| 3 | 192 | 160 | 256 | 192 |
+----+-----+-----+-----+-----+
###Markdown
Colored Image Histogram Equalization
###Code
# RGB Regions
regions = np.array([
[250, 200, 150],
[120, 140, 250],
[40, 100, 40],
[110, 50, 200]
])
# per plane
bit_rate = 8;
bit_rr = 2**bit_rate
R, G, B = list(zip(*regions))
###Output
_____no_output_____
###Markdown
Equalization in RGB Space
###Code
print('R Plane: ')
r_df = calculate_hist(R, len(regions))
pprint_df(r_df)
print('\nG Plane: ')
g_df= calculate_hist(G, len(regions))
pprint_df(g_df)
print('\nB Plane: ')
b_df = calculate_hist(B, len(regions))
pprint_df(b_df)
newR = list(map(lambda x: r_df.loc[[x]]['lookup'].to_numpy()[0], R))
newG = list(map(lambda x: g_df.loc[[x]]['lookup'].to_numpy()[0], G))
newB = list(map(lambda x: b_df.loc[[x]]['lookup'].to_numpy()[0], B))
print('\nRGB Equalized Regions:')
print(np.vstack(list(zip(newR, newG, newB))))
###Output
R Plane:
+-----+---------+-------+-------+----------+
| | count | pdf | cdf | lookup |
|-----+---------+-------+-------+----------|
| 40 | 1 | 0.25 | 0.25 | 64 |
| 110 | 1 | 0.25 | 0.5 | 128 |
| 120 | 1 | 0.25 | 0.75 | 192 |
| 250 | 1 | 0.25 | 1 | 255 |
+-----+---------+-------+-------+----------+
G Plane:
+-----+---------+-------+-------+----------+
| | count | pdf | cdf | lookup |
|-----+---------+-------+-------+----------|
| 50 | 1 | 0.25 | 0.25 | 64 |
| 100 | 1 | 0.25 | 0.5 | 128 |
| 140 | 1 | 0.25 | 0.75 | 192 |
| 200 | 1 | 0.25 | 1 | 255 |
+-----+---------+-------+-------+----------+
B Plane:
+-----+---------+-------+-------+----------+
| | count | pdf | cdf | lookup |
|-----+---------+-------+-------+----------|
| 40 | 1 | 0.25 | 0.25 | 64 |
| 150 | 1 | 0.25 | 0.5 | 128 |
| 200 | 1 | 0.25 | 0.75 | 192 |
| 250 | 1 | 0.25 | 1 | 255 |
+-----+---------+-------+-------+----------+
RGB Equalized Regions:
[[255. 255. 128.]
[192. 192. 255.]
[ 64. 128. 64.]
[128. 64. 192.]]
###Markdown
Equalization in HSI
###Code
print('RGB values:')
print(np.vstack(regions))
# HSI
H, S, I = rgb_to_hsi(R, G, B)
print('\nRGB to HSI:')
print(np.vstack(list(zip(H, S, I))))
# equalize Intensity plane
hist = calculate_hist(I, len(regions))
print('\nEqualize Intensity: ')
pprint_df(hist)
newI = list(map(lambda x: hist.loc[[x]]['lookup'].to_numpy()[0], I))
print('\nNew Regions in HSI:')
print(np.vstack(list(zip(H, S, newI))))
print('\nNew Region in RGB:')
print(np.vstack(hsi_to_rgb(H, S, newI)))
###Output
RGB values:
[[250 200 150]
[120 140 250]
[ 40 100 40]
[110 50 200]]
RGB to HSI:
[[ 30 64 200]
[231 75 170]
[120 85 60]
[264 149 120]]
Equalize Intensity:
+-----+---------+-------+-------+----------+
| | count | pdf | cdf | lookup |
|-----+---------+-------+-------+----------|
| 60 | 1 | 0.25 | 0.25 | 64 |
| 120 | 1 | 0.25 | 0.5 | 128 |
| 170 | 1 | 0.25 | 0.75 | 192 |
| 200 | 1 | 0.25 | 1 | 255 |
+-----+---------+-------+-------+----------+
New Regions in HSI:
[[ 30. 64. 255.]
[231. 75. 192.]
[120. 85. 64.]
[264. 149. 128.]]
New Region in RGB:
[[255 255 191]
[136 160 255]
[ 43 107 43]
[118 53 212]]
###Markdown
Nearest Neighbour Interpolation
###Code
def nni(image, point):
max_x = round(point[0])
max_y = round(point[1])
print(f"x: {max_x}, y: {max_y}")
return image[max_x, max_y]
img = np.array([
[10, 30],
[20, 90],
], np.uint8)
# NOTE: convert MATLAB indices to Python indices (i.e. Python indexing starts from 0 instead of 1)
val = nni(img, (0.5, 0.6))
print(f'val: {val}')
###Output
x: 0, y: 1
val: 30
###Markdown
Bilinear Interpolation
###Code
def bli(img, point):
point = point[1], point[0]
x = math.floor(point[0])
y = math.floor(point[1])
x_offset = point[0] - x
y_offset = point[1] - y
print(f'{img[x, y]} {img[x, y+1]}')
print(f'{img[x+1, y]} {img[x+1, y+1]}')
ix1 = (1-x_offset) * img[x, y] + (x_offset) * img[x, y+1]
ix2 = (1-x_offset) * img[x+1, y] + (x_offset) * img[x+1, y+1]
p = y_offset * ix2 + (1-y_offset) * ix1
print("")
display(Latex(f'I_{{({x}, {point[1]})}} = \\frac{{{x+1} - {point[0]}}}{{{x+1} - {x}}} \\cdotp {img[x, y]} + \\frac{{{point[0]} - {x}}}{{{x+1} - {x}}} \\cdotp {img[x, y+1]} = {r(ix1)}'))
print("")
display(Latex(f'I_{{({x+1}, {point[1]})}} = \\frac{{{x+1} - {point[0]}}}{{{x+1} - {x}}} \\cdotp {img[x+1, y]} + \\frac{{{point[0]} - {x}}}{{{x+1} - {x}}} \\cdotp {img[x+1, y+1]} = {r(ix2)}'))
print("")
display(Latex(f'I_{{({point[0]}, {point[1]})}} = \\frac{{{y+1} - {point[1]}}}{{{y+1} - {y}}} \\cdotp {r(ix1)} + \\frac{{{point[1]} - {y}}}{{{y+1} - {y}}} \\cdotp {r(ix2)} = {r(p)}'))
def bli2(img, point):
x = math.floor(point[0])
y = math.floor(point[1])
x_offset = point[0] - x
y_offset = point[1] - y
print(f'{img[x, y]} {img[x, y+1]}')
print(f'{img[x+1, y]} {img[x+1, y+1]}')
print(f'= [({r(1-x_offset)} * {img[x, y]}) + ({r(x_offset)} * {img[x+1, y]})] * {r(1-y_offset)} + [({r(1-x_offset)} * {img[x, y+1]}) + ({r(x_offset)} * {img[x+1, y+1]})] * {r(y_offset)}')
print(f'= [{r((1-x_offset) * img[x, y])} + {r((x_offset) * img[x+1, y])}] * {r(1-y_offset)} + [{r((1-x_offset) * img[x, y+1])} + {r((x_offset) * img[x+1, y+1])}] * {r(y_offset)}')
print(f'= {r((1-x_offset) * img[x, y] + (x_offset) * img[x+1, y])} * {r(1-y_offset)} + {r((1-x_offset) * img[x, y+1] + (x_offset) * img[x+1, y+1])} * {r(y_offset)}')
print(f'= {r((((1-x_offset) * img[x, y] + (x_offset) * img[x+1, y]) * (1-y_offset)) + (((1-x_offset) * img[x, y+1] + (x_offset) * img[x+1, y+1]) * (y_offset)))}')
img = np.array([
[10, 40, 3],
[80, 90, 3],
[2, 2, 1],
], np.uint8)
# NOTE: convert MATLAB indices to Python indices (i.e. Python indexing starts from 0 instead of 1)
bli(img, (0.2, 0.8))
# bli(img, (0.2, 0.5))
# bli(img, (0.5, 0.2))
# bli2(img, (0.2, 0.5))
# bli2(img, (1.6, 1.7))
###Output
10 40
80 90
###Markdown
Color Models RGB to HSI
###Code
def rgb2hsi(r, g, b):
re, ge, be = sp.symbols('r g b')
ie = sp.simplify("1/3") * (re + ge + be)
se = sp.simplify("1") - 1/ie * min(r, g, b)
te = sp.acos((1/2*((re-ge) + (re-be))) / sp.sqrt((re- ge)**2 + (re-be) * (ge-be)))
display(sp.Eq(sp.symbols('I'), ie))
print("")
display(sp.Eq(sp.symbols('S'), se))
print("")
display(sp.Eq(sp.symbols('theta'), te))
i = sp.lambdify((re, ge, be), ie)(r, g, b)
s = sp.lambdify((re, ge, be), se)(r, g, b)
t = sp.lambdify((re, ge, be), te)(r, g, b)
t = 0 if np.isnan(t) else t
h = t if b <= g else 2*math.pi - t
display(sp.Eq(sp.symbols('theta'), t))
display(sp.Eq(sp.symbols('H'), h))
display(sp.Eq(sp.symbols('S'), s))
display(sp.Eq(sp.symbols('I'), i))
return h * 180 / math.pi, s*255, i
h, s, i = rgb2hsi(100, 50, 200)
print("")
print(r(h, 0), r(s, 0), r(i, 0))
###Output
_____no_output_____
###Markdown
HSI to RGB
###Code
def hsi2rgb(h, s, i):
he, se, ie = sp.symbols('H S I')
rs, gs, bs = sp.symbols('R G B')
h = h * math.pi / 180
print(h, s, i)
if h >= 0 and h < 2 * math.pi / 3:
display("h >= 0 and h < 120")
re = ie * (1 + ((se * sp.cos(he)) / sp.cos(sp.pi/3 - he)))
be = ie * (1 - se)
ge = 3 * ie - (rs + bs)
r = sp.lambdify((he, se, ie), re)(h, s, i)
b = sp.lambdify((he, se, ie), be)(h, s, i)
g = sp.lambdify((rs, bs, ie), ge)(r, b, i)
elif h >= 2 * math.pi / 3 and h < 4 * math.pi / 3:
display("h >= 120 and h < 240")
h = h - 2 * math.pi / 3
re = ie * (1 - se)
ge = ie * (1 + ((se * sp.cos(he)) / sp.cos(sp.pi/3 - he)))
be = 3 * ie - (rs + gs)
r = sp.lambdify((he, se, ie), re)(h, s, i)
g = sp.lambdify((he, se, ie), ge)(h, s, i)
b = sp.lambdify((rs, gs, ie), be)(r, g, i)
else:
display("h >= 240 and h < 360")
h = h - 4 * math.pi / 3
be = ie * (1 + ((se * sp.cos(he)) / sp.cos(sp.pi/3 - he)))
ge = ie * (1 - se)
re = 3 * ie - (gs + bs)
b = sp.lambdify((he, se, ie), be)(h, s, i)
g = sp.lambdify((he, se, ie), ge)(h, s, i)
r = sp.lambdify((gs, bs, ie), re)(g, b, i)
display(sp.Eq(rs, re))
print("")
display(sp.Eq(gs, ge))
print("")
display(sp.Eq(bs, be))
print("")
display(sp.Eq(rs, r ))
print("")
display(sp.Eq(gs, g ))
print("")
display(sp.Eq(bs, b))
print("")
return [round(clamp(0, 1, x) * 255) for x in (r, g, b)]
hsi2rgb(260, 146/255, 117/255)
###Output
4.537856055185257 0.5725490196078431 0.4588235294117647
###Markdown
Distance Measure
###Code
def euclidean_distance(p, q):
d = math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)
display(Latex('D_e(p, q) = \\sqrt{(x - s)^2 + (y - t)^2}'))
display(Latex(f'D_e(p, q) = \\sqrt{{({p[0]} - {q[0]})^2 + ({p[1]} - {q[1]})^2}}'))
display(Latex(f'D_e(p, q) = \\sqrt{{{(p[0] - q[0]) ** 2} + {(p[1] - q[1]) ** 2}}}'))
display(Latex(f'D_e(p, q) = {r(d)}'))
return d
euclidean_distance((10, 20), (12, 22))
def city_block_distance(p, q):
d = abs(p[0] - q[0]) + abs(p[1] - q[1])
display(Latex('D_4(p, q) = \\left|x - s\\right| + \\left|y - t\\right|'))
display(Latex(f'D_4(p, q) = \\left|{p[0]} - {q[0]}\\right| + \\left|{p[1]} - {q[1]}\\right|'))
display(Latex(f'D_4(p, q) = {abs(p[0] - q[0])} + {abs(p[1] - q[1])}'))
display(Latex(f'D_4(p, q) = {r(d)}'))
return d
city_block_distance((10, 20), (12, 22))
def chess_board_distance(p, q):
d = max(abs(p[0] - q[0]), abs(p[1] - q[1]))
display(Latex('D_8(p, q) = max\\left(\\left|x - s\\right|, \\left|y - t\\right|\\right)'))
display(Latex(f'D_8(p, q) = max\\left(\\left|{p[0]} - {q[0]}\\right|, \\left|{p[1]} - {q[1]}\\right|\\right)'))
display(Latex(f'D_8(p, q) = max\\left({abs(p[0] - q[0])}, {abs(p[1] - q[1])}\\right)'))
display(Latex(f'D_8(p, q) = {r(d)}'))
return d
chess_board_distance((10, 20), (12, 22))
###Output
_____no_output_____ |
Testing Distributions.ipynb | ###Markdown
How normal are you? Checking distributional assumptions The need to understand the underlying distribution of data is critical in most parts of quantitative finance. Statistical tests can be applied for this purpose. The assumption of a normal distribution is commonplace, so let's start with simple tests to check whether data follows a normal distribution.
###Code
import warnings
warnings.filterwarnings('ignore')
import numpy as np
from scipy import stats
# Create a normal distribution
np.random.seed(42)
mean = 1.2
std = 0.3
n = 10000
returns = np.random.normal(mean, std, n)
# Shapiro-Wilk test for normality
# See here for more details https://docs.scipy.org/doc/scipy-0.19.1/reference/generated/scipy.stats.shapiro.html
print('Test stat: ', stats.shapiro(returns)[0],
'p value: ', stats.shapiro(returns)[1])
###Output
Test stat: 0.9999347925186157 p value: 0.9987024664878845
###Markdown
The Shapiro-Wilk test here is read at the 95% confidence level, and the null hypothesis is that the returns follow a normal distribution. Since the p-value >> 0.05 (i.e. 5%), we fail to reject the null hypothesis that the returns follow a normal distribution.
###Code
# let's create another distribution, this time an exponential one
returns_exp = np.random.exponential(0.2, n)
import matplotlib
import matplotlib.pyplot as plt
% matplotlib inline
plt.figure(figsize=(12,6))
plt.subplot(121)
plt.hist(returns_exp, bins=30)
plt.title('Exponential')
plt.subplot(122)
plt.hist(returns, bins=30)
plt.title('Normal')
plt.show()
print('Test stat: ', stats.shapiro(returns_exp)[0],
'p value: ', stats.shapiro(returns_exp)[1])
###Output
Test stat: 0.8156617879867554 p value: 0.0
###Markdown
Compare and contrast: in this case, the p-value is < 0.05, so there is enough evidence to reject the null hypothesis that the distribution is normal. Now let's check the normality of real-world distributions.
###Code
import quandl
import datetime
quandl.ApiConfig.api_key = ""
end = datetime.datetime.now()
start = end - datetime.timedelta(365*5)
AAPL = quandl.get('EOD/AAPL', start_date=start, end_date=end)
AAPLreturns = (AAPL['Close']/AAPL['Close'].shift(1))-1
# Let's plot the distribution of prices and returns
plt.figure(figsize=(16,6))
plt.subplot(121)
AAPL['Close'].hist(bins=100, label='AAPL_Prices')
plt.subplot(122)
AAPLreturns.hist(bins=100, label='AAPL_Returns')
plt.show()
###Output
_____no_output_____
###Markdown
Price levels are obviously not normal, but returns look close
###Code
AAPL_array = np.array(AAPL['Close'])
print('AAPL Prices\nTest statistic: ', stats.shapiro(AAPL_array)[0],
'\nP-value: ', stats.shapiro(AAPL_array)[1])
AAPLreturns_array = np.array(AAPLreturns.dropna())
print('AAPL Returns\nTest statistic: ', stats.shapiro(AAPLreturns_array)[0],
'\nP-value: ', stats.shapiro(AAPLreturns_array)[1])
###Output
AAPL Returns
Test statistic: 0.3324963450431824
P-value: 0.0
###Markdown
It looks like neither is normal, since the p-value is < 5%. Another test: Anderson-Darling
###Code
print('Anderson Darling test')
print('Stats for normal distribution\n', stats.anderson(returns))
print('\n')
print('Stats for exp distribution\n', stats.anderson(returns_exp))
print('\n')
print('Stats for AAPL Returns\'s distribution\n', stats.anderson(AAPLreturns_array))
###Output
Anderson Darling test
Stats for normal distribution
AndersonResult(statistic=0.093276307066844311, critical_values=array([ 0.576, 0.656, 0.787, 0.918, 1.092]), significance_level=array([ 15. , 10. , 5. , 2.5, 1. ]))
Stats for exp distribution
AndersonResult(statistic=461.96932223869953, critical_values=array([ 0.576, 0.656, 0.787, 0.918, 1.092]), significance_level=array([ 15. , 10. , 5. , 2.5, 1. ]))
Stats for AAPL Returns's distribution
AndersonResult(statistic=127.40971068941758, critical_values=array([ 0.574, 0.654, 0.785, 0.915, 1.089]), significance_level=array([ 15. , 10. , 5. , 2.5, 1. ]))
###Markdown
Here, we have three sets of values: the Anderson-Darling test statistic, a set of critical values, and a set of corresponding significance levels (15 percent, 10 percent, 5 percent, 2.5 percent, and 1 percent), as shown in the previous output. For the normal distribution, if we choose the 1 percent level (the last of the critical values), we see 1.092. The test statistic of about 0.09 is < 1.092, hence the null hypothesis of a normal distribution cannot be rejected. The converse is true for the exponential distribution and the distribution of AAPL returns, which means the null hypothesis of normality can be rejected for both. The stats.anderson function can test for other distributions too
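Rather than reading the arrays by eye, the comparison of the statistic against each critical value can also be done programmatically. The snippet below is an added, self-contained sketch (it draws a fresh normal sample instead of reusing the variables above):
###Code
import numpy as np
from scipy import stats

np.random.seed(0)
sample = np.random.normal(0, 1, 1000)        # fresh normal sample for the illustration
result = stats.anderson(sample, dist='norm')
print('Test statistic:', result.statistic)
for crit, sig in zip(result.critical_values, result.significance_level):
    decision = 'reject normality' if result.statistic > crit else 'cannot reject normality'
    print(f'{sig:>5.1f}% level: critical value {crit:.3f} -> {decision}')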
###Code
help(stats.anderson)
###Output
Help on function anderson in module scipy.stats.morestats:
anderson(x, dist='norm')
Anderson-Darling test for data coming from a particular distribution
The Anderson-Darling test is a modification of the Kolmogorov-
Smirnov test `kstest` for the null hypothesis that a sample is
drawn from a population that follows a particular distribution.
For the Anderson-Darling test, the critical values depend on
which distribution is being tested against. This function works
for normal, exponential, logistic, or Gumbel (Extreme Value
Type I) distributions.
Parameters
----------
x : array_like
array of sample data
dist : {'norm','expon','logistic','gumbel','gumbel_l', gumbel_r',
'extreme1'}, optional
the type of distribution to test against. The default is 'norm'
and 'extreme1', 'gumbel_l' and 'gumbel' are synonyms.
Returns
-------
statistic : float
The Anderson-Darling test statistic
critical_values : list
The critical values for this distribution
significance_level : list
The significance levels for the corresponding critical values
in percents. The function returns critical values for a
differing set of significance levels depending on the
distribution that is being tested against.
Notes
-----
Critical values provided are for the following significance levels:
normal/exponenential
15%, 10%, 5%, 2.5%, 1%
logistic
25%, 10%, 5%, 2.5%, 1%, 0.5%
Gumbel
25%, 10%, 5%, 2.5%, 1%
If A2 is larger than these critical values then for the corresponding
significance level, the null hypothesis that the data come from the
chosen distribution can be rejected.
References
----------
.. [1] http://www.itl.nist.gov/div898/handbook/prc/section2/prc213.htm
.. [2] Stephens, M. A. (1974). EDF Statistics for Goodness of Fit and
Some Comparisons, Journal of the American Statistical Association,
Vol. 69, pp. 730-737.
.. [3] Stephens, M. A. (1976). Asymptotic Results for Goodness-of-Fit
Statistics with Unknown Parameters, Annals of Statistics, Vol. 4,
pp. 357-369.
.. [4] Stephens, M. A. (1977). Goodness of Fit for the Extreme Value
Distribution, Biometrika, Vol. 64, pp. 583-588.
.. [5] Stephens, M. A. (1977). Goodness of Fit with Special Reference
to Tests for Exponentiality , Technical Report No. 262,
Department of Statistics, Stanford University, Stanford, CA.
.. [6] Stephens, M. A. (1979). Tests of Fit for the Logistic Distribution
Based on the Empirical Distribution Function, Biometrika, Vol. 66,
pp. 591-595.
###Markdown
** The 4 moments of a distribution ** It's also useful to go beyond the usual mean and standard deviation into the other moments, as they provide us with clues on the normality of a distribution. Mean:$$ Mean = \bar R = \frac{\sum_i^n R_i}{n} $$Variance/Standard Deviation:$$ Variance = (Standard Deviation)^2 = \sigma^2 = \frac{\sum_i^n (R_i-\bar R)^2}{n-1} $$ Skew (skewed left, skewed right):$$ Skew = \frac{\sum_i^n (R_i-\bar R)^3}{(n-1)\sigma^3} $$ Kurtosis/Excess Kurtosis (peakedness or fat tails):$$ Kurtosis = \frac{\sum_i^n (R_i-\bar R)^4}{(n-1)\sigma^4} $$ or$$ Excess\ Kurtosis = \frac{\sum_i^n (R_i-\bar R)^4}{(n-1)\sigma^4} - 3 $$ The way to do it with libraries is much simpler than the above
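As an added cross-check of these formulas, the third and fourth moments can be computed by hand and compared with the library calls. Note that scipy's default estimators use the biased sample moments (dividing by $n$ and using the population standard deviation) rather than the $n-1$ versions written above, so the by-hand versions below follow that convention:
###Code
import numpy as np
from scipy import stats

np.random.seed(1)
x = np.random.normal(0, 1, 10000)

# biased-moment versions (divide by n, population standard deviation)
manual_skew = np.mean((x - x.mean())**3) / x.std()**3
manual_excess_kurtosis = np.mean((x - x.mean())**4) / x.std()**4 - 3

print(np.allclose(manual_skew, stats.skew(x)))                 # matches scipy's default
print(np.allclose(manual_excess_kurtosis, stats.kurtosis(x)))  # matches scipy's default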
###Code
print('\n Mean: ', np.mean(returns), '\n',
'Standard Deviation: ', np.std(returns), '\n',
'Skew: ', stats.skew(returns), '\n',
'Kurtosis', stats.kurtosis(returns))
###Output
Mean: 1.19935920499
Standard Deviation: 0.301023661839
Skew: 0.0019636977663557873
Kurtosis 0.026479272360443673
###Markdown
For a normal distribution, skewness ~ 0 and kurtosis ~ 3. Here `stats.kurtosis` computes excess kurtosis (kurtosis minus 3), so the value is close to 0, and the skew is likewise close to 0.
###Code
print ('Moments for AAPL Returns')
print('\n Mean: ', np.mean(AAPLreturns_array), '\n',
'Standard Deviation: ', np.std(AAPLreturns_array), '\n',
'Skew: ', stats.skew(AAPLreturns_array), '\n',
'Kurtosis', stats.kurtosis(AAPLreturns_array))
###Output
Moments for AAPL Returns
Mean: 0.000328091673061
Standard Deviation: 0.0279855695958
Skew: -22.675005004193835
Kurtosis 691.1559815602365
###Markdown
Obviously not normal!
###Code
# T tests - test statistic follows a student's t distribution if the null hypothesis is supported
# Recall that the mean of our rets was 1.2
# Let's apply t and p tests on different means
print('\n Mean of 0.2\n', stats.ttest_1samp(returns, 0.2))
###Output
Mean of 0.2
Ttest_1sampResult(statistic=331.9703274071677, pvalue=0.0)
###Markdown
We reject the null hypothesis since the T-value is huge and the P-value is 0.
###Code
print('\n Mean of 1.205\n', stats.ttest_1samp(returns, 1.205))
###Output
Mean of 1.205
Ttest_1sampResult(statistic=-1.8737772736094489, pvalue=0.060990271728498281)
###Markdown
The p-value here is about 0.06, so at the 5% significance level we cannot reject the null hypothesis that the mean is 1.205. **Tests of Equal Variances**
###Code
# Bartlett test
# Create a normal distribution with std of 0.6
np.random.seed(42)
mean = 1.2
std = 0.6
n = 10000
returns_2 = np.random.normal(mean, std, n)
np.std(returns)
stats.bartlett(returns, returns_2)
###Output
_____no_output_____
###Markdown
The p-value is effectively zero, so we reject the null hypothesis of equal variances
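Bartlett's test assumes the samples are normally distributed; when that assumption is doubtful, Levene's test is a common, more robust alternative. A sketch using the same arrays defined above:
```python
# Levene's test does not assume normality, unlike Bartlett's test
stats.levene(returns, returns_2)
```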
###Code
np.random.seed(42)
mean = 1.2
std = 0.3000
n = 10000
returns_3 = np.random.normal(mean, std, n)
stats.bartlett(returns, returns_3)
###Output
_____no_output_____
###Markdown
The p-value is large, so we cannot reject the null hypothesis of equal variances. **Test seasonality** Check whether AAPL's January prices are statistically different from those of the other months
###Code
AAPL_Jan = AAPL[AAPL.index.month==1]['Close']
AAPL_OtherMonths = AAPL[AAPL.index.month!=1]['Close']
stats.ttest_ind(AAPL_Jan, AAPL_OtherMonths)
###Output
_____no_output_____ |
HMM/HMM Tagger.ipynb | ###Markdown
Project: Part of Speech Tagging with Hidden Markov Models --- IntroductionPart of speech tagging is the process of determining the syntactic category of a word from the words in its surrounding context. It is often used to help disambiguate natural language phrases because it can be done quickly with high accuracy. Tagging can be used for many NLP tasks like determining correct pronunciation during speech synthesis (for example, _dis_-count as a noun vs dis-_count_ as a verb), for information retrieval, and for word sense disambiguation.In this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/) library to build a hidden Markov model for part of speech tagging using a "universal" tagset. Hidden Markov models have been able to achieve [>96% tag accuracy with larger tagsets on realistic text corpora](http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf). Hidden Markov models have also been used for speech recognition and speech generation, machine translation, gene recognition for bioinformatics, and human gesture recognition for computer vision, and more. The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated to complete the project; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you must provide code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully! **Note:** Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You must then **export the notebook** by running the last cell in the notebook, or by using the menu above and navigating to **File -> Download as -> HTML (.html)** Your submissions should include both the `html` and `ipynb` files. **Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode. The Road AheadYou must complete Steps 1-3 below to pass the project. The section on Step 4 includes references & resources you can use to further explore HMM taggers.- [Step 1](Step-1:-Read-and-preprocess-the-dataset): Review the provided interface to load and access the text corpus- [Step 2](Step-2:-Build-a-Most-Frequent-Class-tagger): Build a Most Frequent Class tagger to use as a baseline- [Step 3](Step-3:-Build-an-HMM-tagger): Build an HMM Part of Speech tagger and compare to the MFC baseline- [Step 4](Step-4:-[Optional]-Improving-model-performance): (Optional) Improve the HMM tagger **Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
###Code
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers, tests
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from IPython.core.display import HTML
from itertools import chain
from collections import Counter, defaultdict
from helpers import show_model, Dataset
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
###Output
_____no_output_____
###Markdown
Step 1: Read and preprocess the dataset---We'll start by reading in a text corpus and splitting it into a training and testing dataset. The data set is a copy of the [Brown corpus](https://en.wikipedia.org/wiki/Brown_Corpus) (originally from the [NLTK](https://www.nltk.org/) library) that has already been pre-processed to only include the [universal tagset](https://arxiv.org/pdf/1104.2086.pdf). You should expect to get slightly higher accuracy using this simplified tagset than the same model would achieve on a larger tagset like the full [Penn treebank tagset](https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html), but the process you'll follow would be the same.The `Dataset` class provided in helpers.py will read and parse the corpus. You can generate your own datasets compatible with the reader by writing them to the following format. The dataset is stored in plaintext as a collection of words and corresponding tags. Each sentence starts with a unique identifier on the first line, followed by one tab-separated word/tag pair on each following line. Sentences are separated by a single blank line.Example from the Brown corpus. ```b100-38532Perhaps ADVit PRONwas VERBright ADJ; .; .b100-35577...```
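For illustration only (the project uses the provided `Dataset` class in helpers.py), a minimal reader for the format described above might look like the following sketch; the function name is hypothetical:
```python
def read_tagged_corpus(path):
    """Parse: an id line, then one tab-separated word/tag pair per line, sentences separated by blank lines."""
    sentences = {}
    with open(path) as f:
        for block in f.read().strip().split("\n\n"):
            lines = block.splitlines()
            pairs = [line.split("\t") for line in lines[1:] if line]
            words = tuple(p[0] for p in pairs)
            tags = tuple(p[1] for p in pairs)
            sentences[lines[0]] = (words, tags)
    return sentences
```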
###Code
data = Dataset("tags-universal.txt", "brown-universal.txt", train_test_split=0.8)
print("There are {} sentences in the corpus.".format(len(data)))
print("There are {} sentences in the training set.".format(len(data.training_set)))
print("There are {} sentences in the testing set.".format(len(data.testing_set)))
assert len(data) == len(data.training_set) + len(data.testing_set), \
"The number of sentences in the training set + testing set should sum to the number of sentences in the corpus"
###Output
There are 57340 sentences in the corpus.
There are 45872 sentences in the training set.
There are 11468 sentences in the testing set.
###Markdown
The Dataset InterfaceYou can access (mostly) immutable references to the dataset through a simple interface provided through the `Dataset` class, which represents an iterable collection of sentences along with easy access to partitions of the data for training & testing. Review the reference below, then run and review the next few cells to make sure you understand the interface before moving on to the next step.```Dataset-only Attributes: training_set - reference to a Subset object containing the samples for training testing_set - reference to a Subset object containing the samples for testingDataset & Subset Attributes: sentences - a dictionary with an entry {sentence_key: Sentence()} for each sentence in the corpus keys - an immutable ordered (not sorted) collection of the sentence_keys for the corpus vocab - an immutable collection of the unique words in the corpus tagset - an immutable collection of the unique tags in the corpus X - returns an array of words grouped by sentences ((w11, w12, w13, ...), (w21, w22, w23, ...), ...) Y - returns an array of tags grouped by sentences ((t11, t12, t13, ...), (t21, t22, t23, ...), ...) N - returns the number of distinct samples (individual words or tags) in the datasetMethods: stream() - returns an flat iterable over all (word, tag) pairs across all sentences in the corpus __iter__() - returns an iterable over the data as (sentence_key, Sentence()) pairs __len__() - returns the nubmer of sentences in the dataset```For example, consider a Subset, `subset`, of the sentences `{"s0": Sentence(("See", "Spot", "run"), ("VERB", "NOUN", "VERB")), "s1": Sentence(("Spot", "ran"), ("NOUN", "VERB"))}`. The subset will have these attributes:```subset.keys == {"s1", "s0"} unorderedsubset.vocab == {"See", "run", "ran", "Spot"} unorderedsubset.tagset == {"VERB", "NOUN"} unorderedsubset.X == (("Spot", "ran"), ("See", "Spot", "run")) order matches .keyssubset.Y == (("NOUN", "VERB"), ("VERB", "NOUN", "VERB")) order matches .keyssubset.N == 7 there are a total of seven observations over all sentenceslen(subset) == 2 because there are two sentences```**Note:** The `Dataset` class is _convenient_, but it is **not** efficient. It is not suitable for huge datasets because it stores multiple redundant copies of the same data. Sentences`Dataset.sentences` is a dictionary of all sentences in the training corpus, each keyed to a unique sentence identifier. Each `Sentence` is itself an object with two attributes: a tuple of the words in the sentence named `words` and a tuple of the tag corresponding to each word named `tags`.
###Code
key = 'b100-38532'
print("Sentence: {}".format(key))
print("words:\n\t{!s}".format(data.sentences[key].words))
print("tags:\n\t{!s}".format(data.sentences[key].tags))
###Output
Sentence: b100-38532
words:
('Perhaps', 'it', 'was', 'right', ';', ';')
tags:
('ADV', 'PRON', 'VERB', 'ADJ', '.', '.')
###Markdown
**Note:** The underlying iterable sequence is **unordered** over the sentences in the corpus; it is not guaranteed to return the sentences in a consistent order between calls. Use `Dataset.stream()`, `Dataset.keys`, `Dataset.X`, or `Dataset.Y` attributes if you need ordered access to the data. Counting Unique ElementsYou can access the list of unique words (the dataset vocabulary) via `Dataset.vocab` and the unique list of tags via `Dataset.tagset`.
###Code
print("There are a total of {} samples of {} unique words in the corpus."
.format(data.N, len(data.vocab)))
print("There are {} samples of {} unique words in the training set."
.format(data.training_set.N, len(data.training_set.vocab)))
print("There are {} samples of {} unique words in the testing set."
.format(data.testing_set.N, len(data.testing_set.vocab)))
print("There are {} words in the test set that are missing in the training set."
.format(len(data.testing_set.vocab - data.training_set.vocab)))
assert data.N == data.training_set.N + data.testing_set.N, \
"The number of training + test samples should sum to the total number of samples"
###Output
There are a total of 1161192 samples of 56057 unique words in the corpus.
There are 928458 samples of 50536 unique words in the training set.
There are 232734 samples of 25112 unique words in the testing set.
There are 5521 words in the test set that are missing in the training set.
###Markdown
Accessing word and tag SequencesThe `Dataset.X` and `Dataset.Y` attributes provide access to ordered collections of matching word and tag sequences for each sentence in the dataset.
###Code
# accessing words with Dataset.X and tags with Dataset.Y
for i in range(2):
print("Sentence {}:".format(i + 1), data.X[i])
print()
print("Labels {}:".format(i + 1), data.Y[i])
print()
###Output
Sentence 1: ('Mr.', 'Podger', 'had', 'thanked', 'him', 'gravely', ',', 'and', 'now', 'he', 'made', 'use', 'of', 'the', 'advice', '.')
Labels 1: ('NOUN', 'NOUN', 'VERB', 'VERB', 'PRON', 'ADV', '.', 'CONJ', 'ADV', 'PRON', 'VERB', 'NOUN', 'ADP', 'DET', 'NOUN', '.')
Sentence 2: ('But', 'there', 'seemed', 'to', 'be', 'some', 'difference', 'of', 'opinion', 'as', 'to', 'how', 'far', 'the', 'board', 'should', 'go', ',', 'and', 'whose', 'advice', 'it', 'should', 'follow', '.')
Labels 2: ('CONJ', 'PRT', 'VERB', 'PRT', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'ADP', 'ADV', 'ADV', 'DET', 'NOUN', 'VERB', 'VERB', '.', 'CONJ', 'DET', 'NOUN', 'PRON', 'VERB', 'VERB', '.')
###Markdown
Accessing (word, tag) SamplesThe `Dataset.stream()` method returns an iterator that chains together every pair of (word, tag) entries across all sentences in the entire corpus.
###Code
# use Dataset.stream() (word, tag) samples for the entire corpus
print("\nStream (word, tag) pairs:\n")
for i, pair in enumerate(data.stream()):
print("\t", pair)
if i > 5: break
###Output
Stream (word, tag) pairs:
('Mr.', 'NOUN')
('Podger', 'NOUN')
('had', 'VERB')
('thanked', 'VERB')
('him', 'PRON')
('gravely', 'ADV')
(',', '.')
###Markdown
For both our baseline tagger and the HMM model we'll build, we need to estimate the frequency of tags & words from the frequency counts of observations in the training corpus. In the next several cells you will complete functions to compute several sets of counts. Step 2: Build a Most Frequent Class tagger---Perhaps the simplest tagger (and a good baseline for tagger performance) is to simply choose the tag most frequently assigned to each word. This "most frequent class" tagger inspects each observed word in the sequence and assigns it the label that was most often assigned to that word in the corpus. IMPLEMENTATION: Pair CountsComplete the function below that computes the joint frequency counts for two input sequences. First, let's create a helper function that streams each word and tag and separates them into two lists
###Code
def tag_and_words(arr=data.training_set):
    """Stream (word, tag) pairs from the given dataset and split them into parallel tag and word lists."""
    pairs = list(arr.stream())
    tags = [tag for (word, tag) in pairs]
    words = [word for (word, tag) in pairs]
    return tags, words
tags, words = tag_and_words()
def pair_counts(sequences_A, sequences_B):
"""Return a dictionary keyed to each unique value in the first sequence list
that counts the number of occurrences of the corresponding value from the
second sequences list.
For example, if sequences_A is tags and sequences_B is the corresponding
words, then if 1244 sequences contain the word "time" tagged as a NOUN, then
you should return a dictionary such that pair_counts[NOUN][time] == 1244
"""
# TODO: Finish this function!
# raise NotImplementedError
pair_dict = {i:{} for i in set(sequences_A)}
for tag, word in zip(sequences_A, sequences_B):
if word not in pair_dict[tag].keys():
pair_dict[tag][word] = 1
else:
pair_dict[tag][word] += 1
return pair_dict
# Calculate C(t_i, w_i)
emission_counts = pair_counts(tags,words)
assert len(emission_counts) == 12, \
"Uh oh. There should be 12 tags in your dictionary."
assert max(emission_counts["NOUN"], key=emission_counts["NOUN"].get) == 'time', \
"Hmmm...'time' is expected to be the most common NOUN."
HTML('<div class="alert alert-block alert-success">Your emission counts look good!</div>')
###Output
_____no_output_____
###Markdown
IMPLEMENTATION: Most Frequent Class TaggerUse the `pair_counts()` function and the training dataset to find the most frequent class label for each word in the training data, and populate the `mfc_table` below. The table keys should be words, and the values should be the appropriate tag string.The `MFCTagger` class is provided to mock the interface of Pomegranite HMM models so that they can be used interchangeably.
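For reference, once `emission_counts` from the previous cell is available, an equivalent and more compact way to build such a lookup table is the following sketch (the `mfc_table_alt` name is just for illustration):
```python
from collections import defaultdict

word_tag_counts = defaultdict(dict)
for tag, word_counts in emission_counts.items():
    for word, count in word_counts.items():
        word_tag_counts[word][tag] = count
mfc_table_alt = {word: max(tag_counts, key=tag_counts.get) for word, tag_counts in word_tag_counts.items()}
```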
###Code
# Create a lookup table mfc_table where mfc_table[word] contains the tag label most frequently assigned to that word
from collections import namedtuple
FakeState = namedtuple("FakeState", "name")
class MFCTagger:
# NOTE: You should not need to modify this class or any of its methods
missing = FakeState(name="<MISSING>")
def __init__(self, table):
self.table = defaultdict(lambda: MFCTagger.missing)
self.table.update({word: FakeState(name=tag) for word, tag in table.items()})
def viterbi(self, seq):
"""This method simplifies predictions by matching the Pomegranate viterbi() interface"""
return 0., list(enumerate(["<start>"] + [self.table[w] for w in seq] + ["<end>"]))
# TODO: calculate the frequency of each tag being assigned to each word (hint: similar, but not
# the same as the emission probabilities) and use it to fill the mfc_table
# Let's create a helper function that iterates through word_counts and makes
# each word a key, with the tag under which that word appears most often as the corresponding value
def mfc_dict(word_counts=emission_counts):
    """Take the word_counts dict and return a dictionary keyed by each word,
    with the tag under which that word appears most frequently as its value.
    """
mfc_dict = {}
for key, val in word_counts.items():
for word, count in val.items():
if word not in mfc_dict.keys():
mfc_dict[word] = [key, count]
else:
if count > mfc_dict[word][1]:
mfc_dict[word] = [key, count]
# Now iterate through the dict and make each value just the tag
for key, value in mfc_dict.items():
mfc_dict[key] = value[0]
return mfc_dict
# instantiate an mfc_dict object
mfc_table = mfc_dict()
# DO NOT MODIFY BELOW THIS LINE
mfc_model = MFCTagger(mfc_table) # Create a Most Frequent Class tagger instance
assert len(mfc_table) == len(data.training_set.vocab), ""
assert all(k in data.training_set.vocab for k in mfc_table.keys()), ""
assert sum(int(k not in mfc_table) for k in data.testing_set.vocab) == 5521, ""
HTML('<div class="alert alert-block alert-success">Your MFC tagger has all the correct words!</div>')
###Output
_____no_output_____
###Markdown
Making Predictions with a ModelThe helper functions provided below interface with Pomegranate network models & the mocked MFCTagger to take advantage of the [missing value](http://pomegranate.readthedocs.io/en/latest/nan.html) functionality in Pomegranate through a simple sequence decoding function. Run these functions, then run the next cell to see some of the predictions made by the MFC tagger.
###Code
def replace_unknown(sequence):
"""Return a copy of the input sequence where each unknown word is replaced
by the literal string value 'nan'. Pomegranate will ignore these values
during computation.
"""
return [w if w in data.training_set.vocab else 'nan' for w in sequence]
def simplify_decoding(X, model):
"""X should be a 1-D sequence of observations for the model to predict"""
_, state_path = model.viterbi(replace_unknown(X))
return [state[1].name for state in state_path[1:-1]] # do not show the start/end state predictions
###Output
_____no_output_____
###Markdown
Example Decoding Sequences with MFC Tagger
###Code
for key in data.testing_set.keys[:3]:
print("Sentence Key: {}\n".format(key))
print("Predicted labels:\n-----------------")
print(simplify_decoding(data.sentences[key].words, mfc_model))
print()
print("Actual labels:\n--------------")
print(data.sentences[key].tags)
print("\n")
###Output
Sentence Key: b100-28144
Predicted labels:
-----------------
['CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.']
Actual labels:
--------------
('CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.')
Sentence Key: b100-23146
Predicted labels:
-----------------
['PRON', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'VERB', '.', 'ADP', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', '.']
Actual labels:
--------------
('PRON', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'VERB', '.', 'ADP', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', '.')
Sentence Key: b100-35462
Predicted labels:
-----------------
['DET', 'ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'ADP', 'ADJ', 'NOUN', '.', 'CONJ', 'ADP', 'DET', '<MISSING>', 'ADP', 'ADJ', 'ADJ', '.', 'ADJ', '.', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'ADV', 'NOUN', '.']
Actual labels:
--------------
('DET', 'ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'ADP', 'ADJ', 'NOUN', '.', 'CONJ', 'ADP', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', '.', 'ADJ', '.', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'ADJ', 'NOUN', '.')
###Markdown
Evaluating Model AccuracyThe function below will evaluate the accuracy of the MFC tagger on the collection of all sentences from a text corpus.
###Code
def accuracy(X, Y, model):
"""Calculate the prediction accuracy by using the model to decode each sequence
in the input X and comparing the prediction with the true labels in Y.
The X should be an array whose first dimension is the number of sentences to test,
and each element of the array should be an iterable of the words in the sequence.
The arrays X and Y should have the exact same shape.
X = [("See", "Spot", "run"), ("Run", "Spot", "run", "fast"), ...]
Y = [(), (), ...]
"""
correct = total_predictions = 0
for observations, actual_tags in zip(X, Y):
# The model.viterbi call in simplify_decoding will return None if the HMM
# raises an error (for example, if a test sentence contains a word that
# is out of vocabulary for the training set). Any exception counts the
# full sentence as an error (which makes this a conservative estimate).
try:
most_likely_tags = simplify_decoding(observations, model)
correct += sum(p == t for p, t in zip(most_likely_tags, actual_tags))
except:
pass
total_predictions += len(observations)
return correct / total_predictions
###Output
_____no_output_____
###Markdown
Evaluate the accuracy of the MFC taggerRun the next cell to evaluate the accuracy of the tagger on the training and test corpus.
###Code
mfc_training_acc = accuracy(data.training_set.X, data.training_set.Y, mfc_model)
print("training accuracy mfc_model: {:.2f}%".format(100 * mfc_training_acc))
mfc_testing_acc = accuracy(data.testing_set.X, data.testing_set.Y, mfc_model)
print("testing accuracy mfc_model: {:.2f}%".format(100 * mfc_testing_acc))
assert mfc_training_acc >= 0.955, "Uh oh. Your MFC accuracy on the training set doesn't look right."
assert mfc_testing_acc >= 0.925, "Uh oh. Your MFC accuracy on the testing set doesn't look right."
HTML('<div class="alert alert-block alert-success">Your MFC tagger accuracy looks correct!</div>')
###Output
training accuracy mfc_model: 95.72%
testing accuracy mfc_model: 93.00%
###Markdown
Step 3: Build an HMM tagger---The HMM tagger has one hidden state for each possible tag, and parameterized by two distributions: the emission probabilties giving the conditional probability of observing a given **word** from each hidden state, and the transition probabilities giving the conditional probability of moving between **tags** during the sequence.We will also estimate the starting probability distribution (the probability of each **tag** being the first tag in a sequence), and the terminal probability distribution (the probability of each **tag** being the last tag in a sequence).The maximum likelihood estimate of these distributions can be calculated from the frequency counts as described in the following sections where you'll implement functions to count the frequencies, and finally build the model. The HMM model will make predictions according to the formula:$$t_i^n = \underset{t_i^n}{\mathrm{argmax}} \prod_{i=1}^n P(w_i|t_i) P(t_i|t_{i-1})$$Refer to Speech & Language Processing [Chapter 10](https://web.stanford.edu/~jurafsky/slp3/10.pdf) for more information. IMPLEMENTATION: Unigram CountsComplete the function below to estimate the co-occurrence frequency of each symbol over all of the input sequences. The unigram probabilities in our HMM model are estimated from the formula below, where N is the total number of samples in the input. (You only need to compute the counts for now.)$$P(tag_1) = \frac{C(tag_1)}{N}$$
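Once the counts are available, turning them into maximum-likelihood probabilities is a one-liner; a small sketch (the helper name `counts_to_probs` is just for illustration):
```python
def counts_to_probs(counts):
    """Maximum-likelihood P(x) = C(x) / N from a dictionary of counts."""
    total = sum(counts.values())
    return {key: value / total for key, value in counts.items()}

# e.g. tag_probs = counts_to_probs(tag_unigrams) once tag_unigrams is computed below
```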
###Code
def unigram_counts(sequences):
"""Return a dictionary keyed to each unique value in the input sequence list that
counts the number of occurrences of the value in the sequences list. The sequences
collection should be a 2-dimensional array.
For example, if the tag NOUN appears 275558 times over all the input sequences,
then you should return a dictionary such that your_unigram_counts[NOUN] == 275558.
"""
# TODO: Finish this function!
# raise NotImplementedError
tag_list, word_list = tag_and_words(sequences)
dicts = pair_counts(tag_list, word_list)
count_dict = {}
for key, val in dicts.items():
count_dict[key] = 0
for word, count in val.items():
count_dict[key] += count
return count_dict
# TODO: call unigram_counts with a list of tag sequences from the training set
tag_unigrams = unigram_counts(data.training_set)
assert set(tag_unigrams.keys()) == data.training_set.tagset, \
"Uh oh. It looks like your tag counts doesn't include all the tags!"
assert min(tag_unigrams, key=tag_unigrams.get) == 'X', \
"Hmmm...'X' is expected to be the least common class"
assert max(tag_unigrams, key=tag_unigrams.get) == 'NOUN', \
"Hmmm...'NOUN' is expected to be the most common class"
HTML('<div class="alert alert-block alert-success">Your tag unigrams look good!</div>')
###Output
_____no_output_____
###Markdown
IMPLEMENTATION: Bigram CountsComplete the function below to estimate the co-occurrence frequency of each pair of symbols in each of the input sequences. These counts are used in the HMM model to estimate the bigram probability of two tags from the frequency counts according to the formula: $$P(tag_2|tag_1) = \frac{C(tag_1, tag_2)}{C(tag_1)}$$
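For instance, once `tag_bigrams` (next cell) and `tag_unigrams` (earlier cell) are available, the maximum-likelihood transition probabilities follow directly; a sketch:
```python
def transition_probs(tag_bigrams, tag_unigrams):
    """ML estimate of P(t2 | t1) = C(t1, t2) / C(t1)."""
    return {(t1, t2): count / tag_unigrams[t1] for (t1, t2), count in tag_bigrams.items()}
```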
###Code
def bigram_counts(sequences):
    """Return a dictionary keyed to each unique PAIR of values in the input sequences
    list that counts the number of occurrences of pair in the sequences list. The input
    should be a 2-dimensional array.
    For example, if the pair of tags (NOUN, VERB) appear 61582 times, then you should
    return a dictionary such that your_bigram_counts[(NOUN, VERB)] == 61582
    """
    # TODO: Finish this function!
    # First define a bigram dictionary over all 144 tag pairs, initialized to zero
    bigram_dict = {}
    for i in sequences.tagset:
        for j in sequences.tagset:
            bigram_dict[(i, j)] = 0
    # Next count consecutive tag pairs over the flattened tag stream
    tags, _ = tag_and_words(sequences)
    for t1, t2 in zip(tags[:-1], tags[1:]):
        bigram_dict[(t1, t2)] += 1
    return bigram_dict
# TODO: call bigram_counts with a list of tag sequences from the training set
tag_bigrams = bigram_counts(data.training_set)
assert len(tag_bigrams) == 144, \
"Uh oh. There should be 144 pairs of bigrams (12 tags x 12 tags)"
assert min(tag_bigrams, key=tag_bigrams.get) in [('X', 'NUM'), ('PRON', 'X')], \
"Hmmm...The least common bigram should be one of ('X', 'NUM') or ('PRON', 'X')."
assert max(tag_bigrams, key=tag_bigrams.get) in [('DET', 'NOUN')], \
"Hmmm...('DET', 'NOUN') is expected to be the most common bigram."
HTML('<div class="alert alert-block alert-success">Your tag bigrams look good!</div>')
###Output
_____no_output_____
###Markdown
IMPLEMENTATION: Sequence Starting CountsComplete the code below to estimate the bigram probabilities of a sequence starting with each tag.
###Code
def starting_counts(sequences):
"""Return a dictionary keyed to each unique value in the input sequences list
that counts the number of occurrences where that value is at the beginning of
a sequence.
For example, if 8093 sequences start with NOUN, then you should return a
dictionary such that your_starting_counts[NOUN] == 8093
"""
# TODO: Finish this function!
# raise NotImplementedError
# First get the distribution from bigram_counts()
starting_dict = {i:0 for i in sequences.tagset}
for tagTuples in sequences.Y:
first = tagTuples[0]
starting_dict[first] += 1
return starting_dict
# TODO: Calculate the count of each tag starting a sequence
tag_starts = starting_counts(data.training_set)
assert len(tag_starts) == 12, "Uh oh. There should be 12 tags in your dictionary."
assert min(tag_starts, key=tag_starts.get) == 'X', "Hmmm...'X' is expected to be the least common starting bigram."
assert max(tag_starts, key=tag_starts.get) == 'DET', "Hmmm...'DET' is expected to be the most common starting bigram."
HTML('<div class="alert alert-block alert-success">Your starting tag counts look good!</div>')
###Output
_____no_output_____
###Markdown
IMPLEMENTATION: Sequence Ending CountsComplete the function below to estimate the bigram probabilities of a sequence ending with each tag.
###Code
def ending_counts(sequences):
"""Return a dictionary keyed to each unique value in the input sequences list
that counts the number of occurrences where that value is at the end of
a sequence.
For example, if 18 sequences end with DET, then you should return a
dictionary such that your_starting_counts[DET] == 18
"""
# TODO: Finish this function!
# raise NotImplementedError
ending_dict = {i:0 for i in sequences.tagset}
for tagTuples in sequences.Y:
last = tagTuples[-1]
ending_dict[last] += 1
return ending_dict
# TODO: Calculate the count of each tag ending a sequence
tag_ends = ending_counts(data.training_set)
assert len(tag_ends) == 12, "Uh oh. There should be 12 tags in your dictionary."
assert min(tag_ends, key=tag_ends.get) in ['X', 'CONJ'], "Hmmm...'X' or 'CONJ' should be the least common ending bigram."
assert max(tag_ends, key=tag_ends.get) == '.', "Hmmm...'.' is expected to be the most common ending bigram."
HTML('<div class="alert alert-block alert-success">Your ending tag counts look good!</div>')
###Output
_____no_output_____
###Markdown
IMPLEMENTATION: Basic HMM TaggerUse the tag unigrams and bigrams calculated above to construct a hidden Markov tagger.- Add one state per tag - The emission distribution at each state should be estimated with the formula: $P(w|t) = \frac{C(t, w)}{C(t)}$- Add an edge from the starting state `basic_model.start` to each tag - The transition probability should be estimated with the formula: $P(t|start) = \frac{C(start, t)}{C(start)}$- Add an edge from each tag to the end state `basic_model.end` - The transition probability should be estimated with the formula: $P(end|t) = \frac{C(t, end)}{C(t)}$- Add an edge between _every_ pair of tags - The transition probability should be estimated with the formula: $P(t_2|t_1) = \frac{C(t_1, t_2)}{C(t_1)}$ I'd create a method to make it easy to iterate over the tags and create states
###Code
def addStatesToTags(emissions=emission_counts, unigram=tag_unigrams):
    """Create a pomegranate State for each tag, with emission distribution
    P(w | t) = C(t, w) / C(t).
    """
    final_dict = {}
    for tag, word_dict in emissions.items():
        states_dict = {}
        for word, value in word_dict.items():
            # Emission probability P(w | t) = C(t, w) / C(t)
            states_dict[word] = value / unigram[tag]
        tag_emissions = DiscreteDistribution(states_dict)
        final_dict[tag] = State(tag_emissions, name=tag)
    return final_dict
# I'd pass the states/tags mapping to a variable
states_map = addStatesToTags()
basic_model = HiddenMarkovModel(name="base-hmm-tagger")
# TODO: create states with emission probability distributions P(word | tag) and add to the model
# (Hint: you may need to loop & create/add new states)
# Now, I add the states to the basic_model
basic_model.add_states(list(states_map.values()))
# Adding start and end states
for tag in data.training_set.tagset:
state = states_map[tag]
basic_model.add_transition(basic_model.start, state, tag_starts[tag]/sum(tag_starts.values()))
basic_model.add_transition(state, basic_model.end, tag_ends[tag]/sum(tag_ends.values()))
# # TODO: add edges between states for the observed transition frequencies P(tag_i | tag_i-1)
# # (Hint: you may need to loop & add transitions
# basic_model.add_transition()
for tag1 in data.training_set.tagset:
state1 = states_map[tag1]
for tag2 in data.training_set.tagset:
state2 = states_map[tag2]
basic_model.add_transition(state1, state2, tag_bigrams[(tag1, tag2)]/tag_unigrams[tag1])
# # NOTE: YOU SHOULD NOT NEED TO MODIFY ANYTHING BELOW THIS LINE
# # finalize the model
basic_model.bake()
assert all(tag in set(s.name for s in basic_model.states) for tag in data.training_set.tagset), \
"Every state in your network should use the name of the associated tag, which must be one of the training set tags."
assert basic_model.edge_count() == 168, \
("Your network should have an edge from the start node to each state, one edge between every " +
"pair of tags (states), and an edge from each state to the end node.")
HTML('<div class="alert alert-block alert-success">Your HMM network topology looks good!</div>')
hmm_training_acc = accuracy(data.training_set.X, data.training_set.Y, basic_model)
print("training accuracy basic hmm model: {:.2f}%".format(100 * hmm_training_acc))
hmm_testing_acc = accuracy(data.testing_set.X, data.testing_set.Y, basic_model)
print("testing accuracy basic hmm model: {:.2f}%".format(100 * hmm_testing_acc))
assert hmm_training_acc > 0.97, "Uh oh. Your HMM accuracy on the training set doesn't look right."
assert hmm_testing_acc > 0.955, "Uh oh. Your HMM accuracy on the testing set doesn't look right."
HTML('<div class="alert alert-block alert-success">Your HMM tagger accuracy looks correct! Congratulations, you\'ve finished the project.</div>')
###Output
training accuracy basic hmm model: 97.53%
testing accuracy basic hmm model: 95.96%
###Markdown
Example Decoding Sequences with the HMM Tagger
###Code
for key in data.testing_set.keys[:3]:
print("Sentence Key: {}\n".format(key))
print("Predicted labels:\n-----------------")
print(simplify_decoding(data.sentences[key].words, basic_model))
print()
print("Actual labels:\n--------------")
print(data.sentences[key].tags)
print("\n")
###Output
Sentence Key: b100-28144
Predicted labels:
-----------------
['CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.']
Actual labels:
--------------
('CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.')
Sentence Key: b100-23146
Predicted labels:
-----------------
['PRON', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'VERB', '.', 'ADP', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', '.']
Actual labels:
--------------
('PRON', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'VERB', '.', 'ADP', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', '.')
Sentence Key: b100-35462
Predicted labels:
-----------------
['DET', 'ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'ADP', 'ADJ', 'NOUN', '.', 'CONJ', 'ADP', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', '.', 'ADJ', '.', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'ADJ', 'NOUN', '.']
Actual labels:
--------------
('DET', 'ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'ADP', 'ADJ', 'NOUN', '.', 'CONJ', 'ADP', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', '.', 'ADJ', '.', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'ADJ', 'NOUN', '.')
###Markdown
Finishing the project---**Note:** **SAVE YOUR NOTEBOOK**, then run the next cell to generate an HTML copy. You will zip & submit both this file and the HTML copy for review.
###Code
#!!jupyter nbconvert *.ipynb
###Output
_____no_output_____
###Markdown
Step 4: [Optional] Improving model performance---There are additional enhancements that can be incorporated into your tagger that improve performance on larger tagsets where the data sparsity problem is more significant. The data sparsity problem arises because the same amount of data split over more tags means there will be fewer samples in each tag, and there will be more missing data tags that have zero occurrences in the data. The techniques in this section are optional.- [Laplace Smoothing](https://en.wikipedia.org/wiki/Additive_smoothing) (pseudocounts) Laplace smoothing is a technique where you add a small, non-zero value to all observed counts to offset for unobserved values.- Backoff Smoothing Another smoothing technique is to interpolate between n-grams for missing data. This method is more effective than Laplace smoothing at combatting the data sparsity problem. Refer to chapters 4, 9, and 10 of the [Speech & Language Processing](https://web.stanford.edu/~jurafsky/slp3/) book for more information.- Extending to Trigrams HMM taggers have achieved better than 96% accuracy on this dataset with the full Penn treebank tagset using an architecture described in [this](http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf) paper. Altering your HMM to achieve the same performance would require implementing deleted interpolation (described in the paper), incorporating trigram probabilities in your frequency tables, and re-implementing the Viterbi algorithm to consider three consecutive states instead of two. Obtain the Brown Corpus with a Larger TagsetRun the code below to download a copy of the brown corpus with the full NLTK tagset. You will need to research the available tagset information in the NLTK docs and determine the best way to extract the subset of NLTK tags you want to explore. If you write the following the format specified in Step 1, then you can reload the data using all of the code above for comparison.Refer to [Chapter 5](http://www.nltk.org/book/ch05.html) of the NLTK book for more information on the available tagsets.
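Referring to the Laplace smoothing idea above, a minimal sketch of add-k smoothing applied to the emission probabilities, assuming `emission_counts` and `tag_unigrams` from the earlier cells and treating `k` as a pseudocount (the function name is illustrative only):
```python
def smoothed_emission_probs(emission_counts, tag_unigrams, vocab, k=0.01):
    """Add-k (Laplace) smoothed P(w | t) = (C(t, w) + k) / (C(t) + k * |V|), so unseen words get non-zero mass."""
    probs = {}
    for tag, word_counts in emission_counts.items():
        denom = tag_unigrams[tag] + k * len(vocab)
        probs[tag] = {w: (word_counts.get(w, 0) + k) / denom for w in vocab}
    return probs

# e.g. smoothed = smoothed_emission_probs(emission_counts, tag_unigrams, data.training_set.vocab)
```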
###Code
import nltk
from nltk import pos_tag, word_tokenize
from nltk.corpus import brown
nltk.download('brown')
training_corpus = nltk.corpus.brown
training_corpus.tagged_sents()[0]
###Output
[nltk_data] Downloading package brown to
[nltk_data] C:\Users\sisok\AppData\Roaming\nltk_data...
[nltk_data] Package brown is already up-to-date!
|
scratch/tf-practice.ipynb | ###Markdown
Image segmentationThis notebook is adapted from [this tutorial](https://www.tensorflow.org/tutorials/images/segmentation)
###Code
import tensorflow as tf
from tensorflow_examples.models.pix2pix import pix2pix
import tensorflow_datasets as tfds
from IPython.display import clear_output
import matplotlib.pyplot as plt
###Output
_____no_output_____ |
Python_MySQL_Tutorial.ipynb | ###Markdown
Python MySQL Create Database First of all you have to install the MySQL connector. I'll install it via pip. Open an Anaconda command prompt: ```consolepip install mysql-connector-python``` Now create a config to hold the connection data
###Code
config = {
"host": "localhost",
"user": "root",
"passwd": "Admin123/.",
"database": "python_mysql_tutorial",
}
###Output
_____no_output_____
###Markdown
Test MySQL Connector
###Code
import mysql.connector
myConn = mysql.connector.connect(
host = config["host"],
user = config["user"],
passwd = config["passwd"]
)
if (myConn):
print("All done! Enjoy.")
    print(myConn)
###Output
All done! Enjoy.
<mysql.connector.connection.MySQLConnection object at 0x000001E35382B6C8>
###Markdown
Creating a Database
###Code
myCursor = myConn.cursor()
myCursor.execute("CREATE DATABASE "+config["database"])
###Output
_____no_output_____
###Markdown
Check if database successfully created
###Code
myConn = mysql.connector.connect(
host = config["host"],
user = config["user"],
passwd = config["passwd"],
database = config["database"],
)
myCursor = myConn.cursor()
myCursor.execute("SHOW DATABASES")
for x in myCursor:
#print(x)
if (x == (config['database'],)):
print(x)
###Output
('python_mysql_tutorial',)
###Markdown
Creating a TableMake sure you define the name of the database when you create the connection
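Since `config` already holds the database name, one compact way to do this is to unpack the dict directly; a sketch equivalent to the connection created above:
```python
import mysql.connector

# Equivalent to passing host/user/passwd/database individually
myConn = mysql.connector.connect(**config)
```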
###Code
touch = myConn.cursor()
touch.execute("CREATE TABLE users (name VARCHAR(255), email VARCHAR(255), password VARCHAR(255))")
touch = myConn.cursor()
touch.execute("SHOW TABLES")
for x in touch:
print(x)
###Output
('users',)
|
Duplicates/nb_ex9_5_tfwithkeras2-cpu.ipynb | ###Markdown
9.5 Using TensorFlow functions This section covers how to mix TensorFlow and Keras as a way to respond flexibly when implementing new functionality.
###Code
# set to use CPU
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
import keras
keras.__version__
###Output
_____no_output_____
###Markdown
9.5.1 Importing and connecting the TensorFlow and Keras packages
###Code
import tensorflow as tf
#sess = tf.Session()
from keras import backend as K
#K.set_session(sess)
###Output
_____no_output_____
###Markdown
9.5.2 Modeling a fully connected neural network
###Code
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout
from keras.metrics import categorical_accuracy, categorical_crossentropy
class DNN():
def __init__(self, Nin, Nh_l, Nout):
self.X_ph = tf.placeholder(tf.float32, shape=(None, Nin))
self.L_ph = tf.placeholder(tf.float32, shape=(None, Nout))
# Modeling
H = Dense(Nh_l[0], activation='relu')(self.X_ph)
H = Dropout(0.5)(H)
H = Dense(Nh_l[1], activation='relu')(H)
H = Dropout(0.25)(H)
self.Y_tf = Dense(Nout, activation='softmax')(H)
# Operation
self.Loss_tf = tf.reduce_mean(
categorical_crossentropy(self.L_ph, self.Y_tf))
self.Train_tf = tf.train.AdamOptimizer().minimize(self.Loss_tf)
self.Acc_tf = categorical_accuracy(self.L_ph, self.Y_tf)
self.Init_tf = tf.global_variables_initializer()
###Output
_____no_output_____
###Markdown
9.5.3 Data preparation step
###Code
import numpy as np
from keras import datasets # mnist
from keras.utils import np_utils # to_categorical
def Data_func():
(X_train, y_train), (X_test, y_test) = datasets.mnist.load_data()
Y_train = np_utils.to_categorical(y_train)
Y_test = np_utils.to_categorical(y_test)
L, W, H = X_train.shape
X_train = X_train.reshape(-1, W * H)
X_test = X_test.reshape(-1, W * H)
X_train = X_train / 255.0
X_test = X_test / 255.0
return (X_train, Y_train), (X_test, Y_test)
###Output
_____no_output_____
###Markdown
9.5.4 Training and performance analysis step
###Code
from keraspp.skeras import plot_loss, plot_acc
import matplotlib.pyplot as plt
def run(model, data, sess, epochs, batch_size=100):
# epochs = 2
# batch_size = 100
(X_train, Y_train), (X_test, Y_test) = data
sess.run(model.Init_tf)
with sess.as_default():
N_tr = X_train.shape[0]
for epoch in range(epochs):
for b in range(N_tr // batch_size):
X_tr_b = X_train[batch_size * (b-1):batch_size * b]
Y_tr_b = Y_train[batch_size * (b-1):batch_size * b]
model.Train_tf.run(feed_dict={model.X_ph: X_tr_b, model.L_ph: Y_tr_b, K.learning_phase(): 1})
loss = sess.run(model.Loss_tf, feed_dict={model.X_ph: X_test, model.L_ph: Y_test, K.learning_phase(): 0})
acc = model.Acc_tf.eval(feed_dict={model.X_ph: X_test, model.L_ph: Y_test, K.learning_phase(): 0})
print("Epoch {0}: loss = {1:.3f}, acc = {2:.3f}".format(epoch, loss, np.mean(acc)))
###Output
_____no_output_____
###Markdown
9.5.5 Implementing and running the main function
###Code
def main():
Nin = 784
Nh_l = [100, 50]
number_of_class = 10
Nout = number_of_class
data = Data_func()
model = DNN(Nin, Nh_l, Nout)
run(model, data, sess, 10, 100)
main()
###Output
_____no_output_____
###Markdown
--- 9.2.7 Full code
###Code
import tensorflow as tf
sess = tf.Session()
from keras import backend as K
K.set_session(sess)
# Implementation of the classification DNN model ########################
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout
from keras.metrics import categorical_accuracy, categorical_crossentropy
class DNN():
def __init__(self, Nin, Nh_l, Nout):
self.X_ph = tf.placeholder(tf.float32, shape=(None, Nin))
self.L_ph = tf.placeholder(tf.float32, shape=(None, Nout))
# Modeling
H = Dense(Nh_l[0], activation='relu')(self.X_ph)
H = Dropout(0.5)(H)
H = Dense(Nh_l[1], activation='relu')(H)
H = Dropout(0.25)(H)
self.Y_tf = Dense(Nout, activation='softmax')(H)
# Operation
self.Loss_tf = tf.reduce_mean(
categorical_crossentropy(self.L_ph, self.Y_tf))
self.Train_tf = tf.train.AdamOptimizer().minimize(self.Loss_tf)
self.Acc_tf = categorical_accuracy(self.L_ph, self.Y_tf)
self.Init_tf = tf.global_variables_initializer()
# Data preparation ##############################
import numpy as np
from keras import datasets # mnist
from keras.utils import np_utils # to_categorical
def Data_func():
(X_train, y_train), (X_test, y_test) = datasets.mnist.load_data()
Y_train = np_utils.to_categorical(y_train)
Y_test = np_utils.to_categorical(y_test)
L, W, H = X_train.shape
X_train = X_train.reshape(-1, W * H)
X_test = X_test.reshape(-1, W * H)
X_train = X_train / 255.0
X_test = X_test / 255.0
return (X_train, Y_train), (X_test, Y_test)
# Training performance analysis ##############################
from keraspp.skeras import plot_loss, plot_acc
import matplotlib.pyplot as plt
def run(model, data, sess, epochs, batch_size=100):
# epochs = 2
# batch_size = 100
(X_train, Y_train), (X_test, Y_test) = data
sess.run(model.Init_tf)
with sess.as_default():
N_tr = X_train.shape[0]
for epoch in range(epochs):
for b in range(N_tr // batch_size):
X_tr_b = X_train[batch_size * (b-1):batch_size * b]
Y_tr_b = Y_train[batch_size * (b-1):batch_size * b]
model.Train_tf.run(feed_dict={model.X_ph: X_tr_b, model.L_ph: Y_tr_b, K.learning_phase(): 1})
loss = sess.run(model.Loss_tf, feed_dict={model.X_ph: X_test, model.L_ph: Y_test, K.learning_phase(): 0})
acc = model.Acc_tf.eval(feed_dict={model.X_ph: X_test, model.L_ph: Y_test, K.learning_phase(): 0})
print("Epoch {0}: loss = {1:.3f}, acc = {2:.3f}".format(epoch, loss, np.mean(acc)))
# Training and testing the classification DNN ####################
def main():
Nin = 784
Nh_l = [100, 50]
number_of_class = 10
Nout = number_of_class
data = Data_func()
model = DNN(Nin, Nh_l, Nout)
run(model, data, sess, 10, 100)
main()
###Output
Targets: [5 7 9]
Predictions: [4.9328327 6.8842306 8.835628 ]
Errors: [0.06716728 0.11576939 0.16437244]
|
c2-regression/w2-multiple-regression/gradient_descent_multi_reg.ipynb | ###Markdown
define gradient descent function
###Code
def get_numpy_data(data, features, output_column):
data['constant'] = 1
fea = ['constant']
fea.extend(features)
x_matrix = data[fea].to_numpy()
y_vector = data[output_column].to_numpy()
return x_matrix, y_vector
def get_prediction(x_matrix, weights):
return x_matrix @ weights
def feature_derivative(errors, feature):
    # 2 * errors . feature is the negative partial derivative of RSS with respect to this weight
    return 2 * errors @ feature
def reg_gd(x_matrix, y_true, initial_weights, step_size=0.01, tolerance=1e-5):
    """Minimize RSS by gradient descent; stop when the gradient magnitude falls below tolerance."""
    weights = np.array(initial_weights, dtype=float)
    converged = False
    num_steps = 0
    while not converged:
        pred = get_prediction(x_matrix, weights)
        errors = y_true - pred
        gradient_square_norm = 0
        for i in range(len(weights)):
            # partiali is the negative gradient component, so adding it moves downhill on the RSS surface
            partiali = feature_derivative(errors, x_matrix[:, i])
            weights[i] += step_size * partiali
            gradient_square_norm += partiali ** 2
        num_steps += 1
        # if num_steps % 1 == 0:
        #     print(num_steps, weights, '%.6e'%np.sqrt(gradient_square_norm))
        converged = gradient_square_norm <= tolerance ** 2
    return weights, num_steps
###Output
_____no_output_____
###Markdown
model1: fit simple regression model
###Code
feature_matrix, output = get_numpy_data(train_data, ['sqft_living'], 'price')
w,steps = reg_gd(feature_matrix, output, initial_weights= [-47000,1], step_size=7e-12, tolerance=2.5e7)
print(w)
###Output
[-46999.88716555 281.91211918]
###Markdown
predict
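Before predicting, it can be useful to sanity-check the gradient-descent weights against the closed-form least-squares solution; a sketch (not part of the original assignment) assuming `feature_matrix` and `output` from the cell above and numpy imported as `np`:
```python
# Closed-form least squares for comparison with the weights from reg_gd
w_closed_form, *_ = np.linalg.lstsq(feature_matrix, output, rcond=None)
print(w_closed_form)
```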
###Code
test_feature, test_output = get_numpy_data(test_data, ['sqft_living'], 'price')
test_pred = get_prediction(test_feature, w)
print(test_pred[0])
rss1 = np.linalg.norm(test_pred- test_output) ** 2
###Output
356134.4432550024
###Markdown
model2: multiple regression
###Code
feature_matrix, output = get_numpy_data(train_data, ['sqft_living','sqft_living15'], 'price')
ini_weight = [-100000, 1, 1]
w2, steps = reg_gd(feature_matrix, output, initial_weights=ini_weight, step_size=4e-12, tolerance=1e9)
test_feature,_ = get_numpy_data(test_data, ['sqft_living','sqft_living15'], 'price')
test_pred2 = get_prediction(test_feature, w2)
print(test_pred2[0], test_output[0])
rss2 = np.linalg.norm(test_pred2 - test_output) ** 2
print('%.2e %.2e'%(rss1, rss2))
###Output
2.75e+14 2.70e+14
|
notebooks/folding_in_priors.ipynb | ###Markdown
Folding new priors into the inferenceThis notebook demonstrates how to fold in new priors on various hyper-parameters. We'll consider two different priors:1) a prior on the normalization and slope of the subhalo mass function inferred from stellar streams2) a prior on the concentration-mass relation based on observations of dwarf galaxies First, we will load the pre-computed lensing-only likelihoods. These can be computed and saved for later use with the notebook "inference_5D_from_scatch".
###Code
nbins = 20
likelihoods, likelihoods_coupled = [], []
param_names = ['LOS_normalization', 'beta', 'c0', 'delta_power_law_index', 'sigma_sub']
param_ranges = [all_param_ranges[name] for name in param_names]
load_from_pickle = True
save_to_pickle = False
filename_extension = '_joint'
filename_extension_coupled = '_joint_coupled'
base_path = './../lenslikelihood/precomputed_likelihoods/'
for lens in all_lens_names:
fname = base_path + lens + filename_extension
print('loading joint likelihoods for lens '+lens+' ...')
f = open(fname, 'rb')
single_lens_likelihood = pickle.load(f)
f.close()
likelihoods.append(single_lens_likelihood)
fname = base_path + lens + filename_extension_coupled
f = open(fname, 'rb')
single_lens_likelihood = pickle.load(f)
f.close()
likelihoods_coupled.append(single_lens_likelihood)
likelihood = IndepdendentLikelihoods(likelihoods)
likelihood_coupled = IndepdendentLikelihoods(likelihoods_coupled)
###Output
loading joint likelihoods for lens B1422 ...
loading joint likelihoods for lens HE0435 ...
loading joint likelihoods for lens WGD2038 ...
loading joint likelihoods for lens WGDJ0405 ...
loading joint likelihoods for lens WFI2033 ...
loading joint likelihoods for lens PSJ1606 ...
loading joint likelihoods for lens WFI2026 ...
loading joint likelihoods for lens RXJ0911 ...
loading joint likelihoods for lens RXJ1131 ...
loading joint likelihoods for lens MG0414 ...
loading joint likelihoods for lens PG1115 ...
###Markdown
Prior on the amplitude and slope of the subhalo mass function inferred from stellar streams https://arxiv.org/pdf/1911.02662.pdfBanik et al. (2021) used a parameterization for the subhalo mass function\begin{equation}\frac{dn}{dm} = a \left(\frac{m}{m_0}\right)^{\alpha} \exp{ \Big\{ -\frac{2}{\alpha_r}\left[\left(\frac{r}{r_{-2}}\right)^{\alpha_r} - 1\right] \Big\} }\end{equation}with $m_0 = 2.52 \times 10^7 M_{\odot}$, $\alpha_r = 0.672$, $r_{-2} = 162.4 \rm{kpc}$, and normalization $a = 2.02\times 10^{-13} M_{\odot}^{-1} \rm{kpc^{-3}}$. To map into lensing units $\Sigma_{\rm{sub}}$, we have to project this into two dimensions.
###Code
def projected_mass_function(r2d, a=2.02e-13, alpha=-1.9, m0=2.52e+7, r_minus2=162.4,
alpha_r=0.672, mlow=10**6, mhigh=10**10):
def _spatial_integrand(z):
r = np.sqrt(r2d**2 + z**2)
x = r/r_minus2
return np.exp(-(2/alpha_r) * ((x)**alpha_r - 1))
mass_function_1st_moment = a/(2 + alpha)/m0**alpha * ( mhigh**(2+alpha) - mlow ** (2+alpha))
spatial_integral = 2 * quad(_spatial_integrand, 0, 100 * r_minus2)[0]
# returns the projected mass in substructure [M_sun / kpc^2] using the parameterization in Banik et al. 2021
return spatial_integral * mass_function_1st_moment
def projected_mass_lensing(sigma_sub, m_host, z, mlow=10**6, mhigh=10**10, alpha=-1.9, m0=10**8):
host_redshift_scaling = 0.88 * np.log10(m_host/10**13) + 1.7 * np.log10(1+z)
# returns the projected mass in substructure as parameterized in the lensing analysis
return sigma_sub/(2 + alpha)/m0**(1+alpha) * ( mhigh**(2+alpha) - mlow ** (2+alpha))
r_minus2 = 162.4
r = np.logspace(-1.5, 1, 100) * r_minus2
projected_mass_density = [projected_mass_function(ri) for ri in r]
plt.loglog(r, projected_mass_density)
# since the projected density is almost constant in lenses around the Einstein radius, we will compare to the projected
# mass at r = 0.
print('projected mass at r=0 [M/kpc^2]: ', projected_mass_function(0.))
print('projected mass using lensing parameterization with sigma_sub = 0.0244 [M/kpc^2]: ',
projected_mass_lensing(0.0244, 1.4 * 10**12, 0.))
###Output
projected mass at r=0 [M/kpc^2]: 2327310.34705692
projected mass using lensing parameterization with sigma_sub = 0.0244 [M/kpc^2]: 23276034.690734457
###Markdown
Using the functions in the cell above, we can propagate the value of sigma_sub inferred from lensing to Milky Way like halos and vice versa. As shown in the cell above, the stellar streams inference on the subhalo mass function cast in terms of the lensing parameterization corresponds to $\Sigma_{\rm{sub}} = 0.0244$, assuming that tidal stripping is equally efficient in the Milky Way compared to massive ellipticals. If disruption is equally efficient in the Milky Way as in lens halos, then the matching value of sigma_sub would be 0.0244. If twice as many subhalos are disrupted in the MW relative to lens halos, then the matching value would be 0.0488. To see how this information affects the lensing results, we can add this stream-derived prior on the subhalo mass function to the lensing inference. We will also add a prior on the line of sight halo mass function of $\delta_{\rm{LOS}} = 1 \pm 0.2$, corresponding to the Sheth-Tormen halo mass function prediction with an uncertainty of $20\%$.
###Code
# Recall the parameter ordering:
# param_names = ['LOS_normalization', 'beta', 'c0', 'delta_power_law_index', 'sigma_sub']
# assumes subhalos are disrupted at the same rate in the Milky Way as in ellipticals
means = [None, None, None, None, 0.025]
sigmas = [None, None, None, None, means[-1] * 0.2]
prior_on_sigma_sub = MultivariateNormalPriorHyperCube(means, sigmas, param_names, param_ranges, nbins)
likelihoods_coupled_with_prior = likelihoods_coupled + [prior_on_sigma_sub]
likelihood_coupled_with_prior = IndepdendentLikelihoods(likelihoods_coupled_with_prior)
# assumes subhalos are disrupted twice as much in the MW as in ellipticals
means = [None, None, None, None, 0.05]
sigmas = [None, None, None, None, means[-1] * 0.2]
prior_on_sigma_sub_2 = MultivariateNormalPriorHyperCube(means, sigmas, param_names, param_ranges, nbins)
likelihoods_coupled_with_prior_2 = likelihoods_coupled + [prior_on_sigma_sub_2]
likelihood_coupled_with_prior_2 = IndepdendentLikelihoods(likelihoods_coupled_with_prior_2)
from trikde.triangleplot import TrianglePlot
triangle_plot = TrianglePlot([likelihood, likelihood_coupled_with_prior, likelihood_coupled_with_prior_2])
axes = triangle_plot.make_triplot(filled_contours=True, show_intervals=True, show_contours=True)
###Output
_____no_output_____
###Markdown
Custom priorsTo add more complicated priors, we can use a CustomPriorHyperCube object. This class takes as input a function that computes a chi2 for each sample, given the full array of samples in the likelihood. As an example, we'll use this to enforce a prior on the normalization and slope of the concentration-mass relation such that a 10^10 solar mass halo has a concentration of 12.5 plus/minus 4 at z=0. Note: this cell requires the software package colossus http://www.benediktdiemer.com/code/colossus/
###Code
from colossus.halo.concentration import peaks
from colossus.cosmology import cosmology
from trikde.pdfs import CustomPriorHyperCube
kwargs_cosmo = {'H0': 69.7, 'Om0': 0.0464 + 0.235, 'Ob0': 0.0464, 'ns': 0.9608, 'sigma8': 0.82}
cosmology.setCosmology('custom', kwargs_cosmo)
# from pyHalo.Halos.lens_cosmo import LensCosmo
# lc = LensCosmo()
# c = [lc.NFW_concentration(10**10, 0., scatter=True) for i in range(0, 1000)]
# print(np.mean(c), np.std(c))
def anchor_mc_relation_function(samples, log10_m_anchor=10, c_at_anchor=12.5, sigma=4, little_h=0.697):
c8_samples = samples[:, 2]
beta_samples = samples[:, 1]
m_ref = 10 ** 8 * little_h
m = 10 ** log10_m_anchor * little_h
nu = peaks.peakHeight(m, z=0)
nu_ref = peaks.peakHeight(m_ref, z=0)
nu_ratio = nu/nu_ref
c_model = c8_samples * nu_ratio ** -beta_samples
chi2 = (c_at_anchor - c_model)**2/sigma**2
return chi2
prior_on_mc_relation = CustomPriorHyperCube(anchor_mc_relation_function, param_names, param_ranges, nbins)
likelihoods_coupled_with_prior_on_MC = likelihoods_coupled + [prior_on_mc_relation] + [prior_on_sigma_sub_2]
likelihood_coupled_with_prior_on_MC = IndepdendentLikelihoods(likelihoods_coupled_with_prior_on_MC)
from trikde.triangleplot import TrianglePlot
triangle_plot = TrianglePlot([likelihood, likelihood_coupled_with_prior_2, likelihood_coupled_with_prior_on_MC])
axes = triangle_plot.make_triplot(filled_contours=True, show_intervals=True, show_contours=True)
###Output
_____no_output_____
###Markdown
Finally, we can experiment with priors on $\beta$, $\Sigma_{\rm{sub}}$, and $\delta_{\rm{LOS}}$.
###Code
means = [1., 0.83, 18, None, 2 * 0.0244]
sigmas = [0.2, 0.2, 5., None, 0.005]
prior_on_beta_sigmasub_deltalos = MultivariateNormalPriorHyperCube(means, sigmas, param_names, param_ranges, nbins)
likelihoods_coupled_with_prior_on_beta_and_sigma_sub = likelihoods_coupled + [prior_on_beta_sigmasub_deltalos]
likelihood_coupled_with_prior_on_beta_and_sigma_sub = IndepdendentLikelihoods(likelihoods_coupled_with_prior_on_beta_and_sigma_sub)
triangle_plot = TrianglePlot([likelihood, likelihood_coupled_with_prior_2, likelihood_coupled_with_prior_on_MC,
likelihood_coupled_with_prior_on_beta_and_sigma_sub])
axes = triangle_plot.make_triplot(filled_contours=True, show_intervals=True, show_contours=True)
help(triangle_plot.get_parameter_confidence_interval)
print('bin_width: ', (0.9 + 0.6)/20)
triangle_plot.get_parameter_confidence_interval('delta_power_law_index', 0., chain_num=None, thresh=0.68)
help(triangle_plot.get_parameter_confidence_interval)
print('bin_width: ', (0.9 + 0.6)/20)
triangle_plot.get_parameter_confidence_interval('delta_power_law_index', 0., chain_num=None, thresh=0.95)
triangle_plot = TrianglePlot([likelihood_coupled_with_prior, likelihood_coupled_with_prior_2])
axes = triangle_plot.make_triplot(filled_contours=True, show_intervals=True, show_contours=True,
display_params=['LOS_normalization', 'beta', 'c0', 'delta_power_law_index'])
beta = r'$\beta$'
beta_ticks = [0., 1., 2., 3., 4.]
c0 = r'$c_8$'
c0_ticks = [1, 50, 100, 150, 200]
delta_power_law_index = r'$\Delta \alpha$'
dpli_ticks = [-0.6, -0.3, 0., 0.3, 0.6, 0.9]
sigma_sub = r'$\Sigma_{\rm{sub}}$'
sigma_sub_ticks = [0., 0.025, 0.05, 0.075, 0.1]
delta_LOS = r'$\delta_{\rm{LOS}}$'
dlos_ticks = [0., 0.5, 1., 1.5, 2.]
axes[4].set_ylabel(beta)
axes[4].set_yticks(beta_ticks)
axes[4].set_yticklabels(beta_ticks)
axes[8].set_ylabel(c0)
axes[8].set_yticks(c0_ticks)
axes[8].set_yticklabels(c0_ticks)
axes[12].set_ylabel(delta_power_law_index)
axes[12].set_yticks(dpli_ticks)
axes[12].set_yticklabels(dpli_ticks)
axes[12].set_xlabel(delta_LOS)
axes[12].set_xticks(dlos_ticks)
axes[12].set_xticklabels(dlos_ticks)
axes[13].set_xlabel(beta)
axes[13].set_xticks(beta_ticks)
axes[13].set_xticklabels(beta_ticks)
axes[14].set_xlabel(c0)
axes[14].set_xticks(c0_ticks)
axes[14].set_xticklabels(c0_ticks)
axes[15].set_xlabel(delta_power_law_index)
axes[15].set_xticks(dpli_ticks)
axes[15].set_xticklabels(dpli_ticks)
plt.savefig('pk_inference.pdf')
###Output
_____no_output_____
###Markdown
Coupled subhalo and field halo mass functionsA reasonable assumption to impose on the inference is that the number of subhalos varies proportionally with the number of field halos, since subhalos are accreted from the field. We can enforce this by choosing an expected amplitude for the subhalo mass function in $\Lambda$CDM, and then coupling variations to $\Sigma_{\rm{sub}}$ around this value to $\delta_{\rm{LOS}}$.
###Code
def couple_mass_functions(samples, sigma_sub_theory=0.05, coupling_strength=0.2):
delta_los_samples = samples[:, 0]
sigma_sub_samples = samples[:, -1]
delta_sigma_sub = sigma_sub_samples/sigma_sub_theory
chi2 = (delta_sigma_sub - delta_los_samples)**2/coupling_strength**2
return chi2
kwargs_1 = {'sigma_sub_theory': 0.05}
kwargs_2 = {'sigma_sub_theory': 0.025}
prior_on_mass_functions_1 = CustomPriorHyperCube(couple_mass_functions, param_names, param_ranges, nbins, kwargs_1)
prior_on_mass_functions_2 = CustomPriorHyperCube(couple_mass_functions, param_names, param_ranges, nbins, kwargs_2)
likelihoods_coupled_with_prior_mass_functions = likelihoods_coupled + [prior_on_mass_functions_1]
likelihood_coupled_with_prior_mass_functions_1 = IndepdendentLikelihoods(likelihoods_coupled_with_prior_mass_functions)
likelihoods_coupled_with_prior_mass_functions = likelihoods_coupled + [prior_on_mass_functions_2]
likelihood_coupled_with_prior_mass_functions_2 = IndepdendentLikelihoods(likelihoods_coupled_with_prior_mass_functions)
triangle_plot = TrianglePlot([likelihood_coupled_with_prior_mass_functions_1, likelihood_coupled_with_prior_mass_functions_2])
axes = triangle_plot.make_triplot(filled_contours=True, show_intervals=True)
###Output
_____no_output_____ |
relational/relational2-binrel-sol.ipynb | ###Markdown
Relational data 2 - binary relations [Download exercises zip](../_static/generated/relational.zip)[Browse files online](https://github.com/DavidLeoni/softpython-en/tree/master/relational)We can use graphs to model relations of many kinds, like _isCloseTo,_ _isFriendOf,_ _loves,_ etc. Here we review some of them and their properties.**Before going on, make sure to have read the first tutorial** [Relational data](https://en.softpython.org/relational/relational1-intro-sol.html) What to do- unzip exercises in a folder, you should get something like this: ```relational relational1-intro.ipynb relational1-intro-sol.ipynb relational2-binrel.ipynb relational2-binrel-sol.ipynb relational3-chal.ipynb jupman.py soft.py ```**WARNING**: to correctly visualize the notebook, it MUST be in an unzipped folder !- open Jupyter Notebook from that folder. Two things should open, first a console and then a browser. The browser should show a file list: navigate the list and open the notebook `relational/relational2-binrel.ipynb`**WARNING 2**: DO NOT use the _Upload_ button in Jupyter, instead navigate in the Jupyter browser to the unzipped folder !- Go on reading that notebook, and follow the instructions inside.Shortcut keys:- to execute Python code inside a Jupyter cell, press `Control + Enter`- to execute Python code inside a Jupyter cell AND select the next cell, press `Shift + Enter`- to execute Python code inside a Jupyter cell AND create a new cell afterwards, press `Alt + Enter`- If the notebook looks stuck, try to select `Kernel -> Restart`
###Code
from soft import draw_adj
draw_adj({
'Trento Cathedral' : ['Trento Cathedral', 'Trento Neptune Statue'],
'Trento Neptune Statue' : ['Trento Neptune Statue', 'Trento Cathedral'],
'Povo' : ['Povo'],
})
###Output
_____no_output_____
###Markdown
Some relations might not necessarily be reflexive, like "did homeworks for". You should always do your own homeworks, but to our dismay, university intelligence services caught some of you cheating. In the following example we expose the situation - due to privacy concerns, we identify students with numbers starting from zero included:
###Code
from soft import draw_mat
draw_mat(
[
[True, False, False, False],
[False, False, False, False],
[False, True, True, False],
[False, False, False, False],
]
)
###Output
_____no_output_____
###Markdown
From the graph above, we see student 0 and student 2 both did their own homeworks. Student 3 did no homeworks at all. Alarmingly, we notice student 2 did the homeworks for student 1. The resulting conspiracy shall be severely punished with a one year ban from having spritz at Emma's bar. Exercise - is_reflexive_mat✪✪ Implement a function that RETURNS `True` if the nxn boolean matrix mat, given as a list of lists, is reflexive, `False` otherwise. A graph is _reflexive_ when all nodes point to themselves. - Please at least try to make the function efficient
###Code
def is_reflexive_mat(mat):
#jupman-raise
n = len(mat)
for i in range(n):
if not mat[i][i]:
return False
return True
#/jupman-raise
assert is_reflexive_mat([ [False] ]) == False # m1
assert is_reflexive_mat([ [True] ]) == True # m2
assert is_reflexive_mat([ [False, False],
[False, False] ]) == False # m3
assert is_reflexive_mat([ [True, True],
[True, True] ]) == True # m4
assert is_reflexive_mat([ [True, True],
[False, True] ]) == True # m5
assert is_reflexive_mat([ [True, False],
[True, True] ]) == True # m6
assert is_reflexive_mat([ [True, True],
[True, False] ]) == False # m7
assert is_reflexive_mat([ [False, True],
[True, True] ]) == False # m8
assert is_reflexive_mat([ [False, True],
[True, False] ]) == False # m9
assert is_reflexive_mat([ [False, False],
[True, False] ]) == False # m10
assert is_reflexive_mat([ [False, True, True],
[True, False, False],
[True, True, True] ]) == False # m11
assert is_reflexive_mat([ [True, True, True],
[True, True, True],
[True, True, True] ]) == True # m12
###Output
_____no_output_____
###Markdown
Exercise - is_reflexive_adj✪✪ Now implement the same function for dictionaries of adjacency lists: RETURN `True` if the provided graph as a dictionary of adjacency lists is reflexive, `False` otherwise. - A graph is _reflexive_ when all nodes point to themselves.- Please at least try to make the function efficient.
###Code
def is_reflexive_adj(d):
#jupman-raise
for v in d:
if not v in d[v]:
return False
return True
#/jupman-raise
assert is_reflexive_adj({ 'a':[] }) == False # d1
assert is_reflexive_adj({ 'a':['a'] }) == True # d2
assert is_reflexive_adj({ 'a':[],
'b':[]
}) == False # d3
assert is_reflexive_adj({ 'a':['a'],
'b':['b']
}) == True # d4
assert is_reflexive_adj({ 'a':['a','b'],
'b':['b']
}) == True # d5
assert is_reflexive_adj({ 'a':['a'],
'b':['a','b']
}) == True # d6
assert is_reflexive_adj({ 'a':['a','b'],
'b':['a']
}) == False # d7
assert is_reflexive_adj({ 'a':['b'],
'b':['a','b']
}) == False # d8
assert is_reflexive_adj({ 'a':['b'],
'b':['a']
}) == False # d9
assert is_reflexive_adj({ 'a':[],
'b':['a']
}) == False # d10
assert is_reflexive_adj({ 'a':['b','c'],
'b':['a'],
'c':['a','b','c']
}) == False # d11
assert is_reflexive_adj({ 'a':['a','b','c'],
'b':['a','b','c'],
'c':['a','b','c']
}) == True # d12
###Output
_____no_output_____
###Markdown
Symmetric relationsA graph is symmetric when for all nodes, if a node A links to another node B, there is also a link from node B to A. In real life, the typical symmetric relation is "is friend of". If you are a friend of someone, that someone should also be your friend. For example, Scrooge typically is not so friendly with his lazy nephew Donald Duck, but both Scrooge and Donald Duck certainly enjoy visiting the farm of Grandma Duck, so we can model their friendship relation like this:
###Code
from soft import draw_adj
draw_adj({
'Donald Duck' : ['Grandma Duck'],
'Scrooge' : ['Grandma Duck'],
'Grandma Duck' : ['Scrooge', 'Donald Duck'],
})
###Output
_____no_output_____
###Markdown
Note that Scrooge is not linked to Donald Duck, but this does not mean the whole graph cannot be considered symmetric. If you pay attention to the definition above, there is _if_ written at the beginning: _if_ a node A links to another node B, there is also a link from node B to A. **QUESTION**: Looking purely at the above definition (so do _not_ consider the 'is friend of' relation), should a symmetric relation necessarily be reflexive? **ANSWER**: No, in a symmetric relation some nodes can be linked to themselves, while some other nodes may have no link to themselves. All we care about when checking symmetry is links from a node to _other_ nodes. **QUESTION**: Think about the semantics of the specific "is friend of" relation: can you think of a social network where the relation is not shown as reflexive? **ANSWER**: The particular case of the "is friend of" relation is interesting, as it prompts us to think about the semantic meaning of the relation: obviously, everybody _should_ be a friend of himself/herself - but if we were to implement, say, a social network service like Facebook, it would look rather useless to show in your friends list the information that you are a friend of yourself. **QUESTION**: Still talking about the specific semantics of the "is friend of" relation: can you think of some case where it would be meaningful to store information about individuals _not_ being friends of themselves? **ANSWER**: in real life it may always happen that we find fringe cases - suppose you are given the task to model a network of possibly depressed people with self-harming tendencies. So always be sure your model correctly fits the problem at hand. Some relations may or may not be symmetric, depending on the graph at hand. Think about the relation _loves_. It is well known that Mickey Mouse loves Minnie and the sentiment is reciprocal, and Donald Duck loves Daisy Duck and the sentiment is reciprocal. We can conclude this particular graph is symmetric:
###Code
from soft import draw_adj
draw_adj({
'Donald Duck' : ['Daisy Duck'],
'Daisy Duck' : ['Donald Duck'],
'Mickey Mouse' : ['Minnie'],
'Minnie' : ['Mickey Mouse']
})
###Output
_____no_output_____
###Markdown
But what about this one? Donald Duck is not the only duck in town and sometimes a contender shows up: [Gladstone Gander](https://en.wikipedia.org/wiki/Gladstone_Gander) (Gastone in Italian) would also like the attention of Daisy (never mind that in some comics he actually gets it when Donald Duck messes up big time):
###Code
from soft import draw_adj
draw_adj({
'Donald Duck' : ['Daisy Duck'],
'Daisy Duck' : ['Donald Duck'],
'Mickey Mouse' : ['Minnie'],
'Minnie' : ['Mickey Mouse'],
'Gladstone Gander' : ['Daisy Duck']
})
###Output
_____no_output_____
###Markdown
Exercise - is_symmetric_mat✪✪ Implement an automated procedure to check whether or not a graph is symmetric: given an `n`x`n` boolean matrix mat as a list of lists, RETURN `True` if it is symmetric, `False` otherwise. - A graph is symmetric when for all nodes, if a node A links to another node B, there is also a link from node B to A.
###Code
def is_symmetric_mat(mat):
#jupman-raise
n = len(mat)
for i in range(n):
for j in range(n):
if mat[i][j] and not mat[j][i]:
return False
return True
#/jupman-raise
assert is_symmetric_mat([ [False] ]) == True # m1
assert is_symmetric_mat([ [True] ]) == True # m2
assert is_symmetric_mat([ [False, False],
[False, False] ]) == True # m3
assert is_symmetric_mat([ [True, True],
[True, True] ]) == True # m4
assert is_symmetric_mat([ [True, True],
[False, True] ]) == False # m5
assert is_symmetric_mat([ [True, False],
[True, True] ]) == False # m6
assert is_symmetric_mat([ [True, True],
[True, False] ]) == True # m7
assert is_symmetric_mat([ [False, True],
[True, True] ]) == True # m8
assert is_symmetric_mat([ [False, True],
[True, False] ]) == True # m9
assert is_symmetric_mat([ [False, False],
[True, False] ]) == False # m10
assert is_symmetric_mat([ [False, True, True],
[True, False, False],
[True, True, True] ]) == False # m11
assert is_symmetric_mat([ [False, True, True],
[True, False, True],
[True, True, True] ]) == True # m12
###Output
_____no_output_____
###Markdown
Exercise - is_symmetric_adj✪✪ Now implement the same as before but for a dictionary of adjacency lists: RETURN `True` if the given dictionary of adjacency lists is symmetric, `False` otherwise. - Assume all the nodes are represented in the keys. - A graph is symmetric when for all nodes, if a node A links to another node B, there is also a link from node B to A.
###Code
def is_symmetric_adj(d):
#jupman-raise
for k in d:
for v in d[k]:
if not k in d[v]:
return False
return True
#/jupman-raise
assert is_symmetric_adj({ 'a':[] }) == True # d1
assert is_symmetric_adj({ 'a':['a'] }) == True # d2
assert is_symmetric_adj({ 'a' : [],
'b' : []
}) == True # d3
assert is_symmetric_adj({ 'a' : ['a','b'],
'b' : ['a','b']
}) == True # d4
assert is_symmetric_adj({ 'a' : ['a','b'],
'b' : ['b']
}) == False # d5
assert is_symmetric_adj({ 'a' : ['a'],
'b' : ['a','b']
}) == False # d6
assert is_symmetric_adj({ 'a' : ['a','b'],
'b' : ['a']
}) == True # d7
assert is_symmetric_adj({ 'a' : ['b'],
'b' : ['a','b']
}) == True # d8
assert is_symmetric_adj({ 'a' : ['b'],
'b' : ['a']
}) == True # d9
assert is_symmetric_adj({ 'a' : [],
'b' : ['a']
}) == False # d10
assert is_symmetric_adj({ 'a' : ['b', 'c'],
'b' : ['a'],
'c' : ['a','b','c']
}) == False # d11
assert is_symmetric_adj({ 'a' : ['b', 'c'],
'b' : ['a','c'],
'c' : ['a','b','c']
}) == True # d12
###Output
_____no_output_____
###Markdown
Surjective relationsIf we consider a graph as an nxn binary relation where the domain is the same as the codomain, such a relation is called _surjective_ if every node is reached by _at least_ one edge. For example, `G1` here is surjective, because there is at least one edge reaching into each node (self-loops, as in node 0, also count as incoming edges)
###Code
G1 = [
[True, True, False, False],
[False, False, False, True],
[False, True, True, False],
[False, True, True, True],
]
draw_mat(G1)
###Output
_____no_output_____
###Markdown
`G2` down here instead does not represent a surjective relation, as there is _at least_ one node ( `2` in our case) which does not have any incoming edge:
###Code
G2 = [
[True, True, False, False],
[False, False, False, True],
[False, True, False, False],
[False, True, False, False],
]
draw_mat(G2)
###Output
_____no_output_____
###Markdown
Exercise - surjective✪✪ RETURN `True` if the provided graph `mat`, given as a list of boolean lists, is an `n`x`n` surjective binary relation, otherwise return `False`
###Code
def surjective(mat):
#jupman-raise
n = len(mat)
c = 0 # number of incoming edges found
for j in range(len(mat)): # go column by column
for i in range(len(mat)): # go row by row
if mat[i][j]:
c += 1
break # as you find first incoming edge, increment c and stop search for that column
return c == n
#/jupman-raise
m1 = [ [False] ]
assert surjective(m1) == False
m2 = [ [True] ]
assert surjective(m2) == True
m3 = [ [True, False],
[False, False] ]
assert surjective(m3) == False
m4 = [ [False, True],
[False, False] ]
assert surjective(m4) == False
m5 = [ [False, False],
[True, False] ]
assert surjective(m5) == False
m6 = [ [False, False],
[False, True] ]
assert surjective(m6) == False
m7 = [ [True, False],
[True, False] ]
assert surjective(m7) == False
m8 = [ [True, False],
[False, True] ]
assert surjective(m8) == True
m9 = [ [True, True],
[False, True] ]
assert surjective(m9) == True
m10 = [ [True, True, False, False],
[False, False, False, True],
[False, True, False, False],
[False, True, False, False] ]
assert surjective(m10) == False
m11 = [ [True, True, False, False],
[False, False, False, True],
[False, True, True, False],
[False, True, True, True] ]
assert surjective(m11) == True
###Output
_____no_output_____ |
12_Clustering/Clustering.ipynb | ###Markdown
Clustering Methods> Weitong Zhang> 2015011493>> No empty subset in k-meansFirst of all, let's suppose that there is at least one empty subset $D_\phi$ generated by the k-means clustering method. Since in most cases $c \le n$ (otherwise, the k-means method is useless), the number of non-empty subsets $c_{D\ne\phi}$ would be less than $n$. According to the pigeonhole principle, there is at least one subset which contains at least 2 samples; let's call this subset $D_\kappa$. Therefore, we can reconstruct the partition by moving the first sample $D_{\kappa0}$ to one empty subset while keeping the other subsets fixed; thereby we remove an empty subset and obtain a subset with only one sample. Let's assume that the original division strategy leads to a loss $J$, while the new one has a loss $J'$; we can easily conclude that:$$J - J' = \sum_{i=0, D_{\kappa}}(x_i - m_{D_\kappa})^2 - \sum_{i=1, D_{\kappa'}}(x_i - m_{D_\kappa'})^2 \ge \sum_{i=1, D_{\kappa}}(x_i - m_{D_\kappa})^2- \sum_{i=1, D_{\kappa'}}(x_i - m_{D_\kappa'})^2$$Note that the subset with only one sample obviously has no loss. In the subset $D_{\kappa'}$, the center $m_{D_\kappa'}$ obviously gives the lowest value of $\sum_{i=1, D_{\kappa'}}(x_i - m)^2$, that is to say, $\sum_{i=1, D_{\kappa}}(x_i - m_{D_\kappa})^2- \sum_{i=1, D_{\kappa'}}(x_i - m_{D_\kappa'})^2 \ge 0$. Therefore $J - J' \ge 0$, with $J = J'$ only when, in each subset, all of the samples are the same point, which would make any clustering method useless. In conclusion, whenever we can find a division strategy with an empty subset, we can find one with this subset non-empty and improve the performance of the clustering; that is to say, no empty subset will be produced by the k-means method. Programming Brief analysis of the time complexity of the k-means methodFor MNIST data, the similarity function between two samples is an $\mathcal O(m)$ operation, where $m = 784$. Therefore, in each loop of the k-means method the algorithm carries out an $\mathcal O(n \cdot k \cdot m)$ computation, so if a total of $T$ loops are executed, the time complexity is $\mathcal O(T \cdot n \cdot k \cdot m)$. Therefore, the total time complexity of the k-means method is $\mathcal O(n)$ with respect to $n$ (regarding $k,m,T$ as constants, where $T$ might be determined by the distribution of the samples). An $\mathcal O(n)$ algorithm can manage $1\times 10^6$ samples in seconds in most cases (considering the big constant $T\times k\times m$). Hierarchical clusteringFor MNIST data, the similarity function between two samples is an $\mathcal O(m)$ operation, where $m = 784$. For each iteration, the time cost varies from 1 to $\mathcal O(n^2)$; therefore, we can briefly conclude that the total time cost is up to $\mathcal O(n^3)$. An $\mathcal O(n^3)$ algorithm can manage $1\times 10^2$ samples in seconds in most cases. Spectral clusteringThe time cost of constructing the similarity matrix is $\mathcal O(n^2)$, and no other step has a longer time cost, therefore the total time complexity is $\mathcal O(n^2)$. An $\mathcal O(n^2)$ algorithm can manage $1\times 10^3$ samples in seconds in most cases.
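As a rough sanity check of the k-means estimate above, we can plug in MNIST-like numbers; the values of $T$ and $k$ below are assumptions chosen only for illustration.

```python
# back-of-the-envelope operation count for k-means, assuming T = 50 iterations and k = 10 clusters
n, m, k, T = 10**6, 784, 10, 50
operations = T * n * k * m  # O(T * n * k * m) coordinate-level distance evaluations
print(f"roughly {operations:.1e} elementary operations; grows linearly in n for fixed T, k, m")
```

Estimation Verification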
###Code
import numpy as np
np.seterr('ignore')
from matplotlib import pyplot as plt
import time
class cluster:
'''Base Class of all of the 3 cluter'''
def __init__(self, k):
'''
@para k: number of cluster
'''
self.k = k
def predict(self, X):
'''
@para X: samples x dimension array
@ret y : samples x 1 array, range from 0 ~ k-1
'''
raise NotImplementedError
class k_means(cluster):
def predict(self, X):
y = np.array([index % self.k for index in range(X.shape[0])])
while True:
# construct the center of each class
M = np.zeros((0,X.shape[1]))
count = []
for clazz in range(self.k):
samples = X[y == clazz,:]
count.append(samples.shape[0])
samples_mean = np.mean(samples,axis=0)
M = np.vstack((M,samples_mean))
# Correct the cluster by sample
if_continue = False
for index in range(X.shape[0]):
new_class = y[index]
score_ori = count[y[index]] / (count[y[index]] - 1) * np.linalg.norm(X[index,:] - M[y[index],:])
for clazz in range(self.k):
score_des = count[clazz] / (count[clazz] - 1) * np.linalg.norm(X[index,:] - M[clazz,:])
if score_des < score_ori:
score_ori = score_des
new_class = clazz
                        if_continue = True
y[index] = new_class
if not if_continue:
break
return y
class hierarhical(cluster):
def __init__(self,k,method='mean'):
cluster.__init__(self,k)
if method == 'mean':
self.method = self.mean_dist
elif method == 'min':
self.method = self.min_dist
elif method == 'max':
self.method = self.max_dist
else:
raise NoMethodError
def predict(self, X):
T = self.build_up(X)
leafs = [T]
for _ in range(self.k - 1):
is_cal = False
for idx in range(len(leafs)):
#split the leaf with the largest diameter
if leafs[idx].ismerged:
d = self.diameter(X[leafs[idx].index,:])
if not is_cal or D < d:
best_idx = idx
D = d
is_cal = True
leafs[best_idx].ismerged = False
leafs.append(leafs[best_idx].lchild)
leafs.append(leafs[best_idx].rchild)
y = self.k * np.ones(X.shape[0])
clazz = 0
for leaf in leafs:
if leaf.ismerged:
y[leaf.index] = clazz
clazz += 1
return y
def mean_dist(self,X1,X2):
'''
Calculate the mean distance of between the cluster
@para X1,X2: samples x dimension
'''
count = 0
total = 0
for index1 in range(X1.shape[0]):
for index2 in range(X2.shape[0]):
total += np.linalg.norm(X1[index1,:] - X2[index2,:])
count += 1
return total / count
def min_dist(self,X1,X2):
'''
Calculate the min distance of between the cluster
@para X1,X2: samples x dimension
'''
is_cal = False
for index1 in range(X1.shape[0]):
for index2 in range(X2.shape[0]):
d = np.linalg.norm(X1[index1,:] - X2[index2,:])
if not is_cal or min_d > d:
min_d = d
is_cal = True
return min_d
def max_dist(self,X1,X2):
'''
Calculate the min distance of between the cluster
@para X1,X2: samples x dimension
'''
is_cal = False
for index1 in range(X1.shape[0]):
for index2 in range(X2.shape[0]):
d = np.linalg.norm(X1[index1,:] - X2[index2,:])
if not is_cal or max_d < d:
max_d = d
is_cal = True
return max_d
def diameter(self,X):
'''
Calculate the diameter of a cluster
@para X1: samples x dimension
'''
D = 0
for index1 in range(X.shape[0]):
for index2 in range(X.shape[0]):
d = np.linalg.norm(X[index1,:] - X[index2,:])
if d > D:
D = d
return D
class train_node:
''' Node of the train to be build'''
def __init__(self,index,lchild,rchild):
self.index = index
self.lchild = lchild
self.rchild = rchild
self.ismerged = False
def build_up(self,X):
'''
Build up the hierarhical tree
@para X: samples x dimensions
@ret T: a build_up tree
'''
# Init a lot of tree nodes
nodes = []
for idx in range(X.shape[0]):
nodes.append(self.train_node([idx],None,None))
# Loop until all merged:
while True:
calculated = False
for idx in range(len(nodes)):
if nodes[idx].ismerged:
continue
for idx_cmp in range(len(nodes)):
if nodes[idx_cmp].ismerged or idx == idx_cmp:
continue
score = self.method(X[nodes[idx].index,:],X[nodes[idx_cmp].index,:])
if not calculated or score < best_score:
best_score = score
calculated = True
best1 = idx
best2 = idx_cmp
calculated = True
if not calculated:
break
# merge the two best one
nodes[best1].ismerged = True
nodes[best2].ismerged = True
nodes.append(self.train_node(nodes[best1].index + nodes[best2].index,nodes[best1],nodes[best2]))
for node in nodes:
if not node.ismerged:
node.ismerged = True
return node
class spectral(cluster):
def __init__(self,k):
cluster.__init__(self,k)
self.sigma = 1
def predict(self, X):
W = np.zeros((X.shape[0],X.shape[0]))
for i in range(X.shape[0]):
for j in range(X.shape[0]):
delta = X[i,:] - X[j,:]
W[i,j] = np.exp(-np.matmul(delta.T,delta)/2 / self.sigma)
d = np.sum(W,axis=0)
D = np.diag(d)
DD = np.diag(np.true_divide(1, np.sqrt(d)))
L = np.matmul(np.matmul(DD,D - W),DD)
w,v = np.linalg.eig(L)
W= np.vstack((w,v))
W = W[:,W[0].argsort()]
data = W[1:,:self.k]
c = k_means(self.k)
return c.predict(data)
plt.rcParams["figure.figsize"] = [15,5]
# k-means
plt.subplot(1,3,1)
rec = []
num = range(100,50000,100)
for n in num:
c = k_means(4)
a = time.clock()
c.predict(np.random.rand(n,2))
rec.append(time.clock() - a)
plt.plot(np.log(np.array(rec)),np.log(np.array(num)))
plt.title('k-means Clustering')
# hierarhical
plt.subplot(1,3,2)
rec = []
num = range(10,200,10)
for n in num:
c = hierarhical(4)
a = time.clock()
c.predict(np.random.rand(n,2))
rec.append(time.clock() - a)
plt.plot(np.log(np.array(rec)),np.log(np.array(num)))
plt.title('Hierarhical Clustering')
# spectral
plt.subplot(1,3,3)
rec = []
num = range(10,1000,10)
for n in num:
c = spectral(4)
a = time.clock()
c.predict(np.random.rand(n,2))
rec.append(time.clock() - a)
plt.plot(np.log(np.array(rec)),np.log(np.array(num)))
plt.title('Spectral Clustering')
plt.show()
###Output
_____no_output_____
###Markdown
Experiment ResultsThe plot above is the $N$–$T$ plot of the three algorithms on random input. The vertical axis is $\ln N$ and the horizontal axis stands for $\ln T$. We can find that - k-means: $\ln N \approx \ln T + b\ \Rightarrow T = \mathcal O(N)$- Hierarchical: $\ln N \approx 0.25 \ln T + b \ \Rightarrow T = \mathcal O(N^4)$- Spectral: $\ln N \approx 0.5\ln T + b \ \Rightarrow T = \mathcal O(N^2)$The time complexity matches our estimation for k-means and spectral. The time complexity for hierarchical is a little greater than the estimate (but this might be because of the naive implementation). Speed up methodsThere are indeed some speed-up methods for the hierarchical method; for example, we can replace the 'mean distance' with the 'distance between cluster centers', which speeds up the between-cluster computation from $\mathcal O(n^2)$ to $\mathcal O(n)$.
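The empirical exponents quoted above can also be estimated from the timing data instead of being read off the plots by eye; a minimal sketch (it reuses the `num` and `rec` lists from the last timing cell, so as written it reports the exponent of whichever algorithm was timed last):

```python
import numpy as np

# slope of ln(T) versus ln(N) gives the empirical exponent of the running time
log_n = np.log(np.array(num))
log_t = np.log(np.array(rec))
slope, intercept = np.polyfit(log_n, log_t, 1)
print(f"estimated exponent: T = O(N^{slope:.2f})")
```

Testing with MNIST dataset Data set splitIn order to speed up the calculation, we use 100 samples for hierarchical clustering, 1,000 samples for spectral clustering and 10,000 samples for the k-means clustering method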
###Code
from sklearn.datasets import fetch_mldata
from sklearn.model_selection import train_test_split
mnist = fetch_mldata('MNIST original')
data = mnist['data'] / 255
label = mnist['target']
_, X_hierarhical, _, y_hierarhical = train_test_split(data, label, test_size=100)
_, X_spectral, _, y_spectral = train_test_split(data, label, test_size=1000)
_, X_k_means, _, y_k_means = train_test_split(data, label, test_size=10000)
###Output
_____no_output_____
###Markdown
Solving k-means Dealing with sensitivity to initializationIn our implementation of k-means, we simply put every point into a random cluster; however, we can initialize the k-means algorithm with a method called k-means++, which selects the first cluster center randomly and then selects each following cluster center as far from the existing centers as possible. The k-means++ method can refine the initialization to a great extent; a minimal sketch of such an initialization is given below.
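The following sketch is not wired into the `k_means` class above; it only illustrates the seeding idea, using the usual probabilistic k-means++ rule (each new center is sampled proportionally to the squared distance from the nearest already-chosen center):

```python
import numpy as np

def kmeanspp_init(X, k, seed=0):
    """Pick k initial centers: the first at random, the rest far from the centers already chosen."""
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(X.shape[0])]]
    for _ in range(k - 1):
        # squared distance of every point to its nearest chosen center
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(X.shape[0], p=d2 / d2.sum())])
    return np.array(centers)
```

$J_e$ with NMI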
###Code
def Je(X,y,k):
'''
Calc the J_e for k-means
@para X: samples x dimensions
@para y: samples x 1
@para k: numbers of cluster
@ret Je: \sum|x-m|^2
'''
M = np.zeros((0,X.shape[1]))
for clazz in range(k):
samples = X[y == clazz,:]
samples_mean = np.mean(samples,axis=0)
M = np.vstack((M,samples_mean))
loss = 0
for idx in range(X.shape[0]):
delta = X[idx,:] - M[y[idx],:]
loss += np.matmul(delta.T, delta)
return loss / X.shape[0]
# Je and NMI for k_means
from sklearn.metrics.cluster import normalized_mutual_info_score as NMI
c = k_means(10)
y = c.predict(X_k_means)
je = Je(X_k_means,y,10)
nmi = NMI(y_k_means,y)
print ('J_e = {:.3f}, NMI = {:5f}'.format(je,nmi))
# Is Je and NMI match?
# random labeling
je = []
nmi = []
for _ in range(1000):
y_random = np.random.randint(10,size=(10000,))
je.append(Je(X_k_means,y_random,10))
nmi.append(NMI(y_k_means,y_random))
plt.scatter(np.array(je),np.array(nmi))
plt.ylim((0,0.004))
plt.show()
###Output
_____no_output_____
###Markdown
Do $J_e$ and NMI match?From the random testing, we can find that $J_e$ and NMI do not match exactly; however, the problem with random labeling is that the NMI score is far too low. In fact, a good clustering reaches a much lower $J_e$, e.g. k-means can generate labels with $J_e \approx 50$, and on that scale a lower $J_e$ tends to correspond to a greater NMI. Solving hierarchical clusteringSince we are using Python instead of MATLAB, we implement some simple linkage methods manually instead of using the linkage methods provided by MATLAB. According to the experiment, it seems that the max linkage method is the best linkage method among the 3 methods provided below
###Code
# mean distance:
for method in ['mean','min','max']:
c = hierarhical(10,method)
y_predict = c.predict(X_hierarhical)
nmi = NMI(y_hierarhical,y_predict)
print ('Using {} linkage method, NMI = {}'.format(method,nmi))
###Output
Using mean linkage method, NMI = 0.5523632982305717
Using min linkage method, NMI = 0.20548233411537595
Using max linkage method, NMI = 0.5475338769365315
###Markdown
Solving spectral clusteringIt is easy to see that the kNN similarity graph is costly to construct and not recommended. We have already carried out some simple experiments (not included in this notebook) with the kNN similarity matrix. Based on those experiments, we conclude that using the Gaussian similarity is a good idea. The following code deals with the selection of $\sigma$ in $s(i,j) = \exp(-\frac{d(i,j)^2}{2\sigma})$. According to the experiment, it seems that $\sigma = 0.3 \sim 3$ is ok.
###Code
c = spectral(10)
for sigma in [0.1,0.3,1,3]:
c.sigma = sigma
y_predict = c.predict(X_spectral)
nmi = NMI(y_spectral,y_predict)
print ('Using sigma = {}, NMI = {}'.format(sigma,nmi))
###Output
Using sigma = 0.1, NMI = 0.18529179592749784
Using sigma = 0.3, NMI = 0.08539695907142038
Using sigma = 1, NMI = 0.2840494648254438
Using sigma = 3, NMI = 0.2977927319444501
###Markdown
> CLAIM:>> Here, we use the NMI function provided by sklearn; however, it seems that a larger data set might give a lower NMI, possibly because a larger data set is harder to divide well. Choosing the right $k$For all methods, we find that the $J_e$ loss decreases as $k$ increases. We choose the $k$ where the second-order difference is at its largest:$$k = \arg\max_i \left[ J_e(i+1) - 2J_e(i) + J_e(i-1) \right]$$
###Code
def find_k(method,method_name,X_method):
je = []
for k in range(5,15):
c = method(k)
y_predict = c.predict(X_method).astype(int)
#print(y_predict)
je.append(Je(X_method,y_predict,k))
sec_diff = []
for idx in range(len(je)-2):
sec_diff.append(je[idx + 2] - 2 * je[idx + 1] + je[idx])
print ('For {}, the best k is {}'.format(method_name, np.argmax(np.array(sec_diff)) + 6))
find_k(k_means,'k-means',X_k_means)
find_k(hierarhical,'hierarhical',X_hierarhical)
find_k(spectral,'spectral',X_spectral)
###Output
For k-means, the best k is 12
For hierarhical, the best k is 6
|
LSTM for text classification-using pytorch.ipynb | ###Markdown
LSTM in Pytorch
###Code
#library imports
!pip install torch
!pip install spacy
!pip install jovian
import torch
import torch.nn as nn
import pandas as pd
import numpy as np
import re
import spacy
import jovian
from collections import Counter
from torch.utils.data import Dataset, DataLoader
import torch.nn.functional as F
import string
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
from sklearn.metrics import mean_squared_error
#input (Basic LSTM in PyTorch with Random numbers)
x = torch.tensor([[1,2, 12,34, 56,78, 90,80],
[12,45, 99,67, 6,23, 77,82],
[3,24, 6,99, 12,56, 21,22]])
###Output
_____no_output_____
###Markdown
using two different models
###Code
model1 = nn.Embedding(100, 7, padding_idx=0)
model2 = nn.LSTM(input_size=7, hidden_size=3, num_layers=1, batch_first=True)
out1 = model1(x)
out2 = model2(out1)
print(out1.shape)
print(out1)
out, (ht, ct) = model2(out1)
print(ht)
###Output
tensor([[[-0.1589, -0.0241, -0.0439],
[ 0.1044, -0.1058, 0.0247],
[-0.2264, 0.2256, 0.2958]]], grad_fn=<StackBackward>)
###Markdown
using nn.Sequential
###Code
model3 = nn.Sequential(nn.Embedding(100, 7, padding_idx=0),
nn.LSTM(input_size=7, hidden_size=3, num_layers=1, batch_first=True))
out, (ht, ct) = model3(x)
print(out)
###Output
tensor([[[ 0.0582, 0.0995, 0.0592],
[ 0.2026, 0.3528, -0.3194],
[ 0.2184, 0.2725, -0.5097],
[ 0.3748, -0.1079, -0.3855],
[ 0.3830, -0.0189, -0.3125],
[ 0.2868, -0.1447, -0.2994],
[ 0.1497, 0.0120, -0.3394],
[ 0.2409, 0.0047, -0.0993]],
[[ 0.1437, 0.0394, -0.2014],
[ 0.1259, 0.1149, 0.0271],
[ 0.1465, 0.2317, 0.0763],
[ 0.2318, -0.0348, -0.0095],
[ 0.3190, 0.0747, -0.3237],
[ 0.2444, -0.1477, -0.2047],
[ 0.1595, -0.0876, -0.0358],
[ 0.0333, -0.0167, -0.0888]],
[[ 0.1608, 0.1862, -0.1811],
[ 0.0418, -0.0486, 0.0225],
[ 0.2414, 0.0597, -0.2811],
[ 0.1673, 0.1486, 0.0118],
[ 0.2229, 0.0901, -0.2066],
[ 0.3826, 0.0506, -0.2140],
[ 0.2935, -0.3682, -0.0411],
[ 0.1645, -0.1411, -0.0221]]], grad_fn=<TransposeBackward0>)
###Markdown
Multiclass Text ClassificationWe are going to predict item ratings based on customer reviews, using a women's clothing e-commerce dataset from Kaggle
###Code
#loading the data
reviews = pd.read_csv("C:/Users/simmu/Downloads/archive (1)/Womens Clothing E-Commerce Reviews.csv")
print(reviews.shape)
reviews.head()
reviews['Title'] = reviews['Title'].fillna('')
reviews['Review Text'] = reviews['Review Text'].fillna('')
reviews['review'] = reviews['Title'] + ' ' + reviews['Review Text']
#keeping only relevant columns and calculating sentence lengths
reviews = reviews[['review', 'Rating']]
reviews.columns = ['review', 'rating']
reviews['review_length'] = reviews['review'].apply(lambda x: len(x.split()))
reviews.head()
#changing ratings to 0-numbering
zero_numbering = {1:0, 2:1, 3:2, 4:3, 5:4}
reviews['rating'] = reviews['rating'].apply(lambda x: zero_numbering[x])
#mean sentence length
np.mean(reviews['review_length'])
#tokenization
from spacy.cli.download import download
download(model="en_core_web_sm")
tok = spacy.load('en_core_web_sm')
def tokenize (text):
text = re.sub(r"[^\x00-\x7F]+", " ", text)
regex = re.compile('[' + re.escape(string.punctuation) + '0-9\\r\\t\\n]') # remove punctuation and numbers
nopunct = regex.sub(" ", text.lower())
return [token.text for token in tok.tokenizer(nopunct)]
#count number of occurences of each word
counts = Counter()
for index, row in reviews.iterrows():
counts.update(tokenize(row['review']))
#deleting infrequent words
print("num_words before:",len(counts.keys()))
for word in list(counts):
if counts[word] < 2:
del counts[word]
print("num_words after:",len(counts.keys()))
#creating vocabulary
vocab2index = {"":0, "UNK":1}
words = ["", "UNK"]
for word in counts:
vocab2index[word] = len(words)
words.append(word)
def encode_sentence(text, vocab2index, N=70):
tokenized = tokenize(text)
encoded = np.zeros(N, dtype=int)
enc1 = np.array([vocab2index.get(word, vocab2index["UNK"]) for word in tokenized])
length = min(N, len(enc1))
encoded[:length] = enc1[:length]
return encoded, length
reviews['encoded'] = reviews['review'].apply(lambda x: np.array(encode_sentence(x,vocab2index )))
reviews.head()
#check how balanced the dataset is
Counter(reviews['rating'])
X = list(reviews['encoded'])
y = list(reviews['rating'])
from sklearn.model_selection import train_test_split
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.2)
class ReviewsDataset(Dataset):
def __init__(self, X, Y):
self.X = X
self.y = Y
def __len__(self):
return len(self.y)
def __getitem__(self, idx):
return torch.from_numpy(self.X[idx][0].astype(np.int32)), self.y[idx], self.X[idx][1]
train_ds = ReviewsDataset(X_train, y_train)
valid_ds = ReviewsDataset(X_valid, y_valid)
def train_model(model, epochs=10, lr=0.001):
parameters = filter(lambda p: p.requires_grad, model.parameters())
optimizer = torch.optim.Adam(parameters, lr=lr)
for i in range(epochs):
model.train()
sum_loss = 0.0
total = 0
for x, y, l in train_dl:
x = x.long()
y = y.long()
y_pred = model(x, l)
optimizer.zero_grad()
loss = F.cross_entropy(y_pred, y)
loss.backward()
optimizer.step()
sum_loss += loss.item()*y.shape[0]
total += y.shape[0]
val_loss, val_acc, val_rmse = validation_metrics(model, val_dl)
if i % 5 == 1:
print("train loss %.3f, val loss %.3f, val accuracy %.3f, and val rmse %.3f" % (sum_loss/total, val_loss, val_acc, val_rmse))
def validation_metrics (model, valid_dl):
model.eval()
correct = 0
total = 0
sum_loss = 0.0
sum_rmse = 0.0
for x, y, l in valid_dl:
x = x.long()
y = y.long()
y_hat = model(x, l)
loss = F.cross_entropy(y_hat, y)
pred = torch.max(y_hat, 1)[1]
correct += (pred == y).float().sum()
total += y.shape[0]
sum_loss += loss.item()*y.shape[0]
sum_rmse += np.sqrt(mean_squared_error(pred, y.unsqueeze(-1)))*y.shape[0]
return sum_loss/total, correct/total, sum_rmse/total
batch_size = 5000
vocab_size = len(words)
train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=True)
val_dl = DataLoader(valid_ds, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
LSTM with fixed length input
###Code
class LSTM_fixed_len(torch.nn.Module) :
def __init__(self, vocab_size, embedding_dim, hidden_dim) :
super().__init__()
self.embeddings = nn.Embedding(vocab_size, embedding_dim, padding_idx=0)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, batch_first=True)
self.linear = nn.Linear(hidden_dim, 5)
self.dropout = nn.Dropout(0.2)
def forward(self, x, l):
x = self.embeddings(x)
x = self.dropout(x)
lstm_out, (ht, ct) = self.lstm(x)
return self.linear(ht[-1])
model_fixed = LSTM_fixed_len(vocab_size, 50, 50)
train_model(model_fixed, epochs=30, lr=0.01)
train_model(model_fixed, epochs=30, lr=0.01)
train_model(model_fixed, epochs=30, lr=0.01)
###Output
_____no_output_____ |
Prosjekt_del_1_Emil_Neby_+_Cornelia_Plesner.ipynb | ###Markdown
I used contents from these sources to create this Colab notebook: 1. https://colab.research.google.com/github/asifahmed90/pyspark-ML-in-Colab/blob/master/PySpark_Regression_Analysis.ipynb 2. https://gist.github.com/dvainrub/b6178dc0e976e56abe9caa9b72f73d4a **OUTCOME: having an environment to develop Spark apps in Python3** **Step 0: setting things up in Google Colab**First, we need to install all the dependencies in the Colab environment, like Apache `Spark 3 with Hadoop 2.7`, `Python 3.6`, `Java 11` (and a helper Python package named `Findspark`)
###Code
!apt-get install openjdk-11-jdk-headless -qq > /dev/null
!wget -q https://downloads.apache.org/spark/spark-3.0.2/spark-3.0.2-bin-hadoop2.7.tgz
!tar xf spark-3.0.2-bin-hadoop2.7.tgz
!rm -rf spark-3.0.2-bin-hadoop2.7.tgz*
!pip -q install findspark pyspark
###Output
_____no_output_____
###Markdown
Now that you installed Spark and Java in Colab, it is time to set some environment variables. We need to set the values for `JAVA_HOME` and `SPARK_HOME` (and `HADOOP_HOME`), as shown below:
###Code
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-11-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark-3.0.2-bin-hadoop2.7"
os.environ["HADOOP_HOME"] = os.environ["SPARK_HOME"]
###Output
_____no_output_____
###Markdown
**Step 1: downloading project's dataset**Now let's download the project's dataset from Github. You can read the dataset for the course's project from `datasets/data/TDT4305_S2021`
###Code
!rm -rf datasets
!git clone --depth=1 -q https://github.com/habedi/datasets
!ls datasets/data/TDT4305_S2021
###Output
badges.csv.gz 'Description of the data.pdf' users.csv.gz
comments.csv.gz posts.csv.gz
###Markdown
**Step 2: checking the Spark installation**Run a local spark session to test your installation:
###Code
import findspark
findspark.init()
###Output
_____no_output_____
###Markdown
**Step 3: making a helper method for creating a SparkContext variable**You can use `init_spark` to create a new `SparkContext` variable and use it
###Code
from pyspark.sql import SparkSession
def init_spark(app_name="HelloWorldApp", execution_mode="local[*]"):
spark = SparkSession.builder.master(execution_mode).appName(app_name).getOrCreate()
sc = spark.sparkContext
return spark, sc
###Output
_____no_output_____
###Markdown
**Step 4: a HelloWorld Spark app**Our first Spark application; it takes a list of numbers and squares each element and returns the list of squared numbers
###Code
def main1():
_, sc = init_spark()
nums = sc.parallelize([1, 2, 3, 4])
print(nums.map(lambda x: x*x).collect())
if __name__ == '__main__':
main1()
###Output
[1, 4, 9, 16]
###Markdown
**Step 5: another Spark app that loads a CSV file into an RDD**An app that loads `users.csv.gz` into an RDD Spark Project Part 1The goal of part 1 of this project is to become familiar with Big Data processing and Spark, and to use Spark to carry out a simple analysis of the data provided. Big Data is characterized as large and/or complex datasets that require specialized applications to process their content. By providing large amounts of valuable information, it may help stakeholders recognize or obtain an understanding of various insights and thus be a large part of decision-making.[2] To operate on Big Data, a commonly used framework is Apache Spark. It is supposed to efficiently run applications while supporting several languages and advanced analytics[1]. The fundamental data structure in Apache Spark is the Resilient Distributed Dataset (RDD)[3]. RDDs are immutable, partitioned collections of objects, on which different actions can be executed to collect various results. A DataFrame (DF) is a distributed collection of Row objects, and is equivalent to a table in a relational database system. Task 1The first task consists of five subtasks, mainly regarding loading the different csv-files into RDDs. We solved this with the textFile()-command for each of the files. Task 1.1**Load the posts.csv.gz into an RDD**
###Code
def loadPosts():
_, sc = init_spark()
rddP = sc.textFile('datasets/data/TDT4305_S2021/posts.csv.gz')
print("The 'posts.csv.gz' file is loaded into the RDD 'rddP'")
if __name__ == '__main__':
loadPosts()
###Output
The 'posts.csv.gz' file is loaded into the RDD 'rddP'
###Markdown
Task 1.2**Load the comments.csv.gz into an RDD**
###Code
def loadComments():
_, sc = init_spark()
rddC = sc.textFile('datasets/data/TDT4305_S2021/comments.csv.gz')
print("The 'comments.csv.gz' file is loaded into the RDD 'rddC'")
if __name__ == '__main__':
loadComments()
###Output
The 'comments.csv.gz' file is loaded into the RDD 'rddC'
###Markdown
Task 1.3**Load the users.csv.gz into an RDD**
###Code
def loadUsers():
_, sc = init_spark()
rddU = sc.textFile('datasets/data/TDT4305_S2021/users.csv.gz')
print("The 'users.csv.gz' file is loaded into the RDD 'rddU'")
if __name__ == '__main__':
loadUsers()
###Output
The 'users.csv.gz' file is loaded into the RDD 'rddU'
###Markdown
Task 1.4**Load the badges.csv.gz into an RDD**
###Code
def loadBadges():
_, sc = init_spark()
rddB = sc.textFile('datasets/data/TDT4305_S2021/badges.csv.gz')
print("The 'badges.csv.gz' file is loaded into the RDD 'rddB'")
if __name__ == '__main__':
loadBadges()
###Output
The 'badges.csv.gz' file is loaded into the RDD 'rddB'
###Markdown
Task 1.5**Print the number of rows for each of the four RDDs.**To count the number of rows in each RDD, the count()-command was used before printing. For this to work, it was important to ensure that each line was split on the delimiter character ('\t'), so that multiple columns were produced
###Code
_, sc = init_spark()
rddPosts = sc.textFile('datasets/data/TDT4305_S2021/posts.csv.gz')
rddP = rddPosts.map(lambda x: x.split('\t'))
#print("The 'posts.csv.gz' file is loaded into the RDD 'rddP'")
rddComments = sc.textFile('datasets/data/TDT4305_S2021/comments.csv.gz')
rddC = rddComments.map(lambda x: x.split('\t'))
#print("The 'comments.csv.gz' file is loaded into the RDD 'rddC'")
rddUsers = sc.textFile('datasets/data/TDT4305_S2021/users.csv.gz')
rddU = rddUsers.map(lambda x: x.split('\t'))
#print("The 'users.csv.gz' file is loaded into the RDD 'rddU'")
rddBadges = sc.textFile('datasets/data/TDT4305_S2021/badges.csv.gz')
rddB = rddBadges.map(lambda x: x.split('\t'))
#print("The 'badges.csv.gz' file is loaded into the RDD 'rddB'")
def printRows():
numberOfRowsP = rddP.count()
numberOfRowsC = rddC.count()
numberOfRowsU = rddU.count()
numberOfRowsB = rddB.count()
print("Count of rows in Posts: "+ str(numberOfRowsP))
print("Count of rows in Comments: "+ str(numberOfRowsC))
print("Count of rows in Users: "+ str(numberOfRowsU))
print("Count of rows in Badges: "+ str(numberOfRowsB))
if __name__ == '__main__':
printRows()
_, sc = init_spark()
posts = sc.textFile('datasets/data/TDT4305_S2021/posts.csv.gz').map(lambda x: x.split('\t'))
users = sc.textFile('datasets/data/TDT4305_S2021/users.csv.gz').map(lambda x: x.split('\t'))
badges = sc.textFile('datasets/data/TDT4305_S2021/badges.csv.gz').map(lambda x: x.split('\t'))
comments = sc.textFile('datasets/data/TDT4305_S2021/comments.csv.gz').map(lambda x: x.split('\t'))
postscolumns = ['Id', 'PostTypeId', 'CreationDate','Score','ViewCount',"Body",'OwnerUserId','LastActivityDate',"Title","Tags",'AnswerCount','CommentCount','FavoriteCount','Closedate']
commentscolumns = ['PostId', "Score", "Text", "CreationDate", 'UserId']
badgescolumns = ['UserId', "Name", "Date", "Class"]
userscolumns = ['Id', "Reputation", "CreationDate", "DisplayName", "LastAccessDate", "AboutMe", "Views", "UpVotes", "DownVotes"]
rowsPosts = posts.count()
print("Rows in Posts:" + str(rowsPosts))
rowsUsers = users.count()
print("Rows in Users:" + str(rowsUsers))
rowsComments = comments.count()
print("Rows in Comments: " + str(rowsComments))
rowsBadges = badges.count()
print("Rows in Badges:" + str(rowsBadges))
###Output
Rows in Posts:56218
Rows in Users:91617
Rows in Comments: 58736
Rows in Badges:105641
###Markdown
**Task 2** Task 2.1**Find the average length of the questions, answers, and comments in characters.**Firstly we made a method for decoding. Then we extracted the ”Body” of the questions, decoded each body, found the lengths and computed the average with stats().mean(). We did the same for answers and comments. The main difference between the three was in the type of decoding: questions and answers used ”utf-8” decoding, while comments used ”ISO-8859-1” decoding
###Code
import base64
import re
# helper used by the tasks below: decode base64-encoded text and strip HTML tags
def decode(cleanText, encoding):
    decodedText = cleanText.map(lambda x: str(base64.b64decode(x), encoding)) #decode from base64
    cleanDecoded = re.compile('<.*?>') #remove html tags
    cleanText = decodedText.map(lambda x: re.sub(cleanDecoded, '', x).replace("\n", ''))
    return cleanText
#Average length of questions in characters
codedQuestions = rddP.filter(lambda x: x[postscolumns.index("PostTypeId")] == "1").map(lambda x: x[postscolumns.index("Body")])
decodedQuestions = decode(codedQuestions, "utf-8")
avLengthQuestions = decodedQuestions.map(lambda x: len(x)).stats().mean()
print(str(avLengthQuestions))
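#Average length of answers in characters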
codedAnswers = rddP.filter(lambda x: x[postscolumns.index("PostTypeId")] == "2").map(lambda x: x[postscolumns.index("Body")])
decodedAnswers = decode(codedAnswers, "utf-8")
avLengthAnswers = decodedAnswers.map(lambda x: len(x)).stats().mean()
print(str(avLengthAnswers))
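#Average length of comments in characters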
codedComments = rddC.map(lambda x: x[commentscolumns.index("Text")])
decodedComments = decode(codedComments, "ISO-8859-1")
avLengthComments = decodedComments.map(lambda x: len(x)).stats().mean()
print(str(avLengthComments))
###Output
894.1216180219943
794.282617409307
168.8353309724881
###Markdown
Task 2.2**Find the dates when the first and the last questions were asked. Also, find the display name of the users who posted those questions.**First we filtered on questions and extracted ”CreationDate” from the posts-rdd. We performed reduce(min) to find the earliest date, then found the ”DisplayName” by indexing with the corresponding ”Id” in the users-rdd. We found the last question posted by the same procedure, only replacing min by max in the reduce() operation. The first question was asked at **2014-05-13 23:58:30** and posted by **Doorknob**, and the last question was asked at **2020-12-06 03:01:58** and posted by **mon**.
###Code
#Find all questions asked
datesAndIds = rddP.filter(lambda x: x[postscolumns.index("PostTypeId")] == "1").map(lambda x: (x[postscolumns.index("CreationDate")], x[postscolumns.index("OwnerUserId")]))
#First question
firstQuestionAsked = datesAndIds.reduce(min) #Will select by first index argumment, i.e. date
print(firstQuestionAsked[0])
firstName = rddU.filter(lambda x: x[userscolumns.index("Id")] == firstQuestionAsked[1] ).map(lambda x: x[userscolumns.index("DisplayName")]).collect()
print(str(firstName[0]))
#Last question
lastQuestionAsked = datesAndIds.reduce(max) #Will select by first index argumment, i.e. date
print(lastQuestionAsked[0])
lastName = rddU.filter(lambda x: x[userscolumns.index("Id")] == lastQuestionAsked[1] ).map(lambda x: x[userscolumns.index("DisplayName")]).collect()
print(str(lastName[0]))
###Output
2014-05-13 23:58:30
Doorknob
2020-12-06 03:01:58
mon
###Markdown
Task 2.3**Find the ids of users who wrote the greatest number of answers and questions. Ignore the user with OwnerUserId equal to -1.**We started by removing posts from the posts-rdd which contained ”OwnerUserId” equal to ”NULL” (equivalent to -1). Then we filtered on questions, extracted ”OwnerUserId” and performed reduceByKey to count the occurrences of each user. We sorted by occurrences, collected into a list and extracted the last element to obtain the user with the greatest count of questions. The same procedure was used for answers. The UserId of the user that has written the greatest number of questions is **8820**, and the amount is **103 questions**. The UserId of the user that has written the greatest number of answers is **64377**, and the amount is **579 answers**
###Code
#Filter out users with ownerUserId == "NULL" (-1)
ownerUserId = rddP.filter(lambda x: not(x[postscolumns.index("OwnerUserId")] == "NULL"))
#Find user with greatest number of questions
userQuestions = ownerUserId.filter(lambda x: x[postscolumns.index("PostTypeId")] == "1").map(lambda x: (x[postscolumns.index("OwnerUserId")], 1))
userQuestions = userQuestions.reduceByKey(lambda a,b: a+b)
userQuestions = userQuestions.sortBy(lambda row: row[1]).collect()
print(str(userQuestions[-1]))
#Find user with greatest number of answers, same procedure as questions
userAnswers = ownerUserId.filter(lambda x: x[postscolumns.index("PostTypeId")] == "2").map(lambda x: (x[postscolumns.index("OwnerUserId")], 1))
userAnswers = userAnswers.reduceByKey(lambda a,b: a+b)
userAnswers = userAnswers.sortBy(lambda row: row[1]).collect()
print(str(userAnswers[-1]))
###Output
('8820', 103)
('64377', 579)
###Markdown
Task 2.4**Calculate the number of users who received less than three badges.**We started by extracting the ”UserId” column from the badges-rdd. We added a column of ”1” to each row and performed reduceByKey to remove duplicates and count the occurrences of each unique id. We then filtered to keep only the users with less than three badges and finally performed a count() operation to obtain the answer. The number of users with less than three badges is **37190**
###Code
userBadge = rddB.map(lambda x: (x[badgescolumns.index("UserId")], 1))
userBadge = userBadge.reduceByKey(lambda a,b: a+b)
amountLessThanThree = userBadge.filter(lambda x: (x[1]<3)).count()
print(str(amountLessThanThree))
###Output
37190
###Markdown
Task 2.5**Calculate the Pearson correlation coefficient (or Pearson’s r) between the number of upvotes and downvotes cast by a user.**We made a method implementing the logic of the formula with a simple for-loop and numpy lists. We found the averages of upvotes and downvotes by performing stats().mean() on the respective columns in the users-rdd, and collected upvotes and downvotes in separate lists. We fed these four arguments to the method for Pearson's coefficient and calculated the answer. The Pearson correlation coefficient calculated was **0.2684978771516632**
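For reference, the formula implemented by the `pearson` method below is the standard sample form of Pearson's r: $$r = \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i-\bar{y})^2}}$$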
###Code
import numpy as np
def pearson(x,y, mean_x, mean_y):
teller = 0
nevnerx = 0
nevnery = 0
for i in range (0,len(x)):
teller += (x[i]- mean_x)*(y[i]-mean_y)
nevnerx += (x[i]-mean_x)**2
nevnery += (y[i]-mean_y)**2
pearson = teller/(np.sqrt(nevnerx)*np.sqrt(nevnery))
return pearson
ups = rddU.map(lambda x: x[userscolumns.index("UpVotes")]).collect()[1:]
ups = sc.parallelize(ups)
avarageUp = ups.map(lambda x: float(x)).stats().mean()
ups = ups.map(lambda x: float(x)).collect()
downs = rddU.map(lambda x: x[userscolumns.index("DownVotes")]).collect()[1:]
downs = sc.parallelize(downs)
avarageDown = downs.map(lambda x: float(x)).stats().mean()
downs = downs.map(lambda x: float(x)).collect()
print(pearson(ups, downs, avarageUp, avarageDown))
###Output
0.2684978771516632
###Markdown
Task 2.6**Calculate the entropy of ids of users (that is, the UserId column from the comments data) who wrote one or more comments.**We found the total number of users by performing a count() operation on ”UserId” in the comments-rdd, and collected all UserIds in a numpy list. We could then implement the logic of the formula presented in the task with a simple for-loop over the list of UserIds. The entropy calculated was **47.080874619623344**
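For reference, the quantity computed below is the Shannon entropy $$H = -\sum_{i} P(i)\,\log_2 P(i)$$ where, as implemented in the code, $P(i)$ is taken to be the comment count of user $i$ divided by the number of distinct commenting users.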
###Code
import numpy as np
def P(x, totalNumberOfUsers):
return x/totalNumberOfUsers
def H():
userIds = rddC.map(lambda x: (x[commentscolumns.index("UserId")],1))
userIds = userIds.reduceByKey(lambda a,b: a+b)
totalNumberOfUsers = userIds.count()
userIds = userIds.collect()
entropy = 0
for i in range (0,totalNumberOfUsers):
entropy -= P(userIds[i][1], totalNumberOfUsers)*np.log2(P(userIds[i][1], totalNumberOfUsers))
return entropy
if __name__ == '__main__':
print(H())
###Output
47.080874619623344
###Markdown
Task 3 Task 3.1Create a graph of posts and comments. Nodes are users, and there is an edge from node 𝑖 to node 𝑗 if 𝑖 wrote a comment for 𝑗’s post. Each edge has a weight 𝑤𝑖𝑗 that is the number of times 𝑖 has commented a post by 𝑗.The idea was to make a rdd to represent the graph. We made a rdd with three columns, one with source (id of the user who made the comment), destination (id of the user who had madethe post) and a weight. This was achieved by making a new rdd of the posts-rdd, just containing the id of the post and the ”OwnerUserId”. Then we made another rdd of the comments-rdd, just containing the ”PostId” and the ”UserId”. Then we performed a join operation on the two new rdds, resulting in a new rdd. To be able to perform a reduceByKey operation to add up the weights for each unique, we were forced to have only two columns in our rdd. To achieve this we put ”UserId” as source and ”OwnerUserId” as destination together as a list of length two within the key column. Now we were able to perform the reduceByKey operation to add up the weightsin the graph. To make the rdd complete we extracted source and destination to each separate column.
###Code
from pyspark.sql import DataFrame
# Key the posts by post Id -> OwnerUserId (the author of the post)
userIds = rddP.map(lambda x: (x[postscolumns.index("Id")], x[postscolumns.index("OwnerUserId")]))
# Key the comments by PostId -> UserId (the author of the comment)
comments = rddC.map(lambda x: ((x[commentscolumns.index("PostId")], x[commentscolumns.index("UserId")])))
# Join on post id and count each (OwnerUserId, UserId) pair
result = userIds.join(comments).map(lambda x: (x[1], 1))
rddEdges = result.reduceByKey(lambda a, b: a + b)
# Making it on the format src (=id of commenter), dst (=id of poster), w
rddEdges = rddEdges.map(lambda x: (x[0][1], x[0][0], x[1]))
print(str(rddEdges.collect()[:11]))
###Output
[('24', '22', 1), ('53', '66', 1), ('115', '84', 1), ('2723', '84', 1), ('21825', '84', 1), ('70', '96', 1), ('14', '14', 11), ('18481', '14', 1), ('434', '59', 1), ('13023', '59', 1), ('471', '151', 1)]
###Markdown
Task 3.2**Convert the result of the previous step into a Spark DataFrame (DF) and answer the following subtasks using DataFrame API, namely using Spark SQL.**This was accomplished by using the rdd's built-in function toDF(). We could apply toDF() directly to the rdd we got from the previous subtask.
###Code
dfEdges = rddEdges.toDF()
dfEdges = dfEdges.withColumnRenamed('_1', "src").withColumnRenamed('_2', "dst").withColumnRenamed('_3', "w")
dfEdges.show()
###Output
+-----+---+---+
| src|dst| w|
+-----+---+---+
| 24| 22| 1|
| 53| 66| 1|
| 115| 84| 1|
| 2723| 84| 1|
|21825| 84| 1|
| 70| 96| 1|
| 14| 14| 11|
|18481| 14| 1|
| 434| 59| 1|
|13023| 59| 1|
| 471|151| 1|
| 146| 84| 3|
| 84| 84| 10|
| 1156| 84| 1|
| 157|158| 1|
| 158|158| 10|
| 178|178| 4|
| 249| 26| 1|
| 189| 84| 1|
| 116| 21| 3|
+-----+---+---+
only showing top 20 rows
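###Markdown
An equivalent one-step conversion passes the column names directly to toDF(); a minimal sketch, assuming the same `rddEdges` as above:
###Code
# Sketch only: name the columns while converting, instead of renaming them afterwards
dfEdges2 = rddEdges.toDF(["src", "dst", "w"])
dfEdges2.printSchema()
###Output
_____no_output_____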
###Markdown
Task 3.3**Find the user ids of top 10 users who wrote the most comments**We extracted the source column ("src") and the weight column ("w") from the DF in the previous task. Then we performed a sum() operation with respect to the weight, adding up all weights corresponding to the same source. To find the top 10 users, we sorted the new DF in descending order and extracted the first 10 rows.
###Code
comment = dfEdges.groupBy("src").sum().withColumnRenamed('sum(w)', "w")
comment.sort(comment.w.desc()).select("src").show(n=10)
###Output
+-----+
| src|
+-----+
| 836|
| 381|
|28175|
|64377|
|35644|
|55122|
| 924|
|71442|
| 21|
|45264|
+-----+
only showing top 10 rows
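###Markdown
The same aggregation can also be written with an explicit agg() and ordering in a single chain; a minimal sketch, assuming `dfEdges` from Task 3.2:
###Code
# Sketch only: explicit sum aggregation and descending ordering in one chain
from pyspark.sql import functions as F
(dfEdges.groupBy("src")
        .agg(F.sum("w").alias("w"))
        .orderBy(F.desc("w"))
        .select("src")
        .show(10))
###Output
_____no_output_____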
###Markdown
Task 3.4**Find the display names of the top 10 users whose posts received the greatest number of comments. To do so, you can load users information (or table) into a DF and join the DF from previous subtasks (that is, the DF containing the graph of posts and comments) with it to produce the results.**We extracted the destination column ("dst") and the weight column ("w") from the DF in subtask 3.2. Then we performed a sum() operation with respect to the weight, adding up all weights corresponding to the same destination. Next we made a new rdd from the users-rdd containing "Id" and "DisplayName", removed the first row containing the headers, and converted it to a DF using toDF(). Performing a left join, sorting the new DF in descending order, selecting "DisplayName" and showing the first 10 rows gave us the result.
###Code
posts = dfEdges.groupBy("dst").sum().withColumnRenamed('sum(w)', "w")
users = rddU.map(lambda x: (x[userscolumns.index("Id")], x[userscolumns.index("DisplayName")])).collect()[1:]
dfUsers = sc.parallelize(users).toDF()
dfUsers = dfUsers.withColumnRenamed('_1', "Id").withColumnRenamed('_2', "DisplayName")
dfMerge = dfUsers.join(posts, dfUsers.Id == posts.dst, how = 'left')
dfMerge.sort(dfMerge.w.desc()).select("DisplayName").show(10)
###Output
+--------------------+
| DisplayName|
+--------------------+
| Neil Slater|
| Erwan|
| Media|
| n1k31t4|
|Has QUIT--Anony-M...|
| JahKnows|
| Leevo|
| David Masip|
| Noah Weber|
| Brian Spiering|
+--------------------+
only showing top 10 rows
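###Markdown
Since the task also mentions Spark SQL, the same join and ranking can be expressed as an SQL query over temporary views; a minimal sketch, assuming the `dfEdges` and `dfUsers` DataFrames from above (the SparkSession is obtained with getOrCreate):
###Code
# Sketch only: the same top-10 query written as SQL over temporary views
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
dfEdges.createOrReplaceTempView("edges")
dfUsers.createOrReplaceTempView("users")
spark.sql("""
    SELECT u.DisplayName, SUM(e.w) AS received
    FROM edges e
    JOIN users u ON u.Id = e.dst
    GROUP BY u.DisplayName
    ORDER BY received DESC
    LIMIT 10
""").show(truncate=False)
###Output
_____no_output_____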
###Markdown
Task 3.5**Save the DF containing the information for the graph of posts and comments (from subtask 2) into a persistence format (like CSV) on your filesystem so that it can later be loaded back into a Spark application's workspace**This subtask was accomplished by using the DF's built-in function write.csv(). Path to the csv: "/content/edges.csv"
###Code
dfEdges.write.csv('edges.csv')
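# A sketch of loading the edges back later: write.csv stores no header, so the
# column names are re-applied when reading (SparkSession obtained via getOrCreate).
from pyspark.sql import SparkSession
spark_session = SparkSession.builder.getOrCreate()
dfLoaded = spark_session.read.csv('edges.csv', inferSchema=True).toDF("src", "dst", "w")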
###Output
_____no_output_____ |
Multi_Dim_Sine/Reptile_multidimensional_sinewave.ipynb | ###Markdown
Imports
###Code
import matplotlib.pyplot as plt
import numpy as np
# Required imports for neural network
import torch.nn as nn
import torch
from torch.autograd import Variable
import random
###Output
_____no_output_____
###Markdown
Data Loading and Generation This sine function generator is based on the repository: https://github.com/AdrienLE/ANIML/blob/master/ANIML.ipynb
###Code
from data import SineWaveTask_multi
TRAIN_SIZE = 20000
TEST_SIZE = 1000
SINE_TRAIN = [SineWaveTask_multi() for _ in range(TRAIN_SIZE)]
SINE_TEST = [SineWaveTask_multi() for _ in range(TEST_SIZE)]
x, y_true = SINE_TRAIN[0].training_set()
y_true.shape
###Output
_____no_output_____
###Markdown
Neural Network Model
###Code
# Define network
class Neural_Network_multi(nn.Module):
def __init__(self, input_size=1, hidden_size=40, output_size=20):
super(Neural_Network_multi, self).__init__()
# network layers
self.hidden1 = nn.Linear(input_size,hidden_size)
self.hidden2 = nn.Linear(hidden_size,hidden_size)
self.output_layer = nn.Linear(hidden_size,output_size)
#Activation functions
self.relu = nn.ReLU()
def forward(self, x):
x = self.hidden1(x)
x = self.relu(x)
x = self.hidden2(x)
x = self.relu(x)
x = self.output_layer(x)
y = x
return y
###Output
_____no_output_____
###Markdown
Helper functions
###Code
# The Minimum Square Error is used to evaluate the difference between prediction and ground truth
criterion = nn.MSELoss()
def copy_existing_model(model):
# Function to copy an existing model
# We initialize a new model
new_model = Neural_Network_multi()
# Copy the previous model's parameters into the new model
new_model.load_state_dict(model.state_dict())
return new_model
def get_samples_in_good_format(wave):
#This function is used to sample data from a wave
x, y_true = wave.training_set()
    # We add [:,None] to get the right dimensions to pass to the model: we want K x 1 (we have scalar inputs, hence the x 1)
# Note that we convert everything torch tensors
x = torch.tensor(x)
y_true = torch.tensor(y_true)
return x,y_true
def initialization_to_store_meta_losses():
# This function creates lists to store the meta losses
global store_train_loss_meta; store_train_loss_meta = []
global store_test_loss_meta; store_test_loss_meta = []
def test_set_validation(model,new_model,wave,lr_inner,k,store_test_loss_meta):
    # This function does not actually affect the main algorithm; it is just used to evaluate the new model
new_model = training(model, wave, lr_inner, k)
# Obtain the loss
loss = evaluation(new_model, wave)
# Store loss
store_test_loss_meta.append(loss)
def train_set_evaluation(new_model,wave,store_train_loss_meta):
loss = evaluation(new_model, wave)
store_train_loss_meta.append(loss)
def print_losses(epoch,store_train_loss_meta,store_test_loss_meta,printing_step=1000):
if epoch % printing_step == 0:
        print(f'Epoch : {epoch}, Average Train Meta Loss : {np.mean(store_train_loss_meta)}, Average Test Meta Loss : {np.mean(store_test_loss_meta)}')
#This is based on the paper update rule, we calculate the difference between parameters and then this is used by the optimizer, rather than doing the update by hand
def reptile_parameter_update(model,new_model):
# Zip models for the loop
zip_models = zip(model.parameters(), new_model.parameters())
for parameter, new_parameter in zip_models:
if parameter.grad is None:
parameter.grad = torch.tensor(torch.zeros_like(parameter))
# Here we are adding the gradient that will later be used by the optimizer
parameter.grad.data.add_(parameter.data - new_parameter.data)
# Define commands in order needed for the metaupdate
# Note that if we change the order it doesn't behave the same
def metaoptimizer_update(metaoptimizer):
# Take step
metaoptimizer.step()
# Reset gradients
metaoptimizer.zero_grad()
def metaupdate(model,new_model,metaoptimizer):
# Combine the two previous functions into a single metaupdate function
# First we calculate the gradients
reptile_parameter_update(model,new_model)
# Use those gradients in the optimizer
metaoptimizer_update(metaoptimizer)
def evaluation(new_model, wave, item = True):
# Get data
x, label = get_samples_in_good_format(wave)
# Make model prediction
prediction = new_model(x)
# Get loss
if item == True: #Depending on whether we need to return the loss value for storing or for backprop
loss = criterion(prediction,label).item()
else:
loss = criterion(prediction,label)
return loss
def training(model, wave, lr_k, k):
# Create new model which we will train on
new_model = copy_existing_model(model)
# Define new optimizer
koptimizer = torch.optim.SGD(new_model.parameters(), lr=lr_k)
# Update the model multiple times, note that k>1 (do not confuse k with K)
for i in range(k):
# Reset optimizer
koptimizer.zero_grad()
# Evaluate the model
loss = evaluation(new_model, wave, item = False)
# Backpropagate
loss.backward()
koptimizer.step()
return new_model
###Output
_____no_output_____
###Markdown
Reptile
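The outer loop below follows the Reptile update rule: for each sampled task, the inner loop (`training`) takes $k$ SGD steps from the current meta-parameters $\theta$ to obtain task-adapted parameters $\tilde{\theta}$, after which the meta-parameters are moved a small step towards them:
$$\theta \leftarrow \theta + \epsilon\,(\tilde{\theta} - \theta)$$
In this implementation the difference $\theta - \tilde{\theta}$ is stored as a pseudo-gradient (see `reptile_parameter_update` above), and the outer step is applied by the Adam meta-optimizer rather than by hand.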
###Code
#Define important variables
epochs = 10000 * 25 # number of outer-loop epochs, chosen to match the number of tasks used for MAML (instead of int(1e5))
lr_meta=0.001 # Learning rate for meta model (outer loop)
printing_step=1000 # how many epochs should we wait to print the loss
lr_k=0.01 # Internal learning rate
k=5 # Number of internal updates for each task
# Initializations
initialization_to_store_meta_losses()
model = Neural_Network_multi()
metaoptimizer = torch.optim.Adam(model.parameters(), lr=lr_meta)
# Training loop
for epoch in range(epochs):
# Sample a sine wave (Task from training data)
wave = random.sample(SINE_TRAIN, 1)
# Update model predefined number of times based on k
new_model = training(model, wave[0], lr_k, k)
    # Evaluate the loss for the training data
train_set_evaluation(new_model,wave[0],store_train_loss_meta)
#Meta-update --> Get gradient for meta loop and update
metaupdate(model,new_model,metaoptimizer)
    # Evaluate the loss for the test data
# Note that we need to sample the wave from the test data
wave = random.sample(SINE_TEST, 1)
test_set_validation(model,new_model,wave[0],lr_k,k,store_test_loss_meta)
# Print losses every 'printing_step' epochs
print_losses(epoch,store_train_loss_meta,store_test_loss_meta,printing_step)
###Output
<ipython-input-6-7e1e6714c74e>:17: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
x = torch.tensor(x)
<ipython-input-6-7e1e6714c74e>:18: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
y_true = torch.tensor(y_true)
<ipython-input-6-7e1e6714c74e>:48: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
parameter.grad = torch.tensor(torch.zeros_like(parameter))
###Markdown
Few Shot learning with new meta-model The model performs good few shot learning
###Code
wave = SineWaveTask_multi();
k_shot_updates = 4
initialization_to_store_meta_losses()
for shots in range(k_shot_updates):
new_model = training(model, wave, lr_k, shots)
train_set_evaluation(new_model,wave,store_train_loss_meta)
plt.plot(store_train_loss_meta,label = 'Loss')
plt.legend()
plt.xlabel('k shots')
all_losses = []
num_eval = 1000
num_k_shots = 6
for test_eval in range(num_eval):
wave = SineWaveTask_multi();
k_shot_updates = num_k_shots
initialization_to_store_meta_losses()
for shots in range(k_shot_updates):
new_model = training(model, wave, lr_k, shots)
train_set_evaluation(new_model,wave,store_train_loss_meta)
all_losses.append(np.array(store_train_loss_meta))
all_losses = np.array(all_losses)
np.save(f"reptile_multi_sine_{num_k_shots}.npy", all_losses)
fig, ax = plt.subplots(figsize=(8,4))
mean_loss = np.mean(all_losses, axis=0)
# confidence interval plotting help from: https://stackoverflow.com/questions/59747313/how-to-plot-confidence-interval-in-python
y = mean_loss
x = list(range(num_k_shots))
ci = 1.96 * np.std(all_losses, axis=0)**2/np.sqrt(len(y))
ax_size=16
title_size=18
ax.plot(x, y, linewidth=3, label=f"Mean Loss")
ax.fill_between(x, (y-ci), (y+ci), alpha=.5,label=f"95% CI")
ax.set_xlabel("Gradient Steps",fontsize=ax_size)
ax.set_ylabel("Mean Squared Error (MSE)",fontsize=ax_size)
ax.set_title("Sine Wave Regression: k-Shot Evaluation",fontsize=title_size)
ax.legend()
plt.savefig("reptile_sine_wave_multidim_reg_kshot.png")
analysis_steps = [0, 1, num_k_shots-1]
for analysis_step in analysis_steps:
print(f"Step: {analysis_step}, Error: {mean_loss[analysis_step]}, Var: {ci[analysis_step]}")
###Output
Step: 0, Error: 3.0552478387355806, Var: 0.3992171627993859
Step: 1, Error: 0.5247177988365292, Var: 0.10564654234035513
Step: 5, Error: 0.04501637527067214, Var: 0.01355739429137957
|